Auto-Tune was actually an offshoot of software used to detect petroleum deposits underground.
The idea is that sonic waves shot into the ground reflect off subsurface layers and produce echoes that surface instruments can pick up. Clearly, a computer is needed to interpret those echoes. Since the echoes were relatively chaotic, a "smoothing" function was written that could, in effect, force the detected wave data into a usable image.
Turns out, you can use that same software to map a waveform like the human voice onto clearly defined pitch frequencies that correspond to the scale of whatever key the piece is in. By altering values in the software, you can make the effect subtle, so that it merely nudges the note onto the pitch, or you can make it a "hard clip" that produces a deliberately robotic quality. Cher and T-Pain famously used the latter.
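The idea above can be sketched in a few lines. This is a minimal illustration, not Antares' actual algorithm: it assumes a C-major key, uses the standard MIDI note/frequency conversion, and a hypothetical `strength` parameter standing in for Auto-Tune's "retune speed" (1.0 snaps hard to the scale pitch; small values only nudge the note).

```python
import math

A4 = 440.0  # reference pitch in Hz (MIDI note 69)

def freq_to_midi(f):
    """Convert a frequency in Hz to a fractional MIDI note number."""
    return 69 + 12 * math.log2(f / A4)

def midi_to_freq(m):
    """Convert a (possibly fractional) MIDI note number back to Hz."""
    return A4 * 2 ** ((m - 69) / 12)

# C-major scale degrees as pitch classes (semitones above C)
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def nearest_scale_note(midi, scale=C_MAJOR):
    """Find the nearest whole MIDI note whose pitch class is in the scale."""
    candidates = [n for n in range(int(midi) - 2, int(midi) + 3)
                  if n % 12 in scale]
    return min(candidates, key=lambda n: abs(n - midi))

def retune(f, strength=1.0, scale=C_MAJOR):
    """Pull a detected frequency toward the nearest in-scale pitch.

    strength=1.0 is the 'hard clip' (robotic) setting;
    strength near 0 leaves the note almost untouched."""
    m = freq_to_midi(f)
    target = nearest_scale_note(m, scale)
    corrected = m + strength * (target - m)
    return midi_to_freq(corrected)
```

For example, a slightly sharp A (446 Hz) is pulled to exactly 440 Hz at full strength, and left alone at zero strength. A real implementation would also track pitch over time and smooth the correction, which is where the "retune speed" character comes from.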
There's no "wrong" way to produce music. It either works or it doesn't. Glissando notes (ones that slide up to the pitch) are a perfectly valid way to sing and play, and form the core of some artistic styles. Tempo variations happen as an expressive device in lots of musical genres. They allow the piece to "breathe." Of course, Bach leaves less room for tempo (and dynamic) variation than Rachmaninoff.
If you let a computer create the music, you get computer music. AI will create AI music. Maybe the listener likes it, or maybe they don't. A producer should strive to let the artist speak with their voice, unless the artist is deliberately being employed to let the producer speak.