Really Now, What’s So Bad About Auto-Tune Pop?

One thing you learn when you spend a bunch of time on YouTube looking up songs that use the electronic pitch-correction algorithm known as Auto-Tune is that a lot of people really hate Auto-Tune. Which is weird. It’s just a sound — obnoxious in the hands of the untalented, occasionally amazing in the hands of the amazing. Hating it is like hating falsettos, or the accordion. And, O.K., given the current omnipresence of the sound in pop music, it’s like hating the accordion during some strange period of accordion-fascism during which everyone’s paying top dollar to sound like early Los Lobos or Weird Al. Rap and R&B may have run with Auto-Tune more than other genres, but these days you can’t get away from it, even by listening to metal, country or Moroccan music.

The argument against Auto-Tune is almost always a taste discussion masquerading as a discussion of standards — the belief that there’s something inherently “wrong” with using Auto-Tune, either to fix the flaws in a vocal performance or creatively distort one, is born of a work-ethical view of music that prizes technical facility over inspiration. (As a country, we’re obsessed with singing, and with people being able to hit notes on command — it’s why we’ve made blockbuster hits out of the skills-over-art juggernaut that is American Idol, but also Glee, a show about how singing makes you a better person, and The Voice, a show about how singing is so important that even unconventionally attractive people should be allowed on television if they can do it.)

The truth is that artists and producers have been using technology (reverb, overdubbing, electronic harmonizers) to change the sound of their voices for decades. The link between “organic” live performance and recorded music was broken in the late ’40s when Les Paul popularized multitrack recording. More important, there’s no cheating in art. And sure, you could go super-purist by listening to nothing but music recorded in cabins in the woods — but it’s worth noting that even Bon Iver’s Justin Vernon, whose “For Emma, Forever Ago” is the most acclaimed recorded-in-a-cabin-in-the-woods record of recent years, has made use of Auto-Tune both subtly and blatantly. Vernon’s “Woods” is proof of Auto-Tune’s potential as a creative tool; in their own way, all these songs are, too.

To avoid completely soaking you in Auto-Tune all at once, I’ll post a handful of Auto-Tuned songs over the next few days, along with notes on what makes them so good, or so bad they’re good. Below is a first taste.

Cher, “Believe” (1998)

In the late ’70s and early ’80s, Dr. Harold “Andy” Hildebrand worked in seismic data interpretation for Exxon, using digital-signal processing and mathematical algorithms to hunt for underground oil deposits. In 1984, the Landmark Graphics Corporation, a company he co-founded, shipped its first stand-alone seismic-data-interpretation workstations, which cost $240,000; in 1989, at 40, Hildebrand retired. (Landmark was acquired by Halliburton in 1996.) He always liked music; for a while, after he left Landmark, he studied composition at Rice University, and it was there that he started thinking about music-related applications for the technologies he’d worked with in the oil industry. In 1990, he founded a company called Jupiter Systems to market a sample-looping program called Infinity. In the late ’90s, a dinner-party guest jokingly challenged Hildebrand to create a program that would let nonsingers sing on key. Hildebrand picked up the challenge, and in 1997, his company, now known as Antares Audio Technologies, introduced the first version of a vocal-processing software program called Auto-Tune.

It was marketed as a tool to let producers digitally correct bum notes in a singer’s performance, and it’s still used for that purpose in recording studios. But it became a cultural phenomenon for a different reason. Auto-Tune featured an adjustable “retune speed” setting, which controlled the amount of time it took for a digitally processed voice to slide from one note to the next. If you set the retune speed to zero, instead of mimicking the smooth note-to-note transitions of an analog human voice, the program made people sound like robots. In 1998, Cher’s dance single “Believe” became the first hit song to deploy that cyborg warble on purpose. Back then, the only people familiar with Auto-Tune and able to recognize its now-distinctive auditory “footprint” were recording engineers; when the British music magazine Sound on Sound interviewed “Believe” producers Mark Taylor and Brian Rawling about the track in 1999, they dissembled, attributing the effect to a device called the Digitech Talker, in what’s been retroactively interpreted as an attempt to protect a trade secret. It didn’t stay a secret for long. Depending on how you feel about Auto-Tune morally and aesthetically, the first time the Auto-Tune effect kicks in on “Believe” — it happens when Cher sings the words “can’t break through” and it comes out like “c@@@n’t br33ak thr0000ugh” — is either the dawn of a bold new era of sonic invention or the beginning of the death of individuality in pop music. It’s Alexander Graham Bell’s phone call or it’s the Matrix coming online. Life after love or life after people. Would anyone have cared about this song if Taylor and Rawling hadn’t tweaked the vocals? Maybe. But I challenge you to tell me what it’s “about,” other than the Rise of RoboCher.
(The video features Cher as a kind of holographic mother-goddess, soundtracking the romantic dramas playing out in a club full of central-casting young people, so maybe it’s a song about why girls with Natalie Portman-in-The-Professional bangs fall for jerks with Dave Pirner dreads. But it’s really about software.)
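For the technically curious, the retune-speed mechanic described above is easy to sketch. What follows is a toy illustration, not Antares’s actual algorithm: it snaps a detected pitch to the nearest equal-tempered semitone, then glides toward that target at a rate set by a hypothetical `retune_time_s` parameter. A long retune time preserves the natural slide between notes; a retune time of zero jumps straight to the target, producing the stair-stepped “Believe” warble.

```python
# Toy sketch (NOT Antares's real algorithm) of retune-speed pitch correction.
import math

A4 = 440.0  # reference pitch in Hz

def nearest_semitone(freq_hz):
    """Snap a frequency to the closest equal-tempered semitone."""
    semitones = round(12 * math.log2(freq_hz / A4))
    return A4 * 2 ** (semitones / 12)

def retune(current_hz, detected_hz, retune_time_s, frame_s=0.01):
    """Move the output pitch toward the corrected target.

    retune_time_s == 0 jumps straight to the target (the robot effect);
    larger values let the voice slide there gradually.
    """
    target = nearest_semitone(detected_hz)
    if retune_time_s <= 0:
        return target
    # Exponential glide: cover a fixed fraction of the gap each frame.
    alpha = min(1.0, frame_s / retune_time_s)
    return current_hz + alpha * (target - current_hz)

# A voice sliding from roughly A4 toward B4:
glide = [440.0, 450.0, 465.0, 480.0, 493.0]
out = glide[0]
for f in glide:
    out = retune(out, f, retune_time_s=0.0)  # zero retune speed
# With retune_time_s = 0, every frame lands exactly on a semitone,
# so the smooth glide becomes a staircase of discrete pitches.
```

With a nonzero retune time, the same loop would produce intermediate frequencies between semitones, which is why studios can use the tool invisibly for correction while a zero setting announces itself.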

T-Pain feat. Lil Wayne, “Can’t Believe It (A Capella)” (2008)

Cher introduced the world to Auto-Tune as a special effect; a few years later, Kid Rock used it to strike a road-weary-rock-balladeer pose on 2000’s “Only God Knows Why,” bridging the gap between Lynyrd Skynyrd and C-3PO. But it was Faheem Rasheed Najm, a Tallahassee-born singer/songwriter/producer known professionally as T-Pain, who made the sound inescapable. T-Pain first used Auto-Tune’s pitch-correction software on a mixtape in 2003, then slathered it all over his debut solo album “Rappa Ternt Sanga,” which generated hit singles like “I’m Sprung” and “I’m N Luv (Wit a Stripper).” The omnipresence of zero-retune-speed Auto-Tuned vocals in hip-hop and R&B is essentially a post-T-Pain phenomenon; even rappers-ternt-sangas who’ve mastered the technology on their own instead of paying T-Pain to show up at the party and spike the punch owe him a debt. T-Pain thinks that Antares owes him, too. In July, he sued them, seeking to prevent them from using his name and likeness in the marketing of Auto-Tune. (The fact that he’d signed a new endorsement deal with iZotope, an Antares competitor, and is planning to market a new and as-yet-top-secret voice-manipulation program code-named “The T-Pain Effect” probably had more than a little bit to do with this decision.) While T-Pain’s synthetic-yet-earthy vocals have graced plenty of great singles, I’m partial to this promo-only a cappella version of the first single from his third album, “Thr33 Ringz,” which turns a song about all the things T-Pain wants to buy for some “fly mamacita” into otherworldly cyber-gospel. Mr. Pain offers to set his potential paramour up in “a mansion in Wiscansin [sic]” and “a condo/All the way up in Toronto.” As sugar-daddy boasts go, these are pretty weird — I mean, wouldn’t a condo in Toronto be pretty affordable?

Kanye West, “Heartless” (2008)

During the recording sessions for his fourth album, 2008’s Auto-Tune-drenched “808s & Heartbreak,” Kanye West reportedly got some Auto-Tune tech support from T-Pain. But the real touchstone for “808s” isn’t anything in the T-Pain catalog. If anything, it’s closer to Neil Young’s 1982 album “Trans,” on which Young used a vocoder to distort his vocals to express the frustration he felt about his inability to communicate with his son Ben, who’d been born with cerebral palsy and was unable to speak. “Trans” and “808s” both used robotic voice effects to dramatize just how difficult it is to communicate a genuine feeling electronically; they constrain their emotional content in order to amplify it. (Similarly, each time we send an e-mail, we’re speaking in a “robot voice” and expecting nuance and humanity to come across.) Or maybe Kanye — still reeling from the breakup of his relationship with fiancée Alexis Phifer and the death of his mother during a plastic-surgery procedure — chose a device that would let him sing because he was too sad to rap. Either way: “808s” was a landmark record for Auto-Tune in black pop, proof that it could be a compelling aesthetic choice rather than an R&B-crossover crutch. Stripped of its natural authority, West’s singing becomes a pixelated whimper, the sound of a man trying and failing to insulate himself from hurt; the treatment throws the cracks in his voice (and his psyche) into sharp relief.