Auto-Tune — one of modern history’s most reviled inventions — was an act of mathematical genius.
The pitch correction software, which automatically calibrates out-of-tune singing to perfection, has been used on nearly every chart-topping album for the past 20 years. Along the way, it has been pilloried as the poster child of modern music’s mechanization. When Time magazine named it one of the “50 worst inventions,” few came to its defense.
But often lost in this narrative is the story of the invention itself, and the soft-spoken savant who pioneered it. For inventor Andy Hildebrand, Auto-Tune was an incredibly complex product — the result of years of rigorous study, statistical computation, and the creation of algorithms previously deemed to be impossible.
Hildebrand’s invention has taken him on a crazy journey: He’s given up a lucrative career in oil. He’s changed the economics of the recording industry. He’s been sued by hip-hop artist T-Pain. And in the course of it all, he’s raised pertinent questions about what constitutes “real” music.
The Oil Engineer
Andy Hildebrand was, in his own words, “not a normal kid.”
A self-proclaimed bookworm, he was constantly derailed by life’s grand mysteries, and had trouble sitting still for prolonged periods of time. School was never an interest: when teachers grew weary of slapping him on the wrist with a ruler, they’d stick him in the back of the class, where he wouldn’t bother anybody. “That way,” he says, “I could just stare out of the window.”
After failing the first grade, Hildebrand’s academic performance slowly began to improve. Toward the end of grade school, the young delinquent started pulling C’s; in junior high, he made his first B; as a high school senior, he was scraping together occasional A’s. Driven by a newfound passion for science, Hildebrand “decided to start working [his] ass off” — an endeavor that culminated in an electrical engineering PhD from the University of Illinois in 1976.
In the course of his graduate studies, Hildebrand excelled in his applications of linear estimation theory and signal processing. Upon graduating, he was plucked up by oil conglomerate Exxon, and tasked with using seismic data to pinpoint drill locations. He clarifies what this entailed:
“I was working in an area of geophysics where you emit sounds on the surface of the Earth (or in the ocean), listen to reverberations that come up, and, from that information, try to figure out what the shape of the subsurface is. It’s kind of like listening to a lightning bolt and trying to figure out what the shape of the clouds are. It’s a complex problem.”
Three years into Hildebrand’s work, Exxon ran into a major dilemma: the company was nearing the end of its seven-year construction timeline on an Alaskan pipeline; if they failed to get oil into the line in time, they’d lose their half-billion dollar tax write-off. Hildebrand was enlisted to fix the holdup — faulty seismic monitoring instrumentation — a task that required “a lot of high-end mathematics.” He succeeded.
“I realized that if I could save Exxon $500 million,” he recalls, “I could probably do something for myself and do pretty well.”
A subsurface map of one geologic strata, color coded by elevation, created on the Landmark Graphics workstation (the white lines represent oil fields); courtesy of Andy Hildebrand
So, in 1979, Hildebrand left Exxon, secured financing from a few prominent venture capitalists (DLJ Financial; Sevin Rosen), and, with a small team of partners, founded Landmark Graphics.
At the time, the geophysical industry had limited data to work off of. The techniques engineers used to map the Earth’s subsurface resulted in two-dimensional maps that typically provided only one seismic line. With Hildebrand as its CTO, Landmark pioneered a workstation — an integrated software/hardware system — that could process and interpret thousands of lines of data, and create 3D seismic maps.
Landmark was a huge success. Before retiring in 1989, Hildebrand took the company through an IPO and a listing on NASDAQ; six years later, it was bought out by Halliburton for a reported $525 million.
“I retired wealthy forever (not really, my ex-wife later took care of that),” jokes Hildebrand. “And I decided to get back into music.”
From Oil to Music Software
An engineer by trade, Hildebrand had always been a musician at heart.
As a child, he was something of a classical flute virtuoso and, by 16, he was a “card-carrying studio musician” who played professionally. His undergraduate engineering degree had been funded by music scholarships and teaching flute lessons. Naturally, after leaving Landmark and the oil industry, Hildebrand decided to return to school to study composition more intensively.
While pursuing his studies at Rice University’s Shepherd School of Music, Hildebrand began composing with sampling synthesizers (machines that let a musician record notes from an instrument, then turn them into digital samples that can be transposed on a keyboard). But he encountered a problem: when he attempted to make his own flute samples, he found the quality of the sounds ugly and unnatural.
“The sampling synthesizers sounded like shit: if you sustained a note, it would just repeat forever,” he harps. “And the problem was that the machines didn’t hold much data.”
Hildebrand, who’d “retired” just a few months earlier, decided to take matters into his own hands. First, he created a processing algorithm that greatly condensed the audio data, allowing for a smoother, more natural-sounding sustain and timbre. Then, he packaged this algorithm into a piece of software (called Infinity), and handed it out to composers.
A glimpse at Infinity's interface from an old handbook; courtesy of Andy Hildebrand
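Hildebrand has never detailed Infinity’s internals, but he has described the underlying technique as the seamless looping of digital samples. As a rough, hypothetical Python sketch of that idea (an illustration, not Infinity’s actual algorithm): search for a point where the waveform closely repeats itself, so a short sample can be sustained indefinitely by cycling between the two points without an audible click.

```python
import numpy as np

def find_loop_point(signal, loop_len, search_start):
    """Search for the offset where the waveform best repeats itself,
    so a sampler can cycle between the two points without a click."""
    template = signal[search_start:search_start + loop_len]
    t_norm = template / np.linalg.norm(template)
    best_offset, best_score = None, -np.inf
    for offset in range(search_start + loop_len, len(signal) - loop_len):
        candidate = signal[offset:offset + loop_len]
        # Normalized correlation: 1.0 means a perfect repetition.
        score = np.dot(t_norm, candidate) / (np.linalg.norm(candidate) + 1e-12)
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

# A sustained tone loops most cleanly at a whole number of pitch periods.
sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(8192) / sr)
print(find_loop_point(x, loop_len=1024, search_start=0))
```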
Infinity improved digitized orchestral sounds so dramatically that it uprooted Hollywood’s music production landscape: using the software, lone composers were able to accurately recreate film scores, and directors no longer had a need to hire entire orchestras.
“I bankrupted the Los Angeles Philharmonic,” Hildebrand chuckles. “They were out of the [sample recording] business for eight years.” (We were unable to verify this, but The Los Angeles Times does report that the Philharmonic entered a 'financially bleak' period in the early 1990s).
Unfortunately, Hildebrand’s software was inherently self-defeating: companies sprouted up that processed sounds through Infinity, then sold them as pre-packaged soundbanks. “I sold 5 more copies, and that was it,” he says. “The market totally collapsed.”
But the inventor’s bug had taken hold of Hildebrand once more. In 1990, he formed his final company, Antares Audio Technologies, with the goal of inventing the music industry’s next big piece of software. And that’s exactly what happened.
The Birth of Auto-Tune
A rendering of the Auto-Tune interface; via WikiHow
At a National Association of Music Merchants (NAMM) conference in 1995, Hildebrand sat down for lunch with a few friends and their wives. Randomly, he posed a rhetorical question — “What needs to be invented?” — and one of the women half-jokingly offered a response:
“Why don’t you make a box that will let me sing in tune?”
“I looked around the table and everyone was just kind of looking down at their lunch plates,” recalls Hildebrand, “so I thought, ‘Geez, that must be a lousy idea’, and we changed the topic.”
Hildebrand completely forgot he’d even had this conversation, and for the next six months, he worked on various other projects, none of which really took off. Then, one day, while mulling over ideas, the woman’s suggestion came back to him. “It just kind of clicked in my head,” he says, “and I realized her idea might not be too bad.”
What “clicked” for Hildebrand was that he could utilize some of the very same processing methods he’d used in the oil industry to build a pitch correction tool. Years later, he’d attempt to explain this on the PBS program NOVA:
'Seismic data processing involves the manipulation of acoustic data in relation to a linear time varying, unknown system (the Earth model) for the purpose of determining and clarifying the influences involved to enhance geologic interpretation. Coincident (similar) technologies include correlation (statics determination), linear predictive coding (deconvolution), synthesis (forward modeling), formant analysis (spectral enhancement), and processing integrity to minimize artifacts. All of these technologies are shared amongst music and geophysical applications.'
At the time, no other pitch correction software existed. Among inventors, it was considered the “holy grail”: many had tried, and none had succeeded.
The major roadblock was that analyzing and correcting pitch in real time required processing a very large amount of sound wave data. Others who’d attempted to create such software had used a technique called feature extraction, where they’d identify a few key “variables” in the sound waves, then correlate them with the pitch. But this method was overly simplistic, and didn’t consider the finer minutiae of the human voice. For instance, it didn’t recognize diphthongs (when the human voice transitions from one vowel to another in a continuous glide), and, as a result, created false artifacts in the sound.
Hildebrand had a different idea.
As an oil engineer, when dealing with massive datasets, he’d employed autocorrelation (an attribute of signal processing) to examine not just key variables, but all of the data, to get much more reliable estimates. He realized that it could also be applied to music:
“When you’re processing pitch, you add wave cycles to go sharp, and subtract them when you go flat. With autocorrelation, you have a clearly identifiable event that tells you what the period of repetition for repeated peak values is. It’s never fooled by the changing waveform. It’s very elegant.”
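For the curious, the principle can be sketched in a few lines of code. The following is a minimal, illustrative Python example of autocorrelation-based pitch detection, not Hildebrand’s proprietary implementation: the lag at which a signal best matches a shifted copy of itself reveals its period of repetition, and hence its pitch.

```python
import numpy as np

def detect_pitch_autocorr(frame, sample_rate, f_min=70.0, f_max=1000.0):
    """Estimate the pitch of a mono audio frame via autocorrelation.

    Returns the estimated fundamental frequency in Hz, or None if
    the frame shows no clear periodicity (e.g., unvoiced sounds).
    """
    frame = frame - np.mean(frame)              # remove DC offset
    n = len(frame)
    # Full autocorrelation; keep only the non-negative lags.
    corr = np.correlate(frame, frame, mode="full")[n - 1:]

    # Only search lags corresponding to the plausible pitch range.
    lag_min = int(sample_rate / f_max)          # short lag = high pitch
    lag_max = int(sample_rate / f_min)          # long lag = low pitch
    if lag_max >= n:
        return None
    best = np.argmax(corr[lag_min:lag_max]) + lag_min

    if corr[best] < 0.3 * corr[0]:              # weak peak -> no clear pitch
        return None
    return sample_rate / best                   # period in samples -> Hz

# A 220 Hz test tone should come back as roughly 220 Hz.
sr = 44100
t = np.arange(2048) / sr
print(detect_pitch_autocorr(np.sin(2 * np.pi * 220 * t), sr))
```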
While elegant, Hildebrand’s solution required an incredibly complex, almost savant application of signal processing and statistics. When we asked him to provide a simple explanation of what happens, computationally, when a voice signal enters his software, he opened his desk and pulled out thick stacks of folders, each stuffed with hundreds of pages of mathematical equations.
“In my mind it’s not very complex,” he says, sheepishly, “but I haven’t yet found anyone I can explain it to who understands it. I usually just say, ‘It’s magic.’”
The equations that do autocorrelation are computationally exhaustive: for every single point of autocorrelation, Hildebrand might have needed to do something like 500 summations of multiply-adds. Previously, other engineers in the music industry had thought it was impossible to use this method for pitch correction: “You needed as many points in autocorrelation as the range in pitch you were processing,” one early-1990s programmer told us. “If you wanted to go from a low E (70 hertz) all the way up to a soprano’s high C (1,000 hertz), you would’ve needed a supercomputer to do that.”
A supercomputer, or, as it turns out, Andy Hildebrand’s math skills.
Hildebrand realized he was limited by the technology, and instead of giving up, he found a way to work within it using math. “I realized that most of the arithmetic was redundant, and could be simplified,” he says. “My simplification changed a million multiply-adds into just four. It was a trick — a mathematical trick.”
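Hildebrand has never published the exact simplification, so the following Python sketch is only a guess at the general idea: in a running autocorrelation, most of the arithmetic really is redundant, because when the analysis window slides forward by one sample, each lag’s sum of products changes by just two terms (one product enters, one leaves).

```python
import numpy as np

def sliding_autocorr(signal, window, lags):
    """Yield the autocorrelation at the given lags for each window position.

    A naive recomputation costs ~window multiply-adds per lag, per sample.
    Updating incrementally costs a constant handful per lag instead.
    """
    acf = np.zeros(len(lags))
    # Pay full price once, for the first window position.
    for i, lag in enumerate(lags):
        acf[i] = np.dot(signal[lag:window], signal[:window - lag])
    yield acf.copy()

    # Then slide: add the newest product, subtract the oldest.
    for end in range(window, len(signal)):
        start = end - window
        for i, lag in enumerate(lags):
            acf[i] += signal[end] * signal[end - lag]      # product entering
            acf[i] -= signal[start + lag] * signal[start]  # product leaving
        yield acf.copy()

# The strongest lag tracks the pitch period as the window slides.
sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(4096) / sr)
frames = sliding_autocorr(x, window=2048, lags=range(44, 631))
print(next(frames).argmax() + 44)   # ~200 samples -> ~220 Hz at 44.1 kHz
```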
With that, Auto-Tune was born.
Auto-Tune’s Underground Beginnings
Hildebrand built the Auto-Tune program over the course of a few months in early 1996, on a specially-equipped Macintosh computer. He took the software to the National Association of Music Merchants conference, the same place where his friend’s wife had suggested the idea a year earlier. This time, it was received a bit differently.
“People were literally grabbing it out of my hands,” recalls Hildebrand. “It was instantly a massive hit.”
At the time, recording pitch-perfect vocal tracks was incredibly time-consuming for both music producers and artists. The standard practice was to do dozens, if not hundreds, of takes in a studio, then spend a few days splicing together the best bits from each take to a create a uniformly in-tune track. When Auto-Tune was released, says Hildebrand, the product practically sold itself.
With the help of a small sales team, Hildebrand sold Auto-Tune (which also came in hardware form, as a rack effect) to every major studio in Los Angeles. The studios that adopted Auto-Tune thrived: they were able to get work done more quickly (doing just one vocal take, through the program, as opposed to dozens) — and as a result, took in more clients and lowered costs. Soon, studios had to integrate Auto-Tune just to compete and survive.
Images from Auto-Tune's patent
Once again, Hildebrand dethroned the traditional industry.
“One of my producer friends had been paid $60,000 to manually pitch-correct Cher’s songs,” he says. “He took her vocals, one phrase at a time, transferred them onto a synth as samples, then played it back to get her pitch right. I put him out of business overnight.”
For the first three years of its existence, Auto-Tune remained an “underground secret” of the recording industry. It was used subtly and unobtrusively to correct notes that were just slightly off-key, and producers were wary of revealing its use to the public. Hildebrand explains why:
“Studios weren’t going out and advertising, ‘Hey, we got Auto-Tune!’ Back then, the public was wary of the idea of ‘fake’ or ‘affected’ music. They were critical of artists like Milli Vanilli [a pop group whose 1990 Grammy Award was rescinded after it was found out they’d lip-synced over someone else’s songs]. What they don’t understand is that the method used before — doing hundreds of takes and splicing them together — was its own form of artificial pitch correction.”
This secrecy, however, was short-lived: Auto-Tune was about to have its coming out party.
The “Coming Out” of Auto-Tune
When Cher’s “Believe” hit shelves on October 22, 1998, music changed forever.
The album’s titular track — a pulsating, Euro-disco ballad with a soaring chorus — featured a curiously roboticized vocal line, where it seemed as if Cher’s voice were shifting pitch instantaneously. Critics and listeners weren’t sure exactly what they were hearing. Unbeknownst to them, this was the start of something much bigger: for the first time, Auto-Tune had crept from the shadows.
In the process of designing Auto-Tune, Hildebrand had included a “dial” that controlled the speed at which pitch corrected itself. He explains:
“When a song is slower, like a ballad, the notes are long, and the pitch needs to shift slowly. For faster songs, the notes are short, the pitch needs to be changed quickly. I built in a dial where you could adjust the speed from 1 (fastest) to 10 (slowest). Just for kicks, I put a “zero” setting, which changed the pitch the exact moment it received the signal. And what that created was the ‘Auto-Tune’ effect.”
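Hildebrand’s description translates naturally into a smoothing loop. Here is a hypothetical Python sketch of how such a retune-speed dial might behave (an illustration under assumed parameters, not Antares’ implementation):

```python
import numpy as np

def retune(detected_hz, target_hz, retune_time_s, sample_rate):
    """Glide a detected pitch contour toward its target notes.

    retune_time_s plays the role of the dial: larger values drift
    slowly and sound natural; zero snaps instantly, producing the
    robotic 'Auto-Tune effect'.
    """
    if retune_time_s <= 0:
        return target_hz.copy()          # the 'zero' setting: hard snap
    # One-pole smoothing coefficient derived from the time constant.
    alpha = 1.0 - np.exp(-1.0 / (retune_time_s * sample_rate))
    out = np.empty_like(detected_hz)
    current = detected_hz[0]
    for i in range(len(detected_hz)):
        current += alpha * (target_hz[i] - current)   # chase the target
        out[i] = current
    return out

# A flat 430 Hz note chasing A440 with a 50 ms time constant.
detected = np.full(4410, 430.0)
target = np.full(4410, 440.0)
smoothed = retune(detected, target, retune_time_s=0.05, sample_rate=44100)
print(smoothed[0], smoothed[-1])   # starts near 430, approaches 440
```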
Before Cher, artists had used Auto-Tune only supplementally, to make minor corrections; the natural qualities of their voice were retained. But on the song “Believe”, Cher’s producers, Mark Taylor and Brian Rawling, made a decision to use Auto-Tune on the “zero” setting, intentionally modifying the singer’s voice to sound robotic.
Cher’s single sold 11 million copies worldwide, earned her a Grammy Award, and topped the charts in 23 countries. In the wake of this success, Hildebrand and his company, Antares Audio Technologies, marketed Auto-Tune as the “Cher Effect”. Many people in the music industry attributed the artist’s success to her use of Auto-Tune; soon everyone wanted to replicate it.
“Other singers and producers started looking at it, and saying ‘Hmm, we can do something like that and make some money too!’” says Hildebrand. “People were using it in all genres: pop, country, western, reggae, Bollywood. It was even used in an Islamic call to prayer.”
The secret of Auto-Tune was out — and its saga had just begun.
The T-Pain Debacle
In 2004, an unknown rapper with dreads and a penchant for top hats arrived on the Florida hip-hop scene. His name was Faheem Rashad Najm; he preferred “T-Pain.”
After recording a few “hot flows,” T-Pain was picked out of relative obscurity and signed to Akon’s record label, Konvict Muzik. Once discovered, he decided he’d rather sing than rap. He had a great singing voice, but in order to stand out, he needed a gimmick — and somewhat fortuitously, he found just that. In a 2014 interview, he explains:
“I used to watch TV a lot [and] there was always this commercial on the channel I would watch. It was one of those collaborative CDs, like a ‘Various Artists’ CD, and there was this Jennifer Lopez song, ‘If You Had My Love.’ That was the first time I heard Auto-Tune. Ever since I heard that song — and I kept hearing and kept hearing it — on this commercial, I was like, ‘Man, I gotta find this thing.’”
T-Pain — who is capable of singing very well naturally — decided to use Auto-Tune to differentiate himself from other artists. “If I was going to sing, I didn’t want to sound like everybody else,” he later told The Seattle Times. “I wanted something to make me different [and] Auto-Tune was the one.” He contacted some “hacker” friends, found a copy of Auto-Tune floating around on the Internet, and downloaded it for free. Then, he says, “I just got right into it.”
An old Auto-Tune pamphlet; courtesy of Andy Hildebrand
Between 2005 and 2009, T-Pain became famous for his “signature” use of Auto-Tune, releasing three platinum records. He also earned a reputation as one of hip-hop’s most in-demand cameo artists. During that time, he appeared on some 50 chart-toppers, working with high-profile artists like Kanye West, Flo Rida, and Chris Brown. During one week in 2007, he was featured on four different Top 10 Billboard Hot 100 singles simultaneously. “Any time somebody wanted Auto-Tune, they called T-Pain,” he later told NPR.
His warbled, robotic application of Auto-Tune earned him a name. It also earned him a partnership with Hildebrand’s company, Antares Audio Technologies. For several years, the duo enjoyed a mutually beneficial relationship. In one instance, Hildebrand licensed his technology to T-Pain to create a mobile app with app development start-up Smule. Priced at $3, the app, “I Am T-Pain”, was downloaded 2 million times, earning all parties involved a few million dollars.
In the face of this success, T-Pain began to feel he was being used as “an advertising tool.”
“Music isn’t going to last forever,” he told Fast Company in 2011, “so you start thinking of other things to do. You broaden everything out, and you make sure your brand can stay what it is without having to depend on music. It’s making sure I have longevity.”
So, T-Pain did something unprecedented: He founded an LLC, then trademarked his own name. He split from Antares, joined with competing audio company iZotope, and created his own pitch correction brand, “The T-Pain Effect”. He released a slew of products bearing his name — everything from a “T-Pain Engine” (a software program that mimicked Auto-Tune) to a toy microphone that shouted, “Hey, this ya boy T-Pain!”
Then, he sued Auto-Tune.
T-Pain vs. Auto-Tune: click to read the full filed complaint
The lawsuit, filed on June 25, 2011, alleged that Antares (maker of Auto-Tune) had engaged in “unauthorized use of T-Pain’s name” on advertising material. Though the suit didn’t state an exact amount of damages sought, it did stipulate that the amount was “in excess of $1,000,000.”
Antares and Hildebrand instantly counter-sued. Eventually, the two parties settled the matter out of court, and signed a mutual non-disclosure agreement. “If you can’t buy candy from the candy store, you have to learn to make candy,” T-Pain later told a reporter. “It’s an all-out war.”
Of course, T-Pain did not succeed in his grand plan to put Auto-Tune out of business.
“We studied our data to see if he really affected us or not,” Hildebrand tells us. “Our sales neither went up nor down due to his involvement. He was remarkably ineffectual.”
For Auto-Tune, T-Pain was ultimately a non-factor. More pressing, says Hildebrand, was Apple, which acquired a competing product in the early 2000s:
“We forgot to protect our patent in Germany, and a German company, [Emagic], used our technology to create a similar program. Then Apple bought [Emagic], and integrated it into their Logic Pro software. We can’t sue them; it would put us out of business. They’re too big to sue.”
But according to Hildebrand, none of this matters much: Antares’ Auto-Tune still owns roughly 90% of the pitch correction market share, and everyone else is “down in the ditch”, fighting for the other 10%. Though Auto-Tune is a brand, it has entered the rarefied strata of products — Photoshop, Kleenex, Google — that have become catch-all verbs. Its ubiquitous presence in headlines (for better or worse) has earned it a spot as one of Ad Age’s “hottest brands in America.”
Yet, as popular as Auto-Tune is with its user base, it is widely detested by the listening public, largely as a result of T-Pain and imitators over-saturating modern music with the effect.
Haters Gonna Hate
A few years ago, in a meeting, famed guitar-maker Paul Reed Smith turned toward Hildebrand and shook his head. “You know,” he said, disapprovingly, “you’ve completely destroyed Western music.”
He was not alone in this sentiment: as Auto-Tune became increasingly apparent in mainstream music, critics began to take a stand against it.
In 2009, alternative rock band Death Cab for Cutie launched an anti-Auto-Tune campaign. “We’re here to raise awareness about Auto-Tune abuse,” frontman Ben Gibbard announced on MTV. “It’s a digital manipulation, and we feel enough is enough.” This was shortly followed by Jay-Z’s “D.O.A. (Death of Auto-Tune)” — a Grammy-winning song that dissed the technology and called for an industry-wide ban. Average music listeners are no less vocal: a comb through the comments section of any Auto-Tuned YouTube video reveals (in proper YouTube form) dozens of virulent, hateful opinions on the technology.
Hildebrand at his Scotts Valley, California office
In his defense, Hildebrand harkens back to the history of recorded sound. “If you’re going to complain about Auto-Tune, complain about speakers too,” he says. “And synthesizers. And recording studios. Recording the human voice, in any capacity, is unnatural.”
What he really means to say is that the backlash doesn’t bother him much. For his years of work on Auto-Tune, Hildebrand has earned himself enough to retire happy — and with his patent expiring in two years, that day may soon come.
“I’m certainly not broke,” he admits. “But in the oil industry, there are billions of dollars floating around; in the music industry, this is it.”
He gestures toward the contents of his office: a desk scattered with equations, a few awkwardly-placed awards, a small bookcase brimming with Auto-Tune pamphlets and signal processing textbooks. It’s a small, narrow space, lit by fluorescent ceiling bulbs and a pair of windows that overlook a parking lot. On a table sits a model ship, its sails perfectly calibrated.
“Sometimes, I’ll tell people, ‘I just built a car, I didn’t drive it down the wrong side of the freeway,'” he says, with a smile. “But haters will hate.”
In January of 2010, Kesha Sebert, known as ‘Ke$ha,’ debuted at number one on Billboard with her album, Animal. Her style is electro-poppy dance music: she alternates between rapping and singing, and the choruses of her songs are typically melodic party hooks that bore deep into your brain: “Your love, your love, your love, is my drug!” At times, her voice is so heavily processed that it sounds like a cross between a girl and a synthesizer. Much of her sound is due to the pitch correction software Auto-Tune.
Sebert, whose label did not respond to a request for an interview, has built a persona as a badass wastoid, who told Rolling Stone that all male visitors to her tour bus had to submit to being photographed with their pants down. Even the bus drivers.
Yet this past November on the Today Show, the 25-year-old Sebert looked vulnerable, standing awkwardly in her skimpy purple, gold, and green unitard. She was there to promote her new album, Warrior, which was supposed to reveal the authentic her.
“Was it really important to let your voice be heard?” asked the host, Savannah Guthrie.
“Absolutely,” Sebert said, gripping the mic nervously in her fingerless black gloves.
“People think they’ve heard the Auto-Tune, they’ve heard the dance hits, but you really have a great voice, too,” said Guthrie, helpfully.
“No, I got, like, bummed out when I heard that,” said Sebert, sadly. “Because I really can sing. It’s one of the few things I can do.”
Warrior starts with a shredding electrical static noise, then comes her voice, sounding like what the Guardian called “a robo squawk devoid of all emotion.”
“That’s pitch correction software for sure,” wrote Drew Waters, Head of Studio Operations at Capitol Records, in an email. “She may be able to sing, but she or the producer chose to put her voice through Auto-Tune or a similar plug-in as an aesthetic choice.”
So much for showing the world the authentic Ke$ha.
Since rising to fame as the weird techno-warble effect in the chorus of Cher’s 1998 song, “Believe,” Auto-Tune has become bitchy shorthand for saying somebody can’t sing. But the diss isn’t fair, because everybody’s using it.
For every T-Pain — the R&B artist who uses Auto-Tune as an over-the-top aesthetic choice — there are 100 artists who are Auto-Tuned in subtler ways. Fix a little backing harmony here, bump a flat note up to diva-worthy heights there: smooth everything over so that it’s perfect. You can even use Auto-Tune live, so an artist can sing totally out of tune in concert and be corrected before their flaws ever reach the ears of an audience. (On season 7 of the UK X-Factor, it was used so excessively on contestants’ auditions that viewers got wise, and protested.)
Indeed, finding out that all the singers we listen to have been Auto-Tuned does feel like someone’s messing with us. As humans, we crave connection, not perfection. But we’re not the ones pulling the levers. What happens when an entire industry decides it’s safer to bet on the robot? Will we start to hate the sound of our own voices?
They’re all zombies!
Cher’s late-‘90s comeback and makeover as a gay icon can be attributed entirely to Auto-Tune, though the song’s producers claimed for years that it was a Digitech Talker vocoder pedal effect. In 1998, she released the single “Believe,” which featured a strange, robotic vocal effect on the chorus that felt fresh. It was created with Auto-Tune.
The technology, which debuted in 1997 as a plug-in for Pro Tools (the industry standard recording software), works like this: you select the key the song is in, and then Auto-Tune analyzes the singer’s vocal line, moving “wrong” notes up or down to what it guesses is the intended pitch. You can control the time it takes for the program to move the pitch: slower is more natural, faster makes the jump sudden and inhuman sounding. Cher’s producers chose the fastest possible setting, the so-called “zero” setting, for maximum pop.
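The key-selection step can be sketched concretely. The toy Python function below (an illustration; the note names, scale table, and search window are assumptions, not Antares’ code) snaps a detected frequency to the nearest note allowed by the chosen key:

```python
import math

# Semitone offsets of the major scale, relative to the tonic.
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def snap_to_key(freq_hz, tonic="C", scale=MAJOR_SCALE):
    """Return the frequency of the in-key note nearest to freq_hz."""
    tonic_pc = NOTE_NAMES.index(tonic)
    allowed = {(tonic_pc + step) % 12 for step in scale}
    midi = 69 + 12 * math.log2(freq_hz / 440.0)   # continuous MIDI number
    # Among nearby semitones, pick the closest one that is in the key.
    best = min(
        (m for m in range(int(midi) - 2, int(midi) + 3) if m % 12 in allowed),
        key=lambda m: abs(m - midi),
    )
    return 440.0 * 2 ** ((best - 69) / 12)

# A slightly flat A (435 Hz), sung in C major, snaps back to A440.
print(round(snap_to_key(435.0, tonic="C"), 1))
```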
“Believe” was a huge hit, but among music nerds, it was polarizing. Indie rock producer Steve Albini, who’s recorded bands like the Pixies and Nirvana, has said he thought the song was mind-numbingly awful, and was aghast to see people he respected seduced by Auto-Tune.
“One by one, I could see that my friends had gone zombie. This horrible piece of music with this ugly soon-to-be cliché was now being discussed as something that was awesome. It made my heart fall,” he told the Onion AV Club in November of 2012.
The Auto-Tune effect spread like a slow burn through the industry, especially within the R&B and dance music communities. T-Pain began Cher-style Auto-Tuning all his vocals, and a decade later, he’s still doing it.
“It’s makin’ me money, so I ain’t about to stop!” T-Pain told DJ Skee in 2008.
Kanye West did an album with it. Lady Gaga uses it. Madonna, too. Maroon 5. Even the artistically high-minded Bon Iver has dabbled. A YouTube series where TV news clips were Auto-Tuned, “Auto-Tune the News,” went viral. The glitchy Auto-Tune mode seems destined to be remembered as the “sound” of the 2000s, the way the gated snare (that dense, big, reverb-y drum sound on, say, Phil Collins songs) is now remembered as the sound of the ‘80s.
Auto-Tune certainly isn’t the only robot voice effect to have wormed its way into pop music. In the ‘70s and early ‘80s, voice synthesizer effects units became popular with a lot of bands. Most famous is the vocoder, invented in the 1930s as a speech compression tool and enlisted during WWII to send encoded Allied messages. Proto-techno groups like New Order and Kraftwerk (e.g., on “Computer World”) embraced it. So did early American funk and hip-hop groups like the Jonzun Crew.
‘70s rockers gravitated toward another effect, the talk box. Peter Frampton (listen for it on “Do You Feel Like We Do”) and Joe Walsh (who used it on “Rocky Mountain Way”) liked its vocoder-like sound. The talk box was easier to rig up than the vocoder — you operate it via a rubber mouth tube when applying it to vocals. But it produces massive amounts of slobber. In his book about the history of synthesized speech machines in the music industry, How to Wreck a Nice Beach, Dave Tompkins writes that Frampton’s roadies sanitized his talk box in Rémy Martin cognac between gigs.
The use of showy effects usually invites a backlash. And in the case of the Auto-Tune warble, Jay-Z struck back with the 2009 single “D.O.A.,” or “Death of Auto-Tune.”
I know we facing a recession
But the music y'all making going make it the great depression
All y'all lack aggression
Put your skirt back down, grow a set man
Nigga this shit violent
This is death of Auto-Tune, moment of silence
That same year, the band Death Cab for Cutie showed up at the Grammys wearing blue ribbons to raise awareness, they told MTV, about “rampant Auto-Tune abuse.”
The protests came too late, though. The lid to Pandora’s box had been lifted. Music producers everywhere were installing the software.
Everybody uses it
“I’ll be in a studio and hear a singer down the hall and she’s clearly out of tune, and she’ll do one take,” says Drew Waters of Capitol Records. That’s all she needs. Because they can fix it later, in Auto-Tune.
There is much speculation online about who does — or doesn’t — use Auto-Tune. Taylor Swift is a key target, as her terribly off-key duet with Stevie Nicks at the 2010 Grammys suggests she’s tone deaf. (Label reps said at the time something was wrong with her earpiece.) But such speculation is naïve, say the producers I talked to. “Everybody uses it,” says Filip Nikolic, singer in the LA-based band, Poolside, and a freelance music producer and studio engineer. “It saves a ton of time.”
On one end of the spectrum are people who dial up Auto-Tune to the max, a la Cher / T-Pain. On the other end are people who use it occasionally and sparingly. You can use Auto-Tune not only to pitch correct vocals, but other instruments too, and light users will tweak a note here and there if a guitar is, say, rubbing up against a vocal in a weird way.
“I’ll massage a note every once in a while, and often I won’t even tell the artist,” says Eric Drew Feldman, a San Francisco-based musician and producer who’s worked with The Polyphonic Spree and Frank Black.
But between those two extremes, you have the synthetic middle, where Auto-Tune is used to correct nearly every note, as one integral brick in a thick wall of digitally processed sound. From Justin Bieber to One Direction, from The Weeknd to Chris Brown, most pop music produced today has a slick, synth-y tone that’s partly a result of pitch correction.
However, good luck getting anybody to cop to it. Big producers like Max Martin and Dr. Luke, responsible for mega hits from artists like Ke$ha, Pink, and Kelly Clarkson, either turned me down or didn’t respond to interview requests. And you can’t really blame them.
“Do you want to talk about that effect you probably use that people equate with your client being talentless?”
Um, no thanks.
In 2009, an online petition went around protesting the overuse of Auto-Tune on the show Glee. Those producers turned down an interview, too.
The artists and producers who would talk were conflicted. One indie band, The Stepkids, had long eschewed Auto-Tune and most other modern recording technologies to make what they call “experimental soul music.” But the band recently did an about-face, and Auto-Tuned their vocal harmonies on their forthcoming single, “Fading Star.”
Were they using Auto-Tune ironically or seriously? Co-frontman Jeff Gitelman said,
“Both.”
“For a long time we fought it, and we still are to a certain degree,” said Gitelman. “But attention spans are a certain way, and that’s how it is…we just wanted it to have a clean, modern sound.”
Hanging above the toilet in San Francisco’s Different Fur recording studios — where artists like the Alabama Shakes and Bobby Brown have recorded — is a clipping from Tape Op magazine that reads: “Don’t admit to Auto-Tune use or editing of drums, unless asked directly. Then admit to half as much as you really did.”
Different Fur’s producer/engineer/owner, Patrick Brown, who hung the clipping there, has recorded acts like the Morning Benders, and says many indie rock bands “come in, and first thing they say is, ‘We don’t tune anything.’”
Brown is up for ditching Auto-Tune if the client really wants to, but he says most of the time, they don’t really want to. “Let’s face it, most bands are not genius.” He’ll feel them out by saying, with a wink-wink-nod-nod: “Man, that note’s really out of tune, but that was a great take.” And a lot of times they’ll tell him, go ahead, Auto-Tune it.
Marc Griffin is in the RCA-signed band 2AM Club, which has both an emcee and a singer (Griffin is the singer). He first got Auto-Tuned in 2008, when he recorded a demo with producer Jerry Harrison, the former keyboardist and guitarist for Talking Heads.
“I sang the lead, then we were in the control room with the engineer, and he put ‘tune on it. Just a little. And I had perfect pitch vocals. It sounded amazing. Then we started stacking vocals on top of it, and that sounded amazing,” says Griffin.
Now, Griffin sometimes records with Auto-Tune on in real time, rather than having it applied to his vocals in post-production, a trend producers say is not unusual. This means that the artist hears the tuned version of his or her voice coming out of the monitors while singing.
“Every time you sing a note that’s not perfect, you can hear the frequencies battle with each other,” Griffin says, which sounds kind of awful, but he insists it “helps you hear what it will really sound like.”
Singer/songwriter Neko Case kvetched about these developments in an interview with the online music magazine Pitchfork. “I’m not a perfect note hitter either, but I’m not going to cover it up with Auto-Tune. Everybody uses it, too. I once asked a studio guy in Toronto, ‘How many people don’t use Auto-Tune?’ and he said, ‘You and Nelly Furtado are the only two people who’ve never used it in here.’ Even though I’m not into Nelly Furtado, it kind of made me respect her. It’s cool that she has some integrity.”
That was 2006. This past September, Nelly Furtado released the album The Spirit Indestructible. Its lead single is doused in massive levels of Auto-Tune.
Dr. Evil
Somebody once wrote on an online message board that the guy who created Auto-Tune must “hate music.” That could not be further from the truth. Its creator, Dr. Andy Hildebrand, AKA Dr. Andy, is a classically trained flautist who spent most of his youth playing professionally, in orchestras. Despite the fact that the 66-year-old only recently lopped off a long, gray ponytail, he’s no hippie. He never listened to the rock music of his generation.
“I was too busy practicing,” he says. “It warped me.”
The only post-Debussy artist he’s ever gotten into is Patsy Cline.
Hildebrand’s company, Antares, nestled in an anonymous-looking office park in the mountains between Silicon Valley and the Pacific coast, has only ten employees. Hildebrand invents all the products (Antares recently came out with Auto-Tune for Guitar). His wife is the CFO.
Hildebrand started his career as a geophysicist, programming digital signal processing software which helped oil companies find drilling spots. After going back to school for music composition at age 40, he discovered he could use those same algorithms for the seamless looping of digital music samples, and later for pitch correction. Auto-Tune, and Antares, were born.
Auto-Tune isn’t the only pitch correction software, of course. Its closest competitor, Melodyne, is reputed to be more “natural” sounding. But Auto-Tune is, in the words of one producer, “the go-to if you just want to set-it-and-forget-it.”
In interviews, Hildebrand handles the question of “is Auto-Tune evil?” with characteristic dry wit. His stock answer is, “My wife wears makeup, does that make her evil?” But on the day I asked him, he answered, “I just make the car. I don’t drive it down the wrong side of the road.”
The T-Pains and Chers of the world are the crazy drivers, in Hildebrand’s analogy. The artists that tune with subtlety are like his wife, tasteful people looking to put their best foot forward.
Another way you could answer the question: recorded music is, by definition, artificial. The band is not singing live in your living room; microphones and speakers put it there. Mixing, overdubbing, and multi-tracking allow instruments and voices to be recorded, edited, and manipulated separately. There are multitudes of effects, like compression, which brings down loud sounds and amplifies quiet ones, so you can hear an artist taking a breath in between words. Reverb and delay create echo effects, which can make vocals sound fuller and rounder.
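As a toy illustration of that compression behavior (simplified; real compressors also have attack and release envelopes, and the threshold, ratio, and gain values below are arbitrary examples): levels above a threshold are scaled down, then makeup gain lifts everything, quiet breaths included.

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Toy static compressor curve (no attack/release envelope)."""
    if level_db > threshold_db:
        # Levels above the threshold are reduced by the ratio...
        level_db = threshold_db + (level_db - threshold_db) / ratio
    # ...then makeup gain lifts the whole signal, breaths and all.
    return level_db + makeup_db

print(compress_db(0.0))     # loud peak: 0 dB in, -9 dB out
print(compress_db(-40.0))   # quiet breath: -40 dB in, -34 dB out
```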
When recording went from tape to digital, there were even more opportunities for effects and manipulation, and Auto-Tune is just one of many of the new tools available. Nonetheless, there are some who feel it’s a different thing. At best, unnecessary. At worst, pernicious.
“The thing is, reverb and delay always existed in the real world, by placing the artist in unique environments, so [those effects are] just mimicking reality,” says Larry Crane, the editor of music recording magazine, Tape Op, and a producer who’s recorded Elliott Smith and The Decemberists. If you sang in a cave, or some other really echo-y chamber, you’d sound like early Elvis, too. “There is nothing in the natural world that Auto-Tune is mimicking, therefore any use of it should be carefully considered.”
“I’d rather just turn the reverb up on the Fender Twin in the troubling place,” says Arizona indie rock pioneer Howe Gelb, of the band Giant Sand. He describes Auto-Tune and other correction plug-ins as “foul” in a way he can’t quite put his finger on. “There’s something embedded in the track that tends to push my ear away.”
Lee Alexander, one-time boyfriend of Norah Jones and bass player and producer for her country side project, The Little Willies, used no Auto-Tune on their two records, and says he doesn’t even own the program.
“Stuff is out of tune everywhere…that to me is the beauty of music,” he wrote in an email.
In 2000, Matt Kadane of the band The New Year and his brother, Bubba, covered Cher’s “Believe,” complete with Auto-Tune, in their former Texas slo-core band, Bedhead. Kadane told me he hated the original “Believe,” and had to be talked into covering it, but surprisingly found that putting Auto-Tune on his vocals “added emotional weight.” He hasn’t, however, used Auto-Tune since.
“It’s one thing to make a statement with hollow, disaffected vocals, but it’s another if this is the way we’re communicating with each other,” he says.
For some people, I said, it seems that Auto-Tune is a lot like dudes and fake boobs. Some dudes see fake boobs, they know they’re fake, but they get an erection anyway. They can’t help themselves. Kadane agreed that it “can serve that function.”
“But at some point you’d say ‘that’s fucked up that I have an erection from fake boobs!’” he says. “And in the midst of experiencing that, I think ideally you have a moment that reminds you that authenticity is still possible. And thank God not everything in the world is Auto-Tuned.”
The Beatles actually suck
The concept of pitch needing to be “correct” is a somewhat recent construct. Cue up the Rolling Stones’ Exile on Main St., and listen to what Mick Jagger does on “Sweet Virginia.” There are a lot of flat and sharp notes, because, well, that’s characteristic of blues singing, which is at the roots of rock and roll.
“When a [blues] singer is ‘flat,’ it’s not that he’s doing it because he doesn’t know any better. It’s for inflection!” says Victor Coelho, Professor of Music at Boston University.
Blues singers have traditionally played with pitch to express feelings like longing or yearning, to punch up a nastier lyric, or make it feel dirty, he says. “The music is not just about hitting the pitch.”
Of course, that style of vocal wouldn’t fly in Auto-Tune. It would get corrected. Neil Young, Bob Dylan, and many other classic artists whose voices are less than pitch-perfect would probably be pitch-corrected if they started out today.
John Parish, the UK-based producer who’s worked with PJ Harvey and Sparklehorse, says that though he uses Auto-Tune on rare occasions, he is no fan. Many of the singers he works with, Harvey in particular, have eccentric vocal styles — he describes them as “character singers.” Using pitch correction software on them would be like trying to get Jackson Pollock to stay inside the lines.
“I can listen to something that can be really quite out of tune, and enjoy it,” says Parish. But is he a dying breed?
“That’s the kind of music that takes five listens to get really into,” says Nikolic, of Poolside. “That’s not really an option if you want to make it in pop music today. You find a really catchy hook and a production that is in no way challenging, and you just gear it up!”
If you’re of the generation raised on technology-enabled perfect pitch, does your brain get rewired to expect it? So-called “supertasters” are people who are genetically more sensitive to bitter flavors than the rest of us, and therefore can’t appreciate delicious bitter things like IPAs and arugula. Is the Auto-Tune generation likewise more sensitive to off key-ness, and thus less able to appreciate it? Some troubling signs point to ‘yes.’
“I was listening to some young people in a studio a few years ago, and they were like, ‘I don’t think The Beatles were so good,’” says producer Eric Drew Feldman. They were discussing the song “Paperback Writer.” “They’re going, ‘They were so sloppy! The harmonies are so flat!’”
Just make me sound good
John Lennon famously hated his singing voice. He thought it sounded too thin, and was constantly futzing with vocal effects, like the overdriven sound on “I Am the Walrus.” I can relate. I love to sing, and in my head, I hear a soulful, husky alto. What comes out, however, is a cross between a child in the musical Annie and Gretchen Wilson: nasal, reedy, about as soulful as a mosquito. I’m in a band and I write all the songs, but I’m not the singer: I wouldn’t subject people to that.
Producer and editor Larry Crane says he thinks lots of artists are basically insecure about their voices, and use Auto-Tune as a kind of protective shield.
“I’ve had people come in and say I want Auto-Tune, and I say, ‘Let’s spend some time, let’s do five vocal takes and compile the best take. Let’s put down a piano guide track. There’s a million ways to coach a vocal. Let’s try those things first,’” he says.
Recently, I went over to a couple-friend’s house with my husband, to play with Auto-Tune. The husband of the couple, Mike, had the software on his home computer – he dabbles in music production – and the idea was that we’d record a song together, then Auto-Tune it.
We looked for something with four-part harmony, so we could all sing, and for a song where the backing instrumental was available online. We settled on Boyz II Men’s “End of the Road.” One by one we went into the bedroom to record our parts, with a mix of shame and titillation not unlike taking turns with a prostitute.
When we were finished, Mike played back the finished piece, without Auto-Tune. It was nerve-wracking to listen to; I felt like my entire body was cringing. Although I hit the notes OK, there was something tentative and childlike about my delivery. Thank God these are my good friends, I thought. Of course, they were probably all thinking the same thing about their own performances, too, but in my mind, my voice was the most annoying of all, so wheedling and prissy sounding.
Then Mike Auto-Tuned two versions of our Boyz II Men song: one with Cher/T-Pain-style glitchy Auto-Tune, the other with “natural” sounding Auto-Tune. The exaggerated one was hilariously awesome – it sounded just like a generic R&B song.
But the second one shocked me. It sounded like us, for sure. But an idealized version of us. My husband’s gritty vocal attack was still there, but he was singing on key. And something about fine-tuning my vocals had made them sound more confident, like smoothing out a tremble in one’s speech.
The Auto-Tune-or-not-Auto-Tune debate always seems to turn into a moralistic one, as if you somehow have more integrity if you don’t use it, or only use it occasionally. But seeing how innocuous, even lovely, it could be made me rethink. If I were a professional musician, would I reject the opportunity to sound what I consider to be “my best,” out of principle?
The answer to that is probably no. But then it gets you wondering. How many insecure artists with “annoying” voices will retune themselves before you ever have a chance to fall in love?
Video stills from:
TiK ToK by Ke$ha
Animal by Ke$ha
Believe by Cher
In The Air Tonight by Phil Collins
Buy U A Drink by T-Pain
Hung Up in Glee
Big Hoops by Nelly Furtado
Piano Fire by Sparklehorse and P.J. Harvey
Imagine by John Lennon