With Artificial Intelligence threatening to make songwriters redundant, how do humans stay relevant in the music business?
In the summer of 1940 the music industry’s trade rag Billboard introduced something called the Billboard Music Popularity Chart. This “trade service feature” compiled the top ten best-selling records across the USA to provide market insights for the wholesalers and retailers who stocked records, for radio programmers, and for songwriters and producers who were always on the lookout for musical trends that could be exploited. Billboard had previously published other such lists, like Sheet Music Best Sellers, Records Most Popular on Music Machines (i.e. jukeboxes) and even Popular Songs Heard in Vaudeville Theatres Last Week. Catchy. But that first systematic tabulation of the sales of 78-RPM singles in 1940 made history. Quickly growing beyond a top ten, it became a weekly fixture that would play a significant role in defining the musical landscape of the future.
In 1940 there was no such thing as an album, although a few canny marketeers were beginning to group together booklets of three or four 78s. Long-playing records had not been invented; the musical unit that mattered was the song: three minutes of instant human-to-human connection whose immediacy and impact no other artform has ever transcended. By the end of the decade, the music industry had twigged that individual songs could be the gateway to more lucrative album sales – and this became the model for music delivery.
Until now. Thanks to breakneck developments in technology that have drastically altered patterns of music consumption, particularly among millennials and centennials, we are once again in an age where the song reigns supreme. According to Fred Bolza, Vice President of Strategy at Sony Music Entertainment, “the unit of conversation around music has gone back to the short form. The binding glue between artist and fan is once more the song. Very rarely do the singles chart and the album chart coincide anymore, so the song is the most important thing.” The age-old craft of songwriting is “the only thing that gives you the chance of sustaining a career in popular music today.”
A computer can learn how to write music based on data and perfect patterns. But what you need in a really great song is imperfection
The pop song as we know it may have originated in the 1940s, been honed in the 1950s and perfected in the 1960s by the likes of Lennon and McCartney. But the principle of the pop song goes right back to the classical era, or even further. You might wonder what Ed Sheeran has in common with Franz Schubert, but listen a little deeper… The answer is: nearly everything.
Most of today’s successful pop songs are based on a dozen or so chord sequences that were figured out in previous centuries. “In the present age, someone such as Adele is an original singer because of her voice, her attitude and her style,” notes the composer, broadcaster and writer Howard Goodall. “But the chords and sequences she and most pop writers are using have been around for a very long time… The originator of the three-minute pop song was [probably] John Dowland, way back in Shakespeare’s time.”
Mozart was one of the first “freelance” composers able to make a living without relying on an institution or an aristocratic patron. That made it all the more important that he wrote stuff people actually wanted to hear. Fortunately, he was a melodic genius, perhaps the greatest ever, and canny enough to realise that it was the songs he wrote that people were most drawn to: the arias that tug at the heartstrings and delight with their melodically inventive articulation of the human condition. The opera composers Giuseppe Verdi and Giacomo Puccini also wrote arias that were the smash hits of the day. When Verdi died in 1901, a quarter of a million people lined the streets to mourn him and sang, as one, the Chorus of Hebrew Slaves from his opera Nabucco.
Above all, though, the modern pop song was created by Schubert. Franz was a melody machine. By the time of his death just before his 32nd birthday, he had penned more than 600 songs. And like any songwriter worth their salt, with every single one he sought to create something that would be instantly relatable and memorable. As Goodall says: “There’s not a moment where you have to listen ten times before you get your head around a song. He wants you to get it first time; there’s verse-chorus, voice and piano underneath, and he wants you to remember the chorus.”
These remain pretty standard rules of songwriting: there’s nothing much about Adele or Simon & Garfunkel or Leonard Cohen, he reckons, “that would have seemed alien to Schubert in terms of the chords, or the shape, the way the verse leads into the chorus, or the piano accompaniment.” The only thing that would strike Schubert as odd about an Adele song, in fact, is that a woman was its originator rather than its object. Most of Schubert’s songs are indeed about women: specifically, those whom the singer-protagonists are in love with, and often rejected by. Like many of his successors, Schubert started composing songs as a lovesick teenager. And as the singer and Schubertian expert Ian Bostridge explains, in his songs “expressive sincerity comes before vocal prowess; authenticity and intimacy are at a premium.”
These qualities remain central to good songwriting today, whether you’re Adele, Drake or Ed Sheeran. That ability to nourish the self is surely critical to the survival of songwriting in the digital era. Technology is disrupting the music industry in myriad ways – even being touted as a replacement for human creators. We have been programming computers to write music since at least the 1950s, when the composer Lejaren Hiller oversaw the first computer-generated score, the Illiac Suite for string quartet. But while gains in artificial intelligence applications in disciplines like video games or facial recognition have been staggering, the music written by algorithms falls short. Google Brain, for example, recently announced its ambitious project Magenta, which aims to have computers produce “compelling and artistic” music filled with surprises. The only thing that’s surprising so far is how awful the output is.
No doubt software will get better at chewing data to analyse chord sequences, figure out what listeners like best, and design equivalents. According to Patrick Stobbs, co-founder of “Musical AI” start-up Jukedeck: “there’s no rule of physics that says computers can’t get as good as a human.” Perhaps these works of digital art will become ubiquitous as soundtracks to video games or jingles. But we are a long way off computers writing mighty songs that stop us in our tracks. Songs that transport us, that move us, that make us cry or make us leap for joy. Songs that heal us, songs that teach us, songs that simply blow our minds because they happen to connect to us in a visceral way that isn’t explicable but feels like the freaking best thing ever.
In 2017 technology may dominate the means of making music, but songwriting remains a fundamentally emotional exercise: one that comes from an impulse to touch others by expressing and addressing something deep within ourselves. Machines, as clever as they may be, are not yet emotional. When it comes to computing the human heart, so long as there are people falling in and out of love, people will triumph over the machines.
Bolza agrees. “A computer can learn how to write music, sure, but a machine takes a vast amount of data and makes things based on perfect patterns. And what you need in a really great song is imperfection. So you can recreate all the elements to try and sound like Lennon and McCartney, but the alchemy of being human is what makes Lennon and McCartney Lennon and McCartney. Error has to be part of anything that makes us feel, because life is riddled with it.”
Clemency Burton-Hill is a writer, broadcaster, musician and presenter of the BBC Radio 3 Breakfast Show.