When I was 15 I discovered The Smiths, a band whose name had by then long been synonymous with misery. But it was Morrissey’s unique style of being miserable – coquettish and laced with Northern English humour, flipping between self-pity and irony – that appealed to my teenage self. That and the grandiose but intricately layered sweeps of Johnny Marr’s guitar. I’d always cry at the same points in each song: the end of Hand in Glove, the chord changes before the chorus of Girl Afraid, the line in The Queen is Dead where he sings “we can go for a walk where it’s quiet and dry”. I’m still not sure why the last one had such an effect.
Two decades later, Spotify has built an algorithm that aims to quantify the amount of sadness in a music track. The streaming service has collected metadata on each of the 35 million songs in its database, accessible through its web API, including a valence score for every track, from 0 to 1. “Tracks with high valence sound more positive (eg happy, cheerful, euphoric), while tracks with low valence sound more negative (eg sad, depressed, angry)”, according to Spotify. There are similar scores for other parameters including energy (how “fast, loud and noisy” a track is) and danceability, which is exactly what it sounds like.
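For a sense of what this looks like in practice, here’s a minimal Python sketch that pulls those scores for a single track through the Web API’s audio-features endpoint (the access token and track ID are placeholders you’d supply yourself):

```python
import requests

TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: obtain one via Spotify's OAuth flow

def audio_features(track_id: str) -> dict:
    """Fetch Spotify's audio features (valence, energy, danceability...) for a track."""
    resp = requests.get(
        f"https://api.spotify.com/v1/audio-features/{track_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

features = audio_features("TRACK_ID_HERE")  # placeholder Spotify track ID
print(features["valence"], features["energy"], features["danceability"])
```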
The valence data has been a gift to bloggers and journalists with data science skills and a taste for the dark side. It’s been used to develop a ‘gloom index’ of Radiohead songs, to reveal the most depressing Christmas song, to find out which European countries prefer sad songs (Portuguese fado really is a downer) and to show that even Eurovision winners are getting gloomier. (A recent academic study, based on data from the open-source audio repository AcousticBrainz, also suggests UK chart hits have become sadder over the last 30 years.)
But how can an algorithm – which cannot feel a thing – tell the difference between a happy song and a sad one? “It’s an initially challenging concept, that you would be able to quantify the sadness that a song evokes”, says Charlie Thompson, the data scientist behind the Radiohead ‘gloom index’, who blogs as RCharlie. Inspired by his approach, I decided to test the Spotify data out for myself using some of the most popular songs of the last half a century – Billboard number one hits. First, I found the names of all the number ones on the Billboard Hot 100 charts since they began in August 1958, a list of 1,080 tracks. Then I matched them to the Spotify data. Only one track wasn’t on Spotify: Over and Over by the Dave Clark Five. So, what’s the saddest song ever to hit number one?
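I’ll get to that. First, for anyone wanting to replicate the matching step, here’s a rough sketch of one way it might be done, using the Web API’s search endpoint (the function name, query format and take-the-first-hit shortcut are my own choices, not necessarily what was done here):

```python
import requests

def match_track(title: str, artist: str, token: str) -> str | None:
    """Look up a Billboard entry on Spotify; return its track ID, or None if absent."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": f"track:{title} artist:{artist}", "type": "track", "limit": 1},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["tracks"]["items"]
    return items[0]["id"] if items else None  # None for the odd miss
```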
Don’t worry, be happy
Before I reveal its name, let’s consider what you might expect a sad song to sound like. Perhaps it would be in a minor key? “Major modes are frequently related to positive valence, or more specifically to emotional states such as happiness or solemnity, whereas minor modes are often associated with negative valence (sadness or anger)”, explains Rui Pedro Paiva, Professor of Informatics Engineering at the University of Coimbra, Portugal, and a specialist in music emotion recognition. Surprisingly, this is not the case among this group of Billboard number ones: while there are more than twice as many major-key as minor-key songs, there’s no difference in average valence between them.
Perhaps a sad song would be slow, or lacking energy, like the movements of a sad person? This does seem to be the case with Billboard number ones: the lower valence tracks also tend to be lower energy. But some tracks are low valence and high energy – the angry tracks. So a better definition of a ‘sad’ song might be one that’s both negative in its mood and lacking energy, to distinguish it from an angry song. Let’s use both the valence and energy scores to find out the saddest track.
This chart would be familiar to music psychologists, who often visualise feelings in terms of valence and energy (or ‘arousal’), and divide them into quadrants based on four basic emotions: sadness, happiness, anger and calm. Sad songs (low valence, low energy) appear in the bottom left corner of the chart, happy songs (high valence, high energy) in the top right, angry songs (low valence, high energy) in the top left and calm songs (high valence, low energy) in the bottom right.
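With the scores in hand, the quadrant assignment takes a few lines of pandas. This sketch also ranks tracks by one plausible ‘sadness’ measure – distance from the bottom-left corner – though that exact ranking, and the CSV it reads, are my assumptions:

```python
import pandas as pd

# Assumed file: one row per number one, with 'title', 'artist',
# 'valence' and 'energy' columns (each score between 0 and 1)
hits = pd.read_csv("number_ones_with_features.csv")

def quadrant(valence: float, energy: float) -> str:
    """Map a track onto the four-quadrant valence/energy model."""
    if valence < 0.5:
        return "angry" if energy >= 0.5 else "sad"
    return "happy" if energy >= 0.5 else "calm"

hits["mood"] = [quadrant(v, e) for v, e in zip(hits["valence"], hits["energy"])]

# One plausible sadness ranking: distance from the bottom-left corner (0, 0)
hits["sadness"] = (hits["valence"] ** 2 + hits["energy"] ** 2) ** 0.5
print(hits.sort_values("sadness").head(5)[["title", "artist"]])
```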
On the whole, number one hits tend to be pretty cheerful – the happy corner has by far the most songs. The most upbeat are Hey Ya!, Macarena (hey!) and Brown Sugar by The Rolling Stones. Don’t Worry, Be Happy is the calmest, most chilled-out song. Eminem’s Lose Yourself is off on its own in the angry quadrant. It’s not shown on the chart, but happier songs tend to be more danceable. And the most danceable number one? It’s Ice Ice Baby by Vanilla Ice, which I can totally get behind. But let’s look at what Spotify’s algorithm considers the most miserable songs, down in the sad corner.
Five saddest Billboard number one songs, 1958-2018, based on valence and energy data from Spotify
1. The First Time Ever I Saw Your Face – Roberta Flack (number 1 in 1972)
2. Three Times a Lady – Commodores (1978)
3. Are You Lonesome Tonight? – Elvis Presley (1960)
4. Mr Custer – Larry Verne (1960)
5. Still – Commodores (1979)
The saddest song ever to top the charts since 1958, according to the data, is The First Time Ever I Saw Your Face by Roberta Flack, which was number one for six weeks in 1972. It is not a sad song. It is a tender, soulful love song. Three Times a Lady by the Commodores is also a slow love ballad and Mr Custer is a comedy song about a soldier who doesn’t want to fight. Of the five ‘saddest’, only the Elvis track and Still, another Commodores track, could really be described as sad songs. The algorithm is definitely on to something, but it’s not brilliant at coping with Lionel Richie.
Lyrics clearly have a big impact on the mood of a song. The Spotify data appears not to take account of them, although the Radiohead ‘gloom index’ and the other studies do find a way to quantify lyrical sadness using sentiment analysis. So what is the Spotify data based on? Spotify doesn’t release any information about this, so I ask Glenn McDonald, the company’s Data Alchemist. Yes, that’s his real job title. He’s the man responsible for Every Noise at Once, a visualisation of all 1,870 music genres classified by the streaming platform, from ‘deep filthstep’ to ‘Belgian indie’.
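For the curious, quantifying lyrical sadness usually means scoring each word against a sentiment lexicon and averaging. Here’s a toy sketch of the idea – the lexicon is a tiny stand-in for the full word lists (such as AFINN or NRC) that real analyses draw on:

```python
# Toy stand-in lexicon: positive scores for happy words, negative for sad ones
LEXICON = {"love": 1, "happy": 2, "cry": -2, "lonely": -2, "miserable": -3}

def lyric_sentiment(lyrics: str) -> float:
    """Average sentiment per word; more negative means gloomier."""
    words = lyrics.lower().split()
    if not words:
        return 0.0
    return sum(LEXICON.get(w, 0) for w in words) / len(words)

print(lyric_sentiment("I was happy in the haze of a drunken hour"))
```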
The valence dataset was developed using human training data, then extrapolated by machine learning, McDonald tells me. Spotify uses the track metadata to help editors make the mood-based playlists the platform is famous for: Happy Pop Hits, Easy 00s, A Perfect Day. “The data can find what a human would never have time to collect, but the human can make subjective and cultural judgments that the machines can’t.” I ask him which audio features the algorithm has learned to classify as happy or sad but he doesn’t (or isn’t able to) reveal much: “Valence is one of our elemental features, so it isn't described in terms of others”. The company is currently improving its emotional classification system by asking its users to tag short track excerpts with mood words. (I tried this and it’s not as easy as it sounds.)
It’s not just Spotify doing this. Gracenote’s Mood 2.0 employs a neural network to classify music tracks in terms of their mood profile, and the results are incredibly specific: Give it Away by the Red Hot Chili Peppers is 38% ‘loud n’ scrappy’ and 2% ‘alienated anxious groove’. Machine learning is also used in the academic field of music emotion recognition. Starting with a pool of tracks verified as having a particular emotional quality, for example a list of sad songs collated using mood word tags applied by human listeners, it’s possible for a computational model to “automatically learn a mapping between music clips and their respective emotions”, Professor Paiva explains. But it’s not an easy task. “Emotion perception in music is inherently subjective: different people might perceive different emotions in the same song.” Another fundamental hurdle is that “it is not well understood how and why some musical elements elicit specific emotional responses in listeners.” Hence my puzzling waterworks at that one Smiths line.
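Paiva’s description maps onto a standard supervised-learning recipe. Here’s a minimal sketch using randomly generated stand-in data – in a real Mer system the features would be audio descriptors for each 30-second clip, and the labels the mood tags applied by human listeners:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for a real labelled corpus: 1,000 'clips', 20 audio descriptors
# each, and four mood classes (think sad / happy / angry / calm)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=4, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Learn the clip-to-emotion mapping; held-out accuracy is the usual yardstick
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```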
‘Darling, they’re playing our tune’
Machines can now learn, but so far, they lack the idiosyncrasies of humans, our fine-grained cultural knowledge and our ability to put what we hear into a very specific context. Computers lack emotional memories, too, those autobiographical associations that can imbue music with meaning and richness. (This tendency of music to forever remind us of emotionally powerful things that happened to us is known by music psychologists as the ‘Darling, they’re playing our tune’ theory.) “When you hear a song, you might remember where you were when you first heard it, and that will dictate how you’re going to experience that song in the future”, says data scientist Charlie Thompson. “When a machine looks at a song, it just sees a waveform. It doesn’t even really have a concept of time that’s meaningful.” Spotify’s Data Alchemist Glenn McDonald agrees: “Machines don’t ‘perceive’ music in any human sense. Humans have context and emotion and nostalgia and language and dreams and fears. It's like asking how an airplane goes sightseeing. The airplane doesn't. It's just a thing humans use to do human things at a larger scale.”
So when a machine learning algorithm classifies the mood of a track, what is it doing? It can’t attempt to classify the emotions you feel when you listen to a song, at least not yet. Instead, “most current Mer [music emotion recognition] systems are focused on perceived emotion”, says Paiva. That is, the emotion or emotions a person identifies or ‘sees’ in a song – Eminem’s tracks are angry; 70s disco is sexy and joyful; this song is sad. (There’s also a third kind, transmitted emotion, which is “the emotion that the performer or composer aimed to convey”.)
Felt and perceived emotion can be quite different, and the ambiguity of the words we use to describe them can bamboozle machines: “when a person uses the tag ‘hate’ it might mean that the song is about hate or that the person hates the song”, says Paiva. At the moment, the best Mer systems are about 70% accurate at recognising static emotions in 30-second musical excerpts, he tells me. That is, if you fed today’s star algorithm 10 song snippets, it would on average label three of them with the wrong emotion. That’s far from perfect, and reducing a track to a single value loses a lot of information about the emotional changes that happen over its course.
But the performance of Mer systems is improving all the time. In five or 10 years’ time they’ll be much better. The technology has many potential uses, according to Paiva, from music therapy through to gaming and advertising: “Mer systems could be used to find songs that match a desired emotional context for some product or scene, or to use the audio information to recognise emotion in video.”
“We’re at a really interesting moment,” says Nicola Dibben, Professor of Music at the University of Sheffield. Data from online streaming services like Spotify, Pandora, Tidal and YouTube offers exciting opportunities to researchers who want to find out how the acoustic characteristics of music elicit particular emotions in listeners, she says. And the oceans of listening data such services create are potentially a precious source of insights about “people’s actual listening habits”, that is, “what people are really doing with music at a particular moment in time”, whether that’s singing in the shower or crying over a breakup. If those companies share their data with researchers, that is.
The crying game
There is a darker side. In a speech earlier this year, Bank of England chief economist Andy Haldane quoted a study by researchers at Claremont Graduate University suggesting there’s a link between song sentiment and consumer confidence. The researchers extracted musical and lyrical sentiment data from songs in the top 100 charts from various sources, including Spotify, to show that fluctuations in the average mood of songs could predict the monthly returns of various financial indices. People’s listening tastes appear to shift in tandem with the movement of markets. It’s an extraordinary idea but the logic is plausible: we’re more likely to listen to happy songs in good times, and sad songs in bad times. In his speech, the economist went further: “Why stop at music? People’s tastes in books, TV and radio may also offer a window on their soul.”
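The statistical exercise behind that claim is, in outline at least, straightforward. Here’s a sketch with random stand-in series, just to show the shape of the test – the real study used sentiment extracted from chart songs and genuine index returns:

```python
import numpy as np
import pandas as pd

# Stand-in monthly series of average song mood and financial index returns
rng = np.random.default_rng(0)
months = pd.date_range("1980-01", periods=480, freq="MS")
mood = pd.Series(rng.normal(size=480), index=months, name="mood")
returns = pd.Series(rng.normal(size=480), index=months, name="returns")

# The basic test of the claimed link: same-month correlation,
# and whether this month's mood anticipates next month's returns
print(mood.corr(returns))
print(mood.corr(returns.shift(-1)))
```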
Haldane’s language is rather Orwellian. Do you want to give a suite of streaming companies, broadcasters and publishers access to your soul? Do you want them to sell your data to third parties? What if your soul is hacked? It’s easy to over-dramatise, but the availability of vast amounts of data gathered by music and other streaming sites does raise questions of privacy, especially when it’s triangulated with other user data such as location, or used to sell us products. These questions could become more urgent if Mer systems learn to guess what might be going on in an individual listener’s mind, to detect the felt emotion, rather than just assigning each track an emotional label. “Unlike a piece of sheet music, vinyl LP or cassette tape, these new musical objects are actively listening to us, too”, write Richard Purcell and Richard Randall about streaming services in their 2016 volume on music listening. Streaming services are gathering data on our listening habits at the same time as, many argue, they’re also changing them via recommendation algorithms.
The window to your soul may reveal more than you think. Research in music psychology suggests musical tastes correlate with personality traits. If you like sad music, you may well be more open to experience and more empathic than someone who prefers their tunes ‘loud n’ scrappy’. But there’s a paradox: sad music is generally pleasurable to listen to. It doesn’t make you sad in the way that happy music can cheer you up, or a scary piano crash in a horror film can freak you out. Theories abound about why sad music should give us this paradoxical pleasure. Do sad songs provide catharsis, a safe space for wallowing in outsourced misery? Do they offer a kind of therapy, an excuse for self-reflection? We just don’t know yet, but the key to understanding why music moves us is going to be more complex than allocating each track one of four basic emotions. Unravelling the tangled web of human felt emotion may be a gargantuan task for machines to master, if we even want them to. Perhaps people simply enjoy the feeling of letting go, of being consumed by the musical soundworld, of being moved to tears. Not sad tears, not happy tears, but tears all the same.