It’s hard to pinpoint the exact moment in history when genre labels were first used to classify music, but over the past century, and certainly still today, genre labels have dominated. Whether we are organising our iTunes libraries, receiving music recommendations from apps like Spotify, or buying CDs at a record store, genre is the first way in which we navigate the music we like.

However, technological advances have now put millions of songs at our fingertips through mobile devices. Not only do we have access to more music than ever before, but more music is being produced. Platforms like SoundCloud have made it possible for anyone to record and publish music for others to hear. With this increased diversity of music we are exposed to, the lines separating genres have become even more blurred than they were before.

Genre labels are problematic for several reasons. First, they are broad umbrella terms used to describe music that varies greatly in its characteristics. If a person says they are a fan of “rock” music, there is no way of knowing whether they are referring to The Beatles, Bob Dylan, or Jimi Hendrix – yet all three differ greatly in style. And if a person tells you that they are a fan of pop music, how do you know if they mean Michael Jackson or Justin Bieber?

Genre labels are also often socially driven, having little to do with the actual characteristics of the music. They are labels stamped onto artists and albums by record companies with the intent of targeting a particular type of audience or age group.

Beyond genre

The fundamental problem is that genre labels often do not accurately describe artists and their music – they simply do not do them justice. A more accurate way to label music would be based solely on its actual musical characteristics (or attributes). Such a labelling system would also likely better account for the diversity in a person’s music taste.

Recently, my team of music psychologists addressed this problem by developing a scientific way to create a basic classification system for music that is based on its attributes rather than on social connotations. The team included the musical-preferences expert Jason Rentfrow (Cambridge), the best-selling author and neuroscientist Daniel Levitin (McGill), the big data scientists David Stillwell (Cambridge) and Michal Kosinski (Stanford), and the music researcher Brian Monteiro. Our research was published this month.

We had more than 100 musical excerpts spanning more than 20 genres and subgenres rated on 38 different musical attributes. We then applied a statistical procedure to categorise these attributes and discovered that they clustered into three basic categories: “Arousal” (the energy level of the music); “Valence” (the spectrum from sad to happy emotions in the music); and “Depth” (the amount of sophistication and emotional depth in the music). The procedure mapped each song onto each of these three basic categories. For example, Joni Mitchell’s “Blue” is low on arousal (because of the slow tempo and soft vocals), low on valence (because of the expressed nostalgia and sadness), and high on depth (because of the emotional complexity expressed through the lyrics and sonic texture).

The songs listed represent each of the three musical attribute clusters. Tricia Seibold | Stanford Business | http://www.gsb.stanford.edu/insights/can-your-personality-explain-your-itunes-playlist
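
For readers curious about the mechanics, here is a minimal sketch of this kind of dimension reduction in Python. The random ratings matrix, the variable names, and the choice of factor analysis are illustrative assumptions, not the exact procedure from our paper:

```python
# Illustrative sketch only: reduce listener ratings (excerpts x attributes)
# to three latent dimensions. The random data stands in for real ratings.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
ratings = rng.normal(size=(102, 38))  # ~100 excerpts rated on 38 attributes

model = FactorAnalysis(n_components=3, random_state=0)
scores = model.fit_transform(StandardScaler().fit_transform(ratings))

# Each row of `scores` places one excerpt on three dimensions, which the
# study labels arousal, valence and depth.
arousal, valence, depth = scores[0]
print(f"excerpt 0: arousal={arousal:.2f}, valence={valence:.2f}, depth={depth:.2f}")
```

On real ratings, it is the loadings of the 38 attributes on each factor that would justify labels like arousal, valence and depth.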

Arousal, valence, depth

Will people start walking around wearing T-shirts that say “I love Depth in music”, or list themselves as fans of positive valence on their Twitter profiles? I doubt it. But it might be useful if people began to use attributes to describe the music that they like (aggressive or soft; happy or nostalgic). People’s music libraries today are incredibly diverse, typically containing music from a variety of genres. My hypothesis is that if people like arousal in one musical genre, they are likely to like it in another.

Even though these three basic dimensions probably won’t become a part of popular culture, recommendation platforms like Spotify, Pandora, Apple Music, and YouTube should find them useful when coding music and trying to recommend it accurately to their users. They should also be useful for psychologists and neuroscientists who study the effects of music and need an accurate way to measure them.
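
As a toy illustration of how a platform might use such coordinates (this is not any platform’s actual algorithm), songs could be recommended by proximity in the three-dimensional attribute space; the catalogue and coordinates below are invented:

```python
# Toy example: recommend songs closest to a liked song in
# (arousal, valence, depth) space. All names and values are invented.
import numpy as np

catalogue = {
    "Song A": np.array([-0.8, -0.6,  0.9]),   # quiet, sad, sophisticated
    "Song B": np.array([ 0.9,  0.7, -0.4]),   # energetic, upbeat, simple
    "Song C": np.array([-0.6, -0.5,  0.8]),
}

def recommend(liked: np.ndarray, k: int = 2) -> list[str]:
    """Return the k catalogue songs closest to the liked song's coordinates."""
    dists = {name: float(np.linalg.norm(coords - liked))
             for name, coords in catalogue.items()}
    return sorted(dists, key=dists.get)[:k]

print(recommend(catalogue["Song A"]))  # Song A itself, then the similar Song C
```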

Our team next sought to see how preferences for these three dimensions were linked to the Big Five personality traits (openness, conscientiousness, extraversion, agreeableness and neuroticism). Nearly 10,000 people indicated their preferences for 50 musical excerpts and completed a personality measure. People who scored high on “openness to experience” preferred depth in music, while extroverted excitement-seekers preferred high arousal in music. Those who were relatively neurotic preferred negative emotions in music, while those who were self-assured preferred positive emotions in music.
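
A minimal sketch of the kind of analysis involved, assuming per-person trait and preference scores; the data below are simulated to echo the reported link between openness and depth, not our actual results:

```python
# Simulated illustration: correlate a Big Five trait with a preference
# for one musical dimension. The data and effect size are made up.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 10_000                                   # roughly the reported sample size
openness = rng.normal(size=n)
# Preference scores built to echo the reported openness-depth link.
depth_preference = 0.3 * openness + rng.normal(size=n)

r, p = pearsonr(openness, depth_preference)
print(f"openness vs. depth preference: r={r:.2f}, p={p:.3g}")
```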

So, just as the old Kern and Hammerstein song suggests, “The Song Is You”. That is, the musical attributes that you like most reflect your personality. The findings also provide scientific support for what Joni Mitchell said in a 2013 interview with CBC:

The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.

Find out how you score on the music and personality quizzes at www.musicaluniverse.org

David M. Greenberg, Music psychologist, University of Cambridge.

This article first appeared on The Conversation.