Mental Well-Being, Multiculturalism and Diversity In Focus At This International Music Research Conference

Eleven research studies from IIITH's Cognitive Sciences group that span a gamut of areas – from a cross-cultural study on music usage during the pandemic to an ML model that automatically creates appropriate mood tracks for specific passages in a book – will make their presence felt at the 16th International Conference on Music Perception and Cognition this month.

On its official webpage, the European Society for the Cognitive Sciences of Music (ESCOM) describes itself as an international non-profit society that promotes theoretical, experimental and applied research in the cognitive sciences of music, as well as collaboration among the arts, humanities and sciences. While it organises a major international conference every three years, every six years this conference is merged with the International Conference on Music Perception and Cognition (ICMPC). This year, the joint conference is due to be held online between 28-30 July 2021.

Another interesting aspect of the conference is its adoption of a multi-hub format across many countries. The concept was pioneered back in 2018 to increase participation from researchers whose mobility may be limited for various reasons, including those from low-income countries, and it remains relevant this year too, albeit in a fully virtual mode. Besides inclusivity, the conference's focus is also on climate change: a multi-hub model seeks to reduce the carbon footprint by limiting travel-related emissions. For the first time, the conference has a hub in India, whose main organizers are Dr. Vinoo Alluri, Head of the Music Cognition Lab at IIITH, and Dr. Shantala Hegde, Associate Professor of Neuropsychology, NIMHANS, Bangalore.

In keeping with the conference theme of ‘Connectivity and Diversity in Music Cognition’, the International Institute of Information Technology Hyderabad will present a wide array of studies, either as extended abstract presentations or in poster format. Here’s a brief look at each of them.

Covid and Cultural Differences In Music Consumption 
In times of crisis, people tend to turn towards music. The coronavirus and the everyday challenges associated with it have brought on a host of mental health issues such as stress and anxiety, which can be mitigated by listening to music via the release of endorphins in the brain. To investigate how people are learning to cope with the “new normal” and regaining a sense of control through music listening, Faizaan Farooq Khan and Dr. Alluri, in collaboration with Emily Carlson and Suvi Saarikallio of the University of Jyvaskyla, conducted a study titled Cross Cultural Study on Usage of Music for Mood Regulation During a Pandemic. The focus of the study was on differences in music listening habits across Indian and Finnish cultures. They found that while both cultures consumed music during the early days of the pandemic, the difference lay in how they used it. For Indians, music assisted them in three ways – to comfort, to divert, and to deal with their feelings and emotions. For the Finns, however, music served more as a distraction, for entertainment rather than as a coping mechanism.

Faizaan Farooq Khan

A Playlist For A Book
In what has great implications for immersive reading experiences, students Pranshi Yadav, Divy Kala, Nisarg Mankodi, Shivani Hanji and Dr. Alluri have used a deep learning model to automatically generate background music based on the emotional content of books. While other tools and models that automatically generate music exist, what makes this initiative different is its adoption of tools from different fields. “We’re using NLP techniques and deep learning-based music generation techniques as well. Essentially what the tool does is process language in order to extract emotions and use this emotion as an input to a music generator that produces music representative of the emotion,” explains Dr. Alluri. The team is currently working on building a retrieval system that fetches the most appropriate background music from an open-source music platform like Freesound. “Music generation is not an easy task, let alone for a specific emotion related to content. But the retrieval system seems to work very well from a perceptual point of view. I really think this has a lot of potential and I’m very excited,” says Dr. Alluri. The study titled Sing Me A Story: Background Music Generation for Books has been shortlisted for the Best Paper award.
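The retrieval idea can be sketched in a few lines: score a passage against an emotion vocabulary, then fetch a track matching the dominant emotion. The lexicon, track names and function names below are purely illustrative – the actual system uses NLP models and queries a platform like Freesound rather than a hard-coded catalog.

```python
from collections import Counter

# Illustrative emotion lexicon; the real pipeline uses NLP models instead.
EMOTION_LEXICON = {
    "joy": {"happy", "laughed", "delight", "bright"},
    "sadness": {"wept", "grief", "alone", "grey"},
    "fear": {"dark", "trembled", "shadow", "dread"},
}

# Stand-in for querying an open platform such as Freesound by emotion tag.
TRACK_CATALOG = {
    "joy": "uplifting_strings.ogg",
    "sadness": "slow_piano.ogg",
    "fear": "low_drones.ogg",
}

def dominant_emotion(passage: str) -> str:
    """Return the emotion whose lexicon words occur most often in the passage."""
    words = [w.strip(".,!?").lower() for w in passage.split()]
    counts = Counter()
    for emotion, vocab in EMOTION_LEXICON.items():
        counts[emotion] = sum(w in vocab for w in words)
    return counts.most_common(1)[0][0]

def background_track(passage: str) -> str:
    """Fetch the catalog track matching the passage's dominant emotion."""
    return TRACK_CATALOG[dominant_emotion(passage)]

print(background_track("She trembled as the shadow crossed the dark hall."))
# → low_drones.ogg
```

A real retrieval system would replace the lexicon with a trained emotion classifier and the catalog with ranked search results, but the control flow – text in, emotion label, matched audio out – is the same.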

Pranshi Yadav

Shivani Hanji
Divy Kala

Open To Joy As Well As Sadness
It is the thread of music consumption that stitches all the music cognition studies at IIITH together. A significant amount of the work lies in determining individual differences in online music consumption – an emerging research avenue in the music information retrieval (MIR) domain. For instance, in the study titled Personality Correlates of Preferred Emotions Through Lyrics by Yudhik Agrawal and Dr. Alluri, a natural language-based deep learning model was used to identify emotions in the lyrics of songs extracted from the listening histories of Last.fm users, and these were then associated with individual differences. Emotions such as Joy, Anger, Sadness and Tenderness were identified and correlated with personality traits such as Openness, Extraversion, Agreeableness and Neuroticism. Demonstrating that people high on traits such as Openness and Agreeableness are indeed open to all kinds of experiences, their listening preferences veered towards song lyrics depicting not only joy and tenderness but also sadness.
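The correlation step in such studies is conceptually simple: for each listener, compute the proportion of each lyric emotion in their history, then correlate those proportions with trait scores. A minimal sketch, using made-up numbers (the listener data below is hypothetical, not the study's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: each listener's Openness score (1-5) and the
# fraction of sadness-labelled lyrics in their listening history.
openness = [2.1, 3.4, 4.0, 4.6, 3.0]
sad_fraction = [0.10, 0.22, 0.35, 0.41, 0.18]

# A strong positive value here would mirror the finding that high-Openness
# listeners are drawn to sad lyrics as well as joyful ones.
print(round(pearson(openness, sad_fraction), 2))
```

In practice the study would also control for confounds and test significance, but the trait-emotion association rests on exactly this kind of correlation.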

Yudhik Agrawal

New Age Deep Learning Model 
In an extension of the work described above, researchers Rajat Agarwal and Ravinder Singh under Dr. Alluri, along with Petri Toiviainen of the University of Jyvaskyla, used a state-of-the-art deep learning-based classification paradigm called the Transformers-Capsule network to fine-tune the identification of musical emotion through lyrics. Their study titled Music Emotion Recognition From Lyrics Using Transformers-Capsule Network demonstrates that emotion-based information obtained from lyrics is more efficiently processed and evaluated using this model than with plain transformer models. According to Rajat, the superiority of the capsule network lies in the manner in which it mimics the way humans process textual information, that is, by looking at chunks and making sense of the whole.

Rajat Agarwal

What’s Your Poison? Music Or Lyrics?
In another study, to discover what kind of people naturally veer towards music with lyrics versus those who prefer instrumental music, Sidhant Subramanian and Anant Mittal under Dr. Vinoo Alluri, along with Jonna Vuoskoski of the University of Oslo, formulated a tool to quantify a person’s affinity towards one kind of music or the other. While studies in the past have highlighted that persons with personality traits such as Openness and Conscientiousness show a marked preference for musical genres like jazz (which has fewer lyrics), these researchers extended their study to analyse individual scores obtained in the context of empathic traits. Their study titled Music or Lyrics? Individual Differences Associated with Listening Strategies revealed that those who prefer instrumental music scored higher on traits such as Fantasy, Openness and Conscientiousness. Explaining why empathy is an important element to consider, Dr. Alluri says, “There are several studies that deal with personality. But empathic traits are very important too in the sense that there is a genetic basis to them, in addition to their playing an important role in the enjoyment of emotions conveyed through music. We are different in our predispositions, but at the same time the way we are brought up makes a huge difference too. And it’s these differences that explain why we listen to what we do.”

Anant Mittal

Sidhant Subramanian

Acoustics for Music Recommendation 
Online music recommendation systems typically utilize tags and metadata associated with soundtracks, in addition to acoustic content-based features, before making recommendations for listeners. In this modern age of #tagging everything, tags are growing in relevance and importance. However, according to Dr. Alluri, tags may not be representative enough of a musical track, because the features extracted from the acoustic content might tell a different story, especially if you want a personalized experience. She explains that while a peppy song can be described as beautiful and lovely with associated tags of positivity, a sad song can be described the same way with similar positive tags, though acoustically the two are radically different. “The acoustic content may not reflect the emotions represented by the tag,” she says. Therefore, in the study titled Tag-based and Acoustic feature-based emotions associated with online music consumption and personality, Yash Goyal and Shivani Hanji under Dr. Alluri, with Emily Carlson of the University of Jyvaskyla, set out to investigate whether there is any congruence between tag-based emotions and acoustic-based emotions in the light of personality traits. They found that for individuals high on traits like Extraversion, there is consistency between the way they describe the music, that is, via tags, and what the music really is, that is, the acoustic features. Similarly for those high on Neuroticism, the music they listen to (typically music that evokes sad emotions) is consistent with both the tags that represent it and its inherent acoustic qualities. “These findings are relevant for recommendation systems because, at least for the two traits mentioned above, you can ensure that the acoustic content and tags are aligned with each other, and hence one can rely on just acoustic features while recommending music,” says Dr. Alluri.
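The congruence question can be illustrated with a toy check: for each track in a listener's history, compare the emotion label implied by its tags against the one inferred from its acoustic features, and measure how often they agree. The labels and tracks below are hypothetical; the study's actual method is more elaborate than a simple agreement rate.

```python
# Hypothetical listening history: each track carries an emotion label
# derived from its tags and one derived from its acoustic features.
listening_history = [
    {"tag_emotion": "happy",  "acoustic_emotion": "happy"},
    {"tag_emotion": "happy",  "acoustic_emotion": "sad"},    # a "beautiful" sad song
    {"tag_emotion": "sad",    "acoustic_emotion": "sad"},
    {"tag_emotion": "tender", "acoustic_emotion": "tender"},
]

def congruence(history):
    """Fraction of tracks where tag-based and acoustic-based emotions agree."""
    agree = sum(t["tag_emotion"] == t["acoustic_emotion"] for t in history)
    return agree / len(history)

print(congruence(listening_history))  # → 0.75
```

A recommender could compute this per listener group: where congruence is high (as the study found for Extraversion and Neuroticism), acoustic features alone are a reliable basis for recommendations.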

Yash Goyal

Seeking Melody Online
In the study titled Preference for Instrumental Music On Online Music Streaming Platforms Associated with Individual Differences, Ramaguru Guru Ravi Shankar and Yudhik Agrawal under Dr. Alluri trained their lens on instrumental music and set out to investigate its association with individual personality traits. They found traits such as Openness and low Extraversion were related to listeners’ tendency to veer towards instrumental music or music with few to no lyrics, such as jazz. While this wasn’t surprising, the trait Neuroticism was also associated with a proclivity towards lyrical music. “This resonates with the work we have been doing on the tastes of individuals at risk for depression, as this trait also correlates with anxiety and psychological distress. Such individuals like to listen to sad music to regulate their mood states, and sad songs mainly rely on linguistic cues to convey the nuances of the emotion. Further studies that investigate genre-based categorizations of instrumental music are required to reveal musical associations specific to different individuals, which can then help in developing more personalized recommendations,” says Dr. Alluri.

Ramaguru Guru Ravi Shankar

Playing Artists On Repeat
In the study titled Artist2Risk: Predicting Depression Risk based on Artist Preferences, researcher Yash Goyal and Dr. Alluri set out to investigate whether artist preferences in listening histories can be markers of depression risk. Interestingly enough, they found that those categorised as At-Risk of depression did not possess a diverse list of artists in their playlists but instead displayed a tendency to listen to the same set of artists repeatedly. According to the team, the next step in this direction would be to analyze how these artist preferences evolve over time. This would be particularly significant since depression, if untreated, can become a chronic condition, and the earlier the identification and diagnosis, the better.
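One simple way to quantify the "same artists on repeat" pattern is the Shannon entropy of a listener's artist play distribution: a history dominated by a few artists has low entropy, while a varied one has high entropy. This is a generic diversity measure, not necessarily the feature set used in Artist2Risk, and the histories below are made up.

```python
import math
from collections import Counter

def artist_entropy(plays):
    """Shannon entropy (in bits) of an artist play distribution.
    Lower values indicate more repetitive, less diverse listening."""
    counts = Counter(plays)
    total = len(plays)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical histories: one artist per play event.
repetitive = ["Artist A"] * 8 + ["Artist B"] * 2          # mostly one artist
diverse = ["A", "B", "C", "D", "E"] * 2                    # evenly spread

print(artist_entropy(repetitive) < artist_entropy(diverse))  # → True
```

Under the study's finding, At-Risk listeners would tend to sit at the low-entropy end of this scale; tracking how such a score drifts over time is exactly the kind of longitudinal follow-up the team describes.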

Musical Choices And Gender
If the kind of music you’re listening to via online music streaming reveals your risk of depression, Shivani Hanji and Yash Goyal went a step further to see if gender-specific music preferences are related to depression risk. In their study titled Exploring gender-specific music preferences associated with risk for depression on online music streaming platforms, they found that those At-Risk for depression listened to more Neo-psychedelic-dream pop and 80s soul/funk, irrespective of gender. Setting risk aside and examining only gender differences, they found a pronounced genre preference for 80s soul/funk and Swing/Big-band jazz in males, while females showed a preference for Indie-Alternative-Pop/Rock. They also found a gender-risk interaction such that males who were At-Risk for depression showed a lower preference for energetic genres such as Techno, House, and Chillout Trance. One common characteristic of individuals at risk for depression is their heightened states of arousal, typically negative in nature, which explains their dislike for music with high energetic arousal.

Applying HUMS to the clinically depressed
In 2015, music researchers from Finland developed a 13-item questionnaire known as the Healthy-Unhealthy Music Scale (HUMS) to assess how youth engage in music listening, which could also serve as an early indicator of depression. To investigate whether the same scale, that is, the musical engagement strategies it measures, could be applied to the clinically depressed, researchers from NIMHANS, Bangalore ran a study in conjunction with Anant Mittal and Dr. Alluri from IIITH. It was found that both people ‘At Risk’ of depression and the clinically depressed group had a tendency to employ music in a maladaptive manner – not to feel better and improve their mood but to wallow in anguish.
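Scoring a scale like HUMS amounts to summing item ratings into two subscales, one for healthy and one for unhealthy (maladaptive) engagement. The sketch below shows the mechanics only: the item-to-subscale split and the ratings are illustrative placeholders, not the published HUMS scoring key.

```python
# Illustrative HUMS-style scoring: 13 items rated 1-5 are summed into a
# "healthy" and an "unhealthy" engagement subscale. The item split below
# is hypothetical, not the actual published key.
HEALTHY_ITEMS = [0, 1, 2, 3, 4]
UNHEALTHY_ITEMS = [5, 6, 7, 8, 9, 10, 11, 12]

def hums_scores(ratings):
    """Return (healthy, unhealthy) subscale sums for 13 item ratings."""
    assert len(ratings) == 13, "HUMS has 13 items"
    healthy = sum(ratings[i] for i in HEALTHY_ITEMS)
    unhealthy = sum(ratings[i] for i in UNHEALTHY_ITEMS)
    return healthy, unhealthy

# A respondent who uses music maladaptively - to wallow rather than to
# feel better - would rate the unhealthy items high (hypothetical ratings).
h, u = hums_scores([2, 2, 3, 2, 2, 5, 4, 5, 4, 5, 4, 5, 4])
print(h, u)  # → 11 36
```

The NIMHANS finding corresponds to both the At-Risk and clinically depressed groups scoring high on the unhealthy subscale relative to the healthy one.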

Complex Music And IQ
A case study with relevance in the neuroscience domain, titled Appreciation Of Complex Music In A Cognitively Impaired Subject, is also set to be presented at the conference. “We wanted to understand how under-developed, atrophying brains can retain the enjoyment of music when the other cognitive faculties are completely compromised,” says Dr. Alluri. Mohammed Yaseen Harris, the lead researcher of the study, found evidence that the individual, who was severely compromised in terms of cognitive abilities and showed little to no response to sensory stimuli, would exhibit sensori-motor responses to “complex” instrumental music such as Beethoven’s Für Elise, Mozart’s piano concertos and even Pandit Shivkumar Sharma’s santoor compositions. This seems to suggest that there are two levels to enjoying music – a low-level physiological response of the kind displayed by the individual, as well as a more cerebral engagement typically associated with a higher IQ.

Mohammed Yaseen Harris

What makes this entire set of submissions unique is that they cover a wide range of areas of musicology. “Because they are all extended abstracts, they work as excellent starting points to develop into full-fledged journal articles after getting feedback from the research community at ICMPC. Plus, this exposure gives students the required confidence and the motivation to pursue further research,” says Dr. Alluri.

Prof Vinoo Alluri

Sarita Chebbi is a minimalist runner, practising yogi and baker of all things whole-wheat, and sugar-free. Currently re-learning her ABC’s…the one that goes: A for algorithm, B for Bayesian, C for convolutional (neural network)….
