Researchers at the D’Or Institute for Research and Education have developed a new machine learning technique that can identify musical pieces from fMRI scans of the listener.
A new algorithm, developed by researchers at the D’Or Institute for Research and Education (Rio de Janeiro, Brazil) in collaboration with colleagues from Germany, Finland and India, can identify pieces of music from a listener’s fMRI data, according to research published in Scientific Reports. The work opens numerous avenues for further research, for example improving brain–machine communication.
The researchers implemented a method that combined encoding and decoding in a two-stage approach. This initially involved mapping the brain responses elicited by listening to the music, followed by using this information to identify novel musical pieces from the fMRI data alone.
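The two-stage idea can be sketched in miniature. The snippet below assumes a simple linear encoding model fit with ridge regression on synthetic data; the study's actual model, feature set and dimensions are not specified here, so every name and number is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 30 training pieces, 8 musical features, 200 voxels.
n_train, n_feat, n_vox = 30, 8, 200

# Stage 1 (encoding): learn a mapping W_hat from musical features to voxel responses.
X_train = rng.standard_normal((n_train, n_feat))           # feature vector per piece
W_true = rng.standard_normal((n_feat, n_vox))              # hidden "ground-truth" mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_vox))  # fMRI responses

lam = 1.0                                                  # ridge penalty
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

# Stage 2 (decoding): predict the response to a *novel* piece from its features alone,
# then compare the prediction against the actually measured response.
x_new = rng.standard_normal(n_feat)
y_measured = x_new @ W_true + 0.1 * rng.standard_normal(n_vox)
y_predicted = x_new @ W_hat

corr = np.corrcoef(y_predicted, y_measured)[0, 1]
print(round(corr, 2))
```

Because the mapping is learned once from training pieces, it can in principle be applied to any new piece the listener hears, which is the point Hoefle makes below.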
“The upcoming and more recent data analysis approaches that combine encoding and decoding models are very promising,” commented Sebastian Hoefle, co-author of the study.
“The interesting and motivating point here is that, once we know the correct mapping of musical features to the brain we could, in principle, predict and decode any novel musical piece.”
The researchers captured fMRI data from six participants as they listened to 40 different pieces of music. The pieces were chosen from a wide range of genres including jazz, classical, rock, pop and folk, with and without lyrics.
Hoefle explained: “The most complete model we would obtain, in theory, would be if we let people all over the world listen to all kinds of music. With the practical time limitation, we sampled the musical pieces from a relatively broad set of different genres.”
The algorithm encoded the listeners’ fMRI responses for each piece of music, taking into account different musical features such as tonality, dynamics, rhythm and timbre.
When the researchers presented a choice of two novel musical pieces, the algorithm identified the correct one from the fMRI data with up to 85% accuracy. When the team instead provided ten different options, the correct piece was identified 74% of the time.
The study has also revealed key insights for future decoding algorithms, as Hoefle explains: “The longer the brain listens to the music, the better the identification works.”
“Our approach of testing the extent of several brain regions during decoding allows us to distinguish the exact brain regions that contribute positively to model performance and make statements about practical significance.”
Hoefle concluded: “Based on these new results, the future will show if reconstruction of auditory hallucinations could be turned into a practical application and treatment.”
The findings bring hope that in the future, the technology might also improve brain–computer interfaces for those with locked-in syndrome, as well as potentially translating musical thoughts into song and decoding inner speech.