Google isn’t new to using AI to create music, having launched MusicLM in January to generate music from text prompts. Now Google has gone a step further and is using AI to read your brain – and generate sounds based on your brain activity.
In a new research paper, Brain2Music, Google uses AI to reconstruct music from brain activity, as captured through functional magnetic resonance imaging (fMRI) data.
Also: How I used ChatGPT to write a custom JavaScript bookmarklet
The researchers studied fMRI data collected from five test subjects who listened to identical 15-second music clips in a variety of genres, including blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae and rock.
They then used that data to train a deep neural network to learn about brain activity patterns and the relationships between different elements of music, such as rhythm and emotion.
Once trained, the model could reconstruct music from fMRI data using MusicLM. Because MusicLM normally generates music from text, here it was conditioned to produce music resembling the original musical stimuli at the semantic level.
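The decoding pipeline described above can be sketched at a toy scale: learn a mapping from fMRI responses to semantic music embeddings, then hand a decoded embedding to a generator. This is an illustrative simplification, not the paper's implementation — the dimensions, the ridge-regression decoder, and the nearest-neighbor "generator" standing in for MusicLM are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions. The real study uses far larger fMRI voxel counts and
# MusicLM's semantic music embeddings; these sizes are illustrative only.
n_clips, n_voxels, embed_dim = 40, 200, 16

# Simulated fMRI responses and the embeddings of the music clips heard.
fmri = rng.normal(size=(n_clips, n_voxels))
true_map = rng.normal(size=(n_voxels, embed_dim)) / np.sqrt(n_voxels)
embeddings = fmri @ true_map + 0.1 * rng.normal(size=(n_clips, embed_dim))

# Step 1: fit a ridge regression that decodes a music embedding
# from a pattern of brain activity.
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels),
                    fmri.T @ embeddings)

# Step 2: decode the embedding for one scan.
decoded = fmri[-1] @ W

# Step 3: hand the decoded embedding to a generator. Here, a stand-in
# that retrieves the most similar known clip by cosine similarity;
# in the paper, MusicLM synthesizes new audio conditioned on it.
def generate_music(embedding, library):
    sims = library @ embedding / (
        np.linalg.norm(library, axis=1) * np.linalg.norm(embedding)
    )
    return int(np.argmax(sims))

clip_id = generate_music(decoded, embeddings)
```

On this toy data the decoded embedding lands closest to the clip the simulated listener actually heard, which is the essence of the semantic-level reconstruction the paper reports.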
When put to the test, the generated music resembled the musical stimulus the participant had originally heard in characteristics such as genre, instrumentation, and mood.
On the research page, you can listen to several clips of the original music stimuli and compare them with the reconstructions generated by MusicLM. The results are remarkable.
Also: Now you can chat with famous AI characters on Viber. Here's how
For one clip, the stimulus was a 15-second excerpt of Britney Spears’ iconic “Oops!…I Did It Again”. All three reconstructions had the same crisp, upbeat character as the original.
Of course, the audio didn’t match the original exactly, because the model focuses on musical features rather than the lyrical content.
Essentially, the model can read your mind (technically your brain patterns) to generate the music you were listening to.