How AI can recreate music from brain activity

Music is a universal language that can evoke emotions, memories and imagination. But how does the brain process music, and what happens when we listen to our favorite tunes? A new study led by researchers at the University of California, Berkeley shows that artificial intelligence (AI) can decode and reconstruct music from brain activity, shedding light on the neural mechanisms of musical perception.

Decoding the brain’s musical code

The researchers used a technique called electrocorticography (ECoG), which records the brain’s electrical activity through electrodes implanted directly on the surface of the cortex. They recruited 29 patients with epilepsy who were already undergoing ECoG monitoring for clinical purposes. The patients listened to Pink Floyd’s “Another Brick in the Wall, Part 1” while their brain activity was recorded.
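As a rough illustration of how such recordings are commonly prepared for decoding, the sketch below turns a raw ECoG array into high-gamma envelope features using MNE-Python. The channel count, sampling rate, frequency band and feature rate are assumptions made for illustration, not details taken from the study.

```python
# Minimal preprocessing sketch, assuming the ECoG data are available as a
# NumPy array of shape (n_channels, n_samples). All parameters are
# illustrative, not the study's.
import numpy as np
import mne

sfreq = 1000.0                          # assumed sampling rate in Hz
ecog = np.random.randn(64, 60 * 1000)   # placeholder for real ECoG data

info = mne.create_info(
    ch_names=[f"ecog{i:02d}" for i in range(ecog.shape[0])],
    sfreq=sfreq,
    ch_types="ecog",
)
raw = mne.io.RawArray(ecog, info)

# High-gamma power (roughly 70-150 Hz) is a common ECoG feature for auditory
# decoding: band-pass filter, take the Hilbert envelope, then downsample.
raw.filter(l_freq=70.0, h_freq=150.0)
raw.apply_hilbert(envelope=True)
raw.resample(100.0)                     # 100 Hz feature rate (assumption)

features = raw.get_data()               # (n_channels, n_times) envelopes
```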

The researchers then trained a neural network, an AI model that learns patterns from data, to map the ECoG signals to acoustic features of the song, such as pitch, timbre, loudness and rhythm. From those predicted features, the model reconstructed audio that resembled the original recording, capturing its melody, harmony and structure.
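To make that mapping concrete, here is a minimal sketch in which ridge regression stands in for the study’s decoding model: it learns to predict a log-mel spectrogram from placeholder neural features and then inverts the prediction back into a waveform using Griffin-Lim phase recovery. The synthetic audio, feature dimensions and hyperparameters are all assumptions chosen for illustration.

```python
# Sketch of spectrogram decoding and audio reconstruction. Ridge regression
# stands in for the study's decoding model; the chirp signal stands in for
# the song, and the random "neural" features for real ECoG data.
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

sr, hop = 22050, 512
audio = librosa.chirp(fmin=110.0, fmax=880.0, sr=sr, duration=10.0)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64, hop_length=hop)
log_mel = librosa.power_to_db(mel).T              # (n_frames, n_mels) target

n_frames = log_mel.shape[0]
ecog_features = np.random.randn(n_frames, 128)    # placeholder neural features

X_train, X_test, y_train, y_test = train_test_split(
    ecog_features, log_mel, test_size=0.2, shuffle=False
)

decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)                     # one linear map per mel bin
pred_log_mel = decoder.predict(X_test)

# Invert the predicted mel spectrogram back to audio (Griffin-Lim phase recovery).
pred_mel = librosa.db_to_power(pred_log_mel.T)
waveform = librosa.feature.inverse.mel_to_audio(pred_mel, sr=sr, hop_length=hop)
```

With real recordings, the placeholder features would be replaced by high-gamma envelopes like those in the previous sketch, aligned frame by frame with the song’s spectrogram.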

The researchers also identified which brain regions process different aspects of music. The superior temporal gyrus (STG), a region central to auditory processing, responded to elements such as note onset, duration, pitch and timbre, and its activity patterns differed depending on whether the passage contained vocals. Other regions, including the motor cortex and the prefrontal cortex, were also active during listening, suggesting that music engages multiple cognitive and emotional processes.
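Findings like these are often obtained with per-electrode encoding analyses: predict each electrode’s activity from the song’s acoustic features and rank electrodes by how well they are predicted. The sketch below illustrates the idea with random placeholder data; the shapes and parameters are assumptions rather than the study’s.

```python
# Per-electrode encoding sketch: fit one model per electrode, score it on
# held-out data, and rank electrodes by prediction accuracy. All data here
# are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_acoustic, n_electrodes = 2000, 32, 64
acoustic = rng.standard_normal((n_frames, n_acoustic))   # e.g. spectrogram bins
neural = rng.standard_normal((n_frames, n_electrodes))   # high-gamma envelopes

split = int(0.8 * n_frames)
scores = []
for e in range(n_electrodes):
    model = Ridge(alpha=1.0).fit(acoustic[:split], neural[:split, e])
    pred = model.predict(acoustic[split:])
    scores.append(np.corrcoef(pred, neural[split:, e])[0, 1])

# Electrodes sorted from best to worst predicted; in the study, well-predicted
# sites clustered over auditory regions such as the STG.
ranking = np.argsort(scores)[::-1]
print(ranking[:10])
```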

Implications and challenges

The study demonstrates that AI can decode and recreate music from brain activity, opening new possibilities for understanding how the brain represents and responds to music. The researchers hope that their method could be used to study musical creativity, emotion and memory, and to develop new forms of communication and expression for people who have difficulty speaking or hearing.

However, there are also some limitations and challenges to overcome. The study used only one song and one genre of music, so it is unclear how well the method would generalize to other types of music and musical preferences. The study also relied on invasive ECoG recordings, which are not widely available and pose ethical and practical issues. The researchers suggest that future studies could use non-invasive methods, such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI), to measure brain activity. Moreover, the reconstructed sounds were still noisy and distorted compared to the original song, indicating that there is room for improvement in the quality and accuracy of the AI model.
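One simple, if coarse, way to put a number on how noisy a reconstruction is would be to correlate the original and reconstructed spectrograms. The snippet below sketches that idea with a synthetic tone and artificial noise standing in for real data; it is not the paper’s evaluation procedure.

```python
# Quantifying reconstruction quality as the correlation between original and
# reconstructed log-mel spectrograms. The signals here are synthetic stand-ins.
import numpy as np
import librosa

sr, hop = 22050, 512

def log_mel(y):
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64, hop_length=hop)
    return librosa.power_to_db(m)

t = np.linspace(0, 3.0, int(sr * 3.0), endpoint=False)
original = np.sin(2 * np.pi * 440.0 * t)                  # 3 s of a 440 Hz tone
reconstruction = original + 0.3 * np.random.randn(len(original))  # noisy stand-in

a, b = log_mel(original).ravel(), log_mel(reconstruction).ravel()
print(f"spectrogram correlation: {np.corrcoef(a, b)[0, 1]:.2f}")  # 1.0 = perfect
```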

The study is a remarkable example of how AI can help us explore the mysteries of the human mind and music. It also raises intriguing questions about what makes music unique and meaningful, and how we can use technology to enhance our musical experience and expression.
