From Medical Xpress:
A new technique for monitoring brain waves can identify the music someone is hearing.
Researchers at the University of Essex hope the project could help people with severe communication disabilities, such as those with locked-in syndrome or stroke survivors, by non-invasively decoding the language signals in their brains.
Dr. Ian Daly from Essex’s School of Computer Science and Electronic Engineering, who led the research, said, “This method has many potential applications. We have shown we can decode music, which suggests that we may one day be able to decode language from the brain.”
Essex scientists wanted to find a less invasive way of decoding acoustic information from signals in the brain to identify and reconstruct a piece of music someone was listening to.
While previous studies have successfully monitored and reconstructed acoustic information from brain waves, many used more invasive methods such as electrocorticography (ECoG), which involves placing electrodes inside the skull to record directly from the surface of the brain.
The research, published in the journal Scientific Reports, used a combination of two non-invasive methods—fMRI, which measures blood flow through the entire brain, and electroencephalography (EEG), which measures brain activity in real time—to monitor a person's brain activity while they listened to a piece of music. A deep learning neural network model then translated the recorded data to reconstruct and identify the piece of music.
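To make the idea concrete, here is a toy sketch of the decoding step: learning a mapping from brain-recording features to audio spectrogram frames. Everything here is hypothetical—the dimensions, the simulated data, and the use of ridge regression as a simple stand-in for the paper's deep neural network—but it illustrates the general "translate brain signals into acoustic features" pipeline the article describes.

```python
# Illustrative sketch only: a toy decoder that maps simulated EEG features
# to audio spectrogram frames. All shapes, names, and data are hypothetical,
# and ridge regression stands in for the study's deep learning model.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_eeg, n_spec = 500, 64, 128  # hypothetical dimensions

# Simulated training data: EEG features linearly related to spectrogram
# frames, plus noise (real EEG-to-audio mappings are far messier).
true_w = rng.normal(size=(n_eeg, n_spec))
X = rng.normal(size=(n_samples, n_eeg))                      # EEG features per time window
Y = X @ true_w + 0.1 * rng.normal(size=(n_samples, n_spec))  # spectrogram frames

# Fit a ridge-regularized linear decoder (closed form).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_eeg), X.T @ Y)

# Reconstruct the spectrogram frames and score the reconstruction.
Y_hat = X @ W
corr = np.corrcoef(Y_hat.ravel(), Y.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.3f}")
```

In the actual study a nonlinear deep network replaces the linear map, and the reconstructed spectrogram is further used to identify which piece of music was playing; this sketch only shows the reconstruction half of that pipeline.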
Music is a complex acoustic signal, sharing many similarities with natural language, so the model could potentially be adapted to translate speech. The eventual goal of this strand of research would be to translate thought, which could offer an important aid in the future for people who struggle to communicate, such as those with locked-in syndrome.
Dr. Daly added, “One application is brain-computer interfacing (BCI), which provides a communication channel directly between the brain and a computer. Obviously, this is a long way off but eventually we hope that if we can successfully decode language, we can use this to build communication aids, which is another important step towards the ultimate aim of BCI research and could one day provide a lifeline for people with severe communication disabilities.”
Link to the rest at Medical Xpress
PG says human brain/computer interfaces will continue to develop in many different ways, most good.
4 thoughts on “Decoding brain waves to identify the music we are hearing”
Perhaps a bit dewy-eyed, there, PG.
Once we humans acquire language, we think in language. Including “wrong think.” A simple way to determine what a person is thinking would be an enormous boon to dictators of all kinds – from the petty to the major.
Good point, W.
I worry a bit about altering brain waves to conform with what is good and right, either temporarily or permanently.
“There will be no love, except the love of Big Brother.”
Reminds me of the opening of SERENITY:
Conformity *must* be enforced.
After all, the other guys might be right.
I wish this had gone into more detail about the music. What were they listening to? Does this work with Mozart and Gramatik? Does the kind of instrument matter? What if someone is playing a theremin?
Joking aside, I’m with Writing Observer: I do not believe the technology will be used solely for good. Currently the accusation of “you’re messing with my mind” is the domain of crazy people. If the tech becomes robust enough to actually mess with people’s minds, we’ll see the mother of all street fights. Sci-fi has warned us, has it not?
“We just wanted to cure this illness” is how the story starts 🙂