An article appearing today on Reuters describes how researchers in Carnegie Mellon University’s Machine Learning Department have been experimenting with the brain activity involved in thinking of words. The process involves capturing functional MRI (fMRI) images of a person’s brain while they are shown, and asked to think about, specific words. The computer then matches the activity pattern captured for each word to the word itself.
In this research (according to the article, and published in the journal Science), a group of volunteers was tested on a set of 58 different words. The computer can recognize, for example, that a person is thinking “celery,” or “airplane,” or any of the other test words. Interestingly, the brain activity pattern for a given word is largely the same from person to person.
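The matching step described above can be sketched as a simple nearest-neighbor classification: store an average activation pattern per word, then assign a new brain scan to the word whose stored pattern it most resembles. This is only a toy illustration under invented assumptions; the word prototypes and three-element vectors below are hypothetical, and real fMRI patterns span thousands of voxels.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical per-word "prototype" activation patterns (illustrative only).
prototypes = {
    "celery":   [0.9, 0.1, 0.3],
    "airplane": [0.2, 0.8, 0.7],
}

def decode(pattern):
    """Return the word whose prototype best matches the new pattern."""
    return max(prototypes, key=lambda w: cosine(pattern, prototypes[w]))

print(decode([0.85, 0.15, 0.35]))  # prints "celery"
```

A nearest-neighbor match like this also hints at why cross-person decoding could work at all: if patterns for the same word are similar across people, prototypes learned from one group can classify scans from another.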
This of course gives lots of great food for thought about man-machine interfaces. Selfishly, I’d really like to be able to “think-type” into my word processor (if only I could control my stream of consciousness, which more often resembles a babbling brook).
While the researchers are interested in a framework for understanding language processing in the brain, I’m curious to what extent the same research could be applied to thoughts of images, music, or sculpture (or mechanical design, if you prefer). Taken in reverse, I wonder whether this could also serve as an output mechanism from the machine: that is, projection from the computer into the brain.
It would also be interesting to test whether the high brain-pattern-to-word correlation found across the test volunteers holds just as strongly across cultures. That is, would the brain image for “table” look the same as that for “mesa,” “Tisch,” or “tavola” in native speakers of those languages? If so, what an interesting opportunity for language translation that would be, not to mention multi-lingual control interfaces!