If you are reading this then the recent research by Brian Pasley and colleagues, in which speech sounds are reconstructed from measured brain activity, has probably already come onto your neuro-radar. It has certainly drawn a lot of media coverage, with some great commentaries, including this one from Mo Costandi in the Guardian. If you have missed it, Helen Thomson at New Scientist does a good write-up.
Amongst the questions that interest me about this is one posed by Guardian science correspondent Ian Sample in a tweet a couple of days ago (@IanSample). Namely, does this qualify as mind reading and, if not, what would? The question is perhaps an important one for both science and society to begin to consider, because there seems to be no shortage of neuroscience research studies covered by the media under some kind of ‘mindreading science’ banner. Remember the work on communicating with individuals in a coma-like state? There was also the study that drove neuroscience bumper to bumper with philosophy by predicting decision-making behaviour from functional magnetic resonance imaging (fMRI) signals before participants were consciously aware of having made a decision. Not to mention techniques that allow a robot arm to be controlled directly from measured brain activity, developments in the field of neuromarketing, and studies that report recreation of viewed video clips from brain scan data. There’s a lot of this stuff around. Is any of it really mindreading?
My initial response (@Science2Inspire) to this question was that the term ‘mindreading’ implied ‘thought-reading’, in the sense that we are asking “could they tell what we are thinking?”. On this basis, pretty much all of the ‘mindreading’ work to date doesn’t really qualify, as it decodes patterns that relate either to direct sensory input (speech, images) or to imminent movement plans (the decision to press a left or right button). We have known for many years, from the study of sensory perception and early sensory processing, that the particular patterns of environmental energy (patterns of light, patterns of sound, patterns of touch) received by our senses are preserved as they are converted (transduced) into neuronal impulses by our various sensory receptor cells in the retina, cochlea and so on. These patterns are preserved through several sensory relay stations of the brain.
For instance, a flash of light appearing in the upper left part of your visual field will always cause a corresponding change in brain activity in the lower right part of the visual cortex. If that flash of light moved a bit to the left and down, the brain response would move a corresponding bit to the right and up. This general feature of organisation in sensory systems is known as topography. So, whilst measuring and decoding activity patterns from sensory brain structures to recreate the original sensory experience is an extremely impressive technical and computational achievement, it isn’t mindreading as I would define it.
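For the curious, that inverted mapping can be caricatured in a few lines of Python. The coordinates and the simple sign-flip are purely illustrative of the principle; real retinotopy is far more complex than this.

```python
# Toy illustration of topography: the visual cortex preserves the spatial
# layout of the visual field, but inverted. Coordinates are arbitrary
# units, not anatomical measurements.

def cortical_position(x, y):
    """Map a visual-field location (x = right, y = up) to its inverted
    cortical counterpart."""
    return (-x, -y)

# A flash in the upper-left visual field (-1, +1) drives activity in the
# lower-right cortex (+1, -1).
print(cortical_position(-1, 1))     # → (1, -1)

# Shift the flash further left and down; the response shifts right and up.
print(cortical_position(-2, 0.5))   # → (2, -0.5)
```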
The work on communicating with a person in a coma-like state is a bit different, as what is being decoded appears to be a volitional thought. In reality, though, what is happening is that the researchers are able to differentiate between two patterns of brain activity, attributed to either a ‘yes’ or a ‘no’ response. There is no suggestion that these researchers could read out the wider thoughts of these individuals in any meaningful way. Again, I do not wish to question the huge accomplishments of such science, but there is a children’s game on the market (MindFlex) that measures ‘brain waves’ using a headband, determines the magnitude of oscillations in these brain waves at a particular frequency (one known to be attributable to concentration, or mental effort) and uses this measurement to guide a ball around a maze. Whilst this ‘toy’ obviously doesn’t have the same application or sophistication as the coma-communication technology, what is being achieved is pretty much the same: differentiate within a small set of brain activity patterns and link each detected pattern to an outcome.
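To make the “measure a pattern, link it to an outcome” idea concrete, here is a minimal Python sketch of a MindFlex-style pipeline: estimate the power of an oscillation in a frequency band, then threshold it to drive an action. All of the numbers, the 8–12 Hz band and the threshold are invented for illustration; this is not the toy’s actual algorithm, just the shape of the computation.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within the [f_lo, f_hi] Hz band, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum() / len(signal)

fs = 256                  # sampling rate in Hz (hypothetical headset)
t = np.arange(fs) / fs    # one second of samples

# Synthetic "concentrating" signal: a strong 10 Hz oscillation plus noise.
# The "relaxed" signal is noise alone.
rng = np.random.default_rng(0)
concentrating = 5.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, fs)
relaxed = rng.normal(0, 1, fs)

threshold = 100.0  # in a real system, tuned on 'training' recordings
for name, sig in [("concentrating", concentrating), ("relaxed", relaxed)]:
    power = band_power(sig, fs, 8, 12)
    action = "lift ball" if power > threshold else "drop ball"
    print(name, round(power, 2), action)
```

The point of the sketch is how little “reading” is involved: one number is extracted from the signal and compared to a threshold, which is the same differentiate-and-link logic, writ small, as the two-pattern yes/no decoding.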
So, if we loosen our definition of mindreading to include ‘decoding of brain activity in any sense’, then I guess most of the work alluded to above that has attracted mindreading-type headlines does qualify. But surely we want something a little more stringent than this: a Turing-type test that fits a little better with our general definitions of mindreading? The Oxford English Dictionary defines mind-reading as “The act or process of discerning (or appearing to discern) what another person is thinking” (oed.com). This maps pretty well onto my lay understanding of the term. Again, the emphasis here is on thought-reading, and so a more robust test for mindreading technology that encapsulated this might go something along the lines of:
"The ability to decode a novel, unexpressed thought from brain activity and represent it in a form that is subsequently recognisable by the thinker as their own."
Or something similar. I’d like to hear other people’s ideas for a testable definition relevant to neuroscience research. I included the word ‘novel’ because, whilst any mindreading system would probably always need to be tuned up on extensive ‘training’ data before it could produce accurate results, a strong test that requires decoding of a novel thought would ensure that we get away from the concern about a system simply differentiating between a finite set of responses based on a familiar input pattern.
One final thought about the term mindreading concerns its somewhat Orwellian and traditional sci-fi connotations. The reporting of neuroscience stories featuring mindreading-type research, even in the science media, is often characterised by statements such as “this technology may one day allow scientists to read minds” and terms such as “telepathy machine”. There are also often brief paragraphs that, whilst explicitly stating that the reported research doesn’t YET allow scientists to read the minds of individuals without their permission, nevertheless raise the spectre of this in juxtaposition with the science.
The effect of all this would seem to be to perpetuate the links between this sort of science and dystopian, Big Brother-type societies in which the technology might be used to intrusively steal one’s thoughts. The danger is that such negative (and unfounded) associations might dent public, and ultimately political, support for work in fields where human brain activity is recorded and attempts are made to decode the mental representations therein. This would be a great shame, since the potential medical benefits of such science are enormous: neuroprosthetic devices, brain-computer interface technologies and methods for communicating with individuals in a ‘locked-in’ state, to name a few of the more prominent. All of these possibilities should be there for the taking, whilst our private thoughts will remain our own for some time yet.