    So What Would Count As Mindreading?
    By Chris Martin | February 5th 2012 01:49 PM | 6 comments

    If you are reading this, then the recent research by Brian Pasley and colleagues in which speech sounds are reconstructed from measured brain activity has probably already come onto your neuro-radar. It's certainly drawn a lot of media coverage, with some great commentaries, including this from Mo Costandi in the Guardian. If you have missed it, Helen Thomson at New Scientist does a good write-up.

    Amongst the questions that interest me about this is one posed by Guardian Science correspondent Ian Sample in a tweet a couple of days ago (@IanSample): namely, does this qualify as mindreading and, if not, what would? The question is perhaps an important one for both science and society to begin to consider, because there seems to be no shortage of neuroscience research studies carried by the media that are suspended beneath some kind of 'mindreading science' banner. Remember the work on communicating with individuals in a coma-like state? There was also the study that drove neuroscience bumper to bumper with philosophy by predicting decision-making behaviour from functional magnetic resonance imaging (fMRI) signals before participants were consciously aware of having made a decision. Not to mention techniques that allow a robot arm to be controlled directly from measured brain activity, developments in the field of neuro-marketing, and studies that report recreation of viewed video clips from brain scan data. There’s a lot of this stuff around. Is any of it really mindreading?

    My initial response (@Science2Inspire) to this question was that the term ‘mindreading’ implied ‘thought-reading’, in the sense that we are asking the question “could they tell what we are thinking?”. Based on this, pretty much all of the ‘mindreading’ work to date doesn’t really qualify as mindreading, as it decodes patterns that relate to either direct sensory input (speech, images) or imminent movement plans (the decision to press a left or right button). We have known for many years from the study of sensory perception and early sensory processing that the particular patterns of environmental energy (patterns of light, patterns of sound, patterns of touch) received by our senses are preserved as they are converted (transduced) into neuronal impulses by our various sensory receptor cells in the retina, cochlea, etc. These patterns are preserved through several sensory relay stations of the brain.

    For instance, a flash of light appearing in the upper left part of your visual field will always cause a corresponding change in brain activity in the lower right part of the visual cortex. If that flash of light moved a bit to the left and down, the brain response would move a corresponding bit to the right and up. This general feature of organisation in sensory systems is known as topography. So, whilst measuring and decoding activity patterns from sensory brain structures to recreate the original sensory experience is an extremely impressive technical and computational achievement, it isn’t mindreading as I would define it.
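    To make the idea of topography concrete, here is a toy sketch in Python. The mapping function is purely illustrative (real retinotopy is a more complicated warp of visual space, not a simple flip), but it captures the point that the spatial layout of the stimulus is preserved, in inverted form, in early visual cortex:

        # Toy illustration of retinotopic topography. The mapping is schematic, not
        # a model of real cortex; it only shows how stimulus position is preserved.

        def toy_cortical_position(visual_x, visual_y):
            """Map a visual-field position (degrees; +x = right, +y = up)
            to a schematic cortical position (arbitrary units)."""
            # Upper-left visual field -> lower-right cortex: flip both axes.
            return -visual_x, -visual_y

        # A flash in the upper-left visual field...
        print(toy_cortical_position(-5.0, 3.0))   # (5.0, -3.0): lower-right 'cortex'
        # ...moved a little further left and down:
        print(toy_cortical_position(-6.0, 2.0))   # (6.0, -2.0): response shifts right and up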

    The work on communicating with a person in a coma-like state is a bit different, as what is being decoded appears to be a volitional thought, but in reality what is happening here is that the researchers are able to differentiate between two patterns of brain activity that are attributed to either a ‘yes’ or ‘no’ response. There is no suggestion that these researchers could read out the wider thoughts of these individuals in any meaningful way. Again, I do not wish to question the huge accomplishments of such science, but there is a child’s game on the market (MindFlex) that can measure ‘brain waves’ using a headband, determine the magnitude of oscillations in these brain waves at a particular frequency (one known to be attributable to concentration, or mental effort) and use this measurement to guide a ball around a maze. Whilst this ‘toy’ obviously doesn’t have the same application or sophistication as the coma-communication technology, what is being achieved is pretty much the same: differentiate within a small subset of brain activity patterns and link detected patterns to an outcome.
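    For what it is worth, the underlying logic of ‘measure a pattern, map it to an outcome’ can be sketched in a few lines. The Python snippet below is a minimal illustration only, using a synthetic signal and an arbitrary frequency band and threshold; it is not MindFlex’s actual algorithm, but it shows the general shape of turning band power in a ‘brain wave’ signal into a binary control signal:

        import numpy as np
        from scipy.signal import welch

        # Illustrative only: a synthetic 1-second "EEG" trace sampled at 256 Hz.
        fs = 256
        t = np.arange(fs) / fs
        rng = np.random.default_rng(0)
        eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

        def band_power(signal, fs, low, high):
            """Average spectral power between low and high Hz (Welch estimate)."""
            freqs, psd = welch(signal, fs=fs, nperseg=fs)
            mask = (freqs >= low) & (freqs <= high)
            return psd[mask].mean()

        # Power in an 8-12 Hz band, thresholded into a yes/no control signal.
        power = band_power(eeg, fs, 8, 12)
        steer_ball = power > 0.01          # arbitrary threshold for illustration
        print(f"band power = {power:.4f}, steer ball: {steer_ball}")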

    So, if we loosen our definition of mindreading to include ‘decoding of brain activity in any sense’, then I guess most of the work alluded to above that has attracted mindreading-type headlines does qualify. But surely we want something a little more stringent than this: a Turing-type test that perhaps fits a little more with our general definitions of mindreading? The Oxford English Dictionary defines mind-reading as “The act or process of discerning (or appearing to discern) what another person is thinking” (oed.com). This maps pretty well onto my lay understanding of the term. Again, the emphasis here is on thought-reading, and so perhaps a more robust test for mindreading technology that encapsulated this might go something along the lines of:

    "The ability to decode a novel, unexpressed thought from brain activity and represent it in a form that is subsequently recognisable by the thinker as their own."

    Or something similar. I’d like to hear other people’s ideas for a testable definition relevant to neuroscience research. I included the word ‘novel’ because, whilst any mindreading system would probably always need to be tuned up on extensive ‘training’ data before it could produce accurate results, a strong test that requires decoding of a novel thought would ensure that we get away from the concern about a system simply being able to differentiate between a finite set of responses based on a familiar input pattern.
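    To make the ‘finite set of responses’ concern concrete, here is a minimal Python sketch, with entirely synthetic ‘brain patterns’, of the kind of decoder most current studies rely on. It is trained on labelled examples and can only ever return one of the labels it has already seen, which is exactly why decoding a genuinely novel thought would be a much stronger test:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "brain activity" patterns for two trained responses, 'yes' and 'no'.
        # Each row is one trial; in a real study these would be voxel or electrode values.
        yes_trials = rng.normal(loc=1.0, scale=0.5, size=(20, 50))
        no_trials = rng.normal(loc=-1.0, scale=0.5, size=(20, 50))

        # "Training": store one template (mean pattern) per known response.
        templates = {"yes": yes_trials.mean(axis=0), "no": no_trials.mean(axis=0)}

        def decode(pattern):
            """Return whichever trained label's template is closest to the pattern."""
            return min(templates, key=lambda label: np.linalg.norm(pattern - templates[label]))

        # A new 'yes'-like trial is decoded correctly...
        print(decode(rng.normal(1.0, 0.5, 50)))
        # ...but a pattern the decoder was never trained on is still forced into
        # one of the two known labels: it can never report a novel thought.
        print(decode(rng.normal(0.0, 0.5, 50)))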

    One final thought about the term mindreading concerns its somewhat Orwellian and traditional sci-fi connotations. The reporting of neuroscience stories featuring mindreading-type research, even in the science media, is often characterised by statements such as “this technology may one day allow scientists to read minds” and terms such as “telepathy machine”. There are also often brief paragraphs that, whilst explicitly stating that the reported research doesn’t YET allow scientists to read the minds of individuals without their permission, nevertheless raise the spectre of this in juxtaposition with the science.

    The effect of all this would seem to be to perpetuate the links between this sort of science and dystopian, Big-Brother-type societies in which this technology might be used to intrusively steal one’s thoughts. The danger is that such negative (and unfounded) associations might dent public and ultimately political support for work in fields where human brain activity is recorded and attempts are made to decode the mental representations therein. This would be a great shame, since the potential medical benefits of such science are enormous, including neuroprosthetic devices, brain-computer interface technologies and methods for communicating with individuals in a ‘locked-in’ state, to name a few of the more prominent. All of these possibilities should be there for the taking, whilst our private thoughts remain our own for some time yet.

    Comments

    vongehr
    "Big-Brother type societies in which this technology might be used to intrusively steal one’s thoughts. The danger is that such negative (and unfounded) associations ..."
    I'd bet you'd say that; it is your funding money, but where is your argument backing up "unfounded"? Advanced brain reading will most certainly be used in investigation scenarios, just like lie-detectors are. I hear them already: "Oh why would you object to go under the scanner if you are innocent; got something to hide?"
    Gerhard Adam
    ...just like lie-detectors are.
    Oh great, another inappropriately abused piece of technology producing incorrect results.
    Mundus vult decipi
    Chris Martin
    Sascha,

    Believe me, I would share your concern if there were technology around that could 'steal thoughts'. But there isn't, and it doesn't look likely that there will be anytime soon. My use of 'unfounded' is based on the fact that no study yet has shown any potential for 'reading minds' in anything other than the superficial sensory-decoding / pattern-recognition manner I describe. As someone who works with fMRI, I can assure you that getting a decent signal out of the machine even to report simple activation of a sensory area of the brain to a really powerful stimulus is difficult enough, without trying to interrogate signals that might tell me something about 'what you did last summer'.
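    To give a flavour of what that involves, here is a toy Python sketch (entirely synthetic numbers, not a real fMRI pipeline, which would convolve the stimulus with a haemodynamic response function and fit many thousands of voxels) of the simplest version of the problem: estimating whether one voxel's noisy time series follows a stimulus on/off regressor at all.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic example: 100 scans, stimulus ON/OFF in alternating 10-scan blocks.
        n_scans = 100
        stimulus = np.tile(np.concatenate([np.ones(10), np.zeros(10)]), 5)

        # One "voxel" that responds weakly to the stimulus, buried in noise.
        voxel = 0.3 * stimulus + rng.standard_normal(n_scans)

        # Ordinary least-squares fit of the voxel signal onto the stimulus regressor.
        design = np.column_stack([stimulus, np.ones(n_scans)])   # regressor + intercept
        beta = np.linalg.lstsq(design, voxel, rcond=None)[0]
        print(f"estimated stimulus effect = {beta[0]:.2f}")      # noisy estimate of 0.3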

    That said, I do agree with your sentiment and fear that if such tech were available 'they' probably would try to use it! This would be a battle that I would gladly join you to fight! Fact is, though, the use of fMRI in a legal setting already has a precedent, as discussed in this Nature News piece. What happened there is that the scanner was used (by the defense, not prosecutors) to try to show that the accused was unable to regulate the kind of impulsive thoughts that might have led to his criminal acts in the same way as you or I could. This is essentially just seeing how effectively one area of the brain can inhibit activity in another - pretty mechanistic really, with no way of knowing what any of the subject's thoughts are other than by virtue of the test conditions, which instructed them to think particular things. In other research, see this paper, which shows that whether juries are likely to convict more or less when fMRI is theoretically able to assist in prosecuting accused individuals depends entirely on the information they are given about the reliability of the machine.

    I do not rule out the possibility that one day a new form of technology could go down the mind-reading road (though I cannot envisage it right now), and, as with all science-tech developments, discussion and debate about implications and possible (mis)use are essential. I think my point is that these tools have many non-mind-reading applications that could be of great benefit to many people, and so it would be a shame to have a 'Frankenstein Foods'-type scare over it all.


    vongehr
    Oh - I know the research and what can be done pretty well, but you seem to have a convenient lack of imagination about how to use it (not to mention your underestimating the exponentially accelerating speed of technological progress).

    You do not need to "steal thoughts" in order to employ slightly more advanced methods in interrogations. Even just mapping whether the accused has seen something already or not (different areas involved in learning lighting up, perhaps) could be used. You map it out with things that you know are known/unknown to the person, then, after you have the computer analysis being able to quite reliably distinguish, you show the pictures/sounds of interest. No way she can pretend not to have been at that place before or not to recognize that particular person, ... . I just made this example up in one minute on the fly here - no doubt there are much better examples that people who know more about this stuff will/have come up with.

    You say such is never reliable? Oh guess what - that did not stop anybody from killing people before, say after lie-detector tests. And you are perfectly correct with stating that initially it is ironically often a defense team that starts such.

    You are a nice person, you know; I am not trying to attack you; you are that nice scientist guy who thinks that his technology is so new and promising and special. You focus on the good stuff, which is laudable. But guess what, for everyone like you and me, there are a hundred focussing on nothing else but how they can use the latest gadget to somehow advance their own position or hit their enemy.
    Private thoughts are not private. A headband is not necessary. Audio broadcast is also available.

    Gerhard Adam
    Seriously, mindreading isn't possible, although it should be possible to extract data from the brain.  The problem is that intent and nuance can't be decoded.  So while there might be a means by which certain ideas can be extracted and understood, without having a context in which it is interpreted any actual "mindreading" is a fantasy.  Basically the definition I'm using is to indicate that "mindreading" is the ability to extract, not just data, but meaning from the brain. 

    This is amply demonstrated by witnessing the difference between an internet chat session versus a face-to-face conversation.  In the chat session, there's a much higher likelihood that expressions and intentions will be misunderstood, whereas in a face-to-face conversation there are all the additional subtle factors of expression and other interpretative factors that will come into the conversation.  Needless to say, they are vastly different experiences.
    Mundus vult decipi