Ever since the gap between neural and silicon circuits was bridged through a common electrical language, neural interface technologies have pushed us towards medical advances that border on the biblical: the deaf hear, the blind see, and the motionless move.  Cochlear implants have been an unprecedented success in biomedical engineering (1) and may only be surpassed by the visual implants currently in development (2).  The first tetraplegics with implants allowing them to move robotic limbs have been veteran cyborgs for half a decade (3), and many amputees now have prosthetics that they can use to move and feel (4).  Even simple direct stimulation technologies are now used to treat epilepsy, movement disorders, and depression (5).  

All of this is fantastic.  But what’s in it for the rest of us?

After all, a technology that allows for functional, practical interfacing with the central nervous system--particularly the cerebral cortex--should have enormous potential outside of medical applications.  We all have brains that are heavily used (regardless of what you may guess about certain individuals).  Most of us also have access to computers, and many of us even carry one or two computers with us at all times.  The present is called the Information Age precisely because our brains can delegate a lot of information storage and distribution indirectly (through our fingertips) to electronic devices.  And it isn’t as if the possibilities of neural interface technologies haven’t been considered, or even capitalized on (just ask Joss Whedon or the Wachowski brothers or Dr. Octopus).  So, it should be inevitable that companies start making headway towards bringing a revolutionary technology out of the hospitals and into the markets.

Right?

It’s hard to know what to call a device that would link a person’s thoughts, conceptualizations, and memories with a computer.  An Imagination Interface, perhaps.  By any name, it doesn’t take much time to realize that you have always wanted one.  Think of a world where you could selectively download your life’s memories--sights, sounds, experiences--onto a Blu-ray to show your grandkids (who, by that time, will have something better than Blu-rays).  Imagine being able to fantasize whole movies with all your favorite people and places and impossible things, and then putting it on your laptop for editing.  Imagine going to sleep at night and DVRing your dreams.  Imagine witnesses of crimes being able to show the police what they experienced instead of having to describe it.  Imagine what you could upload.  Imagine what you could download.

The amazing thing is this: of all the obstacles that prevent such an interface from being realized right now, the ability to decode complex, localized neural firing patterns is not one of them.  

The neural interfaces that allow someone to move a robotic arm with the mind, although very elaborate in design and operation, work because of two basic principles: (1) each part of the brain is wired up to receive and send specific signals for specific purposes, and (2) a computer can understand correlations between neural firing patterns and events when given appropriate context.  Every part of the cortex of the brain can theoretically be accessed with an understanding of those principles.  

For example, interfaces used for robotic arm movement are placed (except in some amputees) in the part of the motor cortex that contains a local neuronal network specifically wired to send out movement commands to the natural arms.  The cortex is already set up to give the computer exactly what it needs.  The computer just has to interpret the firing based on event cues, as determined by the programmer.  To do that, the person wearing the interface has to go through some tests.  In one case, the person would be asked to imagine moving the hand up.  The interface would sense how the network responds when the “up” command is conjured, and the next time those neurons fire in that way, the computer will move the robotic hand up.  With practice, the movements can become somewhat natural as the cortex and computer learn to communicate.  It doesn’t matter that the computer doesn’t know what each individual neuron is really doing; it just has to correlate neural activity with a command it can execute. (6,7)
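To make that correlation step concrete, here is a minimal sketch, in Python, of how such a calibration could work: firing-rate vectors recorded while the user imagines each command are averaged into templates, and later activity is matched to the nearest template.  Real interfaces use far more sophisticated decoders (population vectors, Kalman filters, point-process models), and every number and name below is purely illustrative, not taken from any actual system.

```python
import numpy as np

# Hypothetical calibration data: firing-rate vectors (one value per electrode),
# recorded while the user imagines each command.  Purely illustrative numbers.
calibration = {
    "up":   np.array([[22.0, 5.0, 14.0], [20.0, 6.0, 15.0], [23.0, 4.0, 13.0]]),
    "down": np.array([[6.0, 18.0, 9.0],  [5.0, 20.0, 10.0], [7.0, 19.0, 8.0]]),
    "rest": np.array([[3.0, 4.0, 3.0],   [4.0, 3.0, 4.0],   [3.0, 3.0, 5.0]]),
}

# Step 1: learn a template (mean firing pattern) for each imagined command.
templates = {cmd: trials.mean(axis=0) for cmd, trials in calibration.items()}

def decode(firing_rates: np.ndarray) -> str:
    """Return the command whose template the new firing pattern most resembles."""
    return min(templates, key=lambda cmd: np.linalg.norm(firing_rates - templates[cmd]))

# Step 2: during use, each new burst of activity is translated into a command
# the computer can execute, without knowing what any single neuron "means".
new_activity = np.array([21.0, 5.5, 14.5])
print(decode(new_activity))  # -> up
```

The point of the sketch is the second principle in action: the computer never interprets individual neurons; it only learns which overall firing patterns reliably co-occur with which intended movements.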

An Imagination Interface technology would build off of those same principles, but with one major problem.  In order for both principles to be used effectively, there has to be an area that normally computes and produces the types of signals that an interface designer wants to extract.  What is really required, and what would ultimately be the key to unlocking this technology, is an access point.

The kind of access point required would have to be some part of the brain that naturally compiles information to form mental images, sounds, event memories or concepts.  For example, say you just wanted a brain-machine interface that would allow you to create a single image in your mind--like an apple--and then transmit it to your iPhone.  You would close your eyes and imagine the color, shape, outline, texture and depth of the apple.  We visualize all of this with the part of our brain called the visual cortex (back of the head) (8).  The catch is that each aspect of the image is delegated to a different part of that cortex.  The different parts are spread too far apart, on both sides of the brain, for any single electrode grid to cover everything.  To access the complete image, you would need to plug the interface into an area where all of those components are consolidated into a whole, and you would have to train the computer to recognize which signals from that area represent which components.  The challenge isn’t interpreting that access point.  The challenge is that a good access point may not exist.
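To see why a single, consolidated access point matters so much, consider a toy illustration (entirely hypothetical; the component names and decoded values below are placeholders, not outputs of any real decoder).  Without such a point, the software would have to run a separate, synchronized decoder over every feature-selective region and then stitch the pieces back together itself; a genuine access point would hand it one pattern that already contains the whole.

```python
from dataclasses import dataclass

# Hypothetical readouts, each produced by its own electrode grid and its own
# trained decoder sitting over a different feature-selective region.
component_readouts = {
    "color":   "red",         # from a color-selective area
    "shape":   "round",       # from a shape/contour area
    "texture": "smooth",      # from a texture-selective area
    "depth":   "hand-sized",  # from a depth/size estimate
}

@dataclass
class ImaginedObject:
    color: str
    shape: str
    texture: str
    depth: str

# Without a consolidated access point, the software must gather, time-align,
# and fuse every component itself -- and hope nothing essential was missed.
apple = ImaginedObject(**component_readouts)
print(apple)  # ImaginedObject(color='red', shape='round', texture='smooth', depth='hand-sized')
```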

We tend to think that there should be a place in our brain where an image, thought or memory comes together and exists, because we have access to our whole brain all the time.  But just because every part of the brain has a function doesn’t mean that every function of the brain is restricted to a single part.  Evolution has never had a reason to give us a convenient biological USB port.  One could, in theory, implant electrode grids all over the brain to access that singular perception, but there are obvious problems with being so invasive with your most precious organ.  A single site would be the ideal, if it exists.

And indeed, a feasible access point may still exist.  There are regions of the brain where lots of diverse information is integrated for memory or decision-making functions, and these may be just the places to tap in.  One is the hippocampal region.  Information from all of the sensory cortices is fed into these long, tubular structures, which lie within the medial temporal lobe (behind your ear), in order to evaluate and store short-term memories (9).  Another region is the prefrontal cortex (under your forehead), which gathers information from around the brain--including the hippocampus--in order to make executive decisions.  The central router of the brain, the thalamus, interacts with every area of the cortex and may hold more information than we currently realize.  Lastly, because all mental information can be translated into language, the brain’s language-comprehension center, called Wernicke’s area, might be an unlikely but interesting option.

All of these candidate access points have their pros and cons, but there is a more fundamental issue to consider.  The idea of finding a functional access point for our thoughts is much bigger than a simple logistical, engineering, or even biological issue.  Our thoughts and memories, our capacity to imagine and create internally at such a relatively high level, are what make us sentient human beings.  Trying to find an access point really means trying to identify a central brain region involved in consciousness.  In fact, the access point may be what some have called the Neural Correlate of Consciousness (NCC), a loosely defined, theoretically distributed, and as-yet unidentified neural network that defines the perception of our existence. (10)  Really, once you start dealing with consciousness, as any psychology or philosophy major will tell you, there is no definitive end to the road.  Not a good sign for those attempting to make a practical link that will hook up to a BlackBerry.

There are scores of logistical obstacles that would have to be overcome even after an access point is discovered and decoded.  The brain is delicate and sensitive.  Accessing the brain requires more than a drill, gauze, and some ibuprofen.  Once the surgery is done, there would remain a significant device of some kind sticking out of the head.  What’s more, it isn’t clear how long current electrode grid set-ups can interact effectively with brain tissue without the cells around the electrodes dying or reacting to the metal as a foreign substance (11).  The device would require the care of both physicians and IT specialists, it would be extremely difficult to replace if it were ever damaged, and it goes without saying that it would not be cheap.  But all of these are bridges that we can cross when…and if…we arrive at them.

Finding a way to directly access human thoughts may be the most incredible and radical scientific breakthrough of anyone’s generation.  It would require an unprecedented multidisciplinary convergence of neuroscientists, neurologists, psychologists, computer scientists and engineers, not to mention ethicists and dedicated financial backers.  Even as the phenomenal applications of neural interfaces continue to develop, the technology’s impossibly high ceiling will linger, promising, but not yet delivering, revolutionary changes.  As it has been with much of our technological progress, the only thing between today and a future of brain-synching may be an act of imagination.

For now, whoever dreams up the workable solution will have to settle for typing that idea into their computer.  With their hands.  How quaint.

References

1. Basura GJ, Eapen R and Buchman CA (2009) Bilateral Cochlear Implantation: Current Concepts, Indications and Results.  The Laryngoscope 119: 2395-2401.  

2. Mokwa W, Goertz M, Koch C, Krisch I, Trieu HK and Walter P (2008) Intraocular Retinal Prosthesis to Restore Vision in Blind Humans. Conf Proc IEEE Eng Med Biol Soc: 5790-5793.

3. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD and Donoghue JP (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442: 164–171.

4. Schultz AE, Marasco PD and Kuiken TA (2009) Vibrotactile Detection Thresholds for Chest Skin of Amputees Following Targeted Reinnervation Surgery. Brain Res 1251: 121-129.

5.  Kuhn J, Gaebel W, Klosterkoetter J, Woopen C (2009) Deep Brain Stimulation as a New Therapeutic Approach in Therapy-Resistant Mental Disorders: Ethical Aspects of Investigational Treatment.  Eur Arch Psychiatry Clin Neurosci 259 (Suppl 2) S135-S141.  

6. Schwartz AB (2004) Cortical Neural Prosthetics. Annu Rev Neurosci 27: 487-507.

7. Truccolo W, Hochberg LR and Donoghue JP (2010) Collective Dynamics in Human and Monkey Sensorimotor Cortex: Predicting Single Neuron Spikes. Nat Neurosci 13: 105-111.

8. Slotnick SD, Thompson WL and Kosslyn SM (2005) Visual Mental Imagery Induces Retinotopically Organized Activation of Early Visual Areas. Cerebral Cortex 15: 1570-1585.

9. Dickerson BC and Eichenbaum H (2010) The Episodic Memory System: Neurocircuitry and Disorders. Neuropsychopharmacology 35: 86-104.
 
10. Zeman A (2002) Consciousness: A User’s Guide. Yale University Press, New Haven. pp. 303-347.

11. Leach JB, Achyuta HK, Murthy SK (2010) Bridging the Divide Between Neuroprosthetic Design, Tissue Engineering and Neural Biology. Front Neuroengineering 2: 1-19.