Recent comments on a certain article (hey, what does that green button do?) about the future of neural interface technologies have brought up some valid ethical arguments.  Because I didn't want to go Jurassic Park and revel in the possibilities while ignoring the consequences, I thought it a good idea to break down the issues.

The arguments center around the (as yet unrealized) idea of a brain-machine interface that goes far beyond the capabilities now in hand.  For now, we use BMI to overcome sensory and motor deficits, and it is fantastic.  In the future, we will likely go beyond that and find ways to interact with higher-order processes, such as memory, perception and visualization.  That is assuming, as the article suggests, that an appropriate or useful access point for such a technology could be found.

When and if it happens, I think it will be the next step up for mankind unless it kills us all.

We're fortunate to live in an age where we have both history and fiction to inform us on the possibilities of new, powerful and human-changing technologies.  In this case, we also have the burgeoning field of Neuroethics, in which the problems are already being contemplated (thank goodness for philosophy majors, am I right?  Or is anything ever right?)

A paragraph from the University of Pennsylvania Neuroethics website serves as a useful introduction.  It was written by Martha Farah, Academic Director of the Center for Neuroscience and Society, in the section on Brain-Machine Interfaces:

"Research on electronic brain enhancement conjures up frightening scenarios involving mind control and new breeds of cyborg. The dominant role of the American military in funding the most cutting edge research in this area does little to allay these worries. In the short term, however, the ethical concerns here are similar to those raised by the pharmacological enhancements discussed elsewhere on this site: safety, social effects, and philosophical conundrums involving personhood. Of course, the irreversible nature of some of the non-pharmacological interventions exacerbates these problems.

In the long term, humanity may indeed find itself transformed by the incorporation of new technology into our nervous systems. An intriguing (and reassuring) perspective on this transformation is offered by Andy Clark, who suggests that we are already cyborgs of a kind, and no worse for it."

A few points right off the bat.  (1) The military makes all science seem evil.  They could fund an automatic puppy petter and it would still seem sinister.  (2) Terminator forever ruined the good name of cyborgs, and for everyone with emotion chips, that's sad.  (3) Everything in philosophy is a conundrum; there's no need to specify.

With those things said, let's break down the ethical dilemmas that a higher-order brain-machine interface would entail.

1.  Should anyone ever see a memory as it exists in the brain?
Our entire culture exists because we express what's in our brains.  An interface would simply cut out the middle man: namely, the body.

2.  Should memories be downloadable? 
For personal use, it could be the greatest app ever.  In the wrong hands...

3.  Should the technology to input data into a neural network (like a memory) ever be developed?
No, if only to keep it out of the hands of advertising agencies.  Easier and safer to leave it as an output only, like an MP3 player.

4.  Is it ethical to have an elective surgery to insert a permanent electronic interface into the brain?
It is invasive, and would be expensive.  Aesthetically, it would be the most extreme form of body piercing.  Practically, it might have a few more issues than the cochlear implant, which is not bad.  Legally, it would always, always have to be voluntary.  "Brainrape" would have to carry mandatory life imprisonment or worse.

5.  Does the possibility of abuse require that a technology never be developed?
That's the big one, and we aren't even close to resolving it.  It's the question at the center of controversies surrounding gun control, nuclear technologies, the internet, biological supplements, child care, etc.  Has that stopped us from moving forward with technologies? No.  Have we seen them abused?  Yes.  Do we anticipate further abuse? Yup. Do we take the good with the bad?  That's the hard question.  And nope, I won't answer it.

6.  Would a higher-order neural interface bring death to mankind?
If you're a Dollhouse fan (my sympathies) then the answer is yes.  Other futuristic scenarios back that up.  My personal opinion?  Barring the creation of an incurable killer plague or igniting our entire nuclear arsenal, I don't see a technology being the cause of the death of our species.  What we're talking about is allowing devices a more direct window into the brain, and while that is a significant step, it isn't a huge one.

7.  Should we ever live in a world where (a) all brains are linked as one into a collective and we can hear everyone's thoughts all the time, or (b) everyone walks around with a tiny monitor embedded in their foreheads that reveals what they're thinking at any given time, or (c) we all link up to the World of Warcraft and live there like the Matrix?

Come on now.