    Interface II: The Wrong Hands
    By David Sloan | April 14th 2010 05:09 PM | 5 comments
    Recent comments on a certain article (hey, what does that green button do?) about the future of neural interface technologies have brought up some valid ethical arguments.  Because I didn't want to go Jurassic Park and revel in the possibilities while ignoring the consequences, I thought it a good idea to break down the issues.

    The arguments center around the (as yet unrealized) idea of a brain-machine interface that goes far beyond the capabilities now in hand.  For now, we use BMI to overcome sensory and motor deficits, and it is fantastic.  In the future, we will likely go beyond that and find ways to interact with higher-order processes, such as memory, perception and visualization.  That is assuming, as the article suggests, that an appropriate or useful access point for such a technology could be found.

    When and if it happens, I think it will be the next step up for mankind unless it kills us all.

    We're fortunate to live in an age where we have both history and fiction to inform us on the possibilities of new, powerful and human-changing technologies.  In this case, we also have the burgeoning field of Neuroethics, in which the problems are already being contemplated (thank goodness for philosophy majors, am I right?  Or is anything ever right?).

    A paragraph from the University of Pennsylvania Neuroethics website serves as a useful introduction.  It was written by Martha Farah, Academic Director of the Center for Neuroscience and Society, in the section on Brain-Machine Interfaces:

    "Research on electronic brain enhancement conjures up

    frightening scenarios involving mind control and new breeds of cyborg.

    The dominant role of the American military in funding the most cutting

    edge research in this area does little to allay these worries. In the

    short term, however, the ethical concerns here are similar to those

    raised by the pharmacological enhancements discussed elsewhere on this

    site: safety, social effects, and philosophical conundrums involving

    personhood. Of course, the irreversible nature of some of the

    non-pharmacological interventions exacerbates these problems.

    In the long term, humanity may indeed find itself

    transformed by the incorporation of new technology into our nervous

    systems. An intriguing (and reassuring) perspective on this

    transformation is offered by Andy Clark, who suggests that we are

    already cyborgs of a kind, and no worse for it."

    A few points right off the bat.  (1) The military makes all science seem evil.  They could fund an automatic puppy petter and it would still seem sinister.  (2) Terminator forever ruined the good name of cyborgs, and for everyone with emotion chips, that's sad.  (3) Everything in philosophy is a conundrum; there's no need to specify.

    With those things said, let's break down the ethical dilemmas that a higher-order brain-machine interface would entail.

    1.  Should anyone ever see a memory as it exists in the brain?
    Our entire culture exists because we express what's in our brains.  An interface would simply cut out the middle man: namely, the body.

    2.  Should memories be downloadable? 
    For personal use, it could be the greatest app ever.  In the wrong hands...

    3.  Should the technology to input data into a neural network - like a memory - ever be developed?
    No, if only to keep it out of the hands of advertising agencies.  Easier and safer to leave it as an output only, like an MP3 player.

    4.  Is it ethical to have an elective surgery to insert a permanent electronic interface into the brain?
    It is invasive, and would be expensive.  Aesthetically, it would be the most extreme form of body piercing.  Practically, it might have a few more issues than the cochlear implant, which is not bad.  Legally, it would always, always have to be voluntary.   "Brainrape" would have to carry mandatory life imprisonment or worse.

    5.  Does the possibility of abuse require that a technology never be developed?
    That's the big one, and we aren't even close to resolving it.  It's the question at the center of controversies surrounding gun control, nuclear technologies, the internet, biological supplements, child care, etc.  Has that stopped us from moving forward with technologies? No.  Have we seen them abused?  Yes.  Do we anticipate further abuse? Yup. Do we take the good with the bad?  That's the hard question.  And nope, I won't answer it.

    6.  Would a higher-order neural interface bring death to mankind?
    If you're a Dollhouse fan (my sympathies), then the answer is yes.  Other futuristic scenarios back that up.  My personal opinion?  Barring the creation of an incurable killer plague or igniting our entire nuclear arsenal, I don't see a technology being the cause of the death of our species.  What we're talking about is allowing devices a more direct window into the brain, and while that is a significant step, it isn't a huge one.

    7.  Should we ever live in a world where (a) all brains are linked as one into a collective and we can hear everyone's thoughts all the time, or (b) everyone walks around with a tiny monitor embedded in their foreheads that reveals what they're thinking at any given time, or (c) we all link up to the World of Warcraft and live there like the Matrix?

    Come on now.   



    Comments

    Gerhard Adam
    I don't think you're exploring these ideas realistically.
    1.  Should anyone ever see a memory as it exists in the brain?
    Our entire culture exists because we express what's in our brains.  An interface would simply cut out the middle man: namely, the body.
    Not true.  Humans routinely lie.  We do it so much that most times we're not even aware that we're doing it.  Our culture is based on our ability to lie and keep secrets.  What we choose to express is vastly different than what's in our brains.
    2.  Should memories be downloadable? 
    For personal use, it could be the greatest app ever.  In the wrong hands...
    I'm not sure what the point would be.  What does it mean to download a memory for "personal use"?  It doesn't get any more personal than being in your brain already.
    3.  Should the technology to input data into a neural network - like a memory - ever be developed?
    No, if only to keep it out of the hands of advertising agencies.  Easier and safer to leave it as an output only, like an MP3 player.
    This is a complete showstopper.  Such an ability is the ultimate form of tyranny.
    4.  Is it ethical to have an elective surgery to insert a permanent electronic interface into the brain?
    It is invasive, and would be expensive.  Aesthetically, it would be the most extreme form of body piercing.  Practically, it might have a few more issues than the cochlear implant, which is not bad.  Legally, it would always, always have to be voluntary.   "Brainrape" would have to carry mandatory life imprisonment or worse.
    "Brainrape" would also be impossible since with the ability to control the mind comes the ability to assure people that they volunteered.  Once again, what would be the point?
    5.  Does the possibility of abuse require that a technology never be developed?
    This isn't a "possibility of abuse".  This is relinquishing total control over who you are as a person to an individual that has the absolute control over determining how you would feel about it.  This isn't "abuse" ... it's insanity.
    6.  Would a higher-order neural interface bring death to mankind?
    Yes.  Why is there always an implicit assumption that somehow we need such a thing?  Why does anyone think this is a good idea?
    7.  Should we ever live in a world where (a) all brains are linked as one into a collective and we can hear everyone's thoughts all the time, or (b) everyone walks around with a tiny monitor embedded in their foreheads that reveals what their thinking at any given time, or (c) we all link up to the World of Warcraft and live there like the Matrix?
    You're joking, right? 

    Just for the record, I don't consider any of these ethical dilemmas.  The supposed "benefit" has hardly been articulated and whenever people speculate about it, it invariably sounds like nonsense.  The most beneficial thing that people typically come up with is an enhanced memory and the ability to recall information. 

    Most of the other points presume a benefit, and people seem surprisingly enthusiastic about giving up their personalities and freedom for some nebulous piece of electronic junk.  For those that don't quite understand, consider this ... your brain is the only place that is truly your own.  Your thoughts, your ideas, everything, is your own ... why does anyone think it would be a good idea to give that up?
    Mundus vult decipi
    Renaisauce
    First, let me say that I think it would be hard to consider any of this realistically, since it is still only theoretical and we have no idea what the ultimate manifestation of this technology would look like.  (For example, we can't assume that an interface that can download pictures could also upload commands that would interfere with a person's free will.  The same circuitry might not be involved.)

    Yes, the last one (7) was a joke.  But I made it to make a point.  It's easy to get carried away with this stuff.  I bring up all the fiction references because I think there have actually been too many crazy evil portrayals of advanced brain-machine interfaces.  It's hard to be objective with the current zeitgeist.  Then again, works like Brave New World have kept us from cloning human armies and stuff, so I guess a little paranoia serves its purpose.

    I accept the challenge of articulating why we would do this in the first place.  And again, I'm just talking about an interface that could interpret the brain signals that represent the pictures or words that we're thinking about.  I'm not talking about something that can put data in, which we would both agree would be a bad idea.

    1.  Memory is fragile and easy to degrade.  I think it would be nice to store the ones I want to keep.  I don't like the fact that the millions of memories I have of my kid when a camcorder wasn't around are going to corrode within ten years, and may disappear completely.

    2.  Communicating complicated information from our brain to another person's brain is hard. Maybe it could be easier. How many times have you wanted to just show somebody what you're trying to explain?

    3.  Our creative capacities are woefully inhibited by our talent levels.  I can't recreate a single image from my head on paper that looks like what I'm thinking, because I don't have the motor skill.  Many people have great ideas but don't have the vocabulary to conjure them up.  Finding a way to manifest our thoughts directly would overcome all that.  (Although I definitely believe that there is something refining about the physical process of creating art.)

    4.  People spend their whole lives trying to figure out what people from other generations were thinking.  We have to speculate and theorize based on scraps of information to connect with our past.  Wouldn't it be interesting to reach a point where our great grandchildren have a record of how our minds actually worked?

    I believe that all of this combined would not rob us of our individualism, or break down our ability to interact.  I think it would be empowering.  It would allow us to express ourselves in a way that transcended our physical limitations. 

    The main issue, and I think the one that you are most upset about, is one of control.  Let's say that the interface existed, and that the brain could transmit thoughts onto a portable device.  I would say that it would be illegal to:

    (a) give that device any internet connection, or even be able to interact with other devices unless disconnected.

    (b) allow any individual to implant that device on anybody outside of a licensed hospital with a very stringent verification protocol.

    (c) allow any individual, corporation or government agency to ever be able to access another person's device, even if that person is a terrorist suspect.  The contact hardware would have to be extremely personal, interacting with one device only, password protected, etc.  In other words, great pains would need to be taken to keep the output entirely under the individual's control (a toy sketch of what that might look like follows this list).
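
    Just to make the shape of those rules concrete, here is a toy sketch in Python.  To be clear, this is pure speculation on my part: every name in it is hypothetical, and it models nothing beyond the three rules above, i.e. no write path toward the brain at all, pairing with exactly one device, and owner authentication before every read.

        import hashlib
        import hmac
        import secrets

        # Hypothetical sketch only; nothing here corresponds to real hardware.
        class NeuralOutputInterface:
            """Output-only by construction: there is deliberately no write method."""

            def __init__(self, owner_passphrase: str):
                self._salt = secrets.token_bytes(16)
                self._key = hashlib.pbkdf2_hmac(
                    "sha256", owner_passphrase.encode(), self._salt, 100_000)
                self._paired_device_id = None  # rule: exactly one device, ever

            def _authenticate(self, passphrase: str) -> None:
                candidate = hashlib.pbkdf2_hmac(
                    "sha256", passphrase.encode(), self._salt, 100_000)
                if not hmac.compare_digest(candidate, self._key):
                    raise PermissionError("owner authentication failed")

            def pair(self, device_id: str, passphrase: str) -> None:
                # (b): pairing is a one-time act authorized by the owner
                self._authenticate(passphrase)
                if self._paired_device_id is not None:
                    raise PermissionError("already paired; re-pairing not allowed")
                self._paired_device_id = device_id

            def read_signal(self, device_id: str, passphrase: str) -> bytes:
                # (a)/(c): only the single paired device can read, only with
                # the owner's passphrase, and data only ever flows outward
                if device_id != self._paired_device_id:
                    raise PermissionError("unrecognized device")
                self._authenticate(passphrase)
                return b"<recorded neural output>"

    Usage would be something like implant.pair("davids-recorder", passphrase) followed by implant.read_signal(...); the point of the sketch is just that the absence of any input method is enforced by the design itself, not by policy.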

    I know that human nature is what it is.  I know that honesty and good behavior don't come easily or even very often.  But it isn't going to change soon.  Technology will.  And this technology is coming.  Neuroscience is understanding the brain, and understanding the brain will eventually mean being able to read it.  It's good that we have these conversations, because even if we don't know now how it will work, it's a good bet that it can be made to work, which means, one day, we will. 


    Gerhard Adam
    I"m not talking about something that can put data in, which we would be both agree would be a bad idea.
    Well, that's the trouble, isn't it?  Unless data can go back into someone else's brain, it's no good.
    I think it would be nice to store the ones I want to keep.
    I understand what you're saying, but if we were going to talk practically about technology, you'd have to have something that effectively filters all the memory data and ultimately builds something like a DVD.  The problem here is that the hardware must try to provide the sort of coherent data flow that your memory can't actually provide.  So whatever you download can't be any more accurate than the arbitrary data your brain has.  More importantly, even our memories aren't entirely accurate, so what is actually being preserved?
    How many times have you wanted to just show somebody what you're trying to explain? 
    Once again, I understand what you're saying, but this is impossible without allowing data upload.
    Wouldn't it be interesting to reach a point where our great grandchildren have a record of how our minds actually worked?
    No, it wouldn't.  That's the hallmark of insanity.  This is precisely what marks an individual as crazy: they can't distinguish between what's real versus what's imaginary from their own minds.  The process you're describing treats our thoughts as if they were simply a DVD recording, instead of millions of synapses firing across a variety of interconnected ideas.  It isn't a question of the access point, but rather whether thoughts are coherent enough to ever be presented as a linear view of data.
    (a) give that device any internet connection, or even be able to interact with other devices unless disconnected.

    (b) allow any individual to implant that device on anybody outside of a licensed hospital with a very stringent verification protocol.

    (c) allow any individual, corporation or government agency to ever be able to access another person's device, even if that person is a terrorist suspect.
    None of this is workable, because there's simply too much at stake.  Do you really think any authority is going to balk at invading someone's brain if they thought they could find a criminal, or a terrorist, or gain evidence to convict someone in court?  However, the worst part about it is that it would create the illusion that we're seeing something honestly and directly, without realizing how often our own brains make up information and indulge in untruths.  To be held to such a standard of interpretation would be horrific.

    It is only the requirement that people have to work for the information they acquire that ensures that when we finally get it, we've probably got it reasonably correct. 

    If I were to consider the pure psychological aspect of it, I would argue that the brain isn't nearly focused or coherent enough to produce any long-term stream of data that can be interpreted.  It would be like watching a cameraman with ADHD trying to see the sights in a new city.  It would be madness.

    The worst part of the suppositions is that many of these discussions suggest that such a human/machine interface can make humans more intelligent.  However, this is a logical impossibility.  If we assume that we could actually create an intelligent computer that could be merged with a human being, then one side of the interface would have to be "in charge".  I submit that whichever side is the more intelligent will end up being the side that calls the shots.  Therefore, if a machine can actually be made more intelligent, then that individual will cease being human and will be a machine with a supplemental human brain.

    I understand what you're saying about your memories of your kid, but that's spoken like a young person.  You don't understand yet that you're still collecting millions of new memories, and then you'll supplement those with grandchildren and others.  Quite frankly, there isn't enough time in the world to relive those old memories, and in truth you won't need to, because if they're important enough to preserve, your brain already knows they're important enough to actually remember.  Unless an individual is diseased, I've never met a single person who claims they've forgotten an important memory.

    Mundus vult decipi
    It seems that the basic premise of your very interesting reflection is utilitarian: these technologies would be "the next step up" for mankind, unless they kill us all.  Utilitarianism/consequentialism (e.g., Mill) is already a philosophical position, with various parameters in place for judging good and evil, namely the collective benefit of the "species" (loosely defined as a group of individuals recognized by some standard) and the preservation of the rights ("liberties") of said individuals.  This position thus reduces the ethical question to voluntarism and moral sentiment: nothing in the nature of technology or the nature of man is deciding the issue.

    To say this in another way: In order to decide whether technology-X is ethical for humans to use, one must define human, which a fortiori defines what it is for men to be perfect, and hence insofar as human beings strive to fulfill all the capacities they have insofar as they are human, they make ethically sound choices. Hence, to assert at the outset that "technology-X is the next step up, unless..." is a petitio principii: men are defined such as to include any technologies non-destructive of their subject, and hence the real criterion of moral judgment becomes a mutable measuring rod: inherently dependent upon people's tastes and the capacity of technology at the present moment.

    A lot of the comments seem to revolve around the concerns of a system which extracts information stored in the brain (taking a mental image and displaying it, recording how a brain works). I think the other direction is much more interesting: not enhancing our electronic systems with data from our brain, but rather providing our brain with data from electronic systems. I'm excited about the idea of extracting data (visual, numerical, even ... tactile?) from electronic systems without having to go through some sort of visual display unit.