    The Need for Emotional Experience Does Not Prevent Conscious Artificial Intelligence
By Samuel Kenyon | January 18th 2013

I have mentioned the book The First Idea by Greenspan and Shanker many times recently. Lest anybody assume I am a fanboy of that tome, I wanted to argue with a ridiculous statement the authors make regarding consciousness and artificial intelligence.

    Greenspan and Shanker make it quite clear that they don't think artificial intelligence can have consciousness:

    What is the necessary foundation for consciousness? Can computers be programmed to have it or any types of truly reflective intelligence? The answer is NO! Consciousness depends on affective experience (i.e. the experience of one's own emotional patterns). True affects and their near infinite variations can only arise from living biological systems and the developmental processes that we have been discussing.


Let's look at that logically. The first part of their argument is Consciousness (C) depends on Affective experience (A):

    C → A

Let's grant the premise that this claim is sound, even if it's not. Conscious AI can still be made even if it is true.

But the next part states Affective experience (A) depends on Biological systems (B) and Development processes (D):

    A → B ∧ D

Which means that Consciousness is, in turn, dependent on Biological systems and Development processes:

    C → B ∧ D

That's bad because it entails that if a system is not biological or if the system doesn't have proper development processes, that system cannot be conscious:

    ¬B ∨ ¬D → ¬C

So the authors dramatically answer major questions of Strong AI in the negative with a deus ex machina of "true affect". The authors give no evidence or rationale for "true affect." Why would anybody assume that only living biological systems can experience emotional patterns? Biological organisms may be where affect originated, but that doesn't mean it can't be replicated in other substrates. Development is also a concept which can be implemented in an artificial system.
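For what it's worth, the chain of implications itself is valid. A brute-force truth-table check (plain Python, purely illustrative) confirms as much, which means the whole weight of their conclusion rests on that unsupported premise about true affect:

    from itertools import product

    # Purely illustrative: enumerate every truth assignment and confirm that
    # whenever both premises hold, the conclusion holds too.
    # C = conscious, A = affective experience, B = biological, D = development.
    def implies(p, q):
        return (not p) or q

    for C, A, B, D in product([False, True], repeat=4):
        premise1 = implies(C, A)            # consciousness depends on affect
        premise2 = implies(A, B and D)      # affect depends on biology and development
        conclusion = implies(C, B and D)    # so consciousness depends on both
        if premise1 and premise2:
            assert conclusion               # never fires: the inference is valid

    print("Valid inference; everything rests on whether the second premise is true.")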

The authors spend most of a book talking about affect and development and how symbols are formed, and then slam on the emergency brake when they get to the possibility of replicating the very architectural and dynamic-systems concepts they spent so much time trying to explain.

For those who think I may have taken this out of context: if the authors meant typical disembodied computers as opposed to embodied computers (e.g., robots), they would say so, and they don't. Simulated bodies are not considered either.

In conclusion, I still know of no reason that would prevent making conscious Strong AIs.

    Comments

    Gerhard Adam
    I'm presuming that when you say replicate, you don't mean simply a simulation, but rather an AI that actually "experiences" these emotions, etc. in exactly the same context as a biological system would.

The primary problem I see is one of values.  Since biological systems have the potential to die [i.e. stop living], that completely changes the motivation, dynamics, and values that they develop with.  Injuries and reproductive competition are also factors that will influence their behaviors and provide the values that shape emotions and consciousness [cognition too].

So, the problem with an AI is to address how these types of values can be provided for in a different substrate.  Like it or not, death isn't particularly meaningful if it simply involves turning off and turning back on.  Death is significant because there's no return; as a result, it carries a much heavier connotation with respect to emotions and values.  Similarly with injuries, or disease, where we are dependent on our ability to heal or fend off disease.  These issues in biology give rise to the values and behaviors that take these situations seriously.

    Again, it would be difficult to imagine how such a value system could be imparted to an AI that doesn't share those risks.

It is within that context that biology provides a completely unique environment on which these concepts are built.  Certainly they can be emulated or simulated, but it becomes difficult to imagine how an AI is supposed to experience something like fear if it can't die, or how an AI is supposed to experience things like hunger, loss, etc.  These are things that create the unique biological experience that gives rise to our emotions, behaviors, and cognitive processes.

    Maybe I'm missing something, but that seems to be the underlying problem as I see it.
    Mundus vult decipi
    SynapticNulship
Certainly they can be emulated or simulated, but it becomes difficult to imagine how an AI is supposed to experience something like fear if it can't die.
An organism that thinks it can die behaves exactly the same as one that thinks it can die but actually can't.

Counterargument: What if it finds out it really can't die, like some kind of Superman? Well, maybe that's an issue, maybe not. It seems to me that an architecture based on organisms that could die, paired with similar development, leads to a similar emotional structure; the later realization that it can't die, or that it's in a simulation, affects the higher-level psychological issues more.

Aside from that, real death matters mostly, as far as I can tell, in the evolution of the mind, i.e. dying too soon prevents reproduction and thus affects how the mind's architecture evolves.
     Maybe I'm missing something, but that seems to be the underlying problem as I see it. 
    What you are missing is very clear. Why you can't see it is a bit more difficult!  :)

I think your problem is semantic - you are looking for a way to define, say, "fear" and cannot get past seeing it as having to be fear of something. And then you complicate matters by insisting that the fear be meaningful.

It is perfectly true that biological brains are predisposed to entering states like fear because of their evolutionary origins. However, this does not mean that "meaningless" fear cannot exist; indeed, phobias are proof that such states are rather common.

But if the state of fear can be created by a biologically irrelevant trigger as if it were a real threat, why should the same state not be created artificially in a system which has no obvious (if anthropomorphic) inherent purpose?  Why shouldn't a non-biological machine be afraid of spiders?

Such emotional responses may very well be imposed by the programmer - just as accidents of upbringing may create irrational fears. So, to chase this idea into a corner, I have to assume it is not "fear" as an aggregate of specific biological fears that you find difficult to credit AI with, but fear in the abstract. Fear seems to come with a biological value judgement: it is unpleasant; it is (often) associated with possible death or damage to the organism.  However, the latter is only a plausible account created by the external observer; the terrified mouse trying to run from a cat has no such thought.

    So the question is, can an artificial system create the same feeling of fear in the absence of biological purpose? Phobias show that fear is decoupled from actual biological purpose and associated survival mechanisms; the mouse shows that it is decoupled from conscious realization of its evolutionary origin. Clearly, artificial fear would also be decoupled from any biological purpose, the most proximate purposes being those of the programmer. But if fear can be created by irrelevant triggers then all that is needed for AI to experience fear is a general-purpose fear response which can be plugged into any stimulus the programmer fancies.
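To make that concrete, here is a minimal sketch (plain Python; every name in it is invented purely for illustration) of a general-purpose fear response that can be bound to an arbitrary, biologically irrelevant trigger:

    # Illustrative sketch only: a generic "fear" state decoupled from any
    # biological purpose, which the programmer can bind to whatever stimulus
    # they fancy.
    class FearResponse:
        def __init__(self, decay=0.9):
            self.level = 0.0      # current fear level, 0..1
            self.decay = decay    # how quickly fear subsides per time step
            self.triggers = {}    # stimulus name -> how frightening it is

        def bind(self, stimulus, intensity):
            """Associate an arbitrary stimulus with this response."""
            self.triggers[stimulus] = intensity

        def perceive(self, stimulus):
            """Raise the fear level if a bound stimulus is perceived."""
            self.level = min(1.0, self.level + self.triggers.get(stimulus, 0.0))

        def step(self):
            """Let fear fade when nothing frightening is around."""
            self.level *= self.decay
            return self.level

    fear = FearResponse()
    fear.bind("spider", 0.8)      # a "phobia": a biologically irrelevant trigger
    fear.perceive("spider")
    print(fear.step())            # an elevated level that could drive avoidance

Whether such a state is actually felt is, of course, a separate question.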
     
All this applies to the appreciation of art, only more so: it is just as conscious, yet it is hard to see as a biological function gone wrong. At best the enjoyment of a Mozart aria may have its origins in biology, but it is now as far removed from any biological purpose as AI is from the stellar nucleosynthesis which produced the silicon for its chips.

    Of course all this is a big diversion from the hard problem of consciousness. Even with a fear response indistinguishable from that of a frightened mouse, there is precisely no way we can tell whether it is really conscious.  After all, we have no theory worth talking about as to how our own emotional states are conscious.

Best to give it the benefit of the doubt.
    vongehr
    Start wondering why you care about such nonsense.  Take the good parts, write your own insights for us to destroy.  The stuff that gets published is too easy.
    Actually, I think there's a "non-ridiculous" argument against higher-order consciousness without emotion.

We have one data point for higher-order consciousness: ourselves. There is no subset of human consciousness that exists without emotion. Given that, what's the justification for a hypothesis that consciousness can exist without emotion -- which is what most AI research is based upon? Even the lowest orders of consciousness exist only within biological organisms that have emotions.

If you examine the apparent evolutionary purpose of emotions, it seems clear that consciousness and emotions are interlinked. Sadness, happiness, fear, etc. can all be derived from biological imperatives: survival, reproduction, etc. Hormones, neurotransmitters, and similar biochemicals that contribute to emotional states are intrinsic to the human brain, and therefore must be intrinsic to consciousness. There's no evidence that this biological system can be reduced to component parts that can operate independently. Current AI research assumes that this is true.

    Don't misunderstand the point. My premise is not that conscious AI cannot exist because AI can't experience emotion. My premise is that conscious AI can exist only if it can experience emotion. This would require AI to have some kind of additional component that simulates the biochemical nature of the brain (e.g., artificial neurotransmitters that don't simply mimic electrical resistance). The brain is already an extremely complex system based on the staggering number of neurons and synapses alone; add the complexity of biochemical interactions and we have a completely different beast.

    SynapticNulship
Given that, what's the justification for a hypothesis that consciousness can exist without emotion -- which is what most AI research is based upon?
I wouldn't know. My AI research goes against the grain in many ways. You'll notice in this blog post I merely stated that AI could have emotions and affect and development, so those requirements for consciousness (if that premise is true) can still be met by AI.
    Gerhard Adam
    I guess one of the fundamental problems I have is that I haven't actually heard a definition of what this AI is supposed to be.  Any attempt to simply replicate a biological system sounds like it would merely be a simulation of that system.  As a result, one of my fundamental difficulties has been that [in my view] a "true" AI would have to be responsive to those things that are specific to the AI.  Emulating human behaviors or any other biological system would be suspect, since it couldn't actually experience those things.

    Therefore, in my view, this is analogous to creating a completely new species, with those emotions, etc. that are contextually relevant to the creature being built.  This seems to be the sticking point, because we don't actually know what the drivers are for any particular organism to possess the traits and behaviors that it does.  As long as AI simply seeks to copy existing systems, then it seems that it will also simply be creating simulations.

At the root of all this is that biological systems evolved their traits because those that failed to respond properly didn't survive to reproduce.  Consequently we find that populations evolved with the failures never having a presence in future generations.  In the case of robots or AI, we are left with the ability to adjust individuals [i.e. individual evolution, if you will].  However, this renders any changes suspect, since we would need to consider who the instigator of those changes is.  If it is simply an engineer, then we can never truly separate the final result from the engineer.  If it is to evolve on its own, then the problems of replication, etc. would need to be resolved such that success and failure follow a similar trajectory to that experienced by biological systems.

    Therefore, if we are truly interested in pursuing this avenue, then we would have to acknowledge that we can't know what might "evolve".  Presuming an intellect, or a particular cognitive process, or even emotions, is something that we carry over from biological systems, but there is no basis for assuming that a new "species" of machine should possess any of those traits. 

    This brings us full circle to the question of what intelligence is, and what an AI would be like.  If we assume that every species is as "intelligent" as it needs to be, then what would be the basis for our assuming that a machine "species" can or should achieve the goals we set for it?
    Mundus vult decipi
    Sadness, happiness, fear, etc. can all be derived from biological imperatives: survival, reproduction, etc.. Hormones, neurotransmitters, and similar biochemicals that contribute to emotional states are intrinsic to the human brain, and therefore must be intrinsic to consciousness. 
    There's no evidence that this biological system can be reduced to component parts that can operate independently.
    You just did.
     
    I can't help but put this discussion in the context of the series of reports, mostly from Robert Elwood's lab, that hermit crabs and lobsters can learn to avoid dark corners where they have been given electrical shocks, "and this proves that they are conscious".

This should be pretty simple to replicate in a mobile robot powered by a $100 board like an Arduino or Raspberry Pi. I hope somebody does this soon and cements the moniker of AI as "experimental philosophy".
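The conditioning logic itself is about as simple as it gets. A plain-Python sketch of just that learning loop (no motor or sensor code, and every name is invented for illustration) might look like this:

    import random

    # Illustrative simulation only: the "robot" starts out preferring dark
    # corners, gets shocked whenever it goes there, and learns to avoid them,
    # which is the behaviour reported for hermit crabs and lobsters.
    preference_dark = 0.8     # initial probability of choosing the dark corner
    LEARNING_RATE = 0.3

    for trial in range(20):
        went_dark = random.random() < preference_dark
        shocked = went_dark                           # shocks only happen in the dark
        if shocked:
            preference_dark *= (1.0 - LEARNING_RATE)  # punish the choice that got shocked
        print(f"trial {trial:2d}: dark={went_dark}, shocked={shocked}, "
              f"P(dark)={preference_dark:.2f}")

Over the 20 trials P(dark) drops steadily, i.e. the simulated robot "learns" to avoid the dark corner.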

    Or from the biological approach, those organisms have nervous systems that are simple enough that it should be possible to identify the specific cells or even genes that are "responsible for consciousness."

    Gerhard Adam
    Already been done with the worm C. elegans [about 300 neurons].  The point being we still have no clue of how any of it works together to produce anything.
    http://indianapublicmedia.org/amomentofscience/elegans-singularity/
    ...that hermit crabs and lobsters can learn to avoid dark corners where they have been given electrical shocks, "and this proves that they are conscious".
    How does learning equate to consciousness?  You can witness similar reactions in bacteria, and they don't even possess a nervous system.
    This should be pretty simple to replicate in a mobile robot powered by a $100 microprocessor...
    It's trivial to replicate, but it also doesn't mean anything.
    Mundus vult decipi
    This was done in 1949-ish by William Grey Walter. Two conditioned reflexes were added to a previously dumb robot.

It would be foolish to credit such a simple system with consciousness, and yet the intractability of the so-called "hard problem" leads to multiple schools of thought. Hard-line reductionists point to the impossibility of finding a definition of consciousness which is grounded in observable functions and therefore tend to dismiss the concept altogether - consciousness is just part of the illusion of self, they say. Others may merely assert it as a fundamental, irreducible fact. At the other extreme, none other than Fred Hoyle has opined that all information is conscious - even the single bit from an on/off thermostat is conscious.

What's it like to be a thermostat? Well, if you only have a single-bit mind then you are not going to be daydreaming about becoming a sophisticated electronic control box; your consciousness is that of a single bit. It probably isn't "like" very much at all. My personal opinion is that this is a matter of terminology - if you call your thermostat "conscious" then don't expect it to have feelings.

It's only in systems that are complicated enough to show animal-like behaviour that you can then start to ask whether they are really conscious or merely zombies. Unfortunately, animal-like behaviour is precisely what the "single-neuron" tortoises of Grey Walter did show :)
     
    Bonny Bonobo alias Brat
     Unfortunately, animal-like behaviour is precisely what the "single-neuron" tortoises of Grey Walter did show :)
Derek, knowing your penchant for turtles, I'm surprised you didn't mention Grey Walter's robot turtles too. I wonder which came first, his robot turtles or his robot tortoises, or were they maybe the same thing?
    My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http://www.science20.com/forums/medicine
    I wonder which came first his robot turtles or his robot tortoises or were they maybe the same thing?
    Either way it's Machina speculatrix all the way down :)
     


    Bonny Bonobo alias Brat
    Ha ha, very funny!
    My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http://www.science20.com/forums/medicine
    John Hasenkam
    Sadness, happiness, fear, etc. can all be derived from biological imperatives: survival, reproduction, etc.. Hormones, neurotransmitters, and similar biochemicals that contribute to emotional states are intrinsic to the human brain, and therefore must be intrinsic to consciousness.
Emotions are more than that. I can listen to a beautiful piece of music and be moved; it has nothing to do with survival but perhaps a great deal to do with how our auditory apparatus matures.

In an old text, The Evolution of Consciousness, Ornstein, IIRC, states that some paraplegics experience emotion differently and speculates this is because the adrenals are not subject to the same level of innervation, hence the bodily component of emotions is absent. Emotions are a whole-body experience. Even individuals vary widely in their emotional responses, so I fail to see why we must insist that emotions as they occur in us will somehow be manifest in AI. Thus ...

Gellhorn E, Cortell R, Feldman J. The autonomic basis of emotion. Science. 1940 Sep 27;92(2387):288-9.

Abstract

(1) It is shown that hypothalamic stimulation in cats, with faradic currents eliciting the syndrome of sham rage, produces after the elimination of the sympathetico-adrenal system a hypoglycemia when the vagi are intact. After bilateral vagotomy the stimulation results in a slight and delayed rise in blood sugar. (2) If, in cats in which, due to a sectioning of the spinal cord at the sixth cervical segment, the effect of central discharges on the sympathetico-adrenal system is eliminated, a rage response is elicited by a barking dog, it produces a fall in blood sugar. The sectioning of the vagi below the diaphragm abolishes this reaction. From these experiments it is concluded that the normal emotional process as well as the sham rage reaction is characterized by a simultaneous discharge over the vago-insulin and sympathetico-adrenal system. The latter predominates in the normal animal and masks the effects on the former.

    Are paraplegics less emotional than us? Perhaps. Less conscious? No.  
Emotions are visceral, whole-body responses. Emotions have broad physiological and neurological consequences. There is a change in the whole "state of being"; it is not just a feeling, it is a change in disposition towards the world. Emotions create a state of being that is hopefully optimal for addressing contingencies. What we call intelligence or cognition is about specific behaviors to address those contingencies, but emotions set the framework that optimises the instantiation of those behaviors rather than initiating those behaviors.


    Samuel I enjoy your explorations in this area. I was going to add further to your last post but got two paragraphs written before I realised how thoroughly lost I was. So I don't know whether to curse or bless you because this is a subject area that drives me crazy. But keep it up, me learn something! 






I'm not surprised it drives you mad. Let's cut to the chase. You most certainly can describe emotions by their externally observable correlates and design a system that mimics everything present in an animal. The only remaining issue is the hard problem, and there is no known way of deciding whether the feelings are real, i.e. conscious.
     
    End of subject. :)