    Are Emotional Structures The Foundation Of Intelligence?
    By Samuel Kenyon | January 9th 2013 01:20 AM | 33 comments
    It seems like all human babies go through the exact same intelligence growth program. Like clockwork. A lot of people have assumed that it really is a perfect program which is defined by genetics.

    Obviously something happens when a child grows. But surely that consists of minor environmental cues to the genetic program. Or does it?

    Consider if the "something happens as a child grows" might in fact be critical. And not just critical, but the major source of information. What exactly is that "nurture" part of nature vs. nurture?

    What if the nurturing is in fact the source of all conceptual knowledge, language, sense of self, and sense of reality?

    In the book The First Idea, Greenspan and Shanker claim that language and intelligence development in human children has a prerequisite of interactive emotions. Their evidence comes primarily from the treatment of abnormal children. Supposedly no human child can achieve anything like standard human intelligence or language without going through specific levels of interactive emotional experience (I introduced these levels in a previous blog post, Emotional Developmental Symbol Creation).



    Interactive emotional experience is not just a matter of developmental precedence. It's also the foundation of the architecture of the mind, at least as far as grounding symbols goes. This type of mental symbol--and having a lot of them generated during development--is required for meaning and language, according to Greenspan and Shanker.

    But why isn't it enough to just separate action from perception to get a symbol and ground it some other way? In other words, are all concepts grounded--if you go down far enough--with an emotionally-linked symbol?

    Greenspan and Shanker also point out the important development of conversation skills. Normal people like dialogue. They like reading it and they like hanging out and chatting with fellow humans. If you, gentle reader, do not like conversing with other humans, then you may want to consider how normal your upbringing really was. The human rhythmic capacity seems to be developed during infant-caregiver interactions. That's certainly easy to believe, but is it fundamentally tied into emotions? From their examples, it would seem that emotional interactions are the enabler. Likewise with planning and sequencing abilities. The key thing there, I think, is that emotional interactions enable the development--the mechanisms for planning and conversations--even if the concepts are eventually divorced from the childish emotional links.

    What of the basic mental symbols--the foundations of knowledge, understanding, ideas, imagination, and language? Are all of them linked to emotions, and specifically those emotions involved with infant-caregiver interactions? Or are the emotional interactions merely correlations? I suspect that given the evidence with abnormal children, perhaps the emotional grounding is at least one kind of grounding. It remains to be seen if that is the basis of all other symbols or is in fact parallel to other mechanisms.

    Comments

    vongehr
    What if the nurturing is in fact the source of all conceptual knowledge, language, sense of self, and sense of reality?
    Well, I don't see how you can ask this question meaningfully, I mean, how could it not be the source, especially given that you like to "insure" your point of view by saying that any system that comes out of the womb and is able to function fast (say bovine) has, or in the case of more intelligent AI systems would always have, some sort of "shortcut" inside the womb or whatever anyway, and thus the nurturing is always there because you will not stop until you find something that you can label at least as the effective "shortcut".  I am not seeing you presenting us with any background on which your questions make sense, or better, could make sense if answered in any other way than the way in which you feel the urge to interpret them.  Of course no system develops if you drop it into a vacuum - but that is not an interesting question.  I think you are somewhat on the right track with grounding and emotions being entwined, especially for humans, but the nurturing and cute baby part is getting you off that track.
    SynapticNulship

    Well, I don't see how you can ask this question meaningfully, I mean, how could it not be the source
    Generativists would not think it's the source, at least not for language. And then there is that problem of divorcing language from other parts of cognition.
    but the nurturing and cute baby part is getting you off that track.
    Ahh, but I specifically put no cute babies in this post. Perhaps you were imagining them?
    vongehr
    specifically those emotions involved with infant-caregiver interactions
    This is the cute baby strategy if I ever saw one, at least in my mind's imagination. ;-)
    SynapticNulship
    Would you feel better if I said interactions of the person who failed to use contraceptives with their monstrous spawn?
    vongehr
    No, I would feel better if you took this seriously as going back to what we discussed last time, like the egg-laying bovines or whatever, which you turned into space-traveling dino-aliens for whatever rhetorical reason.  The point is serious: you seem to make the quite unsubstantiated claim that evolution cannot lead to certain intelligent creatures.  I think that cows being able to see and walk almost immediately after birth already renders you wrong.  It is not enough to point to language, or that cows cannot immediately confirm a concept of self, for example.  I am surprised that an AI guy does this (although I appreciate it as a counterweight to unlimited optimism about cranking up computation speed).
    The point about cute babies:  This is how similar is often sold, like the feel-good about real humanity that a Chinese room cannot possibly be providing.  It usually accompanies weak arguments.
    John Hasenkam
    Samuel,
    1.
    No, I disagree with the idea. 
    2.
    Not sure but I wonder if lurking in your mind is the idea of Spearman's G - general intelligence. Not a great fan of that idea. "Intelligence" as an operational concept? ie. There are many ways of being intelligent rather than there being this entity "intelligence". 
    3.
    If you think emotions are that important for the development of intelligence I suggest you look into the life history of Paul Dirac. 
    4. 
    Emotions - bad concept for me. Too ill-defined both neurologically and in behavior. That we feel does not mean we do not cogitate. Emotions are a form of cognition, in my view an ancient broad based strategy for addressing environmental contingencies. 

    SynapticNulship
    2.
    Not sure but I wonder if lurking in your mind is the idea of Spearman's G - general intelligence.
    Nope.
     Not a great fan of that idea. "Intelligence" as an operational concept? ie. There are many ways of being intelligent rather than there being this entity "intelligence".
    I agree. I think we are both more in the Howard Gardner multiple intelligences camp.

    The questions I'm asking are about how mental symbols are physically grounded and the mental machinery that is constructed as a human (or other animal) develops.
    3.
    If you think emotions are that important for the development of intelligence I suggest you look into the life history of Paul Dirac.
    Well perhaps something to think about is whether autistic people who eventually learn to interact with others somehow and communicate (via writing or whatever) developed their internal thinking powers at the same time as the social / communication improvements. What you would need is an example of someone who was a genius before developing any kind of communication / interaction abilities. That is difficult of course, because if they can't communicate enough like the normal range of humans we will not be able to guess what they can do and what they are thinking.
    4. 
    Emotions - bad concept for me. Too ill-defined both neurologically and in behavior. That we feel does not mean we do not cogitate. Emotions are a form of cognition, in my view an ancient broad based strategy for addressing environmental contingencies.
    Yes, an ancient broad based strategy. And wouldn't new layers of cognition be based on that ancient system?
    UvaE
    After clearing about a dozen spam messages among the responses, I must say that if intelligence is not rooted in emotion, spam-writing is surely rooted in a fruitless stubbornness...
    Neuroscience has shown that brain structures such as the amygdala and the hippocampus are responsible for both emotional behaviors and memory consolidation, but of course that's not the same as the daring claim that "emotional structures are the foundation of intelligence".

    Memory consolidation itself is only one of many facets of intelligence. We're nowhere near understanding the neurological and biochemical mechanism involved when a New Caledonian crow or primate uses a tool or how we understand concepts, let alone come up with them.
    SynapticNulship
    Thanks for the comment Enrico. I seem to attract swarms of spambots.

    We're nowhere near understanding the neurological and biochemical mechanism involved when a New Caledonian crow or primate uses a tool or how we understand concepts, let alone come up with them.
    That is the kind of statement which has a philosophical underpinning of drawing a border around the brain and ignoring everything outside of that border when dealing with minds. The interactionists and externalists and whatever are saying wait a minute--what about how the brain interacts with the rest of the body and other bodies and other things in the environment? Is it explainable as a dynamic system? And how does the stupid mind of a fetus transform into an intelligent adult mind? And does ontogeny recapitulate evolution at all and in what ways? Even in the crow example, you clearly have some kind of proto-culture and learning in ontogeny space (as opposed to phylogeny) going on yet you completely ignore that and ask first about neurology and biochemistry.
    UvaE
    That is the kind of statement which has a philosophical underpinning of drawing a border around the brain and ignoring everything outside of that border when dealing with minds. 
    Of course there's interplay between mind and environment, including emotional experiences. But there's no evidence that emotion is the foundation of intelligence.

    Gerhard Adam
    But there's no evidence that emotion is the foundation of intelligence.
    But what else would be?
    Mundus vult decipi
    vongehr
    philosophical underpinning of drawing a border around the brain and ignoring everything outside of that border when dealing with minds.
    Going from unconscious sleep to REM, a physical system somehow supplied, a Boltzmann brain without childhood, suddenly flies around in its own world.  We will make such AI.  But will we accept the shuffled memory contents as a dreamt experience if it does not allow the interpretation of a body in an environment that knows how to interact in there?  So "mind" is not so much about engineering a childhood or emotion practically, but about whether we accept the dream as a story. The mind is only in other minds.
    MikeCrow
    I think the first intelligent AIs will have to be "raised". I don't know that it will require emotions, but we might end up wishing we'd added a way for them to emotionally bond to humans if we don't.
    Never is a long time.
    Thor Russell
    I agree. However there is another route to "AI" at the other extreme (i.e. we don't understand anything) where Kurzweil or someone similar manages to scan their brain into digital form, and then it goes crazy copying itself Agent Smith style and we quickly have more practical problems to deal with than the theory of grounding ... 
    There are of course multiple other scenarios many in between these two extremes, but there is certainly no guarantee we will actually understand "AI" the way we like by the time it happens.

    Thor Russell
    Thor Russell
    This looks a bit like putting the grounding problem elsewhere to me. How are emotions grounded then, and what exactly are they? Don't you need a watertight way to tell real from fake/ungrounded emotions (i.e. an AI just printing "I am sad I am sad I am sad") for this to be useful, even if it were correct?
    And of course, where do you draw the line around intelligence--is it the symbolic thought your title implies? Insects don't seem to have emotions, yet could be said to be more intelligent than many computer systems. If your claim is correct, what limit can intelligence reach before such nurturing is needed to progress further? (I am talking generally, not just observing earth-based life forms.)
    Thor Russell
    Gerhard Adam
    Don't you need a watertight way to tell real from fake/ungrounded emotions
    I'm not sure I see the relevance in requiring objective assessment of emotions.  Therefore it is irrelevant whether emotions are "fake".
    Insects don't seem to have emotions, yet could be said to be more intelligent than many computer systems.
    I suspect that part of the problem is that we tend to consider emotions only from the human perspective, instead of considering the baser more elemental forms it may take.

    In that respect I find myself considering things like "motivation" as perhaps being a precursor to more specific emotional states.  Basically there are a lot of mental states that don't fit particularly into the definition of intelligence or emotions.  What do we mean when we say we feel bad [i.e. as in sick] or are in a "bad mood"?  or when we feel good?  In other words, while being sick may entail specific physiological reactions, our mind experiences something that "feels bad". 
    Mundus vult decipi
    Thor Russell
    To me it seems like his theory requires objective assessment of emotions. It seems he is saying an AI would require real emotions to develop, so surely there then needs to be a way to determine whether they are real.
    Thor Russell
    Gerhard Adam
    I think that treats emotions like an engineering problem.  How do we know whether other people experience "real" emotions?  We presume it because their behavior reflects comparable attitudes and feelings.  We recognize when someone appears to behave unemotionally, because we are linked in a common cause regarding our interpretations of events and our expected responses to those events.

    However, if we think we can apply some objective standard, then what would it be?  Should human emotional experience be the criteria for assessing a machine?  Doesn't that put us in the same position as anthropomorphizing the behavior of animals?

    It strikes me that this is precisely the problem we have in assessing intelligence in animals, because we can't objectively determine what it means to be that animal, and any interpretation we attempt to apply quickly becomes subject to criticism as utilizing a human bias. 

    After all, what would it mean to an AI to be happy?  Regardless of how "real" it is, what if we couldn't recognize it?  Doesn't that risk our simply dismissing some phenomenon simply because we can't relate it back to our own experience?
    Mundus vult decipi
    Thor Russell
    You don't seem to have understood my point. If we accept the argument you just presented, then it means Samuel's theory is wrong.
    If emotions cannot be defined or recognized reliably or objectively, then they cannot be the basis of intelligence. The big box saying "interactive emotional experience" must be a well-defined and measurable thing, or it cannot give rise to a clearly defined intelligence. You cannot have a subjective and impossible-to-define thing being the basis of a concrete and objectively measurable quantity; that just doesn't make sense.
    Thor Russell
    Gerhard Adam
    Couldn't the problem be just as readily described as your presumption that intelligence is "concrete and objectively" measurable?  After all, if you could do that, that would go a long way to answering a great many difficult questions.
    Mundus vult decipi
    Thor Russell
    No, if you think intelligence is not definable or measurable and neither are emotions, then you simply cannot say anything about either of them, or anything worthwhile on the topic. Claiming undefinable quantity "a" is related to undefinable quantity "b" is not science, philosophy, or anything knowledge-related at all. Nothing productive could come of it. It's clear that there is some agreement on what intelligence is to allow discussion, so there must be similar agreement on emotions to relate them to intelligence in any sensible way.

    Thor Russell
    Gerhard Adam
     It's clear that there is some agreement on what intelligence is to allow discussion...
    That's the problem though, isn't it?  We can agree on something for discussion purposes without having anything realistically defined that actually represents the thing being discussed.  This is precisely why so many discussions about intelligence go in circles: because we don't have a precise definition regarding what we mean, therefore we can't have a precise definition regarding the role emotions would play in achieving it.

    Essentially if we don't know what "it" is, then we can't reasonably argue anything about the requirements to attain "it".

    Any definition that is agreed upon will create the necessary boundary conditions against which any other criteria can be evaluated.  Whether that ultimately represents reality is another matter.
    Mundus vult decipi
    SynapticNulship
    It is an engineering problem. And the engineer/programmer/scientist has special access to the inner workings because he/she designed it and has various "debugging" interfaces (both at the programming language level and preferably also in the various mental abstractions).

    There are two issues, however, which I think can be handled:
    1. The hermeneutic hall of mirrors: The designer must not name things or assume any biological equivalence unless there is sufficient reason to believe that the software/hardware item at hand really is similar enough to get that label, and in some cases it should be made explicit how that thing is similar or of the same kind.

    For example, a program printing that it is sad is useless, at least from the outside. So let's say you have made sure the Sadness module is in fact active and printing "I am sad". That's still useless because you can't just name a module Sadness and assume that IS sadness. So this means that at first a lot of whatever models or programs you make, and the data / runtime states, will have arbitrary identifications ("Foobar-42"), which only get special labels like "emotions" or "planning" or "thinking" or "consciousness" or any other more specific label like "sad because it lost its toy" once you are really sure the AI is doing that thing based on internal and external contexts and histories.

    2. Introspection / interface design: A cognitive system, like all software systems that become a bit complex, will not necessarily be easy to figure out during runtime. So the introspection has to be designed in there from the start so the scientist has the special access and can actually have a fighting chance of making some kind of sense from that data.
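
    To make points 1 and 2 concrete, here is a minimal sketch of what I have in mind (the Module and Introspector names, and everything in the snippet, are hypothetical illustrations, not any real system): components start out with arbitrary identifiers and log their own activity, and a human-readable label only gets attached later, when the evidence warrants it.

        from dataclasses import dataclass, field

        @dataclass
        class Module:
            uid: str                       # arbitrary identifier, e.g. "Foobar-42"
            label: str = ""                # stays empty until evidence justifies a name
            trace: list = field(default_factory=list)

            def step(self, inputs):
                # record every activation so the designer can inspect it later
                output = self.compute(inputs)
                self.trace.append({"inputs": inputs, "output": output})
                return output

            def compute(self, inputs):
                raise NotImplementedError  # concrete modules define their own behavior

        class Introspector:
            # the "debugging" interface, designed in from the start
            def __init__(self, modules):
                self.modules = {m.uid: m for m in modules}

            def history(self, uid):
                return self.modules[uid].trace

            def promote_label(self, uid, label, evidence_summary):
                # only attach a label like "sadness" once behavior and internal
                # state justify it, and keep the supporting evidence alongside
                module = self.modules[uid]
                module.label = label
                module.trace.append({"label_evidence": evidence_summary})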
    Gerhard Adam
    ... any other more specific label like "sad because it lost its toy" once you are really sure the AI is doing that thing based on internal and external contexts and histories.
    I agree with your reasoning, but my question keeps coming back to who is setting up this value system to create the internal and external contexts and histories.  If it is external to the AI, then how can you ever be sure that something is really being experienced versus simply being simulated because that's how the AI was designed.

    More specifically, we can understand how a child can get hurt by falling, and cry from the pain.  However, no matter how carefully we designed the AI, such a situation would be absurd, because it couldn't feel pain in a meaningful manner.  So, to react in the same fashion would be a false emotion.

    Yet, we are simultaneously caught in empathetic situations because we can relate to the feelings of others, because of our own experience in comparable situations.  Again, how could this be addressed in an AI, without it simply being a contrived experience without a shared biology?


    Mundus vult decipi
    Thor Russell
    OK fair enough, you make some judgement on the labels/emotions etc based on both how it behaves and what is going on in the inside. Introspection is also pretty important, even if you could do the biological equivalent of just copying an AI like in the airplane example, you would want to insert debugging/tracing options.
    A point about this approach where you mimic biology is that if done well enough it can't fail, just like an atomic copy of an aircraft couldn't fail. However, what would it prove? You get a successful AI that includes emotional structures, as you must if you copy biology sufficiently well. However we already know that artificial/biological intelligence can develop that way. To help answer the question of whether emotions are required or even part of the process, you would need to do a lot more, taking them out and seeing if it still worked and experimenting with all sorts of other ways to ground things.
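
    To make that concrete, an ablation-style loop over the copied system might look something like this sketch (the component names are made up and the success criterion is rigged purely for illustration; a real study would have to measure it):

        def develops_normally(components):
            # stand-in for actually "raising" the copied system and measuring
            # the outcome; here the criterion is hard-coded for illustration
            return "emotional_grounding" in components

        def ablation_study(all_components):
            results = {}
            for removed in all_components:
                remaining = [c for c in all_components if c != removed]
                results[removed] = develops_normally(remaining)
            return results

        print(ablation_study(["emotional_grounding", "sensorimotor_loop", "memory"]))
        # only the "emotional_grounding" ablation fails here, by construction;
        # the interesting question is whether a real system would behave that way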


    Also back to the aircraft analogy, and a bit off topic: even if that model is correct, to make an AI you still need to know a lot more about how the nuts and bolts of neuronal connections work and adapt their weights to learn, etc. Otherwise it's like having a perfect model of the aircraft but only wood, not metal, to build it out of.
    Thor Russell
    SynapticNulship
    To help answer the question of whether emotions are required or even part of the process, you would need to do a lot more, taking them out and seeing if it still worked and experimenting with all sorts of other ways to ground things.
    Indeed. You may notice that the theory in this blog post is not mine; it's from the book The First Idea. It is just one version of emotional grounding, and emotional grounding is one of the three major kinds I am looking at right now (not sure if there are others yet).
    The first point is quite an interesting one... What do studies say about either physical or cognitive manifestations of emotions? Can emotions be described as algorithms, affecting the mind's logical process one way or another? For example, an algorithm for "missing someone" would be close to recalling the mind's experience with that "someone" and factoring in "what would that someone have done, based on our observations of it?" into any given information processing task... Also, emotions should be very close to imagination, probably, used "at random" in instances when the mind is not given any particular task by its environment... But then, should the mind even distinguish between "self" and "environment"? Should it be aware of its physical carrier?...

    SynapticNulship
    Can emotions be described as algorithms

    Based on the philosophy that mind is information, and the related philosophy that the mind is computational, emotions are information--either programs and/or data used by other programs.

    So they may be describable by algorithms, although "algorithms" are of course not the only form of program or of thinking about programs. For instance, it might take a software architecture and/or interfaces and/or systems perspective(s).
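
    A toy illustration of the "data used by other programs" reading (entirely my own invention for this comment, not from the book): an emotional state as a small piece of data produced by one process and read by another, here biasing a planner's choice.

        affect_state = {"valence": -0.6, "arousal": 0.8}   # data produced by some appraisal process

        def plan(options, affect):
            # another program consumes that data: in this toy scoring scheme,
            # negative, high-arousal states penalize risky options more heavily
            def score(option):
                risk_penalty = option["risk"] * (affect["arousal"] - affect["valence"])
                return option["reward"] - risk_penalty
            return max(options, key=score)

        options = [{"name": "explore", "reward": 5, "risk": 4},
                   {"name": "retreat", "reward": 1, "risk": 0}]
        print(plan(options, affect_state)["name"])   # -> "retreat" under this affect state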

     affecting mind's logical process one way or another
    You have a built-in premise there which may be unsound. What is the "logical process"? Why is there only one? Isn't all logic contextual? Why would you assume that "the logical process" exists and is the main thread?
    SynapticNulship
    Insects don't seem to have emotions, yet could be said to be more intelligent than many computer systems.
    I suspect that part of the problem is that we tend to consider emotions only from the human perspective, instead of considering the baser more elemental forms it may take.

    Right. All organisms keep themselves alive. So we get into homeostasis. This then expands into the autonomic nervous system and then, perhaps, expands into the basis of primitive emotions and some motivations.
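
    As a very rough sketch of that expansion (the numbers and names below are made up for illustration): a regulated variable drifting away from its set point produces an error signal, and that signal is about the most primitive "drive" you can get--something later layers could elaborate into motivations and emotions.

        def drive(value, set_point, gain=1.0):
            # signed urgency signal proportional to the homeostatic error
            return gain * (set_point - value)

        energy_set_point = 1.0
        energy = 0.25                      # the organism is running low
        hunger = drive(energy, energy_set_point)
        print(hunger)                      # 0.75 -> positive signal: seek food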
    Gerhard Adam
    It seems like there are two questions that need to be answered more specifically.

    (1).  It appears that intelligence is too ill-defined and is often conflated with learning or knowledge.  As a result, it is difficult to consider what is meant by a foundation in any sense.

    (2)  I'm also not sure what autism has to do with anything, since that merely reflects the individual's ability to interact with others and tells us nothing about the actual emotional state of the autistic individual.
    Mundus vult decipi
    SynapticNulship
    I'm also not sure what autism has to do with anything
    Greenspan's treatment of autistic and similar children supposedly had great success by using clever ways to hook them into emotional interactions. One way is to do something like blocking a door the autistic kid wants to go through, to annoy him and start breaking his constant repetition of a single word or the obsessive lining up of toys (interestingly, that is something I did as a child) or whatever. Supposedly these emotional interactions can then grow in complexity and the child starts being able to generate meaningful sentences or whatever.

    And then there is a trick of jumpstarting higher abstract thinking by starting off with something the child likes--like pizza--and explaining concepts in terms of what if I take your pizza and how much pizza will you give me to do this and so on (the example given was explaining taxes to a child with an apparent learning disorder).
    Gerhard Adam
    I found this comment in another post and it seems to hint at the issue of the emotions being considered.
    ...ANGELINA was able to create new game mechanics all by itself. It found useful ones, which is a lot more than a random level generator, it modifies code and then tests it and evaluates it and suggests new mechanics iteratively.
    http://www.science20.com/science_20/can_ai_write_video_games-100388
    Basically the question rests on what we mean by "evaluate" in this context.  Since that requires some value judgement, we can legitimately question whether the criteria is that something is more fun, or that it is exciting, or that it is boring, etc.  If so, then how does an AI evaluate something for which it experiences no emotion?
    Mundus vult decipi
    Thanks for the pointer to the book!

    In my response to this post, I wasn't above using a cute baby picture...