    Infants, Monkeys, Love, And AI
    By Samuel Kenyon | December 27th 2012 09:30 PM
    About Samuel

    Software engineer, AI researcher, interaction designer (IxD), actor, writer, atheist transhumanist. My blog will attempt to synthesize concepts...

    Perhaps you have seen pictures or videos from the 1960s of rhesus monkey babies clinging to inanimate surrogate mothers.



    These experiments were conducted by Harry Harlow, who eventually went against the psychology mainstream to demonstrate that love--namely caregiver-baby affection--is required for healthy development.
    Dr. Harlow created inanimate surrogate mothers for the rhesus infants from wire and wood. Each infant became attached to its particular mother, recognizing its unique face and preferring it above all others. Harlow next chose to investigate if the infants had a preference for bare wire mothers or cloth covered mothers. For this experiment he presented the infants with a cloth mother and a wire mother under two conditions. In one situation, the wire mother held a bottle with food and the cloth mother held no food, and in the other, the cloth mother held the bottle and the wire mother had nothing.

    Overwhelmingly, the infant macaques preferred spending their time clinging to the cloth mother. Even when only the wire mother could provide nourishment, the monkeys visited her only to feed. Harlow concluded that there was much more to the mother/infant relationship than milk and that this “contact comfort” was essential to the psychological development and health of infant monkeys and children. [1]

    According to Stuart G. Shanker [2], various primates reach levels of functional-emotional development similar to the first 2-3 levels (out of 9) that humans accomplish. Perhaps part of the difference is that the infancy period is much longer for humans.
    Although a baby rhesus doesn't express its positive affects with the same sorts of wide joyful smiles that we see in human infants between the ages of two and five months, in other respects it behaves in a manner similar to that of a human infant. The rhesus baby spends lots of time snuggling into its mother's body or looking keenly at her face. It visibly relaxes while being rocked, and vocalizes happily when the mother plays with it. We can even see the baby rhythmically moving its arms and legs and vocalizing in time to its caregiver's movements and vocalizations.

    Shanker said this about Harlow's experiments:
    Although it was clear that the infants were deriving great comfort from the cloth-covered surrogates, they still suffered from striking social and emotional disorders.

    One might interject here: Well, so what? Who cares about social and emotional disorders--aside from, say, gunshot victims? What about intelligence? What about self-awareness? The thing is, though, that intelligence and possibly even the capacity for basic symbolic thought--ideas--are developed via emotions and social interactions.

    Back to Shanker on Harlow's monkeys:
    They would rock back and forth to soothe themselves, stare into space, and compulsively suck their thumbs. Worse still, the appearance of other monkeys would startle them and cause them to stare at the floor of the cage.

    At some point Harlow made a modification to the surrogate mothers--he set them in motion.
    What is so interesting about the latter modification is that, in addition to stimulating the rhythmicity of the infant's movements, these swinging surrogates may also have started to stimulate the infant's capacity for emotional communication...Harlow's team may have inadvertently discovered just how important such interaction is--even the caricature of interaction that occurred with the moving surrogate--for enabling a rhesus infant to develop the capacity to manage its way in a troop.

    In 1983 Ed Tronick came up with the Still-Face Paradigm to test what happens when the human mother-infant interaction is deliberately broken. According to Deborah Blum in her book Love at Goon Park [3]:
    It occurred to him that the I-smile-you-smile-back kind of relationship could be the basis of an interesting experiment. It wasn't the physical smile that interested him so much. It was what it represented--the give and give back between mother and child...

    He and a colleague, Jeffrey Cohn, asked the mothers of three-month-olds simply to go blank for a few minutes while looking at their children...The mother had to present a face frozen into neutrality. No anger or threat. No humor or love.

    Infants almost instantly notice the change and try to remedy the situation. They attempt to regain the mother's attention.
    When a mother still refused to respond, babies tried self-comfort. They sucked their thumbs. They looked away. Then the babies tried again, just to see a little response. They'd reach for their best tools to engage their mothers...But after a while, confronted with only that blank face, each child stopped trying.

    So primates like monkeys and humans require the emotional caregiver interaction period as a necessary part of mental development.

    If one wanted to make an AI robot that was similar in intelligence to a primate, it would have to have the equivalent of child development. And the emotional interaction stages are inherently part of that.

    One might argue that an AI robot could be built to be equivalent to an adult primate. In other words, just shortcut the messy biological ontogeny.

    However, that shortcut may still require at least one good run of a robot from infant to adult. Then that "gold" example system can be cloned, as long as the robots are identical and live in very similar environments and societies.
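
    To make the "one good developmental run, then clone it" idea concrete, here is a minimal Python sketch. Everything in it--the stage names, the interaction threshold, the class itself--is hypothetical scaffolding for illustration, not a proposal for an actual architecture.

        import copy

        class DevelopmentalAgent:
            """A hypothetical robot mind that has to be grown through caregiver interaction."""

            # Assumed stage names, loosely inspired by functional-emotional development levels.
            STAGES = ["attachment", "shared_attention", "emotional_signaling", "symbolic_thought"]

            def __init__(self):
                self.stage = 0
                self.experience = {name: [] for name in self.STAGES}

            def interact(self, caregiver_signal):
                # Stand-in for months of contingent emotional interaction with a caregiver.
                self.experience[self.STAGES[self.stage]].append(caregiver_signal)
                enough = len(self.experience[self.STAGES[self.stage]]) >= 1000  # arbitrary threshold
                if enough and self.stage < len(self.STAGES) - 1:
                    self.stage += 1  # later stages only unlock after the earlier ones

            def is_adult(self):
                return self.stage == len(self.STAGES) - 1

        def clone_gold_agent(gold, count):
            """Copy one fully developed ("gold") agent into identical robot bodies."""
            assert gold.is_adult(), "the gold run has to finish before cloning makes sense"
            return [copy.deepcopy(gold) for _ in range(count)]

    The point of the sketch is only that the expensive part is the single developmental run; the copy at the end is cheap, but it presupposes identical robots dropped into very similar environments and societies.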



    The golden AI child may be difficult to create, however, especially if you are trying to shortcut development. Here is a crude metaphor:

    Imagine you decided to make an "artificial" airplane. You have discovered the primitive building block of airplanes: the molecule. So all you have to do to make an artificial airplane is scan every single molecule in an example airplane, and then carefully assemble all those trillions of molecules into exactly the same structures and positions.

    I don't know about you, but I am doubtful that airplane would ever be finished, let alone finished correctly. And note that this takes no account of the interactions of dynamic systems. Even if some plane-like object were in fact generated, I would not volunteer to be the first test pilot.

    Of course, you could decide that there are lots of different building blocks at a larger scale. So then you figure out how those mere 6 million parts work, and the general interactions of the parts and the system, and then you start fabricating copies of the important parts. You have to create various factories, recursively, in order to create all these parts. Eventually you get them working as a full system. You may in fact have made a working airplane that is similar to the example. But you basically just recreated the process that the original used. So you didn't shortcut development after all.
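
    To see why the molecule-level copy is hopeless, here is some back-of-envelope arithmetic. The counts and rates are assumptions picked only to show the gap in scale, not measurements.

        SECONDS_PER_YEAR = 60 * 60 * 24 * 365

        # Strategy 1: molecule-by-molecule copy.
        molecules = 1e30          # assumed order of magnitude for a large airframe
        placement_rate = 1e9      # assumed molecules scanned and placed per second
        copy_years = molecules / placement_rate / SECONDS_PER_YEAR

        # Strategy 2: rebuild from functional parts.
        parts = 6e6               # the ~6 million parts mentioned above
        parts_per_hour = 100      # assumed assembly rate for a well-tooled effort
        rebuild_years = parts / parts_per_hour / (24 * 365)

        print(f"molecule-level copy: ~{copy_years:.1e} years")    # on the order of 1e13 years
        print(f"part-level rebuild:  ~{rebuild_years:.0f} years")  # a handful of years

    Even with an absurdly generous copier, the molecule route takes longer than the age of the universe, while the part-level route is feasible precisely because it recreates the original manufacturing process.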

    References
    [1] Wikipedia, "Harry Harlow: Monkey studies"

    [2] Greenspan, S.I. & Shanker, S.G., The First Idea: How Symbols, Language, and Intelligence Evolved from Our Primate Ancestors to Modern Humans, Da Capo Press, 2004.

    [3] Blum, D., Love at Goon Park: Harry Harlow and the Science of Affection, 2nd Ed., Basic Books, 2011.

    Image credits:
    Nina Leen
    Robert J. Corley

    Comments

    Gerhard Adam
    You make a great point about emotions.  While I don't know how this might specifically relate to your point, it has been my view that emotions are part of our cognitive priority system [at least in part].  In other words, it is emotional context that allows us to differentiate between thoughts/ideas that are more important.  It helps us prioritize what we remember, what we pay attention to, and how we evaluate decisions in light of thousands of inputs.

    Certainly emotions can play a larger role in helping reinforce our social groups, but I still believe that there is some degree of emotional development in any creature that has sufficient cognition to have to make priority-based decisions and information assessments.

    An example I've considered: imagine driving a car while being aware of maintaining your direction on the road, being aware that the sky is blue, the grass is green, the sounds of the road, and a child running into the middle of the road .... all with exactly the same level of awareness and importance, because there is no emotional connection to any of these thoughts beyond the fact that we are aware of them as bits of information.  My point is that it is the emotional context that allows us to determine which of these pieces of information are important and require attention versus those that we can ignore. 

    Side note: this has always been what I suspected one of the major problems with AI is, because all pieces of information coming into a system are equal.  Certainly one can create circuits that may be able to filter out irrelevancies, but it seems like that would have to be something that is arbitrarily decided, rather than what would be available if a system could assess a real "value" to the information as provided by emotions.
    Mundus vult decipi
    SynapticNulship
    Well it wasn't my point but we can sort of link it back. Decision making is probably happening a lot under the hood, not just for the entire organism. For instance arbitration of some small mental network--what choice is made in that sub-system out of the possible choices at any given time? Emotions seem like they might be involved in a lot of those choices. It would seem that most conscious decisions we make have at best a small context outside of which there's no way to make any kind of logical or rational or "correct" decision...merely arbitrary.

    To head back towards infants on this topic, Minsky has suggested in the past (Society of Mind) that the sudden mood swings of an infant are due to global takeovers between various "agencies". So the infant starts with proto-specialist simple states (contentment, hunger, sleepiness, play, affection, etc.) and just wholesale switches between them. As it grows, however, the agencies become intertwined.
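    A toy way to picture that wholesale switching (the drive names and the winner-take-all rule here are my own simplification, not anything from Society of Mind):

        # Toy model of proto-specialist agencies: whichever drive is currently
        # strongest takes over the infant's whole behavior, with no blending.
        drives = {"contentment": 0.2, "hunger": 0.7, "sleepiness": 0.1, "play": 0.4, "affection": 0.3}

        def active_agency(drives):
            return max(drives, key=drives.get)

        print(active_agency(drives))  # -> "hunger", until another drive overtakes it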
    Gerhard Adam
    So the infant starts with proto-specialist simple states (contentment, hunger, sleepiness, play, affection, etc.) and just wholesale switches between them. As it grows, however, the agencies become intertwined.
    This suggests that these early states are the result of "mapping" these emotional states to basic biological needs, so that a simple "reward-punishment" system is being established which becomes more refined.  This is one of the reasons why I've maintained that intelligence is something that is intimately linked to the mapping of the biological organism, and that without that context [i.e. of what's important to help me feel good and avoid unpleasantness] it is difficult to argue that intelligence is something that can be algorithmically derived.

    I'm not suggesting that this is your approach, but it is something that seems to perpetually surface when AI is discussed.
    Mundus vult decipi
    SynapticNulship
    I hear ya.

    I actually started hacking together a cognitive architecture in 2004 (Biomimetic Emotional Learning Agents) that was supposed to have homeostasis and innate survival drives as the underpinnings, including a priority system on top of which I proposed higher emotions might eventually be developed. It would also have to go through the equivalent of both phylogenetic learning and ontogenetic learning.
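    The flavor of it, as a sketch (this is not the actual BELA code, just an illustration of homeostatic drives feeding a priority system; the set points and state values are made up):

        # Each drive tracks the error between an innate set point and the current
        # internal state; the priority system attends to whichever error is largest.
        set_points = {"energy": 1.0, "temperature": 0.5, "social_contact": 0.8}
        state      = {"energy": 0.4, "temperature": 0.5, "social_contact": 0.1}

        def top_priority(state, set_points):
            errors = {k: abs(set_points[k] - state[k]) for k in set_points}
            return max(errors, key=errors.get)

        print(top_priority(state, set_points))  # -> "social_contact"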
    vongehr
    This argues that evolution on any planet cannot lead to intelligent systems with social structure and technology and all that via animals that just dump their eggs into the ground, say.  Sure, such is unlikely (long "ontogeny" allows learning, which greatly increases adaptation), but this technicality is far from a 'social-emotions to self awareness' (for example) proof.  So, it sounds all lovely, but I don't buy it.

    BTW: It sounds lovely, and sure, making primate-like AI may help us understand primates better, but in case you want to do that because you believe that such robots will be somehow nice to us, please drop everything you are doing right now and first study primates, for example human history. 
    SynapticNulship
    This argues that evolution on any planet cannot lead to intelligent systems with social structure and technology and all that via animals that just dump their eggs into the ground, say.  Sure, such is unlikely (long gestation period allows learning, which greatly increases adaptation), but this technicality is far from a 'social-emotions to self awareness' (for example) proof.  So, it sounds all lovely, but I don't buy it.

    Not necessarily...the sentient dinosaur aliens in flying saucers might develop via some other messy dynamic mechanism. And maybe not use the whole genetic reproduction system. Maybe they just clone each other and the egg is actually the size of an adult. No growth, no unfolding and self organization of structures, no need to recapitulate any evolution.

     but in case you want to do that because you believe that such robots will be somehow nice to us, please drop everything you are doing right now and first study primates, for example human history.
    Robots being "nice" to "us" doesn't make any sense. Nice is contextual.
    vongehr
    via some other messy dynamic mechanism.
    Precisely
    And maybe not use the whole genetic reproduction system.
    Wait - what?  I meant to put into doubt your "So primates like monkeys and humans require the emotional caregiver interaction ... to make an AI robot that was similar in intelligence to a primate, it would have to have the equivalent of child development."

    SynapticNulship
    Your doubt is valid. However, making an alternative to primate intelligence requires a shortcut (or a long cut, perhaps).

    If it's totally alien, then it won't be "like" a primate. Unless you define "like" as having any kind of technology and social organization. Which I'm not necessarily against. When I wrote this blog, I was thinking more about the internals of the mind in combination with environment and social interaction.
    vongehr
    I was thinking more about the internals of the mind in combination with environment and social interaction.
    Yes, precisely, that is why I put in the egg layers who don't care about wire mothers being wrapped in carpet.  Your article leaves the impression as if you claim that such cannot have mind or social interaction or intelligence, because the emotions are not sufficiently taken care of (at least via short/long cut), and if I say I don't buy that, you seem to hide behind "primate-like" meaning perhaps just "having precisely our emotions" or so.  But you wrote
    What about intelligence? What about self-awareness? The thing is, though, that intelligence and possibly even the capacity for basic symbolic thought--ideas--are developed via emotions and social interactions.
    True for primates, very suspicious when it comes to general intelligence (e.g. AI), but of course, you can protect its truth by fudging the definitions of "emotion" and "social interaction" ... .  Did you ever have the chance to play around with drugs to the extent of switching off emotions?  If you do not simply include color, for example, as being an emotion, i.e. identify awareness with emotion, awareness does not require emotion.
    SynapticNulship
    You seem to be skipping the whole development part of it. Sure, adults can suppress emotions. But how did the architecture of their mind get to that point? And what happens if they permanently suppress emotions?
    vongehr
    You seem to be skipping the whole development part of it.
    You seem to say that grounding happens via behavior (remembered interaction), and if that did not happen via actual childhood or at least evolutionary history selecting (alien dino egg), anything that replaces that (say hardwired integrated circuit copy of the crucial part of one developed adult or even just a network that we made from scratch) is by definition a shortcut of that development.  Also, "emotions" can be defined such that they are always found in there.  I can agree with that, but would still reject any slogan like "intelligent mind needs social emotions", because it will be widely misunderstood.  I don't want to kill no conscious AI saying 'look, I know you do everything better than I can and you claim to be aware, but I saw that you did not care about that cloth wrapped wire feeder, so here I switch you off'.
    Gerhard Adam
    Did you ever have the chance to play around with drugs to the extent of switching off emotions?  If you do not simply include color, for example, as being an emotion, i.e. identify awareness with emotion, awareness does not require emotion.
    I have to disagree.  One might be tempted to ask how it felt, but that would be meaningless without emotion.  Of course, you're correct that awareness does not require emotion, since even bacteria have awareness, but I'm not sure that this adds much to the concepts.  After all, what is awareness beyond simply being receptive to input signals? 

    Similarly, in Samuel's response by referring to suppressing emotions, I view that as irrelevant, since suppression isn't the same as being without emotion. 

    Perhaps I'm wrong, but it seems that if emotions could truly be switched off, you would be unaware that it occurred, because such a state would have no significance in the absence of emotions, so there's no particular reason to have remembered it as an event. 
    Mundus vult decipi
    vongehr
    if emotions could truly be switched off, you would be unaware that it occurred
    I don't know how you guys define emotion, but surely, anything that makes no difference if it is switched off is irrelevant and not worth discussing.  Drugs/injury at times switch off specific modules/functions, say personal identity or the feeling of a flowing time.  This leaves the individual unable to function as usual, but it is interesting that such ingredients can be absent without everything going black.  They are not a priori to mind in that sense.  To elevate something like social emotions to a crucial ingredient of (the developing in any environment of) symbols/thought/mind/intelligence needs a careful defining of "emotion".  Of course, if we go down to the society of neural networks in the mind, all mind is intersubjective, all reality is a social construction, belief, "emotion", but I would be surprised if that is what Samuel would hold as his last defense.  Monkeys hugging cloth-wrapped wire-mothers is a very specific quirk of nature, and that something like that should be necessary for "primate like" AI is basically his definition of "primate like".  But I get the general idea, and it is a good one, namely the ultimate grounding of symbols in behavior.
    Gerhard Adam
    No growth, no unfolding and self organization of structures, no need to recapitulate any evolution.
    That also means, no adaptation, which [in my view] equates to no survival.  In other words, such a system has virtually zero chance of occurring as a complete process, so therefore such processes must evolve to achieve that level of complexity.  Either we are postulating a fully developed complex organism essentially getting created by chance, or we are stuck with an evolving system.

    Mundus vult decipi
    "If one was wanted to make an AI robot that was similar in intelligence to a primate, it would have to have the equivalent of child development. And the emotional interaction stages are inherently part of that."

    That is using a logical fallacy called "false equivalence." You are falsely using anthropomorphism (i.e. giving AI human traits) to form a conclusion. AGI (artificial general intelligence) is not dependent upon human stages (or primate stages) of childhood development, particularly emotional/psycho-sexual development.

    Read Kurzweil's book: "How to Create a Mind." When I started reading your article (I was very intrigued by your heading that you were going to "synthesize concepts") I thought you were going to go into human interaction with robots/androids/AI/AGI. Instead, you went into the creation of an AGI mind using primate development (and emotional development at that) as a template. WTF?

    Hank
    Read Kurzweil's book: "How to Create a Mind."
    ha ha Good one!
    SynapticNulship
    You are falsely using anthropomorphism
    If it is anthropomorphism, it is by definition a fantasy. Saying it's "falsely used" makes no sense. I would argue that it's not anthropomorphism, however, because that means assigning human traits to inanimate objects or non-human animals. In this case, I am proposing that a Strong AI that is similar to a primate will have to undergo a similar mental development period. Let's say the primate is homo sapiens. We want a Strong AI that is similar in mental abilities--the normal definition of Strong AI. If you think that's anthropomorphism, then you are denying that Strong AI can happen. It is "animate"--or else it's not a strong AI.

    Instead, you went into the creation of an AGI mind using primate development (and emotional development at that) as a template. WTF?
    Billions of primates have achieved what you call "AGI" (I call Strong AI). How many brain-in-a-vat style programs have achieved Strong AI? I suspect that interaction and integration and flexible / amorphous techniques are going to be more important than you think.
    Thank you for replying to my post, Mr. Kenyon.

    First, anthropomorphism is defined as: an interpretation of what is not human or personal in terms of human or personal characteristics: humanization (Merriam-Webster dictionary). I don't consider this to be a "fantasy," but more like a metaphor (comparing two unlike things). Metaphors are very useful for illustration purposes, and it is postulated by the PRTM (the pattern recognition theory of mind) that the neocortex uses such a device in hierarchical form to make sense of input.

    What I am saying is that your metaphor humanizing AGI is false in the sense that primate emotional (abnormal) development bears no relationship to AGI. To claim that "billions of primates have achieved what you call AGI" is an example of begging the question, since they aren't virtual. OTOH, the point that interaction and flexible/amorphous techniques are going to be important to the development of AGI is very valid; I'm just saying that AGI would mimic emotions, not have them as its core identity.

    Let me make a metaphor that is commonly made: dog anthropomorphism (in other words, giving dogs human-equivalent emotions and motives). This metaphor is useful, in the sense that we humans share many traits with canines. OTOH, it is a very bad mistake to draw conclusions based upon this metaphor. For example, I have three dobermans, and they are about as smart as a pre-verbal three-year-old human, but if I treated them like I would a three-year-old human child I would not get very good results, and I certainly can't draw valid conclusions from my dogs that would apply to a human child (unless you are talking about something basic like not feeding a dog kills it, so if I didn't feed a child that would result in its death).

    "It is "animate"--or else it's not a strong AI." Animate: of or relating to animal life as opposed to plant life. OK, AGI is more like animal life than plant life, but AGI is not animal, it is something much more alien.

    SynapticNulship
    You are clearly set in your ways with a specific type of AI based on the Kurzweil/Hawkins memespace, which is basically some kind of science fiction "universal" intelligence that is like a brain in a vat or Asimov's MULTIVAC or HAL. You also say it will simulate emotion instead of actually having emotion.
    So, if you insist that your AGI--which you speak of as if it already exists--is alien and/or a simulator, then my post obviously doesn't apply to your AGI. I am talking about Strong AI, especially that which is as similar as possible to human minds operating in the environments we currently operate in.
    Gerhard Adam
    ...it is a very bad mistake to draw conclusions based upon this metaphor.
    I think that's over-simplifying and begging the question.  Certainly to ascribe human motivations to them is flawed, but it would be foolish in the extreme to not recognize anger, fear, happiness, etc.  While one may question the motivation, one questions the emotions at their own risk [especially the negative ones].

    Of course, it would be equally foolish to treat a dog as a human, just as it would be foolish to treat a human as a dog.  This doesn't support any claims regarding emotional context between the two species.
    ...I certainly can't draw valid conclusions from my dogs that would apply to a human child...
    Of course you can, because you do it all the time with all manner of creatures that display such emotions.  As I mentioned, it isn't difficult to recognize anger, and you would readily interpret any animal or human encounter with exactly the same caution.  Similarly you would react to a display of fear. 

    Problems occur precisely when such emotions cannot be read.  While a human may not lay his ears back, we recognize such behaviors in animals.  Similarly we can recognize what a snarl represents.  We know what the showing of teeth represents.  These behaviors are not nearly as far removed as is usually suggested.

    Unfortunately, too often there is an anti-anthropomorphism attitude which seeks to deny any relationship between humans and animals, and that is nothing short of presuming some magical distinction between humans and animals.
    Mundus vult decipi

    Dear Brain-in-a-Hat :)

    I agree. You might as well say it's impossible for a dog to weigh anything because human beings have weight.
     
    John Hasenkam
    Certainly one can create circuits that may be able to filter out irrelevancies, but it seems like that would have to be something that is arbitrarily decided, rather than what would be available if a system could assess a real "value" to the information as provided by emotions.

    I have long suspected that AI people neglect the information-processing value of inhibition, especially via GABA or endocannabinoids, both of which play a role in forgetting. Remembering everything is incredibly wasteful; determining salience is essential. I am mystified by studies showing how a single GABA neuron can regulate hundreds if not thousands of other cells. Where does the GABA cell receive the input to do this? How do the cells "know" what to inhibit? Anxiety is/was typically treated with GABA agonists - too much information. I wonder if an AI device would suffer the equivalent problem?  
    You can bet your life that a lot of information is processed and dumbed down before feeding into those inhibitory neurons. It would rather defeat the object if a supervisory system needed more resources than the target :)