    Nature-Inspired Development as an AI Abstraction
    By Samuel Kenyon | June 12th 2013 09:09 PM | 13 comments
    I'm working on some ideas and a paper to present my version of biologically-inspired development--not just as a single project or a technique, but as a level of abstraction.

    It's hard to explain, so let me first digress with this: the agent approach became a mainstream part of AI in the 1990s, and one of the most popular AI textbooks of the past decade tries to frame all of AI in the context of agents. Certainly within a given project one can refer to the agent layer of abstraction. But I wonder how much the agent as an abstraction actually matters.

    An abstraction in computer science hides the details that are "below" or "inside"; we encapsulate and black-box things. It's easy to see how straightforward this is with computers, where the abstractions rise from transistors in the electronics domain up to machine language (and optionally assembly language), up to languages like C, and then on top of that an application which in turn has an interpreted scripting language. Each layer saves the user/programmer from having to deal with the nitty-gritty of lower levels on a daily basis (although they sometimes rear their ugly heads), and it promotes modularity--presumably lower layers have proven working parts that are then reused for many purposes as directed by the higher layers.
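    To make that concrete, here is a toy sketch in Python (all names invented for illustration) of a three-layer stack where each layer black-boxes the one below it:

        # A toy abstraction stack: each layer hides the one below it.
        # All names here are invented for illustration only.

        def set_bit(memory, index, value):
            """'Hardware' layer: flip one bit in a list of bits."""
            memory[index] = value

        def write_byte(memory, offset, byte):
            """'Machine language' layer: built on set_bit; the caller never touches bits."""
            for i in range(8):
                set_bit(memory, offset * 8 + i, (byte >> i) & 1)

        def store_text(memory, text):
            """'High-level' layer: built on write_byte; the caller never touches bytes."""
            for offset, char in enumerate(text):
                write_byte(memory, offset, ord(char))

        memory = [0] * 1024       # 1024 bits of toy "hardware"
        store_text(memory, "hi")  # the top layer's user deals only in text

    The user of store_text never sees bits, yet the lower levels can still rear their ugly heads--say, if the text is longer than the memory.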

    And yet when we get to the concept of the agent, I feel like the abstraction stack is in new territory. And this feeling gets even weirder with development, at least my version of development.

    Mentioning agents was not just a way to talk about abstractions getting fuzzy; it's also one of those AI abstractions that never seemed to reach the glorious potential some may have hoped for. There are many abstractions and techniques in cognitive science and AI. A lot of people swear by certain specific abstractions: for instance, neuropsychologists and similar-minded folks think that neurons and their networks are the layer at which we should understand the human mind. AI people have their own obsessions with abstractions and/or models (like ANNs and HMMs).

    My Version of Abstract Development


    So now let me ease you into my version of development. First, consider the agent as an artificial organism, possibly virtual, possibly existing in the real world as a robot. Now imagine that it has an embryo stage (embryogeny) where it actually grows from some small version (analogous to a biological zygote) into its first child-stage form. For a robot that would be hard to make, but it's not impossible, and it's easy in a virtual world. Also, we are recording all of this data, including what goes on inside the artificial organism's mind (or whatever prototypical information patterns it has that will eventually become a mind).

    Next this organism begins various phases of "childhood". Again, physical body changes may occur. If we want to be like biology, we also keep the mind in synchronization with the body changes. Of course, that is one of the interesting experimental areas when using this development abstraction--how the mind changes structures and content in a feedback loop with the body. And we are still recording all this data.
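    As a very rough sketch of what I mean--everything here is an invented placeholder, not a real design--the skeleton of such an experiment might look like this in Python:

        import copy

        class Organism:
            """A toy artificial organism: a body that grows and a 'mind'
            kept in synchronization with it. All fields are illustrative."""
            def __init__(self, zygote_size=1):
                self.body_size = zygote_size   # stands in for morphology
                self.mind = {"concepts": []}   # prototypical mental content

            def grow_body(self):
                self.body_size += 1            # embryogeny / childhood growth

            def sync_mind_to_body(self):
                # The mind changes structure in a feedback loop with the body:
                # here, crudely, one new "concept" per unit of new body.
                self.mind["concepts"].append("body-schema-%d" % self.body_size)

        recording = []                         # we record everything, inside and out
        org = Organism()
        for phase in ["embryo", "infant", "child", "adolescent"]:
            org.grow_body()
            org.sync_mind_to_body()
            recording.append((phase, copy.deepcopy(org.__dict__)))

        for phase, snapshot in recording:
            print(phase, snapshot)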



    We also include a special other agent in the mix by default (here the abstraction starts to overlap with a default framework). This special agent is another artificial organism which is the analogue of a caregiver--a special training mechanism. The role of the parent in the early years of animal babies, especially humans, involves not just keeping a baby safe and teaching it some things, but also establishing basic mental structures via interaction.
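    Just as a hypothetical sketch, the caregiver could be as simple as an agent that installs basic mental structures through repeated interaction (the "lessons" here are invented stand-ins):

        class Caregiver:
            """Toy caregiver agent: a part of the environment that shapes
            the child's mind through interaction. Purely illustrative."""
            def __init__(self, lessons):
                self.lessons = list(lessons)

            def interact(self, child_mind):
                # Each interaction installs or reinforces a basic mental structure.
                if self.lessons:
                    child_mind.setdefault("structures", []).append(self.lessons.pop(0))

        child_mind = {}
        parent = Caregiver(["object-permanence", "joint-attention", "turn-taking"])
        for _ in range(3):
            parent.interact(child_mind)
        print(child_mind)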

    Also, during these phases, the default framework for this development abstraction would include the notion of environment changes. A typical pattern would be to start with a small relatively safe environment, and then gradually increase the complexity and/or danger as the artificial organism develops and learns.
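    For instance, the default framework might carry a schedule like this toy one (the phases and numbers are arbitrary placeholders):

        def environment_for(phase):
            """Toy schedule: environment complexity and danger increase
            as the artificial organism develops. Numbers are arbitrary."""
            schedule = {
                "embryo":     {"complexity": 0,  "danger": 0},
                "infant":     {"complexity": 1,  "danger": 0},
                "child":      {"complexity": 3,  "danger": 1},
                "adolescent": {"complexity": 6,  "danger": 3},
                "adult":      {"complexity": 10, "danger": 5},
            }
            return schedule[phase]

        for phase in ["embryo", "infant", "child", "adolescent", "adult"]:
            print(phase, environment_for(phase))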

    So these phases go on until adulthood. And depending on the experiment, the adult artificial organism is unleashed onto the world, where "world" is whatever adult environment the researchers have chosen. Data was recorded the whole time--both of the environment and of whatever we can record of the mental experiences of the organisms. Unlike biology we literally have special access as researchers into the minds of our digital creatures. All that data we recorded can be used to do cool and weird experiments, like going "backwards in time" to replay a phase, and even replay it with some variables changed.
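    A toy version of that replay mechanism (the snapshot format and names are invented) might look like:

        recording = [
            ("embryo", {"body_size": 1}),     # stand-ins for the logged data
            ("child",  {"body_size": 3}),
        ]

        def replay(recording, start_phase, overrides=None):
            """Toy 'time travel': restore a recorded snapshot of a phase,
            optionally with some variables changed for a what-if run."""
            overrides = overrides or {}
            for phase, snapshot in recording:
                if phase == start_phase:
                    state = dict(snapshot)    # restore the recorded state
                    state.update(overrides)   # tweak variables for the replay
                    return state              # development would resume from here
            raise ValueError("no snapshot for phase %r" % start_phase)

        print(replay(recording, "child", {"body_size": 10}))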

    So that's generally it--for the lifetime part (aka ontogeny). I haven't even mentioned anything about the evolutionary timeline. I'm still struggling with the best way to explain how development--my version of it--is an abstraction. This is tangled with the abstract default "framework" concept I sketched above. Any criticisms and suggestions are welcome.

    Comments

    Gerhard Adam
    If we want to be like biology, we also keep the mind in synchronization with the body changes. Of course, that is one of the interesting experimental areas when using this development abstraction--how the mind changes structures and content in a feedback loop with the body.
    I guess that's the part I struggle with.  How is this synchronization supposed to occur without already biasing the result?  Since the modifications to the body are of the designer's choice, then it strikes me that the concept of synchronization is completely artificial.

    The point here being that it fundamentally negates all the preceding stages of development since each transition is essentially brand new, so I'm not clear on what would be carried forward and how it would be maintained in any kind of a consistent fashion with the proposed physical changes.

    Just a thought.
    Mundus vult decipi
    SynapticNulship
    Since the modifications to the body are of the designer's choice, then it strikes me that the concept of synchronization is completely artificial.
    There's a large design / interaction space there. I imagined typical experiments would allow gradual, iterative morphology changes, and the phases of development are not necessarily discrete, but merely convenience labels--or perhaps labeled afterwards to reference significant competences achieved by the organism.
    "how the mind changes structures and content in a feedback loop with the body"

    Your biological model is interesting, but I'm not sure the mental evolution emerges from the physical growth. It would seem to me that the physical 'hardware' would have to evolve to serve the needs of the mental 'software'. The 'embryo stage' could be considered the work of a few logic circuits. Then as the need for recording and storage arises, the 'hardware' body grows to accommodate the need of the 'software' mind and they move through the phases. The default 'framework' is the overall conception of the purpose for the adult that is present from the beginning, and continues throughout as evolutionary pressure.

    But maybe I'm onto just a 'chicken or the egg' type thing, and in the nature of the feedback loop it doesn't matter.

    SynapticNulship
    First, this may not have been your issue, but I was using the term "feedback loop" in a very vague sense of dynamical systems with cause-and-effect loops.

    Your biological model is interesting, but I'm not sure the mental evolution emerges from the physical growth.
    I'm glad you mentioned that--perhaps I need to elucidate this more. Certainly any given design won't suddenly gain mental capacities just because the hardware changes, any more than a PC program from 1981 gains functionality running on a 2013 Intel Core i7 (it just runs faster).

    So that has to be designed into the experiment via the choice of seed for embryogeny. Of course, embryogeny is optional--or you can generate one baby configuration and just keep starting from that one. And that baby configuration, however it is produced, has to have the right self modifying software to change over time in response to environmental cues, observation, drives / instincts, and whatever curiosity it may have.

    So the mental software, like biological wetware, is expected to self-modify as part of an evolutionarily fated program of mental growth (for its common ecological niche), while simultaneously updating for gradual (and catastrophic) body changes--and flexibly handling those body changes that affect the very hardware the mind software runs on.
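    As a toy illustration of that kind of self-modification (the seed, cues, and numbers are all invented):

        import random

        class BabyMind:
            """Toy self-modifying 'baby configuration': its update rule
            rewrites its own parameters in response to cues. Illustrative only."""
            def __init__(self, seed=0):
                self.rng = random.Random(seed)   # the 'seed' for this configuration
                self.curiosity = 0.5
                self.skills = set()

            def step(self, cue):
                # Environmental cues plus curiosity drive self-modification.
                if self.rng.random() < self.curiosity:
                    self.skills.add(cue)
                    self.curiosity = min(1.0, self.curiosity + 0.1)  # success feeds curiosity

        baby = BabyMind(seed=42)
        for cue in ["grasp", "babble", "crawl", "walk"]:
            baby.step(cue)
        print(baby.skills, baby.curiosity)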

    And it's not just the computational hardware that affects the mind--if enactivists, interactionists or other externalists are correct, human-like minds are in part a process of situated embodied organisms doing stuff in the environment and with other agents.
    Gerhard Adam
    I think an important question is being missed, namely what is the reason a biological organism grows from an embryo to adulthood?  Why does a child become an adult?

    What is the difference between the two states?  Certainly one could argue for experience, but that doesn't seem to be a particularly convincing reason.  Even physical maturity doesn't seem very reasonable, since there's no biological reason for any creature to not achieve a significant degree of physical maturity in a relatively short period of time.  Basically I don't believe there's any biological reason for a newborn to be physically immature other than to facilitate the birth process.  As a result, this promotes the survivability of the mother, while incurring the liability of a newborn to be protected until it finishes growing.

    As a result, without a clear understanding of what the purpose of these growth phases is, I'm not clear on what would be accomplished by replicating it in a machine.
    Mundus vult decipi
    SynapticNulship
    what is the reason a biological organism grows from an embryo to adulthood?  Why does a child become an adult?
    Instead of asking the reason a biological organism does anything, perhaps we should ask what the reason is for copying this weird mechanism. One of the benefits is fitting into the particulars of the context. From one high-up point of view, maybe most humans are born into the same environment. But a more detailed view shows the vast differences in environments--the details of context. The stages of childhood allow that ontogenetic adaptation to happen.

    As I also mentioned, the interaction during childhood can create mental structures. You might argue that an AI doesn't need that since we can design it beforehand, but perhaps you need to follow this process in a context to generate that first design in order to copy it to others.

    I think the major lack of flexible interfaces in AI and the lack of amorphous computing mean we are in a state where we need more adaptive approaches.
    Gerhard Adam
    I agree with your assessment, which is precisely why I've always had a problem with the concept of AI [as is generally pursued].

    As a point to consider: you mentioned the context of creating mental structures, and you also mentioned adaptive approaches. This raises the question of what is being adapted.

    In biological systems, the purpose is for the organism to "learn" or acquire the necessary information in order to protect its biological integrity [i.e. survive] and to learn to employ the mechanisms to acquire energy [i.e. food].  As a result, many of our feedback loops inform us of what causes pain, what tastes good, etc.  In short, between being taught and our own experiences we generally begin to build up a sense of the world we inhabit and how it can harm or help us. 

    As a result, in my view, this is directly related to the biological needs of direct survival.  Failure in this area will likely result in death.

    So, a significantly important question becomes ... what motivates your machine?  It can't technically die, so a major incentive that biological organisms have is removed from being adaptive.  This is where I believe AI has always made a wrong turn.

    As you may recall, I've often argued that I don't believe AI is possible.  Largely my view comes from my perception that researchers aren't actually interested in building an AI as much as they are in building a machine that can emulate humans.  Yet, in my view this represents a complete waste of time.

    From these questions, it seems that the important point should be ... how does one design a lifeform that is aware and capable of adapting/responding to the things that are important to it? This would clearly follow some radically different paths than those employed by biological organisms. In addition, one would have to address the problem of what it means to be a singular organism with no population to relate to or integrate into. After all, it's hard to rationalize cooperative behavior if there's no one to cooperate with.

    In short, it seems that we should be looking at biology to provide an understanding of how a self-sustaining system has evolved to address the problems of organism behavior and perpetuation. However, we must also recognize that to construct an AI one doesn't have the luxury of allowing millions of years of evolution to work out the kinks. Therefore, we are effectively trying to insert a fully developed species of machine into an existing infrastructure with no history on which to base its behavior. It is this that I think presents the biggest problem.

    Basically AI cannot work until someone can establish a reason why it should work.  In other words, a machine with AI must have a reason to exist besides human curiosity.

    Anyway ... those are some of the issues I have with this.



    Mundus vult decipi
    SynapticNulship
    Certainly an organism must try to maintain its own survival (ignoring for now the occasional suicide for the benefit of society that some animals feature). Tautologically, if an organism dies too soon and doesn't make it to reproduction (and stick around to take care of its progeny if need be) then it will disappear from the population.

    The evolution of development and each individual's development has the difficult requirement that every change--morphological and informational--has to maintain the survival mechanisms. There is no pause while things are rearranged.

    Emotions and motivations (as you say) should be tied to survival needs, but that doesn't mean all motivations are at that layer. One of the good aspects of behaviorist robotics (and later multi-layered architectures that slapped planning on top of behavioral layers) is this concept of always having the low-level basic needs running no matter what, although they can be inhibited (if designed that way) by higher modules.

    I tend to think of all these survival needs coming out of homeostasis in simpler organisms, and the homeostasis is still there along with other survival systems. I don't see why this cannot be done artificially--it's just that most researchers skirt around the issue or do a very shallow, suspiciously hall-of-hermeneutic-mirrors project and then abandon it.
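    As a toy sketch of those last two points together--a homeostatic drive that always runs, but that a higher module can inhibit--here's what I mean (nothing here is a real robot architecture):

        class Homeostat:
            """Toy homeostatic drive: always running, but a higher layer
            may inhibit its output. A sketch, not a real architecture."""
            def __init__(self, setpoint=0.8):
                self.setpoint = setpoint
                self.inhibited = False

            def step(self, sensed_level):
                if self.inhibited:
                    return 0.0                       # suppressed by a higher module
                return self.setpoint - sensed_level  # corrective drive signal

        class Planner:
            """Toy higher layer: inhibits the low-level drive while pursuing a goal."""
            def step(self, drive, busy_with_goal):
                drive.inhibited = busy_with_goal

        energy = Homeostat()
        planner = Planner()
        for busy, level in [(False, 0.3), (True, 0.3), (False, 0.9)]:
            planner.step(energy, busy)
            print("busy:", busy, "drive output:", energy.step(level))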

    Gerhard Adam
    I don't see why this cannot be done artificially-
    I don't know if it can or can't.  What I'm suggesting is that it must involve actually creating a new artificial species that serves its own best interests and not simply emulates what others think such behavior would look like.
    Mundus vult decipi
    John Hasenkam
    As a result, without a clear understanding of what the purpose of these growth phases is, I'm not clear on what would be accomplished by replicating it in a machine.
    Yep, many mammals have fine motor skills straight out of the womb. Humans are different: our CNS is not that well developed at birth, and some studies suggest it can take until the mid-20s for the final myelination of axons for frontal-lobe afferents. So I suspect these developmental phases are very much about CNS maturation post-birth, especially in relation to myelination, and less to do with some learning curve.

    vongehr
    You seem like one of those who dig down into a hole feeling strongly that something like quantum-goo must have something to do with mind, just that for you the secret ingredient is evolution and development. It is as bound to be true as it is that quantum mechanics tells us the phenomena we are possibly conscious of, but just like such certainty tells us nothing about how quantum-goo flicks the light of consciousness, you will equally not get to a non-silly conclusion without behaviorism that clarifies a functional role of your favorite ingredient. Since the whole point of robots is that we want to reproduce the adults without wasting resources on childhoods, you go down a blind alley, especially regarding the cute notion of a caregiver agent.
    What strikes me as interesting is your stressing this as an abstraction. This may be close to what I am thinking about, namely that there seems to be the mere necessity for a consistent evolutionary/developmental causal story to be possible, for example in order for there to be mind (or actually anything, like a physical universe). But in order for such to illuminate how my mind is possible right this moment, we need to show how the (potential) previous evolution shapes the evolution/natural selection landscape of mental structures that is going on now in my brain every few tenths of a second. The latter is more important for explaining mind.
    Are you aware of the similar problematic in fundamental physics? Why does the trajectory of the hunter-gatherer's spear need to be consistent with big bang physics gazillions of years ago? They are in one reality, but that reality stays implied by our minds. The relation is not an added and meaningless 'independent existence of a physical world that instantiates the minds' but the mere potential consistency of causal stories/descriptions if we were to analyze them, even if we don't.
    SynapticNulship
    You seem like one of those who dig down into a hole feeling strongly that something like quantum-goo must have something to do with mind, just that for you the secret ingredient is evolution and development.
    I will try to be careful not to give that impression in the future. However, any single blog post I make is by necessity brief and (hopefully) focused, which has the downside that it could be interpreted as my one trick or manifesto.

    Since the whole point of robots is that we want to reproduce the adults without wasting resources on childhoods, you go down a blind alley
    If that's the assumption, then all the more reason to challenge that assumption. I'm not against making "adult" systems--it would be stupid not to when you can, for practicality. But where there are experiments to do, and development of the adults to do, that is where various construction approaches should be investigated.

    especially regarding the cute notion of a caregiver agent.
    What strikes me as interesting is your stressing this as an abstraction.
    And that is also the hard part to grapple with--abstraction. The "caregiver" that you think is merely cute is a necessary option in my opinion. It is, in more industrial terms, a part of the environment that guides mental building through interaction. It doesn't necessarily have to be even as "living" as the child organism, per se. But the fact that so many natural animals use this method in combination with the evolution of development seems to indicate we should "cover our asses" and at least include it in a default framework within this class of abstraction. Some scientists think that child-parent interactions are critical for humans; if they're right, there's no harm in starting with that (unless the abstraction removes something critical from those interactions or adds something that isn't really there in biology), and later on it can be optimized. If they're wrong, then it will be optimized out by removing that interaction feature of the development environment.

    The relation is not an added and meaningless 'independent existence of a physical world that instantiates the minds' but the mere potential consistency of causal stories/descriptions if we were to analyze them, even if we don't.
    I'm not sure how to parse that paragraph... I am trying to keep track of the enactivist and Heideggerian AI points of view; perhaps that is where you're trying to go. I've mentioned Alva Noe in my blog posts before (maybe some before I was on Science 2.0), in which action is perception and consciousness is "more like dancing than it is like digestion". It's no coincidence I've studied the interaction design field, in which affordances play a big role. I listed that in a blog post as a potential mechanism of meaning, but I haven't gotten around to writing blogs about that yet. Back to your comment about me digging in a hole: I would argue I'm digging in many holes! And connecting those holes together...
    vongehr
    I will try to be careful not to give that impression in the future. However, any single blog post I make is by necessity brief and (hopefully) focused which has the downside it could be interpreted as my one trick or manifesto.
    Well, as you know, I follow all your posts, so my comment on the hole was not about a single brief post but about the sequence of them.
    The "caregiver" that you think is merely cute is a necessary option in my opinion. It is in more industrial terms a part of the environment to guide mental building through interaction.
    You mean to hedge something as specific as "caregiver", introduced in other posts via cute babies, by there being a shaping environment in some sense? Good move. ;-)
    I'm not sure how to parse that paragraph...
    Don't worry - I suspected that your abstraction was perhaps going in that direction, so there was a chance that the physics analogy could have rung a bell with you; but as it did not, perhaps I just do not get where you are going with this. Perhaps I am wrong, but some of what you say is not far from what I work on, namely having the 'Myth of Jones' emerge inside the computer. If that happens, we have mind!