    Growing Robot Minds
    By Samuel Kenyon | January 11th 2014 10:30 PM

    One way to increase the intelligence of a robot is to train it with a series of missions, analogous to the missions (aka levels) in a video game.



    In a developmental robot, the training would not be simply learning--its brain structure would actually change. Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.

    As another example of development vs. learning: a simple artificial neural network is trained by changing its weights over a series of training inputs (with error correction if it is supervised). In contradistinction, a developmental system changes itself structurally. It would be like growing entirely new nodes, network layers, or even whole new networks during each training level.
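
    To make the distinction concrete, here is a minimal sketch (Python, with invented names; not based on any particular neural network library): learning nudges the existing weights, while development changes the structure itself by growing a new connection.

        import random

        class TinyNet:
            """A toy network: one layer of weights from inputs to a single output."""
            def __init__(self, n_inputs):
                self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

            def output(self, inputs):
                return sum(w * x for w, x in zip(self.weights, inputs))

            # Learning: the structure stays fixed; only the weight values move.
            def learn(self, inputs, target, rate=0.1):
                error = target - self.output(inputs)
                self.weights = [w + rate * error * x for w, x in zip(self.weights, inputs)]

            # Development: the structure itself changes; here a new input
            # connection is grown, so afterwards this is a different network.
            def grow_input(self):
                self.weights.append(random.uniform(-1, 1))

        net = TinyNet(n_inputs=2)
        net.learn([1.0, 0.5], target=1.0)   # learning: same shape, new weight values
        net.grow_input()                    # development: the shape itself changed
        print(len(net.weights))             # now 3 connections instead of 2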

    Or you can imagine the difference between decorating a skyscraper (learning) and building a skyscraper (development).



    What Happens to the Robot Brain in these Missions?


    Inside a robotic mental development mission or stage, almost anything could go on, depending on the mad scientist who made it. It could be a series of timed, purely internal structural changes. For instance (with a code sketch following the list):

    1. Grow set A[] of mental modules
    2. Grow mental module B
    3. Connect B to some of A[]
    4. Activate dormant function f() in all modules
    5. Add in pre-made module C and connect it to all other modules
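
    Here is a rough sketch of that kind of timed schedule in Python. The Module class and the connection scheme are invented for illustration; they are not from any particular robot architecture.

        class Module:
            """A placeholder mental module that can be wired to other modules."""
            def __init__(self, name, dormant_functions=None):
                self.name = name
                self.links = []                       # modules this one is connected to
                self.dormant = set(dormant_functions or [])
                self.active = set()

            def connect(self, other):
                self.links.append(other)

            def activate(self, fname):
                if fname in self.dormant:
                    self.dormant.remove(fname)
                    self.active.add(fname)

        def run_mission(brain):
            """One timed developmental mission: steps 1-5 from the list above."""
            A = [Module(f"A{i}", dormant_functions=["f"]) for i in range(4)]   # 1. grow set A[]
            brain.extend(A)
            B = Module("B", dormant_functions=["f"])                           # 2. grow module B
            brain.append(B)
            for a in A[:2]:                                                    # 3. connect B to some of A[]
                B.connect(a)
            for m in brain:                                                    # 4. activate dormant f()
                m.activate("f")
            C = Module("C")                                                    # 5. add pre-made module C
            brain.append(C)
            for m in brain:
                if m is not C:
                    C.connect(m)

        brain = []            # the developing "brain" is just a list of modules here
        run_mission(brain)
        print([m.name for m in brain])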

    Instead of (or in addition to) pre-planned, timed changes, the stages could be based in part on environmental interactions. I think that is a potentially useful tactic for making a robot adjust to its own morphology and to the particular range of environments it must operate and survive in. It also makes the stages more like the aforementioned missions in computer games.

    Note that learning will most likely be happening at the same time (unless learning abilities are turned off as part of a developmental level). In the space of all possible developmental robots, one would expect gray areas where mental change falls somewhere between development and learning.

    Given the input and triggering roles of the environment, each development level may require a special sandbox world. The body of the robot may also undergo changes during each level.

    The ordering of the levels/sandboxes would depend on what mental structures are necessary going into each one.
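
    As a sketch of that ordering (the mission names and prerequisite structures below are invented for illustration), each level could declare what mental structures it requires and what it produces, and a simple dependency sort would yield a workable sequence:

        # Each mission lists the mental structures it requires and the ones it produces.
        missions = {
            "grasping":   {"requires": set(),           "produces": {"reflexes"}},
            "navigation": {"requires": {"reflexes"},    "produces": {"spatial_map"}},
            "foraging":   {"requires": {"spatial_map"}, "produces": {"object_memory"}},
        }

        def order_missions(missions):
            """Greedy topological ordering: run a mission once its prerequisites exist."""
            built, ordered, remaining = set(), [], dict(missions)
            while remaining:
                ready = [name for name, m in remaining.items() if m["requires"] <= built]
                if not ready:
                    raise ValueError("circular or unsatisfiable prerequisites")
                for name in ready:
                    ordered.append(name)
                    built |= remaining.pop(name)["produces"]
            return ordered

        print(order_missions(missions))   # ['grasping', 'navigation', 'foraging']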

    A Problem


    One problem that I have been thinking about is how to prevent cross-contamination of mental changes. One mission might nullify a previous mission.

    For example, let's say that a robot can now survive in Sandbox A after making it through Mission A. Now the robot proceeds through Mission B in Sandbox B. You would expect the robot to be able to survive in a bigger new sandbox (e.g. the real world) that has elements of both Sandbox A and Sandbox B (or requires the mental structures developed during A and B). But B might have messed up A. And now you have a robot that's good at B but not A, or even worse not good at anything.

    Imagine some unstructured cookie dough. You can form a blob of it into a special shape with a cookie cutter.



    But applying several cookie cutters in a row might result in an unrecognizable shape, maybe even no better than the original blob.

    As a mathematical example, take a four-stage developmental sequence where each stage is a different function, numbered 1-4. This could be represented as:

    y = f4(f3(f2(f1(x))))
    where x is the starting cognitive system and y is the final resulting cognitive system.
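
    To make this concrete, here is a toy sketch of a four-stage composition, along with what happens when two stages are swapped. The particular stage functions are invented purely for illustration; the cognitive system is reduced to a dict of traits.

        def f1(x): return {**x, "reflexes": True}
        def f2(x): return {**x, "layers": x.get("layers", 0) + 1}
        def f3(x): return {**x, "planner": x.get("layers", 0) >= 1}
        def f4(x): return {**x, "layers": 0}          # a pruning stage that resets layers

        x = {}                                        # the starting cognitive system
        y = f4(f3(f2(f1(x))))                         # the final resulting cognitive system
        y_swapped = f3(f4(f2(f1(x))))                 # same stages, two of them swapped

        print(y)          # {'reflexes': True, 'layers': 0, 'planner': True}
        print(y_swapped)  # {'reflexes': True, 'layers': 0, 'planner': False}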

    This function composition is not commutative, e.g.

    f4(f3(f2(f1(x)))) ≠ f1(f2(f3(f4(x))))

    A Commutative Approach


    There is a way to make an architecture and transform function type that is commutative. You might think that would solve our problem; however, it only works with certain constraints that we might not want. To explain, I will show you an example of a special commutative configuration.

    We could require all the development stages to share a minimal required integration program. That is, f1(), f2(), etc. would all be sub-types of m(), the master function; in object-oriented terms, each stage's module would be a subclass of one master module type.



    The example here would have each mission result in a new mental module. The required default program would automatically connect this module with the same protocol to all other modules.
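
    A rough Python sketch of that constraint (the class names and the connection protocol here are invented): each mission's module is a sub-type of one master module, and integration always uses the same protocol, so the final connection graph does not depend on the order in which modules are added.

        class MasterModule:
            """m(): the master integration program every developmental stage must use."""
            def __init__(self, name):
                self.name = name
                self.links = set()

            def integrate(self, brain):
                # The one required protocol: connect symmetrically to every other module.
                for other in brain:
                    self.links.add(other.name)
                    other.links.add(self.name)
                brain.append(self)

        # f1(), f2(), ...: each mission's module is just a sub-type of the master.
        class ModuleA(MasterModule): pass
        class ModuleB(MasterModule): pass

        def develop(module_classes):
            brain = []
            for cls in module_classes:
                cls(cls.__name__).integrate(brain)
            return {m.name: sorted(m.links) for m in brain}

        # Because integration is uniform, the order of the missions does not matter:
        print(develop([ModuleA, ModuleB]) == develop([ModuleB, ModuleA]))   # True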

    So in this case the order of the stages does not matter:

    f1(f2(x)) = f2(f1(x)), and likewise for any other pair of stages

    I don't think this is a good solution, since it seriously limits the cognitive architecture. We would not even be able to build a simple layered control system where each higher layer depends on the lower layers. We could not have arbitrary links, or different types of links, between modules. And it does not address how conflicts between module outputs are arbitrated.

    However, we could add dynamic adaptive interfaces in each module that apply special changes. For instance, module type B might send out feelers to sense the presence of module type A; even if A is added afterwards, B will eventually find it once all the modules have been added. But we would not be able to actually unleash the robot into any of the environments it should handle until the end, and that is bad: it removes the power of iterative development, and it means that a mission associated with a module will be severely limited.
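
    For example, a minimal sketch of such a feeler interface (invented names, not a real framework): each module periodically scans the whole brain for the module types it wants, so it eventually links up even if its target is added later.

        class Module:
            """A module whose adaptive interface 'feels' for other module types."""
            def __init__(self, name, wants=()):
                self.name = name
                self.wants = set(wants)          # names of modules this one looks for
                self.links = set()

            def send_feelers(self, brain):
                # Scan the current brain; link to any wanted module that now exists.
                for other in brain:
                    if other.name in self.wants:
                        self.links.add(other.name)

        brain = [Module("B", wants={"A"})]       # B is grown first and wants an A
        brain.append(Module("A"))                # A only appears in a later mission
        for module in brain:                     # feelers run after all missions end
            module.send_feelers(brain)
        print(brain[0].links)                    # {'A'}: B found A eventually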

    The most damning defect with this approach is that there's still no guarantee that a recently added module won't interfere with previous modules as the robot interacts in a dynamic world.

    A Pattern Solution


    A non-commutative solution might reside in integration patterns. These would preserve the functionality from previous stages as the structure is changed.



    For instance, one pattern might be to add a switching mechanism. The resulting robot mind would be partially modal--in a given context, it would activate the most appropriate developed part of its mind, but not all of the parts at the same time.

    A similar pattern could be used for developing or learning new skills--a new skill doesn't overwrite previous skills; it is instead added to the relevant set of skills, which are triggered or selected by other mechanisms.
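
    A minimal sketch of that selection pattern (the contexts and skills below are invented): new skills are appended to the relevant set rather than overwriting anything, and a context switch picks which part of the mind is active.

        class ModalMind:
            """Keeps every developed skill; a context switch selects, never overwrites."""
            def __init__(self):
                self.skills = {}                         # context -> list of skills

            def add_skill(self, context, skill):
                self.skills.setdefault(context, []).append(skill)

            def act(self, context, stimulus):
                # Activate only the most appropriate part of the mind for this context.
                candidates = self.skills.get(context, [])
                if not candidates:
                    return "no applicable skill"
                return candidates[-1](stimulus)          # e.g. the most recently developed

        mind = ModalMind()
        mind.add_skill("sandbox_A", lambda s: f"avoid {s}")     # developed in Mission A
        mind.add_skill("sandbox_B", lambda s: f"grasp {s}")     # developed in Mission B
        print(mind.act("sandbox_A", "pit"))    # Mission A's ability is still intact
        print(mind.act("sandbox_B", "lever"))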


    Image credits:

    1. Nintendo
    2. Georgia State University Library via Atlanta Time Machine
    3. dezeen
    4. crooked brains
    5. diagram by the author
    6. MyDukkan

    Comments

    vongehr
    "not be simply learning--its brain structure would actually change"
    you tried hard, but I see no fundamental distinction (between for example a zero weight and a cut connection)
    SynapticNulship
    What if I said changing a single variable in a computer program vs. rewriting the program?

    The ANN example was purely for communication since neural nets are so popular in CS both academically and industrially.
    John Hasenkam
    Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.


    Years ago I read a news report about a study concerning antipsychotic administration. This report claimed that in the space of just one or a few days the drug reduced basal ganglia volume by nearly 50%. I rejected the finding outright because I thought that possibility absurd. I was thinking of cell death; it turns out it is not cell death but dendritic and connectivity loss. 


    A few months ago a modern imaging study claimed that in the space of 24 hours there can be massive changes in dendritic structures across the brain. At the fine detail, the physical structure itself is highly dynamic throughout life. One striking example of this is the recent finding that every time we remember something there may well be a physical substrate change in that memory. This might seem inefficient, but it may also allow the new remembering to incorporate a wider set of associations than before, hence increasing total learning. It is completely unlike computer hard disk memory. Memories are not stored and then brought together in some cerebral CPU; the memories are part of the processing, so remembering involves at least the potential for a new interpretation, a new processing of the relevant information, because it has become incorporated with the experiential learning that occurred after the initial memory formation.  


    There is always a lot of deletion going on in brains. In a study of the rat visual cortex the bods wanted to determine how the synapses change in response to visual stimuli. What happened is that after the stimulus microglia came along and consumed a whole bunch of dendrites in the relevant cells. This probably occurs as an "efficiency measure", with the strongest synapses being preserved to maintain the encoding of the stimulus. 

    The most damning defect with this approach is that there's still no guarantee that a recently added module won't interfere with previous modules as the robot interacts in a dynamic world.



    But that is precisely the strength of brain learning. There are mountains of interference going on, even at the sensory level there is cross talk across the modules. 


    You need to make a choice here: either completely abandon any attempt to emulate the way brains work or keep using brains as a model of learning. If you do the former you are stuck with the problem that there is no other obvious learning thing; if you do the latter you are stuck with the problem that we know bugger all about how brains learn stuff. Sorry to put it so starkly, but at present, for all the blither blather about how brains work, at the most important levels of analysis we are still shaking our fists at sky gods and underworld demons. 
    SynapticNulship
    Thanks for contributing some neuroscience insights.
    But that is precisely the strength of brain learning. There are mountains of interference going on, even at the sensory level there is cross talk across the modules.
    Certainly, for example generalization vs. specificity in brain learning, i.e. distributed learning (or "coarse coding"). However, in your effort to mash learning and development together without any hope of a difference, I think you may have overlooked my goal. My goal is to preserve the minimal structures and states necessary for the organism + environment to not only continue surviving without interruption, but to not completely lose previously attained major cognitive abilities.