    AAAI FSS-13 and Symbol Grounding
    By Samuel Kenyon | November 19th 2013
    At the AAAI 2013 Fall Symposia (FSS-13) [1, 2], I realized that I was not prepared to explain certain topics quickly to those who are specialists in various AI domains and/or don't delve into philosophy of mind issues. Namely, I am thinking of enactivism and embodied cognition.



    But something even easier (or so I thought) also ran into communication barriers: the Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?

    So after thinking about it this weekend, I think the simplest explanation is this:

    Symbol grounding is about making meaning intrinsic to an agent as opposed to parasitic meaning provided by an external human researcher or user.

    And really, maybe it should not be called a "problem" anymore. It's only a problem if somebody claims that a system has human-like knowledge when in fact it has no intrinsic meaning. Most applications, such as NLP programs and semantic graphs / networks, do not have intrinsic meaning. (I'm willing to grant them a small amount of intrinsic meaning if that meaning depends on the network structure itself.)
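    To make that contrast concrete, here is a tiny toy sketch of my own in Python (the facts list and the related() helper are invented for illustration, not any real system): a semantic network that a program can traverse, yet whose labels mean something only to the human reading them.

    # Toy "semantic network": all of the meaning is parasitic on the human reader.
    # The program can follow edges, but "cat", "mammal", and "is_a" are just
    # strings; nothing in the system ties them to anything it perceives.
    facts = [
        ("cat", "is_a", "mammal"),
        ("mammal", "is_a", "animal"),
        ("cat", "has", "fur"),
    ]

    def related(node, relation):
        """Return every node linked to `node` via `relation`."""
        return [obj for subj, rel, obj in facts if subj == node and rel == relation]

    print(related("cat", "is_a"))  # ['mammal'] -- "true" only because we say so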

    Meanwhile, there is in fact grounded knowledge of some sort in research labs. For instance, AI systems in which perceptual invariants are registered as objects are making grounded symbols (e.g. the work presented by Bonny Banerjee). That type of object may not meet some definitions of "symbol," but it is at least a sub-symbol which could be used to form full mental symbols.
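    As a contrast to the toy network above, here is a hedged sketch of what I mean by a grounded sub-symbol, loosely in the spirit of registering perceptual invariants as objects (this is my own illustration, not Banerjee's actual method; sense(), register(), and the threshold are invented for the example). The token is just an index for a recurring pattern in the agent's own sensor stream, so its content depends on that stream rather than on a label a human assigned.

    import random

    # Toy grounding sketch: mint a sub-symbol for each recurring sensory pattern
    # (an "invariant"), so the token's content comes from the agent's own input
    # stream rather than from a human-supplied label.

    def sense():
        """Stand-in for a sensor: noisy readings around two hidden prototypes."""
        p = random.choice([(0.0, 0.0), (5.0, 5.0)])
        return (p[0] + random.gauss(0, 0.3), p[1] + random.gauss(0, 0.3))

    symbols = []  # each entry is a prototype nudged toward the readings it groups

    def register(reading, threshold=2.0):
        """Assign the reading to an existing sub-symbol or mint a new one."""
        for i, proto in enumerate(symbols):
            dist = sum((a - b) ** 2 for a, b in zip(reading, proto)) ** 0.5
            if dist < threshold:
                symbols[i] = tuple(0.9 * p + 0.1 * r for p, r in zip(proto, reading))
                return i
        symbols.append(reading)
        return len(symbols) - 1

    for _ in range(200):
        register(sense())

    print(len(symbols), "sub-symbols grounded in the sensor stream")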


    Image from Randall C. O’Reilly, Thomas E. Hazy, and Seth A. Herd, "The Leabra Cognitive Architecture:
    How to Play 20 Principles with Nature and Win!"


    Randall O'Reilly from the University of Colorado gave a keynote about some of his computational cognitive neuroscience, in which there are explicit mappings from one level to the next. Even if his architectures are wrong as biological models, if the lowest layer is in fact the simulation he showed us, then it is symbolically grounded as far as I can tell. What remains a "problem" in AI generally is linking the bottom and middle to the top (e.g. natural language).

    I think that the quick symbol grounding definition above is enough to at least establish a thin bridge between various AI disciplines and skeptics of symbol grounding. Unfortunately, I also learned this weekend that hardly anybody agrees on what a "symbol" is.

    Symbols?


    Photo taken from the Westin hotel. I just noticed that Gary Marcus snuck into my photo.


    By some coincidence, Gary Marcus ended our symposium with a keynote that convinced many people there that symbolic AI never died: it is present in many AI systems even if their builders don't realize it, and it is necessary, in combination with other methods (for instance connectionist machine learning), at the very least for achieving human-like inference. Marcus's presentation was related to some concepts in his book The Algebraic Mind (which I admit I have not read yet). There's more to it, such as variable binding, that I'm not going to get into here.

    As far as I can tell, my concept of mental symbols is very similar to Marcus's. I thought I was in the traditional camp in that regard. And yet his talk spawned debate on the very definition of "symbol". Also, I'm starting to wonder if I should be careful about subsymbolic vs. symbolic structures. Two days earlier, when I had asked a presenter about the symbols in his research, he flat out denied that his object representations based on invariants were "symbols."

    So...what's the elevator pitch for a definition of mental symbols?

    Notes
    1. My specific symposium was How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____?
    2. You can read my paper An Ecological Development Abstraction for Artificial Intelligence and the poster.

    Comments

    Hi Samuel,

    Grounding requires an agent, agency requires consciousness. Consciousness is unified and persistent over time; you can see this from the fact that your memories are based on your earlier experiences. Since computers don't have experiences, they aren't the kind of thing that can be an agent; they aren't the kind of entity things have meaning for.

    I'm a translator and I use computer translation software. The reason computer translation can't take over from me is that computers don't have experiences. Meaning is grounded in experience.

    SynapticNulship
    Grounding requires an agent, agency requires consciousness.

    Why should I believe that agency requires consciousness? Are tortoises not agents? What about monkeys?
    Consciousness is unified and persistent over time, you can see this from the fact that your memories are based on your earlier experiences. Since computers don't have experiences they aren't the kind of thing that can be an agent, they aren't the kind of entity things have meaning for.
    Your statement includes a premise that computers can't have experiences. Certainly robots, which contain computers in most cases, have experiences. It could be argued that there are all kinds of experiences happening in information systems, but they are typically meaningless and/or uninteresting. But then you will probably argue that "experience" requires consciousness. Well, I disagree, but even if we take that definition, then who says computer-involved entities cannot be conscious?

    So that is quibbling about your premises. But it doesn't, of course, make it practically easy at all to give computers--which most likely will have to be embodied (or embodied in simulation) in order to get to animal- or human-like semantics and ontologies as opposed to some alien semantics and ontologies--experience that matters and will support a meaning scaffolding.

    I think that meaning is at least partially grounded in experience. Even that "partially" is a difficult (in some contexts) and often-misunderstood issue. As you say, translation software doesn't understand deeply or in a human-like way. No existing semantic network, NLP system, or deep learning system (i.e., stacks of feature detectors generated by unsupervised learning with sparse coding) that I know of does either.
    SynapticNulship
    And just to make sure that researchers in the AI fields I mentioned don't think I'm totally slamming them: I think that semantic networks, and even narrow neural and/or unsupervised learning approaches, can be a part of cognitive systems that have meaning. And they have grounding at certain levels in rudimentary ways. But the full animal-like cognitive systems haven't been built yet.