2014: Postmortem

Oh no! I forgot to post a personal postmortem for the year 2014 like I did for the previous year...

Cognitive Abstraction Manifolds

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent...

On That Which is Called “Memory”

Information itself is a foundational concept for cognitive science theories. But the very definition...

Polymorphism in Mental Development

Adaptability and uninterrupted, continuous operation are important features of mental development...

Samuel Kenyon

Robotics software engineer, AI researcher, interaction designer (IxD). Also (as Sam Vanivray) filmmaker, actor.

Working on my new sci-fi movie to be filmed in 2016:
BRUTE SANITY...

Ever wonder how Society of Mind came about? Of course you do.

One of the key ideas of Society of Mind [1] is that at some range of abstraction levels, the brain's software is a bunch of asynchronous agents. Agents are simple--but a properly organized society of them results in what we call "mind."
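As a toy illustration only (not Minsky's actual architecture), here is a minimal Python sketch of that idea: a few asynchronous agents, each doing one trivial thing, coordinating solely through a shared blackboard. The agent names and blackboard fields are invented for this example.

```python
# Minimal sketch: simple asynchronous agents coordinating via a shared blackboard.
# The specific agents ("perceive", "want", "act") and fields are illustrative only.
import asyncio

blackboard = {"percept": None, "goal": None, "action": None}

async def perceive():
    # Trivial agent: posts a simulated percept.
    for block in ["red block", "blue block"]:
        blackboard["percept"] = block
        await asyncio.sleep(0.1)

async def want():
    # Trivial agent: adopts a goal whenever something is perceived.
    while blackboard["action"] is None:
        if blackboard["percept"]:
            blackboard["goal"] = f"grasp {blackboard['percept']}"
        await asyncio.sleep(0.05)

async def act():
    # Trivial agent: turns a goal into an action; the society then winds down.
    while blackboard["action"] is None:
        if blackboard["goal"]:
            blackboard["action"] = f"reach and {blackboard['goal']}"
        await asyncio.sleep(0.05)

async def main():
    await asyncio.gather(perceive(), want(), act())
    print(blackboard["action"])

asyncio.run(main())
```

No single agent "knows" how to grasp a block; the behavior only emerges from the organization of the simple parts, which is the point of the agents-in-a-society framing.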


The user(s) behind the G+ account Singularity 2045 made an appropriately skeptical post today about the latest Machines-versus-Humans "prediction," specifically an article "What Happens When Artificial Intelligence Turns On Us" about a new book by James Barrat.
One way to increase the intelligence of a robot is to train it with a series of missions, analogous to the missions (aka levels) in a video game.



In a developmental robot, the training would not be simply learning--its brain structure would actually change. Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.
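As a hedged sketch of the "overproduce then prune" extreme (not a model of any particular developmental robot), the snippet below grows a dense random connection matrix and then deletes the weak connections in a later phase. The sizes, weights, and threshold are arbitrary choices for illustration.

```python
# Toy "overproduce then prune" sketch: dense random connections, then pruning.
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: exuberant growth -- densely connect 50 "neurons" with random weights.
n = 50
weights = rng.normal(0.0, 1.0, size=(n, n))

# Phase 2: pruning -- remove connections whose strength stays below a threshold.
threshold = 1.0
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

kept = np.count_nonzero(pruned)
print(f"connections before pruning: {n * n}, after pruning: {kept}")
```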
Whenever a machine or moist machine (aka animal) comes up with a solution, an observer could imagine an infinite number of alternate solutions. The observed machine, depending on its programming, may have considered many possible options before choosing one. In any case, we could imagine a 2D or 3D (or really any dimensionality) mathematical space in which to place all these different solutions.
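To make the "space of solutions" idea concrete, here is a small illustrative sketch that places each imagined solution as a point in a three-dimensional feature space and compares it to the observed one by Euclidean distance; the features (and their values) are made up for the example.

```python
# Illustrative solution space: each candidate solution is a point in N dimensions.
import numpy as np

# Each entry is one imagined solution, described by the same (made-up) features:
# [speed, energy use, accuracy].
solutions = {
    "observed solution": np.array([0.9, 0.2, 0.8]),
    "alternative A":     np.array([0.4, 0.7, 0.8]),
    "alternative B":     np.array([0.1, 0.9, 0.3]),
}

observed = solutions["observed solution"]
for name, point in solutions.items():
    distance = np.linalg.norm(point - observed)
    print(f"{name}: distance from observed = {distance:.2f}")
```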


At the AAAI 2013 Fall Symposia (FSS-13), I realized that I was not prepared to explain certain topics quickly to those who are specialists in various AI domains and/or don't delve into philosophy of mind issues. Namely, I am thinking of enactivism and embodied cognition.



But something even easier (or so I thought) that threw up communication boundaries was The Symbol Grounding Problem. Even those in AI who have a vague knowledge of the issue will often reject it as a real problem. Or maybe Jeff Clune was just testing me. Either way, how can one give an elevator pitch about symbol grounding?
Recently I voyaged with my girlfriend to the Ecuadorian Amazon rainforest near a black lagoon named Challuacocha.



To get there we flew to a minuscule airport in the town of Coca, where everyone disembarking was kept locked in a hallway before being let out into the blinding sun to be accosted by various guides, none of whom were from our destination--Sani Lodge. A random dude politely informed us that he knew our dude ("He has a ponytail like me"). We were whisked away in a taxi to the back of a hotel on the water to find our guide.