But first, let me briefly introduce the idea-triggering tome. Surfaces and Essences focuses on the role of analogies in thought. It's kind of an über Metaphors We Live By. Hofstadter and his French counterpart Sander are concerned with categorization, which they hold to be the mind's primary means of creating concepts and the very act of remembering. Each category is built entirely from a sequence of analogies, and these sequences generally grow bigger and more complicated as a child develops into an adult.
The evidence is considerable, but based primarily on language as a window into the mind's machinery, an approach which comes as no surprise to those who know of Steven Pinker's book, The Stuff of Thought: Language as a Window into Human Nature. There are also some subjective experiences used as evidence, namely introspections about which categories and mechanisms allowed a bizarre memory to surface in a given situation. (You can get an intro in this video recording of a Hofstadter presentation.)
Books like this--I would include in this spontaneous category Marvin Minsky's books Society of Mind and The Emotion Machine--offer insight into psychology and what I would call cognitive architecture. They appeal to some artificial intelligence researchers / aficionados, but they don't readily lend themselves to any easy (or fundable) computer implementations. And they usually don't have any single or easy mapping to other cognitive science domains such as neuroscience. Part of the practical difficulty is that they call for full systems. But more to the point of this little essay, they don't even map easily to nearby sub-domains within psychology or artificial intelligence.
Layers and Spaces
One might imagine that a stack of enough layers at different levels will provide a full model and/or implementation of the human mind. Even if the layers overlap, one just needs full coverage--and small gaps presumably will lend themselves to obvious filler layers.
For instance, you might say one layer is the Surfaces and Essences analogy engine, and another layer deals with consciousness, another with vision processing, another with body motion control, and so on.
But it's not that easy (I know, I know...that's pretty much the mantra of skeptical cognitive science).
I think a slice of abstraction space is probably more like a manifold or some other arbitrary n-dimensional space. And yes, this is an analogy.
These manifolds could be thought of--yay, another analogy!--as 3D blobs, which on this page will be represented as a 2D pixmap (see the lava lamp image). "Ceci n'est pas une pipe."
Now, what about actual implementations or working models, as opposed to theoretical models? Won't there be additional problems of interfaces between the disparate manifolds?
Perhaps we need a class of theories whose abstraction space is in another dimension which represents how other abstraction spaces connect. Or, one's model of abstraction spaces could require gaps between spaces.
Imagine blobs in a lava lamp, but they always repel to maintain a minimal distance from each other. Interface space is the area in which theories and models can connect those blobs.
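To make the lava-lamp analogy a bit more concrete, here is a toy sketch (purely illustrative, not a cognitive model--the blob positions, `MIN_DIST`, and `push` strength are all made-up parameters): blobs as 2D points that nudge each other apart whenever they get closer than a minimal separation, so that gaps--the "interface space"--always remain between them.

```python
# Toy sketch of mutually repelling "blobs" in a lava lamp.
# All values here are arbitrary assumptions for illustration.
import math

MIN_DIST = 2.0  # the minimal separation the blobs maintain

def step(blobs, push=0.1):
    """Nudge any pair of blobs apart if they are closer than MIN_DIST."""
    moved = [list(b) for b in blobs]
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (x1, y1), (x2, y2) = blobs[i], blobs[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if 0 < d < MIN_DIST:
                # Unit vector from blob i toward blob j, scaled by push.
                ux, uy = (x2 - x1) / d, (y2 - y1) / d
                moved[i][0] -= ux * push
                moved[i][1] -= uy * push
                moved[j][0] += ux * push
                moved[j][1] += uy * push
    return [tuple(b) for b in moved]

# Three blobs starting too close together.
blobs = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]
for _ in range(200):
    blobs = step(blobs)

# After settling, every pairwise gap is at least MIN_DIST wide;
# that guaranteed gap is the "interface space" of the analogy.
gaps = [math.hypot(b[0] - c[0], b[1] - c[1])
        for i, b in enumerate(blobs) for c in blobs[i + 1:]]
print(min(gaps) >= MIN_DIST)
```

The point of the sketch is only the invariant at the end: no matter how the blobs drift, a band of empty space persists between any two of them, and that is where connecting theories would have to live.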
I'm not saying that nobody has come up with interfaces at all in these contexts. We may already have several interface ideas, recognized as such or not. For instance, some of Minsky's theories which fall under the umbrella of Society of Mind are about connections. And maybe there are more abstract connection theories out there that can bridge gaps between entirely different theoretical psychology spaces.
Recently Gary Marcus bemoaned the lack of good meta-theories for brain science:
… biological complexity is only part of the challenge in figuring out what kind of theory of the brain we’re seeking. What we are really looking for is a bridge, some way of connecting two separate scientific languages — those of neuroscience and psychology.
At theme level 2 of this essay, most likely these bridges will be dependent on analogies as prescribed by Surfaces and Essences. At theme level 1, perhaps these bridges will be the connective tissues between cognitive abstraction manifolds.