I can feel it in the air, so thick I can taste it. Can you? It's the we're-going-to-build-an-artificial-brain-at-any-moment feeling. It's exuded into the atmosphere from news media plumes ("IBM Aims to Build Artificial Human Brain Within 10 Years") and science-fiction movie fountains... and also from science research itself, including projects like Blue Brain and IBM's SyNAPSE. For example, here's a snippet from the press release about the latter:

Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain's abilities for perception, action and cognition.

Now, I'm as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help clear the palate.

The Worm in the Pass

In the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making its way through a narrow mountain pass. In neuroscience it is the roughly 300 neurons of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our own or any other mammal's brain.

This little roundworm is the most studied multicellular organism this side of Alpha Centauri--we know how its 300 neurons are interconnected, and how they link up to the thousand or so cells of its body. And yet... Even with our God's-eye-view of this meager creature, we're not able to make much sense of its "brain."

So, tell me where I'm being hasty, but shouldn't this give us pause in leaping beyond a mere 300 neurons all the way to 300 million or 300 billion?

As they say, 300 is a tragedy; 300 billion is a statistic.

Big-Brained Dummies

About that massive Persian army: it didn't appear to display the collective intelligence one might expect for its size.

Well, as it turns out, that's a concern that applies to animal brains as well, which can vary in size by more than a hundred-fold--in mass, number of neurons, number of synapses, take your pick--and yet not be any smarter. Brains get their size not primarily because of the intelligence they're carrying, but because of the size of the body they're dragging.

I've termed this the "big embarrassment of neuroscience", and the embarrassment is that we currently have no good explanation for why bigger bodies have bigger brains.

If we can't explain what a brain a hundred times larger does for its user, then we should moderate our confidence in any attempt to build a brain of our own.

Blurry Joints

The computer on which you're reading this is built from digital circuits, electronic mechanisms built from gates called AND, OR, NOT and so on. These gates, in turn, are built with transistors and other parts. Computers built from digital circuits built from logic gates built from transistors. You get the idea. It is only because computers are built with "sharp joints" like these that we can make sense of them.
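
To make those sharp joints concrete, here's a minimal sketch (a toy illustration of the layering, not any particular chip's design): a half-adder defined entirely in terms of gates, and gates defined entirely by their truth tables. You never have to peek below the level you're reading.

def AND(a, b):
    # gate level: nothing below the truth table matters here
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    # circuit level: defined purely in terms of gates, not transistors
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, "+", b, "-> sum", s, "carry", c)

Each level is a crisp summary of the one beneath it, which is what makes the whole stack comprehensible.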

But not all machines have nice, sharp, distinguishable levels like this, and when they don't, the very notion of "gate" loses its meaning, and our ability to wrap our heads around the machine's workings can quickly deteriorate.

In fact, when scientists create simulations in which digital circuits evolve on their own--with the messy voltage dynamics of the transistors and other lowest-level components included--what they get are inelegant "gremlin" circuits whose behavior is determined by incidental properties of the way the transistors implement the gates. The resultant circuits have blurry joints--i.e., the distinction between one level of explanation and the next is hazy--so hazy that it is not quite meaningful to say there are logic gates any longer. Even small circuits built, or evolved, in this way are nearly indecipherable.

Are brains like the logical, predictable computers sitting on our desks, with sharply delineated levels of description? At first glance they might seem to be: cortical areas, columns, microcolumns, neurons, synapses, and so on, ending with the genome.

Or, are brains like those digital circuits allowed to evolve on their own, which pay no mind to whether or not the nakedest ape can comprehend the result? Might the brain's joints be blurry, with each lower level reaching up to infect the next? If this were the case, then in putting together an artificial brain we wouldn't have the luxury of building at just one level and ignoring the complexity in the levels below it.

Just as evolution leads to digital circuits that aren't comprehensible in terms of logic gates--one has to go to the transistor level to crack them--evolution probably led to neural circuits that aren't comprehensible in terms of neurons. It may be that, to understand the neuronal machinery, we have no choice but to go below the neuron. Perhaps all the way down.

...in which case I'd recommend looking for other ways forward besides trying to build what would amount to the largest gremlin circuit in the known universe.

Instincts

It would be grand if brains could enter the world as tabula rasa and, during their lifetime, learn everything they need to know.

Grand, at least, if you're hoping to build one yourself. Why? Because then you could put together an artificial brain that has the general structural properties of real brains and is equipped with a general-purpose learning algorithm, and let it loose upon the world. Off it'd go, evincing the brilliance you were hoping for.

That's convenient for the builder of an artificial brain, but not so convenient for the brain itself, artificial or otherwise. Animal brains don't enter the world as blank slates. And they wouldn't want to. They benefit from the "learning" accumulated by countless generations of selection among their ancestors. Real brains are instilled with instincts. Not simple reflexes, but special learning algorithms designed to very quickly learn the right sorts of things, given that the animal is in the right sort of habitat. We're filled with functions, or evolved capabilities, about which we're still mostly unaware.

To flesh them out we'll have to understand the mind's natural habitat, and how the mind plugs into it. I've called the set of all these functions or powers of the brain the "teleome" (a name that emphasizes the unabashed teleology that's required to truly make sense of the brain, and is simultaneously designed to razz the "-ome" buzzwords like 'genome' and 'connectome').

If real brains are teeming with instincts, then artificial brains will want to be too; why saddle one with the demanding task of learning everything in a single generation when it can be stuffed from the get-go with the wisdom of the ancients?
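
Here's a minimal sketch of why that stuffing pays off (toy numbers and a made-up habitat, not a model of any real animal), treating an instinct as a prior tuned by ancestral selection rather than learned from scratch:

import random

random.seed(0)
TRUE_RATE = 0.9  # in this made-up habitat, a cue predicts food 90% of the time

def posterior_mean(successes, failures, prior_a, prior_b):
    # Beta-Bernoulli update: estimated rate after the observed outcomes
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

trials = [1 if random.random() < TRUE_RATE else 0 for _ in range(2)]
s, f = sum(trials), len(trials) - sum(trials)

blank_slate = posterior_mean(s, f, prior_a=1, prior_b=1)  # tabula rasa: flat prior
instinctive = posterior_mean(s, f, prior_a=9, prior_b=1)  # "instinct": prior shaped by ancestral selection

print("after %d trials: blank slate estimates %.2f, instinct-equipped estimates %.2f (true rate %.2f)"
      % (len(trials), blank_slate, instinctive, TRUE_RATE))

In runs like this, the instinct-equipped learner lands close to the true rate after just a couple of observations, while the blank slate is still far off; that head start is what selection hands to real brains.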

And now one can see the problem for the artificial brain builder. Getting the general brain properties right isn't enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will take many generations of future scientists to unpack as they work out the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain's organization.

The Good News

Maybe I'm a buzzkill. But I prefer to say that it's important to kill the bad buzz, for it obscures all the justified buzz that's ahead of us in neuroscience and artificial intelligence. And there's a lot. Building artificial brains may be a part of our future--though I'm not convinced--but for the foreseeable, century-scale future, I see only fizzle.

~~

Mark Changizi is an evolutionary neurobiologist, and Director of Human Cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man. This piece first appeared Nov 16, 2011, at Discover Magazine.