    Don’t Hold Your Breath Waiting For Artificial Brains
    By Mark Changizi | February 2nd 2012 09:06 AM | 15 comments

    I can feel it in the air, so thick I can taste it. Can you? It's the we're-going-to-build-an-artificial-brain-at-any-moment feeling. It's exuded into the atmosphere from news media plumes ("IBM Aims to Build Artificial Human Brain Within 10 Years") and science-fiction movie fountains... and also from science research itself, including projects like Blue Brain and IBM's SyNAPSE. For example, here's a snippet from the press release about the latter:

    Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain's abilities for perception, action and cognition.

    Now, I'm as romantic as the next scientist (as evidence, see my earlier post on science monk Carl Sagan), but even I carry around a jug of cold water for cases like this. Here are four flavors of chilled water to help clear the palate.

    The Worm in the Pass

    In the story about the Spartans at the Battle of Thermopylae, 300 soldiers prevent a million-man army from making their way through a narrow mountain pass. In neuroscience it is the 300 neurons of the roundworm C. elegans that stand in the way of our understanding the huge collections of neurons found in our or any mammal's brain.

    This little roundworm is the most studied multicellular organism this side of Alpha Centauri--we know how its 300 neurons are interconnected, and how they link up to the thousand or so cells of its body. And yet... Even with our God's-eye-view of this meager creature, we're not able to make much sense of its "brain."

    So, tell me where I'm being hasty, but shouldn't this give us pause in leaping beyond a mere 300 neurons all the way to 300 million or 300 billion?

    As they say, 300 is a tragedy; 300 billion is a statistic.

    Big-Brained Dummies

    About that massive Persian army: it didn't appear to display the collective intelligence one might expect for its size.

    Well, as it turns out, that's a concern that applies to animal brains as well, which can vary in size by more than a hundred-fold--in mass, number of neurons, number of synapses, take your pick--and yet not be any smarter. Brains get their size not primarily because of the intelligence they're carrying, but because of the size of the body they're dragging.

    I've termed this the "big embarrassment of neuroscience", and the embarrassment is that we currently have no good explanation for why bigger bodies have bigger brains.

    If we can't explain what a hundred-fold larger brain does for its user, then we should temper our confidence in any attempt to build a brain of our own.

    Blurry Joints

    The computer on which you're reading this is built from digital circuits, electronic mechanisms built from gates called AND, OR, NOT and so on. These gates, in turn, are built with transistors and other parts. Computers built from digital circuits built from logic gates built from transistors. You get the idea. It is only because computers are built with "sharp joints" like these that we can make sense of them.
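The stack of sharp levels can be made concrete with a toy sketch (mine, not the article's; a NAND function stands in for the transistor level): each layer is defined purely in terms of the layer beneath it, so each can be understood without opening up the one below.

```python
# A toy abstraction stack. NAND is our stand-in for the transistor level;
# every higher-level part is defined only in terms of the level below it.

def NAND(a, b):
    """Lowest level: pretend this is wired up from transistors."""
    return 1 - (a & b)

# Gate level: built only from NAND.
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Circuit level: built only from gates.
def half_adder(a, b):
    """Adds two bits; returns (sum, carry)."""
    return XOR(a, b), AND(a, b)
```

Because the joints are sharp, you can verify `half_adder` by reasoning about XOR and AND alone; nothing about NAND, let alone transistors, leaks upward.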

    But not all machines have nice, sharp, distinguishable levels like this, and when they don't, the very notion of "gate" loses its meaning, and our ability to wrap our heads around the machine's workings can quickly deteriorate.

    In fact, when scientists create simulations that include digital circuits evolving on their own--and include the messy voltage dynamics of the transistors and other lowest-level components--what they get are inelegant "gremlin" circuits whose behavior is determined by incidental properties of the way transistors implement gates. The resultant circuits have blurry joints--i.e., the distinction between one level of explanation and the next is hazy--so hazy that it is not quite meaningful to say there are logic gates any longer. Even small circuits built, or evolved, in this way are nearly indecipherable.
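To suggest the flavor of such experiments, here is a minimal sketch of my own (pure logic only; the real experiments involved the messy analog dynamics this toy deliberately omits): a hill-climbing search rewires a handful of NAND gates until they compute XOR, and the winning wiring typically arrives with no legible gate-level story.

```python
import random

def run_circuit(circuit, inputs):
    # A circuit is a list of NAND gates; gate i may read any two earlier
    # signals (the inputs, or outputs of earlier gates). The last gate's
    # output is the circuit's output.
    signals = list(inputs)
    for a, b in circuit:
        signals.append(1 - (signals[a] & signals[b]))  # NAND
    return signals[-1]

XOR_TABLE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(circuit):
    # Number of truth-table rows the circuit gets right (0..4).
    return sum(run_circuit(circuit, inp) == out for inp, out in XOR_TABLE)

def random_gate(i, n_inputs=2):
    limit = n_inputs + i  # gate i may read signals 0..limit-1
    return (random.randrange(limit), random.randrange(limit))

def mutate(circuit):
    # Rewire one randomly chosen gate.
    c = list(circuit)
    i = random.randrange(len(c))
    c[i] = random_gate(i)
    return c

random.seed(1)  # for reproducibility
best = [random_gate(i) for i in range(4)]  # 4 NAND gates suffice for XOR
for _ in range(20000):
    cand = mutate(best)
    if fitness(cand) >= fitness(best):     # accept neutral drift too
        best = cand
print("fitness:", fitness(best), "wiring:", best)
```

The search usually finds a perfect XOR, but rarely the textbook four-NAND construction; nothing in the evolved wiring labels which gate plays which role, so even at this toy scale the circuit must be read bottom-up.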

    Are brains like the logical, predictable computers sitting on our desks, with sharply delineated levels of description? At first glance they might seem to be: cortical areas, columns, microcolumns, neurons, synapses, and so on, ending with the genome.

    Or, are brains like those digital circuits allowed to evolve on their own, and which pay no mind to whether or not the nakedest ape can comprehend the result? Might the brain's joints be blurry, with each lower level reaching up to infect the next? If this were the case, then in putting together an artificial brain we don't have the luxury of just building at one level and ignoring the complexity in levels below it.

    Just as evolution leads to digital circuits that aren't comprehensible in terms of logic gates--one has to go to the transistor level to crack them--evolution probably led to neural circuits that aren't comprehensible in terms of neurons. It may be that, to understand the neuronal machinery, we have no choice but to go below the neuron. Perhaps all the way down.

    ...in which case I'd recommend looking for other ways forward besides trying to build what would amount to the largest gremlin circuit in the known universe.

    Instincts

    It would be grand if brains could enter the world as tabula rasa and, during their lifetime, learn everything they need to know.

    Grand, at least, if you're hoping to build one yourself. Why? Because then you could put together an artificial brain having the general structural properties of real brains and equipped with a general purpose learning algorithm, and let it loose upon the world. Off it'd go, evincing the brilliance you were hoping for.

    That's convenient for the builder of an artificial brain, but not so convenient for the brain itself, artificial or otherwise. Animal brains don't enter the world as blank slates. And they wouldn't want to. They benefit from the "learning" the countless generations of selection among their ancestors accumulated. Real brains are instilled with instincts. Not simple reflexes, but special learning algorithms designed to very quickly learn the right sorts of things given that the animal is in the right sort of habitat. We're filled with functions, or evolved capabilities, about which we're still mostly unaware.
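One way to picture an instinct as a "special learning algorithm" (a gloss of mine, not the author's) is as a prior packed into an otherwise ordinary learner: both learners below estimate how often a cue signals food, but the innately biased one behaves sensibly from the very first observation.

```python
# Toy model: an "instinct" as a prior baked into a learner.

def tabula_rasa_estimate(observations):
    # Blank slate: plain frequency estimate (no opinion without data).
    return sum(observations) / len(observations) if observations else 0.5

def instinct_estimate(observations, prior_hits=9, prior_misses=1):
    # "Instinct" as a Beta(9, 1) prior: ancestral selection already
    # "learned" that this cue almost always signals food.
    hits = sum(observations)
    total = len(observations)
    return (prior_hits + hits) / (prior_hits + prior_misses + total)

one_glimpse = [1]  # a single positive observation
print(tabula_rasa_estimate(one_glimpse))  # leaps to certainty: 1.0
print(instinct_estimate(one_glimpse))     # already near the truth, stably
```

The Beta(9, 1) prior here is an arbitrary stand-in for ancestral "learning"; the point is only that the packed-in bias changes what a single lifetime of data can teach.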

    To flesh them out we'll have to understand the mind's natural habitat, and how the mind plugs into it. I've called the set of all these functions or powers of the brain the "teleome" (a name that emphasizes the unabashed teleology that's required to truly make sense of the brain, and is simultaneously designed to razz the "-ome" buzzwords like 'genome' and 'connectome').

    If real brains are teeming with instincts, then artificial brains also want to be; why be given the demanding task of doing it all in one generation when it can be stuffed from the get-go with wisdom of the ancients?

    And now one can see the problem for the artificial brain builder. Getting the general brain properties isn't enough. Instead, the builder is saddled with the onerous task of packing the brain with a mountain of instincts (something that will require many generations of future scientists to unpack, as they struggle to build the teleome), and somehow managing to encode all that wisdom in the fine structure of the brain's organization.

    The Good News

    Maybe I'm a buzz kill. But I prefer to say that it's important to kill the bad buzz, for it obscures all the justified buzz that's ahead of us in neuroscience and artificial intelligence. And there's a lot. Building artificial brains may be a part of our future--though I'm not convinced--but for the foreseeable, century-scale future, I see only fizzle.

    ~~

    Mark Changizi is an evolutionary neurobiologist, and Director of Human Cognition at 2AI Labs. He is the author of The Brain from 25,000 Feet, The Vision Revolution, and his newest book, Harnessed: How Language and Music Mimicked Nature and Transformed Ape to Man. This piece first appeared Nov 16, 2011, at Discover Magazine.

    Comments

    Gerhard Adam
    This is absolutely great!  I'm glad to see something that goes well beyond the cheerleading of unlimited future intelligence and human/machine interfaces as if they required little more than an organic USB port.
    Mundus vult decipi
    vongehr
    Mark, we do not understand the 300 neurons of the roundworm, or even a single human neuron, simply because we do not yet sufficiently understand a single neuron's dendritic structure. But this does not support your point. You argue that our trouble with 300 neurons implies we will not understand millions soon; however, I say you do not need to understand millions of subunits in a modular architecture in order to make one artificially. In fact, you do not necessarily need to understand them at all.

    Your military pictures here are nice writing, but in the end more misleading than helpful. I am not claiming that artificial brains that are the same as human's are around the corner, however:

    1) Artificial brains that are not like human ones but better for many tasks exist already.
    2) "Artificial" nanotech brains like human ones have existed for millions of years already, nature made them, and we may tomorrow find how to copy major steps of this without understanding every neuron, thus ending up with a "human brain" [and all its shortcomings - after all, we should avoid giving the impression that human brains are the endpoint and bestest thing possible.]

    I like the "packing a mountain of instincts" point, but imagine that this, via artificial evolution in a simulated environment made to prepare for the tasks the brains are desired to operate in, is manageable rather fast once the artificial brains are cheaply grown, which they may be surprisingly fast. The "buzz kill" is not that there are no such brains; the real buzz kill will have more to do with people finally realizing what kind of primitive cockroaches we humans really are. Of course, this point of view does not sell well at all. ;-)

    UPDATE: I combined this and some of the below criticism/discussion into the article "Robopocalypse Now".
    Gerhard Adam
    Your points are valid but, in fairness, there are many efforts currently underway that don't follow that particular perspective, and instead take the fanciful view that reverse-engineering the brain is relatively easy to do.

    So, while there may be many alternative approaches to doing as you suggest, the projects mentioned in the article are pursuing a pure digital approach to simulating and ultimately creating a human brain equivalent, for which I agree with Mark.  This isn't a viable approach since it rests too much on miracles and assuming that understanding the actual operation of the brain is simply a matter of increasing computing power. 
    Mundus vult decipi
    vongehr
    The overselling of particular AI variants as handing us artificial super-humans has been the same for more than 30 years now. That isn't the main charge here. Mark makes it sound as if we definitely will not have any artificial brains for a very long time to come, while actually we have them already. I find the reasons he gives, although maybe he did not intend this, sound too much as if animal brains are something really super special, like the only things with souls, which is a dangerously ignorant position we may pay dearly for soon. Computers have already invaded everywhere, robots have started to, and they may well relatively soon (read: immediately on usual biological evolutionary time scales) not even need biological substrates and not give a millisecond of thought to whether we think they are "as good as humans".
    Mark Changizi
    "but imagine that this, via artificial evolution in a simulated environment made to prepare for the tasks the brains are desired to operate in, is manageable rather fast once the artificial brains are cheaply grown, which they may be surprisingly fast."

    But the problem is knowing what simulated environment. To pick the right one, one needs to understand the "teleome," as I call it. Otherwise one won't know what to select artificial brains to do.
    Gerhard Adam
    Actually I don't think so, if I understand Sascha's point.  That would be true if we were trying to replicate a biological organism perhaps with technology, but not simply if we want to create a specialized machine for a particular task/environment.

    In other words, it would be somewhat similar to the issue of Watson, playing Jeopardy.  It's hardly a real "intelligence" and yet, there's no question that for the environment (Jeopardy) that it was "raised" in, it is excellent.  It certainly outpaces humans in that.

    So, if our "environment" is narrow enough and the tasks specific enough, then we've already demonstrated that there are machines with sufficient "intelligence" to outperform humans given those narrow boundaries.  Now we all know that that isn't actually "intelligence" in any biologically meaningful sense, but if the objective is to increase abilities, then they would certainly qualify in that respect.
    Otherwise one won't know what to select artificial brains to do.
    Again, if we don't think in terms of replicating biology, one could also imagine specialty machines that perform a variety of tasks (similar to today's calculators, chess computers, Watson, etc.). A human being could easily be considered a symbiont of these systems and determine "what" is desirable to do, while the machines could simply go out and do it, outperforming comparable humans at the task. In that respect, we may never replicate biology with technology, but we could certainly see all manner of technical implementations, far removed from the fantasy of AI, that could still have profound evolutionary consequences for humans.
    Mundus vult decipi
    vongehr
    It is fashionable to demand the next big "ome" after the genome became so important - I need only mention "Connectome". I think you thereby claim that in order to make some XYZ (here an artificial brain) one needs to "really understand" it (the 300 neurons, the teleome), while it is the very nature of evolutionary processes that they design XYZ without understanding anything. The use of evolutionary methods (e.g. designing fractal-like antennas/lenses) leads to very efficient "artificial" systems where we precisely do not understand why they work so well. "Tele" means we know what we want it to do, what XYZ is supposed to be for, what environment we want to sell it for. Industry always knows that very well.

    You almost hedge your bets here: If tomorrow we have artificial brains in robots that out-compete us, you will perhaps call them "merely" bio/silicon hybrids and perhaps even "natural" instead of "artificial", plainly because we do not "really understand" how they work, proving we did not really make them - hey, how could we if we do not know how, right? Well, no single person really understands a computer today either, nor can we make a modern computer without the help of robots and computers, so yes, they make each other, they start to program each other, and in this sense we are all together in one natural evolution and even computers are not "artificial" (in the sense of "made by us while understanding how they work"). We may hold on to insisting that there are no "artificial brains" as marvelous as ours, but it will not stop the robots from kicking our butts regardless.

    Evolution has no interest in reproducing exactly what is already there and technology is part of evolution. If artificial human brains do not come along, the reason is not because we cannot make them, but because other ("better") things emerge and they may have no interest in resurrecting dinosaurs.
    Evolution has no interest in reproducing exactly what is already there and technology is part of evolution. If artificial human brains do not come along, the reason is not because we cannot make them, but because other ("better") things emerge and they may have no interest in resurrecting dinosaurs.

    Yes, like early attempts at flight with flapping wings... Biology is messy, with high error rates. "Intelligence" is an operational concept; "intelligence" is not simply there. What is there is the many behaviors and processes we call intelligent. We can and already do create many types of intelligent behavior through technology, often superior to what evolution created.

    This is indeed a brilliant article! I've been trying to communicate some of these thoughts for a few years, but Mark is such a great writer. Anyway, there are a couple of additional points to make here:

    1. We don't understand the biology completely. And where we understand mechanisms, we don't understand what triggers them. We are just beginning to investigate the role of astrocytes. Cell signalling via proteins and amino acids is an active area of research. (I use "we" loosely because I am not a neuroscientist.)

    2. The major differences between the small pieces of intelligence that we can demonstrate and human intelligence are creativity, self-awareness, consciousness, emotions and the teleome. The drive to survive and to protect our young is so basic that it underscores our behavior. That fundamental drive has not been embedded in the "DNA" of artificial "brains." And that is just the most basic level of Maslow's hierarchy of needs.

    I think we have a long way to go to achieve true artificial intelligence. It is an exciting time to be a researcher! And perhaps our collective intelligence, made possible by the Internet and our now-constant connectivity, will speed us on our way.

    Gerhard Adam
    And perhaps our collective intelligence, made possible by the Internet and our now-constant connectivity, will speed us on our way.
    That's the worrying part that Sascha was talking about.  That technological symbiosis is also exerting its own evolutionary pressures on humans and may preclude ever building something like the envisioned AI.

    Just for the record, I don't believe that actual AI is even possible, primarily because of such poor understanding and definitions of what intelligence is.  However, if by some luck and circumstance we could replicate the behavior of the human brain, we'll discover that we'll have wasted our time in producing an unusable technology.
    Mundus vult decipi
    vongehr
    I don't believe that actual AI is even possible
    I is nature's AI, so if AI is impossible, I is impossible. Since I is possible, so is AI.
    if by some luck and circumstance we could replicate the behavior of the human brain, we'll discover that we'll have wasted our time in producing an unusable technology.
    I love the deeply sarcastic and biting punchlines that you sometimes come up with so nonchalantly. Yes, this is pretty much what I was saying. Why the hell would anything intelligent in their right mind willingly make something as horrible as a human as if we do not have way too many of those damn things already. Hoping for rationality, what will emerge will always lack plenty of aspects so that certain people will always be able to claim "well these systems are obviously not human".
    ....That technological symbiosis is also exerting its own evolutionary pressures on humans and may preclude ever building something like the envisioned AI. ...

    What! Are you suggesting we are not in control of our fate? That our intentions are not paving the way for history? Such ideas tend to terrify some people Gerhard. Ah well, the truth is coercive so they'll just have to get used to it.

    Gerhard Adam
    I'll go one step further and claim that we're not even in control of our intentions :)
    Mundus vult decipi
    SynapticNulship
    Nice article. At first I wasn't going to comment because there's nothing controversial here in my opinion. However, I think the first paragraph is somewhat arbitrary--

    I can feel it in the air, so thick I can taste it. Can you? It's the we're-going-to-build-an-artificial-brain-at-any-moment feeling.
    As far as I can tell from my vague notion of history (which is primarily through old books and articles), people thought that in the 40s, 50s, 60s, 70s, etc., ad nauseam. Perhaps it's the classic issue of each generation thinking it's special. And each generation also has its naysayers.

    To flesh them out we'll have to understand the mind's natural habitat, and how the mind plugs into it. I've called the set of all these functions or powers of the brain the "teleome"
    Once somebody made a list of "things the mind does" or something like that. I forgot who it was--it might have been the late Push Singh--and I've always kept that in the back of my mind when thinking about cognitive architectures. I've been slowly mashing together concepts that I pick up from animal learning, altricial skills vs. precocial skills, evo-devo, interaction design, perception as interfaces, perception related to action, product requirements and use cases, and various other concepts. As you can imagine, after considering all of that, how a mind plugs into its environment (which it is also part of) is of great importance. So it's a good idea in general, and I say cheers not to "the" teleome but to multiple teleomes.
    Great article. I too do not believe we will ever develop an artificial brain, and that is a good thing.

    I also agree that this is a rather non-controversial article unless you are a software designer with the dream of building a better human brain. For me, it is far more interesting to work toward developing an understanding of human behavior and use that understanding to better the lives of humans than it is to work toward building products that replace us.