David Chalmers is a philosopher of mind, best known for articulating what he termed the “hard problem” of consciousness, which he typically discusses by way of a thought experiment featuring zombies who act and talk exactly like humans and yet have no conscious thought (I made clear what I think of that sort of thing in my essay on “The Zombification of Philosophy”).

Yesterday I had the pleasure of seeing Chalmers in action live at the Graduate Center of the City University of New York. He didn’t talk about zombies, instead telling us his thoughts about the so-called Singularity, the alleged moment when artificial intelligence will surpass human intelligence, resulting in either all hell breaking loose or the next glorious stage in human evolution — depending on whether you typically see the glass as half empty or half full. The talk made clear to me what Chalmers’ problem is (other than his really bad haircut): he reads too much science fiction, and is apparently unable to snap out of the necessary suspension of disbelief when he comes back to the real world. Let me explain.

Chalmers’ argument (and that of other advocates of the possibility of a Singularity) starts off with the simple observation that machines have gained computing power at an extraordinary rate over the past several years, a trend that one can extrapolate to a near-future explosion of intelligence. Too bad that, as any student of Statistics 101 ought to know, extrapolation is a really bad way of making predictions, unless one can be reasonably assured of understanding the underlying causal phenomena (which we don’t, in the case of intelligence). (I asked Chalmers a question along these lines in the Q&A, and he denied having used the word extrapolation at all; I checked with several colleagues over wine and cheese, and they all confirmed that he did — several times.)

Be that as it may, Chalmers went on to present his main argument for the Singularity, which goes something like this:

1. There will soon be AI (i.e., Artificial Intelligence)
2. There will then soon be a transition from AI to AI+
3. There will then soon be a transition from AI+ to AI++

Therefore, there will be AI++

All three premises and the conclusion were followed by a parenthetical statement to the effect that each holds only “absent defeaters,” i.e., absent anything that may get in the way of any of the above.

Chalmers was obviously very proud of his argument, but I got the sense that few people were impressed, and I certainly wasn’t. First off, he consistently refused to define what AI++, AI+, or even, for that matter, AI actually mean. This, in a philosophy talk, is a pretty grave sin, because philosophical analysis doesn’t get off the ground unless we are reasonably clear on what it is that we are talking about. Indeed, much of philosophical analysis aims at clarifying concepts and their relations. You would have been hard-pressed (and increasingly frustrated) to find any philosophical analysis whatsoever in Chalmers’ talk.

Second, Chalmers did not provide a single reason for any of his moves, simply stating each premise and adding that if AI is possible, then there is no reason to believe that AI+ (whatever that is) is not also possible, indeed likely, and so on. But, my friend, if you are making a novel claim, the burden of proof is on you to argue that there are positive reasons to think that what you are suggesting may be true, not on the rest of us to prove that it is not. Shifting the burden of proof is the oldest trick in the rhetorical toolbox, and not one that a self-respecting philosopher should deploy in front of his peers (or anywhere else, for that matter).

Third, note the parenthetical disclaimer that any of the premises, as well as the conclusion, will not actually hold if a “defeater” gets in the way. When asked during the Q&A what he meant by defeaters, Chalmers pretty much said: anything that humans or nature could throw at the development of artificial intelligence. But if that is the case, and if we are not provided with a classification and analysis of such defeaters, then the entire argument amounts to “X is true (unless something proves X not to be true).” Not that impressive.

The other elephant in the room, of course, is the very concept of “intelligence,” artificial or human. This is a notoriously difficult concept to unpack, and even more so to measure quantitatively (which would be necessary to tell the difference between AI and AI+ or AI++). Several people noted this problem, including me during the Q&A, but Chalmers cavalierly brushed it aside, saying that his argument does not hinge on human intelligence, or computational power, or intelligence in a broader sense, but only on an unspecified quantity “G,” which he quickly associated with an unspecified set of cognitive capacities through an equally unspecified mathematical mapping function (adding that “more work would have to be done” to flesh out such a notion — no kidding). Really? But wait a minute: if we started this whole discussion about the Singularity using an argument based on extrapolation of computational power, shouldn’t our discussion be limited to computational power? (Which, needless to say, is not at all the same thing as intelligence.) And if we are talking about AI, what on earth does the “I” stand for in there, if not intelligence — presumably of a human-like kind?

In fact, the problem with the AI effort in general is that we have little progress to show after decades of attempts, likely for the very good reason that human intelligence is not algorithmic, at least not in the same sense in which computer programs are. I am most certainly not invoking mysticism or dualism here; I think that intelligence (and consciousness) are the result of the activity of a physical brain substrate. But the very fact that we can build machines whose computing power and speed greatly exceed those of the human mind, and yet are nowhere near being “intelligent,” should make it pretty clear that the problem is not one of computing power or speed.

After the deployment of the above-mentioned highly questionable “argument,” things just got bizarre in Chalmers’ talk. He rapidly proceeded to tell us that AI++ will happen by simulated evolution in a virtual environment — thereby making a blurred and confused mix out of different notions such as natural selection, artificial selection, physical evolution, and virtual evolution.

Which naturally raised the question of how we are supposed to control the Singularity and stop “them” from pushing us into extinction. Chalmers’ preferred solution is either to prevent the “leaking” of AI++ into our world, or to select for moral values during the (virtual) evolutionary process. Silly me, I thought that the easiest way to stop the threat of AI++ would be simply to unplug the machines running the alleged virtual world and be done with them. (Incidentally, what does it mean for a virtual intelligence to exist? How does it “leak” into our world? Like a Star Trek hologram gone nuts?)

Then the level of unsubstantiated absurdity escalated even faster: perhaps we are in fact one example of virtual intelligence, said Chalmers, and our Creator may be getting ready to turn us off because we may be about to leak out into his/her/its world. But if not, then we might want to think about how to integrate ourselves into AI++, which naturally could be done by “uploading” our neural structure (Chalmers’ recommendation is one neuron at a time) into the virtual intelligence — again, whatever that might mean.

Finally, Chalmers — evidently troubled by his own mortality (well, who isn’t?) — expressed the hope that AI++ will have the technology (and the interest, I assume) to reverse-engineer his brain, perhaps out of a collection of scans, books, and videos of him, and bring him back to life. You see, he doesn’t think he will live long enough to actually see the Singularity happen. And that’s the only part of the talk on which we actually agreed.

The reason I went on for so long about Chalmers’ abysmal performance is that this is precisely the sort of thing that gives philosophy a bad name. It is nice to see philosophers taking a serious interest in science and bringing their discipline’s tools and perspectives to the high table of important social debates about the future of technology. But the attempt becomes a not particularly funny joke when a well-known philosopher starts out by deploying a really bad argument and ends up sounding more cuckoo than Trekkie fans at their annual convention. Now, if you will excuse me, I’ll go back to the next episode of Battlestar Galactica, where you can find all the basic ideas discussed by Chalmers presented in an immensely more entertaining manner than in his talk.