Sorry Ray Kurzweil, AI Hasn't Improved Much Since The 1960s
    By Hank Campbell | July 13th 2011 05:48 PM | 59 comments
    Is there a 'Singularity', an ascension in which humans become machines or machines become sentient, coming in 2045?

    Given the current state of artificial intelligence and robotics, that would only happen if we were able to put a man on the Moon in a relative time scale of 3 minutes.  In reality, beyond selling books and tickets to conferences promising "Man + (Black Box Full Of Magic) = SINGULARITY", not much progress has been made in decades, much less anything leading us to believe the drastic inflection point needed will happen any time this century.
    The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.

    The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than I.B.M.’s celebrated victory on “Jeopardy!” this year with a robot named Watson. - In Search of a Robot More Like Us By JOHN MARKOFF, New York Times
    If you can't even fold towels, no miraculous ascension is happening by 2045.   To make this towel-folding robot even remotely interesting, they had to speed the video up 50X.   25 minutes to fold a towel is not all that great if you want your brain to be stored digitally by 2045.



    But they are making progress toward at least doing more than folding a towel.    “Our end goal right now is to do an entire laundry cycle,” said Pieter Abbeel, a Berkeley computer scientist who leads the group behind the towel-folder, “from dirty laundry in a basket to everything stacked away after it’s been washed and dried.”

    That's not to make fun of their research; it is interesting stuff.  It is to make fun of people who are selling books to gullible people with the promise that they will live forever if they can live to 2045.

    Comments

    Here's what everyone knows about Ray Kurzweil:

    Peddling alkaline water? Check.
    Taking massive quantities of pills without assessing the evidence? Check.
    Manipulating historical data to justify his conclusion? Check.
    Association with known cranks, kooks and other dingbats? Check.

    Rightful heir to Thomas Edison my ass. PZ Myers described him as the Deepak Chopra of the computer science crowd. Enough said.

    Gerhard Adam
    That's what happens when you propose to achieve something for which you don't even have a working definition.
    Mundus vult decipi
    “No one ever went broke underestimating the intelligence of the American public.”
    H. L. Mencken

    Okay, can we point out where exponential progress goes wrong? It is obvious that technology and our understanding of the universe and the power derived from the two is growing at a faster and faster rate each year. Do you think this will slow due to natural constraints? Instead of an exponential curve should this be an S curve or just plain not something that can be graphed and predicted? But I guess the real question is, will progress grind to a halt or even move backwards? Is a 'singularity' like event still inevitable? Given we remain in existence long enough of course.

    Hank
    Exponential progress is just a fetish.   We know that at 25nm growth has to stop given the physics of semiconductors, which means by 2015-18 we either have a new way to design processors or Moore's Law is dead.   Kurzweil's projection relies on an infinite Moore's Law - then a huge magical inflection point in AI.  That's not a roadmap, it's a fever dream.
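    (For perspective, a rough back-of-envelope sketch of what an "infinite Moore's Law" out to 2045 implies; the 2011 start year and the two-year doubling period below are illustrative assumptions, not figures from Kurzweil or anyone else.)

        # Illustrative arithmetic: how many doublings does a 2045 timeline assume
        # if computing power keeps doubling every two years from 2011 onward?
        start_year, end_year = 2011, 2045
        doubling_period_years = 2.0                  # assumed Moore's-Law-style cadence
        doublings = (end_year - start_year) / doubling_period_years
        growth_factor = 2 ** doublings
        print(f"{doublings:.0f} doublings -> roughly {growth_factor:,.0f}x more raw computation")
        # ~17 doublings, on the order of a 100,000x increase, all of which has to
        # come from somewhere if conventional CMOS scaling stalls as argued above.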
    If 25nm is a barrier, then why is Intel making 22nm ICs?
    Why is Intel building a $9 Billion Manufacturing facility for 22nm fab?
    Why does Intel have working 14nm ICs in the lab?
    Why does IBM have a functional 6nm gate transistor that is only 4-8nm thick?

    Hank
    You're confused about what these terms mean - a gate is not a CPU, nor is a wafer process - but I am not, so I will translate for you; using your understanding, the smallest CPU possible will be around 7.5nm, which is right in the timeframe I have said for when electrons stop moving and so do Kurzweil's book sales.  That will still be a factor of thousands away from the singularity wet dream.  Will we go beyond that?  I haven't seen anything close to quantum computing coming out of narrow lab applications.
    MikeCrow
    When someone says a 22nm fab, it's minimum feature size (well, that's how I know it; I can see some saying it's gate length, YMMV).
    When you say the minimum CPU size is 7.5nm, what do you think minimum feature size (or gate length, if you're so inclined) is, and when?
    If Intel is building a 22nm plant, we're ~2 generations of plants away from, say, a 6-7nm feature size (see the sketch below).
    Never is a long time.
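    (A quick sketch of how that generation count depends on the assumed shrink per node; both shrink factors below are illustrative assumptions, not Intel roadmap figures.)

        # How many fab generations from 22nm down to a ~6-7nm feature size? The
        # answer depends on how much each generation shrinks the linear feature size.
        def generations_to(target_nm, start_nm, shrink_per_generation):
            n, size = 0, start_nm
            while size > target_nm:
                size *= shrink_per_generation
                n += 1
            return n, size

        for shrink in (0.7, 0.5):   # ~0.7x is the classic node cadence; 0.5x is a full halving
            n, size = generations_to(7, 22, shrink)
            print(f"shrink {shrink}x per node: {n} generations -> {size:.1f} nm")
        # 0.7x per node: 22 -> 15.4 -> 10.8 -> 7.5 -> 5.3 (three to four generations)
        # 0.5x per node: 22 -> 11 -> 5.5 (two generations)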
    I think you're the one confused:

    You stated "We know at 25nm growth has to stop given physics of semiconductors,".

    You obviously meant 25nm fab, as the only thing commonly referred to in nm is half-pitch memory cell CMOS process size. If you meant CPU, you are grossly mistaken. When the industry says 32nm CPU what they actually mean is that the smallest feature size is 32nm.
    The CPU is what we commonly call the processor. It is nowhere near 25nm -- it is nearer to 25,400,000nm (2.54cm or 1 inch) on a side. CPUs are composed of (among other things) transistors, modern CPUs contain billions of them.

    The simple fact is 25nm is not a barrier, Intel has a 14nm node CMOS process working.
    IBM and NEC have both produced transistors that are 6nm or smaller.

    What the 14nm node means is that instead of 2.7 Billion transistors in a corei7 (32nm) you could have 10.3 Billion.
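    (A rough sketch of the density arithmetic behind that claim; the ideal-shrink factor below is a simplifying assumption, and actual per-node gains depend on the specific process.)

        # Naive density scaling: shrinking the linear feature size from 32nm to 14nm
        # would ideally allow (32/14)^2 as many transistors in the same die area.
        old_node_nm, new_node_nm = 32, 14
        ideal_factor = (old_node_nm / new_node_nm) ** 2       # ~5.2x
        core_i7_transistors_32nm = 2.7e9                      # figure quoted above
        print(f"ideal scaling factor: {ideal_factor:.1f}x")
        print(f"ideal 14nm count: {core_i7_transistors_32nm * ideal_factor / 1e9:.1f} billion transistors")
        # Scaling in practice depends on the process, which is presumably why the
        # 10.3 billion figure quoted above (~3.8x) differs from the naive ideal.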

    BTW, they are now stacking CPUs vertically, right on top of each other.
    http://www.hpcwire.com/hpcwire/2012-01-26/swiss_scientists_develop_3d_co...

    Hank
    Stacking is exactly the problem.  By 2045 it will still take an array of chips the size of a skyscraper to mimic the brain.  If that is a 'singularity', okay.  I appreciate that you share Kurzweil's unbridled optimism - he makes a lot of money off of you that way.  But optimism and belief are not a roadmap, they are a religion.
    So you concede that 25nm is no barrier at all, and now move your argument to an unsupported STRAWMAN of a skyscraper-sized "array of chips".

    Please provide the basis for your assumption. What percentage of 1 trillion cells in the brain do you propose are used as logic interconnection, as opposed to simply supporting metabolism? What percentage are only utilized as part of the autonomic system?

    What are the number of logic gates that you believe are needed to "mimic the brain"? After you provide those benchmark numbers, we will have a baseline to measure the computing requirements against.

    Hank
    So you concede that 25nm is no barrier at all,
    No, physics still applies, to religious acolytes and scientists alike.
    What are the number of logic gates that you believe are needed to "mimic the brain"?
    This question is the problem you have in understanding the core issue.  As Mark Changizi recently wrote (in better prose than I ever could), the 300 neurons of C. elegans are still lost to us so a human brain is not close regardless of how many logic gates you think you can create now or in your future wonderland.  That roundworm is the most studied multicellular organism in the world, we know how its 300 neurons are interconnected and how they link up to the thousand or so cells of its body but we're not able to make any sense of its "brain." 

    By the end of Moore's "Law", and it is ending as we speak, on the same timeframe I outlined 10 years ago, a die of 2015 will be able to do what a machine the size of a house could have done then - and we're going to be no closer to understanding the brain, much less using the childlike notion that we can just make a bunch of stuff redundant and it will be a brain.
    Just admit you didn't know what you were talking about when you said 25nm is a barrier. You obviously didn't understand b/c you stated a size of 7.5nm for a CPU, which is patently ridiculous. Based on your previous posts, I highly doubt you remotely understand whatever physics you think may apply.

    As far as your worm example, there you go using a common logical fallacy -- it is a non sequitur (it does not follow) to state we can't model and simulate the brain just because you allege we can't make sense of C. elegans. We can't ask a worm what it is 'thinking', likely because it doesn't.

    But we can ask people what they are thinking when we see parts of the brain light up on fMRI and PET scans, and other devices that actually see neurons firing.

    Hank
    I was actually trying to put it in terms you would understand, since you were mixing and matching different concepts. Wikipedia does a decent job on drift speed and I assume you understand the heat issues related to bigger die that are needed for these magical clock speeds you want, and there is a propagation velocity issue that happens even before electrons stop moving. Do a little bit of reading, you can ignore the math and just get a conceptual understanding and you will see the problem.

    I get that you are enthusiastic about Ray and that's fine, but you aren't going to go far being the crazy-haired guy telling everyone else they don't "remotely understand whatever physics you think may apply" - physics believes in you whether or not you believe in it, you can't just wish it away.
    MikeCrow
    The reason PC clock speed plateaued slightly above 3 GHz is heat. You're limited to about 100 watts with air cooling. You can go to around 250 watts with liquid cooling, though the liquid cooled Crays had ~$25,000 worth of coolant.
    Never is a long time.
    Hank
    No one is going above 4 anyway but that is just one issue.  The big issue remains that - at least using current technology - we know where it ends.  We can have confidence in future young physicists - when I was young, old guys were worried about a 100MHz bus and various interconnect problems because of radio frequencies - but it got solved.  That does not mean we should be irrational and think we will create a robot brain.
    I never brought clock speed up. But since you did, graphene processors operating at 100GHz have existed since 2010, and graphene also resolves lots of heat related issues.

    FYI stacking ordinary silicon based processors reduces power consumption which, depending on the architecture, may be by up to 2 orders of magnitude (that's 100 times), which reduces heat.
    Physics --> energy in = dissipated heat out; less energy in = less heat

    MikeCrow
    There are lots of circuits that operate at higher clock speeds than 3GHz, and if you keep the number of transistors down to limit the power dissipation from switching losses to ~100W for air cooling it works fine. It's a trade off: more transistors, lower clock speeds; fewer transistors, higher clock speeds. Intel's new 22nm 3D process will probably use less power (higher r_off), but they still have gate capacitance.

    As features shrink, gate capacitance does shrink, but r_on goes up, and interconnects get longer, so there's still a lot of switching losses. For modern processors, switching power is the major consumer of energy (charging and discharging the gate capacitor).
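    (To put rough numbers on the switching-loss point: a minimal sketch of the standard dynamic-power estimate, P = alpha * C * V^2 * f. Every parameter value below is an illustrative assumption, not a measurement of any particular chip.)

        # Dynamic (switching) power: each cycle, a fraction of the gates charge and
        # discharge their capacitance, dissipating P = alpha * C * V^2 * f.
        alpha = 0.05            # activity factor: fraction of transistors switching per cycle (assumed)
        c_per_gate = 0.5e-15    # effective switched capacitance per transistor, farads (assumed)
        n_transistors = 1.0e9   # order of magnitude for a modern CPU (assumed)
        v_dd = 1.0              # supply voltage, volts (assumed)
        f_clk = 3.0e9           # clock frequency, hertz (assumed)
        power_watts = alpha * c_per_gate * n_transistors * v_dd**2 * f_clk
        print(f"~{power_watts:.0f} W of switching power")
        # ~75 W with these guesses, near the ~100 W air-cooling ceiling mentioned
        # above; raising the clock or the transistor count pushes past it, which is
        # the trade-off described in this comment.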

    I'd like to see how stacking silicon reduces power consumption, got any links?
    Never is a long time.
    It's through the use of TSV -- through silicon via; this results in shortened interconnects as well as increased bandwidth due to the different architecture. This applies to IC/CPU, memory, and FPGAs.

    http://chipdesignmag.com/lpd/blog/tag/through-silicon-via/
    http://www.electronicsweekly.com/Articles/19/10/2011/52076/through-silic...

    There are several other papers and even a presentation or two on the web; search on 3D chip, through silicon via, etc.

    MikeCrow
    All this does is reduce output driver power, which will help (the second article suggests a 30% improvement). But it does nothing for what I was talking about.
    Never is a long time.
    Gerhard Adam
    But we can ask people what they are thinking when we see parts of the brain light up on fMRI and PET scans, and other devices that actually see neurons firing.
    Oh, so that's how it works.  Perhaps the next time you have a problem with your PC you can hook it up to scanners and watch which circuits are active.  Then you can tell me how feasible that approach is to debugging code.
    As far as your worm example, there you go using a common logical fallacy...
    It's not a fallacy.  The only thing fallacious here is the unbridled optimism of those that don't understand biology thinking they have a cursory understanding of what they are claiming to emulate.  The really stupid part is that you can't even define what you think you're capable of designing.

    Mundus vult decipi
    Actually, being able to ask your test subject what they are thinking, while watching a real-time brain imaging scanner, is analogous to running through a software debugger step-by-step, while being able to run a hardware diagnostic at the same time.

    And it is a logical fallacy to say "because we can't do A, we can't do B".

    And the bottom line is, we don't have to represent the brain exactly; there are metabolic processes in the brain that we never have to address to emulate cognition. We don't even have to emulate self-awareness to emulate machine cognition.

    There is no magical "mind" that is decoupled from the organic machine. I can prove that incontrovertibly: if the parts of a human brain that store memory and perform higher thought are damaged or destroyed, the memories are gone, along with personality traits that make that person who they are -- the record is filled with case studies. The greater the damage, the greater the deficit. Science has also shown that chemicals can irretrievably remove memories.

    Gerhard Adam
    You don't even know what you don't know and you think that you can resolve this complexity with a brain scanner?  No one is proposing a magical "mind". 

    Your analogy about a software debugger is seriously wrong.  You clearly think the entire process is linear and subject to such a simple analysis.  You're simply not appreciating the complexity of what's involved.

    Please don't respond with some trite response indicating how computing power is increasing, or that we'll have a working model within a year.  We are decades away from any real understanding.
    We don't even have to emulate self-awareness to emulate machine cognition.
    Well, you certainly know a lot about what doesn't need to be done.  How about providing a working definition of what "cognition" actually is?
    Mundus vult decipi
    Gerhard Adam
    Okay, can we point out where exponential progress goes wrong?
    Sure ... it assumes that there are no limits.  It should be obvious that there are physical limits in all aspects of the universe: limits on how small something can be, how fast something can be transmitted, etc.  To suggest that progress can continue indefinitely simply flies in the face of the obvious way the world works.
    Mundus vult decipi
    Limits? Hmmm....what limits have we reached so far? The sound barrier comes to mind. The speed of light you may agree with, of course there is the possibility of the warp drive or subspace. Yeah, science fiction at the moment, but didn't an invisibility cloak show up recently on the news? Maybe Moore's law will get a hand from a transistor that uses muons and bosons....that would be a lot smaller than a single atom. Or since we really only care about computational capacity maybe something new will replace the transistor altogether. Or maybe we'll come up with something far superior to binary. Not to further overuse this saying but your thinking is so inside the box all you can see are walls.

    Progress has continued for as far back in our history as we can see. It has never stopped and somehow you are predicting it will...and claiming your prediction is proven by science. What basis could you possibly have for this prediction? The fact that atoms are x size? What the heck does that prove?

    Sounds like you predict one day every idea will have been thought of, every innovation done and completed, and the human race will dust off its hands and decide it is time to relax...nothing more we can do here. I don't see that ever happening. I don't see a universe filled with limits and finite possibilities. Is this what you see?

    Gerhard Adam
    You obviously haven't been paying attention.
    http://www.cra.org/ccc/docs/init/Quantum_Computing.pdf

    Such views are common among people that think they are being optimistic, or  "open-minded".  It just isn't so. 
    The fact that atoms are x size? What the heck does that prove?
    It proves that you can't get to an element smaller than an atom. 
    I don't see a universe filled with limits and finite possibilities.
    Then, as I said, you haven't been paying attention.  It's one thing to consider how these might be accommodated, and how long it may take to reach them.  To pretend they don't exist is just silly.  It's like pretending that the sun will burn forever.
    Mundus vult decipi
    You have failed to consider three main things. First, as IT power grows it uses less resources because it concerns information and information is independent of mass. So, as IT has increased in power the cost of it has fallen. Second, whilst there are physical limits to how small an information bearing substrate can be it is quite small enough to change everything. Right now it is possible to build computers from subatomic sized quanta [there are as many atoms in a grain of sand as there are grains of sand on earth]. There are many experimental examples now and because they can have two values per bit a quantum computer would be able to do in a fraction of a second what a conventional computer would take 1000s of years to do - and be a lot smaller. Finally, it is not necessary to have massively powerful IT to make robots that can do anything we can do, including revolving a coin with two fingers. You and many have seen robots that can do quite extraordinary things and with 10 or 20 times the IT power per £ it could be good enough to do anything in a house such as cleaning, tidying, etc. But most in the business expect IT to be 50 to 100 times as powerful by 2015 because of developments in multicore processors [Glasgow university recently demonstrated a 1000 core processor], photonics, graphene, 3D and many other technologies in the pipeline and that is before the advent of quantum computers expected in the 2020s.

    It may be a bit much to upload the contents of the brain but it is perfectly possible to have strong AI to replace all goods and services, to have a 1000 times more powerful internet to link the whole world and to improve health to such an extent that we can live forever by 2030 to 2040.

    Hank
    You make a few points; one is unclear in its origin, that IT is independent of mass and somehow uses less resources as it grows.   Like Kurzweil, you believe in magic if you truly believe that.   Second, while simplistic machines can be made at the very small level, they are just that - the idea of making a whole bunch of simple things do something really complex is what got Kurzweil into this mess in the first place, it relies on a magical inflection point.

    Regarding what robots can do, no, there have been no "robots that can do quite extraordinary things", they can't even fold a towel in under 25 minutes, they are as intellectually simple as a windmill but, again, more processors will not be the solution.  I know Kurzweil says so, that is the nature of religious faith, but taking humans, adding a lot of computing power and believing we will have a singularity is no different than believing in a religious ascension.     All of the computing power in the world to-date has not extended the quality of our lives even a little - we live slightly longer than when Kurzweil first got this idea but not better.    We get the same diseases and chronic problems, we just live with them longer.    Other than belief and hope, there is no reason to think computers will change that.
    AI's not improved much?

    Watson? Deep blue? Self driving cars?

    I guess that's not different to 60s paper tape and punch cards then

    Hank
    You are confusing Artificial Intelligence with being able to sort faster.    A self-driving car is not thinking and Watson was just that - a contextual algorithm that allowed it to search for answers.   There was no AI in any of the examples you cite.

    A 3 year old can look at a cartoon image of a chicken I draw and know that's a chicken.   No computer can do that.  
    "You are confusing Artificial Intelligence with being able to sort faster. A self-driving car is not thinking and Watson was just that - a contextual algorithm that allowed it to search for answers. There as no AI in any of the examples you cite."

    I actually think I just had an aneurysm from reading this. What on Earth do you think the human brain is? It's a vast sorting machine - processing millions of different inputs and churning out outputs which will maximise its ultimate ability to pass on the organism's genetic code.

    The idea that the human brain is somehow "special" and "not just a machine" and that "it will never be able to be replicated by software" is a pervasive, insidious delusion which is generally enshrined by those who are either woefully uninformed, or should know better, but simply wish to feel that the human mind is in some way a transcendent entity, exempt from natural laws of physics.

    Despite the fact that there is now a plethora of research demonstrating that the brain is so mechanistic that even the entire concept of free-will is a red herring (1), there continues to be this enduring assumption that the brain is a black box and, for various dodgy metaphysical and pseudo-scientific reasons which usually leave neuroscientists at the point of apoplexy, will never be deciphered.

    The points you have raised are very clearly tinted by this point of view - I would advise dispensing with it, as it seems to impair your ability to appreciate the wonderful advances which have been made in AI research in the past few years.

    I would ask that you perhaps read over the wikipedia article entitled "AI effect" (when you have the time), which highlights the fact that practically every time an astonishing breakthrough has been achieved with AI research (like the Deep Blue chess match, or the Watson Jeopardy! game), this leads to a DEpreciation of the problem "Oh, we were obviously wrong! That clearly wasn't a REAL test of intelligence after all!" rather than an APpreciation of the AI.

    The AI winter is long over :)

    1. Our "conscious" selves are actually informed, by subcortical and unconscious regions of the brain, of decisions and movements AFTER the necessary neural cascades have already been initiated, so that action and "intent of action" occur effectively simultaneously, providing the illusion of free-will.

    Hank
    I actually think I just had an aneurysm from reading this. What on Earth do you think the human brain is? It's a vast sorting machine - processing millions of different inputs and churning out outputs which will maximise its ultimate ability to pass on the organism's genetic code.
    This simplistic and patronizing view of both neuroscience and consciousness is why 'singularity' people don't get much respect in science.    You regard the totality of existence as a series of logic gates and if you simply add enough of those linearly, you have this nonlinear inflection point to transcendence.   It's magic, hocus-pocus, not science or even a roadmap to a possible science much less a technology goal.
       
    Seriously, you are simplifying both biology and self awareness far too much.  In another article I used the train wreck of the digital standard in music to show everything that is wrong with Kurzweil's thinking about thought - he is truly stuck in the 1960s.   You shouldn't be either.

    Sorry about your aneurysm but you are creating a self-contained world where what you think should be correct about the brain is actual fact - it is not the case.
    "This simplistic and patronizing view of both neuroscience and consciousness is why 'singularity' people don't get much respect in science."

    The curious thing is, Hank, that there is an ever-growing acceptance of the hypothesis which is presented by Ray Kurzweil's "singularity" idea in academic circles, because it is simply a logical consequence of trawling through what we currently know. Please allow me to explain (I am not trying to be patronising, just clear):

    1. Straight off the bat, we know that there is nothing more to consciousness than neurons - that's a given, if you talk to any undergraduate/lecturer/professor of neuroscience, psychology, biochemistry, medicine etc they will tell you so - since every thought, every memory, every wish, dream, pain, desire, love, preference, political view, ethical view, cultural memory and ancestral memory etc is encoded in neurons. To think otherwise takes us back to Descartes' view of dualism (the mind and "soul" are separate) which has been binned by pretty much everybody who is not enthralled by some Abrahamic religion, and has no place in serious scientific discussion.

    2. Now that we know that consciousness is materialistically based (encoded in the material world), that means, by definition, that with enough computing power and/or time we can simulate it (in the same way we could simulate the weather X amount of days into the future, with enough raw computing power, or complex biomechanical systems). Even truly chaotic systems (which the brain isn't) could be simulated with enough power, it would just take an ENORMOUS amount to simulate a comparatively simple thing.

    3. Now that we know it is possible to simulate the brain, all it is is a question of power and software. The power is, near-as-dammit, here: the brain contains roughly 100 billion neurons with 10,000 (and that's being generous) synapses on each one, connecting it to the others. From our previous two points we can deduce that it is not so unreasonable to think of each synapse as a binary operator when putting it in computer form (since each synapse, in essence, gives out a 1 or a 0 - it either produces a set amount of neurotransmitter as a result of its host neuron reaching action potential, or it does not). I realise it's a bit more complicated than that - as I remember from my neuroscience textbook that there are autoreceptors, g-protein-coupled receptors and secondary messengers, and indeed half of all the ion gates in the entire brain are actually moved to different locations per day etc, but this doesn't actually change the amount of calculations by an awful lot (since even though these ion channels were moved, it is not complicating the circuit, merely restructuring it, and autoreceptors can just be thought of as minor binary systems governing the major binary systems). Let’s say that it changes by quite a large amount – it increases by 10-fold. So we have 10 x 10,000 x 100 billion, which gives you 10,000 trillion operations per second. In computer terms, that’s 10 petaflops. The fastest supercomputer in the world, the K-computer in Japan, has a capacity of 8.126 petaflops - so according to Moore's Law, even if it only lasts another decade, we should have no problem meeting the necessary hardware requirements to have human-level intelligence on a computer. The software is merely a case of looking at brain cytoarchitecture - knowing which cells to connect to which, thereby forming a circuit, and this is gradually being achieved through wonderful research conducted every day with ever-improving brain-scanning techniques. And THAT only applies if you want to copy a specific brain, rather than just generate a purely digital one out of the ether.
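    (The arithmetic in point 3, laid out explicitly; this simply reproduces the assumptions stated above, one operation per synapse per second with a 10x complexity factor, and does not settle whether those assumptions are adequate.)

        # Back-of-envelope reproduction of the estimate above (assumptions as stated there).
        neurons = 100e9             # ~100 billion neurons
        synapses_per_neuron = 10e3  # ~10,000 synapses each
        complexity_factor = 10      # fudge factor for autoreceptors, channel relocation, etc.
        ops_per_second = complexity_factor * synapses_per_neuron * neurons
        print(f"{ops_per_second:.1e} operations per second = {ops_per_second / 1e15:.0f} petaflops-equivalent")
        k_computer_petaflops = 8.126   # figure quoted above
        print(f"K computer: {k_computer_petaflops} petaflops")
        # 1e16 ops/s, i.e. "10 petaflops" under these assumptions, within ~25% of the
        # fastest 2011 supercomputer; whether a synapse really equals one operation
        # per second is exactly what is disputed elsewhere in this thread.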

    4. The final point is the time-frame, and this is somewhat more up to taste. For me, people seem to have been claiming that Moore's Law had only a few straggling years left to go before it finished for decades now. I see no particular reason why it should stop now - since we are nowhere near any "fundamental" barrier in computation (the actual "fundamental barrier" would be if every atom in a given system were a computational bit) so I trust that human ingenuity will continue to provide ever-accelerating returns through optics, memristors, quantum computers, 3D storage etc until we hit that barrier. Then we'll probably even overcome that somehow - but that's a different issue altogether.

    Going through each of the above points leads naturally to the ideas highlighted in Ray Kurzweil's singularity hypothesis. The "intellectual gag reflex" (which I thought was a wonderful way of putting it, as was written in the Time Magazine article on the singularity) which most people get when thinking about immortal robots and the singularity is thus an emotional response to the prospect, not a logical one.

    The most important point to bear in mind is that your argument that a reductionist, mechanistic brain is "magic, hocus-pocus, not science or even a roadmap to a possible science much less a technology goal" is not only in conflict with the scientific consensus and the evidence, but is actually the exact inverse of what an individual of your calibre who has such a high level of interest in these matters should be saying. The opposing view of dualism, or indeed even less extreme versions of it, is the epitome of the above quotation.

    Hank
    The curious thing is, Hank, that there is an ever-growing acceptance of the hypothesis which is presented by Ray Kurzweil's "singularity" idea in academic circles
    Well, there are two clarifications.  I say 'science' but you say 'academia' - I don't much care what people in the humanities or postmodernists think about what it takes to supplant millions of years of evolution - Kurzweil wants to sell books so he does care.

    You then proceed to make scientifically false claims but say "it's a given" - no, it is not a given, for example, that "there is nothing more to consciousness than neurons - that's a given, if you talk to any undergraduate/lecturer/professor of neuroscience, psychology, biochemistry, medicine etc they will tell you so"

    I hope they do.  I really, really do.   Part of the culling process in any field that Science 2.0 can help with is finding simple-minded idiots who shouldn't be teaching anyone anything and ridiculing them until they quit.   It worked with Hauser and Kanazawa so if I see a neuroscientist saying something so ridiculous, we can start there.   Please start including links.

    I don't have any sort of intellectual gag reflex on a singularity any more than I do alternative medicine.   What I instead look for are snake oil salesmen exploiting gullible people using science mumbo jumbo and vague correlations cocooned in a bubble where what they say is true.    Kurzweil is proof of the Bertrand Russell proposition that once a contradiction is allowed into a closed system anything can be proven.

    When Russell claimed that, a member of the audience said, "If 2 plus 2 equals 5, prove that I am the Pope."

    Russell replied "If 2 plus 2 is 5, then 4 is 5; if 4 is 5, then (subtracting three from each side) 1 is 2; you and the Pope are two, therefore you and the Pope are one."

    Congratulations.  Ray Kurzweil has made you Pope.  He simply got you to believe 2+2=5.
    I rather resent the fact that Mr. Campbell felt it fitting to delete the entire following discussion after the above point, thus misleading readers into thinking that he had provided a cogent enough argument to make me back down. Hopefully, the following will provide satisfactory evidence to readers that Mr. Campbell's arguments (and those of his supporters) disintegrate when put under any kind of scrutiny. (At least until he deletes it again... intellectual dishonesty is a disease from which the afflicted rarely recover).

    "Hank Campbell | 07/31/11 | 09:57 AM
    It is both extraordinary and terrifying that a blogger of a magazine calling itself Science 2.0 can be so misinformed. Check any neuroscience textbook, and indeed many biological and psychological books, and you will find that the premise from which they work is that every aspect of conscious being is entrenched in chemical exchange and neuronal cytoarchitecture.

    I sit here with 3 textbooks on the subject matter in my lap as I write this, they are each internationally recognised as being staple tomes for courses on neuroscience and psychology. They are:

    "Neuroscience: Exploring the Brain (Third Edition)" by Bear, Connors and Paradiso
    "Physiology of Behavior (Tenth Edition - Pearson International Edition)" by Neil R. Carlson
    "Perception (Fifth Edition)" by Blake and Sekuler

    Now, to my surprise I actually found it quite difficult to find explicit quotations illustrating the authors' opinions on the matter we are discussing - the authors deftly sidestep such value judgements most of the time, but nonetheless they are there. I bring to your attention a quotation from Carlson's book, Chapter 1, page 11 to support my point in this dispute:

    "...the following extract from [Hippocrates' work] On the Sacred Disease (epilepsy) could have been written by a modern neurobiologist:
    Men ought to know that from nothing else but the brain come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good, what are sweet and what are unsavory... And by the same organ we become mad and delirious, and the fears and terrors assail us... All these things we endure from the brain when it is not healthy (Hippocrates, 1952 translation, p. 159)"

    The most important part is the first sentence, as it illustrates a prominent author's (a textbook author, no less) view on the tenets adhered to by modern neurobiology. Another quotation from the noted neuroscientist Vernon Mountcastle supporting the materialistic view I am trying to impress upon you can be found in the Perception book, Chapter 1, page 2:

    "Each of us lives within... the prison of his own brain. Projecting from it are millions of fragile sensory nerve fibers, in groups uniquely adapted to sample the energetic states of the world around us: heat, light, force, and chemical composition. That is all we ever know of it directly; all else is logical inference. (1975, p.131)"

    On pages 4 and 6 of Perception, Nobel Prize-winning brain scientist Roger Sperry is quoted numerous times (granted, these are tailored to the idea of perception more than other cognitive processes, but that is, after all, the nature of the book, and the wider principle still stands). An example is presented below:

    "According to Sperry, perceptual experience is a "functional property of brain processing, constituted of neuronal and physiochemical activity, and embodied in, and inseparable from, the active brain" (1980, p. 204)."

    But a particularly revealing quotation about the authors' opinions can be found at the bottom-left of page 7, where they reference philosopher (yes, I know - not exactly ideal, but he has worded the point I am making beautifully) John Searle. Again, the important part is the first line:

    "Searle expresses the view most investigators in the field of perception have adopted:
    "Mental phenomena, whether visual or auditory, pains, tickles, itches, thoughts, and all the rest of our mental life, are caused by processes going on in the brain. Mental phenomena are as much a result of electrochemical processes in the brain as digestion is the result of chemical processes going on in the stomach and the rest of the digestive tract." (1987, p. 220)"

    Once again, I must reiterate - these textbooks are COMPULSORY reading for most neuroscience and psychology degrees - if we ignore the opinions of their authors we may as well ignore the entire field. The fact that I, a lowly undergraduate, have had to go to such lengths to point out to a man of your senior position exactly what it is that he is missing, is a great shame (though I must confess I cannot suppress a certain primal thrill at attempting to go toe-to-toe with an intellectual most likely double my age).

    If you poo-poo the idea that consciousness is the product of electrochemical interactions in the brain, you are going against the entirety of what modern biological science holds to be true.

    In response to everything you have said so far, I will ask you one single question: if you contend that consciousness is not governed and housed by the brain... where exactly is it?
    Mammago (not verified) | 07/31/11 | 13:53 PM

    If you poo-poo the idea that consciousness is the product of electrochemical interactions in the brain, you are going against the entirety of what modern biological science holds to be true.

    In response to everything you have said so far, I will ask you one single question: if you contend that consciousness is not governed and housed by the brain... where exactly is it?
    Again, you seem to be able to look up terms but you are not comprehending. Instead of your coarse definition, we can pick any one. To a physicist or an electrical engineer, we are solely inductance if we get right down to it. Everything about us is done with inductance, the brain is just a conduit.

    So I don't need a singularity to be a computer or a person at all, it can be a wave or a wire. All we need is enough energy at enough frequencies and *poof* we have consciousness.

    And yet we don't actually create consciousness no matter how much energy at the large and small scale we analyze.

    No one knows where consciousness really is - I get that maybe you are young and you need to believe we are in some apex of human science and that the answer is right there in your three textbooks (because they are compulsory so they must be equivalent to Newton) but it isn't there any more than the answers to diseases were there 500 years ago. We won't see real answers to even the basics of neuroscience, much less consciousness, in my lifetime. We sure will not come close to a singularity by 2045, which is the point I am addressing.

    Your beliefs about consciousness are your own but they are just that, beliefs - if you want to engage in science, that is another matter but it will be quite humbling when you are not young and know just enough to be wrong.
    Hank Campbell | 07/31/11 | 14:14 PM
    "Instead of your coarse definition, we can pick any one. To a physicist or an electrical engineering, we are solely inductance if we get right down to it. Everything about us is done with inductance, the brain is just a conduit."

    .....So? That's what I've been saying... isn't it? That one of the fundamental components of human being is electricity? You seem to be trying to imply that my view on consciousness is skewed in the direction of my subject and try to illustrate that by providing another example of skewing.... which sounds very similar to my own again. How does that work?

    "So I don't need a singularity to be a computer or a person at all, it can be a wave or a wire. All we need is enough energy at enough frequencies and *poof* we have consciousness."

    I'm sorry but I don't understand this at all - the syntax and vagueness cry out for clarification and elaboration, please.

    "And yet we don't actually create consciousness no matter how much energy at the large and small scale we analyze."

    Well, we don't analyse "energy" in that sense, for a start. But even if we were to sweep that aside: of course we haven't created it. Not yet. Otherwise we wouldn't be having this debate - but we're making slow but exponential progress. Which was the entire point of the discussion in the first place...

    "(because they are compulsory so they must be equivalent to Newton)"

    No, but it means that they're about as well-respected by the scientific community as you get in this particular field (your lack of appreciation for the level of peer-review that they would have had to undergo leaves me deeply concerned, and is illustrative, perhaps, of some underlying prejudice against neuroscience).

    "We won't see real answers to even the basics of neuroscience, much less consciousness, in my lifetime."

    You REALLY have to clarify what you mean by the "basics of neuroscience" - because it's not like we haven't already done extraordinary things with neuroscientific knowledge, such as... oh, I don't know, driving wheelchairs with EEG outputs or having robots controlled remotely by mouse brains: http://www.youtube.com/watch?v=1-0eZytv6Qk

    And with regard to your comments cautioning against wild youthful flights of fantasy, I would like to direct your attention to the first of Arthur C. Clarke's three "laws":

    "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."

    The most annoying thing in this entire exchange is that you have completely ignored points which contradict (with evidence, I might add) what you have stated. This is detailed below:

    First off, you claimed that thinking about the brain in terms of a mechanistic viewpoint was "magic, hocus-pocus, not science or even a roadmap to a possible science much less a technology goal".

    I then told you how this argument was flawed: "we know that there is nothing more to consciousness than neurons - that's a given, if you talk to any undergraduate/lecturer/professor of neuroscience, psychology, biochemistry, medicine etc they will tell you so".

    You then baulked at that assertion, demanding references from respectable neuroscientists etc: "I hope they do. I really, really do. Part of the culling process in any field that Science 2.0 can help with is finding simple-minded idiots who shouldn't be teaching anyone anything and ridiculing them until they quit. It worked with Hauser and Kanazawa so if I see a neuroscientist saying something so ridiculous, we can start there. Please start including links."

    Which I then promptly gave to you here: "I bring to your attention a quotation from Carlson's book, Chapter 1, page 11 to support my point in this dispute:", and here: "Another quotation from the noted neuroscientist Vernon Mountcastle supporting the materialistic view I am trying to impress upon you can be found...", and here: "An example is presented below:...", and here: "But a particularly revealing quotation about the authors' opinions can be found..."

    Incidentally, upon Google-searching your name and Ray Kurzweil's together, I found quite a few articles which you have written on his projections, each a damning indictment. Such as this one: http://www.science20.com/science_20/ray_kurzweil_pushes_singularity_back...
    I found it particularly interesting that comments there seemed to try to raise points against your arguments much as I have done here, but to no avail. This has led me tentatively to wonder whether there isn't some underlying grudge against the man himself, rather than his ideas? I personally hate ad hominem details such as this being brought into discussion (hence my bristling at comments on my age) but I must say that I felt this relevant enough to be included as a possible confounding factor of your judgement.
    Mammago (not verified) | 07/31/11 | 15:40 PM

    I also attack homeopathy. Basically, if someone is using a veneer of science to try and exploit gullible people for monetary gain, I will go after it. If you google me and animal rights activists, anti-vaccine scaremongers and plenty of others you will also find a common theme. But you don't have a fetish for those the way you seem to have about magic so you don't get as concerned.

    (your lack of appreciation for the level of peer-review that they would have had to undergo leaves me deeply concerned, and is illustrative, perhaps, of some underlying prejudice against neuroscience)
    No, but your desire to attribute motivation makes me think you should go into a social science rather than biology.

    You don't even understand how textbooks get written. There is no peer review. A buying editor at a trade publisher talks someone into writing a book for little money in the hopes the instructor makes it 'compulsory' so undergraduates have to spend a ridiculous amount of money on them. The latest research is not the best, as any biology student from 25 years ago who got told selfish genes were accurate or a physicist who learned string 'theory' more recently will attest. All peer reviewed, all in textbooks, yet all known within the science community to be too incomplete to hold up without far more evidence than existed and that never came to be. The notion that Science 2.0 might be biased against neuroscience for disputing one undergraduate's biased interpretation of consciousness is too ridiculous to consider seriously.
    Hank Campbell | 07/31/11 | 16:29 PM
    The notion that Science 2.0 might be biased against neuroscience for disputing one undergraduate's interpretation of consciousness IS absurd.

    It is also nothing like what I said. I was going out on a limb that you, personally, might have a certain dismissive attitude towards the subject (please don't raise straw-men). And this conclusion was not reached without due cause - as any impartial reader of this thread will no doubt observe.

    I notice you still have not even touched on any reasons for exactly why my initial statement that the brain could be thought of as mechanistic, as indeed it is by the vast majority of researchers in the field of neuroscience, psychology etc. (which I keep on having to explain to you), was untenable. Nor have you addressed your failure to address the evidence I provided and reminded you of in my last post.

    While I do not have intimate knowledge of the process through which textbooks are constructed, I am aware of the myriad references which are provided in each one I cited. And these referenced papers most certainly ARE subject to ruthless peer-review.

    I must also point out that "selfish genes" and string theory are not good examples to bring into the limelight - since Richard Dawkins' book "The Selfish Gene" was heralded as a "a silent and almost immediate revolution in biology" (Grafen & Ridley (2006), Richard Dawkins: How A Scientist Changed the Way We Think, p.72), and variants of string theory are frequently turned to with great zeal by such prominent physicists as Brian Greene and Stephen Hawking as being key to providing a fundamental theory of everything.

    Look, I know that you are most likely an extremely experienced researcher in your field and a dedicated scientist (otherwise you wouldn't have been given a post here). However, it is important to address criticisms which are sensible (and I think that, given the level of detail I have put into mine and the respectable references I fall back on, that is not an unreasonable attribution) with due explanation. And I appreciate that you may not have the time to address every criticism presented to you in all of your blogs, you are probably a very busy man.

    But when you are vague in your counter-criticisms (not once have you provided a name or paper or article of someone other than yourself for me to check up on in this discussion), unaccommodating and derisive in your replies and persistently ignoring and side-stepping solid evidence and logical counter-arguments, you leave yourself open to being perceived as unscientific, and your arguments being perceived as groundless.

    A good scientist should point out the flaws in his opponent's argument carefully, with great clarity. And in this particular case, ideally should be warmly discussing the interesting pitfalls which can occur from inexperience and the reasons why they occur. The purpose is to explain to the inexperienced individual why they are wrong (if they are) and then encourage them along the correct path, not lean on the fact that they are inexperienced until they acquiesce.

    Nonetheless, I have rather enjoyed our discussion :) it has been very helpful for me in distilling exactly what kind of scientist I want to avoid becoming when I attain a position of authority in scientific circles. (This is NOT intended as an insult, but rather a hint, just to be clear).

    Thank you for your time, and I'll be looking out for your next post! :)

    Take care,
    Mammago.
    Mammago (not verified) | 07/31/11 | 18:26 PM

    ...since Richard Dawkins' book "The Selfish Gene" was heralded as a "a silent and almost immediate revolution in biology" (Grafen&Ridley (2006), Richard Dawkins: How A Scientist Changed the Way We Think, p.72)
    Not sure what your point is, since it isn't an accurate view of biology or evolution. It was simply a somewhat arbitrary counter-argument to group selection (which was equally flawed).
    Gerhard Adam | 07/31/11 | 23:15 PM

    ...we know that there is nothing more to consciousness than neurons - that's a given, if you talk to any undergraduate/lecturer/professor of neuroscience, psychology, biochemistry, medicine etc they will tell you so...
    ... and they will all be unequivocally wrong, except in the most trivial sense of their definition. That's like arguing that mathematics is just numbers, or that music is just notes. While true in a simplistic sense, it says absolutely nothing.

    The single most serious failing in all such discussions about consciousness is that they all avoid or miss the role of emotions. This is a fatal flaw in any consideration of intelligence or consciousness. This is precisely why all attempts at artificial intelligence uniformly fail. Once again, this doesn't mean that sophisticated algorithms can't be developed, or that specific functions can't be programmed to be more reliable, accurate, faster, etc. than humans. It just isn't intelligence.

    For you to claim that there is "nothing more to consciousness" than whatever ... simply illustrates your lack of understanding regarding the phenomenon. If you want to argue that you're backed by academics, then I would challenge them to come on this blog and defend such an untenable position.
    Gerhard Adam | 07/31/11 | 17:42 PM
    Emotions are housed in neuronal exchanges too - so once again, it's all about the neurons (and some hormones to a limited extent - if, for example, your adrenal gland sitting atop your kidney is pumping out too much adrenaline, it will, of course, affect your consciousness).

    In fact, emotional artificial intelligence is already being intensively investigated - here's a very nice article on the subject: http://www.newscientist.com/blogs/culturelab/2011/01/emotion-20.html
    Mammago (not verified) | 07/31/11 | 18:34 PM

    ...so once again, it's all about the neurons...
    Yes, if you accept the notion that your posts are all about the letters. If that doesn't sound quite right to you, then perhaps you'll get an idea as to why your other statements miss the mark as well.
    Gerhard Adam | 07/31/11 | 22:37 PM
    "That's like arguing that mathematics is just numbers, or that music is just notes. While true in a simplistic sense, it says absolutely nothing."

    These counter-arguments are not particularly compelling either - since it's perfectly easy to make the case of:

    Mathematics is built upon numbers and the relationships between them (which, let's face it, it is - there can't really be any argument there).
    Music is built upon notes and the relationships between them (again, there can't really be any argument there).
    And so consciousness is built upon neurons and the relationships between them (e.g. types, structure and number of connections to others).

    So we are (yet again) led right back to the same exact thing I have been saying throughout this entire discussion:

    It's all. About. The neurons.
    Mammago (not verified) | 08/01/11 | 06:40 AM
    Your point isn't disputed, it's simply trivial. As I said before, if the take-away from this is that your post is simply about letters, then you get the drift.
    Gerhard Adam | 08/01/11 | 13:05 PM
    The analogy that concepts expressed in writing are simply about the relationship between letters is not the same as the analogy that concepts expressed in mathematics are simply about the relationships between numbers, or that harmonies expressed in music are simply about the relationship between notes. The latter two are self-evidently correct (in fact, they are almost the definitions of the subjects), while the first is not correct.

    You say that my point is trivial - yet two of the three analogies (which are the only two self-consistent ones) that you provide to indicate my point's triviality actually support the argument I have been making all along, as demonstrated above.

    This infantile repetitive stating of the same flawed concept has now been undermined and debunked in almost every way that it is possible to do so. As such, this exchange is becoming most tiresome...
    Mammago (not verified) | 08/01/11 | 17:11 PM"

    Hank
     (At least until he deletes it again... intellectual dishonesty is a disease from which the afflicted rarely recover).
    Apparently not so devastating as paranoia.  I delete nothing unless it is spam.  I have had my doubts about Akismet but if they flagged you as spam, I am thinking they are smarter than I believed.
    The example of the low speed of the towel robot doesn't really fit with your argument. When we have an algorithm that can solve a problem slowly, the mere advance of Moore's Law is enough to get the useful capability. If the towel-folding takes 1200 seconds, as in that video (in fact it has since been greatly sped up by the researchers through algorithmic improvements), then another ten doublings of computer power will let it fold the towels in real-time.
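    (The arithmetic behind the "ten doublings" figure, assuming the ~1200-second run quoted above and the commenter's premise that folding speed scales directly with computing power.)

        # If the bottleneck is purely computation, ten doublings of computing power
        # shrink a 1200-second towel-folding run to roughly real time.
        run_time_seconds = 1200        # ~20 minutes, figure quoted above
        doublings = 10
        speedup = 2 ** doublings       # 1024x
        print(f"{run_time_seconds / speedup:.1f} seconds per towel after {doublings} doublings")
        # ~1.2 seconds; at a two-year doubling cadence that is about 20 years away,
        # assuming the speed really is limited by computation rather than by the
        # algorithm or the robot's actuators.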

    Arguments against AI progress should be about things that our programs can't do at all, even slowly using lots of hardware. There are plenty of such examples, without using an example that is clearly solvable with cheaper computation.

    Hank
    the mere advance of Moore's Law is enough to get the useful capability
    Moore's Law is only a law in the colloquial sense, though Kurzweil never seems to realize it.  We are fast approaching the end of semiconductors' ability to improve but all it takes is more black box magic and some more books can be sold.   People + (new unknown computer physics) + (AI magic) = Singularity.

    I can also project we will live on Pluto and say 'because science is awesome, I know they will do it!' using his same irrational optimism, but that isn't really a prediction or a roadmap, it is fortunetelling.
    If you're that confident in a near-term slowdown of the rate of growth of computation per dollar, why not bet on it? Either at longbets.org or otherwise? We would just set a benchmark price per computation and bet on whether it is reached by a given year at even odds.

    Hank
    Unless you are reinventing the electron, I am right and you are instead hopeful about the physics issue we face in the computer world.    Given your religious belief in Kurzweil, virtually anything that happens by 2045 will be 'proof' to you - you think the Watson computer is great AI rather than just being a good algorithm, basically a contextual Google.    But you also think that is the extent of the human existence so there isn't much way to bet on a singularity; you may think your washing machine is artificial intelligence.

    So forget 2045; by all means propose a way that, given physics, Moore's "law" will continue after 25nm.  I'll give you 1000:1 odds as long as the curve remains.
    I don't buy Kurzweil's 2045 projections, and said nothing about Watson.

    You noted impending barriers to improvement of the existing methods of chipmaking (which I never denied), saying that some major innovation would be needed to continue fast improvement in computation per dollar past 2015-2018.

    There are neutral sources, like the top500 list, computing periodicals, and so forth, that report computation per dollar figures, and longbets.org provides a neutral adjudicator of bet terms, so we don't need to worry about ex post reinterpretations. We just pick a computing per dollar benchmark for 2020 (or later, if you're not confident enough for that year) and bet on whether the benchmark will be reached by that time.

    If you will really give $1000:$1 odds against computation per dollar still growing on trend in 2020 or 2025, I am happy to set up that bet.
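    (A sketch of how such a benchmark could be made concrete; the 2011 baseline figure and the 1.5-year doubling period below are illustrative assumptions for the example, not terms either party has agreed to.)

        # Example of turning "computation per dollar keeps growing on trend" into a
        # concrete, adjudicable benchmark for a given year.
        baseline_year = 2011
        baseline_flops_per_dollar = 1.0e8    # illustrative 2011 baseline (assumed)
        doubling_period_years = 1.5          # assumed price/performance trend

        def projected_flops_per_dollar(year):
            doublings = (year - baseline_year) / doubling_period_years
            return baseline_flops_per_dollar * 2 ** doublings

        for year in (2020, 2025):
            print(f"{year}: {projected_flops_per_dollar(year):.2e} FLOPS per dollar on trend")
        # The bet then reduces to checking a neutral source (e.g. the top500 list)
        # against numbers like these when the year arrives.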

    Hank
    Write it up.  At 25nm we're out of density so unless your laptop is the size of a car, Moore's law is finished.  I suppose some quantum miracle might happen but more likely computers will become like a washing machine and you just replace them when they break and not because of performance.   By all means pick 2025 if you want - you don't seem to realize the more you pad the time, the less chance you have of being right. 
    The offer of 2025 was in case you were unwilling to bet at such ludicrous odds so near in the future. I will draft a bet text with suggested independent measures of performance and email it to you.

    There are plenty of reasons to doubt that the "singularity" will occur by 2045 (if at all), but if there's one thing I've seen time and time again, it's that the relentless, optimistic pursuit of progress is what brings about our modern scientific and technological miracles; the optimists and dreamers shape and build the world. The cynics and skeptics inherit and enjoy these accomplishments, while continuing to criticize others for dreaming.

    Artificial Intelligence is a wild card, and just as some are foolish for definitively stating when and how it will arise, those who condemn any chance for its existence are equally naive.

    With whom do I relate the most? Well, I can't say that I'm either an optimist or a pessimist, but I do know one thing...I much prefer to hear the speculations of a creative dreamer, rather than a talentless cynic.

    Bonny Bonobo alias Brat
    I much prefer to hear the speculations of a creative dreamer, rather than a talentless cynic.
    Ha ha, I have to agree.

    My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND) & (ALS)' Parkinson's and Alzheimer's can be found at http://www.science20.com/forums/medicine
    Gerhard Adam
    I much prefer to hear the speculations of a creative dreamer, rather than a talentless cynic.
    How quaint that you slanted the argument in such a manner.  It's a wonderful view to take.  Unless, of course, you are listening to the speculations of a talentless dreamer, rather than a creative cynic.

    After all, it's easy to indulge in fantasies, rather than facing the real problems that science presents.  That way you don't actually have to take responsibility for ideas or follow them to their conclusion.  Instead you can simply ride on the coattails of those that do the work and claim that "you knew it all along".
    Mundus vult decipi
    Every single technology that exists today was denied by many people yesterday. And every single technology that will exist tomorrow is denied by many people today. The moral of the story is that many people never learn.

    Hank
    That's too broad to have any meaning.  We should all be typing on mechanical computers if all predictions from the past have to take hold in the future, as you contend.  Why am I not teleporting through the aether right now?  Someone predicted it, so it must be true, according to your comment.

    There is a big gap between 'Kurzweil says a singularity will happen and in an infinite universe he will someday be right' and making an actual practical argument for how and when it can happen.  Kurzweil's roadmap is basically taking a mechanical computer from 1890 and imagining what it will be like in 2000 - not even close.
    This article appears to have been written by an opinionated, uninformed middle schooler.

    Seriously? Hasn't improved much since the 1960s? What kind of delusional person could even form such a thought? All one has to do is compare AI in gaming to see the rapid change. 20 years ago, the best AI we could muster was Bowser in Mario. Compare that to AI of today, destroying people in Chess, making real players look like noobs in shooters, offering millions of unique interactions in Skyrim.

    This article is so clueless that it almost seems like it's fake, is this the Onion?

    I'll also add that the complexity that goes into making a robot fold towels is quite impressive, and is more of an intellectual feat than the writing of this article was.

    Gerhard Adam
    All one has to do is compare AI in gaming to see the rapid change. 20 years ago, the best AI we could muster was Bowser in Mario. Compare that to AI of today, destroying people in Chess, making real players look like noobs in shooters, offering millions of unique interactions in Skyrim.
    What a quaint view of AI.  I guess it's appropriate if we abandon any meaningful definition of the word "intelligence".
    Mundus vult decipi