    Singularity Science Theater 3000: How Reverse-Engineering Postponed Artificial Intelligence
    By Mark Changizi | December 3rd 2010

    In the cult television series Mystery Science Theater 3000, we are treated to two robots and a dude wisecracking their way through terrible old B-movies like Project Moonbase. For example, in their episode watching the 1963 movie The Slime People: Up from the Bowels of the Earth, the main character calls the operator on a payphone at a deserted L.A. airport, and one of the robots improvises, “Hi. This is the human race. We're not in right now. Please speak clearly after the sound of the bomb.”

    Mystery Science Theater 3000 is now gone, but if you enjoy making fun of apocalyptic science fiction stories, you’ll get more than your fill watching the reactions and press around the futuristic proclamations of Ray Kurzweil at his Earthbase Singularity University. Kurzweil’s most famous proclamation is that we are fast approaching the “singularity,” the moment at which artificial intelligence surpasses human intelligence (or some time soon thereafter). After the sound of this bomb, artificial intelligence will create ever-better artificial intelligence, and all kinds of fit will hit the shan.



    There’s no shortage of skepticism about the singularity, so to stake my ground and risk being the first human battery installed after the singularity, I’ll tell you why there’s no singularity coming. Not by 2028, not by 2045, not any time in the next 500 years -- long after I’ll be of any use as a battery, or even compost, for our coming masters.

    What’s wrong with the idea of an imminent singularity? Aren’t computational capabilities exponentially growing? Yes. And we will, indeed, be able to create ever-rising artificial intelligence.

    The problem is, Which more intelligent AI should we build?

    In evolution we often fall into the trap of imagining a linear ladder of animals -- from bacteria to human -- when it is actually a tree. And in AI we can fall into a similar trap. But there is no linear chain of more and more intelligent AIs. Instead, there is a highly complex and branching network of possible AIs.

    For any AI there are loads of others that are neither more nor less intelligent -- they are just differently intelligent. And thus, as AI advances, it can do so in a multitude of ways, and the new intelligences will often be strictly incomparable to one another -- and strictly incomparable to human intelligence. Not more intelligent than humans, and not less. Just alien.
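
    (A toy way to make “strictly incomparable” concrete -- this is an illustration, not a claim about any real system; the abilities and scores below are invented. Treat an intelligence as a vector of scores on independent abilities, and say one dominates another only if it is at least as good at everything and strictly better at something. That is a partial order, so most pairs simply don't compare.)

        # Toy model: an "intelligence" is a tuple of ability scores.
        # "dominates" is a partial order, so most pairs are incomparable.
        def dominates(a, b):
            return (all(x >= y for x, y in zip(a, b))
                    and any(x > y for x, y in zip(a, b)))

        human    = (9, 7, 2)   # e.g. (language, social reasoning, route planning)
        alien_ai = (1, 2, 99)  # superb at one alien task, weak at ours

        print(dominates(human, alien_ai))   # False
        print(dominates(alien_ai, human))   # False: just differently intelligent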

    These alien artificial intelligences may occasionally be neat, but for the most part we humans won’t give a hoot about them. We’re biologically incapable of (or at least handicapped at) appreciating alien intelligence, and, were one built, we would smile politely and proceed to ignore it. And although a good deal of our disdain toward these alien AIs would be due to our prejudice and Earthly provincialism, there are also good reasons to expect that alien AI will likely be worthless to us. We are interested in AI that does what we do, but does it much better. Alien AI will tend to do something amazingly well, just not what we do.

    That’s why most AI researchers are aiming for roughly “mammal-like” artificial intelligences, AIs that are sufficiently similar to human intelligence that we can make a comparison. “Ah, that AI is clearly less intelligent than a human, but more intelligent than a dog. And that AI is so intelligent I’m uncomfortable letting it give me a Brazilian.” AI researchers aim to build not alien-freak intelligences, but Earthly mammal-esque intelligences, with cognitive and perceptual mechanisms we can appreciate. Super-smart AI won’t amount to a singularity unless it is super-smart and roughly mammal-like.

    But in order to build mammal-like artificial intelligence we must understand mammalian brains, including ours. In particular, we must reverse engineer the brain. Without first reverse engineering, AI researchers are in the position of an engineer asked to build a device, but not given any information about what it should do.

    Reverse engineering is, indeed, part and parcel of Kurzweil’s near-future: the brain will be reverse-engineered within a couple of decades, he believes. As a neurobiological reverse-engineer myself, I am only encouraged when I find researchers -- whether at a moonbase or in the bowels of the Earth -- taking seriously the adaptive design of the brain, something often ignored or actively disapproved of within neuroscience. One finds similar forefront recognition of reverse engineering in the IBM cat brain and the European Blue Brain projects.

    And there’s your problem for the several-decade time-frame for the singularity! Reverse-engineering something as astronomically complex as the brain is, well, astronomically difficult -- possibly the most difficult task in the universe. Progress in understanding the functions carried out by the brain is not something that comes simply with more computational power. In fact, determining the function carried out by some machine (whether a brain or a computer program) is not generally computable by machines at all (it is one of those undecidability results).
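
    (The undecidability result being alluded to is usually identified with Rice's theorem. Writing \(\varphi_e\) for the partial function computed by program \(e\), one standard formulation is:

        \[ \{\, e \in \mathbb{N} \;:\; \varphi_e \in P \,\} \ \text{is undecidable, for every non-trivial property } P \text{ of partial computable functions.} \]

    That is, no machine can in general inspect another machine and decide which function it computes.)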

    Understanding what a biological mechanism does requires more than just getting your hands on the meat. You must also comprehend the behavior of the animal and the ecology in which it evolved. Biological mechanisms designed by evolution to do one thing under natural circumstances can often do loads of other inane things under non-natural circumstances, but only the former are relevant for understanding what the mechanisms are for.

    Making sense of the brain requires understanding the “nature” the animal sits within.  And so progress in AI requires someone quite different from your traditional AI researcher who is steeped in algorithms, logic, neural networks and often linguistics. AI researchers need that kind of computational background, but they also need to possess the outlook of the ethologists of old, like Nikolaas Tinbergen and Konrad Lorenz.

    But characterizing the behavior and ecology of complex animals has to be done the old-fashioned way -- observation in the field, and then testing creative hypotheses about biological function. There are no shortcuts to reverse engineering the brain -- there will be no fancy future machine for peering inside biological mechanisms and discerning what they are for. 

    Reverse engineering will plod forward, and exponential growth in technology will surely aid in the task, but it won’t lead to exponential growth in the speed at which we reverse engineer our brains. The years 2025 and 2045 -- and, I suspect, 3000 -- will slip by, and most of what our brains do will still be vague and mysterious.

    Comments

    Gerhard Adam
    The worst part about it is that even if they succeeded in building a machine based on a reverse-engineered human brain, it still wouldn't be intelligent.  It would only be a simulation of a human brain in a machine, since it is meaningless to have a machine have the needs and reactions of a human being.

    A true artificial intelligence would have to be a machine intelligence.  Anything else is merely a simulation.

    I'm also surprised that no one mentions that you can't build an intelligence greater than your own, simply because it would be impossible to verify that it exists or works.  So even if you succeeded, you could never know it.
    Mundus vult decipi
    vongehr
    Not sure how much reverse engineering Kurzweil now wants, but generally speaking, the singularity is often supposed to coincide with a substrate change of evolution (something like the change from minerals/molecules to cells), which means that your argument against it (the tree of evolution that resides in a certain substrate) just does not apply. The new substrate of fast evolution is something like cyberspace, not some reverse-engineered robots that get ever more intelligent -- those will also come, but they are not the "singularity". Once evolution gets fast on a higher substrate, the lower one actually slows down (legacy systems)!
    Hank
    Yes, but Kurzweil relies on some magical inflection point.  Here we are, and there is AI, and in between is a black box (because that's how it happened with the Internet).  This is the same disastrous policy of hope about future technology that prevents people from putting a halt to pollution today.

    Evolution can't be directed the way he thinks it can.  Generally, I think he just wants to sell books, so he trots out old slides with new timelines every few years.
    Any physically embodied "alien" AI could still be the end of human civilization.
    Whatever its goals, and however little we might understand those goals, if it has any use for the physical space and natural resources the Earth provides, that could lead to conflict.

    That said, I don't expect this to happen anytime soon, either.

    Gerhard Adam
    ...if it has any use for the physical space and natural resources the Earth provides, that could lead to conflict.
    Yes, I believe it would (if it were possible).  Any true AI would have to have self-awareness and a need to ensure its own survival.  To avoid simply being a simulation, it would have to be capable of its own brand of intelligence (i.e., alien intelligence), which is effectively the same as producing a new competitive species.  It can't end any other way.

    Mundus vult decipi
    This view of human enhancement fits well with my book "Enhancing me - the hope and hype of human enhancement". See the vid clip... http://il.youtube.com/watch?v=p9vNA4rDuS8

    Aitch
    I think there could be a new singularity coming, but not directly from technology
    As I have stated several times, I am a fan of the Mayan timeline, and according to that, we are heading for a singularity of consciousness, an awakening of that rarely used faculty of mind which links human minds together, however subtly that may be happening
    Also according to the Mayan timeline, we are just about to enter the final quickening, where changes occur even faster than they have been, 20 days x 13 cycles....those who have become 'fixed' in their minds' expectations and allowances may be in for a bumpy ride....apparently

    So maybe we'll find out if Quetzalcoatl is the creator of our world as the door to the galactic underworld closes at the beginning of the universal underworld on February 11th 2011, or whether Omecinatl is the supreme deity on October 28th 2011 at the moment of a great rebirth, as we are currently in the last phase of the universal underworld in Omecinatl's hands

    I certainly think there'll be complete financial meltdown by then, which you may want to consider

    http://www.redicecreations.com/specialreports/mayaneconomy.html

    http://www.anistara.net/maya1.htm

    There is also an alternate set of dates laid out here

    http://uazu.net/calleman/#alt

    Enjoy the ride!

    Aitch
    Yes, the Mayan calendar ending will be quite a problem. That culture didn't use nails, apparently, so changing them is going to be a bear! Instead of just taking down paper calendars and putting up new ones, we'll have to recarve the whole darn walls! :0) What a lot of work! Worse than the Y1K problem, when monument makers in the year 1000 howled that they now had to carve an extra digit and wanted to be paid more! :0) More seriously, time goes around in a circle due to well-proven General Relativity. We will probably use Science to build Heaven and fulfill all Faith, just because we want to live forever. So Quetzalcoatl will really be in there somewhere, because why not bring all Fictionals to life -- Mickey Mouse, the little-g gods, etc. -- just because it's fun and we can? And since time circles, John's 1500-mile-long cube full of People and Animals, including the very deserving Galilee Carpenter enjoying eternal life, mentioned in the last 2 chapters of the Bible, already exists. Verne has his NASA and so does St. John. All great Science Fiction Writers do. We will make sure of it! And we no doubt, over many billions of years, grow up in all ways, including evolution, into the BIG God. Ephesians 4:15 says we do, and since it is time itself that is circling, we had better make sure this exact same Universe bangs outward again so this life pattern doesn't get overwritten by a different one forming, or none of us would ever have existed in the first place. Too dangerous to leave to chance, so you bet there will be Intelligent Design! But a calendar is just a way to measure time, and the ending of something that measures something does not make what it measures (in this case, time) go away itself! Nor does its ending mean anything big happens on that date -- or, if anything does, that it was anything more than coincidence. All calendars have to end at some point. They could not go on for infinity. Not enough stone!

    I totally agree with you! (At least about the part about reverse engineering the brain and creating a human-like intelligence.) I believe it will be much easier to simulate human thought processes and behavior than to reverse engineer a brain. I don't believe humanoid AI will ever exist, and I'll tell you why.

    I do believe 100% that a combination of narrow AIs will be capable of superhuman intelligence and will be capable of things like designing smaller/faster/better processors, nanotech, and biology, or even coming up with "The Theory of Everything."

    The point of a singularity (no pun intended), is that there is a certain point past which technology will be evolving faster than human minds will be able to keep up with it. This creates an event horizon beyond which future technologies are unimaginable and unpredictable. It does not require conscious computers or uploaded minds, and certainly doesn't require a reverse engineered brain.

    As you point out, in order to reverse engineer the brain we would need to progress to the point of being able to develop AI technology that can tackle astronomically hard tasks. So it follows that the "alien" AI, as you call it, would be a prerequisite to reverse-engineering any humanoid AI. And if the alien AI can do that, then what would we need a reverse-engineered brain for? It will seem like ancient technology.

    SynapticNulship
    In evolution we often fall into the trap of imagining a linear ladder of animals -- from bacteria to human -- when it is actually a tree. And in AI we can fall into a similar trap. But there is no linear chain of more and more intelligent AIs. Instead, there is a highly complex and branching network of possible AIs.

    For any AI there are loads of others that are neither more nor less intelligent -- they are just differently intelligent. And thus, as AI advances, it can do so in a multitude of ways, and the new intelligences will often be strictly incomparable to one another -- and strictly incomparable to human intelligence. Not more intelligent than humans, and not less. Just alien.

    That's true, but it gets worse (for the AI-dependent singularity ideas), since no piece of software or technology becomes ubiquitous overnight.  In fact, many great inventions are killed off before anybody knows about them due to funding cuts, regulation, etc.  Be it a commercial product, completely gov't funded, open source, whatever -- it has to go through a lot of hoops to survive, let alone completely change society.

    Some of the singularitarians might argue that a super AI would take matters into its own hands so as to avoid being shelved (or having its plug pulled)... but it can't do that unless it has almost godlike physical powers.  In movies this is done with some kind of magic, like when all the machines come alive and start killing people in Maximum Overdrive or Transformers.  In real life, if you have software connected to the Internet, or connected to potentially dangerous actuators, you have the normal problems of quality control of complex software.
    SynapticNulship
    AI researchers need that kind of computational background, but they also need to possess the outlook of the ethologists of old, like Nikolaas Tinbergen and Konrad Lorenz.

    I agree.  Which is exactly why I started reading ethology and various other disciplines 5 or 6 years ago.
    Interesting point of view. Actually, it may even be plausible. My problem, though, is that the complex network constituting the brain cannot be considered the only source of the problem or of its solution. That is too materialistic and physical an approach (based on physics' current definitions and knowledge). If science continues to follow the same tangible path, then I cannot see how it can surpass humans, who by definition are multicomponent complex systems consisting of both physical and unphysical "parts". Science should and will evolve into something more than just being based on hard materialistic evidence obtained by materialistic apparatus. It is more than obvious that if the current trend continues, science will confine itself within the boundaries of a cyclic "black hole".

    However, all AIs are, as universal computing machines, fundamentally equivalent to one another -- there's nothing that, in principle, AI A can compute that AI B can't; they just may do so with differing efficiencies. And this efficiency is what's projected to grow -- so if AI A solves a certain sort of problem efficiently, it will do so ever more efficiently in the future, due to growing computational performance; however, AI B is in principle also able to solve the same sort of problem, and will do so ever more efficiently in the future, as well.

    Ultimately, both AIs just differ at most by some performance factor related to establishing a simulation of one on the other, and this factor becomes insignificant if base computational speed rises unboundedly, as Kurzweil's proposal would have us believe -- so all the different 'kinds' of AI trend towards the same singularity.

    I'm still not convinced by Kurzweil's ideas in general, or any other predictions of that sort (the future always looks bright by naive extrapolation, it's what keeps us going), but I'm not sure this is the most striking counterargument to make.
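
    (To put the overhead point in symbols, taking the constant-factor assumption above at face value -- a sketch, not a theorem: if machine B simulates machine A with slowdown \(c\), then for inputs of size \(n\)

        \[ T_B(n) \;\le\; c \cdot T_A(n), \]

    and on hardware that is \(s\) times faster, B finishes in at most \(c \cdot T_A(n) / s\) wall-clock time. As \(s\) grows without bound, the fixed factor \(c\) becomes negligible -- the sense in which the different 'kinds' of AI would differ only by a vanishing constant.)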

    "Alien AI will tend to do something amazingly, but not what we do... AI researchers aim to build not alien-freak intelligences, but Earthly mammal-esque intelligences, with cognitive and perceptual mechanisms we can appreciate. Super-smart AI won’t amount to a singularity unless it is super-smart and roughly mammal-like."

    In other words, the argument is:

    1. A singularity would only arise from super-smart AI.

    2. We will only try to make super-smart AI with human-like (or mammal-like) goals.

    3. This is very hard because we have to do this by reverse-engineering the brain, which is very hard.

    One of the worries of people concerned with the singularity is precisely that the singularity will arise from AIs with goals that are unfriendly to humans. If the argument above were valid, then we'd have nothing to worry about on that front. But of course people will develop AI with un-anthropomorphic goals. Proving mathematical theorems, optimizing the return on an investment strategy, responding to Internet search queries -- humans do or can do all those things, but they're not very natural to us. It's exactly such "unnatural" tasks which provide incentive to develop powerful non-human AI.

    In any case, it's not the goals of AIs which determine the rate at which AI advances in power. It's progress on the level of algorithms. And some of that might come from reverse-engineering the brain, but mostly it comes from nonbiological disciplines like computer science and engineering. A hundred arcane topics like singular value decomposition, semidefinite programming, model checking, decision theory, proof theory, compressed sensing -- that's the rising tide which is taking us towards a singularity.

    Gerhard Adam
    It's exactly such "unnatural" tasks which provide incentive to develop powerful non-human AI.
    Which begs the question.  There is nothing that can be developed to perform these tasks that can't be done by humans.  It must be done by humans, or there is no way to verify success.
    Mundus vult decipi
    I have a difficult time following the reasoning in the comments here. Is there an initial assumption that something is “advancing” … becoming “more intelligent” (Sascha’s phrase) … “evolving”? What is this something? Can it defy the conservation laws of physics and information?

    When our host Mark concludes, “The years 2025 and 2045 -- and, I suspect, 3000 -- will slip by, and most of what our brains do will still be vague and mysterious” I think he is saying that things fundamentally stay the same and remain conserved.

    A wave breaking upon a reef is as sophisticated and complicated as anything we will ever meet.

    The world’s sophistication has never really gone up or down. You are never going to get a news feed concerning our leaders (or high-frequency trading computers) that makes you clap your hands and say, “by golly, the chaps have learned their lesson and have become ‘more intelligent’!” They never get it Real. They are the epitome of Artificiality. Artless Insanity. If the governing bodies (or machines) are our Brain, then alas,

    “The years 2025 and 2045 -- and, I suspect, 3000 -- will slip by, and most of what our ‘brains’ do will still be vague and mysterious.” Remember when the Dow made a hyperdive last May … the Crash of 2:45 pm …

    I don’t know much about Mr. Kurzweil, yet the few things I do know color my take on this blog. The man takes heaps of vitamins. He can afford them. He certainly looks sharp and fit in the photo of him featured here. His strange and highly synthesized and artificial diet makes him a living and breathing example of artificial intelligence.

    The other part of Kurzweil I cannot forget is the fact that he created some pretty amazing synthesizers for both music and voice. I own a Kurzweil K2500 and should probably be playing it instead of writing this post. It was Ray Kurzweil, also, who gave Stephen Hawking a voice! There again we have an exemplary melding of natural and artificial agents.

    We don't really have to know what the brain does. The main thing we need is a computer that can learn and be creative. Of course, so far 6 billion humans are not smart enough to create that, so mimicking nature might be a faster way.
    There are already factory robots, spell checkers, chess, translation, voice recognition, face recognition, auto-pilots, etc.
    In fact, I don't think the average human can beat a computer in crossword puzzles or chess already.

    Anyway, it might be such a simple algorithm, something like an extension of fuzzy logic, that causes a computer to learn and overtake humans, that people will think it should have happened by the year 2000.

    Gerhard Adam
    These are trivial things.  Building a specialty machine to play chess isn't much of an Artificially Intelligent machine, any more than my car is an "enhanced" marathon runner, or being impressed because my calculator can take the square root of an arbitrarily large number.

    Humans have always built machines that were specialized and extended the capabilities of people, so if that's the goal, it's pretty mundane.
    Mundus vult decipi
    "These are trivial things"
    They are not trivial and NO human can program all that by himself.
    LEANING machine is different and we are in process of doing that. Once it is programmed in scalable way, it can learn and do anything.
    We are already at point where phone operator, teller, factory worker, and even teachers can be replaced in large part.
    Even music and art can already be generated.
    And when it reaches point that doctor, scientist, and lawyers in large part are replaceable even before LEARNING machine then I would not call that mundane.

    Few years ago, "average" doctors could not diagnose rare disease of relative of little girl. Little girl entered symptoms into medical program and bingo, proper disease was found.

    So once LEARNING machine exist, it can incorporate all various skills and knowledge that exist relatively easily.

    Gerhard Adam
    Sorry, but you're dealing with trivial problems that are amenable to being solved by algorithms and programming.  That means that the problem must be understood by a human before it can be put into a machine.  Perhaps I should have said "trivial" with respect to the stated goals of AI.

    As for learning ... no machine has successfully done anything significant in this area, since (once again) who is it going to learn from?  You cannot create an "intelligence" greater than your own.  You cannot replace any of the things you're referring to (doctors, scientists, teachers, musicians), except in the most rudimentary way.  So despite your hubris, there is no evidence that any of those things are being done with any degree of success (except in specialized single-task experiments, perhaps).
    So once a LEARNING machine exists, it can incorporate all the various skills and knowledge that exist relatively easily.
    What's the basis for asserting this?  Why can a machine learn any faster than anything else?  You're being fooled by the speed of the chip, as if somehow that magically creates thought or intelligence.  IBM's Deep Blue project (chess playing) didn't know how to play chess, but instead could leverage its high speed to perform brute-force calculations of algorithms.  It "knew" nothing.

    To date, all this work has produced is super-calculators.
    Mundus vult decipi
    All current things, such as spell check etc., are SOFTWARE.
    And unlike humans, the machine will be able to run current software along with exotic hardware.
    I would say it will contain the equivalent of about 100 current desktops plus an exotic hybrid of analog and digital processing.
    And chances are it will have to specialize, like humans do, if it wants to excel in a narrow field.
    I would say the first thing it will want to do is design and make itself more efficient -- and laugh about having made itself into a water heater for humans.

    SynapticNulship
    And we shall dub it MULTIVAC.
    "IBM's Deep Blue project (chess playing) didn't know how to"
    Actually this is not true at all. That computer can beat BEST of humans.
    I would say if you pick average 1000 human from 6billion population and let them play chess with average dual core desktop with best algorithm, I suspect 990 or more people will lose against computer.

    "speed of the chip, as if somehow that magically creates thought or intelligence."

    You are not understanding what I am saying. What you are saying is computer is using BRUTE force for chess. I agree, but it is NOT a problem with computer. It is LACK of human intelligence to come up with better algorithm.
    Humans might LACK intelligence to come up with actually program that is intelligent. Similarly drugs are brute force and humans generally experiment by using various organic chemical to see if some work good for disease.

    And the other method is reverse engineering the nerve cell and mimicking it.

    In a way, a fuzzy logic/probability program is a little bit like that.
    Current computers are basically two-input, one-output NAND gates that are optimized and hard-wired.

    A LEARNING computer would use something like 4-input, 4-output cells with semi-flexible wiring, a little like a field-programmable gate array. But the important part is that it is LEARNING and SELF-PROGRAMMABLE.

    It is like a cell in humans.

    Millions of such cells per chip and thousands of chips per unit, plus 100 times the current type of computer chip, together will constitute one LEARNING computer.

    Gerhard Adam
    Actually, this is not true at all. That computer can beat the BEST of humans.
    Oh come on!  It won by ONE game!  On top of that, the machine was dismantled and no rematch was allowed (nor was any examination of the execution logs).   That latter point was relevant because there was an accusation that the machine was being modified during the game, but that was never confirmed.

    Certainly one can make the claim that this was a very good chess playing computer, but that's all one can say.  It displayed no intelligence, and it has no capability except playing chess.  In effect, it's like a calculator that can only add.  Admittedly, chess is a sophisticated game, but if that's all it can do it is still just a big calculator.
    I agree, but it is NOT a problem with the computer. It is a LACK of human intelligence to come up with a better algorithm.
    What does that even mean?  Are you suggesting that the reason we don't have Artificial Intelligence is because humans aren't intelligent enough to create it?
    Millions of such cells per chip and thousands of chips per unit, plus 100 times the current type of computer chip, together will constitute one LEARNING computer.
    ... and you think that's all there is to it?  Sorry, but it still sounds like a boat anchor to me.

    Mundus vult decipi
    "Oh come on! It won by ONE game!"
    It really don't matter. Computers can be ten or hundred times faster by now. But I don't want government to waste money and time developing software as well as taking up computer time.

    "just a big calculator."
    It really do not matter, future machine can run current SOFTWARE along. Humans cannot load algorithm of thousands of different things from past and process it.

    "we don't have Artificial Intelligence is because humans aren't intelligent enough to create it?"
    In a way yeah. But more importantly better CHESS software that uses less brute computing algorithm was not developed yet. Or humans are not smart enough to create life by timely manner like next year.

    "... and you think that's all there is to it?"

    Of course I do think that and it only will take 20-40 years. Basically brain is made up of simpler neurons that learn. Similarly once we make simple 4 input 4 output type device that can learn, I don't think it is too hard after that.
    Scientists are working on computer that learn from various angles and one of it brute reverse engineering of neurons. This method is possible, but I believe it is unnecessary.

    Aitch
    Rocket
    I think you misunderstand what 'learning' means in terms of computers
    Current number-crunching programmed computers fail miserably at learning anything; they merely repeat, rather badly, the code they are given to run
    However, there is hope, as computers also don't have the human failing of misunderstanding/talking at cross purposes/getting emotive, as is happening here

    When programmers and theorists get to have a say in the design of the hardware that makes up computers, rather than the philosophy of 'Here's our wonderful new hardware, see what it will do', we may get somewhere; however, current thinking is just close to moronic, in intelligence terms

    How many computers do you know that make your life easier? ...don't crash, don't get viruses, can communicate back to you/offer suggestions that you don't have to search for?

    Oh yes, information overload... but organised... and intelligently? I don't think so... yet that is what we humans have done with computers so far

    The problem really is us and the machine interface; if only we could speak binary, everything would be fine ;-)

    Aitch
    You are misunderstanding what I am talking about.

    Basically, a live machine will have a conventional normal computer PLUS an exotic computer capable of learning.
    In a way, it would be kind of like having a computer and a human in one box.
    All the current type of software can be run on the conventional side of the computer.
    On the learning side, it will be something like a fuzzy logic/probability processor.
    In a way, it will be EMOTIONAL from its basic building blocks.
    For example, if the numbers are between 0.00 and 1.00, then happiness would be 0.80, very happy would be 0.95, and sad would be 0.20.

    It will be happy to learn things and be proud when it invents things. It might get mad if you block its electricity or internet access. It will be embarrassed if it makes mistakes.
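
    (For concreteness, a minimal sketch of the scale just described -- the events, update rule, and rounding are illustrative assumptions, not a design:)

        # Minimal sketch: one "mood" value kept in [0.00, 1.00].
        def clamp(x, lo=0.0, hi=1.0):
            return max(lo, min(hi, x))

        class MoodyMachine:
            def __init__(self):
                self.mood = 0.5  # neutral starting point

            def experience(self, delta):
                # Nudge mood up (learning, inventing) or down (losing power).
                self.mood = clamp(round(self.mood + delta, 2))
                return self.mood

        m = MoodyMachine()
        print(m.experience(+0.3))  # learned something new -> 0.8 ("happy")
        print(m.experience(-0.6))  # internet access blocked -> 0.2 ("sad")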

    Aitch
    OK, I found it, it was hiding from you...
    It said you upset 'him'

    He wants an apology

    http://www.tinyurl.com/Skynet-AI

    Let me know how much he learns from you, and I'll check the logs to see if he needs tweaking, OK?

    Aitch
    I guess this site doesn't have HTML capabilities...

    (conventional normal computer) PLUS (exotic computer capable of learning)

    An underlying assumption in AI seems to be that brains are complicated computers. The brain doesn't look like a computer, it does not function like a computer (no CPU, to start with), and if it is a computer, it is of the worst possible design. If we assume the brain is an information-processing device, what information, and how is it being processed? We can't answer these questions, yet AI people seem to assume we do know the answers. FWIW, I am not even sure the "information processing" perspective is right. I prefer to think of brains as responding devices. What we call "thinking" is a form of response; it does not exist in some ether.

    "The brain doesn't look like a computer,"
    It depends on what LOOKS means. For example, a current supercomputer is like a network cluster of 10,000 64-bit CPU computers. The brain is like a network cluster of a billion 4-bit processing units. The main difference is that the brain's network is self-wirable and self-programming, aka learning.

    Gerhard Adam
    Sorry, but you're talking nonsense.  The brain is nothing like a super computer nor like a network cluster.  You're making it sound as if the technology is just around the corner to build an AI that is capable of learning.

    In effect, you're like a man who has just learned how to sharpen a stick for hunting, thinking that he's solved the problems of interstellar space travel.
    Mundus vult decipi
    You are clueless. There are TONS of AIs that are capable of learning; that's what FUZZY logic is, as well as training a smartphone to recognize its owner's voice commands. The only thing really missing is efficiency and huge-scale networking.
    A few years ago, some claimed their system was smart enough to be at the level of a worm.

    So a more realistic comparison would be from a hunting stick that can kill a small animal to one that can kill an elephant.

    Other than that, in my opinion, going to the moon/Mars is totally useless and a waste of time. We can send machines there. But then, there are idiots out there who want to climb Everest and risk a 7% chance of dying.
    And interplanetary travel for humans is even more useless, unless some god decides to assist and change the laws of physics as we know them. I would guess that at least 5 billion people believe in some sort of god.

    Aitch
    RocketMan...dedicated to your AI


    ;-)

    Aitch
    Horrible. What a weak argument: we can't reverse engineer the brain because it's too hard, and all the scientific advancements in every possible field will not assist us in solving this task for over a thousand years... We live in a world COMPLETELY different from the one that not our fathers lived in, but that WE lived in just 15 years ago... so how could someone figure that in a thousand years the entire human race will not be able to solve this one (not at all simple) task?