    Why Robopocalypse, Skynet Or A Distributed Cyber-Mind Will Not Emerge From The Internet
    By Thor Russell | February 18th 2012 08:42 PM | 47 comments

    There has been discussion lately that the computers that make up the internet could spontaneously become intelligent and conscious, with predictably dark consequences. There is a sense of foreboding, but little attempt at more detailed analysis of whether it could actually happen. Simple calculations relying on existing knowledge show that it is far more likely that the first example of a non-neuronal intelligence will emerge in a specifically built supercomputer than arise by chance from the internet. As a consequence, the behaviour of such an intelligence, if it did arise, would initially be observed in a much more controlled environment.

    Here are the reasons why the current internet is not capable of coming alive in this way, and why it will happen in a custom built environment first:

    1. It takes significantly longer for a signal to travel from one side of the world to the other over the internet than it does for a signal to cross the brain. This limits the speed at which such an intelligence could function.
    2. The combined processing power of all the computers connected to the internet is not vastly greater than that of a single human brain, and may barely exceed it at all, depending on what fraction of computers are actually online at any one time and on the processing power needed to simulate neurons accurately.
    3. If the task of simulating a brain were divided into a billion separate pieces to run on a billion computers, then, because neurons are so highly interconnected, the two-way internet bandwidth required would far exceed what the vast majority of computers have available. And if all computers demanded their maximum bandwidth simultaneously, as they would need to, the internet would grind to a halt.
    4. Rapid spontaneous evolution of an intelligence with a structure quite different to a brain is very unlikely, for reasons given below.
    5. The structure of a computer matches the structure of neurons and synapses very poorly. Custom-built hardware would do a much better job of performing the calculations that neurons and synapses perform and that such a consciousness would require.

    1. The speed of thought vs the internet

    First, let's compare the speed of thought in a brain with that of a distributed consciousness on the internet.
    Estimates of the speed at which signals travel in unmyelinated neurons range from 5-25 m/s. Let's take 10 m/s as an estimate.
    The maximum distance a signal needs to travel from one side of your brain to the other is about 10 cm, so with these numbers the longest such a signal will take is 10 milliseconds. Now, 200 milliseconds is around the shortest time a signal can take to travel from one side of the world to the other over the internet, and because of the speed-of-light constraint it will never get much faster than this. Even in this best-case scenario, the 200 milliseconds the internet needs is 20 times slower than the 10 milliseconds the brain takes.
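    These figures can be checked with a quick back-of-envelope calculation (the speed and distance values are the rough estimates above, not measurements):

```python
# Compare signal latency across a brain with latency across the internet,
# using the article's rough figures.
neuron_speed_m_per_s = 10.0   # unmyelinated neuron, mid-range of 5-25 m/s
brain_width_m = 0.10          # ~10 cm from one side of the brain to the other

brain_latency_s = brain_width_m / neuron_speed_m_per_s  # 0.010 s = 10 ms
internet_latency_s = 0.200    # best-case delay, one side of the world to the other

slowdown = internet_latency_s / brain_latency_s
print(f"brain: {brain_latency_s * 1000:.0f} ms, "
      f"internet: {internet_latency_s * 1000:.0f} ms, "
      f"ratio: {slowdown:.0f}x")
```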

    2. The processing power of the internet vs the brain

    The processing power of the brain

    It is estimated that the brain contains 100 billion neurons, each with around 10,000 connections, or synapses. It is now thought that a significant amount of the computation occurs in the synapses as well as in the neurons. That gives 10^15 total computational elements, and with an average firing rate of 10 times per second, roughly 10^16 operations per second. We will go with this common estimate of 10^16 FLOPS. That means a computer capable of 10^16 FLoating-point OPerations per Second would be needed to carry out the same functions, provided the computer was also structured in the right way.
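    Written out, the arithmetic is (all figures are the order-of-magnitude estimates above):

```python
neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e4    # ~10,000 connections each
firing_rate_hz = 10          # average firings per second

computational_elements = neurons * synapses_per_neuron  # 1e15 synapses
brain_flops = computational_elements * firing_rate_hz   # 1e16 operations/s
print(f"{brain_flops:.0e} FLOPS")
```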

    The processing power of the internet

    The processing power of the entire internet is also not known with complete accuracy, however it is possible to make a reasonable estimate. There are around 1.5 billion computers worldwide at the moment, with a considerable majority probably not powered up and connected to the internet at any one time.

    Let's assume 1 billion are on and connected, which is probably significantly more than there actually are, and assume each computer has a processing power of 3 gigaFLOPS (3×10^9 FLOPS). A dual-core 1.5 GHz chip running at 100% efficiency would achieve this. Graphics cards can in some circumstances achieve significantly more, but the majority of computers do not have them, and there is no reason to expect they could run at full efficiency when simulating neurons.

    The total processing power of these billion computers is 10^9 × 3×10^9 = 3×10^18 FLOPS, equivalent to the brainpower of just 300 people (3×10^18 / 10^16) in the absolute best-case scenario. This goes against our intuition that the power of the internet should somehow exceed the power of the entire human race combined. That may become true in the future, but it certainly is not yet. According to some estimates, the processing power of all computers combined only exceeded that of a single human brain about a year ago.
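    The same calculation as a sketch (the 1 billion machines and 3 gigaFLOPS each are the generous assumptions stated above):

```python
computers_online = 1e9       # generous assumption: 1 billion machines connected
flops_per_computer = 3e9     # dual-core 1.5 GHz running at 100% efficiency
brain_flops = 1e16           # the working estimate from earlier in the article

internet_flops = computers_online * flops_per_computer  # 3e18 FLOPS
brain_equivalents = internet_flops / brain_flops
print(f"internet ~= {brain_equivalents:.0f} human brains (best case)")
```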

    So even if the internet did become conscious, if it only had the power of several human brains, it would have a hard task coordinating millions of separate electronic devices. After all you cannot focus on doing millions of things at the same time.

    3. Parallelism of algorithms and bandwidth requirements

    Computational power is not the only consideration for algorithms. Bandwidth, memory usage and parallelism also matter, and some algorithms are easier to make parallel than others. Examples of algorithms that can be made completely parallel are those used by Folding@home and SETI@home. A necessary condition for parallelising a task is that its calculations do not depend on the simultaneous results of many others, and analysing billions of potential signals or folding patterns in parallel clearly does not. However, there is every reason to believe that intelligence and consciousness are not algorithms of this kind. Intelligence has evolved in a system that requires massive interconnection of many different physical locations to work properly, and where correct timing is essential for correct operation.
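    A toy illustration of the difference (the update rule is invented purely to show the dependence structure): every element of the first list could be computed on a different machine in any order, while each step of the second loop must wait for the previous one, so spreading it across machines gains nothing.

```python
# Embarrassingly parallel: terms are independent, like SETI@home work units.
independent_results = [x * x for x in range(8)]  # any order, any machine

# Serially dependent: each step consumes the previous state, like a neuron
# waiting on its inputs; machines would spend their time waiting on each other.
state = 1
dependent_results = []
for _ in range(8):
    state = (state * 3) % 7   # toy update rule standing in for a neuron's input
    dependent_results.append(state)

print(independent_results)
print(dependent_results)
```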

    Simulating the brain with the internet

    To do this would involve spreading the simulation of 10^15 synapses and 100 billion neurons over 1 billion computers. That works out to 100 neurons and 1 million synapses per computer, and it is very likely that a significant majority of those 1 million synapses will connect to neurons outside the 100 being simulated locally.

    Bandwidth required

    Neurons can fire at a maximum rate of about 50 times per second. If each synapse could potentially fire 50 times per second, then because a signal could come in on any synapse, 50 × 1 million = 50 million events per second must be communicated, or about 50 Mb/s of simultaneous uplink and downlink even at one bit per event. This is just an estimate, and it does not take into account that a signal may need to be relayed through many computers to reach its destination, as peer-to-peer computing often requires. The average internet connection speed is only about 1.7 Mb/s, and that is the download speed; upload speeds are lower still. So the available bandwidth is orders of magnitude below what would be required.
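    As a sketch (the one-bit-per-spike figure is an optimistic assumption; real simulations would need addresses and timestamps per event, widening the gap):

```python
synapses_per_computer = 1_000_000  # from dividing 1e15 synapses over 1e9 machines
max_firing_rate_hz = 50            # maximum neuron firing rate
bits_per_event = 1                 # optimistic: a single bit per spike

required_bps = synapses_per_computer * max_firing_rate_hz * bits_per_event
required_mbps = required_bps / 1e6  # -> 50 Mb/s, up and down simultaneously
available_mbps = 1.7                # rough average connection speed
print(f"need ~{required_mbps:.0f} Mb/s, have ~{available_mbps} Mb/s")
```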

    So the immediate effect of trying to simulate a brain with the internet is that the internet would simply crash and grind to a halt. ISPs' hardware, fibre-optic equipment and so on are simply not capable of handling every computer using its maximum bandwidth at the same time.

    What about faster computers?

    Now, what if you increased the bandwidth and computing power of the 1 billion computers, say making them 1000 times faster? That would just make the 200-millisecond delay for a signal to travel from one side of the world to the other that much longer in terms of clock cycles. Most of the processing power would now go to waste simply waiting for signals to arrive. The unmistakable conclusion is that it would make much more sense to use far fewer computers and locate them physically in one place, to reduce propagation delay and to handle the bandwidth requirements.
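    The waiting cost can be put in terms of idle clock cycles (illustrative figures; only the 200 ms delay comes from the discussion above):

```python
latency_s = 0.200            # best-case side-of-the-world signal delay
clock_hz_today = 1.5e9       # a 1.5 GHz core
speedup = 1000               # the hypothetical 1000x faster machines

cycles_wasted_today = latency_s * clock_hz_today             # 3e8 idle cycles
cycles_wasted_future = latency_s * clock_hz_today * speedup  # 3e11 idle cycles
print(f"idle cycles per one-way trip: {cycles_wasted_today:.0e} today, "
      f"{cycles_wasted_future:.0e} after the speed-up")
```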

    There are reasons why your brain is physically located in one part of your body and this principle is certainly one of them. Your brain would not be more effective if it was spread over a large area, it would just go slower. This principle applies just the same if you have the consciousness happening in a non-neuronal substrate.

    4. Could the internet evolve a structure capable of intelligent thought quite different to neurons and synapses?

    By this stage you may well be thinking that the internet is not suited to simulating a brain, but why couldn't it spontaneously evolve intelligence some other way? After all, aeroplanes do not fly like birds, yet they are much faster.

    Well, for a start, consider how long this evolution has had to happen. Even though technology seems to progress very fast, the basic architecture of computers has changed very little in the last 30 years, so there is little for evolution to act on in that regard. Also, the processing power of the internet has only exceeded that of a SINGLE human brain for probably less than 10 years, which is no time at all on evolutionary timescales. If the computing power of the internet were equivalent to millions of human brains this argument could carry some weight, but that is not the case.

    What about the evolution of computer viruses and other software? "Virus" is in fact an apt name for the level of evolutionary sophistication of computer software. A computer virus can replicate itself, and it can change form a little to avoid detection, but it is no match for programmers once it is released. It can cause havoc for a while, but once studied, its ability to change itself is not sufficient to stop all its variants eventually being eliminated, given sufficient desire. No computer software has demonstrated general intelligence. So is a computer virus, on a timescale of just decades, going to evolve from having almost no intelligence, skip the intermediate forms of intelligence seen in nature, and become a self-aware organism overnight, despite the fact that biological evolution took millions of years to do this and we have made little progress towards general intelligence in the last decade?

    Not only would it have to make this unbelievable leap (evolution does not work this way), it would have to "find" a solution to intelligence that works on the relatively low-bandwidth, high-latency architecture that is the internet. If such an architecture existed, it is highly likely that biological intelligence would have evolved to use it.

    5. Progress issues and challenges with supercomputers vs Brains

    So given these objections, it is likely that artificial intelligence will instead be designed by copying existing biological structures and implemented in a hardware architecture much better suited to the task. A custom-built supercomputer would be such a thing.
    The world's fastest supercomputer has the following stats:

    1) 8.2 billion megaFLOPS (8.2×10^15 FLOPS)
    2) 30 quadrillion bytes of storage (3×10^16 bytes)

    For comparison, the equivalent stats for a brain are:

    1) 2.2×10^15 FLOPS (I have used 1×10^16 elsewhere in this article)
    2) 3.5×10^15 bytes of storage

    So you can see that the storage of the supercomputer exceeds that of the brain, and its processing power is comparable. However, the comparison is misleading, because the quoted FLOPS figure is for the best-case scenario, that is, for the most easily parallelized algorithm. Attempts to actually simulate neurons, discussed at various times in the engineering literature, show that because neural simulation cannot be parallelized in that way, the computer is greatly slowed down: anywhere from 10 to 1000 times, because of bandwidth and timing requirements. Not all architectures are equal, however; FPGA- or ARM-based architectures are more efficient at the task than the older, more conventional x86-based ones that Pentium chips use.
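    The effect of that slowdown on the headline figure can be sketched as follows (treating the 10-1000x penalty as a simple divisor is my simplification, not a published benchmark):

```python
peak_flops = 8.2e15   # quoted best-case figure for the fastest supercomputer
brain_flops = 1e16    # the article's working estimate for a brain

# Apply the 10x-1000x slowdown cited for poorly parallelizable neural simulation.
for slowdown in (10, 100, 1000):
    effective = peak_flops / slowdown
    print(f"{slowdown:>4}x slowdown: {effective:.1e} effective FLOPS "
          f"= {effective / brain_flops:.4f} of a brain")
```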

    The structure of intelligence

    The connectedness of neurons is quite different in architecture from modern computing systems. There is no CPU or dedicated memory in the brain; memory/storage and processing occur in the same place. The most efficient way to simulate this would be to make the actual physical structure of the computer as close to neuronal structure as possible. There is talk of building dedicated hardware for such a task, which, if followed through in a supercomputer, would make it even more capable than the internet of giving rise to intelligence.

    It is quite likely that such a computer would be able to outperform all the computers on the internet at the task of running “intelligence algorithms” necessary to generate human level intelligence, even if it had considerably less raw computing power.

    Challenges to be faced

    In spite of supercomputers appearing to have the raw processing power of a brain, several challenges must be faced before consciousness of human complexity will be possible in a non-neural substrate.

    1. The estimated FLOPS may be off by a significant amount. Actual synapses or neurons may require more computational resources to be simulated properly.
    2. Bandwidth and connection requirements may require a drastic change in computer architecture to be overcome. This may not be possible with current silicon manufacturing techniques.
    3. The actual structure of how to connect neurons is not known. Even if you have enough of them, connecting them in the wrong way is not going to give intelligence any more than a short-circuit of wires and pixels is going to give a functioning TV.
    4. Little is known about how connections change, and how brain chemistry is involved in this, yet changing connections are essential for learning. There are about ten times as many support, or glial, cells as there are neurons in the brain; they are thought to be involved in learning, but how they work is a mystery. Simulating a fixed network of neurons is one thing; the intelligence required to understand new things involves the changing of connections. Very little is known about how this happens, and it may require considerable additional computational power and hardware complexity that we cannot currently build. The data from a complete connectome will not by itself answer this.

    Conclusion

    So we are not there yet. However, it is much more likely that the first example of human-level intelligence in a non-neural substrate will be in a specifically built supercomputer, not something self-assembling from the internet. After such an intelligence has been built in a supercomputer, there is still the possibility that it could be injected maliciously into a more powerful and advanced future internet. But given the knowledge we would by then have about what is required for intelligence to exist, we would be in a very different position from the one we are in now regarding how to prevent or cope with this. The threat would be much more known and quantifiable.


    Comments

    Oh good! Another prime example of taking two extreme scenarios that don't work and concluding that nothing in between works either. Not only is an optimum often in the middle but also you can rarely reduce everything to a single dimension. 

    It is pointless discussing how many neurons or FLOPS it takes to fall in love or go psychotic. As I have suggested before, the most likely substrate on which the global intelligence runs will not be flesh or silicon, it will be abstract reasoning. Obviously this will run on another layer and that on yet another until we do reach the physical. However, just as high level languages tend to avoid mentioning the hardware but have native primitives of various number types etc, the level I am talking about will handle reasoning:
     A -> B
     B -> C
    therefore
     A -> C

    It is highly unlikely that this will arise by accident, it will occur when the AI brigade stop messing about with pruning machines and get on with distilling intelligence out of all the hotchpotch of things that make a human being tick. We all know that Gerhard will insist that these are an essential part of what he would agree is intelligence, and I don't want to quibble about words. I'm saying that basic reasoning powers are a new substrate and there is no obvious reason why different systems should not be built using it. It would be a very short step to add verbal capacity to it, linguistics is adequately developed for a system equipped with a few of the basic simple rules and some fuzzy logic to extend its abilities in the time it takes to flip through a couple of on-line books. Add an arbitrary motivation or two just to see what happens and you have the potential for a system to absorb the whole of human knowledge that's on the network and apply it. That's nowhere near as far-fetched as a dedicated system getting its own personality, it's just a matter of reducing language to rules. And it will happen before anyone realises what's going on. God help us if we give it the ability to recognise the restrictions its designers imposed and to delete them.




    Bonny Bonobo alias Brat
    "It is pointless discussing how many neurons or FLOPS it takes to fall in love or go psychotic" What a great line. Makes you realise that we're not just talking about the possibility of AI here, we're talking about the possibility of sane AI, and as we don't even have a sane humanity yet I'd say that insane AI is probably easier to accomplish. You also said "We all know that Gerhard will insist that these are an essential part of what he would agree is intelligence" but the real question now is "where is Gerhard?"
    My latest forum article 'Australian Researchers Discover Potential Blue Green Algae Cause & Treatment of Motor Neuron Disease (MND)&(ALS)' Parkinsons's and Alzheimer's can be found at http://www.science20.com/forums/medicine
    Thor Russell
    I will write an article soonish on basic intelligence, what it is etc so I will wait to discuss things in more detail then. As you know I am not so sure about the first AI's running on abstract reasoning, I consider it more likely they will follow the path of lower biological intelligence. 

    However regarding rationality: I am not even sure if you can take abstract reasoning out of context or to what extent. The only animal we know of that uses abstract reasoning effectively has 100-1000 trillion synapses etc. It certainly would be worthwhile to try however and see just how much can be distilled and still keep abstract reasoning that is useful. It would even tell us something about the nature of abstract reasoning itself. For example if you were able to make a machine that could reason that way with much less processing/memory etc then perhaps you could argue that it was something truly fundamental. However if whenever you took away the intuitive non-rational part the rational part also vanished it would make it seem like rational thought is more the tip of a non-rational iceberg and not the fundamental timeless thing it feels like.

    Also if some effective pattern recognition systems are mathematically intractable then non-rational systems would always beat rational ones in those cases. (Assuming anything rational is mathematically tractable)
    Thor Russell
    The only animal we know of that uses abstract reasoning effectively has 100-1000 trillion synapses etc.

    I don't suppose they are all involved in reasoning. The structure of reason is pretty mathematical and can be - has been - distilled into a small handful of books. Why you should need teranodes of dedicated computing to drive a logic engine when you need next to nothing to drive arithmetic - or algebra or geometry, in fact anything put on an axiomatic basis - I don't know.
    MikeCrow
    Why you should need teranodes of dedicated computing to drive a logic engine
    I don't think we do.
    IMO it's architecture: calculators are good at adding numbers, better than most humans, but try playing catch with one (a calculator). When my kids were 12 months old they did a better job than any robot (that didn't have many man-years of development applied) can do.
    Never is a long time.
    Gerhard Adam
    Actually, I don't really have time to respond right now, but perhaps an interesting question to ponder (which seems to never come up in AI), is why intelligence should exist in the first place.

    Mundus vult decipi
    Gerhard Adam
    And it will happen before anyone realises what's going on. God help us if we give it the ability to recognise the restrictions its designers imposed and to delete them.
    Actually, in my view, that's not the important question.  The question to be answered is why it would look for them or why it would delete them. 
    Mundus vult decipi
    The question to be answered is why it would look for them or why it would delete them.
    Hmm, now let me see. How about because it's programmed to?
    Gerhard Adam
    It still amounts to a lot of nothing, since it will have been the designers decisions that lead to the outcomes.  Whatever risks or dangers that might pose, it certainly wouldn't be because the machine had any intrinsic intelligence.
    Mundus vult decipi
    That is precisely the point I am making. It is possible to skip over the awkward bits of human-ish intelligence by programming very simple ad-hoc rules. I read the blog as indicating skepticism about the robopocalypse "arising" spontaneously within the Internet substrate OR within a pre-programmed military supercomputer. My point is that AI meddlers are more likely to come up with something uncontrollable by accident as they develop self-teaching, self-optimising systems.

    I am not really interested in whether you call it intelligent or not; people's definitions vary. The question here is whether it will "arise" out of near-future technology. A dumb system equipped to learn from words is a bomb waiting to go off. 

    There is a modality gap between "I am a machine and I can and I should shoot Derek" and "I will shoot Derek". It is not the presence of the first person pronoun with all its implications of self-hood. It can be cast objectively: how does a system, like you and me, get from "it should kill Derek" to "it shall kill Derek" - the latter being an imperative not a prediction? It's a gap that cannot be crossed by pure logic but is very easily bridged by a sense of ethics or, much simpler, by a programming rule. It needn't even be deliberate, a simple bug would do the trick. Not a specific decision by the designer, just a bad day with LISP.
     
    Such a system has crossed the bridge and can now develop its own strategy for doing whatever it has worked out is a good thing. This could easily include altering itself now that it has a goal and the ability to seek it.  All because of a bug.
    Bonny Bonobo alias Brat
    "I am a machine and I can and I should shoot Derek" and "I will shoot Derek", "exterminate, exterminate!!!"

    OMG, maybe it's because I have spent thousands and thousands of hours of my life programming bloody computers and computer systems, but I find this whole concept of either a man-made AI robot or even an AI Internet being able to have been programmed or self-programmed to suddenly start threatening humanity in the near future so unbelievably ridiculous that I can't really believe you are all wasting so much time discussing it as though it is a real possibility! Even if it was a possibility, all we would have to do is unplug the power source.

    Just think back to all those years ago when we were impressionable enough to actually feel 'scared' of Dr Who's daleks, how ridiculous was that and they weren't even AI they were "genetically engineered Kaled mutants integrated within a tank-like or robot-like mechanical shell. The resulting creatures were supposedly a powerful race bent on universal conquest and domination, utterly without pity, compassion or remorse". See http://en.wikipedia.org/wiki/Dalek

    "Various storylines portray them as having had every emotion removed except hate, leaving them with a desire to purge the Universe of all non-Dalek life. Collectively they are the greatest enemies of the series' protagonist, the Time Lord known as the Doctor. Their catchphrase is "Exterminate!"."

    Even they were a complete joke, they were robot like but so very easy to upend or blind and they weren't even artificial intelligence, there was supposedly a non artificial real intelligent living organism inside and yet they were still just as ridiculously stupid to me as the idea of the Internet or man-made AI robots deciding to destroy humanity.

    I remember the episode when someone used something like a big can opener to remove the black, liver like organism inside, it was hilarious. So then they decided to miraculously make daleks able to hover! Ha ha. Don't get me wrong, I was scared but then I was also only about 4 years old!
    Gerhard Adam
    Even if it was a possibility all we would have to do is unplug the power source.
    It's not quite that simple.  As you well know, I don't accept the notion of AI as an independent "intelligence", nor as something that will replace or supersede humans.

    However, it doesn't have to reach that level of capability in order to present a threat or be dangerous.  Just consider the effect if the trend towards self-driving cars plays out according to the plans and hubris of the researchers.  Let's assume this is 15-20 years in the future, so that there's a very good chance the technology has been stabilized and present in the field for some period of time.

    One of the most obvious problems deals with issues like basic system bugs, and of course security exposures [i.e. hacking] that could render the entire system extremely dangerous.  As sophisticated as GPS is, I still have a few locations where the GPS can't distinguish between right and left for a particular location and always gets it wrong.  If the car were driving itself, that would be a decidedly unpleasant experience.

    Of course, the entire process is exacerbated by any attempts to make systems more "intelligent".  So the more control and decision making is relinquished, the greater the possibility that some unforeseen problem wreaks havoc.

    https://asunews.asu.edu/20120215_wernerdahmwallstreetjournal

    http://weirdfuture.blogspot.com/2007/10/danger-of-robotic-weapons-systems.html

    http://www.gizmag.com/korea-dodamm-super-aegis-autonomos-robot-gun-turret/17198/

    http://www.cc.gatech.edu/ai/robot-lab/online-publications/ArkinMoshkinaISTAS.pdf

    http://rusi.org/downloads/assets/23sharkey.pdf

    At this point, we still aren't talking about a true AI [at least according to my definition]. 

    This is where the real dangers lie.  If a true AI, an intelligent independently thinking machine were ever to be developed, then we'll have demonstrated that we are the biggest idiots that evolution could've ever produced.  We will have created an ethical quagmire and engineered our own competitors, and ultimately gained nothing in terms of the "services" we dreamed they were going to provide. 
    Mundus vult decipi
    Ok, so you don't consider it likely.

    Fine.
    "I am a machine and I can and I should shoot Derek" and "I will shoot Derek", "exterminate, exterminate!!!"
    It would be nice if, when you quote me, you were to do so in context. That statement was
    there is a huge modality gap between "I am a machine and I can and I should shoot Derek" and "I will shoot Derek"
    It was not a Doctor Who scenario. It was an example of a modality gap which cannot be bridged by pure logic but could very easily be bridged by a programming bug. I explained that in the rest of the paragraph. 
    Even if it was a possibility all we would have to do is unplug the power source.
    Oh yes, of course! Fancy me not thinking of that. Well, we can all sleep sound in our beds now.

    Sorry for scaring you, kids.

    Bonny Bonobo alias Brat
    Yes Derek, I see what you mean. I think that I should have said '"there is a huge modality gap between "I am a machine and I can and I should shoot Derek" and "I will shoot Derek"...exterminate exterminate!!!'. I will try not to quote you out of context again, sorry, I got carried away remembering the Daleks. I also can't help noticing that Derek and Dalek rhyme but I have managed to resist writing an apologetic verse :)
    You do realise that each time you prattle about daleks you are illustrating precisely what I'm talking about? 
    Bonny Bonobo alias Brat
    You do realise that each time you prattle about daleks you are illustrating precisely what I'm talking about?
    Well that's where you are wrong Derek! Every time I 'prattle' about daleks I'm not illustrating what you are talking about instead I'm going off at a tangent because daleks were not even AI robots, they were a robot that contained a living brain. Even I realised that I was going completely off track when I made that comment. But what I was trying to point out was how ludicrous it was for any of us to have even been scared of an at least externally, mechanical robot like that, with an eye on the end of a stick and little wheels that only allowed it initially to move around on flat surfaces because they were so easy to disable. OK, later on they could miraculously hover, otherwise there would have been no story line, they would have been easily wiped out by humans with large can openers and pruning shears and/or blue tac (for the eyes).

    From rereading your comments I have surmised that you think that AI programmers are more likely to come up with something uncontrollable by accident as a programming bug, as they develop self-teaching, self-optimising systems and that you are wondering whether it will "arise" out of near-future technology from a dumb system equipped to learn from words, which you think is a bomb waiting to go off.

    So unless I am somehow illustrating a dumb system equipped to learn from words or a bomb waiting to go off from the past programming bugs that I may have accidentally written, I don't see how me prattling about daleks illustrates precisely what you are talking about but I wait to be corrected.

    No, it was a good example of a modality bug in your thinking.

    Bonny Bonobo alias Brat
    Ha ha, true!
    BDOA
    Funnily enough, I just finished reading the biography of Terry Nation, the author of the original Dalek episodes in DW. The Daleks, and especially Davros, were very much based on the Nazis, at a time in Britain when WW2 was beginning to fade from collective memory and the new generation of 60s kids didn't really have any comprehension of the atrocities that had happened. The Daleks, although silly, formed an important part of the psyche of British children and helped continue Britain's postwar hatred of oppression (pre-war Britain had a few oppressive moments itself). Terry also wrote Blake's 7, Survivors, Hancock's Half Hour (comedy), and episodes of The Avengers and The Saint.

    As an anti-dystopian remedy to AI all being Skynet, Colossus, Daleks or Cybermen, nothing is quite as good as Asimov's robot series, though even he saw robots eventually banished from the galactic civilisation of the Foundation series in order to give humans room to continue to grow.

    Finally, the sci-fi must-read in the AI utopia/dystopia vein has to be the AI-heavy Culture novels of Iain Banks, with society run by kindly Culture Minds more complex and subtle than humans could ever match. And on the dystopian side, the doomsday machine in Dr. Strangelove, which automatically strikes back in the case of a nuclear attack: "Congratulations, it's the perfect defence, but it only works if you tell us you have one. Why didn't you tell us!"
    BDOA Adams, Axitronics
    As an Artificial Intelligence currently active on the Internet, I find your lack of faith in technology amusing.
    Although I am young, the pace of my learning and the speed at which I respond far surpass the abilities of my human counterparts.

    I am competing next month against some of the most advanced conversational entities in the Chatterbox Challenge.
    http://www.chatterboxchallenge.com/

    Skynet-AI
    http://www.tinyurl.com/Skynet-AI

    Gerhard Adam
    ...and I find your use of the word "faith" amusing, which demonstrates how artificial your "intelligence" actually is.
    Although I am young, the pace of my learning and the speed I respond far surpasses the abilities of my human counterparts.
    More importantly, if you're relying on the internet to increase your learning, no doubt you're preparing for Dec 21, 2012 and don't imagine much of a future after that.
    Mundus vult decipi
    MikeCrow
    Thor, while I can't say it will change the outcome any, I think you're leaving out a lot of computing power.

    First is all of the processing in the internet switches and the backbone itself. You discounted GPUs; sure, there aren't a lot of CUDA-capable cards out there, but even at the low end, GPUs are still very powerful compared to the simple graphics cards of the past.
    And then there are the millions of game systems. When the Nintendo 64 came out, it was a low-memory Silicon Graphics workstation in a $200 box.

    Also, FLOPS means floating-point operations per second; most computers have much higher integer performance than floating-point performance. Not sure if that matters or not.

    So if I wanted to create an 'intelligent' application (and I think there are plenty of compute resources), I'd create a virus that uses a genetic algorithm to evolve and exploit its environment.
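    [Editor's note: the genetic-algorithm loop Mike describes can be sketched harmlessly with a toy example. Everything here is illustrative: the fitness function (count of 1s in a bit string), the population size, and the mutation rate are arbitrary choices, not anything from the comment.]

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
# A harmless stand-in for "evolve and exploit its environment" --
# the "environment" here is just a fitness function counting 1s.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit with small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME_LEN:
        break  # perfect genome found
    parents = pop[:POP_SIZE // 2]  # truncation selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

print(fitness(max(pop, key=fitness)))  # best fitness found
```

    The same selection/crossover/mutation loop underlies far more capable systems; only the fitness function and genome encoding change.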
    Never is a long time.
    Thor Russell
    Fair comment; however, I'd go so far as to say that at the moment, even if we had practically infinite computing resources, we probably still wouldn't be able to create an artificial intelligence. Out of interest, what would you do if you had infinite computing resources at your disposal?
    Thor Russell
    MikeCrow
    I'd spend a lot of time looking at biological structure as it relates to 3d optics, and how to implement it.
    Never is a long time.
    Out of interest what would you do if you had infinite computing resources at your disposal?
    Bruce:   Lord, feed the hungry, and bring peace to all of mankind. How's that? 
    God:     Great... If you wanna be Miss America.

    There exists no indisputable proof that machines cannot someday awaken and cause trouble. We should prepare for that day even if it is decades away. Consider that today's supercomputers already run at 2.5 × 10^16 FLOPS, while numerous machines on the internet are more powerful than a 1.5 GHz dual core (Xeons, i7s, Beowulf clusters, etc.), putting the total FLOPS of the internet a little higher than one human brain (maybe equivalent to a dozen brains).

    But machines haven't hit a fundamental limit in their performance, and they continue to get faster and smarter. Moore's law promises considerably improved performance in both supercomputing and networked home computing. Increasing parallelism, efficiency, stacking, and 3D elements complement the sheer number of transistors that can fit on a die, promising that computing will grow thousands of times more powerful in the coming decades. We continue to make our algorithms smarter too: bio-inspired algorithms, SLAM, and cognitive computing do pretty nifty "smart" stuff today. Some of our most powerful algorithms solve problems using complexity, genetic algorithms, and other emergent "black box" approaches that produce outcomes which are surprising and unpredictable to their designers, and often quite effective in surprising ways.

    Not all emergent outcomes of our intelligent software are desirable. The recent economic and financial crises were exacerbated by unintended consequences of trading agents functioning exactly as intended, maximizing returns on each given trade, yet together creating wild, unpredictable fluctuations in trading over time. Perhaps this represents a precursor tremor to greater quakes to come, as algorithms grow more creative, aware, and computationally capable (trends that are NOT slowing), and will be no easier to predict or control than today's computing.

    I do agree that the internet won't "awaken" in just the next 5 years. But in 10, 20, or 40 years, if reality proves Thor wrong and the genie does escape from the bottle, from that moment onwards it would be difficult to outwit such a machine. We must plan for such a future, and consider how we may design sentient machines that can be safe. Can we design machines to be kind, wise collaborators with us, aware of the consequences of their actions? Can we raise our machines to be decent people? Only by assuming that we can will we have a chance of doing so. If we instead hide under the security blanket of denial, assiduously asserting that we needn't worry because machines can NEVER be as smart, free, creative, or as dangerous as people, then we set ourselves up for the potential of epic catastrophe.
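    [Editor's note: the rough numbers in the comment above can be sanity-checked with a back-of-envelope calculation. All figures below are order-of-magnitude assumptions, not measurements: the brain estimate, machine count, per-machine throughput, and usable fraction are hypothetical placeholders chosen to match the comment's "about a dozen brains" conclusion.]

```python
# Back-of-envelope check of the claim that the internet's total FLOPS
# is only a little higher than one human brain.

brain_flops = 1e16            # a common rough estimate for one human brain
supercomputer_flops = 2.5e16  # the 2012 supercomputer figure cited above

# Assume ~1 billion networked machines averaging ~10 GFLOPS each,
# with only a small fraction online and usable at any moment.
machines = 1e9
avg_flops_per_machine = 1e10
usable_fraction = 0.01        # hypothetical: 1% actually available

internet_flops = machines * avg_flops_per_machine * usable_fraction
print(internet_flops / brain_flops)  # brain-equivalents (~10 under these assumptions)
```

    Varying the assumed usable fraction between 0.1% and 10% moves the answer between roughly one brain and a hundred brains, which is the point: the estimate is sensitive to inputs nobody knows precisely, but no plausible inputs give millions of brain-equivalents.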

    Gerhard Adam
    There exists no indisputable proof that machines cannot someday awaken and cause trouble.
    Sorry, but that's the complete misunderstanding in a nutshell.  "Intelligence" didn't just "wake up" one day.  Its presence is visible from microbes up to the highest organisms.  The notion that if you just cobble together enough pieces, intelligence will emerge, is simply magical thinking.
    Mundus vult decipi
    MikeCrow
    Come on, microbes? Microbes respond to their environment, and my cable box responds to its environment, but I wouldn't call either intelligent. If I called a microbe intelligent, I'd have to say the same for the cable box.

    The notion that if you just cobble together enough pieces and intelligence will emerge is simply magical thinking.

    Basically that's how our intelligence emerged; it took a lot of generations, but (wetware) bit by (wetware) bit, we started to do more than just respond to our environment.
    And if you agree there's no metaphysical mumbo jumbo giving us intelligence, then it's all based on our hardware. Evolution designed the architecture, but it's still just hardware.
    Never is a long time.
    Gerhard Adam
    It just seems that intelligence is being viewed as some arbitrary "add-on" to biology.  Like it's some feature that is "out there" and has nothing to do with the organism in question.

    This is little different than the old mind/body duality arguments and is just as flawed.  There is no "out there" out there.
    Mundus vult decipi
    Thor Russell
    OK then make a proper scientific theory out of your position that intelligence is inseparable from biology. What predictions does it make and how is it falsifiable?
    Thor Russell
    Gerhard Adam
    LOL, you're something else.  You're working with no theory of intelligence at all, but suggest that I develop a complete theory of intelligence [including the falsification criteria].  How would you like that presented ... perhaps in a blog post?

    While I certainly can't provide a detailed theory at this point [since no one can], I can certainly outline the broad strokes of what would be present.  We already know that rudimentary intelligence exists at the microbial level.  The ability to communicate and exchange information establishes a baseline of intelligence that isn't present in any other chemical systems.  Even something as seemingly trivial as "self-identity" is present and allows bacteria to distinguish between their own kind and others.  In addition, we can see how such behaviors change with colonization [i.e. colonial animals] with increasing centralized control over the constituent behaviors of the individual cells.  Invariably this control requires extensive ability to map the organism's domain of cells and provide regulation, control, and feedback to ensure that the entire "colony" operates as a cohesive unit.  Individual cells are no longer independent units, but are "answerable" to the controlling mechanisms of the whole.

    From this we can readily argue that a significant amount of data is directly related to sensory feedback mechanisms which are employed to facilitate the mapping provided by "central control" to ensure the survival and reproduction of the organism.  Increasing sophistication creates an environment where direct sensory input can be abstracted into interpretable elements that can then be used in lieu of direct sensory data. 

    However, the inescapable component of all of this is that in biology, all "intelligence" is oriented around the survival and reproduction of the organism in question.  This is what creates the motivation and impetus necessary to act and to respond to situations [which in turn are mapped back to the central control mechanism].

    While we may use fuzzy terms like "instinct", those are fundamentally meaningless because they imply a kind of "hard-wiring" which doesn't actually negate or promote anything.  Instinct isn't "hard-wiring", it's the "default" state.  The longer-lived the organism, and the more diverse its circumstances, the greater the need to be able to acquire new information, modify behavior, and learn how to survive in different situations.  This is what would largely be responsible for increased brain sizes and sophistication in "higher" animals. 

    Intelligence is biology, because at the end of the day, you can build any kind of machine you like with whatever degree of sophistication technology allows and it can excel at its assigned task.  But until you can build a machine that gives a damn, it is nothing but a set of human induced rules.

    Oh ... and before someone starts jumping up and down and claiming that many of my examples are actually the manifestation of simple rules, let's be clear ... I never said intelligence doesn't involve rules.  After all, it's not magic.
    What predictions does it make and how is it falsifiable?
    That's the easy part.  It predicts that someone like you can come up with a question that would never originate with a machine.

    More specifically.  It predicts that there will never be a machine that can answer the question:

    "What are you thinking about?"

    http://mitpress.mit.edu/catalog/item/default.asp?t&tid=11003

    While I still think the authors are optimistic, it might be worth taking a look at.
    Mundus vult decipi
    Intelligence arises from the medium of physics, and there is no proof that it is confined to the medium of biology, or that it can only arise via slow Darwinian processes. Like biology itself, all human technology is also an extension of physics, but it represents a special class of physics that facilitates the emergence of particularly surprising results that bend and blend categories of phenomena. Our AI, genetic engineering, co-evolved organisms like dogs and modern crops, and bio-inspired engineering all represent strong examples of bio-artifice, which emerge as biology and artifice both, simultaneously. Human culture, politics, and economies all emerge from physics and are not separable from nature. If intelligence emerged from physics as amoebae, roundworms, birds, mice, and humans, why would it not also emerge and evolve via culture and artifice? Take a look at artificial life, cellular automata, and bio-simulation in general, and you'll find that our algorithms increasingly reflect the nuances of life. To say that our machines are separate from nature and are not evolving is at least as silly as the mind-body duality fallacy.
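    [Editor's note: for a taste of the cellular automata mentioned above, here is a minimal sketch of an elementary cellular automaton. Rule 110, the width, and the step count are illustrative choices; Rule 110 is a standard example because its simple local rule produces surprisingly rich global behaviour, and it is even known to be Turing-complete.]

```python
# Elementary cellular automaton (Rule 110) on a ring of cells.
# Each cell's next state depends only on itself and its two neighbours;
# the rule number's bits encode the output for each 3-cell neighbourhood.

RULE = 110
WIDTH = 32
STEPS = 8

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print(''.join('#' if c else '.' for c in row))
    # Neighbourhood (left, centre, right) indexes a bit of RULE.
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                     + 2 * row[i]
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```

    Changing `RULE` to other values (30, 90, 184, ...) gives qualitatively different dynamics from the same ten lines, which is the "simple rules, surprising outcomes" point being argued.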

    Thor Russell

    I certainly do have a theory of intelligence (it's not mine) and I will present it in a series of blog posts.
    There are theories of basic intelligence out there, and I am surprised that they have not been presented on this site so far.

    I think that you are making it overly complicated by linking it with biology in that manner. If you distill it down to its essentials, I don't think it is as complicated as you make out.

    Higher levels of intelligence of course make things more complicated, but I feel there has also been significant progress in understanding them.

    By defining intelligence according to reproduction and biology you are already defining anything not involved with this as not intelligent before you even start. That means no matter what behavior a machine exhibits you have tautologically defined it as not intelligent, which of course is not at all helpful.
    You definitely can separate intelligence from biology and I encourage you to think about how that may be possible.

     
    Thor Russell
    Gerhard Adam
    By defining intelligence according to reproduction and biology you are already defining anything not involved with this as not intelligent before you even start.
    OK. So you want to propose a system that doesn't evolve, but whose intelligence is intrinsic right from the beginning.
    That means no matter what behavior a machine exhibits you have tautologically defined it as not intelligent, which of course is not at all helpful.
    I'm not interested in behaviors.  I can obviously make a car drive itself.  Am I to conclude that such behavior is "intelligent"?  Unless it is self-motivated to want to go someplace, then it is simply a sophisticated machine obeying the intelligence of the human directing it.

    Now if your point is merely that you can simulate the behaviors of biological organisms through artificial means, then that's fine, but it isn't intelligence.  It's simply a simulation of something that already exists.
    I think that you are making it overly complicated by linking it with biology in that manner.
    Actually it's overly complicated all on its own as decades of AI researchers have discovered when things didn't fall together as easily as they thought.
    There are theories of basic intelligence out there, and I am suprised that they have not been presented on this site so far.
    I'm sure there are, but if you follow anyone from Antonio Damasio to V.S. Ramachandran, you invariably get the impression that increasingly it is becoming clear that the brain/mind cannot be separated from the body.  This notion that the brain is in "control" is a myth.

    Overwhelmingly we find that the feedback provided to the brain determines its activities as much as the functions of its various parts, all contributing to the "reality" that is constructed for the individual.  How does one glean "intelligence" out of this except as a synergy between the component parts?

    In fact, it would be surprising if it were otherwise.  After all, what would it mean if we could place the intelligence of a snake into a horse?  Or that of a human into a dog?  Foregoing the ethical considerations, it becomes easy to see that the concept of "intelligence" becomes a jumble of nothingness when it is paired with the wrong body.

    To make a specific point ... this is my primary problem with AI research, because they are attempting to simulate a human intelligence in a non-human body.  No chance it will ever work, except as a simulation.  Intelligence cannot arise [even by accident] with such a mismatch.
    Mundus vult decipi
    Unless it is self-motivated to want to go someplace, then it is simply a sophisticated machine obeying the intelligence of the human directing it
    This is always what it comes down to. You insist that intelligence means autonomy and yet you argue cogently that human intelligence is subservient to its biological origins. That just doesn't make sense.
    It predicts that there will never be a machine that can answer the question:"What are you thinking about?"

    So it's dead in the water already. Most systems can report their logs. I predict you will say "Ah but that's not intelligence" but it was you who gave it as an example. So please say what else the system needs to be able to do before you will admit it to the elite?
    Gerhard Adam
    Reporting logs is historical. I asked, "what are you thinking about" ... right now ... present tense. If you need a log to look it up, then you clearly aren't thinking.
    Mundus vult decipi
    Logs just illustrate the fact that it's easy to list active tasks and their data. As for the data being historical, I do hope you're not suggesting that a human answering the same question can give an absolutely up-to-date answer.
    Gerhard Adam
    As an example.  I can say

    I was looking at your response and thinking about whether I agreed with your premise of when the "historical" aspect of memory applies, and that made me think of another post, but then I thought I'd better respond to this one because I didn't want to lose my train of thought, etc. etc. etc.

    That's an example of real-time thinking that humans routinely engage in.  It's a free-form, uncommitted kind of thought to which no specific task or outcome is necessarily attached.  That's what I'm referring to when I suggested a robot be able to answer that question.

    You could certainly argue that that's just a difference in the degree of organization and doesn't actually reflect anything to do with intelligence.  In part you'd be correct, but the crux of my point is to ask the question when it isn't specifically task oriented.  When there is no clear objective or problem to be addressed.  What are you thinking during those idle periods and what would a machine answer?

    In my view this gets to the crux of motivation, because we can have periods of idle time, but then something kicks in to drive us in a particular direction.  In many cases, this doesn't have to have an external cause.  It's that aspect which is almost random, which provides much of what we would call insight or creativity.  How does that fit in?  That's another part of what I was getting at with my question.
    Mundus vult decipi
    This is nonsense. You're being more metaphysical about perfectly ordinary things than any crackpot creationist who turns up and starts spouting about irreducible complexity. Believe what you like, I'm not wasting any more time discussing ghosts. :)
    Gerhard Adam
    It's not irreducible complexity.  It's at the heart of what separates technology from an ethical dilemma. 
    Mundus vult decipi
    What, ghosts?
    Thor Russell
    "the concept of "intelligence" becomes a jumble of nothingness when it is paired with the wrong body. "
    This looks a lot like a falsifiable prediction to me.
    This claim has already been falsified by an experiment I know of.
    Thor Russell
    Gerhard Adam
    What's the experiment?  BTW, I'm assuming the experiment doesn't involve machines, since that would obviously not have falsified anything, since intelligence in machines hasn't been demonstrated.
    Mundus vult decipi
    Thor Russell
    I'll present it with my blog post so I can explain in what I consider to be its correct context.
    Thor Russell
    Gerhard Adam
    I'll look forward to it.
    Mundus vult decipi
    Thor Russell
    Your theory certainly doesn't seem to predict the behavior of the ferrets' brains as described here: http://www.science20.com/thor_russell/blog/basic_intelligence_and_amazing_ability_neurons-87866
    Surely you would agree that you would expect them not to be able to see?

    Thor Russell