There has been discussion lately that the computers that make up the internet could spontaneously become intelligent and conscious, with predictably dark consequences. There is a sense of foreboding, but little attempt at more detailed analysis of whether it could actually happen. Simple calculations relying on existing knowledge show that it is far more likely that the first example of a non-neuronal intelligence will appear in a specifically built supercomputer than that it will arise by chance from the internet. As a consequence, if such an intelligence did arise, its behaviour would initially be observed in a much more controlled environment.

Here are the reasons why the current internet is not capable of coming alive in this way, and why it will happen in a custom-built environment first:

1. A signal takes significantly longer to travel from one side of the world to the other over the internet than it does to cross the brain. This limits the speed at which such an intelligence could function.
2. The combined processing power of all the computers connected to the internet is not vastly greater than that of a single human brain, and may barely exceed it at all, depending on what fraction of computers are actually online at any one time and how much processing power is needed to simulate neurons accurately.
3. If the task of simulating a brain were divided into a billion pieces across a billion computers, the extreme interconnectedness of neurons means the two-way bandwidth required far exceeds what the vast majority of computers have available; and if all computers demanded their maximum bandwidth simultaneously, as they would need to, the internet would grind to a halt.
4. Rapid spontaneous evolution of an intelligence with a structure quite different from a brain is very unlikely, for the reasons given below.
5. The structure of a computer matches the structure of neurons and synapses very poorly. Custom-built hardware would do a much better job of performing the calculations that neurons and synapses carry out and that such a consciousness would require.

1. The speed of thought vs the internet

First, let's compare the speed of thought in a brain with that of a distributed consciousness running on the internet.
Estimates for the speed at which signals travel along unmyelinated neurons range from 5 to 25 m/s. Let's take 10 m/s as a working figure.
The maximum distance a signal needs to travel from one side of your brain to the other is about 10 cm, so at 10 m/s the longest such a journey can take is 10 milliseconds. By contrast, about 200 milliseconds is the fastest a signal can travel from one side of the world to the other over the internet, and because of the light-speed constraint it will never get much faster than this. Even in this best case, the internet's 200 milliseconds is 20 times slower than the brain's 10 milliseconds.
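
As a quick sanity check, here is the same arithmetic as a short Python sketch. The 10 m/s, 10 cm and 200 ms values are the estimates above, not measurements:

    # Rough latency comparison: brain signal vs round-the-world packet.
    signal_speed = 10.0      # m/s, unmyelinated neuron (estimates range 5-25 m/s)
    brain_distance = 0.10    # m, one side of the brain to the other

    brain_latency = brain_distance / signal_speed    # 0.01 s
    internet_latency = 0.200                         # s, best-case antipodal delay

    print(f"Brain:    {brain_latency * 1000:.0f} ms")                     # 10 ms
    print(f"Internet: {internet_latency * 1000:.0f} ms")                  # 200 ms
    print(f"Internet is {internet_latency / brain_latency:.0f}x slower")  # 20x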

2. The processing power of the internet vs the brain

The processing power of the brain

It is estimated that the brain contains 100 billion neurons, each with around 10,000 connections, or synapses. It is now thought that a significant amount of the computation occurs in the synapses as well as in the neurons. That gives 10^15 computational elements in total. With an average firing rate of 10 times per second, this works out to 10^16 operations per second, and we will go with this common estimate of 10^16 FLOPS. In other words, a computer capable of 10^16 FLoating-point OPerations per Second could carry out the same functions, provided it was also structured in the right way.
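
The estimate can be reproduced in a few lines (a sketch using only the figures just quoted):

    # Brain processing-power estimate from the figures above.
    neurons = 1e11              # ~100 billion neurons
    synapses_per_neuron = 1e4   # ~10,000 synapses each
    avg_firing_rate = 10        # average firings per second

    elements = neurons * synapses_per_neuron    # 1e15 computational elements
    ops_per_second = elements * avg_firing_rate
    print(f"{ops_per_second:.0e} FLOPS")        # 1e+16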

The processing power of the internet

The processing power of the entire internet is also not known with complete accuracy, but a reasonable estimate is possible. There are around 1.5 billion computers worldwide at the moment, and a considerable majority are probably not powered up and connected to the internet at any one time.

Let's assume 1 billion are on and connected, which is probably significantly more than the true number, and assume each computer has a processing power of 3 GFLOPS. A dual-core 1.5 GHz chip running at 100% efficiency would achieve this. Graphics cards can in some circumstances do significantly better, but the majority of computers do not have them, and there is no reason to expect they could run at full efficiency when simulating neurons.

The total processing power of these billion computers is 10^9 * 3*10^9 = 3*10^18 FLOPS, equivalent to the brainpower of just 300 people in the absolute best case. This goes against the intuition that the power of the internet must somehow exceed that of the entire human race combined. That may become true in the future, but it certainly is not true yet. According to some estimates, the combined processing power of all computers only exceeded that of a single human brain about a year ago.
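
In code, with the generous assumptions spelled out (a sketch, not a measurement):

    # Total internet processing power under the generous assumptions above.
    computers_online = 1e9     # assume 1 billion machines on and connected
    flops_per_machine = 3e9    # ~3 GFLOPS each (dual-core 1.5 GHz, 100% efficient)
    brain_flops = 1e16         # estimate from the previous section

    internet_flops = computers_online * flops_per_machine
    print(f"Internet: {internet_flops:.0e} FLOPS")                   # 3e+18
    print(f"Equivalent brains: {internet_flops / brain_flops:.0f}")  # 300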

So even if the internet did become conscious, with only the power of several human brains it would have a hard task coordinating millions of separate electronic devices. After all, you cannot focus on doing millions of things at the same time.

3. Parallelism of algorithms and bandwidth requirements

Computational power is not the only consideration for algorithms. Bandwidth, memory usage and parallelism also matter. Some algorithms are easier to parallelise than others: Folding@home and SETI@home are examples that can be made almost completely parallel. A necessary condition for parallelism is that calculations do not depend on the simultaneous results of many others, and analysing billions of potential signals or folding patterns independently clearly satisfies this. However, there is every reason to believe that intelligence and consciousness are not algorithms of this kind. Intelligence evolved in a system that requires massive interconnection between many different physical locations, and where correct timing is essential for correct operation.
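
To make the distinction concrete, here is a toy Python sketch contrasting the two kinds of workload. The computations are hypothetical stand-ins, not code from Folding@home or SETI@home:

    # Embarrassingly parallel: every item is independent, so a million
    # machines could each take a slice with no communication between them.
    signals = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    scores = [sum(s) for s in signals]   # stand-in for analysing one signal

    # Tightly coupled: every step depends on the previous result, so the
    # work cannot be split up without constant communication between
    # machines -- this is the brain-simulation case.
    state = 0.0
    for step in range(10):
        state = 0.5 * state + step       # stand-in: each update needs the last
    print(scores, state)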

Simulating the brain with the internet

To do this would involve spreading the simulation of 10^15 synapses and 100 billion neurons over 1 billion computers. That works out to 100 neurons and 1 million synapses per computer. It is very likely that a large majority of those 1 million synapses will connect to neurons outside the 100 being simulated locally.
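
The division works out as follows (a sketch using the figures above):

    # Spreading the whole-brain simulation across the internet.
    neurons = 1e11     # 100 billion neurons
    synapses = 1e15    # 10^15 synapses
    computers = 1e9    # 1 billion machines

    print(f"{neurons / computers:.0f} neurons per machine")     # 100
    print(f"{synapses / computers:.0e} synapses per machine")   # 1e+06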

Bandwidth required

Neurons can fire at a maximum rate of about 50 times per second. If each synapse could potentially fire 50 times per second, then each computer would need 50 * 1 million = 50 Mb/s of simultaneous uplink and downlink, because a signal could arrive on any of its synapses. This is just an estimate, and it does not even account for the fact that a signal may need to be relayed through many computers to reach its destination, as peer-to-peer networking often requires. The average internet connection speed is about 1.7 Mb/s, and that is the download speed; upload speeds are typically slower still. The bandwidth required is thus far beyond what is currently available.
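
The arithmetic, with the optimistic assumption of one bit per firing made explicit (a sketch, not a measurement of real traffic):

    # Per-machine bandwidth needed if any of its 1 million synapses can
    # carry an event up to 50 times per second. One bit per event is
    # optimistic: real traffic would also need addressing and timing data.
    synapses_per_machine = 1e6
    max_firing_rate = 50        # Hz
    bits_per_event = 1

    required = synapses_per_machine * max_firing_rate * bits_per_event  # bits/s
    available = 1.7e6           # ~1.7 Mb/s average connection (download)

    print(f"Required:  {required / 1e6:.0f} Mb/s up AND down")  # 50 Mb/s
    print(f"Available: {available / 1e6:.1f} Mb/s")             # 1.7 Mb/s
    print(f"Shortfall: ~{required / available:.0f}x")           # ~29x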

So the immediate effect of trying to simulate a brain with the internet is that the internet would simply crash and grind to a halt. ISPs' hardware, fibre-optic equipment and the rest are also just not capable of handling every computer using its maximum bandwidth at the same time.

What about faster computers?

Now what if you increased the bandwidth and computing power of the 1 billion computers, making them, say, 1000 times faster? That would just make the roughly 200 millisecond delay for a signal to cross the world that much longer in terms of clock cycles. Most of the processing power would now go to waste simply waiting for signals to arrive. The unmistakable conclusion is that it would make much more sense to use far fewer computers and locate them physically in one place, reducing propagation delay and keeping the bandwidth requirements manageable.
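
A sketch of why faster chips do not help (illustrative figures):

    # Clock cycles spent idle while one round-the-world signal is in flight.
    # Geography and light speed fix the latency, so faster chips just wait more.
    latency = 0.200                      # s, best-case antipodal delay
    clock_today = 1.5e9                  # Hz
    clock_future = clock_today * 1000    # hypothetical 1000x faster machine

    print(f"Idle cycles today:  {latency * clock_today:.1e}")   # 3.0e+08
    print(f"Idle cycles future: {latency * clock_future:.1e}")  # 3.0e+11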

There are reasons why your brain is physically located in one part of your body, and this principle is certainly one of them. Your brain would not be more effective if it were spread over a large area; it would just run slower. The same principle applies if the consciousness is running on a non-neuronal substrate.

4. Could the internet evolve a structure capable of intelligent thought quite different to neurons and synapses?

By this stage you may well be thinking that the internet is not suited to simulating a brain, but why couldn't it spontaneously evolve intelligence in some other way? After all, aeroplanes do not fly like birds, yet they are much faster.

Well, for a start, consider how long this evolution has had to happen. Even though technology seems to progress very fast, the basic architecture of computers has changed very little in the last 30 years, so there is little for evolution to act on in that regard. Moreover, the processing power of the internet has exceeded that of a SINGLE human brain for probably less than 10 years, which is no time at all on evolutionary timescales. If the computing power of the internet were equivalent to millions of human brains, that might be significant, but it is not the case.

What about the evolution of computer viruses and other software? "Virus" is in this case an apt name for the level of evolutionary sophistication of computer software. A computer virus can replicate itself, and it can change form a little to avoid detection. However, it is no match for programmers once it is released. It can cause havoc for a while, but once studied, its ability to change itself is not enough to keep all of its variants from eventually being eliminated, given sufficient desire. No computer software has demonstrated general intelligence. So is a computer virus, on a timescale of mere decades, going to evolve from having almost no intelligence, skip the intermediate forms of intelligence seen in nature, and become a self-aware organism overnight, despite the fact that biological evolution took millions of years to do this and we have made little progress towards general intelligence in the last decade?

Not only would it have to make this unbelievable leap (evolution does not work this way), it would have to "find" a solution to intelligence that works on the relatively low-bandwidth, high-latency architecture that is the internet. If such an architecture existed, it is highly likely that biological intelligence would have evolved to use it.

5. Progress, issues and challenges: supercomputers vs brains

So given these objections, it is likely that artificial intelligence will be designed by copying existing biological structures and implemented in a hardware architecture much better suited to the task. A custom-built supercomputer would be such a thing.
The world's fastest supercomputer has the following stats:

1) 8.2 billion megaFLOPS (8.2*10^15 FLOPS)
2) 30 quadrillion bytes of storage (3*10^16 bytes)

The corresponding estimates for a brain are:

1) 2.2*10^15 FLOPS (I have used 1*10^16 elsewhere in this article)
2) 3.5*10^15 bytes of storage

So you can see that the storage of the supercomputer exceeds that of the brain, and its processing power is comparable. The comparison is misleading, however, because the quoted FLOPS figure is a best-case one, achieved only on the most easily parallelised algorithms. Attempts to actually simulate neurons, discussed at various times in the engineering literature, show that because neuron simulation cannot be parallelised in that way, the machine is slowed down dramatically: anywhere from 10 to 1000 times, owing to bandwidth and timing requirements. Not all architectures are equal, however; FPGA- or ARM-based architectures are more efficient at this task than the older, more conventional x86-based ones that Pentium chips use.
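
Applying the quoted slowdown range to the supercomputer's peak figure gives the following (a sketch using the numbers above):

    # Effective performance once the 10-1000x slowdown for poorly
    # parallelisable neuron simulation is applied to the peak figure.
    peak_flops = 8.2e15    # world's fastest supercomputer, best case
    brain_flops = 2.2e15   # brain estimate quoted above

    for slowdown in (10, 100, 1000):
        effective = peak_flops / slowdown
        print(f"{slowdown:>4}x slowdown: {effective:.1e} FLOPS"
              f" = {effective / brain_flops:.3f} brains")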

The structure of intelligence

The connectedness of neurons is quite different in architecture from modern computing systems. There is no CPU or dedicated memory in the brain; memory and processing occur in the same place. The most efficient way to simulate this would be to make the physical structure of the computer as close to the neuronal structure as possible. There is talk of building dedicated hardware for such a task, which, if followed through in a supercomputer, would widen its advantage over the internet in giving rise to intelligence.

It is quite likely that such a computer would outperform all the computers on the internet at the task of running the "intelligence algorithms" necessary to generate human-level intelligence, even if it had considerably less raw computing power.

Challenges to be faced

Despite supercomputers appearing to have the raw processing power of a brain, several challenges must be faced before consciousness of human complexity will be possible in a non-neuronal substrate.

1. The estimated FLOPS figure may be off by a significant amount. Actual synapses or neurons may require far more computational resources to be simulated properly.
2. Overcoming the bandwidth and connection requirements may demand a drastic change in computer architecture, which may not be possible with current silicon manufacturing techniques.
3. The actual structure of how to connect the neurons is not known. Even if you have enough of them, connecting them the wrong way will no more produce intelligence than a short circuit of wires and pixels will produce a functioning TV.
4. Little is known about how connections change, or how brain chemistry is involved in this. Changing connections are essential for learning. There are about ten times as many support, or glial, cells as there are neurons in the brain; they are thought to be involved in learning, but how they work is a mystery. Simulating a fixed network of neurons is one thing; the intelligence required to understand new things involves the changing of connections, about which very little is known, and which may require considerable additional computational power and hardware complexity that we cannot currently build. The data from a complete connectome will not by itself answer this.

Conclusion

So we are not there yet. However, it is much more likely that the first example of human-level intelligence in a non-neuronal substrate will be in a specifically built supercomputer than that one will self-assemble from the internet. After such an intelligence has been built in a supercomputer, there is still the possibility that it could be injected maliciously into a more powerful and advanced future internet. However, given the knowledge we would by then have about what is required for intelligence to exist, we would be in a very different position from the one we are in now regarding how to prevent or cope with this. The threat would be much better known and quantifiable.
