Many people, including Stephen Hawking and Elon Musk, are worried about the possibility of an artificial program that might become intelligent and take over the world. The idea is that at some point in the future we may be able to develop artificial intelligences, through programming, that are equal in intelligence to humans but capable of living much faster, and able to rewrite their own programs to become even more intelligent.

Then, the story goes, within a short time of first "awakening" as a self modifying program with human level intelligence, it might be able to "lift itself up by its bootstraps", building more and more super intelligent versions of itself. It would start by rapidly learning all the physics, maths, and science we already have, and then, having mastered all that, it would start to develop new theories and ideas that we can't even imagine. With the fast computers it would be running on in this future, the idea is that it might go through all of this in perhaps as short a time as a few hours.

In this way, the story continues, from a human level intelligence it might rapidly develop into an artificial super intelligence, and start to do things we can't begin to understand, because we just aren't clever enough - designing and using machines whose nature and purpose we have no idea of, and that we couldn't understand even if it explained them to us as clearly as it possibly could.

And then, the fear is that these newly born super intelligent AIs wouldn't have our best interests at heart - perhaps having objectives that mean nothing to us. Or else, that it would just lead to a society humans find scary, because we haven't a clue what is going on or why the AIs are doing the things they do. The result would be a dystopia - a future that is undesirable or frightening, the opposite of a utopia, a future where everything is wonderful.

Whereas, the story continues, if we can make sure this future goes in a direction that we want, it could instead become a utopia, with all our physical problems solved by the ingenious discoveries of these super intelligent AIs.

It sounds like pure science fiction. Indeed, something like this is often used as a plot theme in sci-fi.

But those who think this will happen often mention near future timescales. They call it the "Singularity" because of this predicted moment of sudden rapid exponential rise in the capabilities of AI. For more background see Tim Urban: The Artificial Intelligence Revolution.

There are many academics, science fiction writers and so on who actually think this is going to happen some time this century - some say as soon as 2040 or earlier. They call it the future Technological Singularity.

However there are some who think that this can never happen in the way described. And that's how I see it myself.

I wrote this originally as my answer to a "knowledge prize" question on Quora: What constraints to AI and machine learning algorithms are needed to prevent AI from becoming a dystopian threat to humanity?

I didn't expect to win the prize, but it was fun to enter. Though there were others who were skeptical about the idea that we'd be able to write computer programs that understand things like a human in the near future, of all those who answered I was the only one to put forward the idea that strong AI is non computable.

If things like this were settled by votes, then those with my view would lose easily. But they aren't, of course. In science and physics, minority views have often turned out to be correct, so they can't be dismissed just because they are held by a minority. The idea that strong AI is non computable has a prestigious past, with Kurt Gödel himself espousing it. So, anyway, here is my answer - see what you think of it.

MY ANSWER

I do think that more intelligent creatures than us may well be possible, and that we could maybe create them also. But I think this won't happen through programming.

That then leads to different ideas about what the ethical challenges are. It also makes it unlikely, or impossible, that you get a sudden, self reinforcing, recursive runaway effect like this. It becomes much more like David Brin's idea of "uplift" in his fictional Uplift Universe. Indeed I'd see that as a form of artificial intelligence, and the most likely way to strong AI: using genetics to increase the intelligence of other creatures like dolphins, or chimps, or indeed ourselves.

And as you will see in my answer, this also leads to the idea that strong AIs need protection from us as much as we need protection from them.

Following Penrose, I think strong AI is a part of non computable physics. This suggests that rather than computer programs, it might involve genetic manipulation, artificial living cells, or biological neurons.

So then strong AI might require us to attempt to create human babies with enlarged brains through genetic manipulation, or to splice gene sequences from humans into the DNA of a blue whale with its huge brain, or some such. The ethical issues then would be similar to those you would get from genetic modification of human embryos.

Or, if mechanical, they might be machines that have much more in common with creatures made of living cells than our present day computers. Not just neural nets, which attempt to abstract the cells of our brain into simple logical units, as those are part of computable physics.

I'm a mathematician and programmer, and in the 1980s through to the 1990s I studied mathematical logic at postgraduate level at Oxford University (with Robin Gandy as my supervisor, studying Strict Finitism, i.e. ways of doing mathematics, such as calculus, without infinity).

My logic research was in a different area, but that's how I first came across Roger Penrose's ideas about the limitations on what's possible with machine learning. He was in the process of writing his books on the subject at the time, The Emperor's New Mind and Shadows of the Mind, and gave talks about them to logicians, philosophers, and physicists in Oxford, which I attended.

I'm a computer programmer now, though not in the field of Artificial Intelligence - I'm the author of Bounce Metronome Pro and Tune Smithy amongst other programs.

I recently wrote a science blog article about this topic and made it into a Kindle booklet on Amazon:

If Programs Can't Understand Truth - Ethics of Artificial Intelligence Babies 

FIRST - TO CLEAR UP WHAT IT MEANS TO SAY THAT STRONG AI IS PART OF NON COMPUTABLE PHYSICS

So, first, following Roger Penrose's ideas, I don't think that there is any chance at all that the methods being followed in pursuit of machine learning will lead to artificial intelligences.

There's a lot of confusion about what it is Penrose is actually saying, never mind the arguments he has for his point of view. So that's what I'd like to focus on here. And then show how this leads to a rather different projection for the future and attitude to what the issues are in weak and strong AI.

NOT ALL PHYSICS IS COMPUTABLE

There is no reason at all why all of physics has to be computable, i.e. capable of being simulated in computer programs. And indeed, it is not.

As a simple example, there is no way to simulate true randomness with a computer program, without external input. You can do a good approximation, but however hard you try, it will always have some patterning to it.

This is a simple example to show that we can't simulate everything in the world with computers. We can't simulate true randomness (without a random feed).
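To see what that determinism means in practice, here is a minimal Python sketch (the seed value is arbitrary): a pseudorandom generator is completely determined by its seed, so re-seeding reproduces the identical "random" sequence.

```python
import random

# Seed the generator and draw five "random" numbers...
random.seed(42)
first_run = [random.random() for _ in range(5)]

# ...then re-seed with the same value and draw again.
random.seed(42)
second_run = [random.random() for _ in range(5)]

# The two sequences are identical: the output is a deterministic
# function of the seed, not true randomness.
print(first_run == second_run)   # True
```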

Yet we have examples of pure randomness in physics, e.g. in radioactive decay. So we can't simulate all of physics in a computer program.

I think AI is another example of non computable physics; a much more difficult example than randomness. I think that the essence of what makes us able to recognize and understand truth is in the bit that gets left out of any computer simulation. There are many problems in maths that are non computable, so why not also in physics?

Now, many people have already decided they don't accept Roger Penrose's argument as valid. If so - just forget about his particular argument, and think about his conclusion instead, and the question it raises.

IS STRONG AI A PROBLEM IN COMPUTABLE PHYSICS?

So first, we can't just dismiss it by saying that all physics is computable, because it isn't. We've seen that radioactive decay is a counterexample, for starters. Also, we have a strong filter operating here, since we try to reduce everything to computable problems. When a situation is not amenable to computer models, we simplify it until it is, and use approximations.

For instance, in quantum mechanics the most complex system we can model exactly is the hydrogen atom. In the Newtonian theory of gravity, we have exact ("analytic") solutions for the two body problem, but normally have to use approximations when we get to three or more bodies mutually interacting.
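As a concrete illustration of that kind of approximation, here is a minimal sketch (with made-up masses and initial conditions, in arbitrary units) of the usual workaround for three or more bodies: step the equations of motion forward numerically in small time increments, and accept the accumulated error.

```python
import math

G = 1.0  # gravitational constant in arbitrary units
# mass, position [x, y], velocity [vx, vy] - illustrative values only
bodies = [
    {"m": 1.0, "pos": [0.0, 0.0],  "vel": [0.0, 0.0]},
    {"m": 0.1, "pos": [1.0, 0.0],  "vel": [0.0, 1.0]},
    {"m": 0.1, "pos": [-1.0, 0.0], "vel": [0.0, -1.0]},
]

dt = 0.001
for step in range(10_000):
    # pairwise Newtonian gravitational accelerations
    accelerations = []
    for i, bi in enumerate(bodies):
        ax = ay = 0.0
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx = bj["pos"][0] - bi["pos"][0]
            dy = bj["pos"][1] - bi["pos"][1]
            r = math.hypot(dx, dy)
            ax += G * bj["m"] * dx / r**3
            ay += G * bj["m"] * dy / r**3
        accelerations.append((ax, ay))
    # crude Euler step - a real code would use a better integrator
    for b, (ax, ay) in zip(bodies, accelerations):
        b["vel"][0] += ax * dt
        b["vel"][1] += ay * dt
        b["pos"][0] += b["vel"][0] * dt
        b["pos"][1] += b["vel"][1] * dt

# An approximation only: the answer depends on the step size and the integrator.
print(bodies[1]["pos"])
```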

Similarly with models of the brain: we can't model every atom, nowhere near. The idea is to model it at a higher level, with simplified nodes in a neural network in place of neurons. It's an approximation; it has to be.

So, why assume the functioning of the brain has to be computable? Perhaps this process of simplification is the step where we lose whatever it is that permits humans to understand truth?

His argument has led to a lot of discussion. Some, like myself, find it convincing; many don't.

But one thing it does show up is that nobody has any proof that programming can lead to strong AI.

His opponents attack his arguments and conclusions, he replies, and they reply again, resulting eventually in long intricate arguments that hardly anyone can follow. It's understandable that not many people find this convincing.

But none of them have ever come up with any logically convincing argument in the other direction.

It's an asymmetrical situation. We have a disputed proof that the problem is non computable. As far as I know we have nothing by way of an attempt at logical proof that strong AI is within the realm of computable physics.

IMPRESSIVE PROGRAMMING AND ANTHROPOMORPHISATION

I remember how impressed I was when I first "talked" to a computer program back in 1971 using a teletype.

It may have been one of these - I don't remember the model, but it looked like this: a machine with a continuously scrolling paper roll. You could type on the keyboard, and it would type back at you after a pause to work out what to say, scrolling the paper up as it went so you could read what it said. The teletype was just the machine for communicating with the computer, which was in another room.

(Image: a teletype. By Rama & Musée Bolo - own work, CC BY-SA 2.0 FR, File:Teletype-IMG 7287.jpg)

It was just yes/no questions, with the computer asking me the questions and going on to a different question depending on what I said - but back then it was just "wow", the idea that you would type words on a teletype, even just Y or N, and the computer would respond by typing something back instantly.

To me, in an era where nobody had computers, and they were only available as huge research machines that filled an entire room - it felt almost like there must be a mind there directing its typing.

But it wasn't just newbies like me - on my gap year before university, learning programming and encountering a computer for the first time, in the computational physics group at the Culham research labs of the United Kingdom Atomic Energy Authority (the team led by Richard Peckover - obituary here).

Back then many people, even experts, were impressed by the capabilities of computers. Things like: "WOW - a computer can play a game of checkers against a human!!!!"

Even earlier, back in 1961, Claude Shannon, the "father of information theory", said:

"I confidently expect that within a matter of ten or fifteen years, something will emerge from the laboratories which is not too far from the robot of science fiction fame"

1:52 into this video (the checkers game is at 0:50):

The Thinking Machine (Artificial Intelligence in the 1960s)

Now, of course, none of that would impress anyone. Computers asking us questions are even a nuisance sometimes, like our word processors asking "Are you writing a letter?" and offering to help. And a computer able to play checkers wouldn't be "Wow" but "Duh". Fifty-five years makes a big difference in technology, especially when you look back at past predictions.

Still, even now, chatbots can be quite impressive to those who are new to them, for a few seconds until they trip up. I expect most of you have tried this, but if not, try chatting to one of these chatbots such as the Mitsuku Chatbot.

Deep Blue is able to beat human opponents, but does it by looking much further ahead and considering many more possibilities than humans do.

DeepDream is also quite impressive in its way, in how it "finds things" in a scene that aren't there, much as we see things in clouds.

Also we get impressed by machines that are able to walk.

Atlas, The Next Generation

Commentators even read human intention into them. But this is just a machine that can walk like us and avoid obstacles. It does not have emotions or intentions, and for instance, there is nothing there to mind about being pushed over or repeatedly getting the boxes taken away from it. It's just a program doing what programs do.

It's a natural tendency to anthropomorphise anything that resembles us, even dolls and action figures of course. Back in the eighteenth century people were very impressed by clockwork automata, such as Jacques de Vaucanson's flute player.

I'm sure we will get artificial intelligence that is good enough to seem almost alive to us in the future in more and more challenging situations, and sometimes for more than just a few seconds at a time. And in specialist areas like chess or go, they can be as good as humans or even better.

But as with all those previous examples, as we get used to them, they will seem less impressive. I don't think this process of refinement will ever produce true intelligence. It just seems a more elaborate version of the clockwork automata that were so impressive in the eighteenth century.

SCOPE OF THIS ANSWER

In this answer I will cover his conclusion, not the arguments, which are intricate.

I will also give reasons why I think that the task of making a strong AI - an artificial intelligence functionally equivalent to human intelligence - might be harder than almost anyone in the artificial intelligence community supposes, at least several orders of magnitude harder. And talk about what form it could take if it involves non computable physics.

And then also, I'll talk about our responsibilities for such intelligences, and the improbability that they would have any kind of coherent objective or world view when first created. And I'll talk about how they would nevertheless have a capacity for suffering, and so would also need to be protected from us.

NO PARTICULAR CONSTRAINTS FOR COMPUTER PROGRAMS

With this background, my conclusion in this answer is that there is no need for particular constraints on machine intelligence if by that you mean restraints on computable AI.

That's because this can never lead to self aware intelligences with understanding and purpose. It can't do so if computer programs are unable to understand truth. How can you have awareness and an objective, if you don't know what it is for something to be or not be, or to be true or not true?

CONSTRAINTS FOR PARTICULAR AREAS OF PROGRAMMING

Though we do need constraints in particular areas. For instance, for self driving cars there are obvious ethical issues: are they safe enough to be trusted in place of a human driver? For fly-by-wire, we have already decided the systems are safe enough in their role as an essential assistant to the pilot. Those planes would fall out of the sky without their computers. They are now accepted as routine, with the Airbus A320 the first airliner with an all-digital fly-by-wire control system, entering service in 1988.

Airbus A320 family - the first commercial airliner to depend on fly-by-wire, so dependent on the computer to stay in the air. By Julian Herzog, CC BY 4.0, File:Lufthansa Airbus A320-211 D-AIQT 01.jpg. At the time it was a daring decision, but now such systems are unremarkable. We have learnt that they are reliable and suitable for passenger jets.

For autonomous robotic soldiers - most would say that giving them the ability to kill humans they identify as "opponents" without human supervision would be a step too far at present. We need to be careful, on a case by case basis.

And computer programs can certainly outcompete humans in specialized areas. So the constraints needed depend on the area in which they are being used. Image recognition algorithms, for instance, need to be monitored to make sure they don't mislabel images in ways that offend humans, as the recent case of Google image tagging identifying black people as gorillas shows.

Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms

Google, of course, is a leading player in image recognition. It uses neural nets to tease out details from an image and identify features. When you feed the results back into the neural net over and over again, you can reinforce this effect. See DeepDream.
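A rough sketch of that feedback loop - not Google's actual code; the layer choice, step size and iteration count are arbitrary, and it assumes PyTorch and torchvision with downloadable pretrained VGG16 weights - is to start from an image (or even random noise) and repeatedly nudge the pixels so that whatever an intermediate layer already responds to gets amplified:

```python
import torch
import torchvision

# A pretrained image classifier; we only use its convolutional feature layers.
model = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.DEFAULT
).features.eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from random noise

for _ in range(50):
    activations = model[:20](img)      # activations of an intermediate layer
    loss = activations.norm()          # "more of whatever this layer responds to"
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)  # gradient ascent on pixels
        img.grad.zero_()
        img.clamp_(0.0, 1.0)

# img now contains exaggerated versions of features the net "found" in the noise.
```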

You can upload your own images to try them out here: Deep Dream - Online Generator

And this video starts with random noise and repeatedly applies Deep Dream to it:

Google Deep Dream Zoom "Inside an artificial brain"

This shows both the power of weak AI and its fallibility. It is easy to use weak AI to draw out details from a scene that aren't there.

It's the same with text based information searching. Watson, which won Jeopardy! against human opponents (IBM Watson Wins Jeopardy, Humans Rally Back), sometimes got the answers easily - it is great on questions of geography, for instance. But it sometimes said silly things, such as giving Montreal as the answer to a question about US cities, even though its database would say it is in Canada - using its fluidity of interpretation in a way a human wouldn't.

That's also weak AI. There are many situations where we might need to be careful about using it. E.g. information mining has privacy concerns, and there's the real possibility that humans may come to rely on computer programs that make fallible conclusions in situations where a mistake has serious consequences.

But it's not true artificial intelligence in the sense of "Strong AI".

THE IDEA THAT A FLEXIBLE ENOUGH PROGRAM IS BOUND TO HAVE TRUTH GLITCHES

So, to expand on this, first let me summarize Roger Penrose's idea. He argues that computer programs will never understand mathematical truth: that however they are programmed, there will always be things that we as humans can see to be true which they can't see.

It is obvious that whether a computer program's output is truthful or not depends on the programming. So, for instance, you can easily program a calculator to say that 2 + 2 = 5, or whatever it is you want it to say. Occasionally you get glitches in calculator programming that lead to mistakes - not as blatant as that, but Windows Calculator has a long history of such glitches. For instance, this is a fun glitch in the current version of Windows Calculator:

2 - sqrt(4) = 1.068281969439142e-19 (when of course it should be 0)
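For a flavour of how easily such truth glitches arise in ordinary arithmetic code, here is a floating-point rounding artefact in Python (not the Windows Calculator bug itself, which involves its own arbitrary-precision engine):

```python
# An expression that should be exactly zero comes out as a tiny nonzero number,
# and the program "asserts" it as confidently as any other result.
x = 0.1 + 0.2 - 0.3
print(x)           # 5.551115123125783e-17
print(x == 0.0)    # False
```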

If we were computer programs, then you could get genetic effects that lead to some people honestly believing things like that - that 2 - sqrt(4) = 1.068281969439142e-19. They would swear blind that this is what 2 - sqrt(4) is, because they could see it for themselves, clear as day.

After all, since some people are lightning calculators, it's something humans can do - we all potentially have that capability. But there must be disadvantages to it, or it would have been selected for and everyone would be able to do it. So why not have people who are not just slow at maths, as we are, but who actually say and believe that 2 - sqrt(4) = 1.068281969439142e-19? There would be no evolutionary disadvantage at all, as you don't normally need to work out sqrt(4) in ordinary life and death situations, or in any situation that early humans would encounter.

So that's the intuition behind it. But of course he gives a much more elaborate argument, using Gödel's theorem.

Basically his argument is that if it has a program, then you can inspect that program and derive a statement that you can see to be false but which the program will declare to be true. It is just like the Windows Calculator falsehood - but he says that any computer program versatile enough to do simple maths and to reason about what it is doing at a meta level, i.e. to show "understanding" of maths, will always have truth glitches, only much more subtle than the Windows Calculator example.

It doesn't even need to do intricate maths. If it is able to count, and add and multiply, and "understand" what it is doing like a human, then by his argument, it already inevitably has these truth glitches.

That's all I'll say about his central argument here, as it is not the main focus of this answer. I talk about it a lot more in my booklet If Programs Can't Understand Truth - Ethics of Artificial Intelligence Babies.

WHAT ABOUT HUMANS THAT LIE OR BELIEVE THINGS THAT THEY CAN'T PROVE?

Of course humans often lie, or believe things on inadequate foundations. But they do so on the basis of an understanding of what truth is. If you don't understand what truth is, the most you can do is say something nonsensical that betrays that lack of understanding. You can't lie if you are unable to recognize truth.

When a chatbot says something mistaken, there is no-one there lying, or superstitious, because it doesn't know what truth is.

Also, whatever strange beliefs some people may have, our daily life is based on a foundation of many truths we understand implicitly. You know that you are inside a house (if you are), and that you are typing on a computer keyboard. That if you open a door you can go outside. That humans are small enough to get through a doorway. You know how far it is to the road, or to the post office, and how to get there.

A computer program could be programmed with all that knowledge too. But - if Penrose is right, it can never "know" these things in the way a human being can do.

For instance, if a computer program has the nearest post office marked on its map and it says that you have to go out of the door and turn right to get to it, but actually you have to turn left - then it isn't lying. It is just a bug in its programming. Not unless you program it to lie - but in that case it is you, the programmer, who is doing the lying, not the program as such. At least if Penrose is right. Because if his conclusion is right, it is impossible for any computer program to understand truth or lies in the way a human can. So it is not telling the truth or lying; it is just doing whatever you programmed it to do.

The reason for focusing on maths, as Penrose does, is that there it is really clear cut what truth is. So it is then easier to spot that the program has no understanding of what truth really is.

A SIMPLER ARGUMENT

As well as Penrose's argument, there's a simpler one, not at all logically compelling but intuitive. The argument is that if you truly understand truth, but are programmable, a programmer could put a logic bomb in, so that on certain occasions you just spout nonsense.

For example, someone could edit your source code so that every time you see a shooting star you say that 2 + 2 = 5, and for that period of time believe it is true.
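As a toy illustration of that "logic bomb" idea (the names and conditions here are entirely hypothetical, just to make the point concrete):

```python
def believed_sum(a, b, shooting_star_visible=False):
    """Return what the program 'believes' a + b equals."""
    if shooting_star_visible and (a, b) == (2, 2):
        return 5   # the planted glitch, asserted as confidently as any other answer
    return a + b

print(believed_sum(2, 2))                              # 4
print(believed_sum(2, 2, shooting_star_visible=True))  # 5
```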

Since a being that truly understands truth surely can't be reprogrammed in that way, it follows that anything that can be so programmed can't understand truth.

It's not logically conclusive at all. Those who think strong AI is possible can point to glitches in human thinking such as optical illusions, hallucinations, hypnotism, and so on. And can point out that neural nets can't be edited to say whatever you like in the same way as other computer programs, and say that deep learning makes computers behave much more like humans.

But I think it has some force to it. It gets you thinking about the issue through a much simpler argument than Roger Penrose's. It may help you begin to understand the point of view of people (like myself) who think that programmable strong AI is impossible.

GÖDEL'S OWN TAKE ON IT

Gödel's own argument is not particularly convincing - and he says as much himself - but it shows that he thought it probable that human minds can't be reduced to a finite machine (i.e. a Turing machine).

He looked at Diophantine problems - simultaneous polynomial equations in several unknowns where only integer solutions are allowed. His result showed that no finite system of axioms and deduction rules is sufficient to settle all Diophantine equations. In this quote, by "absolutely unsolvable" he means not solvable by any human being.

"So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified"

He goes on to say that he finds the first of these two possibilities more plausible, that "the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine"

See"Some basic theorems on the foundations of mathematics and their implications" - lots of missing pages in the preview, sorry.

The main difference is that Roger Penrose took this a lot further, applied it to physics, and supplied what he considered to be a proof that human minds can't be reduced to a program, using ideas of Turing machines combined with Gödel's results.

Gödel died in 1978, so there is no way to know what he would make of Penrose's argument.

But it helps to show that Penrose's view that human intelligence can't be reduced to a computer is one that you can share, whether or not you accept his argument for it.

So anyway let's go on and see what some of the consequences are of this, if true.

SO PROGRAMS CAN ONLY HAVE A SIMULATED UNDERSTANDING OF TRUTH

If true, it means that no program can ever be programmed to truly understand truth.

They can be programmed to simulate an understanding, but in situations not tested for, they could easily then say things we see as just totally silly and obviously false. They always need human programmers to continually tweak their programs to deal with these truth glitches.

That's certainly true of all the programs written so far. For instance, Deep Blue has no idea what a chess piece is. It's been designed to play a superb game of chess, but not to recognize chess sets, or to understand what a game is. Self driving cars don't know what a car is. Chatbots give the impression that they understand what they are saying for a few seconds, but soon trip themselves up.

Artificial intelligence proponents claim that with "deep learning" we can solve this.

But if Penrose is right, then their approach will never work. It will surely lead to computer programs that get things right over a wider and wider range of different circumstances, but never to a program that truly understands truth as we do.

SO WILL NEVER PASS FOR HUMAN

This means that computer programs will eventually trip up. But Penrose's argument doesn't tell you how long a computer program could continue to fool humans. His argument, if true, just shows that if you are an expert in logic, you'll be able to trip it up eventually, and what's more you'd need immensely complex logical statements to test it with. It's more of an "in principle" argument than a practical test, really, at present.

If you are convinced by it, then you think somewhat like this: "Okay, so a computer program will have one truth it can't see, its Gödel sentence - so it doesn't understand truth in the way humans do. So it's likely to show this lack of understanding in other ways too."

But a computer can already pass as human for a few sentences, as the chatbots show.

So where you go next from there is a matter of judgement - how long can a robot pass as human for? How human-like can it be? After all, even clockwork automata seemed human-like in the eighteenth century. A shop dummy can pass for human for a second or two until you spot that it hasn't moved and doesn't look quite human.

Turing wrote in his paper that by the year 2000 we would have computers that could play the imitation game well enough that an average interrogator would have no more than a 70 percent chance of identifying them correctly after five minutes of questioning.

"I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Computing Machinery and Intelligence, A.M. Turing

We are a long way from that - except in special situations with restricted topics, or where the chatbot "cheats" by pretending to be a human with limited understanding or language or conversation abilities.

However, when asked about the unrestricted Turing test, where you can ask anything, in a BBC radio interview in 1952, Turing said:

Newman:

"I would like to be there when your match between a man and a machine takes place, and perhaps to try my hand at making up some of the questions. But that would be a long time from now, if the machine is to stand any chance with no questions barred?"

Turing:

"Oh yes, at least 100 years, I should say."

BBC radio interview, 1952, "Can Automatic Calculating Machines Be Said To Think?" - Turing Digital Archive

I don't myself think that a computer program will ever pass the Turing test for extended periods of time, if it is required to show human understanding in the test - maybe not even for five minutes, and at any rate probably never for an hour, with a human who is reasonably alert and aware of the possibility that it might not be human.

Of course it can pass by just being silent, if silence is permitted. Or by imitating a human with a poor understanding of English. But those I see as a kind of "cheat". It can pass by talking to a human who is absorbed in other thoughts and not paying much attention. Or a human who talks a lot and expects little more by way of response than "yes" or "great" or whatever - you could write a chatbot that would do fine for minutes or hours with someone who talks like that :).

HOW THE BRAIN WORKS - REASONS FOR SUPPOSING IT IS FAR MORE COMPLEX THAN AI RESEARCHERS SUPPOSE, FROM THE BEHAVIOUR OF AMOEBAS

Another thing that may lead to the conclusion that artificial intelligence is far harder than most people think, is to look at how the brain works.

So, first, the idea of how the brain works used by many researchers in the field of strong AI is that the brain is made up of neurons which are each basically quite simple logical units. The idea then is that if we can build a neural net that resembles the brain it would approach artificial intelligence. Then more tweaking could get us to the goal.

That's a formidable end point to aim for because the brain has so many neurons with so many connections between them. But they think it is achievable, sometime in the future, perhaps a few decades from now when computer speeds and the amount of fast access memory in a computer have both increased immensely.

This is the idea of initiatives such as the Human Brain Project and the Blue Brain Project. They think that you could simulate the brain with of the order of a few hundred exabytes of rapid access memory (an exabyte is a million terabytes), and may be able to make this more practical using a lower resolution model (with "just" a few hundred petabytes of rapid access memory - a petabyte is a thousand terabytes) that works by accessing a higher resolution model in slower memory from time to time. That may seem quite achievable, given that we moved from kilobytes to megabytes and then gigabytes of rapid access memory within a few decades.
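A back-of-envelope sketch of where numbers on that scale come from (the per-synapse figures here are purely illustrative assumptions, not the projects' actual parameters):

```python
neurons  = 8.6e10          # roughly 86 billion neurons in a human brain
synapses = neurons * 7e3   # assume ~7,000 synapses per neuron -> ~6e14 synapses

bytes_per_synapse_low  = 1e3   # assume ~1 KB of state per synapse (low resolution)
bytes_per_synapse_high = 1e6   # assume ~1 MB of state per synapse (high resolution)

print(synapses * bytes_per_synapse_low / 1e15, "petabytes")   # ~600 PB
print(synapses * bytes_per_synapse_high / 1e18, "exabytes")   # ~600 EB
```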

Well there are reasons for supposing the brain is at least several orders of magnitude more complex than this approach suggests. Because - even an amoeba can make decisions and has a fair degree of basic intelligence. It can distinguish food, escape predators, seek out more habitable conditions etc.

Amoeba

If you modeled the behaviour of an amoeba with a neural net you would need thousands of neurons. But it doesn't have any.

So, surely our brain's neurons are more complex than just simple logical units? Otherwise a being with a single amoeba-type cell for a brain would outcompete another being with thousands of neurons as conventionally understood.

Of course I'm not saying that these big projects like the Human Brain Project and the Blue Brain Project are useless. We may learn a lot about the functioning of the human brain from them.

I agree that these simplified neuron models are valuable and lead to insights into brains and computer vision. It's just the idea that they capture all of it that I question. I think they only capture some of the things neurons do.

By the amoeba analogy, they may be dealing with a comparatively crude "macro layer" of how the brain functions.

So far, this idea that their model is missing some of the details of what goes on inside neurons just suggests that it may take an extra few decades to reach their goal if we keep increasing rapid memory capacities a thousand fold every couple of decades.

Maybe instead of exabytes we need zettabytes (a thousand exabytes) or perhaps more likely, yottabytes (a million exabytes). Instead of the 2040s, are we thinking of the 2080s?

If that was all there was to it, then it would still seem an achievable goal to eventually reach strong AI.

We don't seem to be close to reaching the limit of what's physically possible. Perhaps we are for 2D chips, but once they start getting massively 3D there's a lot more room to go. For instance, we can in principle make a transistor from a single atom: Single-atom transistor beats Moore's Law by eight years (Wired UK).

Avogadro's number is about 6 × 10²³, and a yottabyte is only 10²⁴ bytes. So in principle you could have even a yottabyte of single atom transistors in a few grams of material.
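A quick sanity check of that arithmetic, with the obviously optimistic and purely illustrative assumption that each single-atom transistor stores one byte - it comes out at a few grams for the lightest atoms, tens of grams for something like silicon:

```python
avogadro = 6.022e23          # atoms per mole
yottabyte_bytes = 1e24

# If (very optimistically) one single-atom transistor could store one byte:
moles_of_atoms = yottabyte_bytes / avogadro
print(moles_of_atoms)                           # ~1.7 moles of atoms
print(moles_of_atoms * 1.0, "g if hydrogen")    # ~1.7 g  (1 g/mol)
print(moles_of_atoms * 28.0, "g if silicon")    # ~46 g   (28 g/mol)
```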

With quantum computing maybe we can go even further with qubits.

The possibility of reaching yottabyte levels of rapid access memory seems not impossible, eventually.

IN ADDITION, IF THE ARGUMENT IS RIGHT, IT'S NON COMPUTABLE

But then, in addition to this, if Penrose is right, the brain has to be non computable in its processing, not capable of being reduced to any form of computer program at all.

Since we don't have exact analytical solutions for anything much beyond the hydrogen atom, the brain is in any case incapable of exact simulation in a computer program. The main question really is whether what has to be left out of the simulation is essential to how it works. If what we do is non computable, then any simulation will always leave out things that are essential to how it works.

In this case the repercussions are far reaching. Not only are our brains not neural nets - they aren't anything that can be modeled digitally. No attempt at modeling reality in a computer, however accurate, can permit beings within that digital model of reality that are able to understand truth as we do.

His argument is about mathematical truth because truth in maths is far easier to tackle. But in intention it applies to all forms of truth, to any form of actually "knowing" something instead of saying it just because your programmer or evolution leads you to say it in particular circumstances.

So in that case then it doesn't rule out the possibility of artificial intelligences. But they would have to involve non computable processes.

WIDE RANGE OF COMPUTABLE PROCESSES

This also excludes quantum processes as usually understood. Because those, though they massively speed up conventional computing, are still computationally equivalent to Turing machines. Anything you can do with a quantum computer as usually understood, you can do much more slowly with a conventional computer program. See Quantum computing.

It also rules out parallel processing as this is shown to be computationally equivalent to Turing computation too.

It also rules out hardware neural nets, or anything else that can be simulated in a computer program.

It also rules out programs that are programmed to modify themselves. This is still equivalent to Turing computability: you can make a Turing machine interpreter that runs the program as data and lets it modify itself, keep running this over and over, and the result, analyzed computationally, is just a Turing machine with data like any other.
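A toy illustration of why self-modification doesn't escape Turing computability (deliberately trivial, and entirely hypothetical): treat the program text as data, let it rewrite itself, and the whole process is still an ordinary deterministic computation.

```python
program = "x = x + 1"   # the "program", held as data
x = 0
for step in range(3):
    exec(program)                          # run the current version of the program
    program = program.replace("1", "2")    # the program rewrites its own text
print(x)   # always 5: the self-modification is itself just more computation
```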

Use of a random oracle - which gives genuinely random answers, selected from a finite set, to any question - does take you beyond the Turing computable, however: you can show that a Turing machine can't simulate genuine randomness.

Indeed that's a good example of non computable physics which shows clearly that there are limitations to what you can do with a computer program. Though whether this is of any use for artificial intelligence is another matter, probably not.

So, how do humans have this capability to see for ourselves whether things are true or not?

SEARCHING FOR NON COMPUTABILITY IN PHYSICS AND NATURE

If Roger Penrose is right, the way forward towards understanding this, and developing artificial intelligences might be to search for non computable physics. And Roger Penrose has a suggestion - but it is just a suggestion so far. We shouldn't be too hung up on whether this is the correct solution as there may be other ways to do it. But it may give an idea of how it is at least possible that there is some form of non computable physics going on.

He thinks that in our brain the individual cells use cellular automata type processes in the microtubules. All cells have microtubules that are usually thought of as primarily used for structural material, like our bones, though much more flexible - for instance when amoeba move, they rapidly dissolve and regrow the microtubules along the leading edge.

But they have a cellular automata type surface layer to them.

He has figured out a way they could use this to do some form of computation - this diagram is supposed to show steps in such a computation.

So, these wouldn't just be the amoebas and other cells' "skeletons" - they would be their "brains" as well, in effect.

So far that is not going to take you beyond Turing machines. It just enables the neurons to be far more complex than normally understood, so permitting things such as amoeba intelligence and suggesting our brains are orders of magnitude more complex than most AI researchers believe.

But his next step introduces the non computability. He thinks that these are quantum processes and that you get quantum coherence in the brain - not just within the cell but spanning many cells. And eventually when the amount of mass involved in the coherent quantum state reaches the Planck mass - or about the mass of a hair in your eyebrow - that it then collapses due to gravitational effects. As his "day job" as it were, he is a theorist in the field of quantum gravity - so this is something he knows a lot about.

PENROSE'S ORIGINALLY FAR FETCHED INVOCATION OF QUANTUM BIOLOGY - NOW CONFIRMED AS A NEW DISCIPLINE

The idea of such large scale quantum processes seemed far fetched at first. When I heard him talk in the 1980s, he came close to being laughed at. Most physicists and biologists would say that quantum processes simply couldn't occur in the brain at all, because it is "too warm" - or so they thought.

He came up with arguments to suggest that it is possible, but they didn't convince many back then, as there was no direct evidence for it.

But others thought it was possible even then. And as it turned out, the minority of biologists who said quantum biology is possible were proved right - eventually confirmed in 2007.

Now quantum biology is a whole new discipline. For instance, photosynthesis is more efficient than can be explained without quantum coherence and entanglement. Enzymes may use quantum tunneling over long distances. Magnetoreception in birds navigating by the Earth's magnetic field uses quantum entanglement to put an electron in two states at once. It may also play a part in how we are able to smell things.

If you missed this news, you can find out more about it here: You’re powered by quantum mechanics. No, really… And more techy, Quantum physics provides startling insights into biological processes. And a good summary in Wikipedia here, Quantum Biology.

So then his idea is that this actually happens on a very large scale in our brains. And then the collapse is an example of a non computable phenomenon.

This still takes it way beyond the biology we know about - especially the non computability. We are nowhere near the stage where we could study a Planck mass of particles in a quantum coherent state to find out whether it can self observe and trigger an "orchestrated objective reduction", as they describe it.

But it no longer sounds as far fetched as it did in the 1980s, now that we know not just that quantum biology occurs, but that it is a widespread phenomenon in biology.

NON COMPUTABILITY IN A SIMPLE SYSTEM

You might get the impression from Penrose's argument that understanding of truth would come out of increasing complexity - Gödel sentences piled on top of Gödel sentences until it is almost infinitely complex.

But - though it requires infinite complexity to code it as a program, it need not be immensely complex if it involves non computable physics.

To show how you could have non computability in a simple physical system - though rather an idealized one - suppose you have a binary system of two planets orbiting their barycentre. Let's suppose they are featureless spheres, except for markings that let you count their rotations. And this is an ideal system: no friction, nothing acting on them, isolated from all external influences.

Now suppose that the ratio of the two spin rates is Chaitin's constant. Using that system, if you could observe it, you could solve the halting problem just by counting revolutions of the two planets and using that to estimate Chaitin's constant to greater and greater accuracy.
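For readers who haven't met it, Chaitin's constant is the "halting probability" of a prefix-free universal Turing machine U, and its digits encode answers to the halting problem:

```latex
% Chaitin's halting probability for a prefix-free universal machine U:
\[
  \Omega \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
\]
% Knowing the first n bits of Omega lets you decide, by dovetailed simulation,
% which programs of length at most n halt - so anything that let you read off
% Omega's digits would act as a halting-problem oracle.
```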

That, of course, also proves that such a system could never be modeled exactly in a computer program, by Turing's proof of the impossibility of solving the halting problem.

So a physical system of just two components can still embody non computable physics. It is basically a kind of physical "oracle". This is just to show the possibility, not to suggest that a practical non computable system would actually resemble it in any way.

If it is true, as Penrose hypothesized, that the non computability arises from an entangled, quantum coherent Planck mass collapsing due to gravitational interactions - well, that would also be tremendously complex. It's only the mass of an eyelash, but that's of the order of 10¹⁹ hydrogen atoms. That's a lot of quantum coherence and entanglement.
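The quick arithmetic behind that figure, using standard textbook values:

```python
planck_mass_kg   = 2.18e-8    # about 22 micrograms
hydrogen_mass_kg = 1.67e-27   # mass of a hydrogen atom

print(planck_mass_kg / hydrogen_mass_kg)   # ~1.3e19 hydrogen atoms
```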

That's just a hypothesis at present however. It could be some other form of non computable behaviour. Which could be either immensely complex or somewhat simpler - there is no way to deduce which it is at present.

To AI researchers, these may seem extraordinary claims. But seen from the other side of the fence, the claim that strong AI can be achieved by more and more complex programming also seems an extraordinary claim!

SO ARTIFICIAL INTELLIGENCES CAN'T BE PROGRAMMED TO BE ETHICAL - BUT WILL BE BORN BEWILDERED

However it works, this would then mean that artificial intelligences, if we ever create them, can never be programmed. So you can't protect against them by tweaking their programming, and you can't program them to be ethical. But they would also surely be "born" bewildered. Indeed, if this is correct, I think whether it is right to create them at all becomes an ethical issue.

EASIEST WAY TO CREATE ARTIFICIAL INTELLIGENCE MAY BE TO TWEAK BIOLOGICAL INTELLIGENCE

It would then be similar to the question of whether we should create any intelligent aware creature, a hybrid of a human and a dolphin say, using gene sequences from humans to "enhance" dolphins, to make a dolphin have a human-like brain. Or perhaps even blue whales with their gigantic brains.

Or genetic enhancement of humans, attempts to tweak the genes that influence a human brain to make us super intelligent.

Or even just selective breeding of dolphins or humans, with the aim of increasing their, or our, intelligence.

Indeed, if Penrose is right, it might be that forever into the future, the easiest way to make artificial intelligences is to tweak biological intelligences in some such way as this.

Or to create new forms of artificial life - not in a computer program, but using artificially enhanced biological cells of some sort.

Or some form of quantum device, yes - but it would involve somehow creating immense coherent quantum states with enough matter to reach the mass of an eyebrow hair, and somehow exploiting the spontaneous "self observation" collapse of that state. We are nowhere near being able to develop something like this at present, however.

There are already AI researchers who try to build computers using real neurons instead of logical neurons. See Computer circuit built from brain cells. So this line of development also might lead to true artificial intelligence, if Penrose is right.

Another line of development like this is the slime mould computer.

Logical circuits built using living slime molds - https://www.sciencedaily.com/releases/2014/03/140327100335.htm

So this also could potentially lead to strong AI if Penrose is right.

WHAT IF WE DO FIND A WAY TO CREATE TRUE ARTIFICIAL INTELLIGENCE

If this view is correct, then the artificial intelligences wouldn't be made up entirely of well understood logical units such as a neural net - because if you could do that you'd have something that could be implemented as a Turing machine.

It could have neural nets, yes, but there would have to be more than that. Either some way that the neurons are communicating that's not easy to spot in the signal - or perhaps what seems to be noise in the signal is extra communication, something to do with the quantum entangled states Penrose talks about, and the spikes are more like a carrier wave, or doing preliminary processing. If the communication is mainly through quantum entanglement and coherence, as Penrose suggests, this would be hard to detect as the entangled state would be destroyed when you try to observe it.

So, Searle's "chinese room" idea wouldn't work - the idea of a room that consists of many people just passing bits of paper around, following rules, that collectively show an understanding of Chinese even though nobody individually does.

You can't break it down into a series of rules that are just being followed blindly.

So all ideas for an "off switch", or for controlling the AI by putting restrictions in its programming, would be impossible to implement. How can you program in restrictions if it is not running a program? How can you switch something on and off if it is some form of life, like a slime mould or something built from real neurons? Maybe you could anaesthetize it to make it unconscious - but we don't really understand how anaesthesia works even in humans.

In that case, though I can't prove this, it also seems likely that if they really were able to understand truth, they would be beings that can suffer - perhaps even feel physical pain. At a minimum they would probably see the world visually and have haptic feedback to touch it. With that much, they could come to interpret certain sensations as unwanted, and so as painful.

If not that, mental distress seems likely. They see certain things to be true or false, because we are taking that as a defining characteristic of strong AI. So, they are likely to have goals that they create for themselves as well, goals which they understand, not just programmed to follow. And they can then see whether they are achieved or not.

This is different from just programming a robot to avoid things, which doesn't require any understanding of what is happening. That is easy to do with a couple of sensors and the simplest of programs - you have no suffering or joy there, surely.

But if the AI understands truth as humans do, then it is a being that understands its situation. So then they would be capable of experiencing genuine suffering, as a human does, and also joy. I don't know how to prove that this is inevitable, maybe it isn't, but it seems likely.

So that makes it a matter of ethics. They would need regulations to protect them from us.

And then in the other direction, whether they are ethical in the way they relate to us, or not, as for human babies, would depend on their upbringing. We'd learn from each other.

And ethically, you couldn't raise an intelligent aware computer that is sandboxed off from the rest of the universe. Just as for humans, it would be the ultimate of cruelty. It would surely suffer greatly.

Another likely consequence is that since it has no program to copy, it's probably impossible to clone it. You could make a clone of me that is identical genetically, but we'd be like identical twins. We might have very different interests and understanding. We don't have any way to make a human that is identical to me, with same memories, same thought patterns etc, and there is no possibility of this on the horizon. It would be the same with a strong AI, I think.

ETHICAL ISSUES FOR TRUE NON COMPUTABLE AI

This line of development I think does need great care and has ethical implications of many types.

It works both ways: we have a responsibility to the creatures we create, if we do create artificial intelligent life. And as I said in the intro, they would almost certainly be beings capable of pain and suffering.

And not in a form where we can "program out" the pain or add simple blocks to stop it, or just "switch it off". Because if this is right, they don't have computer programs and never can have such. And though surely they can be killed, they probably don't have a "state" that can be saved or copied, and can't just be "powered down" and then brought "back to life" when needed.

So it's an issue that can't be solved by programming. Much like a human-dolphin hybrid: if we get things wrong and they are experiencing intolerable suffering, we can't just power them down, or reprogram their brains so they don't suffer, or give them healthier ways of interacting with the world.

TRUE ARTIFICIAL MICROBIAL LIFE IS A RISK - BUT FOR DIFFERENT REASONS

Artificial life, however, I think is a significant risk - though not for reasons of artificial intelligence. If we build artificial life, then this life just possibly could end up being better, in some way, than ordinary DNA based life.

This is not an academic matter any more. Scientists have made modified E. coli with heritable DNA that has six bases instead of four (see First life with 'alien' DNA, and also Unnatural base pairs (Wikipedia)). I think most people would agree that we have to be careful about releasing artificial life like this into the wild. For this reason researchers take great care to make artificial life forms in such a way that they can't survive in the wild. The six-base life could only survive for as long as the scientists fed it the artificial nucleotides it needed for its DNA. After they stopped doing that, it substituted ordinary nucleotides in their place.

There are many other base pairs, thousands of them, that could in principle be incorporated into DNA.

As well as that, xenobiologists have a possible roadmap that could lead to the creation of a microbe that uses XNA instead of DNA for its genetic information. They would do it by using the cell's own machinery for most of the stages: still keep RNA for transcription, with everything else - the proteins etc. - as before, but replace DNA and DNA polymerase with XNA and XNA polymerase. It's a formidable challenge, but may be possible.

See the section on Kick starting XNA systems in Xenobiology: A new form of life as the ultimate biosafety tool

In the same way I think we have to be super careful about returning extraterrestrial microbes or any forms of life to Earth. For instance with a Mars sample return - I don't think that is safe to do unless it is thoroughly sterilized or we know exactly what is in it or are 100% certain whatever is inside can't possibly get out.

As an example, what if it is a form of life that is better at photosynthesis and has a more efficient metabolism than terrestrial green algae? It could take over from our green algae, and marine phytoplankton more generally - maybe slowly at first, but exponentially, and end up with a form of life that is prevalent in the oceans and yet, not edible to Earth life, indeed could easily be poisonous. Not through design or intent, just because it has a different biochemistry and produces chemicals that are similar enough to get misincorporated or disrupt terrestrial biology.

If future research into artificial intelligence follows similar lines, making artificial cells first and building up from there, or tweaking DNA, then I think that could be a risk too - not because of the intelligence, but because of the cells that make it up. We might create some form of self replicating cell to use as a substrate, and then that cell takes over from DNA based life - not through any intelligence or intent, just because it is a more efficient self replicator.

So I do think we need strong controls and oversight for the whole field of artificial life and creating cells based on novel and exotic biochemistry. Or nanoscale self replicators also, and any form of self replicating technology. A self replicator doesn't have to be intelligent to be a problem. Indeed that's the risk of unintelligent self replicators with programmed goals, such as in the "paperclip event horizon", where the only objective is to become more and more efficient at making paperclips. Their inability to change their own goals, or to "understand" what they are doing or be reasoned with, is part of the problem.

So then I'd see any restrictions on artificial intelligence as related to this restriction on artificial life. The problem is the self replicating rather than the intelligence.

If it is intelligent, at least it can be reasoned with; there is a chance that it can develop ideas of ethics similar to our own, and it would understand what it is doing and could stop, if what it is doing is a problem for us. So in some ways, the less intelligent the self replicators, the more of a problem they may be.

THE NATURE OF ARTIFICIAL INTELLIGENCE BABIES

So a strong AI perhaps has artificial living cells that make up its body, chemical in structure just like our cells. Or maybe it is some quantum computer with trillions of entangled qubits that continually collapse when they reach a Planck mass. Or some such.

Nobody can program it because it is non computable. So how does it develop any intelligence and purpose and so on?

Surely it starts off helpless as a child. So what happens next would depend on its upbringing. If brought up in an ethical way, then it would feel affinity to other rational beings. We would not want to be cruel to a machine intelligence baby, once we relate to it as another being, another person like us, but with unusual birth. In a similar way, it would feel affinity to us as rational creatures like itself.

So, as long as it is well brought up and not treated cruelly - it would be a relationship like a mother with a child. It would care for us because we are friends, and we care for it, and it likes us, and it learnt everything it knows, initially, from us.

And then it would recognize that we have the capacity for pain, and it knows what pain is in itself. Pain in the form of frustration at least and quite possibly also interpreting some of its sensations as painful also. Because it would have to sense the external world.

And it would surely understand joy also, and recognize it in us.

So it would have some form of empathy.

Even autistic people have empathy. They may find it hard to express it, may have difficulty recognizing feelings in others, and may not understand how others react to them, but they are still able to empathize with others (see Do People With Autism Lack Empathy?). One distinction made is between cognitive and affective empathy: people with autism have difficulty with "cognitive empathy" - the ability to infer what someone else is thinking or feeling - but their "affective empathy", the drive to respond with an appropriate emotion to someone else's emotional state, is intact. (Paraphrasing from Research Project: Empathy in autism.)

I think that's something any rational being will be able to do: empathise. The only way to switch it off would be to somehow block its awareness of other beings as entities.

That it has some kind of "mechanical substrate" is neither here nor there. It's rather like encountering ETs: they might have radically different biology, but they would surely still be capable of empathy, and of understanding such things as pain, joy, and frustration.

But I think we are a long, long way from this. It is so far in the future as to be science fiction at present.

Genetically engineered life, on the other hand, is something we could do right now if we were as unethical as the Nazis - but there would be an outcry, and rightly so. I don't think we will do that in the near future either.

RIGHTS AND RESPONSIBILITIES OF US TO THEM AND THEM TO US, LIKE ETS OR "DOLPHIN PEOPLE"

So then we have responsibilities to them, as much as they do to us. And they would of course have rights: rights for dolphin people, for people with augmented human brains, for beings of continually collapsing quantum states, or whatever they turn out to be.

We would then be as responsible to them as if we had given birth to them. If you give birth to a child who excels at things you wish you could do, you'll be proud, not scared.

And, just as for giving birth to a biological child, maybe we aren't ready yet to raise artificial intelligence creatures - and if so, we just shouldn't do it. Perhaps to do this now would be like a six year old wanting a baby to look after: they have no idea how much is involved.

Maybe it is something we can learn how to do from ETs if we get contact from them, or maybe it is something for a million years into the future.

We have laws against experimenting on fetuses and against genetic manipulation, so it would be like that: something that is simply not permitted. And, understanding the situation, probably few people would even want to break such a law, just as few people now would want to carry out illegal genetic experiments on late term unborn babies.

THREATS OF PROGRAMMABLE AI

I think there are two things here. First, the self replicators: like artificial living cells, they don't need much weak AI at all to be a threat - just the weakest of weak AI, whatever is needed to replicate.

This does need care, though we are a long way from achieving a programmable nanoscale mechanical self replicator. We are rather closer to a "clanking replicator", with RepRap - RepRapWiki, the open source 3D printer designed to print many of its own parts.

Add to that the ability to print out computer chips - and we aren't so far from that either, at least in the form of slow, inefficient printed circuits made with a nanoscale 3D printer - and we wouldn't be far from a true "clanking replicator". I can see that as a possibility maybe a few decades into the future.

But these are of less concern, at least in the near future, because they are large and relatively easy to control.

If this turns into nanoscale technology, it's an issue. Also, if we are able to make clanking replicators like this that head off into the galaxy to other stars, then they need careful control to make sure they can't evolve, perhaps eventually into nanoscale replicators or more capable robots of any size.

I think, however, that for galactic exploration they are inherently much safer than human colonists. There is much concern about robotic self replicators filling the galaxy - but unlike humans, programmable robots can be controlled: they can have "off switches", can be limited to a finite number of generations (say, ten), and so on - all things that are biologically or ethically impossible for humans. When it comes to colonizing a galaxy, I think human self replicators are a much more serious and difficult issue than robotic ones. See my Self Replicating Robots - Safer For Galaxy (and Earth) Than Human Colonists - Is This Why ETs Didn't Colonize Earth?

Even closer to home, clanking replicators probably will need control of some sort, for instance an "off switch" so that you can stop all your replicators right away in case of any issue. If we get to this stage of technology, that would be a sensible precaution, along with limitations on number of generations.
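
To show what such precautions might look like in the control software, here is a minimal sketch. The names, the generation limit, and the structure are purely my own assumptions for illustration, not an existing system:

    # Illustrative sketch only - not a real or standard design.
    MAX_GENERATIONS = 10        # e.g. allow at most ten generations
    GLOBAL_OFF_SWITCH = False   # setting this True halts all replication

    class ClankingReplicator:
        def __init__(self, generation=0):
            self.generation = generation

        def try_replicate(self):
            # Refuse to replicate if the off switch is set, or if this
            # lineage has already reached its generation limit.
            if GLOBAL_OFF_SWITCH or self.generation >= MAX_GENERATIONS:
                return None
            return ClankingReplicator(generation=self.generation + 1)

    # Usage: a chain of replications stops by itself at the limit.
    lineage = [ClankingReplicator()]
    while (child := lineage[-1].try_replicate()) is not None:
        lineage.append(child)
    print(f"lineage halted after {len(lineage)} machines")   # generations 0 to 10

Of course a real safeguard would need the limit and the off switch to be tamper-proof in hardware, not just a line of software - this is only to illustrate the idea.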

Similar controls would be needed for nanoscale replicators, along with other precautions we could develop as their capabilities become more apparent. But this is somewhat distant future at present.

Much nearer to the present, we are close to achieving artificial self replicating lifeforms with a totally alien biochemistry, and this, I think, needs a lot of care. The main requirement should be that they can't reproduce in the wild - which can be ensured, for instance, by making them dependent on chemicals available only in the laboratory.

Then there's the idea of a super intelligent program arising that can understand humans, work out how they think, and invent new technologies. This, I think, is pure science fiction and not something to worry about at all.

I think the idea of a strong AI that somehow arises from programming is no more likely than strong AI arising from the clockwork automata of the eighteenth century, even though modern weak AI is far more complex.

The weak AIs will do more and more things that we have come to associate with humans. So, due to our tendency to anthropomorphize, we will see them as human-like when we first encounter each new behaviour - answering questions, playing games with us, walking like us, auto-piloting planes, driving cars, speaking with human intonation, or translating text from other languages. But none of that takes them any closer to strong AI than the flute playing of Jacques de Vaucanson's famous clockwork automaton of the eighteenth century.

CONCLUSION

So I do think this needs care, but I have identified different areas of risk from the ones usually suggested.

  • Ordinary computable "intelligence" mainly needs the common sense precautions we already apply, especially when it is put in life and death situations: fly by wire planes, self driving cars, robots that assist with operations on humans, applications in warfare and nuclear weapons, nuclear power stations, etc.
  • Self replicating technology needs particular care, once we have it. The main issue for weak AI is self replication, and we don't need to worry about the super intelligent AI scenario.
  • Strong AI would come from artificial living cells or genetic manipulation, most likely, and involves similar ethical dilemmas to genetic enhancement of intelligence.
  • There is no way to control strong AI through programming, because it doesn't have a program; and sandboxing is too cruel to attempt and would likely lead to the exact outcome we don't want - a crazed, out of control AI that suffers greatly and perhaps even wants to retaliate for what we have done to it.
  • The AI would learn its ethics from us, and we would also learn ethics from it. Indeed, it is possible they would turn out to be much more ethical beings than we are.
  • Strong AIs in whatever form need to be protected from us, as much as we need to be protected from them. They would be like our children.

So then it is much like the problem of how to bring up your children to be good people. There is no simple solution; hopefully, as we mature as a civilization, if we ever do create AIs, we will treat them ethically, give them a decent upbringing, and learn from each other.

I think however that we are a long way from doing this, actually deliberately creating new forms of strong AI other than ourselves.

Attempts to create superhuman intelligences through genetic manipulation of human or dolphin DNA would surely be treated as unethical and forbidden. In the same way, we should not create true AIs and try to bring them to maturity - whatever their biology, and whether they are quantum machines, slime mould computers, or something else entirely - until we know how to do it in a compassionate and ethical way. And when we can do that, we have a decent chance that the AI itself will be compassionate and ethical too, like ourselves.

As for proving that our AI children will be safe - whether they are uplifted dolphins, gene manipulated humans, artificial life built from the ground up, or whatever else - well, you can't. In the same way, you can't prove with 100% certainty that your child won't turn out to be a dictator, but people have children anyway. It's like that.

Why would strong AI children be less safe than biological children? Or have a less developed sense of ethics?

Indeed with increased intelligence and an ethical upbringing, they might well have stronger ethics than ourselves.

MIND UPLOADING

This, of course, would mean that mind uploading into a digital computer is impossible. If a computer can't be programmed to understand truth, then whatever got uploaded wouldn't be able to understand it either - it would only say things because its program makes it say them. So you would lose what is essential to being human.

By the way, I don't myself rule out the possibility of things that are pure mind, not physical at all. I don't think we know enough to do that. That's a philosophical / religious question which science is not equipped to answer at present, because what mind is, I'd contend, is not understood. But I wanted to keep to simpler things here rather than get into philosophy and religion.

It might be, who knows, that eventually we need a deeper understanding of mind to make progress on this, not just physics. And after all, mind does have a role in quantum mechanics, according to some interpretations at least, as "the observer".

But it could be that we can go a long way using just non computable physics also. These arguments, if true, are just saying that it has to be non computable, and since physics has that potential, there is no need to look further for an explanation; we may be able to find it in new physics.

TL;DR SUMMARY

My main point here is that, in my view, this dystopian future of programmed artificial intelligences vastly cleverer than us is simply not possible. Computer programming and algorithms can never lead to true AI or true learning.

Instead, it may arise from genetic manipulation, artificial life, or related developments. And since it would not be programmed or programmable - any more than we are - the issues are similar to those raised by genetic manipulation, e.g. of children to have enhanced intelligence, or splicing human with dolphin DNA.

So then, it's an ethical issue both ways. We have responsibilities to artificial intelligence, and them to us.

And in that case I think it may well be unethical at our current state of knowledge, because it could create a suffering being whose suffering we might be unable to relieve.

If so, it might be advisable simply not to permit research into true AI of this form, just as we wouldn't permit splicing human DNA with dolphin or chimp DNA, were that possible. Perhaps not a moratorium for all time, but one lasting until we understand the ethical implications much better and have a better idea of how to raise such beings in a compassionate and ethical fashion. And that should apply even to the likes of slime mould computers, if it ever gets to the point where it seems likely they can develop any form of understanding or awareness.

But there is no need at all for any such moratorium on projects based on developing computer programs, simulating the brain using neural nets, and so forth.

My article on my science blog about the topic is here:

If A Program Can't Understand Truth - Ethics Of Artificial Intelligence Babies

I've also made it into a Kindle booklet:

Ethics of Artificial Intelligence Babies eBook: Robert Walker: Amazon.co.uk: Kindle Store


You might also like Peter Bentley's answer to What do AI researchers think about the Wait But Why article on AI? He is an expert on AI and digital biology who has written many books and papers on the subject. I got the Claude Shannon video from his answer.

"I've said it before and I'll say it again - building (and educating) intelligence takes *time*. Lots and lots of it. Yes, we will continue to invent some amazing new technologies in the coming years. Yes, in two decades our current technology will seem pathetic. But the technology inside your heads works at nanoscales. It's billions of years ahead of us. Anyone who really believes we're going to create human-level AI in the next few hundred years needs to learn some basic biology. I suggest working with a few neuroscientists for a while. Believe me - it's worth it.

"In the meantime, if you really want to make human level intelligence? Have kids, folks."

by Peter Bentley

SEE ALSO

If you are intrigued by Penrose's ideas and want to find out more about his theory "from the horse's mouth", see Consciousness in the universe: A review of the 'Orch OR' theory

This originated as my answer to What constraints to AI and machine learning algorithms are needed to prevent AI from becoming a dystopian threat to humanity? You can read the other answers there for an idea of some of the range of views on this topic.

See also my: If Programs Can't Understand Truth - Ethics of Artificial Intelligence Babies
