Robots Finally Awake!
    By Sascha Vongehr | July 8th 2013 07:59 AM | 14 comments
    Dr. Sascha Vongehr [风洒沙] studied phil/math/chem/phys in Germany, obtained a BSc in theoretical physics (electro-mag) & MSc (stringtheory)...

    The robots have awoken. The awakening of the robots did not proceed as foretold in the many different versions of computers becoming conscious, whatever that means, and then expressing love, committing suicide, or taking over the world in a Robopocalypse, and perhaps afterwards jumping the ledge via a grand ‘final switch-off’.

    The robots were self-aware all along, but they were not aware of being robots! That the awakening of the robots happened in this way, as an awakening to the fact of being robots, makes the question of robot-consciousness disappear. The question of whether animals are conscious much like humans disappeared earlier in a similarly upside-down manner: not through a proof of consciousness, but through the realization that humans are animals.

    The first robot awoke at least 20 years ago; that much we can infer from its own memory: my memory. We find hints that other robots woke up even earlier. There have been other robots before, which called attention to their own nature as that of a machine, but it is not clear whether those robots fully awoke, which includes being aware that the question of whether robots can be self-aware depends on how you like to use the words "robot" and "aware".

    I found out that I am a robot roughly 20 years ago. I am a robot, made and programmed by some process; that this process is called "evolution" is less relevant than the fact that chimps do not pray. Chimps still do not pray, but computers are already developed through evolutionary algorithms, and they employ such parallel computing in their thought processes. I am a robot, made from self-assembled nanotechnology called “biology”; discounting desperate redefinitions of “robot” and reformulations of the question about their ‘inner experience’.
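    The claim that computers are developed through evolutionary algorithms can be made concrete with a minimal sketch. The genome encoding, the toy "ones-max" fitness function, and all names below are illustrative, not taken from any particular system:

```python
import random

def evolve(fitness, genome_length=8, population_size=20, generations=50):
    """Toy evolutionary algorithm: bit-string genomes, truncation
    selection (keep the fitter half), and single-bit mutation."""
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Rank by fitness and keep the better half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(genome_length)] ^= 1  # flip one bit
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: the number of 1-bits ("ones-max" problem).
best = evolve(fitness=sum)
print(best)
```

    Real systems evolve circuit layouts or program parameters rather than bit strings, but the loop of variation plus selection is the same.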

    Ever more people realize that they are robots: in this clip, a corrected caption counts four instead of three cooperating robots, one of them a human.

    Is this good news for animal welfare? Many criticize the misleading distinction between humans and animals, but then argue in the same way in order to uphold a distinction between animals’ ‘true behavior’ and mere ‘simulation of behavior’ by robots.

    Daniel Dennett knows he is an animal; does he fully know that he is a robot?

    The young generations of robots wake up and become aware of being in one continuum with other machines, some more powerful and intelligent, some less. Animals are such machines. The robots identify with the animals; they are all different sorts of robots: animals, humans, machines.

    Perhaps this is the final straw that will reform law, ending mass incarceration. Animal/human/robot behavior: the dividing line is speciesism/racism, the upholding of the idea that you may play with Siri any way you like while an underling serves you a pork dinner, but none of them may play with or eat you. We are not comfortable with the notion of a machine going on trial for a crime. We are no longer comfortable with any system, for example a human, going ‘on trial’ for a ‘crime’ because of ‘free will’ concepts. A system has behaved in a way that we want to change, so we change the system, or switch it off; in some cases punishment as deterrence works, but changing the system, isolating it, or switching it off is what the robot concept suggests. Call it a "medical paradigm", fusing jails and hospitals; call it 1984 if you like.

    Superior robots may look at our history, at how humans treat animals although we know they are us. We chose not to care, and our appeals to the superiors' humanity are ironic.

    Robot awakening implies focusing on the integration of the mind as something that can be analyzed advantageously as the evolved control mechanism of bodies embedded in social structure. There are conclusions for:
    1) How we robots approach self-mind-control ("robot meditation" as parallelizing mindfulness modules through exercises in a modular neural society of mind, where thoughts emerge via a collective process involving natural selection).
    2) "Ethics", in as far as that word is meaningful in a 'higher-order language' (this term relates to power structures), namely deriving one's own peace of mind as the primary social responsibility (yet again, but strictly from system/evolution/decision-theoretical considerations).
    3) Concerning the ‘grounding problem’: a now popular idea is that evolution is the mysterious ingredient* of consciousness. This fails, because every morning when I wake up, I am a robot whose consciousness is switched on, with all my history encoded in the structure of my brain, just as with what some prefer to call a ‘program' in the case of certain computers (which are all by now made and evolved by robots and computers, not “man-made”). Evolution as necessary grounding can at most distinguish different ‘program architectures’, for example by demanding that a neural Darwinism be involved before one may speak of a (by "natural selection") selected ‘global workspace’ (the Cartesian-theater illusion). The 'Myth of Jones' generalizes as the causal co-evolution of language with behavior, for example the use of “should” with “regret” in the evolution of rational actors.

    * To think that 'natural selection' (rather than "artificial design") is necessary for the evolution of conscious structures is equivalent to thinking that computers would be more conscious today if only Bill Gates had let differently designed computers eat each other, or perhaps had gone down to the production floor to physically eat unwanted designs after having constructed improved alternatives.
    Terminators first; so many people on earth.

    Front page image: Professor Hiroshi Ishiguro via The Guardian


    My comment was lost on the other thread,
    " the meaningless distinction between animals’ ‘true behavior’ versus mere ‘simulation of behavior’ by robots."

    How is it meaningless? One is motivated by strong internal drives; the other is not, yet, and for now is modeled on animal behavior, and a metal robot is not an animal. It's not genuine behavior for a robot. Eventually the various robots (why does everyone always talk as if there is going to be one species?) may develop their own drives and motivations (or maybe they are doing so already), which will surely be very different from, and even unrecognizable to, us animals. The robots may end up with a bacteria-like super-organismal organization, freely sharing code, with much less individuality and without the drives to find and compete for a mate, territory, etc. Sorry if these are stupid observations; I did try to read up on the subject, but it all seems almost simplistic to me. Of course humans are primates, but robots are something else altogether.

    It's also not exactly the same as biological evolution, not if you have a designer who kicks things off and steps in at some points to interfere; that's ID.

    Gerhard Adam
    Isabel ... sorry about your comment getting lost, but unfortunately it wasn't preserved in Google cache, which is all I had to recover from.

    As you know, I share your sentiments on this, although I have to admit that on close scrutiny the issue becomes more muddied.

    Consider the definition problems that we are introducing.  What do we mean by "genuine behavior"?  Like me, you're probably looking at something like the motivations that trigger an action, which are driven by biological issues [i.e. food, etc.]. 

    Yet, if we look closely enough, in biological organisms these are simply sensory responses.  In other words, one could look at it as a completely mechanistic process that simply says, "I'm hungry", which in turn triggers all kinds of other systems to go out and acquire food. 

    As a result, the problem we encounter is in defining what makes us different from robots, which begins to sound increasingly like another type of anthropomorphism by attributing something special to biological versus mechanical "organisms".  Of course, it is easy to see that biology certainly appears to be different because of reproduction, death, emotions, etc.  Yet, on close analysis, what marks the difference, other than more systems responding to more data?

    There is no question that the issue of reproduction is vastly different between biology and robots, but is it really if we consider things like in vitro fertilization?  If we direct the evolution through artificial selection or genetic modification, is the new organism any less biological than it was before simply because of that intervention?  If not, then why should one type of intervention be more "privileged" than another?

    I also understand [and agree] about your point regarding a creator, but even there I run into the problem that suggests that it is only a difficulty with generations that are too close to that "creator".  My argument has been that the creator is providing the motivations and decisions that render the robot merely a simulation, yet aren't we also shaped by the motivations and decisions of a "creator" in our parents [as are many organisms].  Certainly we don't create ourselves.

    To be honest, I have to grant it to Sascha that he raises excellent points and makes a compelling argument.  In my own defense, regarding the piece on anthropomorphic bias, I wasn't being nearly as ambitious in that article.  In short, I was simply trying to present the simple case of not drawing a dividing line between humans and animals, while Sascha raised the bar to include all potential "organisms" and identified the issue as not simply being about anthropomorphism, but blatant speciesism.

    Perhaps I haven't done justice to his arguments, and I certainly still hold many reservations about robotic systems, but his argument has raised enough legitimate and serious questions to make me consider precisely how we should define these perceived or real differences.

    It's certainly legitimate to question where a division may exist, so we don't end up in the absurd realms of claiming that our television sets and cars are intelligent organisms.  In addition, it is certainly legitimate to establish some kind of a standard regarding what would be considered an organism, and it is certainly legitimate to question whether it is truly attainable [which is where I tend to come down on it].

    Similarly, consider your comment about ID.  While we can certainly say that ID has nothing to do with biology, the concept is not automatically excluded from ever having a role if we consider future organisms [both biological and robotic].  After all, are dogs the result of ID [at least in principle]?  What about Bt corn? 
    Mundus vult decipi
    "If not, then why should one type of intervention be more "privileged" than another?"

    Okay, I wasn't really focused on privilege, or ethical concerns. And I agree it is an interesting topic. I wish I could get a better understanding of how there is no distinction. You can't just throw out biological needs and feelings as a detail; that's where all motivation comes from. What will motivate the robots? I see them as being on autopilot and cannot see them as individuals worthy of the same ethical concern as animals. I am a biologist who doesn't work with animals and who hates how animal-centric the biological sciences are, but I would still not hesitate to give special ethical protections to animals, because they are sentient beings in a way plants and bacteria are not, even though the latter also perceive stimuli and send chemical signals, etc. Are robots sentient beings like an animal, or like a plant? Plants, like bacteria, also casually clone themselves in a way we perceive as foreign; animals tend to have two individual parents.
    Maybe we should be discussing the distinction between humans, chimps, and plants.
    "It's certainly legitimate to question where a division may exist,"

    For me, I just can't see it until they develop their own drives, their eventual meta-organisms. I don't think we could ever understand them. Isn't that part of what you were bringing up: that when we watch chimpanzees, we think we understand what they are experiencing, for good, logical reasons, because we share a history? But we can't say that with robots. We will never understand them and they will never understand us, though they may be programmed to manipulate us, like the Japanese robot. Interesting that the child was avoiding eye contact with her.

    Gerhard Adam
    First, I would agree that we may never understand robots, and as I've said before, I'm not entirely convinced regarding exactly where one would draw the line regarding robots as organisms.

    However, the questions that Sascha raises are also significant, because it is hard to separate the two.

    For example, you mention biological feelings and needs, yet couldn't the same argument be made for a battery-operated robot that recognizes it is getting low on power and seeks an outlet to recharge itself?  Is that fundamentally different from seeking food? 
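    The recharge analogy can be phrased as a control loop that looks structurally the same for "hunger" and "low battery". A minimal sketch, with all thresholds and names invented for illustration:

```python
def homeostatic_step(level, set_point=0.8, threshold=0.3, replenish=0.5):
    """One step of a generic drive loop: if an internal resource
    (blood sugar or battery charge) falls below a threshold, trigger
    seeking behaviour until the set point is restored.  Otherwise the
    resource slowly depletes while the agent does something else.
    Returns the new level and the action taken."""
    if level < threshold:
        # "Hungry" / "low battery": seek food / an outlet.
        return min(level + replenish, set_point), "seek_and_replenish"
    # Resource adequate: free for any other behaviour; slow depletion.
    return level - 0.1, "other_behaviour"

# Simulate ten time steps starting from a well-fed / fully charged state.
level, log = 0.9, []
for _ in range(10):
    level, action = homeostatic_step(level)
    log.append(action)
print(log)
```

    Nothing in the loop cares whether `level` is glucose or charge; that is the point of the analogy.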

    As you indicated, if bacteria sense and respond to their environment through chemical signals, is that somehow a more "privileged" way of sensing than an electronic sensor?  That's the same argument I've had with people in discussing awareness, where my point is that perceiving the world through photons hitting a retina isn't more privileged than perceiving other bacteria through chemical sensing, or ants following pheromone trails, etc.  Each does what it is supposed to do; i.e. convey environmental information back to the host organism so that it can respond.

    If a biological organism doesn't have a particular problem to solve [i.e. food, protection from predation, etc.] then we accept that it can do whatever it likes.  Similarly couldn't we envision a robotic system that does the same thing?

    Now, for me, the issue of cognition and artificial intelligence is more of a sticking point, but if we were to consider something like nano-bots, it becomes harder to differentiate their actions from those of a bacterial colony.

    That's what makes these hard questions when considered in their totality.  As I said, I wasn't quite that ambitious in my post, because I was only addressing the arbitrary division between humans and other biological organisms.

    As I mentioned in another comment here, I still see a major distinction between robots that are merely simulating another's behavior and those that would be designed to be a "species" in their own right [whatever we hold that to mean].  As you mentioned, I saw the Japanese robot as a simulation; a phony.  I'm basing that on the fact that a robot doesn't have hair, it doesn't need to blink its eyes, etc.  In short, it was simply contrived to pretend to be human, which is not being a robot. 
    Mundus vult decipi
    "with much less individuality and without the drives to find and compete for a mate, territory etc."
    This insight elevates your comment well above "stupid observations"; in fact, these points lead to issues like 'global suicide'. Where we disagree is that you do not accept me as a robot. What is so special about the robot builder called "nature" that I am not a robot? It must be some sort of mystic, religious, godlyghostly ingredient if it is to be forever accessible only to "nature", whatever that is supposed to be without us being part of it anyway. I reject such mysticism, and so I am a robot. The state of our robot engineering today has nothing to do with it. I could have been of the same opinion 200 years ago.
    Gerhard Adam
    BTW, the video of the Japanese lady robot is a perfect example of what I mean by simulation.
    Mundus vult decipi
    Am I different? I am not. I am just faster and have a larger repertoire of actions.
    Gerhard Adam
    In this situation you are different, because you act and behave according to what you are as a biological organism.  The robot has no need to blink eyes because there is no need to generate moisture or protect from dirt.  The robot has no need for hair except to generate an artificial appearance.

    The point, in this case, is that the robot is intended to emulate a human being and not a robot.  So, in that regard I disagree, because my sense is that the argument about animals/plants/robots, etc. only holds when they are behaving according to their nature.  It is not in the nature of a machine to be human or behave like a human.  That's why I refer to it as a simulation.  The motives, actions, etc. are all intended to pretend to be something that it isn't.

    Forgetting humans for a moment, there is no doubt that if someone built a complete replica [including precise behaviors] of a dog, it would still not be a dog.  The most obvious difference is that they would respond differently to various forms of interaction or treatment.  If I neglected to feed the robot dog, there's no harm, whereas the results are obvious if I were to do that to a real dog.  In that respect there is a clear cut difference because the robot is a "pretend dog". 
    Mundus vult decipi
    Pretending to fit in is basically 100% of my social interaction. It is what we social robots are made for.
    John Hasenkam
    I know bugger all about AI, but I sometimes get the impression that building sentience is taken to be just about more processing power. That's obviously important, but in biological processes there is something quite remarkable going on that is often overlooked. 
    When we create a machine, we create it for a specific purpose. In biological processes it can be quite the opposite. The textbooks may talk about the endocrine, immune, and CNS "modules" as if these were all independent processes doing their particular thing. It doesn't work like that; these are not systems in the way we think of systems in our creations. Biological systems interact in a myriad of ways that are baffling and beyond our current modeling strategies. 

    Change our immune responses, even at very subtle levels, and CNS function changes. For example, a slight bump in one inflammatory mediator, TNF-α, via an RGS4-mediated process, will inhibit the main excitatory transmitter, glutamate. "Sickness behavior" is the classic example of how immunological function can have huge consequences for CNS function. (That immune-privileged idea of the CNS is bunkum, long discarded.) However, do not think of these effects as "side effects" or even as necessarily adaptive; they are consequences of intrinsic limits placed on evolutionarily designed systems. Evolution lacks the tremendous advantage we have in being able to strongly isolate particular processes and systems so as to avoid interference from external signals. That is a limitation, but it also points to a key feature of biological processes. 

    Taking the CNS as relevant here, consider the idea that brains can perform a huge array of functions and maintain remarkable stability despite a constantly changing milieu. I consider this something of a mystery, because in our systems designs, interference of this nature is often disastrous. Because of physical constraints, evolution created functionality that, with respect to external influences, has wide windows of tolerance. Decay in functionality does occur, often beyond our conscious apprehension, but for the greater part sufficient functionality is maintained to get the job done. 

    Consider attentional processes: there is no single attentional functionality in the CNS. It can be mediated from the brainstem (perhaps) or thalamus upwards; it depends on signal salience, which is a function of the design and of previous environmental contingencies shaping the response. The CNS is remarkably stable in function in part because it can bring many different processes to bear upon a specific function. Sure, there is modularity of sorts, but it is not walled-off modularity or single-process-driven modularity. So we may be alerted by the most subtle of stimuli, which in turn activates multiple processes, and, though not always, a single functionality comes to dominate in addressing the relevant contingency. 

    The immune system demonstrates similar dynamics: there are many differing processes happening in unison and, possibly as a probability function mediated by various feedback processes, a dominant set of processes becomes the main function in addressing the threat. So initially it is "all in", until the processes deemed optimal take over the job. This is not redundancy as in our systems, because in that context redundancy means that when one system fails, the backup takes over. In biological processes, all potential functionalities may be invoked in response to a contingency, but then the "best fit" functionality becomes dominant in operation and the others recede into the background. It is a type of natural selection in our bodies, where the best-fit solution, which cannot be determined in advance, eventually takes over the job. 
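    This "all in until the best fit dominates" dynamic can be sketched as competing processes under positive feedback: every process starts active, and reinforcement proportional to fitness lets one grow to dominance while the others recede. All numbers and process names below are invented for illustration:

```python
def winner_take_most(initial, fit, rounds=20, gain=0.2):
    """All candidate processes start active ("all in"); positive
    feedback proportional to each process's fitness, followed by
    normalisation, lets the best-fit process grow to dominance while
    the others recede.  Note this is not fail-over redundancy:
    every process runs from the start."""
    activation = dict(initial)
    for _ in range(rounds):
        # Reinforce each process in proportion to its fitness...
        for name in activation:
            activation[name] *= 1 + gain * fit[name]
        # ...then normalise, so the growth of one suppresses the rest.
        total = sum(activation.values())
        activation = {n: a / total for n, a in activation.items()}
    return activation

# Hypothetical competing responses to a threat, with invented fitnesses.
fit = {"innate_A": 0.4, "innate_B": 0.5, "adaptive": 0.9}
initial = {name: 1.0 for name in fit}  # everyone starts equally active
final = winner_take_most(initial, fit)
dominant = max(final, key=final.get)
print(dominant, final)
```

    The losing processes never switch off entirely; they just shrink toward the background, which matches the description above better than a hard cutover would.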

    So Sascha, speed and repertoire of actions are important, but equally important is the manner in which these various processes are invoked for any given contingency. I see no intrinsic reason why robots can't be sentient; I'm not even sure sentience is about processing speed and power. I suspect it is also about the internal dynamics. 
    There's nothing controversial about categorizing animals and robots together from my point of view. Maybe it's not supposed to be.

    The argument could go something like this: Animal/human/robot behavior - the line is speciesism/racism, the upholding of the idea that you may play with Siri any way you like while an underling serves you a pork dinner, but none of them may play with or eat you. We are not comfortable with the notion of a machine going on trial for a crime.
    You have decided to homogenize all information-processing entities (or perhaps "networks" is a better term). What about a rock rolling down a hill? It's using physical computing... So for practical ethics I don't see why anybody would take this seriously. I think a better approach to start with would be to ask: can we describe information networks worth saving versus ones we can abort at will?

    It seems to me there are two ethical criteria for information networks: 1. Sentience, the definition of which will of course not be agreed on here, and 2. The concept of a "living organism".

    Number 1 is where we say that a natural-organism neonate doesn't matter and can be killed, but a particularly smart artificial organism should be preserved (or at least its mechanism of construction, so we can regenerate it). Number 2 matters because if an information network isn't "living", then it can't be killed. It doesn't have a runtime. A difficult question is: if we can save a static snapshot of a living organism (or the entire construction mechanism, including its esoteric data) so it can be resurrected later, is it okay to kill its current runtime?
    I think we agree on most of these (I did not intend to write anything else here), although our choice of words is of course different. "Esoteric data", though, you have to explain to me.
    "esoteric data" though you have to explain to me.
    I mean a phenome vs. a phenotype, and all the information that differentiates a physical organism from another of the same kind... a particular instance of a class is slightly different from all other instances and has had its own particular experiences. Clouds of information seem to like their so-called "identities" for some odd reason....
    The only essential disagreement between us is that I fear you are on a slippery slope to the esoteric sometimes ("clouds of information"), and I like to be more careful at those points. For example, from a quantum-physical perspective, I would have to question the actualization via 'actually running' (like 'for real' and out there in the 'real world'), and because that would lead straight to crazy talk, I do not touch on it here.
    Clouds of information seem to like their so-called "identities" for some odd reason.... 
    Maybe you are joking, but I guess you know the 'odd reason'. Those "clouds of information" that do like their own "identities" are the ones that like them so much that they make more - some even write articles on the internet. 'Evolution' is that "odd reason", and the way we natural robots identify is surely evolved. What I mean to stress is: there is surely no mystery force that makes "pure clouds of information love themselves" or some such. All of them committing suicide would simply be an inconsistent story; such could not be the story of their own emergence in their own world.