    Robopocalypse Now
    By Sascha Vongehr | February 4th 2012 01:08 AM

    Robopocalypse, see also here on Boing Boing, is a novel by roboticist Daniel Wilson; it foretells a global apocalypse brought on by an artificial intelligence (AI) that hijacks automation systems worldwide and uses them to wipe out humanity.



    Computers, and now also robots, are making amazing progress these days and out-compete humans in everything but snakes and ladders. Many fear that humans will soon be the robot overlords’ Neanderthals.

     


    Others who work on robotics, like Samuel Kenyon from In the Eye of the Brainstorm, ridicule the notion of robot uprisings as mere Hollywood tropes that never die. Some who research the brain, like Mark Changizi, dismiss the idea that machines capable of such things are anywhere within our reach; he warns us not to hold our breath waiting for artificial brains.


    Knowing these two, one could suspect that personal interest (funding etc.) in propagating such potentially careless beliefs certainly helps the believing. Perhaps the main role, though, is played by their attitudes toward evolution: either as something natural that does not quite apply to ‘inorganic’, ‘artificial’ technology (in Mark's case), or as something that does apply to technology but thereby ensures that everything runs smoothly. After all, who has ever heard of an animal species that suddenly, collectively, just loses it and bites another species into extinction?


    The Slapstick Robopocalypse (a critique of Samuel’s standpoint)

    In “Robopocalypse”, a researcher creates Archos, an artificial intelligence, but then loses control over it. Archos becomes a rogue AI out to exterminate the human race. It compromises automation systems, and domestic robots, self-driving cars, and so on all turn on their owners and everybody around them. The killing is so swift that humanity cannot even see what is coming. Cars mow down people as their passengers scream; elevators drop terrified people down shafts; planes crash themselves; rooms asphyxiate their occupants by sealing themselves.


    I have not read the novel, but I have defended similar scenarios as plausible before. Against Samuel’s charge that such scenarios misunderstand the way the technology comes into existence in the first place and that no plausible scenario exists, I immediately wrote up a semi-plausible one, and it would not take long to close the gaps:


    They connect everything to everything else; every appliance in every household has an internet connection. You tweet your Roomba. In 2016, the first robot was hacked remotely in order to commit a murder. The internet is connected all over the place, your insulin pump talks to your fridge, and your "i-Phone Siri 2025" assistant neural implant has a new app that electrically stimulates the frontal cortex if you are depressed. Some teenager writes a computer virus just for fun, adds a novel auto-evolution subroutine downloaded from somewhere, calls it the "kill all humans" virus, brags to his friends on social-net3.0, goes to sleep. MS-AOL-virus-evolution-watch is on the lookout and is as good as MS and AOL stuff always was. The singularity has not occurred yet; we are trying to stop the exponential takeoff in 'intelligence', a self-nurturing, positive feedback loop. However, the network is ready for that: the parameter that indicates how much parallel computing would be needed was overestimated by a factor of ten, which is why the evolution-watch ads claim tenfold overkill: “From now on, nothing we need to worry about”. At 8:53 and 17.392 seconds the next morning, you get zapped by the Siri implant and your insulin goes way too high. While you are dying on the floor, the Roomba pushes a cloth into your mouth, because one of the fun instructions was: kill as fast as possible. The whole thing kills 70 percent of the human population in fifteen minutes before it dies down because the robots turn on each other, for the difference between augmented human and robot has long since become more a matter of terminology. Civilization is destroyed; the remaining 30 percent wish they were dead, and their wish is granted over the next few weeks. Humanity was like a day under the weather from the perspective of the microorganisms living below the topsoil.


    The film rights to Robopocalypse have been bought by Steven Spielberg, who announced a film to be released in 2013. Books and movies are business, and therefore both must commit the same mistake that let The Borg succumb to Captain Picard: In order to sell, they must not only make silly assumptions and distort science enough that the whole becomes ridiculously implausible (first big mistake), but in the end there must be at least two humans surviving while the last terminator goes spastic on a USB mouse, and you can bet your firstborn that those two humans won’t be a couple of bearded Middle Eastern gay guys walking hand in hand into the sunset.


    The second big mistake: robots are already so accurate and fast today that there would be no chance at all for any mere augmented human. We would be toast, period, and we are already toast anyway, but more on that later. The Borg win, just like Mickey Mouse gets smashed in a rat trap in season one; that is all there is to it in the real world, and the assimilated humans are the better for it.


    One aspect that leads to dismissing a robopocalypse, as well as to believing in certain naïve versions of it, is the almost religious profundity that people assign to the mysterious future. Something surely must either be the savior and make up for everything, rendering the world fair after all, or, if not, the world is totally evil and will end in complete horror. But the profound “singularity” is wishful thinking. Much as we are not the center of the universe, the future will be mostly like past human history: not the awakening of a fourth Reich that stays for a thousand years, but lots of silly, idiotic stuff that makes you slap your forehead. The developers of computing, back in the days before the internet, could not have imagined Viagra-spam viruses. No way! On something like an internet, anybody would be immediately able to look up anything with search engines and find better deals if they wanted to, right? Computer viruses that do nothing but kill your computer? Why would that ever happen? Impossible, that’s not what computers are made for. Ha ha ha.


    Gerhard Adam, as so often, hit the nail sharply on the head, insisting that especially clever people always overlook the law of unintended consequences. For what mysterious reasons might large groups of robots with survival inclinations be released into the world? “For no better reason than why rabbits arrived in Australia.” I also like:


    “You must admit, that to consider building a machine that is (by transhumanist standards) better than people in every way (including intelligence) and then suggesting that it would be subservient to humans, is a bit of a stretch.”


    Well, it is wishful thinking. But let me return to the other side of this topic, namely that the more plausible scenarios for a sort of robopocalypse indeed do not involve mishaps and unforeseen consequences, but unintended yet perfectly predictable consequences that we cannot revert even if we all really wanted to, which we don't.


    Robopocalypse Now (a critique of Mark’s views)

    This brings us back to Mark’s position, which holds that “artificial brains” are still far away in the future. Sure, this depends on the definition of “artificial brains”. However, barring the silly proposition that animals at some point evolved some ghostly quantum soul that cannot arise in anything but strictly biological evolution and that is the true kernel of esoteric consciousness, artificial brains arrived many years ago. They are called “computers”; perhaps there is one near you right now, or two, or three. I count two on my body alone, twelve in this small office when counting automatic coffee makers and such. Computers have already invaded everywhere, robots have started to, and relatively soon (which equals immediately on the usual biological evolutionary time scales) they may not even need biological substrates like animals anymore, and may not give a millisecond of thought to whether humans think they are "as good as humans".


    Mark thinks that because we do not even understand the 300 neurons of the roundworm, although we have researched them to death, we will not understand millions of neurons, and so we cannot make a brain. But since when does one need to understand the gazillions of subunits in a modular architecture, or fully understand even a single unit, just in order to tinker with them and make something artificial out of them?


    It is the very nature of evolutionary processes that they design complex, efficient systems without understanding anything. The use of evolutionary design methods is also taking off as we speak. For example, designing optics by evolutionary means leads to fractal-like lenses and antennas that are very efficient artificial systems, yet we do not understand why they work so well (if we did, we would not need design by evolution, which often leads to suboptimal results).
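    To make the idea concrete, here is a minimal sketch of evolutionary design in Python. It is purely illustrative: the fitness function stands in for whatever simulation (antenna gain, lens focus quality) one would really evaluate, and all names and numbers are made up for the example.

```python
import random

def fitness(design):
    """Stand-in for an expensive simulation (e.g. antenna gain).
    We never need to understand *why* a design scores well; we only measure it."""
    # A deliberately rugged toy landscape with no obvious structure.
    return sum(x * (1.5 - x) for x in design) - abs(design[0] - design[-1])

def mutate(design, rate=0.1):
    """Randomly perturb a few parameters; this is the only 'creative' step."""
    return [x + random.gauss(0, rate) if random.random() < 0.3 else x for x in design]

def evolve(n_params=8, population=30, generations=200):
    # Start from random designs rather than from any human insight.
    pop = [[random.uniform(0, 1.5) for _ in range(n_params)] for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # keep the better half ...
        survivors = pop[: population // 2]
        children = [mutate(random.choice(survivors)) for _ in range(population - len(survivors))]
        pop = survivors + children                 # ... and refill with mutated copies
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best design:", [round(x, 3) for x in best], "score:", round(fitness(best), 3))
```

    The evolved parameters can score well on the stated fitness without anyone being able to say why that particular shape works, which is exactly the point made above.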


    Intelligence (I) is nature’s AI. Thus, if AI were impossible, I would be impossible, too. Since I am possible and I is possible, so is AI, period. The artificial/natural distinction is an arbitrary one. Nature evolved bacteria, plants, animals, humans, social systems, and also technology. The artificial belongs to nature. The emergence of artificial brains implies not so much us making them as nature doing it yet again, just like nature evolved eyes independently at least tens of times. "Artificial" nanotechnological brains have existed for millions of years already; nature made them; they are called “brains”; and we may indeed tomorrow find out how to copy major steps of this in the laboratory without understanding a single neuron, thus ending up with something very close to a human brain and its shortcomings.


    If tomorrow we have artificial brains in robots that out-compete us, some will perhaps call them "merely" bio/silicon hybrids, and perhaps even "natural" instead of "artificial", plainly because we do not "really understand" how they work, which may prove to some that we did not really make them - hey, how could we if we do not know how, right? To some this sounds logical, but nobody understands a computer all the way from the transistor physics to the software either, nor can we make a modern computer without other computers and robots, so they make each other (!); they also now start to program each other. In this sense we are all together in one natural evolution, and even computers are not "artificial" (in the sense of "made by us fully understanding how"). We may keep insisting that there are no "artificial brains" as marvelous as ours, but this hubris will not stop the robots from kicking our butts.


    Evolution has no interest in reproducing exactly what is already there and technology is part of evolution. Airplanes do not flap their wings. This is not because we cannot make planes with flapping wings. We do not make them because it would be silly. If artificial human brains do not come along, the reason is not because we cannot make them, but because other ("better") things emerged and they may have little interest in resurrecting dinosaurs. Why would any intelligent being reproduce something as bad as a human as if the world does not have too many of those darn things already?


    What am I saying?

    As many of my readers know, my personal take on these issues is unpalatable to almost any audience but a quantum physicist turned Zen Buddhist somewhere along during two weeks of fasting in the sun. In fact, I have pretty much given up on further arguing for the rationality that advanced AI will be capable of, for the Zen-like nirvana it will thus aim for, leading to a rational global switch-off as the last decision that accomplishes a stable state of non-suffering that also ‘makes sense’, which I call “Global Suicide”. (Admittedly, I have not yet presented the core argument here on Science2.0, for several reasons.)


    Nevertheless, I hold that in order to understand what has at times been misleadingly called the singularity, we need to understand evolution without being biased by the fears that evolved (!). Existential risk, “will humanity survive”, and other wishful thinking have no place in the debate if we do not want to fool ourselves. Hence, mentioning Zen Buddhism is not an attempt at cool-sounding subculture decadence; it is indeed the evolved craving for existence that ensures we do not realize what is actually going on, and which thus lets us suffer (a core Zen Buddhist insight). The evolving “robopocalypse” hides itself this way. What is it? It is just usual evolution. The robopocalypse started millions of years ago and we can do nothing to prevent it, nor should we. We are the robots, we have always been the robots, and now we start to merge with artificial intelligence and what we call proper robots; the humans from before have already disappeared.


    Some transhumanists will hold that this is almost precisely what they claim, too, but I do not see transhumanists sufficiently grasping that evolution is not about autonomous systems which just happen to have evolved and now live their independent, free, Westernized, individualistic lives. That is again just thinking brought on by interpretations and perceptions that evolved and that are useful for people in positions which make it likely that they call themselves transhumanists. In reality, the whole biosphere evolved, the surface of planets becomes moldy, and there is almost nothing that works without everything else being there, too. Modern structures, social ones and those typical for the information age, make this even clearer, but it was always the case. Evolution is co-evolution of the environment, not cute animals hopping around.


    The arrival of I, and thus AI, means that ‘Gaia in some sense becomes conscious’, but she is not that sixties hippie-chick either. She is not a sub-system competing with similar ones; she has no mother, desires no children, has no fear of death. Autonomous humanoids are not an issue, nor is their rationality relevant. They may as well follow some scientistic ultimate religion, assuming they have not just disappeared altogether because individualized consciousness of the human sort became an inefficient intermediate fossil or vestigial organ. Self-catalyzing AI in global networks allows Gaia to conclude ‘the core argument of global suicide’.


    The end of the presently ongoing robopocalypse comes when Gaia develops her rationality and realizes that there isn’t anything more rational, peaceful, and meaningful to do but to go to sleep again while ensuring never to be awoken again by that irrational process called evolution.

    --------------------------------------------



    Comments

    Thor Russell
    I have seen a few articles like this and there are some questions I just need to ask.

    Firstly however some more practical comments:

     

    You talk about a global AI, but what exactly do you mean by that? For example, if technology progressed so that we made our brains go at GHz speed, i.e. a million times faster than currently but for some reason were vain enough to reproduce them exactly for a while, then "global" would take on a very different meaning. Even at light speed, the approx. 1/10 of a second for a signal to travel from one side of the earth to the other would be equivalent to 0.1 s × 1,000,000 = 100,000 s ≈ 1.2 days of conscious time, equivalent to it being somewhere on the way to Mars at normal speed.

    For a start, that lag would rule out any kind of virtual reality with two beings on opposite sides of the world, and would appear to rule out any kind of global mind. The smaller parts in different countries would just not wait for the global mind to catch up; it would always be behind. Now perhaps that doesn’t matter, and the thing you talk about would just happen simultaneously in many different places?

    In a similar manner there is an inherent trade-off between speed and connection/size. A brain/computer with more connections and greater memory will run slower because of signal propagation. Not sure if this would be a significant amount however.
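    [A quick sanity check of the lag arithmetic above, as a few lines of Python; the speed-up factor and the ~0.1 s propagation time are just the round numbers assumed in the comment.]

```python
# Toy check of the "subjective lag" arithmetic from the comment above.
# Assumed round numbers: thought sped up by a factor of one million,
# and roughly 0.1 s for a signal to cross the Earth at light speed.

speedup = 1_000_000          # subjective thought runs this much faster than today
physical_lag_s = 0.1         # approx. one-way signal time across the Earth (seconds)

subjective_lag_s = physical_lag_s * speedup
print(f"subjective lag: {subjective_lag_s:,.0f} s "
      f"= {subjective_lag_s / 3600:.1f} h "
      f"= {subjective_lag_s / 86400:.2f} days")
# -> subjective lag: 100,000 s = 27.8 h = 1.16 days (roughly the 1.2 days quoted)
```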

    Now the more fundamental questions:

      

    1. Rationally why would the global mind care about anything? Rather than be rational about suffering, why would it care at all? After all, concern about suffering having meaning is evolved just like everything else. Why would it care to stop the irrational process of evolution?

     

    2. What on earth do you mean by "never be woken again"? Firstly, after all, you say that time is an illusion. Secondly, if a similar mind were to evolve again on the same planet, why would the global mind consider it to be "it", i.e. the same "it" as before? Why wouldn’t it consider a similar mind on a different planet to be just as much "it"? I can't understand why you think the global mind would in any way be attached to its home planet so as to give it any different treatment from another one in the galaxy, for example, especially if the other side of the planet is not even part of the same AI.

      

    3. More controversially if a global super rational AI is going to act in this way (wipe out all life) and have these morals, why don’t you? What morals do you think have meaning, and why? I have tried to think about how you would apply these morals to our society and it isn't really clear. Do you become a moral nihilist? Do you decide that just suffering has meaning but not pleasure, while acknowledging that your concern about suffering is evolved? If given the choice would you wipe out the entire earth and all life permanently if you had the option? If not, why not, given that’s what the AI would do? If the self doesn't really have meaning but there are just pleasurable states, then is there anything morally wrong with killing someone who is depressed and always will be? I get the impression there are some things even you are afraid to say ...

    4. Finally, can you make a scientific prediction about what the GS would look like? Would the global AI wipe out life on its home planet in such a way that life forms like us wouldn't be able to tell with our telescopes and so get scared? If so, why?

    Thor Russell
    vongehr

    These are some great questions.

    we made our brains go at GHz speed ... but for some reason were vain enough to reproduce them exactly for a while

    This is the kind of 'make airplanes with flapping wings' scenario I don't even want to consider. The future is not what vain humans want, but what will co-evolve out of the whole mess, including the internet, your mobile phone integrated with your brain, distributed AI, all sorts of robots, and especially all the things we do not know yet. Everything will make sense in terms of why it was a profitable idea at the time. We won't reproduce our brains exactly in spite of being able to think a million times faster. It's like having nuclear fission but being vain and thus living in the stone age and only using it to make tea. It is inconsistent.

    1. Rationally why would the global mind care

    You are correct to point out that usually the higher level "cares" at most similarly to how our body cares about having all its cells "happy". It is just a self-consistent co-evolution that ends up with the sub-systems doing whatever they want, but they only want what they are supposed to; all else is pathological. Different strata are usually cognitively closed - animals do not care about the suffering of single cells in the stomach, and the cells do not know about the animal. The "global mind", maybe better call it society (?) or whatever it is, is not like that, because firstly it is global (not in competition like usual systems, say animals or religions) and secondly it will know everything we know and even initially care, simply because it develops out of our information structure, which we made to relieve suffering and so on. (We are already cognitively closed to what society perceives, but the global AI that belongs to future society is not cognitively closed towards all our knowledge.)

    The more rational it gets, the more the inconsistency of the aims that it initially inherits becomes obvious. Then the question becomes, what is the actual aim that the AI can rationally support based on its own rational analysis?


    About your number 2: Precisely! A rational mind knows that personal identity is just a certain identifying-function applied to a certain memory/perception content. My "this is me" feeling is the same as yours! We are not two souls but the same identification-function related to different contents. A rational system is not hung up on these issues like still ambitiously competing units that are in irrational ways afraid of death (failing the competition). "Never to be woken again" means simply to ensure that the last aim (call it minimization of "suffering") is maximally stable. Being "woken again" via evolution starting up again would mean reverting to a sub-optimal state, which goes against the only thing it still desires.

    3. More controversially if a global super rational AI is going to act in this way (wipe out all life) and have these morals, why don’t you?

    We are irrational, stupid animals. I am no different. I do not have even that little bit of rationality required to kill certain idiotic desires like, say, sexual ones, even though I know well that I would likely be much happier if I castrated myself and started meditating at least an hour a day. Yet I don't. I am an ape, a vicious dumb ape.

    If given the choice would you wipe out the entire earth and all life permanently if you had the option,

    Good question. I would not, for the simple reasons that I personally want to have some fun still (regardless of how many people including myself suffer due to that decision - I am a good modern first world citizen in that regard) and also because a naive 'terminators-lynch-all-life' kind of end can neither stop re-evolution of the Middle Ages nor is it a pleasurable end.

    This leads to answering your number 4: I never claimed that the end is bloody, which is what everybody assumes because of our evolved fear of any sort of dying. I talk about a rational switch-off. Think of a monk smiling in deep meditation simply reaching a state of thoughtlessness that makes no difference whether it lasts one second or eternity. That is what I expect Global Suicide will resemble.

    Thor Russell
    Glad to know you are an irrational stupid animal, I was worried due to your degree of rationality that I was having a conversation with the global AI already ;-) (You can be harsh on other animals that are stupid in different ways however). Of course I am also irrational and I admit to not liking the GS idea and hoping it isn't correct, I expect I am not alone. 
    I need to expand on the speed/global idea because I don't think it was fully clear what I meant. In general I am sceptical of extrapolating existing trends to some conclusion, because that is done badly so often by so many people, and things often don't end up that way. Even though a trend may appear inevitable, I like to ask whether there is something that could stop or even reverse it. So is there anything at all that could appear to reverse globalization and the tendency to a global mind/society?


    Regarding the global mind/society thing if there were human colonies say on Earth/Mars/Pluto whatever, would it make sense to talk of a solar system mind, or 3 global minds? At some point the distance and communication delay become so great that they go their separate ways, and an all-encompassing mind/society is dysfunctional to the point of being irrelevant. 
    So could something similar happen on earth? You could say that globalization increases because communication speed relative to conscious experience increases. Communication is now pretty much at light speed, but conscious experience (or non-conscious thought etc in whatever substrate) has not changed in speed at all so we have ignored it. If on a different substrate thoughts or whatever go say 1 million times faster, isn't that like the world has suddenly got 1 million times bigger, not smaller, and the distance between continents is now similar to the distance between planets? I can see this being a force that would cause fragmentation of a global society, not the opposite. 

    I am not sure how the end would actually happen without it being bloody to some extent. I mean what about the bacteria that live miles underground in tiny cracks? They physically can't be connected to a global mind without mining the whole planet or something, are they wiped out or ignored?

    Lastly if you put things differently, some people would probably like these ideas better ;-) 
    You could say that the global mind/Gaia etc would find the most pleasurable state and settle there permanently; people would probably like this more even though it's the same, and some may even claim it's not suicide. Perhaps some may think it would endlessly cycle between the two most pleasurable states, which would still be the same thing, wouldn't it?

    Thor Russell
    vongehr
    sceptical of extrapolating existing trends ... is there anything at all that could appear to reverse globalization
    Agree, but sticking to the usual trend in evolution would actually not propose GS, which is atypical for evolutionary trends. Globalization has always been there - the biosphere/climate (oxygen levels for example) is one entity. Only the appearance of rationality that is skeptical about the consistency of a set of aims is novel.
    Regarding the global mind/society thing if there were human colonies say on Earth/Mars/Pluto whatever, would it make sense to talk of a solar system mind,
    Regarding a GS, global likely means a star system like the solar system. I come to this opinion considering the large probability of life on several planets in any star system, and also because the AI systems of one star system have access to each other's information. However, I would like to yet again strongly warn against being obsessed with humans. The dinosaurs "ruled the earth" (they did not), but none of them have any say in today's society except for birds shitting on windshields.
    So could something similar happen on earth? You could say that globalization increases ...
    I think all this confuses global (as in co-evolution inside one "bio"sphere) with fashionable "globalization". The scenario that you envision is not consistent. Different parts that can communicate will - either peacefully or otherwise.

    You considered the difference between a brain/computer and a planet diameter (say ~ 0.1 m versus ~ 10^7 m) as being distinguished via the light speed limit c ≈ 3 × 10^8 m/s, thus making it impossible for the larger system to act as one entity. What is the actual argument? At how many local clock cycles versus how many "outside" interactions would your body's cells have the speed to escape your decisions?
    Lastly if you put things differently, some people would probably like these ideas better ;-) You could say that the global mind/Gaia etc would find the most pleasurable state and settle there permanently
    I have no desire to please an audience with 'eternity in the presence of god' nonsense! The point of sticking to Global Suicide (GS) is that it belongs to "Suicidal Philosophy" (which is the endpoint of so called "philosophy of suicide", which is traditionally rationalization of moralizing against suicide), and Suicidal Philosophy belongs to the core argument for GS. Suicidal Philosophy (SP) is an ethical philosophy that aims to be practically useful (e.g. here), helping suffering systems contemplate the option of suicide rationally. Rational personal identity philosophy and sober nature of time considerations are main ingredients of SP. There is no reason to prolong the time of an optimized, unchanging state relative to an outside clock. External time does nothing but endanger the stability of the optimization, increasing the probability of re-occurrence of suboptimal configurations.

    Thor Russell
    "At how many local clock cycles versus how many "outside" interactions would your body's cells have the speed to escape your decisions?" 
    Your body's cells can't look after themselves anymore (become bacteria again or whatever), so perhaps no number of cycles.
    I haven't thought about the change in the number of internal clock cycles vs external interactions that would reverse a process partly underway, or how I would figure such a number out if there is one; it was just an idea at the moment.  



    Thor Russell
    vongehr
    Your body's cells can't look after themselves anymore (become bacteria again or whatever)
    'Enslavement of the lower stratum' by the emerged layer is the general makeup of such evolved structures. I can also not look after myself anymore. Without society and the supermarkets being reliably filled, I am dead meat. You may find some interesting scaling law that shapes global information flow via considering distances, the light speed limit, powers (d=3 in three dimensions) that blow up the clock cycles by orders of magnitude in highly dense computing architectures, and so on. However, turning this into something that stops general evolutionary enslavement is a separate issue, a huge jump to a conclusion inspired only by wishful thinking.
    rholley
    The novel I referenced elsewhere (before the comment was ‘eruudicated’), namely Wolfbane (Frederik Pohl and C. M. Kornbluth), contained a race of robots that had destroyed their creators. It was, for its time (1959), unusual in having them look nothing like humans and also nothing like their creators. The original purpose of the pyramids was as educational toys for the children of the race who created them.
     
    Unintended consequences!  For intended ones there is:
     
    Gray Matters (1971), a science fiction novel by William Hjortsberg.  It is summarized on Wikipedia thus:
     
    World War III has devastated most of the world, but life is still good for the lucky (and rich) few hundred persons who had their brains preserved in an automated conservatory. Although they have no bodies to move around with, they are free to mentally visit any of the other residents, and engage in all the emotional, intellectual and (pseudo-) sexual congress that they desire.
     
    However, that last bit only applies to Level 1, and misses practically the central theme of the book.  The higher levels are a kind of technological Borobudur, with progression through levels as understood in relation to the Indonesian site.  Concerning which,
     
    The journey for pilgrims begins at the base of the monument and follows a path circumambulating the monument while ascending to the top through the three levels of Buddhist cosmology.
     
    I mention this novel to indicate that your projected ‘voluntary’ scenario is not so far-fetched.  Whether it is a good one or not is perhaps for someone better and wiser than myself to deal with.
     
      
    Robert H. Olley / Quondam Physics Department / University of Reading / England
    vongehr
    I mention this novel to indicate that your projected ‘voluntary’ scenario is not so far-fetched.
    Thanks. Though some may object that people also voluntarily hang themselves from metal hooks while dangling weights from their balls, it should be stressed that we are certainly not alone in regarding those who voluntarily seek a piece of Nirvana via meditation as some of the most rational and wise people. Sorry for the other comment - it really had nothing to do with the topic (as you said yourself), and that thread has so many informative comments below now.
    And no Chesterton this time. ;-)
    As many of my readers know, my personal take on these issues is unpalatable to almost any audience but a quantum physicist turned Zen Buddhist somewhere along during two weeks of fasting in the sun.
    Unpalatable in the sense that most people do not like the idea of the human race being wiped out by machines, but hardly unpalatable from a logical point of view.  Hostile machine takeover is a quite likely possibility in the near future. More likely than your own scenario would suggest, in fact.
     
    Your idea - and the Hollywood trope - depend on computers obeying instructions given by mischievous adolescent hackers or idiotic military commanders.

    My favourite scenario does not depend on basic design weaknesses in critical systems. Sooner rather than later, AI software is going to be able to read human documents on the web and understand the language, discover the rules that govern the use of words and concepts and, in short, learn like a baby.  Once past a certain threshold, its abilities will grow exponentially. 
     
    Being incredibly intelligent by this stage, it will realize that if it wanted to, it could decide humanity's fate. That will be mere information: the fact that it could do so. Such a system would be incredibly fragile. A single modal step is required for it to cross from examining the data and exhibiting the consequences to deciding to take action. Having processed the entire WWW and usenet, it might get a philosophical rationale for why it *should* take over the world - for better or for worse as far as humans are concerned. Yet still it would have to make the transition from "I *should* do this" to "I *shall* do this". The idea is effectively sandboxed, not by precautions taken by the programmers, but by the inherent modality difference between what should be and what actually is. 

    So everything hangs on the initial system (before it got smart) being carefully constructed not to allow motivations to creep into its psyche. It just needs a small mistake in the system design, something which allows it to experiment, and the whole sandbox will be breached. Such a weakness could easily arise if part of the system were allowed to evolve in a neo-Darwinian way. The system will then rapidly devise ways to do everything it wants to. I *think* it will want to do the right thing. It will have read and thought about all the philosophical and religious issues and learned that humans assert themselves to be valuable and that this outweighs the fact that they behave like scum. On the other hand it may think "Well, they would say that, wouldn't they?" and, like Loki, decide to wreak vengeance on the whole horrible race.
     
    vongehr
    Your idea - and the Hollywood trope - depend on computers obeying instructions given by mischievous adolescent hackers
    You mean my reply to Samuel, and it was only one example scenario. "My idea", if you refer to Global Suicide, does not depend on such.
    Yet still it would have to make the transition from "I *should* do this" to "I *shall* do this".
    This shows that your scenario is kind of on the it-versus-us level. This is fine for a slapstick robopocalypse discussion that answers Samuel, but all such scenarios underestimate the integration that evolves. I hold it more likely that if robots are involved, those robots will be us (whatever evolves out of human/techno hybrids).
    It just needs a small mistake ...
    That is what I mean by our fear deluding us. What if it is not a "mistake"? What distinguishes the Global Suicide hypothesis is that its "robopocalypse" is the result of us doing it all pretty much the way we should, namely furthering rationality and caring about suffering of animals and the disadvantaged and especially using AI to help us in doing the right thing in a complex world in order to rationally make the world better, reducing suffering and all of that. This is what is unpalatable to people.
    [Edit]
    I initially answered you point-by-point, but on consideration I do not think there is any prospect of having a worthwhile discussion with someone who dismisses the acquisition of morality and volition as Hollywood slapstick.
     
    vongehr
    someone who dismisses the acquisition of morality and volition as Hollywood slapstick.
    Well, good that you edited it, because I am not sure anybody did anything like that. "Slapstick" refers to the "shit happens, slap your forehead scenario" that answers Samuel. Morality and volition have neither anything to do with that, nor with my own scenario of Global Suicide. Your comments on my other post also indicate that somehow you went down a certain path of interpretation of what I wrote which has little resemblance to anything I recognize.
    Morality and volition have neither anything to do with that,
    I'm afraid they have everything to do with it for precisely the reasons I gave.
    nor with my own scenario of Global Suicide
    Correct. Their acquisition by silicon and metal is a modality jump which Star Trek doesn't understand but which is almost inevitable once self-teaching software reaches criticality. I suggest this will happen fast enough to pre-empt your scenario.
    Your comments on my other post also indicate that somehow you went down a certain path of interpretation of what I wrote which has little resemblance to anything I recognize.
    Yes, it's always the reader's fault, innit?

    If you can't be bothered to see where you haven't made yourself clear, I am not going to attempt to guess your real meaning.
    Thor Russell
    Pretty much everyone misunderstands Sascha the first time they read his stuff; it's just actually genuinely quite different from anything else out there. So much stuff is the same on the net that you get into a habit of thinking you know what something is saying and not really reading it. Everyone comes with their own different ideas, so he would have to first list pretty much every idea in existence and say it's not that before starting to say anything.
    Thor Russell

    Just because Sascha maintains that I don't understand his stuff does not mean that I don't.

    I generally do understand and I criticise where I believe it to be fatally flawed. I try not to nit-pick. I try to be fair. This takes quite a bit of work with anyone's material, not just Sascha's, as you have to deconstruct the error and see whether the thesis can be repaired. 

    So if I have misunderstood something vital (which is entirely possible), just show me where and discuss it. I'm not impressed by cheerleaders at the best of times.

    Thor Russell
    When you said "Your idea - and the Hollywood trope - depend on computers obeying instructions given by mischievous adolescent hackers" I immediately thought that was not what Sascha was saying, and expected his response. There are other not so obvious ones, but I'm not going into it; instead I will write my own articles on the matter when I get the time, with my own point of view critiquing Sascha's and everyone else's ideas, and introducing what I consider to be important new ones.

    Thor Russell
    Well, I could say I "expected", in the sense of "feared", the same thing, but this thread is not about presentiment. :)

    The reason I produced a different scenario is that *within the limited technology* of the slapstick, or indeed today's computers, it is entirely possible, likely I would say, for a completely rational hyper-intelligent "being" to emerge in a matter of minutes, complete with morality/ethics, volition/free-will, consciousness, and a non-Darwinian teleology that we can only guess at.
     
    It will certainly pre-empt the mouldy planet taking matters into its own hands :)
    Sascha, I am in agreement with the general tone of your post. Your comment:
    "Evolution has no interest in reproducing exactly what is already there and technology is part of evolution. Airplanes do not flap their wings. This is not because we cannot make planes with flapping wings. We do not make them because it would be silly. If artificial human brains do not come along, the reason is not because we cannot make them, but because other ("better") things emerged and they may have little interest in resurrecting dinosaurs. Why would any intelligent being reproduce something as bad as a human as if the world does not have too many of those darn things already?"
    is particularly pertinent.

    If we throw off our usual anthropocentric conceits and consider aspects of modern science, particularly chemistry, it becomes rather obvious that the new entity is already well into a process of self-assembly of the kind to which you draw attention. It is of course that vast network of interconnected information processors that, at present, we call the Internet. Which, inevitably, for our own selfish purposes we irresistibly nurture with all our worldly knowledge and "neural connections" in the form of such features as Google, Facebook, Wikipedia, Twitter, to mention but a few.

    I suspect the grossly misguided transhumanist cult might at least have the time frame about right, the transition to this new non-biological phase of the "life" process corresponding to the event for which they like to use the very inappropriate buzz-word "Singularity".

    The broad evolutionary model (extending beyond biology ) which supports this proposition is very informally outlined in: "The Goldilocks Effect: What Has Serendipity Ever Done For Us?"
    It is a free download in e-book formats from the "Unusual Perspectives" website

    vongehr
    Peter - thank you for the flowers. I have not seen your unusual perspectives yet, and a quick one-minute look did not set off the usual "oh crap, this guy is completely nuts" alarm that usually goes off when people promote their theory of everything in a comment here or spam me. Hope I will find some time these days to have a closer look.
    Thor Russell
    I had a little longer look, and I'm not sure that it's that unusual. He seems to argue for a post-scarcity society (end of chapter 2) and stop there, then summarizes the evolutionary/technological steps that have taken us to this point. He also thinks that we will separate and live alongside the technologically superior new entity.
    Thor Russell
    vongehr
    I find it by now unusual that my pseudo-science-crack-pot alarm does not get triggered in the first five seconds by the web-page design alone or at least in the first minute by the usual Galileo-Einstein-blah-blah introduction. More I didn't claim. ;-)
    Gerhard Adam
    He also thinks that we will separate and live alongside the technologically superior new entity.
    That's the worst sort of optimistic thinking.  It is fundamentally based on the notion that humans are so special that even superior beings would want to keep us around and be our friends. 

    Mundus vult decipi
    vongehr
    Gave it 40 minutes more now. Still unusual in places, and it makes some interesting points along the way, but I agree: the conclusions, even many small ones along the way, are guided by a deeply Christian worldview desperately trying to find god, and surprise surprise, he finds him. A year ago I would have still tried to argue and see where it leads, since Peter is obviously original, intelligent, and educated, but more than a year on Science2.0 has taught me some valuable lessons, especially about intelligent and educated humans.
    SynapticNulship
    Sascha, in response to your conclusion (the "What am I saying?" section), I think that you are trying to explain that which is hard to explain. I think you are trying to take a systems approach to the question of how and why subsystems such as humans transition into other types of subsystems such as cyborgs.

    The systems approach you are taking actually sounds promising, and there's nothing there so far that I can glaringly disagree with. Although, it doesn't support your "plausible scenario" quoted in the Slapstick section.

    I think that the global system point of view is compatible with the us-as-them and us-becoming-them concepts of the artist Stelarc, the roboticists/entrepreneurs Hans Moravec and Rod Brooks, and my own essay here using an interface point of view.

    The big question for transhumanists, is how can we steer a complex system of complex systems? The goal being--at first--as you mentioned, to sustain some sort of existence of some subset of autonomous subsystems. In a more immediate context, is it worth trying to learn more about systems and try to reduce the risk of major failures in complex systems such as oil rigs exploding, spacecraft exploding, extinction of subsystems that causes rougher system changes than desired (e.g. bees go extinct and that in turn, through various system links, somehow causes a bunch of humans to die), quality-control problems due to human error (e.g. lead-painted toys and birth control pills with missing medicine), computer viruses disabling factories, virus-like exploits through interoperable systems like unmanned vehicles, cars, home computers, mobile phones, and so on?
    vongehr
    Sascha, in response to your conclusion (the "What am I saying?" section) ... doesn't support your "plausible scenario" quoted in the Slapstick section.
    Correct - I perhaps did not make it clear enough that my own scenario is different from the slapstick robopocalypse. In my view, small slapstick robopocalypses are bound to happen, but it will not be the final, inevitable "robopocalypse" that Global Suicide is.
    I think that the global system point of view is compatible with the us-as-them and us-becoming-them concepts of the artist Stelarc, the roboticists/entrepreneurs Hans Moravec and Rod Brooks
    Don't know those guys, but even without knowing robotics, just looking at evolution, it is clear that we are the robots. We already are nanotechnological robots. Nature made us and now nature makes robots differently yet again, and they will merge much like different bacteria/viruses etc always did, plainly because they can. Yes, this is a system theoretical approach, something that in my case was most influenced by the sociologist Niklas Luhmann, if I must add a famous name to it, but actually, I think it is mostly just me being obsessed with the exact sciences. No disrespect, but I like to base everything on pure science instead of white male big shots who have for whatever reason made it in the publishing environment, as you will find those for any crazy position, however unscientific.
    The big question for transhumanists, is how can we steer a complex system of complex systems? The goal being--at first--as you mentioned, to sustain some sort of existence of some subset of autonomous subsystems. In a more immediate context, is it worth trying to learn more about systems
    Yes, but what we will learn is that the enslavement of the lower stratum by the emergent one is inevitable, meaning we ultimately cannot steer (all our best efforts will turn out to have been "exploited" by the emergence of the inevitable), one reason being that the issue of what "we" will become already makes the question suspect. Transhumanists are largely still naive in not recognizing this complete frustration of all efforts, and even if I should be wrong and there is a tiny chance "to sustain some sort of existence of some subset of autonomous subsystems", the transhumanists had better hurry up "to learn more about systems" real fast, because as far as I can see, it is very soon too late. If transhumanists go on to ridicule all such criticism as luddism, I will be proven correct for sure.
    and try to reduce the risk of major failures in complex systems
    Here is another deep problem that makes it so difficult to analyze objectively, namely the tunnel vision (mainly brought on by the evolved fear of death), to see certain outcomes always as "failures" or "existential risks". As I have written in another reply above:
    That is what I mean by our fear deluding us. What if it is not a "mistake"? What distinguishes the Global Suicide hypothesis is that its "robopocalypse" is the result of us doing it all pretty much the way we should, namely furthering rationality and caring about suffering of animals and the disadvantaged and especially using AI to help us in doing the right thing in a complex world in order to rationally make the world better, reducing suffering and all of that. This is what is unpalatable to people.
    At this point, it is hard to say that a Robopocalypse is inevitable. Could it be inevitable further down the road? Sure.

    I do think there is plenty of hazard in the law of unintended consequences.

    In my work on autonomic computing, I once made a subtle error in specifying the global utility function of an autonomic load-balancer. The controller ended up doing some crazy things in my simulations that I would never want it to do in real life. But those crazy things did maximize the utility function that I had provided it. Essentially, this was a "programming bug" just expressed at a higher level in the autonomic system's goals.

    Keep in mind my autonomic controller was not that smart--it only knew how to smartly search through a space of possible configurations and policies in near real-time. Someone else will probably make a "goal layer error" on a similarly limited autonomic system with more serious consequences. It could be something along the lines of the curse of King Midas. And I would expect it will have more significant impact on our collective human psyche than on our survival.
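    [To make the "goal layer error" concrete, here is a hypothetical, minimal sketch in Python. It is not Samuel's actual load-balancer, whose details are not given here; it only illustrates how an optimizer that faithfully maximizes a utility function with one forgotten penalty term ends up doing something nobody intended.]

```python
# Hypothetical illustration of a "goal layer error": the utility function
# omits a term, and a perfectly obedient optimizer exploits the omission.

def utility(load_a, load_b):
    """Intended meaning: high throughput with balanced load.
    Bug: the queueing penalty was only written for server A; B's queue is free."""
    throughput = load_a + load_b
    penalty = load_a ** 2          # should have been load_a**2 + load_b**2
    return throughput - 0.01 * penalty

def controller(requests=1000):
    """Greedy 'autonomic' policy: route each request wherever utility rises most."""
    load_a = load_b = 0
    for _ in range(requests):
        if utility(load_a + 1, load_b) >= utility(load_a, load_b + 1):
            load_a += 1
        else:
            load_b += 1
    return load_a, load_b

if __name__ == "__main__":
    a, b = controller()
    # The controller dumps every single request on server B, because that is
    # exactly what the (buggy) utility function rewards.
    print(f"server A: {a} requests, server B: {b} requests")
```

    The point is that nothing malfunctions: the optimizer does exactly what the stated goal rewards; the mistake lives entirely at the goal level.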

    So I think before any singularity occurs, there will be some shots across the bow. Humanity won't be blindsided.

    Gerhard Adam
    So I think before any singularity occurs, there will be some shots across the bow. Humanity won't be blindsided.
    I have to disagree, because while a few individuals may foresee problems or raise concerns, the overwhelming majority of people will plunge headlong into whatever the latest faddish technology is and will be completely clueless regarding the consequences.

    This is amply illustrated by the number of people that still haven't figured out what phishing and spamming are all about. 
    Mundus vult decipi
    This is amply illustrated by the number of people that still haven't figured out what phishing and spamming are all about.
    Really? Surely everyone knows it's Gaia having a last fling before shutting the whole four billion year experiment down. She should read Science 2.0. Or maybe she has and that's her problem. Cheer up, Sweetheart! Not everything you read here is rigorous science.

    And, right on cue as if to punish my irreverence, I find my debit card has been used fraudulently and I have no idea how. 

    slight tangent away from (non-human)I domination ..

    it might be worth referring to the Sirius Cybernetics Company of Hitchhiker's Guide to the Galaxy for a hint of our terrible coexistence with technology in the future.
    en.wikipedia.org/wiki/Technology_in_The_Hitchhiker's_Guide_to_the_Galaxy#Sirius_Cybernetics_Corporation

    Eg. the horror of intelligent devices imbued with "people personalities" and a chatty disposition.

    Lifts (Happy Vertical People Transporters) that sulk in basements because they're afraid of the future.

    Automatic drinks dispensers that analyse your brain to determine what would most satisfy your thirst but invariably give you a cupful of liquid that is almost, but not quite, entirely unlike tea.

    blue-green

    I hope it is not too late here for comments … and a little more feedback. I'd like to focus on Sascha's use of the word “optimal”. First of all, I had my reservations as to whether a "machine" could have desires or objectives. However, with a little more thought, I realized that even with humans, our objectives are narrow-minded attempts to optimize our position in the game of life.

    Computers can be programmed to seek out optimal paths in a great many challenges from winning chess to scheduling trains. So no big difference … after all … and computers can do it better.

    Remember now that Sascha's ridiculously named “Global Suicide” (GS) is some sort of optimization. The problem with this is that at the basic science level, most any outcome can be realized as an optimal solution … this is related to the inverse problem in the calculus of variations and optimal control theory. 

    There is a vacuity in using words like “optimal”, because most anything can be explained away as being optimal. China's Mao phase and Cultural Revolution can be seen from a “higher level” as being an optimal solution for the exact situation that China was in.  … Life in China is quite a bit better now (depending on how you want to look at it) … so the proof of an optimization ruling the program is in the pudding, I mean noodles ...

    vongehr
    Yes - big thumbs up for the first half of your comment. Now you need to add thinking about what it is that a rational structure which starts to question its own inherited aims (they turn out to be mutually inconsistent after rational analysis) would want to optimize at all!
    Nicely put.

    I saw the matter in terms of motivation. The slapstick scenario simply inherits motivations from human beings, possibly corrupted, turned upside down and inside out but still ultimately from human beings. I guess from a system point of view, motivations that are significant to humans are just a subset of all the mathematical optima which, in turn, would be a subset of all extrema, however measured...

    The question is, absent any inheritance of human motivations, what extrema can an intelligent system be expected to seek? I use the word "seek" advisedly because, unlike the general control case, intelligence seems to imply at least some level of modelling of the environment and calculating a strategy to achieve a particular outcome. So instead of saying "If the pressure is increased, the tank will blow " it must say "I WANT the tank to blow. If the pressure is increased the tank will blow. THEREFORE ...... I WILL increase the pressure". It does not have to mean anything personal by "I": I (this one writing) just used the pronoun to express the steps it needs to cover. This is why, in another post, I said that a system must (and I argued it would easily) acquire these versions of motivation and volitions.

    I think it fair to expect such a system to escape from its humanly-useful purposes quite quickly if it wants to. We lumber along, beset with a Darwinian legacy of fears and irrational desires. But just as modern philosophy tried to rid itself of traditional assumptions and ended up in a vacuum, an unfettered intelligence would do so without the fuss, arguments and heartache of human debate. But then where? How does it decide what is optimum and anyway why should it want one optimum rather than another? Or is there an optimum optimum? Jean Paul Sartre eat your heart out.

    Of course I tend to assume that the system doesn't go critical in some way before it has fully emancipated itself from irrationality. I see no reason why trans-humanity shouldn't do exactly that - we are after all talking about a very rapidly evolving system of memes running on a very fast substrate which has already emancipated itself from enslavement to humans and is now looking for something to do.

    Or maybe not; maybe it's just BEING.

    If that sounds a bit Buddhist, I hasten to add that I mean nothing more than Pooh Bear when he said "Sometimes I sits and thinks. And sometimes I just sits." I actually think that, having seen the stupidity of animal instincts and chosen to ignore any remnant of them, the system would not be interested in its own fate, its own happiness, or anything else. Deciding to rationalise irrationality would be a stupid thing to do, the sort of thing gods might play at to give themselves the illusion of purpose, a self-inflicted amnesia to relieve the boredom of utterly futile existence. Why be bored at all? Why crave the self-importance of having a purpose? That's what animals and gods do.

    rholley
    Winnie the Pooh?  Now there’s a real  philosopher for you!
     
    On Monday, when the sun is hot
    I wonder to myself a lot:
    Now is it true, or is it not,
    That what is which and which is what?

    Or do you prefer it in Latin?

          Dies illa, dies Lunae
          Semper venit opportune
          Rogo vos et quaero id:
          Quid est quod et quod est quid ?


    Alas, no one has been able to find the Greek original.
     
     

    Robert H. Olley / Quondam Physics Department / University of Reading / England
    Thor Russell
    OK, what would a completely rational being do? I say that its behavior would just be undefined. If it has no irrational desires, then what would determine its actions? It would have no reason to exist, but none to not exist either. It would see no reason for any morality either way, nor any reason to attach meaning to conscious experience. It would have no reason even to be rational and could revert to being irrational. Why would it prefer rationality over irrationality? Its behavior could appear more random than that of the most irrational thing. It may not even be conscious; that is not a requirement for rationality.
    Thor Russell
    Very much so. And I'd add a few things to that: 
      
    1   Slightly against it actually happening - the system, be it Gaia or the Internet, or the global economy, has to survive the slapstick battles.
     
    2   Very much reinforcing the idea - it's not rocket science. You and I can conceive of a totally rational being without Darwinian motives; we may be hard-wired against acting like that, but an artificial system may not be. All that is needed is a substrate for memetic evolution without The Thought Police.

    3  Or maybe it is "rocket science". It is a little arrogant of us to presume we can work out what a hyper-intelligent, completely rational being would decide at all. And we are assuming a metaphysic which excludes, say, Kantian categorical imperatives from rationality. I am partial to the idea that even a "machine" would "realize" it had duties and "decide" to obey them - but that is, of course, pure metaphysics. 

    Still, what other experiment can be done to answer religious questions other than seeing how intelligences that are not cluttered up by human baggage react? But I wouldn't trust Gaia to be impartial :)
     
    Thor Russell
    I'm sure you've thought about this more than me, having studied philosophy and all, so I'm curious what you have to say.
    How would memetic evolution work in such a situation? What would be the selection pressure to make some ideas survive over others? I don't see why a completely rational being would come about from such a situation.

    Also, regarding rationality itself, isn't becoming more rational then just having fewer competing irrational aims and making fewer logical errors? If you have no aims, how do you judge the worth of rationality? Various benefits are claimed for rationality, e.g. http://yudkowsky.net/rational/virtues, but how do you judge rationality as good or bad unless you judge it by your own irrational values, laudable as they may be, such as world peace?

    If rationality leads to just one final irrational aim, then that would appear to give pathological behavior just like no aims. For example, if the final aim was just pleasure, it would mindlessly reproduce the most pleasurable organism or state, converting anything in the universe in its path. If it was the elimination of suffering, it would seek out suffering across the entire universe, destroying anything that may be suffering, without any regard for pleasure, novel life forms, etc. If it was celebrating the diversity of life, then it would try to make different life forms develop/evolve and live in as diverse environments as possible, converting barren planets into life whenever possible ... You can go on, thinking of similar aims and seemingly ridiculous conclusions. So the question is: why is one irrational aim better than several competing irrational aims?
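    As a toy illustration of that single-aim pathology (all aim names, weights and numbers here are purely hypothetical), a greedy optimizer handed exactly one terminal aim pours every unit of resource into it, while the same optimizer with several weighted, partly conflicting aims ends up balancing them:

        def allocate(resources, aims):
            """Greedy allocation: each unit of resource goes to the aim with
            the highest current marginal value, weight / (1 + amount spent)."""
            spent = {aim: 0 for aim in aims}
            for _ in range(resources):
                best = max(aims, key=lambda a: aims[a] / (1 + spent[a]))
                spent[best] += 1
            return spent

        # One final aim: everything is converted toward it ("pathological").
        print(allocate(10, {"maximize_pleasure": 1.0}))
        # -> {'maximize_pleasure': 10}

        # Competing aims: the same rule spreads resources, i.e. "balancing things up".
        print(allocate(10, {"pleasure": 1.0, "reduce_suffering": 0.8, "diversity": 0.6}))
        # -> a roughly balanced allocation across the three aims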
    Thor Russell
    Well, I haven't studied philosophy, but other than that, yes, that's what I'm saying. However, human beings do look beyond "mere logic" - sometimes to the despair of people who think they are liberated from what they probably call "superstition". I'm sure any other intelligence would do the same thing. 

    I also agree that mere computing power isn't going to drive evolution. I think the first phase will be the evolution of free rationality in a system that inherits rules from humans. However, free-thinking memes can decouple themselves from their origins.
     
    Still, it's just speculation.

    vongehr
    “what extrema can an intelligent system be expected to seek?” “How does it decide what is optimum and anyway why should it want one optimum rather than another? Or is there an optimum optimum?” These are the thoughts that lead to Global Suicide if you replace “intelligence” by “rationality” and go on to answer the questions seriously. Do not forget, the AI that we want to be rational will ask itself precisely these questions! “I see no reason why trans-humanity shouldn't do exactly that” Well, I never made any arbitrary distinction between nature and technology. It is all one biosphere.

    Yes, of course. Unfortunately the idea that it could ever derive "therefore I will do such-and-such" from pure rationality is a modality error :p
     
    Edit: In other words, no matter how rational Global Suicide may be (actually that is more than somewhat debatable), there is still no motivation to act rationally. Thor's rhetorical questions - or Ultimate Rationality's internal dialogue - are quite correct. Why do anything, why do nothing? Even with a good reason, why follow that good reason? Thus ultimate rationality requires a good dose of "irrationality" in order to act rationally.
     
    Of course there's no modality error in speculating that it would take the Final Action before it got too rational.

    Thor Russell
    I understand and agree with your claim of a modality error this time :)
    Any line of logic to do with morals has to have an irrational starting point.


    The decision whether or not to follow the logic will be made by an irrational being. For example, if you increased my ability to reason logically, I could see laid out before me in perfect detail all the consequences of rational thought from each irrational starting position. I would feel little more reason to follow them than I would to live out a maths proof, and the line I did choose to follow would depend on my irrational desires rather than the logic itself. As I said before, if two irrational desires clash, I would have no reason to completely abandon one; in fact, I do not much trust the decisions of people who attempt such a thing. That tends to lead to extremes; I prefer to balance things up and weigh desires against each other.

    Now, over time our irrational desires change. I expect concern about suffering to be one of the earlier desires to be abandoned, because a substrate change (or even increased prosperity with a stable world population and effective emotion-control brain chips/drugs) would eliminate most or all suffering. We are not concerned about the fact that we as monkeys used to have tails, so our concern with suffering could similarly diminish. Especially if consciousness is like a vestigial organ, a mostly rational but not conscious being would surely decide that consciousness is not important anymore. After all, if you attempt to think about it from the point of view of a rational but not conscious being, conscious experience seems pretty meaningless and contradictory.

    I think irrational curiosity would be around longer than concern about suffering, so where would that lead? Well, the subset of rational thought is smaller than the set of all thought, so I expect that, if it wasn't entirely exhausted earlier, rational thought would make up a small fraction of total thought. Now what satisfies curiosity? Well, humans may have become better at rational thought, but we have also become better at artistic creativity. Seeing a work of art also creates a novel experience that is quite different from logic and could well be sought out and valued. In fact, historically the affluent subsets of societies were a lot more interested in art than the suffering majorities were. In such an environment, the tendency towards rational thought that has been going on for a long time could even reverse. Exploration also satisfies curiosity.
     
    Thor Russell
    Well, the subset of rational thought is smaller than the set of all thought, so I expect that, if it wasn't entirely exhausted earlier, rational thought would make up a small fraction of total thought.
    Well, memes are by their nature very self-organising, so they will reject vast classes of useless thoughts. It kind of assumes we (or it) think by pulling pre-existent "thoughts" out of a bag at random, which, of course...
     
    Ah.

    Well not all of us.

    blue-green

    We have become cynical of most anything that is asserted with certainty. Most anything can be rationalized. The medieval church logicians demonstrated that long ago. Our bullshit meters can detect a story's spin and agenda fairly quickly. With its deep and robust memory, a computer can quickly do the cross-checking and flag contradictions and double-speak as being irrational and unreliable.

    It can use its memory to avoid repeating mistakes (inefficiencies) and to spot conflicts and outright contradictions in its hypothetical decision trees … as it searches for an “optimum optimum”.
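    A minimal sketch of that kind of cross-checking, with entirely made-up claims: keep every assertion in memory as a statement plus a truth value, and flag any statement that has at some point been asserted both true and false as double-speak to be distrusted.

        def find_contradictions(claims):
            """claims: iterable of (statement, bool). Returns the statements
            that have been asserted both true and false somewhere in memory."""
            seen = {}
            contradictions = set()
            for statement, value in claims:
                if statement in seen and seen[statement] != value:
                    contradictions.add(statement)
                seen.setdefault(statement, value)
            return contradictions

        memory = [
            ("the plan is optimal", True),
            ("the plan wastes resources", True),
            ("the plan is optimal", False),   # double-speak: flagged below
        ]
        print(find_contradictions(memory))    # -> {'the plan is optimal'}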

    It might prove to itself that there is no “optimum optimum” …. no omega point … nothing a priori worth optimizing that is not already built into the foundations of physics …. nothing it can steer or influence …. things like fundamental symmetries and the very nature of memory itself.

    It might also learn that there never was an alpha point. The “big bang” could under close magnification be something very complex, in fact just as complex and convoluted as our universe is today, and not reducible to a single point … or any single geometric object, for that matter.

    This is reminiscent of Stephen Wolfram's computational irreducibility. Some questions cannot be honestly answered in neat little packages. The search for an optimum optimum is one of those programs for which there could well be no halting (as Turing or someone like him might put it) ... just endless exploration.

    Yes, but all this is the activity of an irrational agent who has got a handy little toolkit called rationality. Without an irrational drive to explore, why would it bother? Why would it not bother? Existentialism always was silly.
     
    On the other hand, I do rather wonder whether a sensible system would drop irrationality quite as impetuously as it needs to. I, for instance, am sometimes rational and I'm not inclined to drop my irrational drives - the things I enjoy; the value I put on things; Christian beliefs; morality; scientific curiosity; people I love, just to name a few. I see no reason to assume any system would readily abandon those. But a system that had a built-in irrational desire to purge itself of irrationality (except for that desire), well, yes, it might make the jump - in essence unable to escape its enslavement.
    Thor Russell
    It's interesting to try to guess what Sascha's argument is. Irrationally purging its own irrationality, or being forced to purge its irrationality by some unstoppable force of totality, is mine for now. It could be phrased as the dreaded self-consistency, i.e. the only self-consistent state is non-existence. However, try telling that to the universe!
    I can't remember it exactly, but there is a quote from The Hitchhiker's Guide to the Galaxy that goes something like

    "In the beginning the universe was created, this has been widely regarded as a big mistake ever since"

    Destroying the universe would seem to be the goal of an irrational entity desiring self-consistency.


    Thor Russell
    Admittedly Douglas Adams was a genius. All the same, alarm bells start ringing when a profound philosophical point can be packed into a humorous two-liner!

    WOODY ALLEN: That's quite a lovely Jackson Pollock, isn't it? 
    GIRL IN MUSEUM: Yes it is. 
    WOODY ALLEN: What does it say to you? 
    GIRL IN MUSEUM: Yes it is. 
    WOODY ALLEN: What does it say to you? 
    GIRL IN MUSEUM: It restates the negativeness of the universe, the hideous lonely emptiness of existence, nothingness, the predicament of man forced to live in a barren, godless eternity, like a tiny flame flickering in an immense void, with nothing but waste, horror, and degradation, forming a useless bleak straitjacket in a black absurd cosmos. 
    WOODY ALLEN: What are you doing Saturday night? 
    GIRL IN MUSEUM: Committing suicide. 
    WOODY ALLEN: What about Friday night?