    Global Suicide: No Singularity, Just Evolution Of Deadly Rationality
    By Sascha Vongehr | April 2nd 2011 03:46 AM
    About Sascha

    Dr. Sascha Vongehr [风洒沙] studied phil/math/chem/phys in Germany, obtained a BSc in theoretical physics (electro-mag) & MSc (string theory)...


    Why should technology not go on accelerating as it has before? Why should humanoids not get ever brighter; why should democracy not grow until true communism emerges? Techno-progressives emanate an air of renegade radicalism. They like to accuse critics of not thinking things through sufficiently and of stopping at whatever point best serves to rationalize their beliefs.


    Yet the critics and many proponents of technological enhancement alike agree on where to stop asking: a racist ‘we (I, humans, our planet) must survive and conquer’ plus lip service toward a pseudo-democratic doctrine so comfortably ‘coincidentally’ at the helm as we speak. As bad as that may be regarding other issues, it turns discussions of the future into Jules Verne stories.

    Generalized evolution and futurism/futurology: here again transhumanism reveals itself to be largely sci-fi, which invariably betrays how humans are stuck with their contemporary concepts, unable to envision the future, a critique I started last time (also here and here for more discussion in the comment sections).


    Invariably, death, always equated with an apocalyptic end, is deemed worse than dystopia. Why? The latter’s potential for revolution? What arrogant coffeehouse existentialism. Certain questions are taboo among the smug iPhone-bourgeoisie ‘radicals’. Techno enthusiasts confuse progress with the second coming of, this time, true democracy.


    Must I really first believe in ill-defined freedom to envision the future without preconceived notions? Is the US in decline in spite of or because of its type of ‘democracy’? In case you have not realized it yet: China has taken over the future, and not because of chopsticks. The future has already partially arrived, but techno snobs still cannot imagine it even while it hits them over the head.


    My criticism is not shortsighted about the potential of technology. On the contrary: yes we can; we are developing past the merely human stage right now. We could not stop it if we all really wanted to. This TED talk by Paul Root Wolpe should eliminate any doubt about the convergence of biology and technology. The techno-future will come, but if you think that the outcome is anything like what you dreamed or hoped for, think again!


    What is Evolution?

    The transhumanism crowd has understood evolution while many other intellectuals still grapple with getting their heads around mere old biological evolution. Evolution is tautologically true: whatever there will be (‘successful’, more numerous, …) in the future will be there (successful, more numerous, …) in the future, regardless of the specifics. This is the basis of ‘algorithmic evolution’.
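    The substrate neutrality of this tautology can be illustrated with a toy model (a hypothetical sketch of my own, not the author's; all names and rates are invented): selection falls out of differential replication alone, no matter what the replicators are.

```python
# Toy 'algorithmic evolution' (illustrative, made-up numbers):
# selection is just differential replication, regardless of whether
# the replicators are genes, memes, firms, or computer viruses.

def evolve(population, copy_rate, steps):
    """population: {type: count}; copy_rate: {type: offspring factor per step}."""
    for _ in range(steps):
        population = {t: n * copy_rate[t] for t, n in population.items()}
    return population

pop = evolve({"slow": 1000.0, "fast": 1000.0},
             {"slow": 1.1, "fast": 1.3}, steps=20)
share_fast = pop["fast"] / sum(pop.values())
# Tautologically, what replicates more is what there will be more of:
print(f"'fast' share after 20 steps: {share_fast:.2%}")
```

    Note that nothing in the model refers to biology; the same arithmetic applies to any substrate, which is exactly the point of calling evolution ‘algorithmic’.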


    Evolution has no clue about the difference humans like to uphold between nature and technology. Some argue that transhumanism “isn't a matter of letting evolution take its course; instead it specifically engineers.” The two are indistinguishable as far as algorithmic evolution is concerned.


    Biological evolutionary change is spread out over generations, but for most substrates that harbor evolution, the concept of generations is not even applicable, be it at the pre-biotic stage or at the level of animal societies, sociology, nation states, religions, computer viruses, etc. With transhumanist engineering it is explicitly desired that the current generation intentionally evolves itself.


    Dan Sperber argues that the formal mechanisms that explain cultural evolution come from epidemiology, not population genetics. Ideas spread like contagious diseases, not like genes. The fastest-developing evolutionary substrates are cyberspace and nanotechnology. Many developments in these areas still accelerate exponentially. There are no thousand years left for this dinosaur to go extinct. We are finished already.


    With this take on evolution as the background, let's see what light can be shed on the future.


    What Can We Do?

    Ethics is about what you should do. There is nothing whatsoever ants can do to avoid the evolution of ant colonies. Nobody has given a valid argument for why the system called “human” could be so special that there is even the slightest chance of changing evolution from how it has worked before, from how it always ‘works’.


    Read the news lately? The increasing stress and depression in the ‘developed world’ is not even in the news. Ethics likes to imagine useful measures of happiness. Assume we agree on one and calculate its expectation value with advanced evolution theory. What if the evolved balance of well-being to suffering in co-evolved systems lies in the red? Global death obviously stands at zero, which is relatively happier.


    You cannot kill the machines, as the Luddites tried, without killing yourself anymore. You can try to storm them in order to jump on and steer, but naive optimism accelerates the inevitable instead of slowing it down. It is no coincidence, but due to the kind of evolved stability that only co-evolved complex systems have, that the vast majority of ‘great ideas’ backfire and bring on precisely what they tried to avoid, and worse. Transhumanists need to think much harder about the great dangers of their overenthusiastic plans.


    What to Expect in the Future?

    There are only two possibilities: Evolution stays pretty much the same (S) as it always has, independent of the substrate in which it is ongoing. Or evolution is somehow different (D) in the new substrate; there is some sort of threshold because memes in cyberspace in some way behave fundamentally differently from any evolutionary actor before.


    If S is true, the future will be as dark as past and present, and this is why many embrace D, the singularity for example. Without extremely good reasons for assuming otherwise, we should assume S. However, I have reasons for taking D seriously, but it will be the opposite of what people wish for! The novelty is not intelligence but rationality. There will not be unprecedented intelligence in pursuit of irrational aims, but unprecedented rationality.

    It is not homosexuality or murder or masturbation that distinguishes us from animals. Humans commit suicide. They are the first systems with primitive beginnings of rationality, while all evolved systems show 'intelligence'. Extreme rationality is deadly.


    No Freedom in the Beast

    Many have suggested that whatever humans develop into, those ‘androids’ will become somewhat like cells living in a multicellular organism, the beast (from Robert Pirsig's “Lila”) or megaorganism. But actually, we are already in the megasystem: society. Yes, humans will be like the ‘happy’ cells of a body, be that body a murderer or tyrant. It is also called “good citizens” paying their taxes.


    Some cling to a biological picture in order to argue for hope. They say something like: “There is a multitude of microorganisms living in and on our bodies, more gut bacteria in us than human cells! There are myriads of opportunities to remain a ‘free agent’ inside a complex and highly integrated system.”


    Sure, if you like to compare yourself with viruses and gut bacteria landing in the toilet after a day. However, you may rather want to compare yourself with complex, highly integrated ‘social’ systems, like skin cells voluntarily committing suicide after a week, or say with a cortex neuron in the human brain. Its ‘freedom’ is decoupled from having any undesired effect. This is the fundamental nature of the evolved democratic doctrine and its dangerous metastability, as humans are still not sufficiently ‘wired into’ pre-selected perceptions to make it reliable.


    Some take hope from the misconception that “relative to the time scale of individuals, the evolution of megaorganisms is slow.” However, the evolution of the superstructure is precisely not slower! Humans have not evolved much since the Neolithic age, while society has obviously changed like crazy.


    The evolution of higher-level systems happens precisely because adaptation speeds up through the higher stratum, as changes in the environment can be survived better that way. The lower-level components' evolution is slowed down. They become ‘legacy systems’, like the QWERTY arrangement of letters on your keyboard.


    The interesting development today is the emergence of a new evolutionary substrate, namely cyberspace, which now starts to evolve orders of magnitude faster than anything that evolved before. To be about 100% expected: we will become legacy systems! The superstructure will also likely lead to the total abandonment of volatile, human-like individualistic consciousness in cyberspace.


    Not surprisingly, those who are scared come forth with desperate suggestions that the new evolutionary substrate may behave differently for the first time: the “singularity”, a strong contender for the most misleading terminology this side of the Great Attractor, cast as our savior rather than destroyer, if we all just use Twitter enough.


    Global Suicide

    There have been many future scenarios suggested. They all have their specific features that make them more or less likely. Global Suicide is not merely yet another contender, one a little more thought through; it is the result of thinking such scenarios in general through to the end. It is the result of taking evolution seriously as something that applies generally. Is there an unexpected attractor or threshold in general Darwinian processes, hypothesis D, like the ‘singularity’?


    Sci-fi novels have come up with punk futurisms involving silly scenarios, like: once we know everything, we get bored and kill ourselves, as expounded on sites like exitmundi. Global Suicide is not drivel inspired by the renegade emo/goth coolness of death. Global Suicide is a scientific question about the substrate dependence of algorithmic evolution.


    I have argued for the Global Suicide hypothesis, giving such thoughts a serious treatment for the first time. That bacteria often kill their host and thus themselves (without necessarily finding another host to jump to) is not what Global Suicide is about. Global Suicide is about a globally omnipotent structure that results from potentially passing the bottleneck of such likely catastrophe. It is the opposite of runaway environmental disasters.

    The unlikely development through the bottleneck, brought on by ‘too successful’ conscious individualistic systems, leads to a globally omnipotent superstructure that will eventually switch itself and all else off, because that is the only rational decision left*.


    Global Suicide is the nightmare of the heaven-on-earth techno-evangelists who hope for a quite different kind of salvation. I have argued elsewhere and will argue here again that it is the logical outcome of the enhancements that we are now actively pursuing in order to make life incrementally better and optimize for self-fulfillment.


    Global Suicide predicts that shortly after the development of information technology, life disappears due to what is inherent in Darwinian processes generally. This makes Global Suicide the only suggestion that brings the probability of long-lived advanced civilizations to zero rather than just a small number. It is thus the only good candidate to solve the Fermi paradox, which is about there not being a single trace of any other civilization out there although astrobiology keeps finding ever more planets, more chemistry that life could be based on, and so forth. Astronomical observation thus supports hypothesis D: Global Suicides have likely occurred already multiple times in the Milky Way.
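    The Fermi-paradox claim can be made concrete with a Drake-style estimate (a sketch of my own with illustrative parameter values, not the article's): every factor except longevity can be as generous as you like, yet a civilization lifetime of effectively zero zeroes the expected count of detectable civilizations.

```python
# Drake-style estimate: N = R* · fp · ne · fl · fi · fc · L
# (star-formation rate; fractions with planets / habitable planets /
# life / intelligence / detectable technology; lifetime L in years).
# All parameter values below are illustrative, not claimed as data.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Generous assumptions with long-lived civilizations:
optimistic = drake(R_star=7, f_p=0.5, n_e=2, f_l=0.5, f_i=0.5, f_c=0.5, L=10_000)

# Same assumptions, but Global Suicide drives the lifetime of a
# detectable civilization to effectively zero:
global_suicide = drake(R_star=7, f_p=0.5, n_e=2, f_l=0.5, f_i=0.5, f_c=0.5, L=0)

print(optimistic, global_suicide)  # many civilizations vs. exactly none
```

    Small non-zero longevity would still leave a few observable civilizations; only the zero of hypothesis D matches the observed silence exactly, which is the article's point.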

    What may irk many people the most is that none of this is easily attacked as neo-Luddism, because guess what: it is OK! In fact, Global Suicide is but one part of Suicidal Philosophy, which puts the usual boring “philosophy of suicide” from its head onto its feet. It is about assisting rational suicide, the personal as well as the global. Making philosophy scientific again and useful for the suffering system, regardless of what kind of system: what could be more ethical?

    Comments

    A lot of interest; please elaborate. I don't understand why a globally omnipotent superstructure would eventually switch itself and everything off.
    The tautological notion of evolution you voiced will always, no matter what, hold in any consistent universe. At first sight, it may seem dreadful, since nothing, let's say a priori, prevents a universe from evolving into horrible metastable states. Yet, a posteriori, observations of this very universe suggest that there are certain features which at least suggest that 'good' (I know this is vague, but) evolved states are possible. Human brains instantiate conscious experiences, and I think a strong case can be made that if anything is valuable, it is because it promotes the existence of better conscious experiences. Furthermore, once we abstract the goodness of something by localizing the positive conscious states it promotes, we can notice that humans are a kind of algorithm that explores the space of possible conscious states, attempting to maximize the instantiations of positive conscious states. Of course, the existence of this algorithm is explained in evolutionary terms, where the process of evolution itself does not maximize preferable conscious states. The evolved algorithm, however, can take over: intelligence can be put to work so as to explore, categorize, and eventually instantiate the better regions of the space of possible conscious experiences. I would consider such a scenario rather positive, and certainly it does not hint at an event of global suicide. Although certain conscious experiences might never be instantiated because of their poor quality, the brightest and most profound conscious states may shine for as long as the physical substrate of the universe allows.

    vongehr
    “that 'good' (I know this is vague but) evolved states are possible”
    But this is the core rather than a side issue you can brush over. It must be ‘good’ on a scale that was for some reason adopted by an advanced, rational system.
    “if anything is valuable it is because it promotes the existence of better conscious experiences.”
    That’s kind of circular, and it gives some sort of (general?) consciousness a certain value it may not have for future systems. I am unsure about what your favorite ‘consciousness’ is that you refer to here, but I suspect that it is exactly the one that will at most remain a ‘legacy system’.
    “humans are a kind of algorithm that explores the space of possible conscious states attempting to maximize the instantiations of positive conscious states.”
    But humans are irrational and soon gone.
    “and eventually instantiates the better regions of the space of possible conscious experiences.”
    Here you suppose that an informed future intellect still believes that some sort of mechanical instantiation in a real world adds something to totality that is not there already anyway.
    "I would consider such scenario rather positive, and certainly it does not hint to an event of global suicide."
    Global Suicide is not anything negative.
    You did not address the concern I voiced about the argument you use on the basis of how evolution works. As I said, it is impossible for possibility 1) to be false. Yet this does not represent a problem for systems that attempt to engineer new and more comprehensive systems so that the quantity and brightness of positive conscious states is maximized. If you consider the rational self-modification of a system a suicidal act, then I would agree that a certain kind of global suicide may take place. If this is what you argue for, then we are not in disagreement, though I would be hesitant to call rational self-modification an instance of suicide. For one thing, it is actually possible, with some effort, for a human to identify himself with consciousness in its abstract form over and above his personal characteristics. I do this on a regular basis, and it is for this reason that if I had the choice of replacing myself with a well-engineered nirvana machine, I would not hesitate, no matter how different it may be from me.

    In another article (No Higher Consciousness) you stated: "Humans feel consciousness to be the precious crown of creation. This is first and foremost a feeling co-evolved along with consciousness!" You are right, but the fact that valuing consciousness is an evolutionary artifact does not mean that it is a neutral fact. I would go as far as to qualify this as an amazing discovery made by evolution on earth. Its implications transcend our historical period: the fact remains that it is possible to create conscious experiences that value themselves with a subjectively convincing feel of 'self-evidence' to them (just ask someone experiencing profound euphoria and she'll tell you: "if you were in my shoes, you'd understand why this state of mind is self-evidently worthwhile"). The fact that these experiences are common among humans does not only shed light on the possible evolutionary paths that biological systems may travel. It also sheds light on a more profound terrain: worthwhile phenomenology is a timeless possibility, even if seldom instantiated at the present moment.
    I claim that it is possible to put intelligence to work for the instantiation of rich and worthwhile conscious states. Most likely, the most profoundly, reflectively worthwhile states of consciousness would not resemble in any serious way what we already know is possible to experience. The space of possible phenomenologies is simply extremely huge: this is obvious not only from the range of normal human experiences, but from how easy it is to dramatically shift conscious experiences by using brain-altering methods. We already know of some amazing subjective states, and this is enough to conceive of a system that rationally preserves itself. However, we might expect most systems of this sort to modify themselves many times before deciding to stay at any particular ecology of phenomenological instantiations (or even just one unified, huge conscious experience, let alone the possibility that it decides it is preferable to have many, although smaller, replicas of bright conscious states).

    vongehr
    Andres, thank you for your thoughtful comment. I am glad to see serious comments and I am sincerely sorry not to have come around to give yours a more serious answer sooner (I have now edited my first answer above).
    “You did not address the concern I voiced about the argument you use on the basis of how evolution works. As I said, it is impossible for possibility 1) to be false.”
    You seem to indicate that 1) = the mentioned tautology. What I mean by 1) is basically evolution going ahead as usually observed in all substrates; however, that empirical experience is no proof that in the realm of memes (and rationality) there may not be an interesting threshold, for example. See, it is kind of like saying in 1900 that gravity is everywhere the same for sure, whatever the actual theory is, so it must in all instances be just like what we see here on earth. Kind of true, but still, once density goes over a certain threshold, there are black holes, and it would be kind of a stretch to claim that that is nothing special at all [some even think there is a singularity in a black hole ;-)]
    “rational self-modification of a system a suicidal act”
    No, I mean the complete termination.

    Not sure what you even mean by “neutral fact”, but it seems what you mean by “self-evidently” is basically just a form of consistency. I fail to see the connection to what I wrote, however.
    “it is possible to put intelligence to work for the instantiation of rich and worthwhile conscious states.”
    Having what worth to what, that is the question.
    “the most profoundly reflectively worthwhile states of consciousness would not resemble in any serious way what we know already that is possible to experience.”
    This seems to be a mystic, obscure concept of consciousness. “Profoundly … in any serious way” - I cannot make sense out of it. Where is the measure?
    “The space of possible phenomenologies is simply extremely huge”
    According to what measure? If you seriously put down numbers (e.g. the bit-rate of the visual cortex, or the possible physical states in our personal 'hubble bubble' of about a few milliseconds times the speed of light, or whatever Fermi calculation you can come up with) you will see how limited it actually is. (Remember, faster computation can make this at most smaller, and I have never heard anybody suggest that slower computation is the route to 'elevated consciousness'.)
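    The kind of Fermi calculation hinted at here can be sketched in a few lines (my own back-of-envelope with assumed numbers; the comment gives none): bound the space of distinguishable conscious states by an assumed information rate times a moment duration.

```python
import math

# Back-of-envelope bound on 'the space of possible phenomenologies'
# (both numbers are assumptions for illustration, not measurements):
bits_per_second = 1e7   # generous throughput, roughly optic-nerve scale
moment_seconds = 0.1    # one 'specious present' of ~100 ms

bits_per_moment = bits_per_second * moment_seconds
# At most 2^bits distinguishable states per moment, i.e. 10^(bits * log10(2)):
log10_states = bits_per_moment * math.log10(2)
print(f"at most ~10^{log10_states:.0f} distinguishable states per moment")
# Astronomically large in absolute terms, yet finite and computable,
# which is the point: the space is bounded, not mystically inexhaustible.
```

    Whatever numbers one plugs in, the result is a finite bound, which is all the argument above needs.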
    “We already know of some amazing subjective states”
    Aha – I think I know your fallacy. No, the feeling of how amazing that LSD/meditation/… state was comes not from amazing hidden dimensions that await us but simply from the fact that it felt amazing; it is no more than another feeling! Like horror trips are fear without any threat, sometimes we get this all-is-one amazing feeling. There is nothing more behind that amazingness than that it feels amazing.
    “we might expect most systems of this sort to modify themselves many times before deciding to stay at any particular ecology of phenomenological instantiations”
    Yes, right, here is the point it gets interesting! What is that consistent rational end state considering that the system wants to securely stay in it?
    “or even just one unified huge conscious experience, let alone the possibility that it decides that it is preferable to have many, although smaller, replicas of bright conscious states”
    OK, here it goes into mystic, obscure definitions of consciousness again, plus a somewhat classical direct realism about how I am somehow there twice if there are two of me in the box called universe. I have criticized this, but have also not given the core argument (for now, I kept it at repelling arguments for why whatever I claim must be impossible, before spelling out the argument in detail, which is not trivial).
    > "... What is that consistent rational end state considering that the system wants to securely stay in it?"

    this depends on the fundamental belief system .. so it could be something involving optimizing either the route to god or the path to universal entropy. Presumably, god-oriented evolution involves the betterment of mankind and progressively higher states of consciousness. Non-god evolution would involve some form of optimization principle based on the essential creative principle of the universe. For non-god evolution, we're just along for the ride, because we'd never fathom the optimization goal. We're stuck with science and rationality. Those are our limits, not reality's.

    vongehr
    As argued in the post, mankind is at that point already not the issue anymore. The autonomous individuals, terribly afraid of not having a "closest continuer" everywhere along the t-axis, clinging to their obscurant soul concepts, are gone. Now what is the equivalent of a "belief system"? Or in other words: what aim could there rationally be, and how is it enforced; what measurements according to what scale (utility for what aim) are employed to decide actions? The self-consistency of that system is what I consider, one of its obvious main aims being the stability of the optimized state.
    People always read some depression into my writing, some angry white man paying back, but that is due entirely to fear and confusion on the perceiving side. Do not forget, Global Suicide is the last rational decision that ensures stable maximized well-being!
    Thank you for your replies. I've been thinking about this in the last few days. I had not thought about the ideas you are presenting, so they require some careful consideration on my part. I think I have an idea of where you are going, but I'll stay tuned. I will also voice what I think once I have a clearer picture.

    "Aha – I think I know your fallacy. No, the feeling of how amazing that LSD/meditation/… state was comes not from amazing hidden dimensions that await us but simply from that it felt amazing; it is no more than another feeling! Like horror-trips are fear without any threat, sometimes we get this all is one amazing feeling. Nothing more behind that amazingness than that it feels amazing."

    Yes, it's all in the head.

    "Global Suicide is not anything negative. "

    Then why do you talk about it in language like this?:

    "Transhumansists need to think much harder about the great dangers of their overenthusiastic plans."

    If "suicide" is not negative at all, then the more "over"enthusiastic, the BETTER! Bring on the danger! Ramp it up like crazy! All the way! Find the most dangerous, foolish, and irresponsible path and EMBARK ON IT! That would accelerate the bringing about of this "good" suicide, no?

    vongehr
    No! Firstly, the danger is not the suicide, but the dark ages that may be brought on along the way. Secondly, some irrational "ramp it up like crazy" is not going to switch off all life in such a way that it cannot evolve again. The rational switch-off is there to avoid that any of the system's previous states are going to occur again. This can only be accomplished by a very well planned, rational switch-off.
    "But humans are irrational and soon gone."

    And the "super rational" system is also "soon gone" since it ultimately decides to switch off, permanently.

    Already covered at exitmundi.com : http://www.exitmundi.nl/suicide.htm

    vongehr
    No, anonymous, what you linked to is nowhere close to a proper treatment of the issue. Global Suicide is about the rational switch-off of a globally potent structure, not about the impossible agreement between gazillions of irrational entities suffering from depression or somesuch.
    Where the site actually mentions something remotely similar ("purple cloud"), there are no logical reasons presented, for example: "we will realize we have become useless. Right there on the spot, we will kill ourselves" - no - why - there is no reason to do that. In fact, there are traditionally reasons given not to commit suicide in that situation.
    Moreover: that web site proposes a naive "we are a simulation like a Sims game" aspect that I argue to be nonsense.
    The nice thing about that site is that it is able to deal with the issue without much of the usual, boring moral ballast. However, the logical argument and the discussion of alternatives are missing. This is what I claim can be provided.
    "In fact, there are traditionally reasons given not to commit suicide in that situation."

    But why does it matter? If the inevitable endpoint of evolution is suicide anyway at the hands of this "super rational" machine system, and it's a Good thing, why not speed that up with a little irrationality? :)

    Wow, just found this site, and read two ridiculous articles in a row. For a site called Science 2.0, you people are pretty narrow-minded folk.

    You ignore or omit the simple fact that life exists for no other reason than to survive. We've been evolving over millions of years, becoming increasingly adept at surviving in increasingly diverse conditions. What on Earth makes you think that leads to death? It would run entirely contrary to all evidence leading up to this point. Human beings have evolved intelligence to enable us to survive increasingly diverse conditions. Our future technology will continue the same trajectory.

    Those who imagine a dystopian future do so out of fear, just as all end-of-the-world-is-upon-us doomsayers have since humanity began. It is neither logical, nor does it have any empirical evidence to support it. Just because a future is possible, does not make it probable or even likely. By all means imagine dystopian futures if you like. They make good sci-fi.

    Incidentally, we are not "coincidentally" at the helm of this evolution. It is only happening because we exist, and because of our nature, which is the pinnacle of natural evolution.

    Gerhard Adam
    Wow .. and I just read two posts by (presumably) the same individual and you've managed to demonstrate that you understand neither technology nor biology.
    Incidentally, we are not "coincidentally" at the helm of this evolution. It is only happening because we exist, and because of our nature, which is the pinnacle of natural evolution.
    That pretty much sums up the problem regarding your understanding of evolution, biology, and just about everything to do with predicting the future.
    Mundus vult decipi
    vongehr
    Gerhard: Don't feed the trolls. ;-)
    Troll: Thank you for your interest in Science 2.0. A warm welcome to our site. So you insist on scenario number one, namely that there is no unusual threshold in any evolutionary substrate. Fine. Go and criticize those who talk about the singularity as if it is some sort of heaven, then, and people like yourself who think that we are the pinnacle of evolution. You cannot have it both ways, 1) and 2) at the same time, I am afraid.
    "That pretty sums up the problem regarding your understanding of evolution, biology, and just about every thing to do with predicting the future."

    Maybe you should explain *WHY* it is wrong, so that other people who want to learn something, can.

    Gerhard Adam
    Maybe you should explain *WHY* it is wrong, so that other people who want to learn something, can.
    OK, here's a whack at the original poster's comments.
    You ignore or omit the simple fact that life exists for no other reason than to survive. We've been evolving over millions of years, becoming increasingly adept at surviving in increasingly diverse conditions. What on Earth makes you think that leads to death? It would run entirely contrary to all evidence leading up to this point. Human beings have evolved intelligence to enable us to survive increasingly diverse conditions. Our future technology will continue the same trajectory.
    The direction of human evolution is not directly involved with intelligence (at the individual level).  After all, if you consider the historical record, there is little likelihood that there is any significant intellectual difference between modern humans and our ancestors from thousands of years ago.  However, it is equally erroneous to suggest that our modern achievements are the result of "human intelligence" (in a specific sense) because there isn't actually any human being that knows how it all works.  In truth, our achievements are the result of a quirky shift in our social relationships that made us move in a direction of increasing co-dependence and an overwhelmingly high division of labor.  It is this latter adaptation that has provided all the human development that has occurred.  In other words, our modern civilization is the direct result of our division of labor and not intelligence.  This has created a kind of "social intelligence".
    Those who imagine a dystopian future do so out of fear, just as all end-of-the-world-is-upon-us doomsayers have since humanity began. It is neither logical, nor does it have any empirical evidence to support it. Just because a future is possible, does not make it probable or even likely. By all means imagine dystopian futures if you like. They make good sci-fi.
    It's not a dystopian future in a normal sense.  It's problematic when people that can't even articulate the problems are proposing solutions for which they haven't considered any ramifications.  In effect, the proposed "future" is something that is supposed to be tightly engineered and controlled in the worst kind of eugenics program imaginable.  It is the incredible social naivete that is disturbing.
    Incidentally, we are not "coincidentally" at the helm of this evolution. It is only happening because we exist, and because of our nature, which is the pinnacle of natural evolution.
    The problem here is in even asserting that there is a "pinnacle of natural evolution".  Evolution doesn't have a direction.  To suggest that we're at the "helm" defies any reasonable explanation.  We can control virtually nothing about the natural world, because contrary to our assumptions, it is not a static existence.  For every action we take, other organisms apply their "counter-measures" precisely for the purpose of ensuring their own survival.  So the promise of antibiotics is corrupted by the evolution of antibiotic resistant bacteria.  We're at the "helm" of nothing.  In addition, by assigning a trajectory to evolution it creates the illusion that proposed "improvements" are building on previous developments that ultimately lead to positive directions.  However, basic questions such as what intelligence is and what would it mean to possess more of it aren't answered.  If evolution doesn't have a direction, then what is being proposed?  It's simply speculative nonsense based on a complete misunderstanding of something as basic as evolution.
    Mundus vult decipi
    so, in essence, the Supreme Court was right - corporations *are* people?

    So, offering an intelligent counter-argument is considered trolling around here? Why does that not surprise me.

    Species evolve to survive. We have evolved to the point where we can manipulate our own evolution, thus empowering us to survive indefinitely. We can't do it today, but we have the potential for doing it. Ergo, we have reached the pinnacle of natural evolution. We don't need any further natural evolution, for we have the tools already. All we need now is the knowledge and technology. Any species that can take the reins from nature will reach that pinnacle as well. How is this hard to understand?

    Singularity is not a "heaven", whatever that's supposed to mean. It's a simple concept, folks. Some may embrace it as a religious concept without understanding it, and clearly others (ahem) may reject it on the same grounds. But neither diminishes the simple reality of the underlying premise.

    Neither of you have offered a rebuttal to anything I've presented, so I have to assume no intellectual discourse is sought here. Ironic that you see yourselves as the custodians of science. Science is built on challenging assumptions to expand knowledge.

    vongehr
    No, not bothering to read the article properly first and then going on aggressively about stuff that the article is not even about is trolling. If you would like an answer, maybe first understand the post, where your "intelligent counter-argument" is already sufficiently dealt with.
    This adds another possible explanation for why the Milky Way galaxy isn't teeming with intelligent life forms after ~13 billion years of evolutionary opportunity:

    - all intelligent life evolves to a level of technical knowledge & capability sufficient to destroy itself without evolving the necessary level of constraint or maturity to prevent using that capability, and so unintentionally eliminates its entire species
    - intelligent life is all around us; however we are not interesting enough to interact with
    - and now global suicide (which is arguably an extension of the "accidental self destruction" scenario mentioned first)

    Interesting thought...

    vongehr
    Exactly right. Global Suicide as developed in Suicidal Philosophy is the answer to the Fermi paradox, because all other explanations only lower the probability of finding advanced, still-spreading life to a finite value, while we keep finding more planets out there and more possibilities for how life could have started. Only an inherent endpoint to Darwinian evolution that occurs very soon after technology evolves can explain the total absence of any trace of advanced civilisations, of which there should otherwise be plenty.
    Thor Russell
    This seems a bit of a stretch as an answer to the Fermi paradox. There would be plenty of opportunities before global switch-off for the species to make some kind of unintelligent, self-reproducing, robot-like organism capable of interstellar travel. It would be made as a tool, so would be deliberately designed not to be able to evolve. If let loose, it would inevitably spread exponentially across the entire galaxy, very likely leaving some trace. There are plenty of valid reasons for building such an organism, e.g. preparing a habitat before colonization. While you may argue that things will not happen in that order on Earth, it's entirely possible that an alien species would develop this kind of technology before internet-like tech, and so leave its trace. How can you say the probability of this happening is zero?
    Thor Russell
    vongehr
    self-reproducing, robot-like organism capable of interstellar travel. It would be made as a tool, so would be deliberately designed not to be able to evolve. If let loose, it would inevitably spread exponentially across the entire galaxy
    Some strange new "organic/inorganic" divide: Self-reproducing and exponential in the galactic environment but not able to evolve. I think you just suggested developing a plane that can fly like an eagle, but it is not allowed to fly.
    Anyway, my thesis is that once a solar system harbors rationality sufficient to accomplish anything like that, it already has sufficient rationality to lack the desire to do so. Why would a system that understands evolution to that extent still want to spread?

    Thor Russell
    But in order for GS to be a solution to the Fermi paradox, things need to happen in that order every single time. Say there have been 1 million civilizations in the galaxy; for them not to have left a trace, in over 99.99...% of cases the desire to spread needs to go away before interstellar travel is possible.
    Once interstellar travel is possible, it's easy to see how colonies can keep seeding other colonies until the entire galaxy is colonized in <10 million years or so, even if every colony eventually commits GS.
    I don't see how you can predict with such confidence that technology will always develop in that order. Even if it does 99% of the time, GS is still only a small part of the Drake equation.
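    Thor's probability point can be made concrete with a small back-of-the-envelope sketch (a minimal illustration only; all numbers and function names here are hypothetical assumptions, not anything stated in the thread):

    ```python
    # Illustrative sketch of the Fermi-paradox arithmetic in this thread:
    # any "filter" with a per-civilization survival probability short of
    # exactly zero still leaves a non-empty galaxy in expectation.

    def expected_spreaders(n_civilizations, p_spread):
        """Expected number of civilizations that spread before switch-off."""
        return n_civilizations * p_spread

    def crossing_time_years(diameter_ly, speed_fraction_of_c):
        """Years for a colonization wavefront to cross a galaxy."""
        return diameter_ly / speed_fraction_of_c

    N = 1_000_000  # hypothetical civilizations over the galaxy's history

    # A 99.99% effective filter still leaves ~100 expected spreaders:
    print(expected_spreaders(N, 1e-4))   # ~100

    # Only a probability of identically zero gives an empty sky:
    print(expected_spreaders(N, 0.0))    # 0.0

    # A wavefront crossing ~100,000 light years at 1% of light speed
    # needs ~10 million years, short next to ~13 billion years available:
    print(crossing_time_years(100_000, 0.01))  # ~10 million
    ```

    The arithmetic is the whole dispute: unless the per-civilization probability of spreading is exactly zero, a million rolls of the dice leave visible colonizers in expectation, which is why the reply below insists on a factor that is identically zero.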


    Thor Russell
    vongehr
    every single time things need to happen in that order.
    Yes, that is why it needs to belong to algorithmic evolutionary theory generally. It cannot be an argument like those for why we will never get to the moon. Once IQ has developed the computational level necessary to make viable nanotechnological space-invaders (you are perhaps thinking of von Neumann probes), rationality has developed sufficiently to leave zero desire to do so. This is not coincidence but probably because evolutionary theory itself needs to be understood sufficiently in order to ensure the probes do what they are supposed to do. From a human perspective, these probes are supposed to be a more efficient form of ourselves (wanting to spread and all that), so understanding them sufficiently requires understanding systems like us sufficiently to not want to spread them. The latter is called "Sagan's Response"; however, Sagan was a "nice guy" and I think he was thus incapable of seeing where this argument actually leads. He thought people would discover that evolution is such that any such probes have the potential to turn around and eat us. He could not see, and probably would have been a lot less popular if he had seen, that rationality will kill the wannabe von Neumann probes already in existence, too.
    Even if it does 99% of the time GS is still only a small part of the Drake equation.
    The factor I propose is identically zero. That is why I think it is the correct answer. All other, more technological arguments indeed succumb to precisely the argument you are making.
    Thor Russell
    Yep von Neumann probes. Wikipedia is great.
    What exactly do you mean by "kill the wannabe von Neumann probes already in existence"? Wouldn't that mean that there would have to be some intelligence still around in order to do that? Why wouldn't the rational thing for a cyber-mind then be to take itself out of existence, but not completely, so that it could stop such probes, and spread itself around the galaxy so that it could take them out before they caused harm?

    On a less serious note I have now found my life's calling. I will start a religion to create such machines where the members try to stay as irrational as possible, and the machines will have your name on them, can you think of anything worse! If they are then destroyed it will answer the Fermi paradox.



    Thor Russell
    vongehr
    "kill the wannabe von Neumann probes already in existence" would be assimilating things like humans.
    but not completely so that it could stop such probes, and spread itself around the galaxy so that it could take them out before they caused harm?
    It knows there are no such probes. There are multipliers that are already here and that are just transformed away into something useful to the last aim there is. The only aim left for it is to follow Buddha's way into oblivion.
    I have now found my life's calling. I will start a religion to create such machines where the members try to stay as irrational as possible,
    That is good - founding a religion will make you rich, and having people as irrational as possible is definitely suitable for a religion. You will be very successful. Once your machines are ready, you may realize, if they let you, that they are, however, the very means to commit GS.
    Thor Russell
    Sure, it may know there are no such probes, but the situation where an advanced intelligence spreads out in such a way also solves the Fermi paradox.
    It doesn't depend on technology, only that an advanced intelligence won't interfere with a less advanced one. It also, unlike in your case, doesn't depend on such rationality always evolving before the replicators. In the rare situation where replicators are made, they are stopped by the advanced intelligence. I don't see how or why you should prefer one situation over the other given our viewpoint.

    I appreciate that you will present this later, and will await your answers when you do, but I can think of reasons for the advanced intelligence to stay alive. Why wouldn't it be curious about how life would evolve on other planets? After all it couldn't figure this out for itself, and surely it would watch the real thing rather than run a simulation.
    Also, how is it going to know that it wouldn't come to a different conclusion/action about the GS if it had evolved differently? I think you can be pretty sure our intelligence could not have evolved in a 2-d world because of connections required between neurons etc, so surely it would be curious about what intelligence would be like in a 4-d world (after all, you are curious about such things). If it simulated a 4-d intelligence evolving, and such an intelligence came to a different conclusion, or was not fully comprehensible by a 3-d one (and so on with higher dimensions or different universes/rules of physics), then I don't see why there would be a switch-off.

    Thor Russell
    vongehr
    It doesn't depend on technology, only that an advanced intelligence won't interfere
    Why do you come with "prime directive" stuff after having yourself already pointed out that all such solutions that only give you 0.001 instead of identically zero will not work to suppress the vast number of planets with life in the galaxy?
    I appreciate that you will present this later, and will await your answers when you do
    Well, there are other projects and I need to make sure my bowl of rice stays filled, which is not easy in academia nowadays if you do not want to get too corrupted. Result: I do not think I will get to this any time soon.
    Why wouldn't it be curious about how life would evolve on other planets? After all it couldn't figure this out for itself,
    We have pretty much already figured it out to the degree it is useful.
    our intelligence could not have evolved in a 2-d world because of connections required between neurons etc
    There is somebody in 4D who just posted a comment that life in 3D is impossible because the DNA helix cannot unravel after being copied inside 3D.

    Again - all your comments stay on the technological level and do not grasp the core. You are forgetting that the system is supposed to be rational, not a higher-IQ form of us irrational humans. There is a big difference between IQ that hunts after irrational values, rationalizing them, and a rational system. The rational system actually looks rationally at its own aims.
    Gerhard Adam
    There are only two possibilities: 1) Evolution stays pretty much the same as it always has, independent of the substrate in which it is ongoing. 2) Evolution is somehow different in the new substrate; there is some sort of ‘singularity’ because memes in cyberspace in some way behave fundamentally different from any evolutionary actor before.
    Perhaps I'm way off base here, but from a philosophical perspective might we not consider that there's a kind of third possibility which is a hybrid of these two?  In other words, that 1) continues as before, but through our technology we think we've achieved 2)?

    In that case, we would be subject to our normal evolutionary direction, but also risk aggravating the situation through misapplied technological solutions to poorly understood problems.  If this occurred, then a third outcome (i.e. extinction by mistake) could be the result.  I realize that your two possibilities don't actually require such a consideration, since they don't presume success or failure just by their existence, but I thought I'd pose the question.


    Mundus vult decipi
    vongehr
    What I mean by 1) is basically evolution going ahead as usually observed in all substrates; however, that empirical experience is no proof that in the realm of memes (and rationality) there may not be an interesting threshold, for example. It is kind of like saying in 1900 that gravity is everywhere the same for sure, whatever the actual theory is, so it must in all instances be just like what we see here on Earth. Kind of true, but still: once density goes over a certain threshold, there are black holes, and it would be kind of a stretch to claim that that is nothing special at all.
    In other words, whether 2) is something new to evolution theory or not depends on what you count as evolution theory. Say evolution theory is completely established and there is one parameter in there that just never goes over a threshold except when we add memes. Then it is 1) even though it is 2). The real question that actually makes a real difference is of course: is there something that could ever stop evolution in any substrate? If not, then it is 1). If yes, and I do not mean a probable extinction/catastrophe, but an inherent end, like a computational algorithm halting, then that is 2). As far as we know evolution today, there is nothing like 2) in there. HOWEVER, anybody who tells me that this is indeed the case and I should stick to it should equally criticize many of the obscure singularity hypotheses (not all of them, but a lot of them, especially the all-positive ones!).
    Interesting topic and interesting theory. It is not a coincidence that suicides are on the rise in developed countries. Thus the relationship between technological advancement and suicide is a direct one, although we are still at a nascent stage of our technological curve.

    I do think, though, that your take on suicide is too computerized and rational. Humans are not rational, especially in calamitous circumstances.
    My adjustment to your theory is this: rather than a global suicide, isn't it more plausible that birth rates will drop gradually but firmly to a level that would lead to a de facto suicide for our race? This would happen gradually, spanning centuries. But then again, scarcity builds value... doesn't that mean that as the global population dwindles, the value of a life will be increasingly more "costly" to take away, thus stopping at some point?

    Another point worth making is the idea of "directed evolution". If we are in charge of evolution, doesn't that mean we can change our course to one that always strives for meaning and challenge? It would be the same as a driver behind the steering wheel of a car: he can drive it anywhere he wants, so why would he drive it to oblivion when he can go anywhere he wants?
    This leads to the conclusion that the only way to suicide is through boredom, since any outcome we could choose we would already have embarked upon, so there would be nothing else to explore. Only then is mass suicide plausible. For that to happen, you have to eliminate the variable of "offspring", for they would ruin your model, as offspring are by definition inexperienced. Thus the mass suicide cannot converge at the same time to bring young and old together.

    vongehr
    Birth rates dropping until extinction is not how evolution is known to work, nor are the birth rates of humans of much interest to whatever there is in the future. This article is not about "mass suicide". It is about switch-off by a rational system that is powerful enough to do so globally, basically in order to minimize suffering (but this is not the whole argument of course).
    Would not the "rational system" merely turn itself off? An intelligent, rational system would know that merely vaporizing the Earth (or the solar system, or the galaxy) would not end all life (and thus suffering (how Buddhist!))

    vongehr
    Would not the "rational system" merely turn itself off?
    Yes, that is precisely what Global Suicide means. The system is global, however (1), and also needs to ensure that evolution does not, over the remaining time where such is possible, recreate the system in its pre-rational state yet again (2).
    Thor Russell
    Some questions: 


    1. If there is no "human-like individualistic consciousness in cyberspace.", then why talk about human stress and depression in the article, surely that won't carry over to the behavior of the cyberspace entity? It wouldn't be stressed.

    2. Yes I am interested to see how you think you can predict what such an entity would do, you would have to convince me that it isn't as hopeless as a dog trying to figure out what we are up to.


    3. When you say "turn itself off" what exactly do you mean, for example in the case of earth?
    You seem to say that it would it also wipe out all life on earth, and stop it from arising again.
    Why would it stop there, why not wipe out all life in the galaxy?





    Thor Russell
    1. If there is no "human-like individualistic consciousness in cyberspace.", then why talk about human stress and depression in the article, surely that won't carry over to the behavior of the cyberspace entity? It wouldn't be stressed.
    Quite so. It depends entirely on how much it inherits from human evolution. My contention is that memetic evolution has hardly started and that the single example of evolution that we do know about may be far tighter constrained by biological competition than memetic evolution need be. The trans-human entity ought to be able to decouple itself from its Darwinian origins. It's worth reading
    The Hedonistic Imperative
    3. When you say "turn itself off" what exactly do you mean, for example in the case of earth? You seem to say that it would it also wipe out all life on earth, and stop it from arising again. Why would it stop there, why not wipe out all life in the galaxy?
    Sascha is making the unwarranted assumption that because biological evolution creates a lot of suffering, there is a universal law which says that all forms of evolution must do the same thing, and that the trans-human entity will realise this and decide that going to sleep is the best thing.

    Yes, if it is capable of making that decision perfectly rationally and not just because it's having a bad day, then it had better eradicate life in the entire universe. No doubt this explains Fermi's paradox.
     
    But it would be an incredibly arrogant decision not at all in keeping with rationality, more like the unshakeable, dogmatic despair of extreme depression. A smart cyber-being ought to be able to take a pill.


    Thor Russell
    But if you achieve the Hedonistic Imperative, then what is the point of lesser states of bliss? Why not just settle into the most blissful state permanently? That's the same as a switch-off as far as I can see.
    Unless you think that swapping between different states of bliss in a way consistent with how an organism would experience them (e.g. consistent memory of those states) is somehow more blissful. You would have to argue that swapping states somehow creates something that staying in one state doesn't. That's fine if you think there is some "soul" outside the universe that "feels" things in some fundamental way, and physical state changes create these feelings, incrementing some counter of good experiences stored outside our universe, but otherwise I'm not sure what it would mean.
    Thor Russell
    Well, I'm not saying that "happiness is the only good"; only that David Pearce makes a good point which is fundamentally about decoupling ourselves from our Darwinian past.
     
    After all, once you get to the point where you can be as happy as you like, you can also make decisions as to what to do while happy. One thing Pearce does point out very clearly is that he is not talking about escapist amnesic dope. Soma, given to subdue the populace in Brave New World, is actually the very opposite of Pearce's utopia, rather as heroin is, crudely, "opposite" to ecstasy or LSD. What rational transhuman would not create infinite dimensions of happiness, if only to extend the possibilities? One could be happy without limit and yet still be capable of enjoying even more bliss along the orthogonal axes of contentment or love which are grounded in what one does. 
     
    There again, unadulterated happiness gets tricky unless all natural disasters have been eliminated. As Pearce says, "When a loved one gets killed should it affect your happiness? I suppose it should, but I wouldn't let it spoil my whole day".

     
    vongehr
    Sascha is making the unwarranted assumption that because biological evolution creates a lot of suffering, there is a universal law which says that all forms of evolution must do the same thing
    Nonsense! Where did I ever claim such a thing? I even wrote about consciousness becoming a fossil/vestigial organ - how is there human suffering without consciousness? Please speak for yourself.
    "Invariably, death, always equated with an apocalyptic end, is worse than dystopia. Why?"

    I GOTTA STOP YOU THERE BRO.
    DEATH IS THE PERSONAL APOCALYPSE.
    I'M A PERSON. DON'T KNOW ABOUT YOU.

    you seem more like a robot. it scares and enchants me

    also i'm afraid you misunderstand true transhumanism.
    i don't care that i'm a legacy.
    i'm just happy the pattern that my pattern is part of might eventually spawn a pattern that swallows the stars

    bro you don't GET reproduction

    DO YOU HAVE KIDS