    So What’s The Deal With The Singularity Again?
    By Kees Pieters | June 22nd 2011 03:50 PM | 36 comments
    About Kees

    With a Ph.D. from the University for Humanistics on complexity, complex systems and technology and society, Kees holds degrees in electrotechnics...


    The Dutch translation of Ray Kurzweil's 2005 bestseller “The Singularity is Near: When Humans Transcend Biology” (the title says it all!) is about to be released, so this post on 'practical PAC' will summarise my objections to the idea. The Singularity was a case study in my research, to see if I could analyse its claims with PAC. This post will probably not reveal any new information for people who are into this particular transhumanist ideal (enthusiasts and critics alike), but it was a good exercise for testing the methodology of PAC.

    Methodological Stuff:
    1. Introduction
    2. Patterns
    3. Patterns, Objectivity and Truth
    4. Patterns and Processes
    5. Complexity and Randomness
    6. Complexity and Postmodernism
    7. Complexity and Rationality
    8. Complexity and Professionalism

    The Pattern Library:

    1. A Pattern of Difference
    2. A Pattern of Feedback
    3. The Hourglass Pattern
    4. The Pattern of Contextual Diminution

    Practical PAC:

    1. Performing Research on Complex Themes

    A quick recap: ‘Singularitarians’ (with a capital ‘S’) are people who believe that in a decade or three, technological intelligence will supersede biological intelligence because of the “Law of Accelerating Returns”. This ‘law’ claims that technological progress is increasing exponentially, with the result that at a certain point in time the increase blows off the charts and becomes infinite. This point has been argued with meticulous zeal by Ray Kurzweil, the well-known American entrepreneur and futurologist. One example graph from his 2005 book “The Singularity is Near”, which was, among other things, a New York Times bestseller, demonstrates his point (Kurzweil 2005):

    [Figures: ‘Countdown to Singularity’, shown on linear axes and as a log-log plot]

    The first graph shows the ‘Countdown to Singularity’ in its most dramatic form, while the other shows the same data in a log-log plot. Either way, the ‘time to the next event’ comes quicker and quicker, and will soon be measured in minutes and seconds rather than years or decades. These events correspond to ‘canonical milestones’ in the evolution of human intelligence, and since the more recent events correspond to technological innovations, the claim of Kurzweil (and of other Singularitarians besides him) is that the evolution of technological ‘intelligence’ will overtake biological intelligence. A new ‘epoch’ of human existence will arrive in which we transcend our current limitations and become hyper-intelligent beings. Kurzweil presents a plethora of similar graphs in his book to support this claim.
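    The mathematics behind the 'Countdown' is worth making explicit: if each interval to the next milestone is a fixed fraction of the previous one, the events really do pile up at a finite point in time. A minimal sketch (the starting interval and the halving ratio are my illustrative assumptions, not Kurzweil's fitted values):

```python
# If each interval to the next "milestone" is a fixed fraction r of the
# previous one, the events accumulate at a finite point in time: the
# geometric series t0 * (1 + r + r^2 + ...) converges to t0 / (1 - r).
def accumulation_point(t0, r, n_events=1000):
    """Time elapsed after n_events with geometrically shrinking intervals."""
    elapsed, interval = 0.0, t0
    for _ in range(n_events):
        elapsed += interval
        interval *= r
    return elapsed

t0, r = 10.0, 0.5        # first interval 10 "years", each next one halved
limit = t0 / (1 - r)     # analytic accumulation point: 20 "years"
print(accumulation_point(t0, r), limit)
```

    The 'Singularity' in this picture is simply the accumulation point of a geometric series, so the whole argument hinges on the intervals actually continuing to shrink geometrically.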

    Some Problems

    There are a few problems that I have with this line of reasoning. First, when presenting patterns like these exponential curves, one has to realise that they are all existential proofs, which cannot be scientifically falsified. However many of these curves are presented, there is a chance that they all boil down to one underlying process that is driving the acceleration. To make my point, I will present one of my own below.

    [Figure: crude death rate throughout history, projected to 2050]

    This graph shows the number of worldwide deaths, or ‘crude death rate’, throughout history up to the expected death rate in 2050. As one can see, the number of deaths is increasing exponentially, so if we follow Kurzweil’s argumentation, we are heading for mass extinction somewhere beyond 2050.

    Of course we all know that this isn’t true, for there is another process that counters the crude death rate, namely population growth.

    It is clear with graphs like these that one often needs a reference against which to compare the curve; the curve by itself does not necessarily say that much.

    In terms of PAC: The pattern is contextualised with another one.
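    This contextualisation can be made concrete with a toy calculation: divide the 'exploding' curve by the reference curve, and the explosion disappears. The numbers below are made-up illustrative values, not real demographic data:

```python
# Raw worldwide deaths per year grow roughly with population, so the curve
# looks "exponential" on its own.  Dividing by population (the contextualising
# pattern) shows the per-capita rate is not exploding at all.
populations = [1e9, 2e9, 4e9, 8e9]   # hypothetical population per era
deaths      = [1e7, 2e7, 4e7, 8e7]   # raw deaths per year, also rising

crude_rates = [d / p for d, p in zip(deaths, populations)]
print(crude_rates)   # constant: the "explosion" was in the reference, not the rate
```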

    I have plotted the population growth over the last 10,000 years as a log-log plot in the graph on the right-hand side, and projected Kurzweil’s ‘events’ over the same time-scale on top of it to highlight the trend (the red line). Population size and ‘events’ are different quantities, so a direct comparison is not really possible. However, it is interesting to see that, roughly speaking, the trend in the events corresponds with that of the growth of the population.

    [Figure: trends in population growth and Kurzweil’s events (red line)]


    This alternative explanation would suggest that the Singularity will not occur, as the exponential growth will slow down when the world population stabilises beyond 2050. In my previous posts I already used the same argumentation for why we need multiple perspectives (different contexts). There will probably be a lag in this development, as innovations are likely to follow the growth of certain groups in the population, such as scientists and inventors. Schooling, and increased access for groups of people who currently do not have the means to participate in innovation, will probably delay this ‘cooling down’.

    Another problem I have with Kurzweil’s graph is the quantification of events. If I call the bouncing of an object an ‘event’ and plot a log-log graph with the number of bounces as a function of the mass of the object, then a series of experiments with objects ranging from heavy rocks to bouncing balls will undoubtedly show an exponential curve. Bouncing balls are made to…, well, bounce!

    The same applies to Kurzweil’s events. If one starts with events of cosmology, moves to biological events, then societal ones and lastly technological innovations, then one is bound to get an increase. If I were to include the ‘canonical milestones’ of an insect, say every munch on a nice juicy leaf, the curve would already have blown off the charts millennia ago. This is a fundamental problem of isomorphism between a pattern and the phenomenon that the pattern tries to model. The isomorphism between ‘canonical milestones’ and ‘intelligence’ is highly uncertain. Therefore the correspondence between the number of transistors on a square millimeter of silicon and intelligence is just as circumstantial as, say, the food intake of an insect. These kinds of scaling tricks seem awfully close to being a form of circular reasoning.
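    The granularity problem can be shown with toy numbers (mine, not Kurzweil's): the milestone count depends entirely on how finely you are willing to define an 'event'.

```python
# How many "events" you count depends on what you call an event.
years = 1000
human_milestones_per_year = 0.01      # one 'canonical' milestone per century
insect_meals_per_year     = 3 * 365   # an insect munching a few times a day

print(years * human_milestones_per_year)   # a handful of milestones
print(years * insect_meals_per_year)       # over a million 'events'
```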

    An issue related to this is the way metaphors are used. ‘Singularity’ is a metaphor introduced by Vernor Vinge in the early Eighties to describe the point in time where the exponential curve becomes infinite. The metaphor is borrowed from the astrophysics of black holes. In itself there’s nothing wrong with this, but on Kurzweil’s website some of the discussions about the Singularity have gone past the metaphor and implicitly suggest an equivalence. It is a bit like saying “the sun is like a burning furnace” and then starting to wonder how to get it installed in your kitchen.

    My last problem with Kurzweil’s graphs deals with the correspondence between the canonical milestones and ‘intelligence’. We are currently facing the problem that we really do not know what exactly ‘intelligence’ is. This word falls in the same category as ‘life’, ‘consciousness’ or ‘empowerment’, which all have a strong intuitive ring to them, but are very hard to define in exact terms. Such terms are often package deals, embodied in a network of meanings and associations.

    As an example, it might be more correct to consider the ‘intelligence’ of an agent as being relative to the complexity of its environment. Rationality in (software) agents, for instance, is sometimes defined along the lines of the agents’ ability to achieve certain goals in their environment. Exaggerating the consequences, this means that the more complex the environment, the more stupid the agents become!
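    A toy simulation (my own construction, not a standard definition from the agent literature) makes the point: score an agent by how often a fixed strategy achieves its goal, and the same agent 'looks less intelligent' as its environment gains more possible states.

```python
import random

# Goal: guess the environment's current state.  The agent's fixed strategy
# is to always guess state 0.  Its measured "success rate" drops as the
# number of possible environment states grows.
def success_rate(n_states, trials=10_000, seed=0):
    """Fraction of trials in which the fixed guess matches the environment."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randrange(n_states) == 0)
    return hits / trials

for n in (2, 10, 100):
    print(n, success_rate(n))   # the rate falls as the environment complexifies
```

    The agent's programme never changes; only the environment does, which is exactly what makes 'intelligence' a relational rather than an intrinsic property here.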

    The problem with these terms is that they are biologically, socially and ethically contextualised. Scientists can use these words for their own endeavors, provided that they are strictly defined in the contexts of their research goals. Any claim that technological innovations correspond with biological, or human intelligence can never be substantiated, because these concepts are socially constructed and embodied.

    This brings me to my major concern, which has little to do with Kurzweil’s singularity.

    A Slippery Slope

    I am an old-school techie, and I have always been intimately aware of the importance of careful analysis and modest claims. This has little to do with the “scientist’s pessimism” that Kurzweil accuses the scientific community of, because even these modest claims, once proven, may have significant consequences. A mathematical proof that a certain model behaves as expected is always an achievement. In my professional career in high-power robotics and industrial production machines, I have also witnessed that this careful attitude contributes significantly to the trust that people put in techno-science and technology.

    In the current media-age, more and more pop-scientists are presenting themselves to the public through popular scientific books and media performances. On the hunt for lucrative grants, they commit themselves to projects that promise to solve AKPU: All Known Problems in the Universe. Many of these good people (and Kurzweil is certainly one of them) take care to stress that they are engaging in a more or less speculative vision of the future, or that they are indulging in science fiction. This is important, for speculation envisions new paths of scientific progress and tickles our imagination and enthusiasm. Imagination drives novelty and marks paths of new enterprise.

    However, on many occasions it becomes very vague in which context scientists make their claims. With the current state of our technology, Kurzweil’s claims about technological intelligence as a reference to human or biological intelligence cannot be justified scientifically, because they cannot be validated. Now, Kurzweil is not claiming scientific correctness, but the raving praise in his book does show that the media and the public are at least partially assessing his authority as a scientist. This authority is amplified by the many physicists, mathematicians and other scientists who are engaging in this transhumanist project of the future.

    More and more often, scientists are making claims about technological developments that target human existence. Whereas Singularitarian claims are still, perhaps paradoxically, rather harmless (they will be proven or disproven in the near future), some other transhumanist groups claim to cure diseases or undesired social behavior, or conversely promise to ‘empower’ people, based on current (gene or nano-) technology that cannot yet fulfill these promises, if there is any correspondence at all between the technology they base their claims on and what they target. In their communications, the role they assume (visionary, scientist or dreamer) is extremely vague, but implicitly they invoke their status as scientists to stress their authority on the matter. With this, they manipulate the public’s trust in ways that worry me. Trust, as an old Dutch saying goes, comes on foot and leaves on horseback. And especially when human health and well-being are concerned, betrayal of this trust can cause enormous emotional damage on the path that the fleeing rider leaves behind.

    Comments

    vongehr
    Any claim that technological innovations correspond with biological, or human intelligence can never be substantiated
    I am surprised that somebody who stresses the importance of thinking in abstract patterns would say something like this, as I would expect that perspective to be very receptive for ideas like algorithmic evolution.
    I wonder what you as an "old-school techie" into "modest claims" think about the idea that there is either nothing substantially new (evolution of patterns as usual) or if there is, it must obviously be an endpoint! Modest and reasonably careful or not?
    keesp
    Hi Sascha,

    Actually you are quite right, my bad! I should have said that we can currently say very little about this association, as we are still largely grappling in the dark about what intelligence is. This is/was a big issue in the AI community. Traditionally, researchers/thinkers on (artificial) intelligence would assume that intelligence is located somewhere in our brain (the traditional atomistic view of intelligence). So the hope was that we could find this thingie, relocate it (or its principles) to a computer, and lo and behold, we have artificial intelligence. To some extent this has proven to be a useful idea, for instance when looking at reasoners and expert systems. But these formal machines immediately raised another problem: they just dumbly carry out their algorithms, and therefore are not intelligent at all! The intelligence is basically 'situated' in the designer/programmer, who then relocates it to a machine that just does what it is instructed to do.
    The critics therefore tended to dismiss this atomistic idea and tried to develop alternatives. A pattern-approach balances in-between the antagonistic views, by acknowledging that something is happening in the brain (e.g. pattern-processor), but that this only becomes intelligence if it is in contact with an environment (contextualisation). With complexity, intelligence then becomes the ability to pursue/optimise certain goals in a contingent environment, which makes the concept of 'intelligence' relational. So intelligent behaviour (according to some empirically observable definition of such behaviour) in one setting may prove very stupid in another. The intelligence therefore lies in the adaptability to circumstances, and not in the behaviour in itself. With this, I think intelligence is more closely related to the ability to reduce uncertainty than it is to optimising behaviour.

    Now, I agree that in my posts I make all these outrageous claims for the fun of it, and then I suddenly start calling myself an 'old-school techie' who is careful, blah-di-blah. The thing is that when I take off my complexity-jacket and get back to my normal work of developing machines/robots/software etc., I do realise that I take up a different set of responsibilities (with respect to clients, operators and so on). Therefore, as I posted earlier, when I take up the role of a craftsman, my responsibilities are such that I HAVE TO BE more modest than when I'm fooling around with my particular epistemological system of PAC. It's a matter of playing a different game!
    This fluidity in attitudes is consistent with my ideas, and the critique that you refer to was just that, that claims are currently often made on technological / scientific grounds on human 'improvements' that are morally and ethically questionable, far from 'objective', and based on techno-science that has not even reached prototype stage yet.
    I love enthusiastic and outrageous ideas, and I think all good science starts with this. But when we start tinkering with our life-world or with humans, I think, in the end, we have to be very, very careful. I also think that this manner of hyping new science is detrimental in the long term, because people lose trust in science when scientists make claims and don't deliver. This happened with AI in the Eighties (where are the domestic Jetson-like robots we were promised????) and is currently happening with many promises related to bio-tech.
    Keesp
    Bonny Bonobo alias Brat
    I love enthusiastic and outrageous ideas, and I think all good science starts with this.
    I wish that more elites/scientists would agree with you about this. When I read all these articles about AI I feel like screaming with laughter; after 30 years of computer programming, analysis and project management, I find it hysterical to think that AI will ever be anything other than an extension of the limitations of the person programming the computer.



    My article about researchers identifying a potential blue green algae cause & L-Serine treatment for Lou Gehrig's ALS, MND, Parkinsons & Alzheimers is at http://www.science20.com/forums/medicine
    Hank
    I've said it in the past and his acolytes disagree but Kurzweil is firmly trapped in the mid 20th century.  He grew up when the digital standard was invented and is consumed with it.  If you look at science fiction of the 1960s and early 70s - heck, comic books - that is the AI he thinks we are moving toward.    Brain +  MAGIC BLACK BOX = Singularity.
    Could you not argue that is just a lack of imagination on the part of the programmer? There were times not so long ago when Helen's argument could have been applied to search algorithms spanning billions of web pages, but google/ask/yahoo etc are doing okay!

    Hank
    Yes and no. There is no inherently different method in search engines today versus 15 years ago; Google found a way to do it more efficiently, but the process is the same. Better search did not, for example, make anyone smarter. Continuing to make processors that sift data faster and faster leads you to an IBM computer that can answer questions on Jeopardy as fast as a human, but that is nowhere near even entry-level 'artificial' intelligence, much less a singularity or evolutionary leap.
    Gerhard Adam
    A pattern-approach balances in-between the antagonistic views, by acknowledging that something is happening in the brain (e.g. pattern-processor), but that this only becomes intelligence if it is in contact with an environment (contextualisation).
    I also think we tend to be distracted by the brain and trying to emulate its functions, instead of recognizing that we also observe "intelligent" behavior in organisms that don't even have a nervous system.  Certainly we aren't talking about bacteria engaging in debates of quantum mechanics, but nevertheless they are clearly exhibiting behaviors that are beyond simple algorithmic responses.
    Mundus vult decipi
    vongehr
    Nice non-answer you gave to my question. You should consider being a politician.
    keesp
    Ok, fair enough!

    There are two things you need to consider when you refer to my liking for abstract patterns. First, I do value their importance as robust forms in a certain environment, and so they represent a certain organisation. But at the same time, the kind of complexity thinking I refer to puts limits on these forms:

    * are they robust enough?
    * do they actually represent what we think they represent (important in this discussion)
    * etc,

    I think your mentioning Nick Bostrom in your post on the Singularity is a good example. He is still on my reading list, so I currently don't have a deep understanding of his ideas, but from what I gather he basically defends the thesis that our Universe acts like a giant information-processing computer, and therefore hosts a sort of universal 'intelligence'. Now, I have no problem seeing that our universe may host equivalents of von Neumann machines that have formed spontaneously (I think this is quite an acceptable idea), but the question then immediately arises: what kind of programmes are they running? How have these programmes formed, and what kind of mechanisms are at work to select amongst them (evolutionary algorithms? something else? Intelligent Design...ieeeuwww...)?

    So this was my point in this discussion...we are interpreting our life-world based on data that allows more than one explanation. No problem there...it's the tech-form of philosophy or hermeneutics (SF always is). But when you start believing that these interpretations boil down to one inevitable development (and we hard scientists have a natural tendency to do so, because we often forget that our 'objectivity' is embedded in philosophical belief systems) and start saying that we can 'for a fact' 'improve' human beings...then we are starting to play a dangerous game.

    So, in order to answer your question (I hope): Yes I LOVE fooling around with abstract patterns, but complexity thinking itself warns us not to give too much meaning to them. They, like all other things, are always bounded.

    Second, as some of the discussions in other posts reveal, I place my bets on a transhumanist teleology (for whatever it's worth...we mustn't take ourselves too seriously in these matters) in networked 'intelligent' societies, which we currently are not, but I see some developments that give hope. These 'higher' forms of consciousness will also be very constrained in space and time. The 'evolution' (or teleology, depending on your taste) lies in the interactions between society (or societies) and the environment, which are complexifying. Now complexity, like 'life', 'consciousness' and 'intelligence', is a problematic term, so I can only use a more common-sense intuition that this is happening (and I think many will agree, but it would be nice to read an astute thinker who convincingly manages to tell us that our world is getting simpler...Fukuyama maybe?). So evolution is continuing, and I do think there are good reasons to believe that 'intelligence' and 'consciousness' are related to this, but all these visions must be put in the same category as Kurzweil's: informed visions of our future with some scientific backing, but still highly uncertain.

    Hope this answers your question ~)
    Keesp
    keesp
    I think the following talk on TED captures the essence of a good direction to proceed when looking for 'intelligence':

    http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html

    I like Hod Lipson's comment that we should trust machines to make their own self-models, which of course means that we no longer know exactly what these robots do.
    Keesp
    Gerhard Adam
    Doesn't that defeat the purpose of the effort?  After all, why assume that "intelligence" would necessarily evolve and why should it, if the robots are capable of surviving adequately without any greater quantities of it?

    The problem as proposed by the "singularity" is that evolution is inadequate, which is why Kurzweil and his "believers" want to accelerate the process in a direction of their own choosing. Of course, that changes evolution into engineering, but it doesn't seem to bother the true believers that they're pursuing an untenable path.
    Mundus vult decipi
    Isn't there also the possibility that the robots that build robots might not like humans? That's a problem if AI actually can get itself going.

    Gerhard Adam
    Yes, that would be a problem, as well as the fact that the more "intelligent" such an entity becomes, the less useful it is as a robot.  What about deception (i.e. the ability to lie)?  Would you have to coerce cooperation?  These are all pertinent questions if the robot is supposed to truly be intelligent.
    Mundus vult decipi
    keesp
    Gerhard, Frank,

    These are all interesting and valid points you raise. I think 'intelligence' (in some meaning of the word) actually is exactly what evolution is about. In a contingent environment, forms evolve and discover tricks to increase their stability, reproductive qualities, and so on. I have no problem seeing these tricks as being 'intelligent' in some way. Another form of 'intelligence' is having a toolbox of these tricks and knowing when to apply which. But 'choice' is, I think, an especially difficult concept in this matter. You can take the rational approach (a set of alternatives, a certain ranking mechanism, and hey presto, the best choice follows); this is the analytical answer to choice. On the other end (unorganised complexity), choice is a probability in which randomness makes the decision. The scruffy 'complexity-informed' alternative might be a mechanism, such as 'edge of chaos' transitory states, in which the process of ranking the alternatives is 'chaotified'. I think these directions may provide interesting possibilities for artificial, autonomous intelligence (that can be machine-programmed).
    Keesp
    Our brains are made of matter and operate according to physical laws. Surely, then, in principle you could describe a set of molecules in a brain and simulate it exactly?

    More likely, you could model the constituent parts of a brain (the neurons) accurately enough, much more simply, to result in 'conscious' behaviour with far less computational effort. You don't need to model all the atoms in a ball to get a tennis simulation, for example.

    I think what's missing is the knowledge of how the neurons fit together and interact to result in consciousness and intelligence, rather than an ability to simulate it. That said, any brain simulated now would surely be rather slow compared to our dedicated-matter brains, where every atom is used to perform the relevant calculations.

    Gerhard Adam
    I think what's missing is the knowledge of how the neurons fit together and interact to result in consciousness and intelligence, rather than an ability to simulate it.
    You're missing the point in assuming that neurons are even necessary to gain basic "intelligent" behavior.  As I mentioned previously, this occurs in organisms that don't even have a nervous system.
    Mundus vult decipi
    Sorry Gerhard, but the point is not missed. Sometimes I even wonder why a chemical fire couldn't be consciousness at the most basic level!

    The neuron/human brain example is one that people generally find quite hard to argue against, as we know that we ourselves are conscious, so I used it here.

    Gerhard Adam
    Sometimes I even wonder why a chemical fire couldn't be consciousness at the most basic level!
    Well, my own pet explanation is that life is based on the ability to "exploit" and "direct" processes for some objective (i.e. oxidation for energy, etc.), whereas a fire (or non-life) is essentially an undirected process that aimlessly consumes its fuel (or continues its reaction) until it simply runs out.


    Mundus vult decipi
    I agree with that. The problem is, you can describe bacteria in a test tube of sugar solution as something that aimlessly consumes its fuel until it runs out (or is poisoned). The bacteria have no more end-game than a fire: fire flares up when fuel is added and dies down as it is restricted. It moves to follow fuel, reacting to its environment. Fires in an enclosed environment poison themselves with their excretion, carbon dioxide. Fire reproduces and spreads given the right conditions. All these properties are almost an exact mirror of the bacteria in your previous comments, just a lot more basic and involving fewer chemicals.

    Thing is, when I start thinking that maybe something as simple as fire, just a single chemical reaction at its simplest, could be the most basic form of life, I usually realise it's just craziness. But hey, if the shoe fits...

    Gerhard Adam
    Not true.  Bacteria have quorum sensing which may enable different behaviors based on the population size.  They can also have limited cannibalism in the event of shortages, as well as even having strategies like "hibernation" where they may suspend themselves until conditions become more suitable. 

    There may be a variety of strategies (a term you can't apply to fire) that, even if limited, represent a means by which novel solutions may be applied to problems. This is markedly different from inanimate systems. In addition, depending on the conditions, bacteria may exchange genetic material to potentially acquire new "options" that might be more beneficial.
    ...you can describe bacteria in a test tube of sugar solution as something that aimlessly consumes its fuel until it runs out (or is poisoned).
    Without considering the actual biology of the organism, this simple description could also apply to humans, which would negate the point about "consciousness" being a relevant property.  I seriously doubt that humans confined and reproducing in a similarly constrained environment would fare any better.
    Mundus vult decipi
    I'd argue that those examples you have given are simply refinements, a set of properties which can arise due to greater complexity, rather than some fundamental difference. I'll phrase that in a question:

    Which of those properties is required to make the bacteria alive and the fire not? Is it the quorum sensing, the cannibalism, the hibernation? Would a bacterium which didn't do one of those things no longer be alive? I'd imagine the answer would be 'no'. Probably you'd go back to genetics as a requirement? Passing on information to the next generation? Death?

    keesp
    I think the metaphor of the tennis game is an interesting one. When you model the atoms of a ball, you don't capture the rules of the tennis game. The same applies to 'intelligence', I think: there needs to be an understanding somewhere of the environment you are in (the field and the rules of the game) in order to be intelligent. If we allow an elephant to enter the court, we have neurones (a lot of them) and all the physical attributes we need to have 'intelligence'...but it most likely will not result in an intelligent game of tennis...
    Keesp
    Bonny Bonobo alias Brat
    If we allow an elephant to enter the court, we have neurones (a lot of them) and all the physical attributes we need to have 'intelligence'...but it most likely will not result in an intelligent game of tennis...
    Actually, if you think about it, there's nothing very intelligent about a game of tennis. Players repeatedly hitting a ball back and forth across a net within a set of delimiting lines is hardly intelligent. What's even more amazing is that thousands of people will pay money to watch it. Personally I would rather pay money to see what happened if an elephant was allowed on the court while a game of mixed doubles was in progress, say between the Williams sisters, Federer and Nadal. Now that would be interesting!

    My article about researchers identifying a potential blue green algae cause & L-Serine treatment for Lou Gehrig's ALS, MND, Parkinsons & Alzheimers is at http://www.science20.com/forums/medicine
    Could you not have a general intelligence that just bases itself on the laws of logic? Such an intelligence could be the most basic and flexible type: a generic problem-solver and pattern-finder.

    Although I suppose to move towards any solution from any problem, you need a specific set of laws to follow to go from one step to the next. Historically in biology, I imagine one could argue such a set of laws has been the environment and how it reacts, which I guess is the thrust of your comments to this article.

    keesp
    @Helen,

    I totally agree with you...the elephant would be a hoot! Of course, the elephant -being quite intelligent-  could LEARN to play tennis to some extent, but then the process of learning becomes part of the definition of intelligence.

    @AG
    Well...this is what I mentioned earlier. We DO have reasoners, expert systems, automatons and other forms of machine learning which do very interesting things. Optimisation is definitely one aspect of intelligence. My point is mainly: does this FULLY capture intelligence, or are we missing things? I think we are missing things, and this is why we are still far from making a machine-like variant of human intelligence. This has to be the goal if we analyse Kurzweil's claims: can we achieve/supersede this in thirty or so years?
    Keesp
    Gerhard Adam
    What exactly do we mean by human intelligence?  People seem to presume that it is direct human intelligence that has produced our current society and manner of life, but if that isn't a valid assumption, then what do we think "human intelligence" will produce?
    Mundus vult decipi
    keesp
    Yup... well, I don't think we need to repeat the endless discussions in the AI community, John Searle, Paul/Patricia Churchland, Daniel Dennett and all the others twenty years ago, to know that we're still far from putting any human-like intelligence (whatever the definition) in a machine/computer in the coming decades. Also, the chances of enhancing our current intelligence artificially are likely to be problematic... the pharmaceutical attempts (caffeine, coke, crack, LSD, etc.) seem to come with side effects...
    Keesp
    Gerhard Adam
    I understand, and I wasn't trying to repeat the definition-specific kind of discussion.  In particular I simply wanted to point out that "human intelligence" reflects a native from Papua New Guinea as readily as it identifies with Einstein.  Unfortunately, it is only the latter that most people tend to fantasize about when they consider such development.
    Mundus vult decipi
    keesp
    I totally agree with you. I once heard it said as follows: Stone Age man was biologically exactly the same as we are. If you took a baby from the Stone Age and let him/her grow up in our times, s/he would be totally similar to us. But the cognitive differences in 12,000 years (or so) are irreconcilable!
    Keesp
    Gerhard Adam
    But the cognitive differences in 12,000 years (or so) are irreconcilable!
    Why would you say that?  Other than what they have been exposed to (i.e. taught), why would you think there are any differences?
    Mundus vult decipi
    keesp
    Well... it's the exposure that makes us so different from our Stone Age forebears, not any biological differences.
    Keesp
    Gerhard Adam
    I apologize for not being as clear as I should have been in this.  My point is that there is no difference between a modern human's intellect and one from 12,000 years ago.  Whatever they are exposed to as children and while growing up is how they would adapt and cope as adults.  Therefore, if there are no cognitive differences between them in any material way, then the only variable is the society/culture in which they are reared.

    Even in our modern society, we are not shaped by our intellect specifically, but rather by our incredible division of labor, wherein every individual participates by specializing in a tiny part of the social "organism".  As a result, the differences in our culture over the past 12,000 years are a direct result of that social organization rather than of any intrinsic differences in human capability.

    So when there is a discussion about human intellectual "evolution" in the future, it is precisely the wrong discussion to be having, because that is no longer subject to selection.  Our social organization has ensured that regardless of the intellectual capabilities of the individual, they will essentially be assured survival by belonging to the social group.  We only need one Newton, or one Einstein, to provide a benefit to the entire group, so we are not dependent on "evolving" intelligence as a criterion for future human survival.  So when transhumanists talk about transcending or accelerating human intellectual evolution, they are talking about the wrong thing.  It won't happen, because it is no longer a factor in our individual survival.  This is also why such an idea is doomed to fail: it neglects the collective component of our social success, which determines what our future options will be.  In fact, it relies on the assumption of individual success (intellectually), as if such a thing makes a difference for the species as a whole.
    Mundus vult decipi
    keesp
    @Gerhard,

    I totally agree with you. Neuroscientist Merlin Donald defends the thesis that our consciousness 'comes to us' through our interactions with our environment. I like this idea, especially in its most down-to-earth form, when we 'are made conscious' (this may be a Dutchism, by the way) of things that we did not realise before. For this reason, I think that collaboration is a more powerful mechanism than competition (but we should never make this an either-or discussion), because collaboration allows us to create networks where our little niches can connect with others.
    Extrapolating this idea, I think a more likely candidate for a transhumanist evolution will favour an ever-increasing network where individual excellence becomes embedded in an 'intelligent whole', as you seem to suggest. Forums like these, I think, are enablers of such intelligent networks, because ideas are expressed without reserve (but with moderation, if it is to be meaningful) and others are able to transform them for their own enterprises. The Internet-induced revolts in the Middle East are another sign of these transformations.
    Meanwhile there are still large groups of people, of all ranks and standings, who bury themselves in their own preferred positions, whichever positions they defend, and get a lot of attention.
    But I think that epistemological fluidity/interconnectedness is the really interesting development of our current times. 
    Keesp
    Wow, nice article. It's not often you find criticisms of the singularity stuff that are this logical. Linking population growth and tech growth is a new idea for me, too.

    keesp
    Thanks! Of course, it 'came natural' for me to take this angle...I have more objections, but they are not related to 'patterning' of the data. Other objections could be:

    * Is the envisioned future really 'better'? (Think of the population explosion.)
    * Are the purported 'cures' for our ailments really cures? (Think of existential crises.)
    * Are our biological 'version 1.0' bodies really that bad and frail? We all know that machines are superior to humans at tasks (within a limited scope) provided the environment is very stable. Organisms outperform machines in robustness, and one of the costs of this is a certain redundancy. That is not a matter of frailty, but a matter of balancing optimisation and contingency.

    On a side note, I still like the idea of technology being a self-referential feedback loop, but it needs to be energised, like any other process.

    The main point is that exponential patterns are very slippery things, because they are so abundant and take up so many different forms.
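    One way to see how slippery exponential patterns are: a logistic curve (growth that eventually saturates at a carrying capacity) is numerically almost indistinguishable from a pure exponential in its early phase. A minimal sketch, with made-up parameters chosen only for illustration:

    ```python
    import math

    # Early on, logistic growth tracks exponential growth closely;
    # only near the carrying capacity k do the two curves diverge.
    def exponential(t, n0=1.0, r=0.1):
        # N(t) = n0 * e^(r t): unbounded growth
        return n0 * math.exp(r * t)

    def logistic(t, n0=1.0, r=0.1, k=1000.0):
        # Standard logistic solution with carrying capacity k
        return k / (1 + (k / n0 - 1) * math.exp(-r * t))

    for t in (0, 10, 30, 60, 120):
        e, l = exponential(t), logistic(t)
        print(f"t={t:4}: exp={e:12.1f}  logistic={l:12.1f}  ratio={l/e:.3f}")
    ```

    Up to t = 30 the two are within a few percent of each other, so data from that window alone cannot tell you whether the trend will keep exploding or level off.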
    Keesp
    My earliest contact with the idea of a singularity was a book in the '80s making the eye-opening observation that a hyperbola with a vertical asymptote in the early 21st century was a better fit to the world's population curve than any exponential. IIRC it was Manfred Eigen's and Ruthild Winkler's "Laws of the Game: How the Principles of Nature Govern Chance", but I don't have it at hand. There was no black hole in that picture. From a quick check with Wikipedia, it appears possible that it was published before Vinge's notion of a technological singularity; in any case it is roughly contemporary.
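    The distinction matters: hyperbolic growth C/(t_s − t) actually blows up at a finite time t_s (the vertical asymptote), whereas an exponential, however fast, remains finite at every finite time. A toy sketch with synthetic parameters (not the book's actual population data):

    ```python
    import math

    # Hyperbolic growth diverges at the asymptote t_s = 100;
    # exponential growth merely keeps climbing at a fixed rate.
    def exponential(t, n0=1.0, r=0.05):
        return n0 * math.exp(r * t)

    def hyperbolic(t, c=100.0, t_s=100.0):
        return c / (t_s - t)

    for t in (0, 50, 90, 99, 99.9):
        print(f"t={t:5}: exp={exponential(t):10.1f}  hyperbolic={hyperbolic(t):10.1f}")
    ```

    Near t = 100 the hyperbolic curve shoots off the chart while the exponential has barely passed 147, which is why a hyperbola can out-fit an exponential on a population curve that seems to be "accelerating its acceleration".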

    As for "the technological singularity", I am tempted to diagnose in its prophecy the ultimate reification of a bias that correlates intelligence with motion (possibly electronic, but motion nevertheless). Given the origin of the intuition of intelligence in the experience of communicating with lively peers and cousins, I'd say that before engaging in, e.g., space opera using that quasi-concept of intelligence as a keystone, we should first flex our imagination by spending a moment in the skin of a law student required by a pleading contest to defend the notion of "static intelligence" to the best of her ability.

    I'd be tempted to start with the boomerang. A boomerang isn't static, to be sure, but it's a machine with a single moving part, so it's very close to something wholly static (zero moving parts). Is the boomerang intelligent? Suppose we were to make a flying machine that is not a boomerang but behaves like one; most plausibly we would try to control its motion using a microprocessor. Then isn't it natural to say that whatever the intelligence of the boomerang is, it is equal to the intelligence of the microprocessor and its software? Etc.