The three problems of humanity were outlined in a talk by Nick Bostrom (of Oxford University, UK) at TED in April 2009.

In this piece I will continue to examine the "big" problems identified in the TEDTalk.

Problem #2:  Existential Risk is a BIG problem.

In this section the emphasis shifts to humanity in general, with consideration for the future (potential) of humanity as a species.  While admitting that there are no realistic studies and that any probabilities are ultimately guesses, the conclusion drawn is that there is somewhere between a 20% and 50% probability that the human species will not survive the next century.  Regardless of the specific numbers, the point is that there is a high probability of failure if something isn't done.

Not to be outdone by the previous topic (Death is a Big Problem), we now engage in the obligatory fantastic mathematics to make the point.  The slide shown indicates that a 1% reduction in existential risk would save 60 million lives.  This number was obtained by taking a world population of 6 billion and multiplying it by 1%, arriving at 60 million.

A probability of 50% means there is a 1 in 2 chance that the human species will fail and no people would be left; it isn't as if failure can be mitigated at the rate of 60 million people per percentage point.  This particular assumption and calculation simply defies explanation.

However, this is then extrapolated to future generations to indicate that a 1% reduction in probabilities will affect 100,000,000,000,000,000,000,000,000,000,000 (10^32) future lives.  Interestingly enough, this represents roughly nine orders of magnitude more people than there are estimated stars (3x10^22 - 7x10^22) in the observable universe.  What makes this important is the point that death is a problem to be solved, so presumably these future generations represent individuals that would not die with each generation, but would persist as immortals.
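To make the comparison concrete, here is a quick back-of-envelope check.  The figures (10^32 future lives, 3x10^22 to 7x10^22 stars) come from the paragraph above; the script itself is just my own arithmetic, not anything from the talk:

```python
# Sanity check of the scale comparison: 10^32 future lives vs. an
# estimated 3x10^22 to 7x10^22 stars.  These figures are the ones
# quoted in the text; the calculation is my own back-of-envelope sketch.
import math

future_lives = 1e32
stars_low, stars_high = 3e22, 7e22

# Orders of magnitude separating the two quantities.
oom_low = math.log10(future_lives / stars_high)   # most conservative comparison
oom_high = math.log10(future_lives / stars_low)
print(f"ratio spans {oom_low:.1f} to {oom_high:.1f} orders of magnitude")

# People per star if 10^32 people were spread across every star.
print(f"{future_lives / stars_high:.1e} to {future_lives / stars_low:.1e} people per star")
```

The second figure (a few billion people per star) is what drives the later point about every star system ending up roughly as populated as the present-day Earth.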

Of course, the presumptive nod to human colonization of space is given, but it is treated as a tacit assumption instead of being scrutinized.  Unfortunately, with that many people we would have to find a planet around nearly every star, and each would end up as populated as our current Earth.  He does comment that it may take us 100,000,000 years to get to a particular star, but that if we're extinct we can't ever do it¹.  The logistics of such a venture are never actually considered, nor is the question of whether the Earth even possesses the resources for such an effort.  It is one thing to set goals and be optimistic, but at some point it is equally important that reality be a factor in even the wildest speculations.

At this point, I couldn't keep up with the mental gymnastics, so I just gave up.

Fortunately, it seems even Nick Bostrom might have been feeling uncomfortable with the amount of nonsense being generated, so he decided to begin exploring the third problem without further delay.

In this segment the entire argument is formulated around a "what if" situation presented as a choice of dilemmas.  Obviously, if the human species were to go extinct, then there are no further considerations as to what it might have accomplished.  However, it is equally true that there would be no one around to care.

So, at best, the argument creates a false concern by speaking to a fear that can never be realized.  It is clear that humans can't control every aspect of the cosmos, no matter their level of technological achievement.  If we are truly talking about immortality, then issues as far-flung as the sun being extinguished suddenly become problems.  But then we must also consider that the universe is expanding, which makes the observable universe a very tenuous objective for human colonization (at least from the perspective of immortality).

Perhaps the least logical element of this argument is that it requires that technology be used not just to "enhance" humans, but to maintain them in some kind of stasis regarding evolution.  In short, evolution must stop for such a technological hold to work.  If we are to take comments such as a multi-million year space journey seriously, then it is clear that not only must we solve problems such as death, but we must technologically control every aspect of biology to avoid "interference" from it. 

I'm also not clear on why such unrelenting growth should be considered a "good" thing.  At a more pragmatic level, this viewpoint also requires that there be no other life in the universe.  If this isn't true, how do we avoid contacting another civilization that may think the same way (with its own trans-"life" agenda), and which in turn may well seek to destroy us, lest we spoil their plans?

In the end, no matter how favorably one embraces this viewpoint of the future, it is clear that for humans (or any life form) existence is finite.  Whether the transhumanists like it or not, there are limits, and they can't simply be wished away.

In this phase of the presentation it is becoming clear that one of the primary problems being overlooked is that of accepting limits.  In effect, it's as if we hear words like "infinity" and think that it should be directly applicable to human beings at every level.  We should have infinite lives, with infinite growth, with infinite opportunities for colonization.  In short, an almost childish expectation that the rules of the universe shouldn't apply to us.


1  This is such an amazingly stupid statement that there is little argument to be made against it.  Of course, if one is envisioning a group of immortals that are bored out of their minds, then perhaps a 100-million-year journey sounds plausible.

NOTE:  For those who think I'm being unreasonable, consider what it would take to move 1 billion people to colonies on other planets:

Assume 100,000 people could be transported per spacecraft (which is hopelessly optimistic).  Then we would need 10,000 spacecraft.  Launching at the rate of one per day, it would still take about 27.4 years to move 1 billion people (who would by then have been replaced at the human reproductive rate).  In effect, humanity would need to dedicate virtually all of its efforts to such colonization, leaving little else to focus on.  Even so, it seems unlikely that we could actually colonize faster than we reproduce on Earth.
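The arithmetic in the note can be reproduced in a few lines (this is just my own sketch of the calculation above, using its assumed figures):

```python
# Back-of-envelope arithmetic for the note above: moving 1 billion
# people at 100,000 per spacecraft, launching one spacecraft per day.
# Both inputs are the (admittedly optimistic) assumptions from the note.
people_to_move = 1_000_000_000
people_per_ship = 100_000

ships_needed = people_to_move // people_per_ship   # 10,000 spacecraft
years_at_one_per_day = ships_needed / 365.25       # ~27.4 years of daily launches

print(f"{ships_needed:,} launches, about {years_at_one_per_day:.1f} years")
```

Even granting the generous per-ship capacity, the launch cadence alone stretches the project across decades, which is the point of the note.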