    Ray Kurzweil Responds To PZ Myers Regarding "Ray Does Not Understand The Brain"
    By Andrea Kuszewski | August 20th 2010
    Well, you knew this day was coming. Ray Kurzweil, futurist and author, came under attack earlier this week from popular biologist and ScienceBlogs blogger PZ Myers for his supposed lack of understanding of how the brain functions.


    Image courtesy of Singularity Hub

    This attack came following a presentation Ray gave at the Singularity Summit this past weekend, titled The Mind And How To Build One. You can read PZ's article in the link above, so I won't get into that here. A few other responses came out of this controversy, including a piece written by Leo Parker Dirac on his website, Embracing Chaos. Dirac raises other issues with Ray's logic that PZ missed, and clarifies a few others. Singularity Hub also featured an article by Aaron Saenz, titled When Will Computers Match The Human Brain? Kurzweil vs Myers. Saenz claims that the attack was unfair because it was based on a summary written by an attendee of the Summit, not on the presentation itself. He mentions several instances of misquotes and generalizations taken from the summarized report of the talk.

    Tonight (actually, this morning), Ray Kurzweil himself released this response addressing PZ's blog post directly, featuring it on his website, KurzweilAI.net. Here is an excerpt:

    While most of PZ Myers' comments (in his blog...) do not deserve a response, I do want to set the record straight, as he completely mischaracterizes my thesis.

    For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

    Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit.
    I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

    You can read the rest here.

    What do you think? Reverse engineering the brain: is it just a pipe dream, or a real possibility? And can it happen in our lifetime? Only the future can tell.

    However, for those of you who want to hear from Ray himself, in a talk he gave a few months ago on the same general principles, you can watch this video recording of his presentation from the H+ Summit (where I was also a speaker, so I was present for this talk), titled The Power of Hierarchical Thinking. He explains many of the concepts currently being debated, so it is worth a look if you want to participate in the discussion.

    Comments

    SynapticNulship
    PZ Myers's main argument was against the concept that "The design of the brain is in the genome."  But in his response Kurzweil sidesteps that by relegating epigenesis to a parenthetical:

    The original source of that design is the genome (plus a small amount of information from the epigenetic machinery)...

    And then Kurzweil says none of that matters anyway, because he doesn't expect reverse engineering to have anything to do with the genome:

    To summarize, my discussion of the genome was one of several arguments for the information content of the brain prior to learning and adaptation, not a proposed method for reverse-engineering.
    Which is fine, but why mention the genome in the first place?  I'm not against Kurzweil--I see no reason we can't reproduce the information functionality of brains in computers.  But, if Kurzweil expects people to reverse engineer a brain at some point in ontogeny, then why say the information content of the genome is related to the complexity of that reverse engineering?

    The argument is that Kurzweil just wanted to point out that the brain isn't very complex--it's just billions of repeated structures.  This does not at all convince me that it's non-complex--am I supposed to assume that homogeneous building blocks lead to a simple system regardless of connections and interfaces?  You really have to prove, at a very specific level of abstraction, that the system really is less complex in some way.
    He mentions it presumably to show that the brain is not unbounded in scope, making a reverse engineering argument plausible. It can be reproduced (in nature, not in simulation) from a limited amount of starting data (the genome + development). Therefore while an extant brain might be complex it is hopefully not irreducibly so.

    for a value of "plausible" here that depends on yet to be invented technologies and processes of course :)
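    That bounded-starting-data point can be illustrated with a toy example (mine, not from the talk): a rewrite rule only a few bytes long deterministically unfolds into a structure thousands of times larger than its own description, which is the sense in which a design can be bounded by a small genome without the grown system being simple.

```python
# Toy illustration: a tiny "genome" (two rewrite rules) deterministically
# unfolds into a structure far larger than its own description.

def unfold(axiom, rules, steps):
    """Repeatedly apply string-rewrite rules, growing the structure."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}   # the entire compressed "design"
grown = unfold("A", rules, 20)  # the expanded "organism"

print(f"{len(str(rules))}-byte rule -> {len(grown)} units of structure")
```

    The grown string is huge and intricately patterned, yet its description length never exceeds the two rules plus the step count, so "small spec" and "complex result" are not in tension.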

    @ Samuel Kenyon

    An important thing to be noted is that PZ Myers said he has no problem with the idea that we can reverse engineer the brain via computers.

    What he had problems with was simply Kurzweil's absurdist claims of when we will do it and how.

    Hank
    I was at Kurzweil's talk and I nearly canceled my hotel reservation after it, because it made the quality of the Summit seem rather poor - luckily it improved a lot after lunch.  It isn't that I might disagree with Kurzweil's overall concepts, if he could be pinned down to some, but he makes vague claims and then says there is no accountability for them because we can't 'measure innovation' the same way we would demand accountability in research.  I wasn't the only one who wondered if he knew what he was talking about.  His ideas on neuroscience were the sort of populist silliness that flies with a lay audience, but the Singularity Summit seemed to have pretty smart people.   I know the CMU PhD with a focus on AI sitting next to me sighed at the same places I did.

    His presentation was nothing new and he couldn't even be bothered to show up - he literally and figuratively phoned it in.  He is operating in a world of faith no different than religious leaders and he matches data to the topology he wants, which is shoddy science.

    The phoning it in thing may have to be excused.  Having seen him in the past and seeing him now, I had to wonder if he was ill so I was trying to find anyone who might know.  In the interim a lot of people wrote about this so I am happy I don't have to.  But if I did, I wouldn't be writing from second-hand notes, though it is not a surprise either.   Many bloggers spew first and research later.
    Andrea Kuszewski
    Hank, I value your opinion in part because you were actually there and heard the talk. One of the critical points made by Saenz is that all of the information PZ got was from a summary (and from the looks of it, an inaccurate one), not from the talk itself. It will be interesting to see the counter-response (if there is one), as well as the video of the actual talk, which is supposed to be released to the web following post-production editing. I haven't heard a date for the release, but when that day comes, I will be watching it.

    Until I have the specifics myself (either a transcript, which I requested, or a video), I hesitate to really debate the content or implications of what was thought to be said or might have been implied, or whatnot. I don't want to be one of the spewers. Not pretty.

    I do want to hear more from the people who were there and get their impression.
    Hank
    You've hit on a crucial problem at the talk and what has made it hard to write about - for being a conference on the future, they had no video and won't release any until it is, in their term, heavily edited.  They also had no table, no power outlets and no coffee.   

    Basically, for those of us covering it from the outside of that movement, the future looked pretty friggin' Spartan, not at all interested in open dialogue, and not at all something to aspire toward.   If we have a Science 2.0 conference it is going to be jacked up with everything people can want, from an infrastructure point of view, and it will have content about what science can do in the future, not mumbo jumbo.

    But, as I said, I have to write about the really cool stuff I eventually saw - and there was cool stuff.  It is unfortunate that Kurzweil is the face of the movement and an easy target because he seems to be surrounding himself with people who aren't challenging him to keep up.  PZ didn't accomplish challenging him either; he made himself easy to deflect by using a second-hand account (hint: if you are going to carpet bomb journalists on your blog, PZ, do better than them, not worse - a journalist would have at least called Ray) but it got him some pageviews so it's all good.

    I am betting there are no transcripts.  Like I wrote above, this was not well-organized - though the caterer was terrific.
    I perceive Kurzweil's response to be just more of the same: a lot of buzzwords that don't supply meaning to what he is saying. One sentence in his reply stood out to me.

    “The goal of engineering is to leverage and focus the powers of principles of operation that are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation.”

    Does Kurzweil understand what the term “leverage” actually means? Archimedes understood leverage when he said

    “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”

    What known principle(s) does Kurzweil expect to use to “leverage” into a reverse engineering of the human brain?

    People have always known that birds could fly. In no case has any aircraft ever been built by “reverse engineering” any bird. If people actually tried to build an aircraft by “reverse engineering” a bird without understanding fluid mechanics, the project would fail. No aircraft designer talks about “reverse engineering” a bird to create an aircraft.

    Kurzweil is using “reverse engineering” as a metaphor to disguise the problem, to hide the degree of difficulty by in effect saying, “the difficult part has already been done” and we can learn what that difficult part is by “reverse engineering” what nature has already done.

    His statement that there are brain scanners of sufficiently high resolution to image the brain in vivo as it forms thoughts, and then to see how those thoughts affect the brain, is false.

    In his attempt to deduce the complexity of the brain from the size of the genome that codes for it, he makes the mistake he castigates others for: thinking linearly and not combinatorially. How many genes comprise the antibody repertoire? How many genes does it take to produce the 10^10 to 10^14 individual and unique antibodies that the immune system can produce? More than a million times more proteins than the rest of the genome put together?
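    A back-of-the-envelope sketch makes the combinatorial point concrete. The gene-segment counts below are rough textbook approximations for V(D)J recombination, used purely for illustration; the point is the multiplication, not the exact figures.

```python
# Linear vs combinatorial counting of antibody diversity.
# Segment counts are rough approximations: diversity comes from
# multiplying independent choices, not from adding up genes.

v_h, d_h, j_h = 65, 27, 6    # heavy-chain V, D, J segments (approx.)
v_l, j_l = 40, 5             # light-chain V, J segments (approx.)

linear = v_h + d_h + j_h + v_l + j_l             # adding genes: ~10^2
combinatorial = (v_h * d_h * j_h) * (v_l * j_l)  # pairing choices: ~10^6

print(f"{linear} gene segments yield {combinatorial:,} combinations")
# Junctional diversity (imprecise joining, inserted nucleotides) then
# multiplies this by several more orders of magnitude, toward the
# 10^10+ repertoire mentioned above.
```

    A linear count of segments stays in the hundreds; the combinatorial pairing space is already in the millions before junctional diversity is even considered.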

    I am sorry to hear that Kurzweil is so sick. I suspect that the supplements he is taking are doing more harm than good by messing with his normal control of physiology. I happen to know a lot about the kind of physiology he is trying to influence (but which shall remain unmentioned out of deference to Andrea ;) ). Unless he disables the feedback first (which he has not done, and which really can't be done - the systems are too robust), trying to override his normal physiology will not work, will actually cause harm, and will accelerate senescence. I would expect he would at first feel good, but then the first adverse symptoms would be very much like chronic fatigue accompanied by insomnia and brain fog.

    Andrea Kuszewski
    There were complete transcripts taken at the H+ Summit at Harvard, and made available on the web in real time. Darlene Cavalier was on the train from PA to Boston during my talk, and when she arrived, she told me that it sounded great from the transcript she read. I was pretty impressed by the transcribing and immediately publishing to the web. Now I realize that was just a cool feature of H+, not the standard. But we also livestreamed it. Open knowledge to the public!!! Why don't more organizations think that way?

    (BTW... Ray Kurzweil's H+ Summit talk is available here:)
    http://www.viddler.com/explore/ChrisgNYC/videos/37/
    Hank
    And he showed up at the H+ Summit - when I pointed out that he didn't even go to his own Summit (so why did I?), some fans told me it was not his summit, another group's, etc., but they list him as co-founder of the thing, so perhaps there has been a schism. He couldn't ignore this one, since his name is on it and it is a chance to sell some books, but phoning it in is as close as you can get to boycotting an event you started.

    As David Whitlock quoted him...
    “The goal of engineering is to leverage and focus the powers of principles of operation that are understood, just as we have leveraged the power of Bernoulli’s principle to create the entire world of aviation.”
    ... which shows Kurzweil doesn't understand aviation any better than he understands brains, much less what goes into each - or he only talks to unscientific audiences.  Here is a handy graphic I made; he is free to use it in the future:



    Andrea Kuszewski
    Where's the little "like" button next to your info-graphic? :)
    Hank
    Knowing I contributed to his scientific understanding of basic aviation is thanks enough!
    Hank, I really like your graphic too, but the image I have now is one of mixed metaphors, with Archimedes hanging ten on your graphic in toga and sandals, waving a big stick around shouting "Leverage this!"

    Quentin Rowe
    Oops!

    I used to follow this airfoil explanation rigidly, until I found out recently that it is incomplete...

    For a more thorough explanation, you need to include the downward deflection of the airflow off the wing's trailing edge. It's all about how the upper and lower air-streams reconnect. Then it becomes a matter of action-reaction (yes, I know, hinted at by your Newton dynamic momentum transfer). The downward action of the airstream pushes the wing up. Yep, Rocket Science!

    Here's one of many links clarifying the matter.
    http://www.associatedcontent.com/article/1011178/how_wing_lift_really_works.html
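    The action-reaction point can be made quantitative with a toy momentum-balance estimate (my numbers, purely illustrative): lift equals the rate at which the wing sends momentum downward, L = mdot * dv.

```python
# Toy momentum-balance estimate of lift: L = mdot * dv, where mdot is
# the mass flow of air the wing deflects per second and dv the downward
# velocity it imparts. Every number here is an illustrative assumption.

PI = 3.141592653589793

rho = 1.2        # air density at sea level, kg/m^3
v = 70.0         # airspeed, m/s
span = 10.0      # wingspan, m

# Crude assumption: the wing works on a tube of air one span in diameter.
area = PI * (span / 2) ** 2    # cross-section of that tube, m^2
mdot = rho * v * area          # air mass processed per second, kg/s
dv = 1.5                       # downward velocity imparted, m/s (assumed)

lift = mdot * dv               # newtons of upward force
print(f"lift ~ {lift:.0f} N (~{lift / 9.81:.0f} kg supported)")
```

    With these assumed numbers the estimate lands near the weight of a small aircraft, and the same bookkeeping shows why "Bernoulli vs Newton" is a false choice: both descriptions must account for the same downward momentum flux.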
    Hank
    Yep. I was trying to condense it into a graphic so easy even Ray Kurzweil could understand it, and I wanted to show him he wasn't totally wrong - Bernoulli matters in the CFD - but he was missing a crucial component in the Newton below the wing, just like he is missing important parts of neuroscience in thinking duplicating the brain with AI is easy.
    Gerhard Adam
    With all the optimism, has anyone thought of how they intend to verify that they've got a "super-intelligence"?  My assertion is that you can't confirm the operation of anything beyond your own level of competence. 

    Considering that it took three years just to confirm Perelman's solution to the Poincaré Conjecture, I can't imagine how anyone can claim they would even know if "super-intelligence" had been achieved.  Similarly, I have yet to hear a plausible explanation of how a machine can build a machine more intelligent than itself.  Once again, the verification problem comes into play.

    In effect, it sounds like the computing equivalent of a perpetual motion machine.  There isn't a single shred of evidence that something like this is possible and yet we're hearing predictions of when it will be achieved?

    I'm always skeptical over claims of progress in an area that has performed so dismally in all of its promises (like AI). 

    In fact, the primary flaw in the whole "build a smarter machine" argument is that it reduces to a mechanistic problem when we can't even reasonably articulate what it is that would mark an intelligence.  Even when you try to pin it down, it seems there's fundamental confusion regarding the difference between being intelligent and being knowledgeable.  Most examples I've heard are little more than increases in communications connections or storage access, which is hardly the same thing.

    Mundus vult decipi
    Quentin Rowe
    Of course it's possible to build an intelligent machine, as sure as biology is atoms. To state otherwise, is to invoke consciousness as some mysterious afterthought, existing separately from physicality. Dualism, I believe they call it. I have the soup, just add water/ I built the machine, just add consciousness. Actually, you can add water to the machine too, 'cos it'll probably have to be wet to operate without over-heating.

    I would predict that nano-engineering will be forced to adopt, at the very least, coarse biological techniques to achieve such machines. There would likely be very delicate structures to maintain, requiring a nano-scale maintenance infrastructure. Bear in mind that every delicate thought has an electrochemical structure to it - every thought changes the structure of the brain. For a brain to work, it has to be plastic. What the heck, let's just use genes.

    On what time-scale this would be achieved is just a guess, and personally, I don't see the point in such a prediction, except to encourage research. We know how easy it is to damage a good brain, so it's also likely that some weird monsters would be created along the way. It raises the ethical issue of whether we should even try to do it.

    It makes more sense for any future intelligent 'machines' to be specialist entities, designed for specialist tasks, rather than a general intelligence.
    Gerhard Adam
    I think I was pretty clear and so are the transhumanists when they are talking about "super-intelligence" and the rapid development of ever increasing machine "intelligence".

    My point is that such a thing is impossible since it would be impossible to verify success.

    As for specialist systems ... that's already been done to varying degrees and isn't terribly exciting within the context generally considered as AI.
    I would predict that nano-engineering will be forced to adopt, at the very least, coarse biological techniques to achieve such machines.
    That's an interesting choice of words ... "forced to adopt".  At this juncture it seems they would be "lucky" to adopt something for which they possessed sufficient knowledge to even make cursory projections regarding intelligence. 

    My point remains, in that until there is something that provides more predictability, as well as ability to measure, it's simply idle talk.  Despite claims to the contrary, biology isn't just engineering, and any engineer that says otherwise doesn't understand biology.
    Mundus vult decipi
    @Sergio Could you define what 'simple' nanotech is in your argument? 'Nanotech' is a pretty broad field of research. By many standards 'simple' nanotech is already here: peptoids, DNA origami, quantum dots, targeted gold nanoparticles for drug delivery, self-assembled monolayers of graphene... but I would be interested to see what applies to neurobiology. I'm not trying to be mean, but that term gets abused when it's not placed into context.

    As it applies to systems biology, nanoscale engineering is probably going to make a great contribution in helping us understand a lot of subcellular processes. In that spirit we have to accept that there are still plenty of things about cells - in this case neurons - that we don't understand. For example: how do epigenetic factors determine expression, how do proteins fold, are we confident that we have comprehensive surveys of ion channels? There is a vast amount that we don't know about cellular systems biology, let alone scaling it up to an orchestrated network of tissues and organs. Engineering phenotype is another thing entirely.

    Science and engineering are related, but they are *not* the same thing. That's when you get into the grey area that makes prediction difficult, and I would suspect that this is where Ray and his proponents argue that we don't need an empirical map of how the brain works to generate advanced general AI and ultimately the 'singularity'. But in that vein, you have to ask yourself, do benchmarks really matter? And if that's the case, why bother with comparison at all?

    http://ieet.org/index.php/IEET/more/brain20100817

    This is a great introductory presentation about what we know, and also about some significant gaps in our knowledge.

    Don't get me wrong, I appreciate Ray's vision and how he has inspired so many people to build the future he has envisioned. I consider myself to be one of the people working towards this future. It's a very powerful call for humanity to evolve. But in my humble opinion, Ray's vision remains prediction and as such is still a valid target for debate. It's certainly not proof, nor should it be considered as such. Too many people treat it as inevitable certainty.

    Gerhard Adam
    It's a very powerful call for humanity to evolve.
    What does that even mean?
    Mundus vult decipi
    A human designed feedback loop to the evolutionary process. Synthetic biology. If Kurzweil's vision holds (timeline variable), it will be an AI designed feedback loop to the evolutionary process.

    Gerhard Adam
    It never ceases to amaze me that a species with woefully inadequate knowledge about how virtually everything works, can speculate about what it intends to replace it all with, with a technology it neither has nor understands.
    Mundus vult decipi
    We will never understand everything. And yes, our understanding is often quite inadequate. But to dismiss the pursuit of knowledge because you can't fully grasp the details at first is a surefire way to never accomplish anything. You presume that my philosophy is to replace something. I certainly don't think of these efforts as replacing anything. As I see it, Nature's existing biodiversity and ecology possess insights into potential toolkits and solutions that are vanishing and going extinct more quickly than we can understand them. And that is tragic.

    That said, I take issue with the premise that we don't have a grasp of enough knowledge. We have vast tomes of information - not complete, mind you - but it is the compilation of generations of labor, and it's growing by the second.

    http://www.youtube.com/watch?v=6HeQlrtUOu4&feature=player_embedded

    Did the domesticated dog evolve from wolves without the selective breeding of humans?
    Do transgenic crops exist without contemporary biotechnology developed by Monsanto or Pioneer?
    Is stem cell research growing functional organs outside of organisms, as you see in Doris Taylor's lab at the University of Minnesota?
    Are we starting to create CAD designed synthetic organisms at the Venter Institute that didn't previously exist in nature?

    It exists. It's real. For now, I side with the contributions of humans over AI.

    Gerhard Adam
    But to dismiss the pursuit of knowledge because you can't fully grasp the details at first is a sure fire way to never accomplish anything.
    Since when does requiring evidence and real science instead of fantasy predictions entail "dismissing the pursuit of knowledge"?

    My point is precisely that we lack so much knowledge, but instead of pursuing it, we want to indulge in all kinds of stupid considerations about "forcing evolution" and "human-machine interfaces" and "super-intelligence", etc.  These are idiotic ideas, not because the dumbest thing a species can do is to build its own replacement, but because we lack the most fundamental knowledge to even begin to appreciate how to do these things ... and it's like we don't want to wait and develop the knowledge.

    I firmly believe that there are people right now that are quite prepared to rush headlong into some of those technologies with not the slightest care or concern over the ramifications.  That isn't science, it's suicide.

    It seems that many people are too glib about brushing aside the unknowns, and using hand-waving arguments to rationalize their objectives.  The acquisition of knowledge is not the problem.  It's the behavior and arrogance of thinking that you already possess it when even the most fundamental elements are so poorly understood.

    In the end, it appears that too many people looking forward to such technology are ultimately frightened of being human.  The thought of dying terrifies them, as if it's some fundamental defect in biology (I can't wait to hear the responses to that). 

    So rather than simply have the true believers tell me how quaint my views are and how I'm a Luddite, how about they explain what they've actually done to advance their goal (besides marketing).  How about hearing the laws of intelligence and how to measure and augment it, instead of "pie in the sky" claims regarding AI?  Instead of filling the air with talk of nanobots and doing what the immune system can't, let's see if you can deal with a simple cold.

    If the following thoughts are typical, then humanity is already doomed:
    At age 200, he says, he'll add on new capacity to avoid becoming bored or suffering a serious cognitive malfunction. At age 2,000, he would "probably need serious architectural changes to the mind." He plans to be alive after the last star in the Milky Way is dead.
    http://articles.sfgate.com/2004-01-11/living/17405990_1_artificial-intelligence-dot-com-startup-post-human

    I can't even think of what to respond to something like that.
    Mundus vult decipi
    I know exactly how to respond to that last quote - it's absurd. That's why I don't worry about it or defend it. Still, if the progress of synthetic biology, nanotechnology, artificial intelligence, and robotics is complete nonsense, why is there such fear about it?

    I probably enjoy being lumped into the totality of Kurzweilian philosophy about as much as you enjoy being labeled a Luddite. I probably buy into God-like AI about as much as you do - it's more science fiction than science, even taking memristors and quantum information science into account. But to be fair, I have substantiated my claims with peer-reviewed scientific research up to this point. As such, I disagree with you that we lack the most fundamental knowledge to pursue some of these goals.

    Be careful throwing the idiocy card around. Lots of people said that the efforts of the Wright Brothers, Kennedy's call to land on the Moon, or the creation of Bose Einstein Condensate were pretty stupid ideas too.

    Rushing into adoption of technology that knowingly poses catastrophic risk is foolish. There is a lot of published literature on this topic: Global Catastrophic Risks (Bostrom), Moral Machines (Wallach, Allen), Nanoethics / Nanotechnology and Society (Lin, Allhoff). But I would argue that it's equally foolish to put a moratorium on research just to comply with the precautionary principle. Nobody can question the value that something like flight had for civilization, but I doubt that the Wright Brothers could foresee their role in leading us towards ICBMs and the Cold War. You will never be able to eliminate all risks. There will always be the known knowns, the known unknowns, and the unknown unknowns.

    Gerhard Adam
    Once again, who said anything about a moratorium, or eliminating all risks?  My point is that the Wright Brothers and most other scientists worked on the problems at hand.  They did their research and did the hard work necessary to reach goals. 

    Today it seems that there's so much hype and marketing that science is undermining its credibility, especially when it comes to asking for more money for more projects.  Instead of working to develop the computing technology, we've got people masquerading as scientists talking about super-intelligent AIs.  Instead of working to make legitimate medical progress (including actually making such care available), we've got nutcases talking about living forever.

    It isn't about stopping the work, it's about toning down the ridiculous rhetoric about technologies that we aren't remotely close to achieving.  You can be assured that when the Wright Brothers were doing their work, it would have been proper to call them fools if they were already talking about taking their winged aircraft to the moon. 

    Neither the Wright Brothers' work, nor Kennedy's call for a moon landing, was foolish.  Ambitious, perhaps, but they were accessible problems.  There was no question that many problems had to be solved, but they were all possible based on our current state of knowledge.
    Still, If the progress of synthetic biology, nanotechnoloy, artificial intelligence, and robotics is complete nonsense, why is there such fear about it?
    The fear is simple to understand.  Anyone who thinks about it for a moment realizes that there are almost 7 billion people on this planet.  There is nothing remotely resembling parity between people (economically or otherwise).  We still have the memory, from half a century ago, of a madman wanting to build a master race.   Now we have supposedly educated people talking the same kind of nonsense under the guise of progress.

    Does it occur to no one that such a technology would be a virtual death sentence (or at least slavery) to half the human race?  Do you honestly believe that if such a technological advancement were achieved that it would be equally shared between all people?  Is it so hard to envision that those with augmented intelligence might elect to capitalize on their advantages and enslave or control those that don't have it?  There's plenty to fear and mostly its from those that advance these ideas without the slightest inkling of the disaster being cultivated.

    I have enough trouble trusting those in power, and it scares me to death to see people jumping headlong into bed with those that would take advantage of such power to the detriment of the rest. 

    Mundus vult decipi
    Comparing the mainstream philosophy behind the Transhumanist / Singularitarian movement to advocacy for eugenics is a bit much. It's an area for concern, something the vast majority would decry and oppose; moreover, it's certainly not something Kurzweil advocates.

    The rhetoric on both sides of the argument needs to be more in tune with reality.

    http://www.foresight.org/roadmaps/index.html

    That's what I would consider to be mainstream consensus among many of the thought leaders in this space.

    I think it's better to address how we can use these technologies to meet the stress on energy and resources that 9 billion people are going to demand by 2050. That is by far a more common goal than the much more sensational sound bite of Kurzweil's metamorphosis of the Ubermensch. In fact, I don't see how we can address 21st-century demands without advancing our work in the convergent technology space.

    I've enjoyed our conversation. Thanks for being candid.

    Gerhard Adam
    Comparing the mainstream philosophy behind the Transhumanist / Singularitarian movement to advocacy for eugenics is a bit much.
    Actually, when we're talking about transhumanism with respect to augmented intelligence and negligible senescence, it's not advocating eugenics ... it is eugenics.

    In addition, it makes no difference whether Kurzweil advocates such a position, it is his naivete that would provide that capability to those that would advocate such positions.  As I said previously, my concern is that scientists cannot afford to be so naive as to think that the problems of society and a seriously overpopulated planet are beyond their consideration.  Virtually every technological choice we make these days will either address this problem or exacerbate this problem.  There are few neutral areas left.

    We've already adopted the "head in the sand" approach when it comes to human population.  It's as if we were heroin addicts and had determined that our problem was not the drug, but rather that we simply didn't have enough money.  So we focus on more and more ways to get money and simply refuse to consider that it is the drug that is the underlying problem.  So it is with our attitude towards overpopulation. 

    The issue I have with science is that instead of learning how the world works, we seem to be striving to find the loopholes and determine how we can game the system.  The rules aren't supposed to apply to us; humans are supposed to be exempt.

    Whether such a position is tenable or even believable, I'll leave for others, but to me that's no longer science.

    Mundus vult decipi
    RK sez:

    Linear thinking about the future is hardwired into our brains. Linear predictions of the future were quite sufficient when our brains were evolving. At that time, our most pressing problem was figuring out where that animal running after us was going to be in 20 seconds. Linear projections worked quite well thousands of years ago and became hardwired. But exponential growth is the reality of information technology.

    We’ve seen smooth exponential growth in the price-performance and capacity of computing devices since the 1890 U.S. census, in the capacity of wireless data networks for over 100 years, and in biological technologies since before the genome project. There are dozens of other examples. This exponential progress applies to every aspect of the effort to reverse-engineer the brain.

    I'm no brain surgeon, but what I do know about the structure of the universe suggests to me that basing one's predictions in ANY realm on the assumption of "smooth exponential growth" is pretty much pasting a sign on your back that reads "Dear Inherent Fractal Geometry of Change: Kick me. Hard."
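    Whatever one makes of the "smooth" qualifier, the arithmetic gap between the two kinds of projection Kurzweil describes is easy to show. A minimal sketch (the starting value, annual increment, and two-year doubling period are all illustrative assumptions, not figures from the talk):

    ```python
    # Contrast a linear projection with an exponential one over the same horizon.
    # All numbers here are illustrative, not Kurzweil's actual data.

    def linear_projection(start, annual_increment, years):
        """Project by adding a fixed increment each year."""
        return start + annual_increment * years

    def exponential_projection(start, doubling_period_years, years):
        """Project by doubling every `doubling_period_years` years."""
        return start * 2 ** (years / doubling_period_years)

    start = 1.0   # arbitrary units of price-performance
    years = 20

    print(linear_projection(start, annual_increment=1.0, years=years))
    # 21.0 -- twenty years of a steady +1 per year

    print(exponential_projection(start, doubling_period_years=2, years=years))
    # 1024.0 -- ten doublings in twenty years
    ```

    The point of contention in the thread is not this arithmetic, which is trivial, but whether the exponential model's premise (a constant doubling period, decade after decade) actually holds.
    
    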

    I have not read PZ's commentary or heard the talk, but reading the response, there's one piece that stands out as a misdirection:

    It is true that the brain gains a great deal of information by interacting with its environment – it is an adaptive learning system. But we should not confuse the information that is learned with the innate design of the brain. The question we are trying to address is: what is the complexity of this system (that we call the brain) that makes it capable of self-organizing and learning from its environment?

    He rightly notes that the brain does not form without the input it gets from its environment (including the rest of the body). He then sidesteps whether that is possible to replicate, saying he is merely after the mechanisms that could make such a system develop. This is, of course, interesting, but he slaps on the "reverse-engineering the brain" label to make his idea sound a whole lot more exciting than it is.

    That all said, he also seems to be misunderstanding brain complexity. Yes, there are highly repetitive structural patterns, but there are also subtle differences at pretty much every location that make a world of difference, and the ways these separate structures interact are even more complex. I won't comment further since I didn't hear the talk, but the logical leaps and lack of neuroscience knowledge in this response don't make me want to hunt down the full talk.

    jlparkinson1
    I haven't heard Kurzweil's talk and can't comment on it. I will say, however, that based on what I've read by and about Kurzweil in the past, including some of his interviews, I'm not really eager to read or see his talk. He loves to oversimplify biology and neurobiology with analogies to computers that, for anyone who knows a little about cells or the brain, don't really make a lot of sense. Most of his predictions rest on the startling assumption that technological progress is exponential, so anything we can't do at the moment we'll be able to do in the future. This is just an extrapolation from current and past trends without anything else to back it up -- and so far as I can tell, it owes more to sci-fi than to science. Not that there's anything wrong with sci-fi, mind you -- heck, I'm a sci-fi fan myself -- but it's not a good basis for projections of the future.