2014: Postmortem

Oh no! I forgot to post a personal postmortem for the year 2014 like I did for the previous year...

Cognitive Abstraction Manifolds

A few days ago I started thinking about abstractions whilst reading Surfaces and Essences, a recent...

On That Which is Called “Memory”

Information itself is a foundational concept for cognitive science theories. But the very definition...

Polymorphism in Mental Development

Adaptability and uninterrupted, continuous operation are important features of mental development...

Samuel Kenyon

Robotics software engineer, AI researcher, interaction designer (IxD). Also (as Sam Vanivray) filmmaker, actor.

Working on my new sci-fi movie to be filmed in 2016:
BRUTE SANITY...

Ray Kurzweil's presentation at the 2010 H+ Summit [1] was largely a mashup of his talks from the past five years.  I had hoped he would give some insight into his upcoming book on how to reverse engineer the mind, but on that subject he just repeated old material from his five-year-old book.  (And his mention of "new" theories of consciousness, like Penrose's quantum consciousness theories, was not even new in 2005, let alone 2010.)

As for consciousness, he reiterated chapter 7 ("Ich bin ein Singularitarian") from The Singularity is Near [2].  According to chapter 7: "There exists no objective test that can conclusively determine its [consciousness's] presence."
Stephen Wolfram's long-winded (no offense meant) talk at the 2010 H+ Summit was about predicting the future. 

The material was mostly standard Wolfram stuff but with some focus on future technology.  NKS points of view on AI were of course also present.  The most interesting theme for me was about human purpose.

Here are a few points I extracted:

Humans can't predict the future because of computational irreducibility, except for "pockets" of reducibility.  I'm not entirely clear as to what defines those pockets.  This notion apparently rests on the premise that human society is a sufficiently complex system that humans have to run the program to see what happens.
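Computational irreducibility is easy to illustrate with Wolfram's own favorite object, an elementary cellular automaton.  The sketch below (my hypothetical illustration, not anything from the talk) runs Rule 30: for a system like this, no general closed-form shortcut is known for the state after N steps, so you have to execute every intermediate step.

```python
# Illustrative sketch of computational irreducibility using Rule 30,
# an elementary cellular automaton studied in Wolfram's NKS.

def rule30_step(cells):
    """Apply Rule 30 to one row of 0/1 cells (wrapping at the edges).
    New cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(cells, steps):
    """The only general way to learn the state after `steps` updates
    is to compute every intermediate row; no shortcut formula is known."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

if __name__ == "__main__":
    # Start from a single live cell in a ring of 31 cells.
    row = [0] * 31
    row[15] = 1
    for t in range(10):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)
```

The "pockets of reducibility" would correspond to special initial conditions or coarse-grained statistics where a shortcut does exist; for the generic case, simulation is the prediction.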

There is only one sign in my cubicle at work; it states: "Adapt or perish." It appears that many industries refuse to do that.