Are we on the road to uploading our brains to computers and living forever? 

Believing so requires a two-pronged approach from Singularity proponents: wildly overstating what future computers and programmers will accomplish, and wildly understating the complexity of the human brain.  If you believe strongly enough, the future looks bright for an eternal...future.

But between the reality of what neuroscientists know (and don't even know they don't know yet) and the vision of Ray Kurzweil lies a giant chasm: a two-year-old can look at a cartoon picture of a cat and say, "That's a cat," but a computer cannot.  Yet computers are getting a little closer. Google X researchers, the people who gave us those creepy cars driving around headless, photographing you in your bathrobe while you get the newspaper, say a machine can learn to recognize a cat - it just takes 1,000 machines using 16,000 processors right now.

They created a neural network that 'taught' itself to recognize Internet felines, they revealed at the International Conference on Machine Learning (ICML 2012) in Edinburgh, Scotland last month.

Taught itself?  No, really. It took a billion connections, a dataset of 10 million images - "a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization" - and a week of watching YouTube videos, but the neural network learned to recognize a cat.  This is an achievement because the approach kept working even when turned loose on a selection of 20,000 object categories: it achieved 15.8% accuracy in recognizing them, which the authors say is a 70% relative improvement over previous efforts.  Obviously this is only recognition, not context - a child can create an entire fantasy world about cats while the computer still won't recognize a cartoon one.
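To make "a sparse autoencoder taught itself" less mysterious, here is a toy single-layer version of the idea: a network that learns, with no labels, to compress its input into a few hidden features and reconstruct it, penalized for using too many features at once. Everything here - the sizes, the random stand-in data, the hyperparameters - is invented for illustration; the real Google model was a 9-layer network with pooling and local contrast normalization trained on millions of YouTube frames.

```python
import numpy as np

# Minimal sketch of a sparse autoencoder (illustrative only; all names,
# sizes, and data below are invented for this example).
rng = np.random.default_rng(0)
n_inputs, n_hidden = 64, 16              # e.g. 8x8 image patches -> 16 features
W_enc = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
W_dec = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
lr, sparsity_weight = 0.05, 1e-3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruction_loss(X):
    return float(np.mean((sigmoid(X @ W_enc.T) @ W_dec.T - X) ** 2))

X = rng.normal(0.0, 1.0, (500, n_inputs))   # stand-in for unlabeled patches
loss_before = reconstruction_loss(X)

for _ in range(200):
    H = sigmoid(X @ W_enc.T)                # encode: hidden feature activations
    err = H @ W_dec.T - X                   # decode and measure reconstruction error
    # gradients of mean squared error plus an L1 sparsity penalty on H
    grad_dec = err.T @ H / len(X)
    dH = (err @ W_dec + sparsity_weight * np.sign(H)) * H * (1.0 - H)
    grad_enc = dH.T @ X / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_after = reconstruction_loss(X)         # lower than loss_before
```

No human ever tells the network what a cat is; it just gets better at reconstructing what it sees, and features like "cat face" emerge because they help. That is the sense in which the system is unsupervised.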


The cat that started the singularity?  Credit: Google

A billion connections is a lot, but even if we just assume context and understanding are a matter of quantity, the visual cortex alone has 100 trillion connections, so they would need 1.6 billion processors - gigantic, but the first cell phone wasn't portable either.(1) Yet this network was able to identify things without being told what to search for - it classified objects on its own; humans were not 'supervising' it to funnel the learning into what they needed it to learn.
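The 1.6 billion figure is just the article's numbers multiplied out, assuming (naively) that processor count scales linearly with connection count:

```python
# Back-of-envelope scaling from the numbers above; the linear-scaling
# assumption is the article's, not a hardware fact.
model_connections = 1e9        # the Google network's ~1 billion connections
cortex_connections = 100e12    # the visual cortex's ~100 trillion connections
cores_used = 16_000            # processors in the 1,000-machine cluster

scale_factor = cortex_connections / model_connections   # 100,000x bigger
cores_needed = cores_used * scale_factor                # 1.6 billion processors
```

A 100,000-fold gap is sobering, but it is also exactly the kind of gap that decades of hardware scaling have closed before.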

In other words, they say, it came up with the concept of a cat. And brute force is how humans do it too; it just seems subtle.  We learn to recognize people and things.  Some people have a much harder time remembering faces than others.  But don't get too excited about modeling the full human visual cortex - even the researchers behind this project don't think they have it correct just yet. "It'd be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don't quite have the right algorithm yet," co-author Dr. Andrew Ng told John Markoff of the New York Times.

Back to the hardware: how will processors of the future make the giant leap in both efficiency and manageable power needed to really get smart?  Researchers at the ACM International Conference on Computing Frontiers in Italy said the secret is to build in mistakes.  That's right - building in errors cut energy demands and dramatically boosted performance.
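"Building in mistakes" sounds paradoxical, so here is a software mock-up of the flavor of trick involved: drop the low-order bits of an addition (as a pruned adder circuit might omit part of its carry chain to save energy) and see how small the resulting error is. This is purely illustrative - the conference work concerned actual circuit designs, not this toy function.

```python
def truncated_add(a: int, b: int, dropped_bits: int = 8) -> int:
    """Add two non-negative integers while ignoring their lowest
    `dropped_bits` bits, mimicking inexact adder hardware. Illustrative
    sketch only, not the researchers' actual design."""
    mask = ~((1 << dropped_bits) - 1)
    return (a & mask) + (b & mask)

exact = 123_456 + 654_321
approx = truncated_add(123_456, 654_321)
relative_error = abs(exact - approx) / exact   # well under 1%
```

For workloads like vision or audio that tolerate a little noise anyway, trading a fraction of a percent of accuracy for large energy savings is a bargain.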

Insert your Skynet/Terminator joke here.  Those never get old. But a free t-shirt to anyone who makes me laugh and/or comes up with one I have not heard before.

Citation: Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng, 'Building High-level Features Using Large Scale Unsupervised Learning', ICML 2012: 29th International Conference on Machine Learning, Edinburgh, Scotland, June 2012. arXiv:1112.6209v3 [cs.LG]

NOTES:

(1) The ratio of neuronal to non-neuronal cells in the human brain is from Dr. Suzana Herculano-Houzel, posted by a commenter on the Google+ page of Dr. Jeffrey Dean (Google X - and co-creator of too many cool things to count, including this cat project).