Growing new connections in your brain is more essential to learning than strengthening established connections. You may know how to be a tax lawyer because you've spent years strengthening your old tax law connections, but these are utterly useless when you take up ballroom dancing to find yourself a mate.

Bruce Lee teaching the Cha Cha.


Messes of neural connections get created in your brain as you Tango your way from Salsa to Swing. One day you might even take a fancy to improvising your own moves, breaking away from "the classical mess," as Bruce Lee himself put it. Learning canned martial arts, he realized, can get you into trouble in a real fight, because real fights are fluid.

The Numenta approach follows these Jeet Kune Do precepts. Their inspiration, that is to say, comes from the fluidity of biological computation, which we know works. This is entirely different from methods falling under the rubric of machine learning that are tasked with classifying features.

New data streams lead to new neural connections. Old neural pathways stay strong if exercised and otherwise weaken, like the Spanish you learned in high school many moons ago. Quiero una cerveza por favor. Merci!
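To make the use-it-or-lose-it idea concrete, here is a minimal sketch in Python. It is my own toy illustration, not anything from Numenta's codebase: connection strengths get reinforced when exercised and slowly decay when ignored.

```python
import numpy as np

def update_permanence(permanence, active, inc=0.05, dec=0.01):
    """permanence: array of connection strengths in [0, 1];
    active: boolean mask of connections exercised by the current input."""
    permanence = permanence + inc * active     # strengthen exercised pathways
    permanence = permanence - dec * (~active)  # let idle pathways fade
    return np.clip(permanence, 0.0, 1.0)

# High-school Spanish, rarely exercised, drifts back toward forgetting:
p = np.array([0.9, 0.9, 0.9])
for _ in range(50):
    p = update_permanence(p, active=np.array([False, False, False]))
print(p)  # all three connections have decayed noticeably
```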

It's nevertheless remarkable to pause and see for ourselves how seemingly lifelike classical deep learning techniques can appear given the brute power of today's computers. A 15-minute TED talk by Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, shows how annotations from nearly 50,000 people on tens of millions of pictures from the internet were used to train a fixed-architecture convolutional neural network (CNN) to produce rudimentary sentences about pictures. It's uncanny! But also lifeless.

A CNN

A cat on a table by a cake.
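To picture what that fixed-architecture pipeline looks like in code, here is a minimal sketch in PyTorch: a small CNN encodes the photo and a recurrent decoder emits a word sequence. The layer sizes, vocabulary, and toy data below are my own placeholders, not the actual Stanford model, but the shape of the thing is the same: fixed wiring trained end to end on human-annotated (picture, caption) pairs.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=256):
        super().__init__()
        # Fixed-architecture CNN encoder: the connections never change, only the weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # Recurrent decoder that turns the image code into a word sequence.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        h0 = self.encoder(images).unsqueeze(0)       # image code seeds the decoder state
        seq, _ = self.gru(self.embed(captions), h0)  # teacher-forced decoding during training
        return self.out(seq)                         # word scores at every position

# Training is ordinary supervised learning on (picture, human-written caption) pairs:
model = CaptionNet()
images = torch.randn(4, 3, 64, 64)                   # stand-in for annotated photos
captions = torch.randint(0, 1000, (4, 12))           # stand-in for tokenized sentences
logits = model(images, captions[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), captions[:, 1:].reshape(-1))
loss.backward()
```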


The Stanford machine can pick out cats by cakes and kites in the sky, partitioning photos into categories with spatial attributes – cat by cake on table. Yet ImageNet is temporally fractured. There are something like 70,000 temporally disjoint images of cats in ImageNet. This kind of training set is going to pose problems if we're trying to make machines act more like humans.
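To see what "temporally fractured" means in practice, consider how a standard ImageNet-style training loop handles its data. The file names and labels below are stand-ins of my own, but the shuffle is real and deliberate: whatever order the photos were taken in is thrown away, and each snapshot is judged on its own.

```python
import random

dataset = [("cat_0001.jpg", "cat"), ("cat_0002.jpg", "cat"),
           ("kite_0001.jpg", "kite"), ("cake_0001.jpg", "cake")]

for epoch in range(3):
    random.shuffle(dataset)            # order carries no meaning, by design
    for image_path, label in dataset:  # each example is an isolated snapshot
        pass                           # forward pass, loss, weight update go here
```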

You and I experience life as a continuous, streaming process only rarely interrupted by surprises. Taking time into account is yet another essential ingredient in the Numenta approach to cognition, one that, by definition, is absent from fixed-architecture networks. Weights change over time in fixed-architecture neural nets, as they might when a tax law is amended, but no new connections get grown the way they do on that dance floor, twirling hand in hand with someone new. It's being water, my friends, and that's important.
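Here is a rough sketch of that distinction, my own analogy rather than Numenta's implementation: a fixed-architecture net can only re-weight the edges it was built with, while a structurally plastic learner can also grow edges it has never had before.

```python
import numpy as np

class FixedNet:
    def __init__(self, n):
        self.weights = np.random.randn(n, n)  # topology frozen at construction

    def learn(self, grad, lr=0.01):
        self.weights -= lr * grad             # only the numbers on existing edges move

class PlasticNet:
    def __init__(self):
        self.edges = {}                        # (pre, post) -> strength, starts empty

    def learn(self, pre, post, inc=0.1):
        # a co-active pair that has never met before gets a brand-new connection
        self.edges[(pre, post)] = self.edges.get((pre, post), 0.0) + inc

fixed = FixedNet(4)
fixed.learn(np.ones((4, 4)))       # tax law amended: same wiring, new weights

plastic = PlasticNet()
plastic.learn("hand", "twirl")     # first dance lesson: a connection is grown
plastic.learn("hand", "twirl")     # second lesson: that connection strengthens
print(plastic.edges)               # {('hand', 'twirl'): 0.2}
```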

If you have another 15 minutes after watching Professor Fei-Fei Li's TED talk, the first 15 minutes of a YouTube presentation at IBM by Jeff Hawkins (founder of Numenta) should leave you little doubt that any hope for real machine cognition akin to biological computing can’t be based on fixed architectures. This isn't to say that fixed architecture methods in deep learning aren't useful.

There’s been loud criticism of the Numenta approach to biology-inspired machine cognition from Facebook’s Yann LeCun, handpicked by Zuckerberg to run Facebook’s AI lab. LeCun questions whether the Numenta approach is minimizing some objective function or not. My take from the Numenta talk at IBM is that nobody may ever really know, because the evolutionary process is not uniquely mappable to some function. Grandmaster Kasparov probably follows significantly different thinking than Grandmaster Karpov does. Is there commonality between these two champions? Probably – certainly more than with Deep Blue, the IBM computer that beat Kasparov back in 1997. Each human grandmaster is somehow intuiting maybe hundreds of good solutions from an astronomically large space of solutions, versus Deep Blue, which literally checks hundreds of millions of positions or more. In principle, the IBM chess-playing algorithm might be mapped to minimizing some objective function because it's based on a fixed architecture.
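To see why the Deep Blue side of the comparison maps so cleanly onto optimizing an explicit function, here is a generic minimax sketch over a toy game, not IBM's engine: brute-force search simply maximizes or minimizes a hand-written evaluation over every position it can reach.

```python
def minimax(position, depth, maximizing, evaluate, legal_moves, play):
    """Exhaustively scores `position` by searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)        # the objective function, written down explicitly
    scores = [minimax(play(position, m), depth - 1, not maximizing,
                      evaluate, legal_moves, play) for m in moves]
    return max(scores) if maximizing else min(scores)

# A toy game: each move adds 1, 2, or 3 to a running total, capped at 10.
def legal_moves(total):
    return [1, 2, 3] if total < 10 else []

def play(total, move):
    return total + move

def evaluate(total):
    return total                         # "board evaluation" is just the total

print(minimax(0, 4, True, evaluate, legal_moves, play))  # checks every line of play
```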

Even if the machine at Stanford currently outperforms Numenta at classifying ImageNet pictures, one has to consider the scale of the Stanford machine. The machine in the TED talk has 24 million nodes, 140 million parameters, and 15 billion fixed connections. This mismatch, however, is likely to end soon as Numenta joins forces with IBM: "IBM creates a research group to test Numenta, a brain-like AI software."
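For a back-of-the-envelope feel for how fixed architectures reach numbers like that, here is the arithmetic for a single fully connected layer. The layer sizes are made up for illustration, but the point stands: wiring every node to every node gets you to millions of parameters almost immediately.

```python
# One dense layer between two layers of 4,096 nodes each (sizes chosen for illustration):
inputs, outputs = 4096, 4096
weights = inputs * outputs + outputs   # every input wired to every output, plus biases
print(f"{weights:,} parameters in a single fully connected layer")  # 16,781,312
```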