Human vision is remarkably advanced. A human child can look at a cartoon picture of a chicken and immediately recognize it as a chicken; computers cannot. On a busy street we can also pick out cars, people, trees and lampposts almost instantaneously, and decide how to react, without conscious effort.

Doing that requires an enormous number of computations, which is one reason that building a computer system able to mimic the human brain's visual recognition of objects has proven so difficult. Eugenio Culurciello of Yale's School of Engineering & Applied Science has developed a machine, dubbed NeuFlow, that is modeled on the human visual system and operates far more quickly and efficiently than its predecessors, coming closer to the performance of the mammalian visual system.

The system uses vision algorithms developed by Yann LeCun at New York University to run large neural networks for synthetic vision applications. One such application is a system that would allow cars to drive themselves. To recognize the various objects encountered on the road, such as other cars, people, stoplights, sidewalks and the road itself, NeuFlow processes multiple high-resolution images in real time.
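
The article doesn't describe the algorithms in detail, but LeCun's synthetic-vision work is built around convolutional neural networks, which repeatedly filter, threshold and downsample an image to detect objects. The toy Python/NumPy sketch below (all filter sizes and names are illustrative, not NeuFlow's actual design) shows a single such layer and hints at why the workload is so heavy: even this one layer performs tens of millions of multiply-adds per frame, and a full network running on several high-resolution video streams quickly reaches billions of operations per second.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one filter over a grayscale image (valid cross-correlation)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def subsample(feature_map, factor=2):
    """Average-pool the feature map to reduce its resolution."""
    H, W = feature_map.shape
    H, W = H - H % factor, W - W % factor
    fm = feature_map[:H, :W]
    return fm.reshape(H // factor, factor, W // factor, factor).mean(axis=(1, 3))

# One layer of a toy convolutional network: filter bank -> nonlinearity -> pooling.
rng = np.random.default_rng(0)
image = rng.random((480, 640))            # stand-in for one video frame
filters = rng.standard_normal((8, 5, 5))  # 8 untrained 5x5 filters, purely illustrative

feature_maps = [subsample(np.tanh(conv2d(image, k))) for k in filters]
print(len(feature_maps), feature_maps[0].shape)  # 8 feature maps of 238 x 318
```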

The researchers say the system is also extremely efficient, performing more than 100 billion operations per second while drawing less power than a cell phone, a workload that bench-top computers with multiple graphics processors need more than 300 watts to handle.
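
Taking the article's figures at face value gives a rough sense of the efficiency gap. NeuFlow's exact power draw isn't stated beyond "less than a cell phone", so the few-watt budget in the sketch below is an assumption purely for illustration:

```python
# Back-of-envelope efficiency comparison using the figures in the article.
ops_per_second = 100e9        # "more than 100 billion operations per second"
neuflow_watts_assumed = 3.0   # assumption: a roughly cell-phone-class power budget
gpu_bench_watts = 300.0       # "more than 300 watts" for a multi-GPU bench-top machine

neuflow_gops_per_watt = ops_per_second / 1e9 / neuflow_watts_assumed
gpu_gops_per_watt = ops_per_second / 1e9 / gpu_bench_watts

print(f"NeuFlow (assumed {neuflow_watts_assumed:.0f} W): {neuflow_gops_per_watt:.0f} GOPS per watt")
print(f"Multi-GPU bench-top ({gpu_bench_watts:.0f} W): {gpu_gops_per_watt:.1f} GOPS per watt")
```

Under that assumed power budget, the chip would deliver on the order of a hundred times more operations per watt than the bench-top setup.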

NeuFlow is a supercomputer that mimics human vision to analyze complex environments, such as this street scene. Credit: Eugenio Culurciello/e-Lab

"One of our first prototypes of this system is already capable of outperforming graphic processors on vision tasks," Culurciello said.

Culurciello embedded the supercomputer on a single chip, making the system much smaller, yet more powerful and efficient, than full-scale computers. "The complete system is going to be no bigger than a wallet, so it could easily be embedded in cars and other places," Culurciello said.

Beyond autonomous car navigation, the system could be used to improve robot navigation in dangerous or difficult-to-reach locations, to provide 360-degree synthetic vision for soldiers in combat, or in assisted-living settings, where it could monitor motion and call for help if an elderly person falls, for example.

Culurciello presented the results Sept. 15 at the High Performance Embedded Computing (HPEC) workshop in Boston, Mass.