Heads up, banner-plane pilots: you, like so many before you, could lose your job to a sleek, 21st-century technology that not only performs better than you, but looks way cooler doing it.

A new project out of MIT’s SENSEable City Lab, called Flyfire, aims to create interactive, 3D displays in real space – not on a touch screen or with the help of gaudy glasses. The researchers created tiny, remote-controlled helicopters equipped with LEDs to act as single pixels, which can be maneuvered in real time. While current technology only lets them orchestrate a few heli-pixels at once, they can easily choreograph the pixels’ positions and colors to create designs ranging from simple shapes to flat photographs to fluid 3D presentations that can be experienced from any direction.

The lead researcher, E Roon Kang, likens the project to an effect often seen in old Disney cartoons, in which a swarm of bees forms words or objects, then acts collectively to fend off a honey-seeking invader. It’s also similar to pointillism, the art style in which tiny dots of color appear as a cohesive image when viewed from a distance.

With continued improvements, like making the pixels smaller and synchronizing more of them at once, the possibilities for Flyfire are endless: think interactive frog dissections in science lab, visualizations of outer space or molecular structures, or annoying billboards that come to life as you pass by.

A video demonstrating Flyfire’s potential is available here.
About a minute into the video, the pixels arrange themselves to depict the Mona Lisa. Surely there is no better way to demonstrate the viability of a new technology than to use it to recreate a 16th-century masterpiece. But the MIT group isn’t the first to do it.

About a year and a half ago, Mythbusters stars Adam Savage and Jamie Hyneman also recreated the Mona Lisa – with an 1,100-gun paintball machine. They were comparing CPUs and GPUs to illustrate sequential versus parallel processing. The “CPU” (a single paint gun) took about 20 seconds to draw a blue smiley face on a canvas across the stage. The “GPU,” on the other hand, recreated the Mona Lisa in a matter of milliseconds.
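The contrast Adam and Jamie staged can be sketched in a few lines of code. This is a loose simulation of my own, not anything from the show: each “paint gun” is a task with a fixed firing cost, and the sequential version fires one gun at a time while the parallel version fires many at once.

```python
import time
from concurrent.futures import ThreadPoolExecutor

NUM_PIXELS = 200  # hypothetical number of paint guns / pixels


def paint_pixel(i):
    """Simulate the fixed cost of firing one paint gun."""
    time.sleep(0.005)  # each shot takes ~5 ms
    return i


# Sequential ("CPU"): one gun fires at a time.
start = time.perf_counter()
sequential = [paint_pixel(i) for i in range(NUM_PIXELS)]
sequential_time = time.perf_counter() - start

# Parallel ("GPU"): many guns fire concurrently.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    parallel = list(pool.map(paint_pixel, range(NUM_PIXELS)))
parallel_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, parallel: {parallel_time:.2f}s")
```

Both versions paint the same picture; the parallel one just finishes far sooner, which is the whole point of the demonstration.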

A little more than four years ago, scientists in the Netherlands used their state-of-the-art emotion recognition software to determine that the Mona Lisa was 83 percent happy, 9 percent disgusted, 6 percent fearful and 2 percent angry. They acknowledged, of course, that the results were unscientific: they had no neutral Mona Lisa as a control image, and the software doesn’t pick up on subtler emotions like sexual suggestiveness.

Countless other groups have also modeled, analyzed, manipulated and recreated the Mona Lisa to demonstrate their technologies at work (like the 8×8-micron version created with nanolithography, seen to the right). And that’s not counting the plethora of amateur interpretations of the painting, made out of things like cups of coffee, motherboards, and train tickets.

I’m not sure why the Mona Lisa is the go-to image for showing off a new technology. Perhaps there is some unwritten code among scientists that someone will decipher another five centuries from now.

Have any other examples of scientific renditions of the Mona Lisa? Share them in the comments!