IBM takes data seriously, as seriously as they took Business Machines back in their early days.

They want to be the resource for the blanket concept of the Internet of Things. Someone will have to do it, because the amount of information available today is overwhelming. When you can produce 250 gigabytes of data an hour, you have too much data.

Or you are onto something big.

This idea of being able to parse big data and make meaningful sense of it was one of the cornerstones of the Science 2.0 concept when I started creating it in 2006, but it was unrealistic as an endeavor then. A few years ago I was at a conference on big data and wondered if marketing people might get to the data part of Science 2.0 before I could - marketing people have $100 billion at stake, and I knew even then that science labs were not going to pay $50,000 for a tool. It wouldn't be big advertising companies doing it, of course; they are as slow and monolithic as government-funded efforts. It would instead be someone who sells them the information they use to put on a show in conference rooms.

Of course, without collaboration and a way to connect all of the data that might be available, there isn't any value to big data in science, but connecting the data we have is vital to getting people comfortable with Science 2.0.

But how to do it? David Corrigan, Director of InfoSphere Product Marketing at IBM, thinks IBM is the way to go, and he takes some shots at open-source Hadoop along the way. One thing is clear: there remains a big hurdle between the big data buzzterm and the Science 2.0 vision. But at least people are working on it.