The Affordable Care Act and data portability requirements are forcing health care providers, and the vendors who service them, to accelerate development of tools that can handle an expected deluge of data about patients, providers and outcomes.

The volume of data is daunting; so are concerns about interoperability, security and the ability to adapt rapidly to the lessons in the data, writes Dana Gardner at Big Data Journal.

That is why Boundaryless Information Flow, Open Platform 3.0 adaptation, and security for the healthcare industry are headline topics for The Open Group’s upcoming event, Enabling Boundaryless Information Flow, on July 21 and 22 in Boston, he notes.

Solving the issue will take a combination of enterprise architecture, communication and collaboration among healthcare ecosystem players. It's no secret that Collaboration and Participation are the big missing puzzle pieces in the Science 2.0 mission.

What about informed consent in a world where, if estimates are correct, 90 percent of the world's data has been generated in the last two years? Has it become meaningless? Joe McNamee, executive director at European Digital Rights (EDRi), and Alex ‘Sandy’ Pentland, academic director of the MIT-Harvard-ODI Big Data and People Project, are having that discussion at debates.europeanvoice.com.

The discussion is happening in the context of a Facebook emotional manipulation study published in Proceedings of the National Academy of Sciences. The authors, from both academia and Facebook’s data science team, subtly changed almost 700,000 people’s Facebook feeds to show slightly more or less emotional content and see how it would affect what users posted.

Nigel Shadbolt, professor of artificial intelligence at the University of Southampton, sees the upside and likens it to the agricultural revolution. Sure, some worried there would be boom, bust and mass starvation (that is why the term Malthusian is still with us today), but that never happened; America alone now produces enough food to feed the world. Data, and therefore the science that must manage ever-larger volumes of it, have the same potential.

Software-Defined Storage may be part of that infrastructure. Traditional file- and block-level storage have gotten us to where we are, but the future of Science 2.0 may lie in object stores, software-defined storage mechanisms and new “data-defined” storage techniques.

File-level storage works well with traditional structured data on devices such as hard drives, but as data volumes increased, many customers found success using block-level storage, where data volumes are virtualized across groups of devices that make up a storage area network.
Object-based storage mechanisms, which break data away from file-based hierarchies, assign each object a unique identifier and complete the virtualization of data from the underlying storage device, could enable scalability that is theoretically unlimited.
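
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the object-store idea: each blob of data gets a unique identifier in a flat namespace, with no directory hierarchy tying it to a particular file or device. The class and method names are hypothetical, not any vendor's API.

```python
import uuid

# Illustrative sketch only: a toy in-memory "object store" showing the idea
# described above. Every object gets a unique identifier and lives in a flat
# namespace, decoupled from any file-system hierarchy or underlying device.
class ToyObjectStore:
    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata): a flat namespace

    def put(self, data, metadata=None):
        """Store a blob and return its unique identifier."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (data, metadata or {})
        return object_id

    def get(self, object_id):
        """Retrieve a blob by identifier; there are no paths or directories."""
        data, _metadata = self._objects[object_id]
        return data


store = ToyObjectStore()
oid = store.put(b"patient outcome record", {"study": "example"})
print(oid, store.get(oid))
```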

Since object stores typically run on clusters of commodity hardware, as opposed to the proprietary appliances behind the big-name SANs, they bring big cost benefits to the equation.
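
Many of these commodity-cluster object stores (Ceph's RADOS Gateway is one example) expose an S3-compatible API, so a standard client library can read and write objects by key. The sketch below uses Python's boto3 against a placeholder endpoint; the endpoint URL, bucket name and credentials are illustrative assumptions, not a specific deployment.

```python
import boto3

# Hedged example: assumes an S3-compatible object store reachable at the
# placeholder endpoint below; credentials and bucket name are illustrative.
s3 = boto3.client(
    "s3",
    endpoint_url="http://objectstore.example.org:9000",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write and read an object by key: a flat identifier, not a path on a disk.
s3.put_object(Bucket="research-data", Key="trial-results.csv",
              Body=b"id,outcome\n1,ok\n")
response = s3.get_object(Bucket="research-data", Key="trial-results.csv")
print(response["Body"].read().decode())
```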