Let's Start Using Our (Si) Brains


Scientists have sequenced the human genome -- the blueprint for all of the proteins in biology -- but how can we understand what these proteins do and how they work?

...
However, only knowing this sequence tells us little about what the protein does and how it does it. In order to carry out their function (e.g. as enzymes or antibodies), proteins must take on a particular shape, also known as a "fold." Thus, proteins are truly amazing machines: before they do their work, they assemble themselves! This self-assembly is called "folding." One of our project goals is to simulate protein folding in order to understand how proteins fold so quickly and reliably, and to learn about what happens when this process goes awry (when proteins misfold).
http://folding.stanford.edu/English/Science

The study of how proteins fold requires inordinate amounts of computing power.  Scientists at Stanford worked out a solution: they started a distributed computing project in which volunteers donated their spare CPU cycles.  Folding@home was born.  Later, observing that graphics cards represent a generally under-used resource, the team developed software to harness spare GPU power for the project as well.
In 2006, we began looking forward to another major advance in capabilities. This advance utilizes the new, high performance Graphics Processing Units (GPUs) from ATI to achieve performance previously only possible on supercomputers. With this new technology, as well as the new Cell processor in Sony's PlayStation 3, we will soon be able to attain performance on the 100 gigaflop scale per computer. With this new software and hardware, we will be able to push Folding@home a major step forward.
http://folding.stanford.edu/English/FAQ-ATI
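
The volunteer-computing pattern itself is easy to sketch.  The Python below is a toy illustration only - the function names and the 'work' are invented for this post, and the real Folding@home client is vastly more sophisticated - but it captures the loop every donated machine runs: fetch a work unit, crunch it with spare cycles, send the result home.

    import time

    def fetch_work_unit():
        # A real client downloads a work unit from the project's
        # servers; here we just hand back some numbers to sum.
        return range(1, 1_000_001)

    def compute(work_unit):
        # Stand-in for the actual science, e.g. one slice of a
        # protein folding simulation.
        return sum(work_unit)

    def upload_result(result):
        # A real client sends this back over the network.
        print("result ready for upload:", result)

    # The donation loop - run only while the machine is otherwise idle.
    for _ in range(3):
        work = fetch_work_unit()
        upload_result(compute(work))
        time.sleep(1)  # be polite before asking for more work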

There are many other areas in which the raw computing power needed has been found in the modern graphics card.  A recent paper entitled 'Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control'  by Sudhanshu Mathur and Sergei Morozov states:

The rapid increase in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields.

In the quest to satisfy insatiable demand for high-definition real-time 3D graphics rendering in the PC gaming market, Graphics Processing Units (GPUs) have evolved over the past decade far beyond simple video graphics adaptors.  Modern GPUs are not single processors but are rather programmable, highly parallel multi-core computing engines with supercomputer-level high performance floating point capability and memory bandwidth.
http://ideas.repec.org/p/pra/mprapa/16721.html


In the early days of computing, a computer was kept in a special room and only the elite were granted physical access.  Programs were written by and for scientists.  If you didn't know Fortran, you got handed a broom.  There were no operating systems, no word processors and - shudder - no GUI card games.

We have certainly come a long way since Alan Turing wrote the world's first programmers' manual.
Electronic computers are intended to carry out any definite rule of thumb process which could have been done by a human operator working in a disciplined but unintelligent manner. The electronic computer should however obtain its results very much more quickly.

Today there are millions of PCs around the world.  The great majority of them are under-utilised - especially when the user is reading or writing text files.


The Future

Now that we have gained some 'supercomputing' experience in combining the power of the CPU with any unused GPU power, I suggest a step forward.

There is a need for a new kind of operating system for the general user.  A bottom layer would virtualise the CPU cores together with the GPU cores and present them to an upper layer as a single entity.  This is rather like RAID software, which can present a rack of mixed hard drives - a JBOD, Just a Bunch Of Drives - to the computer user as a single very large drive.
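
Here is a minimal sketch of that bottom layer in Python, under one big simplifying assumption: it aggregates only CPU cores, because folding GPU cores into the same pool would need vendor-specific plumbing (CUDA, Stream, OpenCL and friends) that I'll hand-wave past.  The point is the shape of the interface - the caller sees one device, however many cores sit underneath.

    import os
    from concurrent.futures import ProcessPoolExecutor

    class ComputePool:
        """Present every available CPU core as one compute device.

        A full version of this layer would also enumerate GPU devices
        and route suitable work to them; that part is omitted here.
        """

        def __init__(self):
            self.cores = os.cpu_count() or 1

        def map(self, fn, data):
            # Fan the work out across all cores and gather the results
            # back in order; the caller never sees the individual cores,
            # just as a RAID user never sees the individual drives.
            with ProcessPoolExecutor(max_workers=self.cores) as pool:
                return list(pool.map(fn, data))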

We need a single JBOC - Just a Bunch Of CPUs.  That, I suggest, would make it easier for the average skilled programmer to write code that uses the machine's full raw computing power.
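
To see why, compare what the application code would look like.  Stock Python's process pool is today's nearest stand-in, and the promise of a JBOC layer is that these same few lines would transparently soak up GPU cores as well.  The climate-cell function below is invented purely for illustration.

    from concurrent.futures import ProcessPoolExecutor

    def simulate_cell(cell_id):
        # Placeholder work - imagine one grid cell of a climate model.
        return sum(i * i for i in range(100_000))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:   # "the bunch"
            results = list(pool.map(simulate_cell, range(1000)))
        print(len(results), "cells simulated")

The programmer writes an ordinary function and one map() call; the pool worries about how many processors exist and where they live.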

Just imagine a new version of Linux with this feature.  You could get the raw data from e.g. NASA and run your own climate model.  From 80 million years ago to last week.  All analysed in under 10 minutes.

Perhaps it's not such a good idea.  Can you imagine a world in which we have not one, not two, but ten million different flavours of hockey stick?

------------------------------------------------------------
Related / further reading:
http://profiles.nlm.nih.gov/KK/