As an electromagnetics guy I stay in touch with a lot of what is happening in that segment of physics by subscribing to plain ol' email lists. People who need info just fire off a question to the group and someone helps.
Occasionally recruiters spam the place because, you know, all of their recruiting emails are terribly important to the whole planet. When I checked my email this morning, I saw this:
Subject: [SI-LIST] Google: Hiring SI Engineers, Mountain View, CA
Google's Platforms group is looking for SI Engineers. Platforms has
many openings for Senior, Junior SI engineers. These positions are all
in Mountain View, CA. Relocation assistance is provided. Platforms is
a separate group of elite hardware and systems software engineers building
the next generation of supercomputers.
Please feel free to forward.
Google Platforms Recruiter
-Using analytical and simulation tools to define design requirements at
the system level, making use of both time and frequency domain tools for
simulation of high speed serial links (chip to chip and across the
backplane).
-Using 2D and 3D modeling tools you will play a key role in generating a
library of frequency dependent models for physical structures such as
transmission lines, vias, and connectors.
-Performing Power Integrity analysis and generating power delivery
requirements.
-Performing timing, clock distribution and voltage margin analysis and
generating requirements for placement and routing based on simulations.
-Correlating simulation results with laboratory measurements using high
speed Oscilloscopes, VNA, PNA, TDR, Bit Error Rate testers, and Spectrum
Analyzers.
Now, I am not sure she knows what she means when she says 'supercomputer,' and her use of the term echoes my general puzzlement about the company (of course, their billions in revenue per quarter mean they don't have to care what I think). I read their original paper from Stanford, and it always seemed to make sense to just build 150,000 machines, write a good algorithm, and go after the problem. I Googled (ha ha) 'Google supercomputer' and the only articles I saw were basically hyping the construction of another data center and calling it 'super.'
Today there are basically two ways of building a supercomputer: the vector approach and the scalar approach. The "For Dummies" version: a vector machine applies a single instruction to a whole array of data at once, while the scalar approach hooks many conventional processors together in a distributed fashion.
The distributed approach is what you commonly see today, built from clusters of smaller machines. The NEC Earth Simulator, as an example, is a vector machine, while the Red Storm machine from Cray is scalar.
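The difference between the two styles can be caricatured in a few lines of Python (a toy illustration only; real supercomputer code looks nothing like this, and the function names here are made up):

```python
import numpy as np

# Hypothetical sketch: compute alpha*a + b (a "saxpy") two ways.

def scalar_saxpy(a, b, alpha=2.0):
    # "Scalar" style: one multiply-add per loop iteration,
    # the way a conventional processor walks through the data.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = alpha * a[i] + b[i]
    return out

def vector_saxpy(a, b, alpha=2.0):
    # "Vector" style: a single expression over the entire array.
    # NumPy dispatches this to tight compiled loops; on real vector
    # hardware the same idea maps to one instruction over many elements.
    return alpha * a + b

a = np.arange(1000, dtype=np.float64)
b = np.ones(1000, dtype=np.float64)
assert np.allclose(scalar_saxpy(a, b), vector_saxpy(a, b))
```

Both produce identical answers; the vector form just expresses the whole operation at once, which is exactly what a vector machine is built to exploit.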
Google must be building a distributed machine, because they wouldn't need high-end signal integrity guys otherwise. All those processors hooked together over all those interconnects and channels means they have some concerns about electrical noise. For ordinary board design they could get by with EDA tools and time-domain/voltage engineers rather than frequency-domain people.
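To see why the frequency domain matters here, consider a back-of-the-envelope loss model for a copper trace (the coefficients below are made-up illustrative numbers, not anyone's real channel): skin-effect loss grows roughly like the square root of frequency, and dielectric loss grows linearly with it.

```python
import math

def channel_loss_db(f_ghz, length_in, k_skin=0.3, k_diel=0.15):
    """Rough insertion loss in dB for a trace of given length in inches.

    k_skin and k_diel are hypothetical per-inch loss coefficients:
    skin-effect loss scales as sqrt(f), dielectric loss scales as f.
    """
    return length_in * (k_skin * math.sqrt(f_ghz) + k_diel * f_ghz)

# A 10 inch backplane trace evaluated at 3.125 GHz, the Nyquist
# frequency of a 6.25 Gb/s serial link:
print(f"{channel_loss_db(3.125, 10.0):.1f} dB")
```

The point of the sketch: loss is not a single number but a curve over frequency, which is why the job posting asks for frequency dependent models of traces, vias, and connectors rather than just timing analysis.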
What they may be attempting is entirely different. The Red Storm supercomputer from Cray ("Thor's Hammer" - a supercomputer and also the name of a vodka) cost around $90 million and is in use at Sandia National Laboratories for nuclear simulation testing. Google may have reached a limit on how efficient they can make their algorithms and are instead focusing on how efficient they can make the processors that run them.
If they're just taking 10,000 Opterons and putting them in a scalar computer, there's no issue at all, and it explains the signal integrity engineers. If they are building these machines to learn how to design them, however, it means Google's next target could be ... IBM. And then it would be a reasonable jump to going after Intel. Am I going to talk to them? No, I am more interested in the next Google than the current one. A free lunch and ping-pong tables don't excite me all that much.
Data, phones, computers, ping-pong tables - are we sure they aren't the Empire?