In 2007, I had a series in my personal blog about technology in Star Trek. I remembered that series the other day, when I read an article in New Scientist about the science in Battlestar Galactica.[1]

The New Scientist article focuses on human physiology and psychology, and on gravity and g-forces. It doesn’t look at power, speed, computers, astronomical issues, or any of a number of other things that would have been fun to see covered. Oh, well.

But it made me think to come back to the idea of how technology is portrayed — in particular, how computers have been and are depicted in movies and television.

Star Trek, of course, and other futuristic stories, such as 2001: A Space Odyssey (which, at the time the movie was made,[2] was still set more than 30 years in the future), sported talking computers that showed varying degrees of intelligence. The Enterprise’s computer mostly responded to spoken commands and queries, but other computers in the Star Trek universe were semi-sentient to the point of being able to be confused or tricked, an angle that was central to the plot of more than one episode. And 2001’s HAL-9000, well... we all know what happened there, whether or not we understood what was happening.

But cinematic views of contemporary computers have always been somewhat odd, usually aimed at being obvious or flashy. In the old days, they always consisted of banks and banks of tape drives and flashing lights, long after real computers had few or none of those. As personal computers came around and more people had more realistic images of computers — boxes on their desks, rather than mysterious roomfuls of equipment — the tape-drives-and-lights depiction had to change.

Now, computers look like what we’re used to seeing, but what they can do seems just about unlimited. The stuff that’s comical now has to do with those limitless capabilities and the silly user interfaces.

Computers on television can search for anything, find anything, display anything. They can zoom in on the minutest details, rotate images in three dimensions to show any perspective, and go through millions or billions of data records or documents in no time at all. They understand commands or queries in human language, just as the Star Trek computer did, except we have to type the instructions, not speak them.

The interesting thing about that last bit is that it’s actually backward from what the real technology can do: we’re much better at having a computer turn the spoken language into the right words than we are at having the computer understand what those words and sentences mean. My former colleagues at IBM’s Watson Research Center have long had the ViaVoice products working quite well, but they’ve only recently begun a “grand challenge” project to get a computer to understand human-language questions well enough to play Jeopardy! competitively.

We often see police investigators zooming in on low-quality surveillance videos and “enhancing” a cropped portion in order to identify a face, read a sign or a car’s license plate, or the like. A certain amount of computer enhancement can, indeed, be done, and image-processing technology is getting better all the time. That said, for the most part what they’re doing is ridiculous. Image data can only be extrapolated to a point; information that isn’t there can’t be created out of nothing. A low-resolution image can’t magically become high-resolution with the aid of a computer, and if you zoom in on a 30-pixel-square portion of a grainy, one-megapixel security-camera image, you will never, with any computer, get a clear image of the suspect’s face.
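To make that concrete, here’s a minimal sketch (the pixel values are arbitrary, made-up numbers) of nearest-neighbor upscaling, the simplest form of the interpolation real enhancement tools build on. However fancy the interpolation, every output pixel is computed from the same few input pixels, so the enlarged image contains no detail the original lacked:

```python
def upscale_nearest(image, factor):
    """Enlarge a 2-D grid of pixel values by repeating each pixel."""
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

# A tiny 2x2 "crop" of a grainy frame: four brightness values.
crop = [[10, 200],
        [90, 40]]

# Zoom in 4x: the result is 8x8, but still holds only four distinct values.
enlarged = upscale_nearest(crop, 4)

distinct = {pixel for row in enlarged for pixel in row}
print(sorted(distinct))  # prints [10, 40, 90, 200] -- nothing new appeared
```

Smoother interpolation (bilinear, bicubic) would blend those four values into gradients, which looks better but is still just a weighted average of what was already there.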

The same goes for the 3-D rotation: such manipulation is possible, and it’s done all the time when the 3-D data are available. But a two-dimensional source does not have that information, and, beyond approximation and guesswork, such an image can’t be rotated to show a side view.
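The reason the depth information is unrecoverable can be shown in a few lines. This is a sketch of an idealized pinhole projection with an arbitrary focal length and made-up points: projection divides away the depth coordinate, so distinct 3-D points land on the same 2-D pixel and can never be told apart from the image alone.

```python
FOCAL = 1.0  # arbitrary focal length for this illustration

def project(x, y, z):
    """Idealized pinhole projection of a 3-D point onto the image plane."""
    return (FOCAL * x / z, FOCAL * y / z)

near = (1.0, 2.0, 4.0)
far = (2.0, 4.0, 8.0)  # twice as far away and twice as large

print(project(*near))  # (0.25, 0.5)
print(project(*far))   # (0.25, 0.5) -- same pixel; the depth is gone
```

Rotating the scene to a side view would require exactly the coordinate that the projection threw away, which is why it can only be approximated by guesswork.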

When was the last time you did a search on your computer and wound up with a large blinking, beeping box in the middle of the screen, saying, “NO MATCH FOUND”? A pop-up box with an “OK” button on it, maybe, but we just don’t have them blink and beep repeatedly.

The other thing we don’t have them do is display the thing to be searched for — often a face or a fingerprint — on the left side of the screen, while rapidly flashing all the unmatched images we’re searching through on the right side. That may look cool on TV (which is why they do it), but in reality it would slow the search down so much that it’d be entirely useless. No one would ever design a real search program that did that.
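A back-of-envelope calculation shows just how useless it would be. The rates below are illustrative assumptions, not measurements: suppose each unmatched face must stay on screen for at least one video frame, while an in-memory comparison takes on the order of a microsecond.

```python
# Hypothetical rates for illustration only.
RECORDS = 1_000_000
FRAME_SECONDS = 1 / 30     # one displayed frame per flashed candidate
COMPARE_SECONDS = 1e-6     # assumed in-memory comparison per candidate

display_hours = RECORDS * FRAME_SECONDS / 3600
compare_seconds = RECORDS * COMPARE_SECONDS

print(f"flashing every candidate: {display_hours:.1f} hours")   # ~9.3 hours
print(f"comparing in memory:      {compare_seconds:.1f} second(s)")  # ~1 second
```

Roughly nine hours versus about a second — and that’s with each face shown too briefly for any human to actually see it.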

Finally, in the movies people always seem able to go up to any computer, start any program, and use it expertly. They can even do this with special-purpose computers, not just ones that run Windows or Unix or MacOS. To address the most ridiculous case that comes to my mind: it would simply not be possible to connect your laptop to a space-alien’s computer and upload a computer virus that would take out the computer system and defeat the aliens.

Suspension of disbelief has its limits.

[1] The recent, well-received series, of course, with Edward James Olmos, not the horrible, short-lived one from the late ’70s, with Lorne Greene.

[2] And, for the record, the book came from the movie’s screenplay, not the other way around.