Last week, the L.A. Times was able to post an article about an earthquake 3 minutes after it happened.

No human could write that fast. And no human did.

Instead, the story was generated by a computer algorithm called Quakebot, created by L.A. Times journalist and programmer Ken Schwencke. All it needed was statistics and clichés. It seems a robo-reporter can write about earthquakes as well as anyone, and Schwencke, who filed the report, proved it. It's meant to be supplemental, Schwencke told Will Oremus at Slate, but, come on, who believes that stuff about how a robot will not replace a journalist? Schwencke might as well be the Neanderthal inviting the first Cro-Magnon to sit at the campfire.


Can you build a version of Quakebot for science articles? If so, write me. We will be retired in the islands by this time next year if you can. Image credit and link: Slate/iurii/Shutterstock.com
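
For anyone tempted by that challenge, the earthquake version is not mysterious: press accounts describe Quakebot as listening to the U.S. Geological Survey's alert feed and pouring the numbers into a pre-written template. Quakebot's own code isn't public, so here is a minimal sketch of that approach in Python, assuming the public USGS GeoJSON feed (the feed URL and its fields are real; the template, magnitude threshold, and function name are my own illustrations):

```python
# A minimal sketch of the template-filling approach attributed to Quakebot:
# pull structured data from the public USGS GeoJSON feed and pour it into a
# canned sentence. The feed URL is real; the template and threshold are
# illustrative, not Quakebot's actual code.
import json
import urllib.request
from datetime import datetime, timezone

USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

TEMPLATE = (
    "A magnitude {mag:.1f} earthquake struck {place} at {time}, "
    "according to the U.S. Geological Survey. Details: {url}"
)

def latest_quake_story(min_magnitude=3.0):
    """Return a one-sentence story for the newest quake above the threshold."""
    with urllib.request.urlopen(USGS_FEED) as resp:
        feed = json.load(resp)
    for feature in feed["features"]:  # the feed lists events newest-first
        props = feature["properties"]
        if props["mag"] is not None and props["mag"] >= min_magnitude:
            # USGS timestamps are milliseconds since the epoch, UTC
            when = datetime.fromtimestamp(props["time"] / 1000, tz=timezone.utc)
            return TEMPLATE.format(
                mag=props["mag"],
                place=props["place"],
                time=when.strftime("%H:%M UTC on %B %d"),
                url=props["url"],
            )
    return None  # nothing newsworthy in the last hour

if __name__ == "__main__":
    print(latest_quake_story() or "No quake above the threshold this hour.")
```

Statistics in, cliché out. The hard part for a science version is that journal papers don't arrive as tidy key-value pairs the way seismograph readings do.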

Singularity Hub's Jason Dorrier talked to Kristian Hammond, a Northwestern University professor of computer science and co-founder of Narrative Science, and Hammond is not waffling. He predicts that 90 percent of the news could be written by computers by 2030 and that computer-written work will be of Pulitzer Prize quality by 2017. We have already seen that it doesn't take facts or real people or even real events to get a Pulitzer Prize; now it won't even take fake stories written by real journalists.

So what? Studies have found that the public can't tell the difference. And doesn't care.


Figure: respondents' assessments of the origin of the text (software or journalist). N = 45 (one answer missing). DOI: 10.1080/17512786.2014.883116

Will it be any different for science journalism? Most of the public just wants to know what is going on; they don't need 'context'. Long-form writing will always be around but, like Vaudeville or doo-wop music, it may not be what the bulk of the people want.

Seriously, if you are a savvy programmer with an understanding of big data and can go beyond something canned like earthquakes or sports, let's work on it. We'll basically be printing money by next year.