I haven't been on this site in a while, and in looking back at a past article it strikes me that some of my previously posted views have changed. (Congratulations, right?) Normally, this is the kind of thing I'd tuck away in some cortical sulcus, but I think it's worth posting because it's gotten me thinking about some bigger questions. My post on commercial brain-computer interfaces from November of last year meandered around that peculiar industry for a bit and ultimately examined a study in which the authors discuss a proof-of-concept for pulling out volunteers' banking locations, ATM PINs, and the like. I ended the piece with these thoughts:
Give decoders another ten years and they will likely know what you are thinking about, too. Sure, the claim is unnecessarily inflammatory, but buried beneath the sensationalism is a kernel of scientific truth that is difficult to ignore. Open-sourcing the software (and hardware—any institutionally affiliated researcher can pick up an Emotiv headset for free) is an invitation to experimentation. Technology is only going to improve.
Which, in hindsight, is a little too fear-mongering for my tastes. Granted, this was before news about the NSA broke, so maybe some of the apprehension was justified. But I'm not happy having left the essay on that note. Recently, I've been working on a piece on Ramirez et al.'s false memory paper. The study, which describes experiments in which the researchers create artificial fear memories in mice, certainly lends itself to the same dystopian fears. What strikes me as hugely important, though—and this is so obvious it's nearly a truism—is that it doesn't do anyone any good to pander to that shade of journalistic yellow. Especially when it comes to exciting, non-malicious neuroscience. Alarmist posts are an insidious, backwards excuse for science writing. (If you're looking for something to make you angry, take a look at this article, which was shared nearly 30k times, and its accompanying comments section.)

As science communicators, it is our job to inform, excite, and turn peer review into public review. The latter clause is, I charge, the hardest to accomplish and the easiest to exploit. I just graduated from university, and I'm still working out how this field works. If I've learned anything from the best (and worst) science writers, it's that integrity is absolutely paramount.

The previous claims might imply that I favor objectivity over subjectivity in science writing, but that's not completely true. If we strive for ultimate objectivity, our work is not only dulled, but stripped of its human element. Of course, one can (and should) write objectively about findings without becoming a bromide, but by 'ultimate objectivity' here I'm invoking something else. Ultimate objectivity would mean a lack of extrapolation—any interpretations beyond those of a given study's authors tend to verge on subjectivity. Sometimes even scientists have trouble drawing the line. Which, I argue, is a good thing. Subjectivity, here, is what gives science some of its wonder. Our ability to extrapolate allows us to reflect on the real-world implications of scientific studies, and it can personify bench work that would otherwise go unnoticed or be misunderstood. But this ability is wasted if we don't append the necessary qualitative confidence intervals.

In an effort to ground this a little, let me bring the conversation back to the fact that I've changed my mind about a previously published piece. I feel okay about that. It doesn't feel contradictory; it feels like growth. Working on the new essay and comparing it to the previous piece on a similar subject has given me the opportunity to learn a bit more about how I respond to science. Orwell and Inception still seem like good popular culture touchstones, and I'll likely allude to them again. Inciting some looming, amorphous fear is something I won't do.

If a writer can make science relatable to everyday life, it's probably worth doing. It's not always possible. There are few policy implications that might be inspired by the discovery of a new supernova. For discoveries that are closer to home, though, discussing how science affects us seems like an outright duty. One of the risks of interpretation and extrapolation is occasionally being wrong, but I think it's a risk worth taking. Too often, science is billed as dehumanizing. To me, it seems like a good way to protest that viewpoint is to, well, humanize our work as scientists and science writers. This means thinking about what science means in the day-to-day, making and qualifying predictions, reflecting on our mistakes and evolving views, and above all, acknowledging that science and writing are conducted by people.