I recently wrote about research that used fMRI to see which parts of the brain are activated when people judge statements to be true or false. Near the end of their paper, the researchers commented that such results could be useful as a lie-detection technique: the differences in brain activity between perceiving truth and perceiving falsehood seemed marked enough to warrant putting them to practical, real-world use. Indeed, there are already two companies, No Lie MRI and Cephos, promoting their services to customers who need to prove they are telling the truth. The difficulty is that lying appears to be a complex activity.

The first fMRI scan to be used as evidence in an American court appeared at the end of 2009. However, it was used in an attempt to show that the defendant had a psychiatric condition, rather than to establish the truth of what physically took place. The accused was still found guilty, but a precedent has now been set for the admissibility of fMRI data as evidence.

The Cephos website includes a page of scientific references on using fMRI to detect lying. It does not include the Harris et al. paper I wrote about. Some of the papers implicate brain areas similar to those found for disbelief, such as parts of the prefrontal cortex. This opens up the alarming prospect that someone who is genuinely telling the truth, but who reacts negatively to the statement as presented, could show a profile similar to that of someone lying. Indeed, the rate of false positives still seems too high for comfort. Cephos is clearly aware of this, as it takes great care to craft unambiguous questions tailored to each specific case.
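To get a feel for why false positives matter so much, here is a rough back-of-the-envelope sketch. The sensitivity, specificity and base-rate figures in it are purely illustrative assumptions of mine, not numbers from Cephos, No Lie MRI or any of the cited studies; the point is simply that even a scan that is right nine times out of ten can leave only a coin-flip chance that a flagged person was actually lying.

```python
# Back-of-the-envelope illustration of why false positives matter in lie detection.
# All figures below are illustrative assumptions, not taken from Cephos, No Lie MRI,
# or any of the studies mentioned in the post.

def positive_predictive_value(sensitivity, specificity, base_rate):
    """Probability that a 'deceptive' result really comes from a liar."""
    true_positives = sensitivity * base_rate            # liars correctly flagged
    false_positives = (1 - specificity) * (1 - base_rate)  # truth-tellers wrongly flagged
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Suppose the scan flags 90% of liars (sensitivity), clears 90% of truth-tellers
    # (specificity), and only 1 in 10 people screened is actually lying.
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, base_rate=0.10)
    print(f"Chance a flagged person is actually lying: {ppv:.0%}")  # prints ~50%
```

Shift the assumed base rate of lying and the picture changes dramatically, which is exactly why careful framing of the questions, and of who gets scanned, matters so much.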

But one real problem that research on lying and deception has to solve is designing experiments in which participants have a genuine motivation to lie. Deception is, as I said, a complex process of knowing the truth inwardly while trying to give the appearance of a different truth. And researchers are finding it difficult to create the real motivation and real rewards behind the kind of big, whopping lies one would expect from a guilty defendant trying to prove innocence.

I have little doubt that, as the techniques are refined, we shall see attempts to submit fMRI lie tests as evidence in court. However, even the widely known polygraph is not admissible in a US court (apart from in New Mexico), so what chance does fledgling fMRI evidence have? Such questions are part of what has become known as Neurolaw. For the moment, the judges still rule over what can and cannot be admitted as evidence. But looking over the horizon, if the neurosciences eventually succeed in mind-reading, then trials will be little more than a quick rewind to the scene of the alleged crime as experienced by the individual. At that point, will law become a branch of science?