Do you have a soul?

Even if you do, psychologists say, free will comes down to the capacity for conscious choice, not the soul.

This is nothing new. Religious thinkers of 400 B.C. rejected atomic determinism because they believed in free will, just as religious people rejected the genetic determinism of the early 20th century. To scholars, there has always been a difference between mind and soul, but an article in Consciousness and Cognition rehashes these ancient metaphysical arguments using results from Amazon Mechanical Turk volunteers, which means it is not a representative sample.

We've seen this dichotomy in action so often that it hardly seems to merit analysis in 2014. A deeply religious person may be a ruthless businessperson and an atheist may work at Doctors Without Borders. Metaphysics has little to do with how people assess each other's behavior.

"Neuroscience is no threat at all to this concept of choice," says lead author Andrew Monroe, a postdoctoral researcher now at Florida State University. 

"I find it relieving to know that whether you believe in a soul or not, or have a religion or not, or an assumption about how the universe works, that has very little bearing on how you act as a member of the social community," said Bertram Malle, professor of cognitive, linguistic and psychological sciences at Brown University and senior author of the new study. "In a sense, what unites us across all these assumptions is we see others as intentional beings who can make choices, and we blame them on the basis of that."

Figure: Results from online volunteers. The perception that an agent had choice, not whether the agent had a soul, predicted whether the agent was perceived to have free will. Credit: Malle Lab/Brown University

Free will, quantified


To quantify whether people define free will as being metaphysical (as derived from the soul) or psychological (as derived from a mental capacity for independent, intentional choice), Monroe, Malle, and Kyle Dillon of Harvard University conducted two experiments.

In the first trial, 197 demographically diverse Amazon Mechanical Turk volunteers considered the rule-breaking actions of a randomly assigned character, or "agent." The cast included a normal human, an "akratic" human unable to use his thoughts to control his actions, a cyborg with a human brain in a mechanical body, an artificial intelligence in a human body, and an advanced robot.

Participants read about the agent and seven transgressions of varying seriousness and then rated the blame the agent deserved for each. The volunteers then answered questions about the agent's capacities, such as its ability to choose and to form intentions, and whether it had a soul.

The results showed a clear difference between having a soul and having free will. Volunteers generally said each human agent (normal or akratic) had a soul, but only said the normal human had free will. Meanwhile they resoundingly said the cyborg with a human brain had free will but generally did not believe it to have a soul.

When it came to blame, people judged the normal human and the cyborg (the two with a mind that had the ability to make choices) most harshly. The akratic human (despite having a soul in the estimation of most) and the entirely artificial robot received the least blame.

Statistically, the capacities that most predicted whether volunteers said an agent had free will and should be blamed for wrong actions were the ability to make a choice with intentionality and being judged as free from control of others. Having a soul was a poor predictor of being seen as having free will or meriting blame.
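As an illustration only, the kind of comparison described above can be sketched as two simple regressions: one predicting free-will ratings from the psychological capacities (choice and intentionality) and one predicting them from soul attribution, with explained variance as the measure of how well each set of predictors does. This is a minimal sketch with hypothetical, simulated data and variable names, not the authors' actual analysis code.

```python
# Illustrative sketch only: hypothetical simulated data, not the study's
# analysis. It shows how "capacity for choice predicts free-will ratings
# better than soul attribution" could be quantified by comparing R^2.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 197  # sample size of the first experiment

# Hypothetical per-rating scores on 0-1 scales
choice = rng.uniform(0, 1, n)       # perceived capacity to choose
intention = rng.uniform(0, 1, n)    # perceived capacity to form intentions
soul = rng.uniform(0, 1, n)         # perceived possession of a soul

# Simulate free-will ratings driven mostly by choice/intention, barely by soul
free_will = 0.6 * choice + 0.3 * intention + 0.05 * soul + rng.normal(0, 0.1, n)

X_psych = np.column_stack([choice, intention])
X_soul = soul.reshape(-1, 1)

psych_model = LinearRegression().fit(X_psych, free_will)
soul_model = LinearRegression().fit(X_soul, free_will)

print("R^2 from psychological capacities:", psych_model.score(X_psych, free_will))
print("R^2 from soul attribution:", soul_model.score(X_soul, free_will))
```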

"The thing that seems to be most important, and that people do extremely reliably, is that they care about an agent's capacity for choice-making," Monroe said.


Little role for the soul


The second experiment, conducted with 124 online volunteers who had not taken part in the first, ran much the same way but with important differences. In this case the cast of agents explicitly embodied the four combinations of soul and choice: normal humans had a soul and the ability to choose, robots had neither, akratic humans had a soul but no choice, and cyborgs had choice but no soul.

This experiment explicitly asked participants whether they believed in souls: 68 percent said they did, and participants were moderately religious on average, scoring 2.1 on a 0-to-4 scale.

Again, however, the characteristics that best predicted whether people judged the different agents to have free will or to be worthy of blame were the psychological ones of choice and intentionality. The soul's statistical role in predicting assessments of free will was only 7 percent, and its influence on the degree of blame was zero.

In the statistical models, a shared notion of metaphysical and psychological capacities contributed some predictive value, but further analysis determined that it came almost entirely from the robot, which had neither a soul nor the ability to choose and therefore bore no free will or blame by any criterion.
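One common way to express a predictor's unique "statistical role," consistent with the 7 percent figure mentioned above, is the change in explained variance when that predictor is added to a model that already contains the others; a similar check, refitting after excluding the robot agent, would reveal whether any remaining shared predictive value traces back to that one character. The sketch below again uses hypothetical simulated data, not the study's analysis.

```python
# Illustrative sketch only: hypothetical simulated data, not the study's
# analysis. It reads a predictor's "statistical role" as the change in R^2
# (delta R^2) when that predictor is added to the model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 124  # sample size of the second experiment

choice = rng.uniform(0, 1, n)   # perceived capacity for intentional choice
soul = rng.uniform(0, 1, n)     # attributed possession of a soul
free_will = 0.8 * choice + 0.05 * soul + rng.normal(0, 0.1, n)

X_without_soul = choice.reshape(-1, 1)
X_with_soul = np.column_stack([choice, soul])

r2_without = LinearRegression().fit(X_without_soul, free_will).score(X_without_soul, free_will)
r2_with = LinearRegression().fit(X_with_soul, free_will).score(X_with_soul, free_will)

print("delta R^2 attributable to soul:", r2_with - r2_without)
```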

The findings suggest that the concept of a soul, while widely held, is not readily applied in day-to-day situations, Malle said.

It also suggests that people could come to regard non-humans as having free will if they come to believe that those actors (for example, a sufficiently sophisticated robot) have the capacity for independent, intentional choice. Malle recently entered a collaboration studying whether robots can be infused with a sense of right and wrong.

Monroe is now studying the information processing that underlies how people make moral judgments and update those judgments in response to new information.

Citation: Andrew E. Monroe, Kyle D. Dillon, Bertram F. Malle, 'Bringing free will down to Earth: People's psychological concept of free will and its role in moral judgment', Consciousness and Cognition, Volume 27, July 2014, Pages 100–108. DOI: 10.1016/j.concog.2014.04.011. Source: Brown University