Yes, yes, we’ve covered this territory before. But you might have heard that Sam Harris has reopened the discussion by challenging his critics, luring them out of their hiding places with the offer of cold hard cash. You see, even though Sam has received plenty of devastating criticism in print and other venues for the thesis he presents in The Moral Landscape (roughly: there is no distinction between facts and values, hence science is the way to answer moral questions), he is — not surprisingly — unconvinced. Hence the somewhat gimmicky challenge.

We’ll see how that one goes; I already have my entry ready (though the submission period doesn’t open until February 2nd).

Be that as it may, I’d like to engage my own thoughtful readers with a different type of challenge (sorry, no cash!), one from which I hope we can all learn something as the discussion unfolds. It seems to me pretty obvious (but I could be wrong) that there are plenty of ethical issues that simply cannot be settled by science, so I’m going to give a few examples below and ask all of you to: a) provide more and/or b) argue that I am mistaken, and that these questions really can be answered scientifically.

Before we proceed, however, let’s be clear on what the target actually is. I have summarized above what I take Harris’ position to be, and I have previously articulated what I think the proper contrast to his approach is: ethics is about reasoning (in what I would characterize as a philosophical manner) on problems that arise when we consider moral value judgments. This reasoning is informed by empirical evidence (broadly construed, including what can properly be considered science, but also everyday experience), but it is underdetermined by it.

This may be taken to be somewhat out of sync with Harris’ attempt, because he is notoriously equivocal about what he means by “science.” At one point (in an endnote of the book) he claims that science encompasses every activity that uses empirical facts, not just the stuff of biology, chemistry, physics, neuroscience, and so on. But if that is the case, then his claim comes perilously close to being empty: of course facts understood so broadly are going to be a crucial part of any ethical discussion, so what?

Therefore, for the purposes of this discussion I will make what I take to be a commonsensical (except in Harris’ world) distinction between scientific facts (i.e., the results of systematic observations and experiments, usually embedded in a particular theoretical framework) and factual common knowledge (e.g., the no. 6 subway line in New York City stops at 77th St. and Lexington). If you don’t accept this distinction (even approximately) then you “win” the debate by default and there is nothing interesting to be said. (Actually, no, you still lose, because I can do one better: I arbitrarily redefine philosophizing as the activity of thinking, which means that we all do philosophy all the time, and that the answer to any question, not just moral ones, is therefore by definition philosophical. So there.)

I also need to make a comment about the other recent major supporter of the view that I’m criticizing: Michael Shermer. To be honest, I still don’t know exactly what Michael’s position is on this, even though I asked him explicitly on more than one occasion. At times he sounds pretty much like Harris (whom he openly admires). But if that’s the case, then one wonders why Shermer feels compelled to write another book on the relationship between science and morality, as he is reportedly doing. At other times Michael seems to be saying that both science and philosophy are needed for a comprehensive understanding of morality — both in terms of its nature and when it comes to applications of moral reasoning to actual problems. But if that is what he means, then no serious philosopher would disagree. So, again, why write a whole book to elucidate the obvious?

Anyway, let’s get down to business with a few examples of ethical questions that I think make my point (many others can be found in both recent books by Michael Sandel). (Entries are in no particular order, by the way.)

1. Should felons be denied their full rights as citizens after time served? Most US states (the exceptions are Maine and Vermont) prohibit convicted felons from voting while they are serving their sentence. This, it seems to me, is relatively easy to defend: being a convicted felon entails that you lose some (though certainly not all) of your rights, and one can make an argument that voting should fall into the category of suspended rights for incarcerated individuals, just like liberty itself. More controversial, however, is the idea of disenfranchising former convicts, which is in fact the case in nine states, with three of them (Florida, Kentucky, and Virginia) imposing a lifelong ban from voting.

Is this right? How would science answer the question?

One can’t just say, “well, let’s measure the consequences of allowing or not allowing the vote and decide empirically.” What consequences are we going to measure, and why? And why are consequences the ultimate arbiter here anyway? Consequentialism is famously inimical to the very concept of rights, so one would first have to defend the adoption of a consequentialist approach, which, needless to say, is a philosophical, not an empirical, matter.

2. Is it right to buy one’s place in a queue? This example comes straight from Sandel’s What Money Can’t Buy (hint hint), and there are several real-life examples that instantiate it. For instance, lobbying firms in Washington, DC pay homeless people to stand in line on their behalf in order to gain otherwise limited access to Congressional hearings. Yes, on the one hand this does some good for the homeless (even if one sets aside issues of dignity). But on the other hand the practice defeats the very purpose of a queue, which is to allow people who care enough to get ahead of others because they are willing to make a personal sacrifice in terms of their own time.

Even more importantly, as Sandel argues, the practice undermines the point of public hearings in Congress, which are vital for our democracy: instead of being truly open to the public, they become a near-monopoly of special interests with lots of money. Again, what sort of experiment could a neurobiologist, a chemist, or even a social scientist carry out in order to settle the question on exclusively empirical grounds?

3. Should discrimination (by sex, gender, religion, or ethnicity) be allowed? This may seem like an easy one, but even here it is hard to see what an empirical answer would look like. What if, for instance, social and economic research were to show that societies that provide disincentives to women in the workplace (in order to keep them at home raising children) fare better (economically, and perhaps in other respects) than societies that strive for equality? Such a scenario is not far-fetched at all, but I would hope that most of my readers would reject the very possibility out of hand.

It wouldn’t be right (insert philosophical argument about rights, individuals and groups here) to sacrifice an entire class of people in order to improve societal performance in certain respects. And, of course, there is the issue of why (according to which more or less hidden values?) we picked those particular indicators of societal success rather than others.

4. Those darn trolley dilemmas! I doubt there is need for me to rehash the famous trolley scenarios that can be found in pretty much any book or article on ethics these days. But it is worth considering that those allegedly highly artificial thought experiments actually have a number of real-life analogues, for instance in the case of decisions to be made in hospital emergency rooms, or on the battlefield. Regardless, the point of the trolley thought experiments is that the empirical facts are clearly spelled out (and they don’t require anything as lofty as “scientific” knowledge), and yet we can still have reasonable discussions about what is the right thing to do.

Even people who mindlessly choose to “pull the lever” or “throw the guy off the bridge,” following the simple calculus that saving five lives at the cost of one is the obviously right thing to do, quickly run into trouble when faced with reasoned objections. For instance, what about the analogous case of an emergency room doctor who has five patients, all about to die because of the failure of a (different) vital organ? Why shouldn’t the doctor pick a person at random from the streets, cut him up, and “donate” his five vital organs to the others? You lose one, you save five, just as with the trolleys. And yet, a real-life doctor who acted that way would go straight to jail and would surely be regarded as a psychopath.

5. How do we deal with collective responsibility? Another of Sandel’s examples (this one from Justice). He discusses several cases of apologies and reparations by entire groups to other groups, cases that are both complex and disturbingly common. Examples cited by Sandel include the Japanese non-apology for wartime atrocities that took place in the 1930s and 40s, including the coercion of women into sexual slavery for the benefit of its officers; or the apologies of the Australian government to the indigenous people of that continent; or the reparations of the American government to former slaves or to native Americans. The list goes on and on and on.

What sort of scientific input would settle these matters?

Yes, we need to know the facts on the ground insofar as they are ascertainable, but beyond that the debate concerns the balance between collective and individual responsibility, made particularly difficult by the fact that many of these cases extend across generations: the people who are apologizing or providing material reparations are not those who committed the crimes or injustices; nor are the beneficiaries of such apologies or material help the people who originally suffered the wrongs. These are delicate matters, and the answers are far from straightforward. But to boldly state that such answers require no philosophical reasoning seems just bizarre to me.

Of course, in all of the above cases “facts” do enter into the picture. After all, ethical reasoning is practical; it isn’t a matter of abstract mathematics or logic. We need to know the basic facts about felonies, voting, queues, Congressional hearings, sex / gender / religion / ethnicity, trolleys, war crimes, genocide, and slavery. From time to time we even need to know truly scientific facts in order to reason about ethics.

My favorite example is the abortion debate: suppose we agree (after much, ahem, philosophical deliberation) that it is reasonable to allow abortion only up to the point at which fetuses begin to feel pain (perhaps with a number of explicitly stated exceptions, such as when the life of the mother is in danger). Then we need to turn to developmental biologists and neurobiologists in order to get the best estimate of where that line lies, which means that science does play a role in that sort of case.

Very well, gentle reader. It is now up to you: what other examples along the lines sketched above can you think of? Or, alternatively, can you argue that science (in the sense defined above) is all we need to make moral progress?

Originally on Rationally Speaking, October 11, 2013