A chainsaw, sporting all the safety interlocks, might still kill you if you use it carelessly. You’re self-confident and you suffer the usual optimism bias. Do you buy the chainsaw?

A driverless car (autonomous vehicle, or AV) is programmed to kill you under certain conditions. It’s clear, up front, that those conditions will be beyond your control; your self-confidence is irrelevant. Will you pay for this machine? Is the manufacturer insane to believe you would buy your own possible execution?

The second scenario refers to the Trolley Problem – the (usually) hypothetical dilemma in which you must choose between sacrificing your own life in a traffic situation and killing one or more pedestrians. Recent articles have put forward the view that the Trolley Problem will inevitably be faced by AVs, and must be considered in their programming.

I believe the two scenarios describe two different psychological situations and lead to two different customer decisions. As the question seems urgent – all signs point to AVs appearing in the marketplace soon and inevitably – I have asked a guest blogger to add his views to mine.

Scenario 1 is a common situation. We often buy products that endanger us at moments when our skill and alertness – or the skill and alertness of those around us – are not near optimum. Ordinary automobiles are a perfect example. We pay our money, we do our best, and we take our chances. This is how humans have acted for thousands of years.

Cigarettes present a situation more similar to that of AVs, though still not identical. Smoking-related cancer is a crap-shoot; it may happen sooner, later, or not at all. The difference is that only tobacco addicts take the gamble. AVs will protect or endanger everyone who drives, rides or walks on or near a roadway.

Regular readers know I am a technological optimist. Today though, I am declaring that AVs in their currently envisioned form (i.e., on streets rather than on tracks) are unsuited for the market. Readers will also notice that this management blog usually deals with uncertainty reduction, a goal beloved by managers. I’m now highlighting a case where we embrace uncertainty. Why would humans embrace uncertainty? The reason that’s pertinent here is, “It’s the only way we can indulge our optimism bias.”

A Public Broadcasting System website lets you self-test on the Trolley Problem and other moral choices. When someone – even PBS! – asks you the Trolley question, they commit the fallacy of the false dilemma by assuming only two possible courses of action. If you, as subject, go along with the assumption, all you’re doing is acting as the questioner’s enabler. (Who’s testing whom, really?) The best possible reply to the Trolley Problem is, “I don’t do hypotheticals.” There are always unique and extenuating conditions in every real instance of risk, including traffic risk. There is always a third choice.

This makes one ask whether a machine can react appropriately to the unique conditions, and take that third or fourth option. I believe it cannot. First, static programming can prepare the machine to deal only with foreseen circumstances.

The previous sentence was just a scene-setting straw man; there will be no static programming. The central program governing the whole population of deployed AVs will “learn” and modify itself as each vehicle encounters new conditions. One problem though: The program’s creators won’t know what has been learned, except by watching the cars’ movements. The cars’ adaptive behavior will become unpredictable, and perhaps detrimental to human safety.
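
To make the point concrete, here is a minimal, purely hypothetical sketch (in Python) of a fleet-level learning loop. The class and method names, the toy “situations,” and the feedback signal are my own inventions, not any vendor’s code; the sketch simply illustrates how a shared policy updated from every vehicle’s experience can drift into behavior its designers never wrote and can only observe from the outside.

```python
import random
from collections import defaultdict

class FleetPolicy:
    """One shared policy, updated online from every vehicle's experience.

    Hypothetical sketch: after millions of updates, the policy's internal
    state is not something its designers wrote, or can easily read off --
    they can mainly watch what the cars do.
    """
    def __init__(self, actions, learning_rate=0.1):
        self.actions = actions
        self.lr = learning_rate
        # learned value estimates for (situation, action) pairs
        self.values = defaultdict(float)

    def choose(self, situation):
        # pick the action with the highest learned value (ties broken randomly)
        return max(self.actions,
                   key=lambda a: (self.values[(situation, a)], random.random()))

    def update(self, situation, action, outcome_score):
        # every vehicle in the fleet feeds its outcomes back into the same policy
        key = (situation, action)
        self.values[key] += self.lr * (outcome_score - self.values[key])

# Toy run: the preferred action for a situation can flip over time
# without any engineer changing a line of code.
policy = FleetPolicy(actions=["brake", "swerve"])
for _ in range(10_000):
    situation = random.choice(["pedestrian_ahead", "debris_ahead"])
    action = policy.choose(situation)
    outcome = random.gauss(0.0, 1.0)   # noisy, unforeseen real-world feedback
    policy.update(situation, action, outcome)

print({s: policy.choose(s) for s in ["pedestrian_ahead", "debris_ahead"]})
```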

Okay, human drivers can be unpredictable and dangerous too. But there are additional problems with AVs.

They are, for most of their operating time, autonomous from human control. However, they are not autonomous from each other. AVs communicate with each other and with a central computer program. This means the vendor’s challenge is not just to control an individual vehicle, but to control the entire network of vehicles.

The testing of small AV networks has gone fairly well, with only one collision I recall hearing about.* The tests won’t scale to the mass market, however, because the complexity of the system, and thus the chance of system breakdown, increases super-linearly with the number of networked vehicles.
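
A back-of-the-envelope illustration of that scaling claim (my own toy calculation, not a model of any actual AV network): if every vehicle must stay coordinated with every other vehicle, the number of pairwise interactions grows as n(n−1)/2, roughly the square of the fleet size, so even a tiny failure probability per interaction compounds quickly. The per-channel fault probability below is an arbitrary illustrative number.

```python
# Toy scaling sketch (assumed numbers, not vendor data):
# pairwise coordination channels grow quadratically with fleet size,
# so the chance that at least one interaction misbehaves climbs fast.
def pairwise_channels(n):
    return n * (n - 1) // 2

def prob_at_least_one_fault(n, p_fault_per_channel=1e-6):
    channels = pairwise_channels(n)
    return 1 - (1 - p_fault_per_channel) ** channels

for fleet_size in (10, 100, 1_000, 10_000):
    print(fleet_size,
          pairwise_channels(fleet_size),
          round(prob_at_least_one_fault(fleet_size), 4))
```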

The AV programmers may be trying to emulate flocking behavior. Each bird in a flock, following a few simple rules (for example, “Remain about two wing-lengths away from the bird on your left”), creates highly coherent mass behavior. The problems with translating this to AV networks are (i) the rules for avoiding other AVs, pedestrians, buildings, etc., have to be more complex than for schools of fish or flocks of birds; (ii) AVs will sometimes revert to the human driver’s control; and (iii) tests have shown the most accident-prone moments are the instants during that changeover from machine to human control. Some manufacturers are talking about AVs that will not allow a human driver to take control. I don’t know whether that is more scary or less, but I don’t think it would be practical for our transportation needs.
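
For readers unfamiliar with flocking models, here is a minimal sketch of the kind of simple local rule described above – a basic “keep your distance” separation rule, in Python. It illustrates the general technique only; it is not anyone’s AV code, and the distances and weights are arbitrary.

```python
# Minimal "flocking" sketch: each agent follows one local rule --
# keep a minimum separation from nearby agents. Illustrative only;
# real AV coordination would need far richer rules (pedestrians,
# buildings, traffic law, handover to human drivers, ...).
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float

def separation_step(agent, neighbors, min_dist=2.0, push=0.1):
    """Nudge the agent away from any neighbor closer than min_dist."""
    dx = dy = 0.0
    for other in neighbors:
        ox, oy = agent.x - other.x, agent.y - other.y
        dist = (ox**2 + oy**2) ** 0.5
        if 0 < dist < min_dist:
            dx += push * ox / dist
            dy += push * oy / dist
    return Agent(agent.x + dx, agent.y + dy)

flock = [Agent(0.0, 0.0), Agent(1.0, 0.5), Agent(3.0, 3.0)]
flock = [separation_step(a, [b for b in flock if b is not a]) for a in flock]
print(flock)
```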

Consumers can’t be sure the computer program’s prime directive is to protect human life. Programmers may have given the AV network a directive to smooth traffic flow, or to maximize rider convenience. (Look at the pharmaceutical industry for examples of how a prime directive can go wrong.) Indeed, though I have not had time to fact-check it before writing this column, it’s my impression that articles on AVs tout life-saving as a by-product of autonomous vehicle deployment, not as the primary purpose.

Bonnefon et al. (2015) surveyed public attitudes about AVs programmed to solve the Trolley Problem using the “utilitarian” principle. They write:

“Although [respondents] were generally unwilling to see self-sacrifices enforced by law, they were more prepared for such legal enforcement if it applied to AVs, than if it applied to humans. Several reasons may underlie this effect: unlike humans, computers can be expected to dispassionately make utilitarian calculations in an instant; computers, unlike humans, can be expected to unerringly comply with the law, rendering moot the thorny issue of punishing non-compliers; and finally, a law requiring people to kill themselves would raise considerable ethical challenges.
“Even in the absence of legal enforcement, most respondents agreed that AVs should be programmed for utilitarian self-sacrifice, and to pursue the greater good rather than protect their own passenger. However, they were not as confident that AVs would be programmed that way in reality—and for a good reason: They actually wished others to cruise in utilitarian AVs, more than they wanted to buy utilitarian AVs themselves.”

News items published after I began drafting this column suggest AVs will be rented by end users, not purchased. That is, the manufacturers are targeting the taxi and Uber markets. OK, but what’s really a horse of a different color is that they are pushing for urban districts in which human-driven vehicles will be prohibited. Seen from one angle, the logic is clear: In a collision between an AV and an ordinary vehicle, it’s easier to blame a machine than a human driver, so the liability issues get big. Seen from another angle, it raises a fundamental question about our future: Are our cities for machines, or are they for people?

In certain (admittedly, probably rare) AV crisis situations, you’re dead, and you know this in advance. When you are driving, there is always hope. Where there’s life, there’s hope. Where there’s machines, who knows? Manufacturers are rushing AVs to market without due regard for the human psychology that makes AVs unworkable.**

Now, over to my guest, who adds still another important dimension to the argument. Joe Rabinovitsj draws the dark implication that the easy market for AVs will be people who don’t mind feeling helpless – probably because they are accustomed to it.


Agency and Autonomous Vehicles: A Hopeless Deal; A Response to Fred Phillips

            There is a consideration that our discussion of psychological hang-ups presupposes — that of agency. Specifically, I argue that whether one will have an optimism bias toward the dangers of AVs is really a matter of whether one feels in control of one’s actions when facing those dangers.

            Why is it important that our possible inability to form an optimism bias toward the dangers of AVs depends on agency? AVs are distinct from other dangerous technology products in a fundamental way. Namely, AVs prevent their users from feeling like they have control over those dangers, whereas many other dangerous consumer products allow consumers to at least feel as though they can control the possibility of being harmed.

            Let me consider Dr. Phillips’ two examples in greater detail. First we have the chainsaw. As we have mentioned, when we walk into the hardware store and buy a chainsaw, we probably know how dangerous it is (and if not, we should probably watch Texas Chainsaw Massacre for more information). But, as Dr. Phillips suggests, there is uncertainty about whether you will get hurt by that chainsaw. This uncertainty likely leads to the optimistic belief that you won’t get hurt — what we have called the optimism bias — and allows you to buy the chainsaw despite its dangers.

            But it looks like we’ve missed a step in our analysis: why exactly are we uncertain that a chainsaw will hurt us? Until we answer that, it remains unclear why we develop an optimism bias in this case.

            We see an analogous unanswered question in our example of purchasing an AV. Indeed, it is pre-determined by an AV’s programming whether or not it will hurt us under certain conditions. But when deciding to purchase an AV, it is still uncertain whether we will ever encounter those conditions, and therefore it is also in some sense uncertain whether or not we will get hurt. So we have uncertainty of harm in both the chainsaw and AV cases, but optimism bias in the former and not in the latter. If we are committed to the idea that AVs resist the formation of an optimism bias toward their dangers, then uncertainty of harm is unlikely to be the only factor determining our lack of optimism bias in these cases.

            I would like to propose that considering issues of agency can help explain why we seem inclined to form an optimism bias toward the dangers associated with chainsaws but not toward those associated with AVs. I am neither trying nor equipped to provide a fully fleshed-out, general theory of agency. Rather, I only intend to make claims about the extent to which people feel in control of their actions in certain situations.

            Why does the uncertainty that a chainsaw will hurt us inspire an optimism bias? Part of what may lead us to form the optimistic belief that we won’t get hurt by the chainsaw, despite its potential dangers, is that we feel confident in our ability to handle a chainsaw. The idea that one’s degree of confidence in one’s ability to mitigate uncertain danger matters seems to explain why certain people purchase dangerous equipment and others don’t. Many people do not feel comfortable in their ability to handle a chainsaw, which may help explain why a lot of people don’t form optimism biases about the dangers of chainsaws and therefore don’t purchase them. On the other hand, professional loggers probably don’t think twice about purchasing chainsaws because they are so confident in their ability to use them, despite the fact that even loggers are injured by chainsaws.

            But what about AVs? Talking about degrees of confidence in one’s ability to mitigate the dangers of AVs seems silly at first glance. There is nothing we can do to prevent an AV from killing us in cases in which it is programmed to do so: we cannot mitigate the dangers associated with AVs. But that is precisely the interesting point. This feeling of helplessness, before we even set foot in an AV, could explain why we would be unlikely to develop an optimism bias toward the dangers of AVs.

            What is this feeling of helplessness really a feeling of? It seems rooted in a question of agency — of how much control we have over our actions. This could also explain the development of an optimism bias when it comes to the dangers of chainsaws: the degree of confidence we have in our ability to mitigate the dangers of a chainsaw is an expression of how much control we feel we have over our actions.

            Where, though, does this feeling of helplessness — the feeling of a lack of agency — come from? Of course, considerations of human psychology have something to do with it. And if we are to take seriously the failure to form an optimism bias as a problem for AVs as an emerging technology, study of those psychological considerations will be helpful. But this I leave to the cognitive scientists, psychologists, neuroscientists, and decision theorists. The point I want to offer to the broader discussion about autonomous vehicles, building on Dr. Phillips’ suggestion, is that there is something about AV technology itself that may cause individuals to feel this helplessness and be unable to develop an optimism bias toward the dangers of AVs, in contrast to the way they can develop one toward the dangers of other technologies. And this ‘something’ seems to be that, unlike other dangerous technologies, AV technology takes the metaphorical and literal wheel out of our hands when it comes to dangerous situations. It is an AV’s source code rather than our shaky — but nonetheless our — hands that may steer us to our deaths. It is this aspect of AV technology, I believe, that makes us feel helpless at its prospect.

- Joe Rabinovitsj 4/24/16


* By some accounts, that collision involved a human driver crashing into an AV. The manufacturer claimed this incident revealed nothing wrong with the AV system. I heard this kind of upside-down logic once before, when karate master Mas Oyama knocked out a bull by punching it in the head. Oyama’s detractors said, “Nah, the bull was drugged.” As if just anybody could punch out even a drugged bull.

**My lecture slides on other aspects of AV technology assessment are downloadable from Slideshare, http://www.slideshare.net/fredphillips/the-selfdriving-car?qid=0c2ac803-13a0-4ae1-9dcb-5cb56e6d9a0b&v=&b=&from_search=1

Reference

J.-F. Bonnefon, A. Shariff, and I. Rahwan (2015). Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? http://arxiv.org/pdf/1510.03346v1.pdf