A decade ago, automated telephone menus were what everyone hated. It was common to press 0 or yell 'Agent' into the phone because you knew that after you entered everything the menu wanted, the agent you finally reached was going to ask for it all over again.

Today, you probably ignore the chat window that opens on a website, especially if you think it is a bot. A Verizon bot is not going to be able to help you with anything, so going back to find your account number, which means closing the chat window, is a waste of time when the real agent who eventually replies will ask for it all over again.(1)

In health care, things may be even more off-putting. Health care is personal, so you may be leery if an avatar doesn't look 'like' you. Some are concerned that a chatbot's skin color or perceived ethnicity may affect how open patients will be about providing information, but that makes little sense. People click through Terms and Conditions without reading them twice a week, and three times a day some bot calls about a car warranty; there is no reason to think a bot's avatar will determine whether they follow its recommendations.

If anything, a bot might make it easier to disclose things people may not tell a human. If you ever visited Amsterdam's Red Light District, you saw a police sign that said, basically, 'if something goes wrong, don't try to gloss it over or leave details out, there is nothing you are going to say that will shock us' - there is comfort in that. An avatar can't be surprised.

On the other hand, an avatar will lose trust if it tries to nudge you. If an avatar claims eating meat is 'linked' to cancer, patients will wonder whether a company sponsored that message. The authors use the example of marketing to promote vegetarian options in schools, a relic of the Obama administration, and they note it can have consequences: in that case, forced vegetarian options led to extraordinary food waste, and poor kids, who may have had their most nutritious meal of the day at school, were denied it to promote an agenda.

There aren't many answers to be found, and surveys often muddy the waters on this kind of thing. Should we not use female avatars because they reinforce stereotypes about women in health care? Yet if female avatars are not used, aren't we denying the presence of women in health care, the same way a TV show about nurses looks ridiculous if there isn't a single Filipino nurse? If you are black and you get a black avatar, does that seem forced? If you get a white avatar instead, does that make a minority patient feel unseen?

Chatbots will remain good for low-impact issues, but if it is true that patients distrust an avatar whose skin color or hairstyle differs from their own, then culture will need to progress far more than AI will.

NOTE:

(1) Then they send you a link, make you click it and come back to the website, all before fixing the common billing error they hope you don't notice.