In a recent study, 1,500 volunteers were asked to count the number of people in photographs of crowds. For each photo, the team supplied two suggested counts: one generated by a group of other people and one generated by an algorithm.
As the number of people in a photograph increased, counting became more difficult, and participants were more likely to adopt the algorithm's suggestion than to rely on their own count or the "wisdom of the crowd."
Counting was a deliberate choice of task: it becomes objectively harder as the number of people in the photo grows, and it is the kind of task laypeople assume is easier for computers.
"This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects," says professor Aaron Schecter from the University of Georgia. "One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there -- like income and credit score -- so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren't considered."
Facial recognition and hiring algorithms have also come under scrutiny in recent years because their use has revealed cultural biases baked into how they were built, which can cause inaccuracies when matching faces to identities or screening for qualified job candidates, Schecter said.
Those biases may not be present in a simple task like counting, but their presence in other trusted algorithms is a reason why it's important to understand how people rely on algorithms when making decisions, he added.