
Building Trust with the Algorithms in Our Lives

Consumers are wary of the recommendations made by algorithms. But according to new research co-authored by Yale SOM’s Taly Reich, showing that an algorithm can learn—that it improves over time—helps to resolve this distrust.

Algorithms are omnipresent in our increasingly digital lives. They offer us new music and friends. They recommend books and clothing. They deliver information about the world. They help us find romantic partners one day, efficient commutes the next, and cancer diagnoses the day after.

And yet most people display an aversion to algorithms. They don’t fully trust the recommendations made by computer programs. When asked, they prefer human predictions to those put forward by algorithms.

“But given the growing prevalence of algorithms, it seems important we learn to trust and appreciate them,” says Taly Reich, associate professor at Yale SOM. “Is there an intervention that would help reduce this aversion?”

New research conducted by Reich and two colleagues, Alex Kaju of HEC Montreal and Sam Maglio of the University of Toronto, finds that clearly demonstrating an algorithm’s ability to learn from past mistakes increases the trust that people place in the algorithm. It also inclines people to prefer the predictions made by algorithms over those made by humans.

In arriving at this result, Reich drew on her foundational work on the value of mistakes. In a series of prior papers, Reich has established how mistakes, in the right context, can create benefits; people who make mistakes can come across as more knowledgeable and credible than people who don’t. Applying this insight to predictive models, Reich and her colleagues investigated whether framing algorithms as capable of learning from their mistakes enhanced trust in the recommendations that algorithms make.

In one of several experiments, for instance, participants were asked whether a trained psychologist or an algorithm would be better at evaluating somebody’s personality. In one condition, no further information was provided. In the other, identical performance data for the psychologist and the algorithm explicitly demonstrated improvement over time: each was correct 60% of the time in the first three months, 70% of the time by six months, and 80% of the time over the course of the first year.

Absent information about the capacity to learn, participants chose the psychologist over the algorithm 75% of the time. But when shown how the algorithm improved over time, they chose it 66% of the time, more often than the human. Rather than displaying algorithm aversion, participants expressed what Reich and her colleagues term “algorithm appreciation,” or even “algorithm investment.” These results held across several different cases, from selecting the best artwork to finding a well-matched romantic partner. In every instance, when the algorithm exhibited learning over time, it was trusted more often than its human counterpart.

Of course, Reich recognizes that companies often can’t or don’t want to disclose specific details about the accuracy of their algorithms. Most likely, they won’t break outcomes down to percentages and share these with consumers. “Importantly, though, this was a hybrid paper, where we cared about the practical implications as much as the theory,” she says. “Given constraints in the real world, we wanted to know whether there were more subtle methods for dispelling this notion that AI can’t learn.”

The researchers explored whether small changes to how predictive software is described have an impact on choice. In one study, participants were asked whether they wanted to rely on themselves to judge the quality of a piece of art, or whether they wanted to rely on technology to do it for them. The technology was described either as an “algorithm” or as a “machine-learning algorithm.” When offered an “algorithm,” the majority of participants chose themselves; when offered a “machine-learning algorithm,” the majority chose the technology. Simply providing a name suggestive of an algorithm’s ability to learn proved sufficient to overcome the lack of trust.

For Reich, this presents a clear and practical takeaway for companies that rely, in one way or another, on predictive algorithms. Companies need to be aware that consumers, for the most part, harbor distrust of the recommendations made by algorithms. But this distrust, to a point, is readily overcome: a simple semantic nod toward an algorithm’s ability to learn will build greater trust with the consumers it serves.

“If we understand that machines, like humans, can learn from their mistakes,” Reich says, “we won’t resist them as much.”
