What Makes Employees Trust (vs. Second-Guess) AI?
When an algorithm recommends ways to improve business outcomes, do employees trust it? Conventional wisdom suggests that understanding the inner workings of artificial intelligence (AI) can raise confidence in such programs.
Yet, new research finds the opposite holds true.
In fact, knowing less about how an algorithm works—but following its advice based on trusting the people who designed and tested it—can lead to better decision-making and financial results for businesses, say researchers affiliated with the Laboratory for Innovation Science at Harvard (LISH).
Why? Because decision makers often trust their own anecdotal experience over the AI's interpretation of the data. The trouble is, they sometimes believe they understand the inner workings of an AI system better than they actually do.
The findings have implications for a variety of businesses, from retailers and hospitals to financial firms, as they decide not only how much to invest in AI, but how decision makers can use the technology to their advantage. Understanding how algorithms work to make recommendations—and knowing how people navigate them—is more important than ever, the researchers say.
“Companies are trying to make the decision: ‘Do we invest in AI or not?’ And it's very expensive,” says Timothy DeStefano, an affiliated researcher with the LISH team from Harvard Business School and an associate professor at Georgetown University.
DeStefano partnered on the paper with LISH senior research scientist Michael Menietti; Katherine C. Kellogg, a professor at the Massachusetts Institute of Technology; and Luca Vendraminelli, who is affiliated with LISH and a post-doctoral fellow at the Politecnico di Milano.