Press "Enter" to skip to content

CXOs: Are we ready for AI to assist human decision-making?

One of the emerging uses of AI in corporations is assisting human decision-making. But is the technology ready, and are the decision-makers ready for it?


The idea of artificial intelligence-driven tools taking over jobs at all levels of organizations has gradually given way to a vision in which AI serves more as an assistant, taking over various tasks so humans can focus on what they do best. In this future, a doctor might spend more time on treatment plans while an AI tool interprets medical images, or a marketer might focus on brand nuances while an AI predicts the results of different channel spend based on reams of historical data.


This human-machine pairing concept is even being extended into military applications. Several programs are building AI-enabled networks of sensors integrating battlefield data and summarizing key information to allow humans to focus on strategic and even moral concerns rather than which asset is where.

An underlying assumption of this pairing is that machines will provide a consistent, standardized set of information to their human partners, and that, given that consistent input, humans will generally make the same decision. At a simplified level, it seems sensible to assume that if an intelligent machine predicts heavy rain in the afternoon, most humans will bring their umbrellas.

However, this assumption seems to rest on some variation of the rational actor theory of economics: that humans will always make the decision that is in their best economic interest, so that given the same data set, different people will reach the same conclusion. Most of us have seen this theory disproven. Humans are economically messy creatures, as demonstrated by industries from gambling to entertainment, which continue to thrive even though buying lottery tickets and bingeing on Netflix are certainly not in our best economic interest.

An MIT study proves the point on AI-assisted decision-making

A recent MIT Sloan study, The Human Factor in AI-Based Decision-Making, bears this out. Researchers presented 140 U.S. senior executives with the same strategic decision: whether to invest in a new technology. Participants were told that an AI-based system recommended investing, then asked whether they would accept the recommendation and how much they would be willing to invest.

As a fellow human might expect, the executives’ decisions varied even though they had all been given exactly the same information. The study grouped decision-makers into three archetypes, ranging from “Skeptics,” who ignored the AI recommendation, to “Delegators,” who saw the AI tool as a means of avoiding personal risk.

The risk-shifting behavior is perhaps the study’s most interesting result: an executive who took the AI recommendation assumed, consciously or unconsciously, that they could “blame the machine” should the recommendation turn out poorly.

The expert problem with AI, version 2

Reading the study, it’s interesting to see that technology has evolved to the point where the majority of executives were willing to embrace an AI as a decision-making partner to some degree. What’s also striking is that the results are not unique to AI; they mirror how executives react to most other experts.

Consider for a moment how leaders in your organization react to your technical advice. Presumably, some are naturally skeptical and consider your input before doing their own deep research. Others might serve as willing thought partners, while another subset is happy to delegate technical decisions to your leadership while pointing the finger of blame should things go awry. Similar behaviors likely occur with other sources of expertise, ranging from outside consultants to academics and popular commentators.


A recurring theme of interactions with experts, whether human or machine-based, is varying degrees of trust among different types of people. The MIT study lends rigor to this intuitive conclusion, which should inform how technology leaders design and deploy AI-based solutions. Just as some of your colleagues will lean toward “trust, but verify” when dealing with well-credentialed external experts, you should expect the same behaviors toward whatever “digital experts” you plan to deploy.

Furthermore, expecting a machine-based expert to somehow produce consistent, predictable decision-making is just as misguided as assuming that everyone who consults a human expert will draw the same conclusion. Understanding and communicating this fundamental tenet of human nature will save your organization from unreasonable expectations about how machine-and-human teams make decisions. For better or worse, our digital partners will likely provide unique capabilities, but they’ll be used in the context of how we humans have always treated “expert” advice.

Source: TechRepublic