Press "Enter" to skip to content

AI: Now is the time for us to get it right

Experts warn it's not too late to ensure AI is helpful and accountable.

Image: KOHb/Getty Images/iStockphoto

Historically, we tend to mainstream new technology before we know the full ramifications of deploying it. Luke Lango, senior investment analyst at InvestorPlace, agrees in the introduction to his article 5 Disruptive Technologies That Are Moving Too Fast: "Some disruptive technologies move fast. Others move too fast."


“A disruptive technology moves too fast simply because there is so much pent up demand for it,” continues Lango. “Too fast simply means that the technology is bound to hit some road bumps. Right now, it seems like a lot of industry-disrupting technologies are moving too fast.”

In his article, Lango looks at autonomous driving, e-commerce, decentralization, internet television and data sharing/analytics. All of them have two things in common: artificial intelligence (AI) and the road bumps AI creates.


Now is the time to address the AI road bumps

Academics and futurists attribute the road bumps to our inability to know how and why an AI system reaches a given decision. Aziz Huq, the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago, suggests in his Psyche article, When a machine decision does you wrong, here's what we should do, that now is the time to look at what we are getting ourselves into with AI.

Huq writes, "States and firms with whom we routinely interact are turning to machine-run, data-driven prediction tools to rank us, and then assign or deny us goods."

Huq offers a rather unnerving example of a flawed decision-making process. Ten years ago, Michigan state officials decided to replace the state's computer system for handling unemployment claims. Once the new system went live in 2013, the number of claims tagged as fraudulent skyrocketed. Huq wrote, "A subsequent investigation found that MiDAS was flagging fraud with an algorithmic predictive tool: out of 40,195 claims algorithmically tagged as fraudulent between 2013 and 2015 (when MiDAS was decommissioned), roughly 85% were false."
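To put that error rate in concrete terms, a quick back-of-the-envelope calculation on the figures Huq cites shows the human scale of the failure (a minimal sketch in Python; the 85% figure is approximate, so these are estimates, not official counts):

```python
# Rough scale of the MiDAS failure, using the figures Huq cites.
# Assumes the ~85% false-flag rate applies across all flagged claims.
flagged = 40_195     # claims algorithmically tagged as fraudulent, 2013-2015
false_rate = 0.85    # approximate share of flags later found to be false

false_flags = round(flagged * false_rate)   # ~34,166 people wrongly accused
true_flags = flagged - false_flags          # ~6,029 flags that held up

print(f"Estimated wrongly accused claimants: {false_flags:,}")
print(f"Estimated flags that held up:        {true_flags:,}")
```

In other words, for every claim MiDAS flagged correctly, it wrongly accused roughly five or six people.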

Human review and appeal

The initial response is often to institute a process of human review and appeal, since humans are capable of nuanced judgments. Huq adds, "They're capable of responding to new arguments and information, updating their views in ways that a merely mechanical process cannot."

One would expect that to resolve the issue, but Huq isn’t so sure—technical, social and moral difficulties can surface. “Without for a moment presuming that machine-driven decision tools are unproblematic—they’re not—the idea of creating an appeal right to a human decision-maker needs to be closely scrutinised,” cautions Huq. “That ‘right’ isn’t as unambiguous as it first seems.”

A right to a human appeal from a machine decision, such as the MiDAS fraud flags, can be understood in two different ways. "It could first be translated into an individual's right to challenge a decision in their unique case," explains Huq. "I suspect that most people have this in mind when they think of a human appeal from a machine decision: you got my facts wrong, and you owe it to me as a person to correctly rank and treat me based on who I am and what I have, in fact, done."

Human review is not always better

Human review is not always an improvement. Huq cites psychologist Paul Meehl's 1954 book, which marshals a substantial body of research showing that having a human review the decision made by a simple algorithmic tool tends to generate more mistakes. For anyone concerned that the finding is outdated, Huq points to Meehl's contribution to clinical versus statistical prediction, a 2006 American Psychological Association paper by W. M. Grove and M. Lloyd: "Meehl's conclusion that statistical prediction consistently outperforms clinical judgment has stood up extremely well for half a century. His conceptual analyses have not been significantly improved since he published them."
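To see why an extra layer of human judgment can hurt, consider a toy simulation; it is purely illustrative and not Meehl's data. A simple statistical rule thresholds one noisy signal, while a hypothetical "human reviewer" is modeled as the same rule plus an unsystematic gut adjustment. Under those assumptions, the adjustment only adds noise, and the reviewed decisions end up wrong more often:

```python
import random

random.seed(0)

N = 100_000
rule_errors = human_errors = 0

for _ in range(N):
    truth = random.random() < 0.3   # the outcome occurs in 30% of cases
    # Noisy evidence: centered on 1.0 when the outcome is real, 0.0 otherwise.
    signal = (1.0 if truth else 0.0) + random.gauss(0, 0.8)

    rule_says = signal > 0.5        # the statistical rule: a fixed cut-off
    # "Human review": same evidence plus a random gut adjustment.
    human_says = (signal + random.gauss(0, 0.8)) > 0.5

    rule_errors += rule_says != truth
    human_errors += human_says != truth

print(f"statistical rule error rate: {rule_errors / N:.3f}")   # ~0.27
print(f"human-adjusted error rate:   {human_errors / N:.3f}")  # ~0.33
```

The toy model captures Meehl's core point: unless the reviewer brings genuinely new information, overriding a calibrated rule tends to degrade its accuracy.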

Social and economic implications

Huq next addresses social and economic implications. To start, some families may not have the resources to mount an appeal. Huq writes, “Some will be more capable of appealing than others. Without some well-intentioned advocacy group’s intervention, it’s likely that socioeconomic status and financial resources will correlate to the propensity to appeal.”

What’s the answer?

Huq believes it is possible to develop a cost-effective AI system that is both sensitive (catching the cases it should) and specific (rarely flagging the ones it shouldn't). "The right to a well-calibrated instrument is best enforced via a mandatory audit mechanism or ombudsman, and not via individual lawsuits," explains Huq. "The imperfect and biased incentives of the tool's human subjects means that individual complaints provide a partial and potentially distorted picture. Regulation, rather than litigation, will be necessary to promote fairness in machine decisions."
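Huq doesn't spell out what such an audit would check, but the arithmetic is straightforward. The sketch below is a hypothetical illustration (the function name and the sample counts are invented for this example) of the calibration figures an auditor might report after re-checking a sample of cases against real outcomes:

```python
# Hypothetical audit arithmetic for a fraud-flagging tool.
# Inputs are counts an auditor could assemble by re-checking flagged
# and unflagged claims against the real outcomes.

def audit_metrics(true_pos, false_pos, true_neg, false_neg):
    """Return the calibration figures an auditor might report."""
    return {
        # Sensitivity: share of genuinely fraudulent claims the tool caught.
        "sensitivity": true_pos / (true_pos + false_neg),
        # Specificity: share of honest claims correctly left alone.
        "specificity": true_neg / (true_neg + false_pos),
        # False discovery rate: share of flags that were wrong. The ~85%
        # figure from the MiDAS investigation is this number.
        "false_discovery_rate": false_pos / (false_pos + true_pos),
    }

# Illustrative counts only (not from the MiDAS case):
print(audit_metrics(true_pos=600, false_pos=3_400, true_neg=95_000, false_neg=1_000))
```

A mandatory audit along these lines would surface a MiDAS-style failure in the aggregate numbers, without waiting for individual claimants to appeal.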

Final thought

Huq has raised a red flag. He is concerned, as are Lango and other experts, that we are fast-tracking AI and will have to live with the detritus. Most of us have not been negatively impacted by an erroneous AI decision the way the Michigan claimants wrongly accused of fraud were. But with AI encroaching further into every aspect of our digital lives, now might be the time to sort out the best way forward with AI.
