
Are you ready for HAL?: 4 questions to ask about AI before launch

Although the fictional HAL supercomputer was first introduced to movie-goers more than 50 years ago, it holds important lessons that AI practitioners can apply today.

HAL Supercomputer prop from Stanley Kubrick’s 1968 film, “2001: A Space Odyssey.”

Image: Hethers/Shutterstock

HAL (heuristically programmed algorithmic computer) debuted in Stanley Kubrick's classic film "2001: A Space Odyssey" (1968). While part of HAL's programming required the computer to keep the real purpose of the mission secret from the astronauts, HAL was also programmed to assist its human travelers by taking spoken questions and instructions and responding verbally, with the help of natural language processing.

SEE: Hiring Kit: Video Game Programmer (TechRepublic Premium)


During the voyage, HAL experienced logic conflicts as it tried to balance relaying critical information to the astronauts against its directive to keep mission information secret. The result was a series of software malfunctions that put HAL on a path toward destroying the ship's human inhabitants in order to safeguard the secrecy of the mission.

"2001: A Space Odyssey" opened in theaters more than 50 years ago, but it remains prescient about the questions that loom for organizations as they inject artificial intelligence into business processes and decision-making. Among these questions:

What’s accurate?

In October 2019, Amazon's Rekognition AI mistakenly matched 27 professional athletes to mugshots in a criminal database, and in March 2021, a Dutch court ordered Uber to reinstate and compensate six former drivers who had been fired over incorrect algorithmic assessments of fraudulent activity.

Many organizations enter the AI arena by purchasing a pre-programmed AI package from a vendor that knows their industry. But how well does the vendor's software understand the particulars of a specific corporate environment? And as companies train and refine their AI engines, or create new AI algorithms, how do they know when they're inadvertently introducing logic or data that will yield flawed results?

SEE: Gartner: AI is moving fast and will be ready for prime time sooner than you think (TechRepublic) 

The answer is that they don't, because flaws in data or logic can't be discovered until they're observed. Companies recognize the flaws through empirical experience with the subject matter the AI is analyzing, and that empirical knowledge comes from on-staff human subject matter experts.

The bottom line is that companies must keep human SMEs at the end of AI analytic cycles to ensure that AI conclusions are reasonable, and to step in when they are not.
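As one illustration, a review gate in that spirit might route low-confidence AI conclusions to an SME before they take effect. Below is a minimal Python sketch; the model call, threshold, and all names are assumptions for illustration, not any particular product's API:

```python
# Minimal sketch of a human-in-the-loop review gate. The model call,
# threshold, and names are illustrative assumptions, not a real product.
from dataclasses import dataclass
from typing import Callable

REVIEW_THRESHOLD = 0.90  # conclusions below this confidence go to an SME


@dataclass
class Conclusion:
    label: str
    confidence: float
    reviewed_by_sme: bool = False


def model_conclude(record: dict) -> Conclusion:
    """Stand-in for real model inference."""
    return Conclusion(label="fraud", confidence=0.72)


def finalize(record: dict,
             sme_review: Callable[[dict, Conclusion], Conclusion]) -> Conclusion:
    conclusion = model_conclude(record)
    if conclusion.confidence < REVIEW_THRESHOLD:
        # The SME can confirm or override before the conclusion takes effect.
        conclusion = sme_review(record, conclusion)
        conclusion.reviewed_by_sme = True
    return conclusion
```

The design point is simply that the model's output is never final on its own: anything the model is unsure about passes through a human before it drives a decision.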

What’s ethical?

A large retailer wants predictive software that can anticipate customer purchasing needs before customers actually make purchases. The retailer buys and aggregates customer data from a variety of third-party sources. But should it purchase consumers' healthcare information to determine whether they need diabetes management aids?

This is an ethics question because it intersects with individuals' healthcare privacy rights. Businesses must decide for themselves what the right thing to do is.

Where do humans fit in?

In the end, human knowledge is the driver of what AI and analytics can do.

A common standard is to cut AI over to production once its conclusions agree with those of subject matter experts at least 95% of the time. Over time, however, this alignment between what the machine and what a human would conclude is likely to drift.
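That drift only becomes visible if it's measured. Here is a minimal sketch of how a team might track it, assuming a recurring sample of cases is double-labeled by an SME; the threshold, labels, and function names are all illustrative:

```python
# Sketch of tracking model-vs-SME agreement over time, assuming a sample of
# cases that experts also label. The 0.95 threshold mirrors the 95% agreement
# standard mentioned above; all names and data here are illustrative.
AGREEMENT_THRESHOLD = 0.95


def agreement_rate(model_labels, sme_labels):
    """Fraction of cases where the model and the SME reached the same conclusion."""
    matches = sum(m == s for m, s in zip(model_labels, sme_labels))
    return matches / len(model_labels)


def check_for_drift(model_labels, sme_labels):
    rate = agreement_rate(model_labels, sme_labels)
    if rate < AGREEMENT_THRESHOLD:
        print(f"Agreement fell to {rate:.1%}; trigger an SME audit or retraining")
    return rate


# Example: a weekly sample of 10 decisions double-checked by an SME
model = ["approve", "deny", "approve", "approve", "deny",
         "approve", "deny", "deny", "approve", "approve"]
sme   = ["approve", "deny", "approve", "deny", "deny",
         "approve", "deny", "approve", "approve", "approve"]
check_for_drift(model, sme)  # 80% agreement -> flags drift
```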

SEE: Deloitte: The top business use cases for AI in 6 consumer industries (TechRepublic) 

Realizing that AI (like the human brain) isn’t always perfect, most organizations opt to have a subject matter expert as the final review point for any AI decision-making process.

What limitations do we face?

Today's AI analyzes vast troves of data for patterns and answers, but it doesn't possess the human ability to intuit, or to arrive tangentially at, answers that aren't immediately in the data. Over time there will be work to enhance AI's intuitive reasoning, but the risk is that the AI goes off the rails the way HAL did.

How do we harness the power of AI so it does what we ask it to do, but doesn’t end up blowing the mission? This is the balancing point that organizations using AI have to find for themselves.


Source: TechRepublic