
Is AI just a fairy tale? Not in these successful use cases

Getting artificial intelligence past the fairy tale stage is a challenge for some organizations. Here are two examples of AI that’s granted its users’ wishes.



Even with technology, sometimes we believe in fairy tales. A fairy tale is a story with a “fantastic and magical setting or magical influences.” I hadn’t thought much about fairy tales recently, until I began reviewing a number of online case studies about artificial intelligence (AI) in companies.

In most of these case studies, the bottom line was that an AI solution had been successfully implemented. But when I looked through the stories for business outcomes, the results weren’t there. Instead, the stories ended with what the companies hoped to realize from their AI investments. They were hoping their fairy tale projects would come true.

AI is in its infancy at most companies, and there are legitimate reasons why tangible business results from AI have yet to appear.


Nevertheless, it isn’t too early for CIOs and others with responsibility for AI projects to start worrying, because it won’t be long before their boards and stakeholders begin to question business results.

I’m not going to review why AI projects can fail, or what companies should be doing to tighten up their AI practices, since most of us have already heard about these subjects. Rather, it’s time to look at AI projects that started out as fairy tales but came true and are now yielding huge dividends for their companies.

Here are two examples.

GE reduces operating costs with AI

Machine downtime costs companies $260,000 per hour, according to Aberdeen Group. When General Electric decided to target machine downtime as a category that was draining its bottom line, it looked to a combination of AI and the Internet of Things (IoT) to deliver meaningful operational savings.

GE added IoT sensors to all of its machines, from power turbines to hospital scanners and aircraft engines. The sensors stream real-time data on how each machine is running, and they also measure the effects of fuel-level changes and temperature fluctuations.


“Each of the company’s 22,000 wind turbines is continually streaming operational data to the cloud, where GE analysts can tweak the direction and pitch of the blades to ensure as much energy is being captured as possible,” futurist Bernard Marr reported. “Intelligent learning algorithms allow each individual turbine to adapt its behaviour to mimic other nearby turbines that are operating more efficiently.”

In GE’s data center, this machine-generated data is enriched with data from third-party weather, geopolitical, and demographic sources. The combined data is processed by a Hadoop-based industrial data lake service that GE and its customers use, and the service comes with an assortment of tools that help them interpret the data.

GE is saving money because its AI predicts potential machine downtime, so management can intervene before a failure occurs and stop, or at least shorten, the outage. Its customers are saving, too: they are expected to save an average of $8 million per year from the reduction in machine downtime.
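To make the prediction step concrete, here is a minimal sketch of how real-time sensor readings might feed a downtime-risk model. Everything in it is an illustrative assumption for this article: the synthetic data, the feature names (temperature, vibration, fuel level), and the gradient-boosted classifier. It is not GE’s actual pipeline.

```python
# Illustrative sketch of sensor-based downtime prediction.
# Synthetic data, feature names, and model choice are assumptions
# for demonstration -- not GE's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated sensor readings: temperature (C), vibration (mm/s), fuel level (%).
n = 5000
temperature = rng.normal(70, 10, n)
vibration = rng.normal(2.0, 0.6, n)
fuel_level = rng.uniform(10, 100, n)

# Assumed ground truth: downtime becomes likelier when a machine
# runs hot and vibrates heavily at the same time.
risk = 0.05 * (temperature - 70) + 1.5 * (vibration - 2.0)
downtime = (risk + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([temperature, vibration, fuel_level])
X_train, X_test, y_train, y_test = train_test_split(X, downtime, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score a new real-time reading; a high probability would trigger
# a maintenance alert before the machine actually fails.
reading = np.array([[85.0, 3.4, 60.0]])  # running hot and vibrating hard
print(f"downtime risk: {model.predict_proba(reading)[0, 1]:.2f}")
```

In a real deployment, the model would be trained on historical sensor logs labeled with actual failures, and a high risk score would open a maintenance ticket before the machine goes down.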

This is an AI use case that not only reduces operating costs but also passes those benefits on to customers, increasing the likelihood of new revenue opportunities.

University of Iowa uses AI to fight blindness

Diabetic retinopathy (DR) is an eye disease caused by diabetes, which over time damages the small blood vessels at the back of the eye and interferes with blood flow. As the disease progresses, patients begin to experience visual symptoms such as dark spots and floaters.

Because DR can lead to blindness if left untreated, it’s critical to detect the condition early, while it can still be successfully treated and managed.


This sounds simple, but it isn’t. Patients often have limited access to care professionals who can diagnose the condition, and even those who live in urban areas may find it hard to get an appointment with overloaded specialists.

Dr. Michael Abramoff is a professor of ophthalmology and visual sciences, and of electrical and computer engineering, at the University of Iowa. He was determined to address the problem with AI: an autonomous system that could render medical decisions. His IDx-DR system uses a low-power microscope attached to a camera to capture images of the back of the eye. AI then evaluates the images for DR biomarkers and reports the findings to the patient’s eye care provider. The process takes only a few minutes, making it easier for doctors to see more at-risk patients. If DR is present, the provider can immediately refer the patient to a specialist. The time saved can mean the difference between managed eye health and blindness.
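To illustrate the screening step, the sketch below scores a retinal image with a small convolutional network and turns that score into a refer-or-rescreen decision. The architecture, the referral threshold, and the decision labels are hypothetical stand-ins chosen for this example; they are not the actual IDx-DR algorithm.

```python
# Hypothetical sketch of autonomous DR screening: a CNN scores a fundus
# image, and a threshold turns the score into a care decision. The
# architecture, threshold, and labels are illustrative assumptions,
# not the actual IDx-DR algorithm.
import torch
import torch.nn as nn

class TinyDRNet(nn.Module):
    """Stand-in classifier for 'are DR biomarkers present in this image?'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Assumed operating point; a real screening system tunes this for
# high sensitivity so referable disease is rarely missed.
REFERRAL_THRESHOLD = 0.5

model = TinyDRNet().eval()  # untrained stand-in; real weights come from training
fundus_image = torch.rand(1, 3, 224, 224)  # placeholder for a real retinal photo

with torch.no_grad():
    dr_score = torch.sigmoid(model(fundus_image)).item()

decision = "refer to specialist" if dr_score >= REFERRAL_THRESHOLD else "rescreen later"
print(f"DR score: {dr_score:.2f} -> {decision}")
```

The design point worth noting is autonomy: because the system renders the referral decision itself, a specialist does not have to read every image, which is what lets a screening visit take only a few minutes.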

This is an AI use case that is life- and game-changing.

Why AI success matters

Many companies are still working on AI projects in fairy tale mode, but we can all be heartened by the AI success stories that have already turned fairy tales into reality. These AI use cases targeted specific business problems and delivered significant and measurable results.

More of these impactful results are likely to follow as AI continues to change how we solve problems and pushes our thinking in new directions.

