In recent years, artificial intelligence (AI) has started to turn up everywhere you look. It powers our ever-present digital assistants, helps recommend entertainment options, and has even begun to reshape the way businesses carry out their everyday operations. As AI moves further into healthcare, regulatory challenges await.
The truth is, there isn’t a single major industry that isn’t being changed by the rapid development of AI-powered technology. There is one, though, that stands out among all others: healthcare.
The global healthcare industry arguably has more to gain from advances in AI than any other industry. It’s already being put to use in aiding diagnoses, monitoring patient health data to look for early warning signs of disease, and managing medication doses and prescriptions.
It’s even proven adept at predicting patient mortality. At the same time, however, the adoption of AI into healthcare carries some unique risks not found elsewhere – owing to the fact that any missteps can cost lives.
That reality is rapidly setting up a conflict between the multitude of businesses seeking to develop healthcare-focused AI solutions and the regulators tasked with making sure the industry always puts the safety of patients first.
As an overview of what's happening with AI and healthcare, here's a look at the ways healthcare AI solutions are pushing the regulatory envelope, and the challenges they're creating for regulators to solve.
Securing the Underlying Data
To begin with, AI solutions don't work in a vacuum. They rely on complex infrastructures that bring together a variety of data sources from disparate providers. In the healthcare industry, that data may come from medical practices, hospitals, drug makers, insurers, and any number of other intermediaries.
The first challenge is in designing medical data integrations that adhere to existing medical privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the EU.
The problem, as it relates to regulation, is the fact that there’s no one-size-fits-all standardized platform to handle medical data. Most of what’s already in use or in development consists of custom database solutions that were never designed to be interoperable.
That makes every link between such systems a potential privacy nightmare. That alone will take time to remedy, and there's no telling what the ultimate solution will be. And when you add AI to the mix, things get even more muddled.
For example, many of today's medical AI systems have used real-world patient data to learn how to perform their intended functions. That data is normally anonymized before being used for machine learning purposes, but studies have already confirmed that such data can often be re-associated with the people who generated it. That means a major data privacy concern remains even after current standards are applied to prevent one. Regulators are going to have to come up with entirely new guidelines for, and oversight of, medical data sharing platforms and how they handle sensitive information.
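The re-identification risk is easy to see in a toy example. The sketch below uses entirely made-up data and a deliberately simplistic matching function, but it illustrates the general linkage-attack idea: records stripped of names can often be re-linked to people through "quasi-identifiers" like ZIP code, birth date, and sex that also appear in public datasets.

```python
# Illustrative sketch with made-up data: removing names alone does not
# anonymize a dataset if quasi-identifiers survive.

anonymized_records = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-01-15", "sex": "M", "diagnosis": "diabetes"},
]

# A separate public dataset (think: a voter roll) that pairs names with
# the same quasi-identifiers.
public_records = [
    {"name": "J. Doe", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
]

def reidentify(anon, public):
    """Link 'anonymized' rows back to named people by matching quasi-identifiers."""
    matches = []
    for a in anon:
        for p in public:
            if (a["zip"], a["birth"], a["sex"]) == (p["zip"], p["birth"], p["sex"]):
                matches.append((p["name"], a["diagnosis"]))
    return matches

# A unique match on all three fields exposes a named person's diagnosis.
print(reidentify(anonymized_records, public_records))
```

Real linkage attacks are more sophisticated, but the principle is the same: the more quasi-identifying fields a "de-identified" dataset retains, the smaller the crowd each record can hide in.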
Approving a Moving Target
Another regulatory challenge posed by current medical AI developments is that they're distinctly different from the devices, medications, and technologies that came before them.
That's because the latest AI solutions in the medical field are designed to keep learning as they gain exposure to new patient data, honing their ability to make diagnoses, assist doctors, or suggest treatments.
That means the capabilities, safety, and efficacy of some of the newest medical AI solutions can't be assessed just once before regulators grant approval.
Unlike medications and standard medical devices, applied AI in medicine is a moving target. Whereas a non-AI device can undergo thorough testing and gain approval, an AI's performance may be different the day after it's undergone testing. What's more, there's no telling whether the differences will make it perform better or worse.
That's why regulators like the US Food and Drug Administration (FDA) have thus far only approved locked-algorithm solutions such as IDx Technologies' IDx-DR system for detecting diabetic retinopathy.
When medical AI has the ability to learn, however, the existing approval processes no longer suffice. To tackle the problem, the FDA has already proposed a whole new regulatory framework to deal with AI in medical applications. It would include a preapproval process giving manufacturers some leeway in what changes (or how much machine learning) would be permissible without re-approval. It would also require manufacturers to submit ongoing performance data to the agency so it can intervene if necessary. That, however, will require a drastic increase in manpower at the FDA, and nobody's sure whether it will get it.
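The ongoing-monitoring idea can be sketched in a few lines. Everything below is hypothetical: the function name, the metric, and the tolerance are illustrative stand-ins, not part of any actual FDA submission format. The point is simply that an adaptive model's performance gets compared continuously against the figure it was approved at, rather than being tested once.

```python
# Hypothetical sketch of post-approval performance monitoring for an
# adaptive medical AI. Names and thresholds are illustrative only.

def needs_review(baseline_accuracy, recent_accuracies, tolerance=0.02):
    """Flag an adaptive model for regulatory review when its recent
    performance drifts beyond an agreed tolerance from the accuracy
    it was approved at."""
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return abs(recent_mean - baseline_accuracy) > tolerance

# Approved at 91% accuracy; after further learning on new patient data,
# the latest monitoring window averages 87% -- a drift of 0.04, outside
# the agreed 0.02 band, so the model gets flagged.
print(needs_review(0.91, [0.88, 0.87, 0.86]))
```

Note that drift is checked in both directions: even an apparent improvement would warrant review, since a change in behavior means the deployed model is no longer the one that was tested.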
Dealing with Black Boxes
Just as in the broader world of AI development, regulators of healthcare AI solutions are going to have to grapple with the prevalence of black-box AI software, which presents a double-edged sword. Restrict developers' ability to protect their work too tightly, and innovation stops.
However, let them have free rein, and there won't be any way to know whether the approaches in use are what's best for the patients who will rely on the technology.
To solve that problem, regulators are going to have to strike a delicate balance that allows developers some means of protecting trade secrets while providing enough transparency to allow for thorough vetting of healthcare algorithms. That’s going to require regulators across a variety of agencies to bring high-level AI developers into the fold, as they’ll be the only ones qualified to figure out what the AI solutions are doing and why.
Those developers will also need a medical or research background to comprehend the medical aspects of the technology. That in itself is a problem: few people can satisfy both requirements at present, and no existing program is designed to produce such experts.
A World of Innovation Complications
Although it is certain that AI holds the power to revolutionize almost everything about the modern healthcare industry, the regulatory issues identified here must be solved if that revolution is to happen in a safe and controlled manner.
Solving them will require something of a parallel revolution within the regulatory bodies that oversee the industry: new approaches, expanded oversight, and the development of a new generation of medical AI experts. Needless to say, paying for all of that oversight won't be a trivial matter, either.
For all of those reasons, it’s easy to foresee that the required changes are not going to happen overnight. There isn’t a roadmap for regulators or developers to follow, and that means they’ll have to blaze a trail together into healthcare’s AI-powered future.
Blazing that trail means being cautious, and the need for all sides to take the time to get things right the first time may prove to be the ultimate limiting factor in AI's spread into the industry. That, of course, is how it should be.
After all, the consequences of failure to regulate would be dire and irreversible — and in healthcare, there are real human lives that hang in the balance.