You and Artificial Intelligence

Medical devices are getting smarter. Contracting executives will have to do the same.


In May 2019, the FDA granted marketing clearance to a device called eMurmur ID, from Ottawa, Ontario-based eMurmur®. The device is mobile- and web-based, and it operates in conjunction with an electronic stethoscope. It uses machine learning to identify and classify pathologic and innocent heart murmurs, the absence of a heart murmur, and S1, S2 heart sounds, according to the developer. Bottom line: Its algorithm – developed from real-world observations – helps providers identify innocent and pathological heart murmurs, or the absence of a heart murmur, in seconds.

That’s the kind of technology supply chain executives will be contracting for in the future.

Medical devices are getting smarter. Devices and equipment that incorporate artificial intelligence, or AI, can actually “learn” over time. And as they do, they can help clinicians make better diagnoses and therapeutic plans.

That fact raises all kinds of issues for supply chain executives, among others. For example, how do you know an AI-based technology is “smart” – a “good learner”? How do you contract for such a technology? How do you pay for it?

The U.S. Food and Drug Administration is grappling with some AI-related issues as well. For example, as an AI-based device takes in more information and offers new insights to clinicians, should it be considered a “new” device? Should it go through FDA’s marketing clearance procedures every time it learns something new? And how can the healthcare community trust that the device will make better choices or recommendations a year from now, or five years from now, than it does at its introduction?

Continuous learning
The old rules of the road for medical device regulation – which have been around since the 1970s – don’t apply anymore, says Zach Rothstein, vice president, technology and regulatory affairs, AdvaMed.

“In terms of regulation, the most unique aspect of AI, or machine learning, is that it can continuously learn,” he points out. “The inputs it receives in the field inform future outputs. The question is, ‘How do you truly allow for that continuous learning aspect of the device to occur?’”

Thus far, the FDA has handled the question by granting marketing clearance for AI-based products that are essentially “locked,” says Rothstein. Their algorithms are typically trained on thousands of data points – which makes them very smart indeed. But they haven’t been FDA-cleared to get any “smarter” in the field. In other words, they are prevented from continuously learning.
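
In software terms, a locked model is frozen at clearance: it scores new inputs, and the field data it sees may be logged for the developer’s own analysis, but nothing feeds back into the model. Here is a minimal Python sketch of that idea – the weights, threshold and labels are invented for illustration, not drawn from any cleared product:

    # A "locked" model: frozen parameters, inference only.
    # Weights, threshold and labels are illustrative assumptions.
    class LockedClassifier:
        def __init__(self, weights, threshold=0.5):
            self.weights = weights        # fixed at clearance
            self.threshold = threshold    # fixed at clearance
            self.field_log = []           # collected, but never used to retrain

        def predict(self, features):
            score = sum(w * x for w, x in zip(self.weights, features))
            self.field_log.append(features)   # kept for post-market review only
            return "pathologic" if score > self.threshold else "innocent"

    # The model behaves identically on day 1 and day 1,000:
    model = LockedClassifier(weights=[0.8, -0.3, 0.5])
    print(model.predict([0.9, 0.2, 0.4]))   # "pathologic"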

FDA is trying to re-imagine its approach to AI-based devices by adopting a “change management protocol,” which would establish parameters allowing devices to continuously learn in the field. “Without that, things have to be locked,” says Rothstein. “If a developer wants to update the software of an AI device based on input received from the real world, the developer has to go back to the FDA for marketing clearance.”
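
One way to picture such a protocol in code: an update produced from real-world data is deployed only if it still meets performance bounds agreed on in advance, and is rejected otherwise. The sketch below is a hypothetical illustration of that gate, not FDA’s actual protocol; the metrics and bounds are assumptions:

    # Hypothetical "change management" gate: a retrained model is accepted
    # only if it meets pre-specified bounds on a held-out validation set.
    PRE_SPECIFIED_BOUNDS = {"sensitivity": 0.90, "specificity": 0.85}

    def evaluate(model, validation_set):
        """Compute sensitivity and specificity on labeled validation data."""
        tp = fn = tn = fp = 0
        for features, label in validation_set:
            pred = model(features)
            if label == 1:
                tp, fn = tp + (pred == 1), fn + (pred == 0)
            else:
                tn, fp = tn + (pred == 0), fp + (pred == 1)
        return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

    def accept_update(candidate, validation_set):
        """Deploy the retrained model only if every pre-specified bound holds."""
        metrics = evaluate(candidate, validation_set)
        return all(metrics[m] >= b for m, b in PRE_SPECIFIED_BOUNDS.items())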

FDA trying to catch up
To catch up to AI technology, FDA is simultaneously exploring two paths:

  1. Precertifying developers of AI-based devices.
  2. Developing a framework for AI-based medical devices.

In July 2017, the agency launched its “Pre-Cert pilot program” as part of its “Digital Health Innovation Action Plan.” The gist is to look at the software developer or digital health technology developer, rather than primarily at the product. After reviewing a company’s systems for software design, validation and maintenance, FDA would determine whether the company meets quality standards and, if so, would precertify it.

The agency compares the approach to the Transportation Security Administration’s PreCheck program, which screens travelers and issues them a “Known Traveler Number,” speeding their passage through airport security.

With the information gleaned through the pilot program, the agency hopes to determine the key metrics and performance indicators for precertification, and to identify ways that precertified companies could submit less information to the FDA than is currently required before marketing a new digital health tool. The FDA is also considering – as part of the pilot program – whether and when precertified companies could forgo premarket review of a product altogether.

In September 2017, the agency announced the names of the companies selected to participate in the pilot program. The agency’s intention was to include a wide range of companies and technologies in the digital health sector: small startups and large companies, high- and low-risk medical device software products, medical product manufacturers and software developers. Participants selected include:

  • Apple, Cupertino, California.
  • Fitbit, San Francisco, California.
  • Johnson & Johnson, New Brunswick, New Jersey.
  • Pear Therapeutics, Boston, Massachusetts.
  • Phosphorus, New York, New York.
  • Roche, Basel, Switzerland.
  • Samsung, Seoul, South Korea.
  • Tidepool, Palo Alto, California.
  • Verily, Mountain View, California.

As part of the Pre-Cert pilot program, participants have agreed to provide access to the measures they currently use to develop, test and maintain their software products, including the ways they collect post-market data. Participants also agreed to be available for site visits from FDA staff and to provide information about their quality management systems. This sharing will help the FDA continue to build its expertise in these areas, while giving the agency the information it needs to provide proper oversight of these products and firms.

A broader framework for AI devices
In April 2019, then-FDA Commissioner Scott Gottlieb announced that FDA was exploring a framework that would allow for modifications to algorithms to be made from real-world learning and adaptation.

“For traditional software as a medical device, when modifications are made that could significantly affect the safety or effectiveness of the device, a sponsor must make a submission demonstrating the safety and effectiveness of the modifications,” Gottlieb wrote at the time. “With artificial intelligence, because the device evolves based on what it learns while it’s in real world use, we’re working to develop an appropriate framework that allows the software to evolve in ways to improve its performance while ensuring that changes meet our gold standard for safety and effectiveness throughout the product’s lifecycle – from premarket design throughout the device’s use on the market.”

For example, an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific subtypes of breast cancer by continually learning from real-world use and feedback, Gottlieb pointed out. “Our ideas are the foundational first step to developing a total product lifecycle approach to regulating these algorithms that use real-world data to adapt and improve.”

What’s ahead?
FDA is probably a few years away from figuring all this out, says Rothstein, and Congressional legislation may be required for some of the changes under consideration. “From most people’s perspective, these proposals are outside the bounds of the Federal Food, Drug, and Cosmetic Act,” he says. Still, next year may be a pivotal one, as FDA prepares concrete proposals for Congress to consider.

“This will certainly delay the deployment of certain technologies,” says Rothstein. “But FDA is doing its best to expedite the process. Long-term, I don’t think it will significantly impact the advancement of AI technology. Any developer that’s serious about getting into this space will do so.”

Supply chain executives might be in a position to help.
“A lot of these companies are small, and they’re trying to figure out how to get into the market,” says Rothstein. “They may be sophisticated at developing software, but many are new to the healthcare market.” By interacting with such companies, supply chain executives might be able to help them understand what the market needs.

“A village approach might be the way to go.”


Get to know these terms

Artificial Intelligence. The science and engineering of making intelligent machines, especially intelligent computer programs. Artificial intelligence can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and machine learning.

Machine Learning. An artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data.

“Locked” algorithms. Algorithms that don’t continually adapt or learn every time they are used.

“Adaptive” or “continuously learning” algorithms. Machine-learning algorithms that can learn from new user data presented through real-world use. They don’t need manual modification to incorporate learning or updates. For example, an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific subtypes of breast cancer by continually learning from real-world use and feedback.

Algorithm Change Protocol, or “Predetermined Change Control Plan.” A plan proposed by FDA that would specify the types of modifications a manufacturer anticipates – referred to as the “Software as a Medical Device Pre-Specifications” – along with the methodology used to implement those changes in a controlled manner that manages risks to patients. Under this approach, the FDA would expect manufacturers to commit to transparency and real-world performance monitoring for artificial intelligence and machine learning-based software as a medical device, and to update the FDA periodically on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.

Software as a Medical Device. Software intended to be used for one or more medical purposes that are not part of a hardware medical device. It can be used across a broad range of technology platforms, including medical device platforms, commercial “off-the-shelf” platforms, and virtual networks, to name a few. Such software was previously referred to by industry, international regulators, and health care providers as “standalone software,” “medical device software,” and/or “health software.”

Source: U.S. Food and Drug Administration


AI-based technologies on the market

Examples of AI-based devices that have received FDA marketing clearance.

Heart murmur detection
In May 2019, eMurmur®, Ottawa, Ontario, announced that its eMurmur ID had received FDA clearance. The company describes eMurmur ID as a mobile and cloud solution that operates in conjunction with an electronic stethoscope, using machine learning to identify and classify pathologic and innocent heart murmurs, the absence of a heart murmur, and S1, S2 heart sounds. The solution comprises AI-based analytics, a mobile app, and a web portal (all HIPAA-compliant). Evidence for the device is based on five studies involving more than 1,000 patients, according to the company.
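
As a rough picture of how such a classifier comes to be, developers extract features from recorded heart sounds and train a supervised model on labeled examples. The snippet below is a generic scikit-learn illustration with fabricated feature vectors – it is not eMurmur’s method, and a real system would derive its features (timing, frequency content and so on) from the audio itself:

    # Generic supervised classification sketch; the data is fabricated.
    from sklearn.linear_model import LogisticRegression

    X_train = [[0.12, 0.80], [0.15, 0.75], [0.60, 0.20], [0.55, 0.25]]
    y_train = ["innocent", "innocent", "pathologic", "pathologic"]

    clf = LogisticRegression().fit(X_train, y_train)
    print(clf.predict([[0.58, 0.22]]))   # ['pathologic']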

Chest X-ray triage product
In May 2019, Zebra Medical Vision, Tel Aviv, Israel, received marketing clearance from the U.S. Food and Drug Administration for its artificial intelligence-based chest X-ray triage product. The clearance covers an alert for urgent findings of pneumothorax, an accumulation of gas in the space between the lung and the chest wall that can lead to total lung collapse. Pneumothorax is usually diagnosed by chest X-ray, but the images can be difficult to interpret.

Detection of left ventricular EF
In June 2018, San Francisco-based Bay Labs announced that its EchoMD AutoEF software had received 510(k) clearance from the U.S. Food and Drug Administration for fully automated clip selection and calculation of left ventricular ejection fraction (EF). EF is said to be the single most widely used metric of cardiac function and serves as the basis for many clinical decisions. The EchoMD AutoEF algorithms are intended to eliminate the need to manually select views, choose the best clips, and manipulate them for quantification – said to be a time-consuming and highly variable process. The company says its software algorithm “learned” clip selection and EF calculation after being trained on a curated dataset of over 4 million images representing 9,000 patients.
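
The underlying arithmetic is simple; what the AI automates is choosing the clips and measuring the ventricular volumes that feed it. For reference, ejection fraction is the share of blood pumped out of the left ventricle with each beat (the example volumes below are illustrative):

    def ejection_fraction(edv_ml, esv_ml):
        """EF (%) = (end-diastolic volume - end-systolic volume) / EDV x 100."""
        return (edv_ml - esv_ml) / edv_ml * 100

    # e.g., an EDV of 120 mL and an ESV of 50 mL give an EF of about 58%
    print(round(ejection_fraction(120, 50)))   # 58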

Detection of diabetic retinopathy
In April 2018, the FDA permitted marketing of the IDx-DR, from IDx LLC, Coralville, Iowa, said to be the first device to use artificial intelligence to detect greater than a mild level of the eye disease diabetic retinopathy in adults who have diabetes. Diabetic retinopathy occurs when high levels of blood sugar lead to damage in the blood vessels of the retina, the light-sensitive tissue in the back of the eye. The IDx-DR is a software program that uses an artificial intelligence algorithm to analyze images of the eye taken with a retinal camera. A doctor uploads the digital images of the patient’s retinas to a cloud server on which IDx-DR software is installed. If the images are of sufficient quality, the software provides the doctor with one of two results: (1) “more than mild diabetic retinopathy detected: refer to an eye care professional,” or (2) “negative for more than mild diabetic retinopathy; rescreen in 12 months.” The FDA evaluated data from a clinical study of retinal images obtained from 900 patients with diabetes at 10 primary care sites.
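
The decision flow the FDA describes reduces to a quality gate followed by a binary screen. Here is a schematic sketch – the function names and threshold stand in for IDx’s proprietary models and are assumptions, not the actual implementation:

    # Schematic of the two-result screening flow; `image_quality` and
    # `dr_score` are hypothetical stand-ins for proprietary models.
    def screen_retinal_images(images, image_quality, dr_score, threshold=0.5):
        if not all(image_quality(img) for img in images):
            return "insufficient image quality; retake images"
        if max(dr_score(img) for img in images) > threshold:
            return ("more than mild diabetic retinopathy detected: "
                    "refer to an eye care professional")
        return "negative for more than mild diabetic retinopathy; rescreen in 12 months"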

Potential stroke warning
In February 2018, the FDA permitted marketing of the Viz.AI Stroke Platform from San Francisco-based Viz.ai, Inc. A stroke occurs when the flow of oxygen-rich blood to a portion of the brain is blocked; when this happens, roughly 2 million brain cells die every minute. The Viz.AI Contact application (part of the Stroke Platform) is designed to analyze CT images of the brain and send a text notification to the mobile device of a neurovascular specialist if a suspected large vessel blockage, or occlusion, has been identified. The device could benefit patients by notifying a specialist earlier, thereby decreasing the time to treatment.
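
In outline, that triage logic is a score-and-page loop: flag any scan whose model score crosses an alert threshold and notify the on-call specialist. A hypothetical sketch follows – the scoring function, paging hook and threshold are all assumptions, not Viz.ai’s implementation:

    # Hypothetical triage loop: score each CT series, page a specialist
    # when a suspected large vessel occlusion (LVO) crosses the threshold.
    def triage_ct_series(series_list, lvo_score, notify, threshold=0.8):
        for series in series_list:
            score = lvo_score(series)
            if score >= threshold:
                notify(f"Suspected LVO (score {score:.2f}) in series {series['id']}")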


What if AI makes a mistake?

The premise behind artificial-intelligence-based devices is that they can “learn” over time. In other words, based on real-world inputs and experience, they can make better diagnoses and better care recommendations as time goes by.

But what if those recommendations result in harm to a patient? What if the diagnosis is wrong? Who’s to blame?

Not physicians, says the American Medical Association.

At the AMA’s annual meeting in June, delegates endorsed a set of policy recommendations regarding AI, including the following:

“Liability and incentives aligned so the individual or entity best positioned to know the AI system risks and best positioned to avert or mitigate harm do so through design, development, validation, and implementation. When a mandate exists to use AI, the individual or entity issuing the mandate must be assigned all applicable liability. Developers of autonomous AI systems with clinical applications (screening, diagnosis, treatment) are in the best position to manage issues of liability arising directly from system failure or misdiagnosis and must accept this liability.”
