More Than Meets the AI.

When it comes to news about Artificial Intelligence (AI), all too many stories focus on the potential negative consequences of the technology, often in a sensationalized way. That is not to say there are no legitimate concerns about its use, or that the potential risks should be ignored. Technology can be used for good as well as evil; in healthcare, however, even more is at stake, so it is important to approach AI with an informed and balanced perspective, especially when it can dramatically improve efficiency and outcomes.

Diagnostic AI is better described as “Assisted Intelligence” rather than “Artificial Intelligence”

AI is revolutionizing the way we diagnose and treat medical conditions. One of the latest developments in the field is the use of diagnostic AI in medical devices and software to analyze heart and lung sounds. In this capacity, diagnostic AI is perhaps better thought of as “Assisted Intelligence” rather than “Artificial Intelligence,” as its primary function is to assist the healthcare professional, who ultimately makes the clinical decisions.

In this blog post, I focus on two specific areas that are often overlooked when evaluating the pros and cons of AI in healthcare, and explore why diagnostic AI differs from AI used in other applications and industries.

AI is not one technology.

The term AI was coined in 1955 by John McCarthy, Emeritus Professor at Stanford, who defined it as “the science and engineering of making intelligent machines.” AI is NOT one technology, nor one methodology, and it is certainly not one solution for one industry.

This is how OpenAI’s ChatGPT (GPT-4) responded to the prompt: “AI isn’t one thing. Explain.”

You are correct in saying that AI is not just one thing. AI is not a single technology, but rather a collection of different techniques and tools that can be used to enable machines to perform intelligent tasks. Large Language Based AI, such as the GPT-4 model that I am based on, is just one type of AI, focused on natural language processing and generation. However, when people refer to “AI” in general, they often use it as a catch-all term to refer to a broad range of machine learning techniques and tools that can be used to solve a variety of problems, including image and audio recognition, language translation, speech recognition, and more.

Pretty impressive!

Large Language Models (LLMs), such as the one behind ChatGPT, are not typically used for image- or audio-based diagnostic applications in healthcare. As the name implies, LLMs are designed for language tasks; in healthcare, they are best suited to analyzing and processing medical records and other text-based input.
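To make that concrete, here is a minimal sketch of applying a general-purpose language model to clinical text. It assumes the Hugging Face transformers library and an illustrative off-the-shelf summarization model; it is a toy example, not a validated clinical tool.

```python
# Minimal sketch: applying a general-purpose language model to clinical text.
# Assumes the Hugging Face "transformers" library; the model choice is
# illustrative only, not a validated or approved clinical tool.
from transformers import pipeline

# A generic summarization pipeline; a real deployment would use a model
# fine-tuned and validated on medical text.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Hypothetical clinical note, invented for illustration.
note = (
    "Patient presents with intermittent chest discomfort on exertion, "
    "relieved by rest. History of hypertension, managed with lisinopril. "
    "No prior cardiac events. Physical exam reveals a grade 2/6 systolic "
    "murmur at the right upper sternal border."
)

summary = summarizer(note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```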

AI techniques that detect anomalies or help identify certain conditions or diseases from images or audio recordings rely on different technological approaches. For instance, the diagnostic AI employed in digital auscultation (Note 1) uses audio as input. Multiple clinical trials have confirmed that such diagnostic AI can detect murmurs, clinically significant aortic stenosis, and mitral regurgitation with a level of accuracy comparable to that of expert cardiologists (Note 2).
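The clinical systems cited above are proprietary, but the general technique can be sketched: the recording is converted into a time-frequency representation, such as a mel-spectrogram, which a neural network then classifies. The sketch below, using the librosa and PyTorch libraries, is a hypothetical illustration of that pipeline; the architecture and parameters are my own assumptions, not the algorithm used in any approved device.

```python
# Minimal sketch of audio-based classification for heart sounds:
# waveform -> mel-spectrogram -> small CNN -> murmur probability.
# This illustrates the general technique only; it is NOT the algorithm
# used in any regulated diagnostic device, and the model is untrained.
import librosa
import numpy as np
import torch
import torch.nn as nn

def to_mel_spectrogram(path: str, sr: int = 4000) -> torch.Tensor:
    """Load a heart-sound recording and convert it to a log-mel spectrogram."""
    y, sr = librosa.load(path, sr=sr)  # heart sounds are mostly low-frequency
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    return torch.tensor(log_mel, dtype=torch.float32).unsqueeze(0)  # (1, 64, T)

class MurmurClassifier(nn.Module):
    """Toy CNN producing a murmur-present probability from a spectrogram."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.head(x))

# Usage (hypothetical file path; the untrained model's output is meaningless
# until the network has been trained and clinically validated):
# spec = to_mel_spectrogram("heart_recording.wav").unsqueeze(0)  # add batch dim
# prob = MurmurClassifier()(spec)
```

In practice, such a model would be trained on thousands of annotated recordings and, as discussed next, validated through clinical trials and regulatory review before any diagnostic use.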

Regulatory Approval

Although numerous AI tools are freely available online and used by millions of consumers without any regulatory oversight (Note 3), that is not the case with diagnostic AI in healthcare. Diagnostic AI is subject to regulatory approval and must undergo rigorous testing and evaluation to ensure it is safe and effective. Regulatory agencies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have established guidelines and requirements for the use of AI in healthcare to ensure that it is used ethically and responsibly.

In its September 28, 2022 final guidance document (Note 4), the FDA sets out four criteria for regulating software in medical devices and systems. The first criterion states that “software functions that are intended to acquire, process, or analyze a medical image, a signal from an IVD, or a pattern or signal from a signal acquisition system and are intended for a purpose identified in section 201(h) of the FD&C Act remain devices and therefore are subject to FDA oversight.”

In other words, if the type of data described above is used as input, as is the case with diagnostic AI, the software function is regulated by the FDA. Ironically, because diagnostic AI is subject to much stricter regulation, it carries lower risks than general, unregulated AI applications.

In summary, while AI is a powerful technology with many potential applications, it is important to recognize that there are different types of AI, each with its own unique capabilities, limitations and regulatory requirements. The use of diagnostic AI in healthcare, unlike some other types of AI applications, requires careful consideration and regulatory oversight to ensure that it is used safely and effectively.

References and further reading:

1. Feasibility assessment of AI-assisted home-based remote auscultation to optimize echocardiogram referrals: https://emurmur.com/wp-content/uploads/2021/12/High-Value-Practice-Academic-Alliance-2021-Poster.pdf

2. Deep Learning Algorithm for Automated Cardiac Murmur Detection via a Digital Stethoscope Platform. JAHA, Journal of the American Heart Association, April 26, 2021: https://www.ahajournals.org/doi/10.1161/JAHA.120.019905

3. ChatGPT Is a Tipping Point for AI: https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai

4. FDA Clinical Decision Support Software: https://www.fda.gov/media/109618/download