NCHR Comments on the Proposed Framework from Ranking Member Cassidy to Regulate Artificial Intelligence (AI) in Healthcare

September 22, 2023

We appreciate the opportunity to comment on the proposed framework from Ranking Member Cassidy to regulate artificial intelligence (AI) in healthcare.

The National Center for Health Research (NCHR) is a nonprofit think tank that conducts, analyzes, and scrutinizes research on a range of health issues, with a particular focus on which prevention strategies and treatments are most effective for which patients and consumers. We do not accept funding from companies that make products that are the subject of our work, so we have no conflicts of interest.

This framework comes at an important time, as the use of AI in healthcare is increasing significantly and has numerous potential benefits. However, we agree with the report that AI healthcare tools also carry significant risks, which may require new legislation to clarify or expand authority. We also agree with the white paper’s statement that the FDA’s framework for regulating medical devices was not designed for devices that incorporate evolving AI. Since 1995, the FDA has authorized more than 500 AI-enabled medical devices via 510(k) clearance, De Novo classification, or premarket approval (PMA).[1] The vast majority of these devices were not required to prove either safety or effectiveness, so it is past time to update the regulatory frameworks for medical devices while also ensuring that products are safe and effective for patients. The following are considerations to promote this goal.

  1. Regulate Lab Developed Tests (LDTs) as medical devices, including those incorporating AI.

Currently, LDTs are not regulated by the FDA, including those that use AI diagnostic tools. As a result, thousands of LDTs are legally sold without being proven accurate, exposing patients to potential harm from either false positive or false negative results.

  2. Require diversity in data.

Inaccuracies have been documented in AI models, such as those used to diagnose breast cancer.[2] When models are trained disproportionately on data from one racial or ethnic group, the results may be inaccurate for other racial and ethnic groups. Any framework around AI should require evaluations and routine monitoring to establish the validity and reliability of these tests for diverse racial and ethnic groups.

  3. Reform the 510(k) clearance pathway for AI/machine learning (ML) devices to require appropriate predicates and proactive post-market surveillance.

The FDA is clearing an increasing number of AI/ML-based medical devices through the 510(k) pathway. This pathway allows clearance if the device is substantially equivalent to a previously cleared device, referred to as a predicate. For example, in 2018 the FDA cleared an AI device designed to diagnose liver and lung cancer based on its similarity to a type of imaging software cleared 20 years earlier.[3] The new device was deemed “substantially equivalent” despite substantial changes to the product. According to a 2023 study, more than a third of FDA-cleared AI/ML medical devices were cleared based on equivalence to predicate devices that did not involve AI or ML.[4] This is an obvious failure to hold AI devices to the standards necessary to ensure that they will benefit patients rather than harm them.

As a result, many devices are subsequently recalled, but they may cause great harm before a recall is implemented. In a study of 755 AI/ML medical devices cleared from 2019 to 2021, approximately 10% were subsequently recalled.[3] Similar recall rates were found in a study assessing clearances from 2008 to 2017.[5] This is unacceptable. Predicates should be required to be substantially equivalent in terms of AI/ML standards, and strong post-market surveillance should also be required to ensure that these AI products are accurate and functioning properly, to prevent patient harm.

  4. Revise processes to routinely test effectiveness as systems are updated.

A draft FDA guidance allows the FDA to accept predetermined change control plans in premarket product submissions, in which developers outline anticipated modifications in order to avoid subsequent review and approval. The focus of that guidance is to reduce the administrative burden on the FDA, rather than to protect patients. What criteria will the FDA use when accepting predetermined change control plans to ensure that changes do not reduce device accuracy? These predetermined changes should be very limited in frequency and narrow in scope.

Thank you for considering these recommendations, which are essential to improving the accuracy of AI-related medical devices. The use of AI medical devices must be much more carefully regulated, both pre-market and post-market, to ensure that their benefits outweigh their risks.


  1. U.S. Food & Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices.
  2. Williamson-Lee, J. (2022). To prevent unnecessary biopsies, scientists train an AI model to predict breast cancer risk from MRI scans. STAT.
  3. Szabo, L. (2019). Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks. Kaiser Health News.
  4. Muehlematter, U., Bluethgen, C., & Vokinger, K. (2023). FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks. The Lancet.
  5. Dubin, J. R., Simon, S. D., Norrell, K., Perera, J., Gowen, J., & Cil, A. (2021). Risk of Recall Among Medical Devices Undergoing US Food and Drug Administration 510(k) Clearance and Premarket Approval, 2008-2017. JAMA Network Open, 4(5):e217274. doi:10.1001/jamanetworkopen.2021.7274