Dr. Diana Zuckerman Testifies at FDA Advisory Committee on Digital Health

Statement of Dr. Diana Zuckerman, President, National Center for Health Research, FDA Advisory Committee on Digital Health, November 6, 2025


I’m Dr. Diana Zuckerman, president of the National Center for Health Research. We’re a nonprofit think tank focused on the safety and effectiveness of medical and consumer products. We do not accept funding from entities that have financial ties to our work, so we have no conflicts of interest.

I started my professional career as a clinical psychologist and was a postdoctoral fellow in psychiatric epidemiology at Yale Medical School. I worked as a therapist and also conducted research on the causes and treatment of depression. I moved to Washington, DC to be an AAAS Congressional Science Fellow sponsored by the American Psychological Association. I worked in the U.S. House, Senate, and at the Center for Mental Health Services at HHS, so I have strong concerns about how to ensure that digital mental health technology is safe and effective.

We all know that many people with mental health issues are unable to afford therapy or do not go into therapy for other reasons. But that does not mean that anything is better than nothing. As you’ve heard, the use of AI chatbots by children and young adults can be extremely harmful, in some cases encouraging them to harm themselves and others.

FDA understands the unique challenges for devices that use AI, because “the device may confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in accuracy.” Some patients may misinterpret what the device is telling them, or the device may misinterpret what the patient is trying to convey. Some mental health professionals will not understand how to monitor or oversee use of the technology, or may not be included in the patient’s use of it, unless oversight is a feature of the device.

We have several major concerns about the FDA’s proposed approach:

#1. The FDA’s distinction between wellness products and devices for mental health problems is problematic because there can be considerable overlap between those two categories. Common problems like poor social skills and poor coping skills can escalate, sometimes unpredictably, into a need for professional help to prevent a dramatic downward spiral into suicidal thoughts and self-harm. This is especially true for adolescents but can be true for children and adults of all ages.

#2. FDA does not have the resources to regulate all AI chatbots or so-called wellness devices, either pre-market or post-market. The agency already needs many more subject matter experts in AI, software, and other specialty areas to help CDRH reviewers. We’ve heard today that CDRH is not regulating devices that are marketed as licensed therapists. FDA needs to make sure that unregulated products are not being used for diagnostic or therapeutic purposes, and that requires more staff than CDRH has. But if CDRH does not regulate wellness devices that are used to measure a person’s signs and symptoms or that are used as AI friends by vulnerable children and adults, many people will be harmed.

#3. The 510(k) pathway is not appropriate for these devices because they are not substantially equivalent to regulated devices already on the market. De Novo and PMA could be appropriate pathways, but only if they require randomized controlled trials. Many clinical trials for De Novo and PMA devices are single-arm trials without a placebo control or a comparison to the accepted standard of care.

Control groups are especially important for issues like depression and stress, which tend to ebb and flow over time even without any treatment. Many depressed patients will get better or worse for reasons that are completely unrelated to the device, treatment, or placebo.

In Conclusion: Questions for Today

1. The first scenario is when a healthcare provider prescribes a digital mental health medical device for major depressive disorder (MDD) for the patient to use independently at home.

It’s logical to have risk mitigations, such as alerts for thoughts of self-harm. But how possible is it to predict how to reduce risks in a clinical trial involving an AI therapist that changes daily? Clinically meaningful endpoints should be required, but those take time to study, and that’s not possible for devices that change from week to week. How long can you follow up patients who are not seeing a human therapist?

What postmarket monitoring capabilities are realistic for an agency that was short-staffed even before the recent staffing cuts, unpaid furloughs, and retirements?

If the patient/client refused treatment with a human therapist, how much confidence would you have that they would carefully read, understand, and remember warnings on the label?

2. The second scenario involves OTC use of the same types of devices by the same types of patients. A device for major depression should not be OTC, because research shows that the risk of suicide increases when a patient starts to improve, and if the patient does not improve, they may self-medicate with alcohol or drugs.

3. Finally, the third scenario: children and adolescents need a mental health professional to be substantially involved in their use of these devices.