
FDA Opens Public Consultation on Oversight of AI-Enabled Medical Devices, Seeking Input on Real-World Performance and Safety

By Emily Carson

October 6, 2025 — Washington, D.C.

The U.S. Food and Drug Administration (FDA) has launched a formal public consultation to gather feedback on how artificial intelligence-enabled medical devices (AIMDs) should be monitored and evaluated across their entire lifecycle. Announced on September 30, 2025, the initiative marks a significant step in the FDA’s evolving regulatory strategy toward adaptive medical technologies — particularly those used in surgery, diagnostics, and real-time clinical decision-making.

The notice, issued by the Center for Devices and Radiological Health (CDRH), is titled “Request for Public Comment: Measuring and Evaluating Artificial Intelligence-enabled Medical Device Performance in the Real-World.” It invites comments from manufacturers, healthcare systems, clinicians, and patients until December 1, 2025.

A New Era of Continuous Oversight

AI-enabled devices, such as robotic surgical assistants, imaging diagnostics, and predictive analytics systems, are increasingly capable of learning and evolving once deployed. This adaptive nature has challenged the FDA’s traditional “static” approval process — designed for fixed, unchanging devices.

In its announcement, the FDA recognizes the need for “dynamic oversight frameworks” that monitor how AI systems behave in real-world environments. The agency specifically requests input on:

• Best practices for assessing AI device performance in changing clinical settings.

• Techniques to detect and mitigate algorithmic bias and data drift over time.

• Methods for post-market surveillance as models learn or update after deployment.

• Criteria for transparency, reporting, and documentation across the device lifecycle.

According to the FDA’s Digital Health Center of Excellence, the goal is to create a regulatory ecosystem capable of identifying performance degradation before patient safety is compromised — a shift from reactive to predictive regulation.
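As an illustration of what such predictive monitoring could look like in practice, the sketch below compares a model's real-world discrimination against its premarket baseline and raises a flag when performance slips beyond a tolerance. The function names, thresholds, and synthetic data are illustrative assumptions, not FDA requirements or any manufacturer's actual system.

```python
# A minimal sketch of a post-deployment performance monitor: compare a model's
# recent real-world AUC against its premarket baseline and flag degradation
# before it reaches a safety margin. Thresholds and data are illustrative
# assumptions only.

import numpy as np
from sklearn.metrics import roc_auc_score


def check_performance_drift(y_true, y_score, baseline_auc, tolerance=0.05):
    """Return (current_auc, degraded?) for one monitoring window."""
    current_auc = roc_auc_score(y_true, y_score)
    degraded = current_auc < baseline_auc - tolerance
    return current_auc, degraded


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for one month of real-world outcomes and model scores.
    y_true = rng.integers(0, 2, size=500)
    y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, size=500), 0, 1)

    auc, degraded = check_performance_drift(y_true, y_score, baseline_auc=0.90)
    print(f"window AUC = {auc:.3f}; degradation flag = {degraded}")
```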


“AI-enabled medical devices can transform precision surgery and diagnostics, but their learning nature makes static oversight obsolete. The FDA’s call for feedback is a recognition that regulation must evolve to keep pace with adaptive algorithms,”

— Dr. Michelle Tarver, Deputy Director, Digital Health Center of Excellence, CDRH.

For medical device manufacturers, the consultation signals future expectations for continuous performance tracking and algorithmic transparency. Companies developing surgical robots, AI-driven imaging tools, or smart diagnostic platforms will likely need to demonstrate not just safety and efficacy at approval — but sustained performance over time.

“Manufacturers may soon be required to implement post-market monitoring pipelines similar to pharmacovigilance systems in pharmaceuticals,” noted one policy analysis. “This includes proactive data collection, bias audits, and continuous calibration mechanisms.”
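For concreteness, the sketch below shows one element such a pipeline might include: a periodic bias audit that stratifies monitoring data by a demographic attribute and compares sensitivity across subgroups. The column names, the toy data, and the 0.10 disparity limit are illustrative assumptions rather than regulatory criteria.

```python
# A minimal sketch of a periodic bias audit: compute sensitivity (true-positive
# rate) per subgroup and flag a disparity above an assumed limit. Column names,
# data, and the 0.10 limit are illustrative assumptions only.

import pandas as pd


def subgroup_sensitivity(df, group_col, label_col="label", pred_col="pred"):
    """Sensitivity per subgroup: mean prediction among true positives."""
    positives = df[df[label_col] == 1]
    return positives.groupby(group_col)[pred_col].mean()


if __name__ == "__main__":
    df = pd.DataFrame({
        "sex":   ["F", "F", "M", "M", "F", "M", "F", "M"],
        "label": [1,    1,   1,   1,   0,   0,   1,   1],
        "pred":  [1,    0,   1,   1,   0,   1,   1,   1],
    })
    rates = subgroup_sensitivity(df, "sex")
    disparity = rates.max() - rates.min()
    print(rates.to_string())
    print(f"sensitivity gap = {disparity:.2f}; audit flag = {disparity > 0.10}")
```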

Hospitals and health systems, meanwhile, are expected to adopt outcome auditing processes for AI tools in clinical use. Instead of one-time credentialing, hospitals may need to validate algorithmic outputs regularly to ensure they perform as intended within diverse patient populations.
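One simple form such recurring validation might take, sketched below under assumed numbers, is a calibration check comparing observed event rates in a hospital's own patient population against the model's predicted risks, with an alert when the gap exceeds an agreed band. The ±0.05 band and the simulated data are assumptions for illustration, not a published credentialing criterion.

```python
# A minimal sketch of periodic output validation: "calibration-in-the-large",
# the gap between observed event rate and mean predicted risk for a site's own
# patient mix. The acceptance band is an illustrative assumption only.

import numpy as np


def calibration_in_the_large(y_true, y_prob):
    """Difference between observed event rate and mean predicted risk."""
    return float(np.mean(y_true) - np.mean(y_prob))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_prob = rng.uniform(0.05, 0.40, size=1_000)             # predicted risks
    y_true = rng.binomial(1, np.clip(y_prob + 0.08, 0, 1))   # outcomes run higher

    gap = calibration_in_the_large(y_true, y_prob)
    print(f"calibration gap = {gap:+.3f}; revalidate = {abs(gap) > 0.05}")
```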

This initiative follows a September 26, 2025, request from the White House Office of Science and Technology Policy (OSTP) for public input on federal regulations that may hinder responsible AI innovation. The FDA’s consultation complements that broader effort by focusing specifically on healthcare applications, where the stakes of algorithmic error are particularly high.

In recent years, the FDA has published multiple discussion papers outlining the “Total Product Lifecycle (TPLC)” approach to digital health technologies — a concept that emphasizes ongoing data collection, verification, and adaptation of regulatory requirements throughout a product’s lifespan.

This consultation effectively operationalizes that philosophy. By integrating TPLC principles, the agency aims to build a feedback-driven ecosystem where manufacturers, regulators, and clinicians share responsibility for safety and performance assurance.


AI is already transforming the surgical landscape — from real-time visualization and navigation systems to autonomous suturing and predictive risk analytics. However, these same capabilities also introduce potential for unintended algorithmic drift if models adapt to biased or incomplete data.

Historically, medical device oversight has been reactive — responding to adverse events or performance failures after they occur. The FDA’s latest consultation aims to invert that paradigm by building early-warning systems into regulatory oversight.

By collecting data continuously from clinical use, the agency envisions a future where machine learning models are regulated through performance analytics rather than periodic static reviews. This could dramatically reduce safety risks while fostering innovation through data transparency.

As one analysis puts it, “the FDA’s approach reflects a growing recognition that adaptive technologies require adaptive governance.”

The public comment period remains open until December 1, 2025. Submissions can be made through the FDA’s public docket, with contributions encouraged from manufacturers, hospitals, researchers, and patient advocacy groups.

Once comments are reviewed, the FDA is expected to release a guidance document or pilot program outlining best practices for real-world AI performance monitoring — potentially influencing future premarket submissions and postmarket requirements.
