AI-Enhanced Medical Devices Under Scrutiny as Safety Concerns Rise — Surgeons and Regulators Call for Stronger Oversight

By Ava Renshaw

Surgeons, Regulators Sound Alarm as AI-Enhanced Medical Devices Linked to Rising Reports of Surgical Mishaps

A major Reuters investigation published today reveals an emerging safety concern in U.S. healthcare: a sharp increase in reported malfunctions and injuries associated with artificial intelligence (AI)-augmented medical devices, particularly those used for surgical navigation and imaging. The report highlights potential regulatory gaps at the U.S. Food and Drug Administration (FDA) as AI technologies rapidly proliferate in clinical settings without rigorous pre-market testing or robust oversight.

In 2021, a unit then owned by Johnson & Johnson announced it had added AI to its TruDi Navigation System, a device designed to guide ear, nose and throat surgeons during sinus operations. Before the change, the FDA had received just seven malfunction reports about the device. After machine-learning algorithms for real-time surgical guidance were integrated, that number jumped to more than 100 reports of malfunctions and adverse events by late 2025. Of those, at least 10 were linked to serious injuries, including strokes, cerebrospinal fluid leaks, and skull base punctures allegedly tied to the device misinforming surgeons about the position of their instruments.

Two patients injured during such procedures have filed lawsuits alleging that the AI enhancements contributed to their harm, including claims that the device became less safe after the AI module was added. The cases are pending in Texas state courts, though Reuters notes it could not independently verify all the allegations.

The Reuters investigation also found a broader pattern: the FDA now lists over 1,350 AI-enabled devices it has authorized, and among the adverse event reports tied to these products, many mention software or algorithmic issues — such as an ultrasound algorithm that allegedly misidentified fetal anatomy.

A critical concern reporters uncovered is that the current FDA regulatory system for medical devices — particularly AI-enhanced ones — may be ill-equipped for the latest technologies. Unlike new drugs, which generally require extensive clinical trials before approval, most AI-enabled devices are cleared through the FDA’s 510(k) pathway by showing “substantial equivalence” to existing products. This often means no direct human clinical testing is required, even when the software component fundamentally alters how a device operates.

Independent academic research supports this broader regulatory concern: AI/ML-enabled medical devices cleared by the FDA have shown accelerated approval rates but also heightened recall and adverse event signals in some studies, partly because so few undergo rigorous clinical trials prior to clearance.

Additionally, current and former FDA specialists told Reuters that regulatory capacity is stretched thin as more AI-based products enter review. A once-robust unit focused on AI and digital health saw staffing cuts in recent years, limiting resources available to thoroughly evaluate sophisticated machine-learning systems.

Examples of Reported Incidents

Among the more serious reports reviewed by Reuters:

• A patient undergoing sinus surgery suffered a cerebrospinal fluid leak after the navigation system reportedly misled the surgeon about the position of an instrument.

• In another procedure, the system allegedly misidentified the location of a carotid artery, leading the surgeon to accidentally injure it, resulting in a stroke.

• Dozens of other FDA reports reference incorrect anatomical positioning or labeling errors, though not all involve documented patient harm. Some reports on fetal imaging software, for example, described algorithms mislabeling anatomical features even when no injury occurred.

The sheer volume of reports — from seven prior to AI integration to over a hundred afterward — has alarmed some clinicians and safety advocates.

The manufacturer now responsible for the TruDi system, Integra LifeSciences (which acquired the device as part of its Acclarent purchase in 2024), denies that the AI software caused or contributed to any injuries. Integra says the adverse event reports simply indicate the device was in use when an event occurred, without proving a causal link.

The FDA, for its part, maintains that patient safety remains its top priority and that it continues to apply rigorous standards to AI-enabled medical devices. The agency also says it is recruiting experts in digital health to bolster its capacity for evaluating emerging technologies.

As AI is incorporated into more aspects of medicine — from diagnostics to real-time surgical guidance — the industry faces a tension between innovation and safety assurance. While AI has the potential to boost precision and help clinicians make better decisions, inadequate validation and oversight can lead to unexpected and harmful outcomes.

Healthcare providers, hospital risk managers, and surgical teams may need to critically evaluate which devices they adopt, how they train clinicians on AI interfaces, and how they monitor post-market performance. Regulators worldwide are also debating how to modernize approval pathways to reflect the complexities of software-driven medical technology.
