As AI enters the operating room, reports arise of botched surgeries and misidentified body parts
Medical device makers have been rushing to add AI to their products. While proponents say the new technology will revolutionize medicine, regulators are receiving a rising number of claims of patient injuries.
By Jaimi Dowdell, Steve Stecklow, Chad Terhune and Rachael Levy
February 9, 2026, 6:30 AM EST · Updated February 9, 2026
In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.
The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.
At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.
Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.
FDA device reports may be incomplete and aren’t intended to determine causes of medical mishaps, so it’s not clear what role AI may have played in these events. The two stroke victims each filed a lawsuit in Texas alleging that the TruDi system’s AI contributed to their injuries. “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented,” one of the suits alleges.
Reuters could not independently verify the lawsuits’ allegations.
Asked about the FDA reports on the TruDi device, Johnson & Johnson referred questions to Integra LifeSciences, which in 2024 purchased Acclarent and the TruDi Navigation System. Integra LifeSciences said the reports “do nothing more than indicate that a TruDi system was in use in a surgery where an adverse event took place.” It added that “there is no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries.”
Insight into the incidents comes as AI is beginning to transform the world of health care. Proponents predict the new technology will help find cures for rare diseases, discover new drugs, enhance surgeons’ skill and empower patients. But a Reuters review of safety and legal records, as well as interviews with doctors, nurses, scientists and regulators, documents some of the hazards of AI in medicine as device makers, tech giants and software developers race to roll it out.
At least 1,357 medical devices using AI are now authorized by the FDA – double the number it had allowed through 2022. The TruDi system isn’t the only one to come under question: The FDA has received reports involving dozens of other AI-enhanced devices, including a heart monitor said to have overlooked abnormal heartbeats and an ultrasound device that allegedly misidentified fetal body parts.
Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August. Their review showed that 43% of the recalls occurred less than a year after the devices were greenlighted. That’s about twice the recall rate of all devices authorized under similar FDA rules, the review noted.
The AI boom poses a problem for the FDA, five current and former agency scientists told Reuters: The agency is struggling to keep pace with the flood of AI-enhanced medical devices seeking approval after losing key staff. A spokesperson for the U.S. Department of Health and Human Services, which includes the FDA, said it’s looking to boost its capacity in this area.
Another form of artificial intelligence, generative-AI chatbots, is also making its way into medicine. Many physicians are now using AI to save time, such as in transcribing patient notes. But doctors also say many patients use chatbots to self-diagnose or challenge professional advice, posing new challenges and risks. (See related story.)
Artificial intelligence became a business and social sensation after the launch of ChatGPT about three years ago. ChatGPT and other popular chatbots, such as Google’s Gemini and Anthropic’s Claude, use so-called generative AI to create content. They are built on top of large language models, or LLMs, which are trained on huge troves of text and other data to understand and generate human language. These AI tools are now being introduced into medical areas such as consumer healthcare apps.
AI encompasses more than LLMs, however, and the technology made its way into medicine long before AI bots appeared. The field dates back more than 70 years: A key moment was when British mathematician Alan Turing asked in a 1950 paper, “Can machines think?”
The FDA authorized its first AI-enhanced medical devices in 1995 – two systems that used pattern-matching software to screen for cervical cancer. The type of AI used in medical devices today is typically machine learning, including a subset known as deep learning; such systems are trained on data to perform specific tasks. The technology is used in radiology, for example, to enhance and analyze medical images. It can help diagnose cancers by identifying tumors that doctors may overlook.
Such systems are also used in surgical devices. In June 2022, a surgeon inserted a small balloon into Erin Ralph’s sinus cavity at a hospital in Fort Worth, Texas. According to a lawsuit filed by Ralph, Dr. Marc Dean was employing the TruDi Navigation System, which uses AI, to confirm the position of his instruments inside her head.
The procedure, known as a sinuplasty, is a minimally invasive technique to treat chronic sinusitis. A balloon is inflated to enlarge the sinus cavity opening, to allow better drainage and relieve inflammation.
But the TruDi system “misled and misdirected” Dean, according to the lawsuit Ralph filed in Dallas County District Court against Acclarent and other defendants. A carotid artery – which supplies blood to the brain, face and neck – allegedly was injured, leading to a blood clot. According to a court filing, Ralph’s lawyer told a judge that Dean’s own records showed he “had no idea he was anywhere near the carotid artery.” Reuters wasn’t able to review the records, which are subject to a judicial protective order.
After Ralph left the hospital, it became apparent that she had suffered a stroke. The mother of four returned and spent five days in intensive care, according to a GoFundMe fundraising drive that was organized to support her recovery. A section of her skull was removed “to allow her brain room to swell,” the GoFundMe appeal stated.
“I am still working in therapy,” Ralph said in an interview more than a year later in a blog about stroke victims. “It is hard to walk without a brace and to get my left arm back working, again.”