
More Robots In The Operating Room Could Mean More Lawsuits In The Courtroom

Executive Summary

With the era of medical automation upon us, surgeons are increasingly turning to robots and artificial intelligence for better results in the OR. But when things go wrong, who’s to blame?

In the classic novel Fail-Safe, reliance on supercomputers leads to catastrophe. In the same vein, cutting-edge medical automation, which is becoming increasingly prevalent in operating rooms, can produce unintended consequences of its own.

There’s no doubt, however, that robotic-assisted surgery (RAS) has enormous benefits for both surgeons and patients.

For surgeons, robotic arms can maneuver 360 degrees, providing a level of dexterity otherwise impossible to achieve, even for those most skilled with a scalpel. Artificial intelligence technology magnifies the field of vision beyond the scope of the human eye, allowing surgeons to reach the most remote corners of human anatomy and treat more conditions more effectively.

And for patients, RAS not only means the potential for better outcomes, but often results in less pain, smaller incisions, less blood loss, shorter hospital stays and quicker recoveries.

But as in Eugene Burdick and Harvey Wheeler’s fictional account of tech gone awry, medical technology, no matter how advanced, can fail. And when it does, it raises a serious question of liability, says Frank Pasquale, a Brooklyn Law School professor and author of New Laws of Robotics: Defending Human Expertise in the Age of AI.

“Getting this liability question right will be critically important not only for patient rights, but also to provide proper incentives for the political economy of innovation and the medical labor market,” Pasquale writes in his essay “When Medical Robots Fail: Malpractice Principles for an Era of Automation,” published in The Brookings Institution’s TechStream.

“When AI and robotics substitute for physicians, replacing them entirely, strict liability is more appropriate than standard negligence doctrine.” – Frank Pasquale

In his essay, Pasquale offers the example of a traditional surgery in which a surgeon’s hand slips and the scalpel cuts a vital tendon. But what if, Pasquale asks, that surgeon is using a robotic device designed and marketed with a special component to prevent just such a mistake? Who’s liable then: the surgeon or the device maker?

In such a scenario, Pasquale argues, the answer might not be clear-cut; there could be shared liability, “a sliding scale based on an apportionment of responsibility.” But for courts to mete out these allocations of liability justly, he says, they need a clear legal theory on which to base the responsibility of technology vendors.

Strict Liability Standard

In Pasquale’s view, the law cannot be neutral with respect to emerging markets, including new technology. “When AI and robotics substitute for physicians, replacing them entirely, strict liability is more appropriate than standard negligence doctrine,” Pasquale writes. “Under a strict liability standard, in the case of an adverse event, the manufacturer, distributor and retailer of the product may be liable, even if they were not negligent. In other words, even a system that was well designed and implemented may still bear responsibility for error.”

Victor Schwartz, a partner in the Washington, DC office of the law firm Shook, Hardy & Bacon, says navigating the maze of medical malpractice when robotic automation is involved can be tricky, though long-standing legal precedent that predates robots can help point the way.

“There’s a standard of care for doctors that we’ve had for many years,” says Schwartz, whose practice includes public policy and FDA regulatory guidance. “The doctor is supposed to act as if he or she is a reasonable doctor trying everything to provide the very best care. So if the surgeon used all care in selecting the device and used all reasonable care in using the device, but still something went wrong, then that surgeon is unlikely to be subject to malpractice.”

Schwartz told Medtech Insight that winning lawsuits against doctors is not easy because jurors are more likely to empathize with them and relate to them on a human level, whereas device makers do not evoke the same personal response. “Devices are cold,” he says. “Product liability is cold.”

This doesn’t mean, however, that winning a judgment against a device manufacturer is easy, Schwartz adds, although the path to victory can be clearer, especially if the plaintiff’s team can show the device was defective.

“One of the things I've learned in product liability in 50 years is that devices that are highly technical when they’re made – not the way Nabisco makes Oreo cookies, which are all alike – but very technical, there can be what's called a manufacturing defect,” Schwartz says. “In other words, the device in question is not as it was intended to be. Something was left out. A wire was not right. Something went wrong with it.”

If such a defect can be established, and it can be shown that the defect resulted in harm to the patient, then proving negligence is unnecessary, Schwartz explains. “All you have to show is that it was not manufactured in accordance with the design of the manufacturer's own specifications, and it failed.”

In Pasquale’s view, this standard – in which a manufacturer can be held accountable without being found negligent – provides an incentive for innovation.

“This may seem like an unduly harsh standard,” Pasquale writes in his essay. “However, the doctrine incentivizes ongoing improvements in technology, which could remain unduly error-prone and based on outdated or unrepresentative data sets if tort law set unduly high standards for recovery.”

Pasquale also argues that a strict liability standard might deter the premature automation of medical fields in which human expertise remains foundational to their safe and proper functioning, a philosophy that can apply to any practice or industry. After all, autopilot is essential in aviation, but who would hop on a flight without a human pilot in the cockpit?

Human Supervision

Throughout the evolution of medicine, a person has always been monitoring and supervising its advancement, Pasquale points out, and that must remain the case as robots and AI emerge in health care.

“It would probably be easier to teach someone how to fly a jet than to use some of these devices.” – Victor Schwartz

When a machine in place of a person fails and injury follows, compensation is due, Pasquale says. “The amount of compensation may be limited by state legislatures to avoid over-deterring innovation,” he writes. “But compensation is still due because a person in the loop might have avoided the harm.”

But attorney Schwartz notes that there’s another factor judges and juries must consider in determining who’s at fault when robotic surgery goes wrong: training. Did the manufacturer properly train the surgeon in the use of its robotic instrumentation? Was the surgeon warned about every circumstance that could arise while using the system?

Improper training, he says, is where manufacturers can get into serious legal jeopardy. But providing the necessary training is not easy. Robotic devices, such as those used during surgery, can be extraordinarily complex.

“It would probably be easier to teach someone how to fly a jet than to use some of these devices,” Schwartz says, adding that in a suit against a manufacturer he would absolutely raise questions about training during cross-examination. “I would want to make sure the manufacturer had the right teachers. I would want to know their qualifications, that the training wasn’t rushed. Was the doctor able to train with the device on an accurate replica of the human body? And if they didn’t provide the proper training, they’re going to have a problem.”

But regardless of what safeguards are put in place or how advanced technology evolves, mistakes can still happen, be it on the part of the human or the robot. And when they do occur, there must be accountability.

“Even when robotics and AI only complement a professional, there still need to be opportunities for plaintiffs and courts to discover whether the technology’s developers and vendors acted reasonably,” Pasquale writes. “All responsibility for an error should not rest on a doctor when complementary robotics fails to accomplish what it promised to do. To hold otherwise would again be an open invitation to technologists to rest on their laurels.” 

 
