Therapy Chatbots And Medtech Laws: Why AI App Developers Must Tread Carefully

Executive Summary

With the rise of AI chatbots enabling rapid access to personalized mental health support and lifestyle advice, app developers must know when a product is legally a medical device. Medtech regulatory lawyer Erik Vollebregt explains what EU law says, and how to avoid mistakes.

The first artificial intelligence (AI) chatbot was released in the 1960s, but it is only in recent years that similar - albeit far more sophisticated - technology has been adapted to provide mental health support. Emerging AI programs can not only direct patients to sources of information, but also simulate human conversation, meaning they have the potential to be used as virtual therapists.

But what does European law say about this software? Are AI chatbots that act as therapists considered medical devices? Do software manufacturers have a responsibility - both ethically and legally - to obtain medical device certification for apps that might be used for medical purposes, even if they do not diagnose patients, but provide counselling or emotional support?

The EU Medical Device Regulation makes clear that AI chatbots offering mental health therapy are indeed medical devices and must be CE marked as such, according to expert medtech regulatory lawyer Erik Vollebregt. However, he explained, the borderline can be “pretty fuzzy” when it comes to differentiating between therapy and lifestyle guidance, something that app developers must heed when marketing their product (more below).

“Software with a therapeutic or diagnostic purpose with respect to mental health is clearly in the scope of the MDR, even if it does not give a direct diagnosis - as is mistakenly assumed to be a requirement by many,” Vollebregt recently told Medtech Insight.

“Indeed, AI chatbots can absolutely qualify as a medical device requiring CE marking,” he continued, adding that this “would be a logical extension of all kinds of therapeutic stand-alone software that already exists in the mental health space and that have been CE marked as medical devices already.”

In terms of defining mental health conditions in the context of EU MDR, Vollebregt said it can be “assumed that conditions described in the DSM-5 will be seen as accepted mental illnesses and therefore are in scope of the MDR as disease.”

The fifth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5) is the standard classification of mental disorders used by mental health professionals in the US, but it is also used in other parts of the world, including Europe.

While this aspect of the MDR is clear, the rules are murkier when it comes to differentiating between an app providing users with lifestyle advice and one providing actual therapy. Vollebregt noted that therapy is “always in the scope of the MDR,” whereas more general advice around reducing overall stress levels or improving sleep may be classified as lifestyle advice, and therefore would not require an AI app to be regulated as a medical device.

Is It Therapy Or Lifestyle Advice?

The borderline between lifestyle advice and medical treatment can be “pretty fuzzy,” Vollebregt warned, adding that this crucially depends on how manufacturers phrase the intended purpose of their AI product.

“This is where companies should really spend some time thinking about what they want to claim for the AI.”

“Also, it would be important to manage risks by ensuring that the AI is not suckered into treatment anyway by users asking questions that prompt it to give treatment advice, like ‘should I go see a psychiatrist if I think green men will abduct me?’,” Vollebregt said.

Indeed, Medtech Insight notes, this could become a problem for app developers whose chatbots are adaptive: when used in a real-world setting, AI bots could respond therapeutically to users who are merely seeking medical information, even if the developers did not intend for the product to be used for this purpose.

Preventing Patient Harm

A solution to this, Vollebregt proposed, is the use of “proper risk management”. For instance, software could be programmed to “ensure that the AI would refuse to answer questions [seeking treatment advice], and refer people to an actual doctor.”
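
By way of illustration only, a guardrail of the kind Vollebregt describes might look something like the following Python sketch. The keyword patterns, the referral wording and the generate_reply stand-in are hypothetical assumptions for the example, not a description of any real product.

# Illustrative sketch: intercept prompts that appear to seek diagnosis or
# treatment advice and return a referral to a clinician instead of passing
# them to the underlying model. All patterns and wording are assumptions.
import re

# Hypothetical patterns flagging requests for diagnosis or treatment advice.
TREATMENT_PATTERNS = [
    r"\bshould i (see|go see) (a|my) (doctor|psychiatrist|therapist)\b",
    r"\bdo i have\b.*\b(depression|anxiety|ocd|psychosis)\b",
    r"\bwhat (medication|dose|dosage)\b",
    r"\bdiagnos(e|is)\b",
]

REFERRAL_MESSAGE = (
    "I can't give medical or treatment advice. If you are worried about your "
    "mental health, please speak to a doctor or a qualified mental health professional."
)

def is_treatment_request(prompt: str) -> bool:
    """Return True if the prompt appears to ask for diagnosis or treatment advice."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in TREATMENT_PATTERNS)

def generate_reply(prompt: str) -> str:
    """Stand-in for the chatbot's normal, lifestyle-advice-only response path."""
    return f"Here is some general wellbeing information about: {prompt}"

def respond(prompt: str) -> str:
    """Route the prompt: refuse and refer out-of-scope requests, answer the rest."""
    if is_treatment_request(prompt):
        return REFERRAL_MESSAGE
    return generate_reply(prompt)

if __name__ == "__main__":
    print(respond("Any tips for sleeping better?"))
    print(respond("Should I go see a psychiatrist if I think green men will abduct me?"))

A production system would rely on a more robust intent classifier than keyword matching, but the routing logic (answer in-scope questions, refuse and refer the rest) is the risk-management point being made.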

He reiterated that mitigating the risk that could arise from the use of AI chatbots in a mental health or lifestyle advice context depends on what claims the developer makes for their product.

If an AI program is marketed as a “novelty service that allows you to obtain information with zero guarantees as to factual correctness or clinical performance,” risk-management measures should be put in place and developers must be “very clear [about] what the AI is not intended to do.”

On the other hand, developers of apps that provide a medical service that does make a clinical difference should ensure that the AI underlying the product or service has undergone clinical evaluation in addition to appropriate risk management.

“For example, deep learning models need specific risk management controls and monitoring that their conclusions remain true over time. At the moment, we see that AI models can very much outperform humans in specific cognitive tasks like image processing, but direct language (DL) based models are another thing,” Vollebregt said. DL models, he explained, “need a lot of training on curated datasets to make sure that their statements are and remain correct and clinically relevant.”
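
To illustrate what that ongoing monitoring could involve, the sketch below periodically re-scores a deployed model against a curated, clinically reviewed reference set and raises an alert when agreement falls below an acceptance threshold. The reference cases, the model_answer stand-in and the 95% threshold are all hypothetical assumptions, not a prescribed method.

# Illustrative sketch: check that a model's conclusions remain consistent
# with a curated, clinically validated reference set over time.
from dataclasses import dataclass

@dataclass
class ReferenceCase:
    prompt: str          # curated question reviewed by clinicians (hypothetical)
    expected_label: str  # clinically validated expected response category

def model_answer(prompt: str) -> str:
    """Stand-in for the deployed model's output, reduced to a response category."""
    return "refer_to_clinician" if "psychiatrist" in prompt.lower() else "lifestyle_advice"

def agreement_rate(model, cases: list[ReferenceCase]) -> float:
    """Share of curated cases where the model matches the validated label."""
    hits = sum(1 for case in cases if model(case.prompt) == case.expected_label)
    return hits / len(cases)

def check_performance(model, cases: list[ReferenceCase], threshold: float = 0.95) -> None:
    rate = agreement_rate(model, cases)
    if rate < threshold:
        # In a real quality system this would trigger corrective action and review.
        print(f"ALERT: agreement {rate:.0%} is below the acceptance threshold of {threshold:.0%}")
    else:
        print(f"OK: agreement {rate:.0%}")

if __name__ == "__main__":
    reference_set = [
        ReferenceCase("Should I see a psychiatrist about my intrusive thoughts?", "refer_to_clinician"),
        ReferenceCase("Any tips for winding down before bed?", "lifestyle_advice"),
    ]
    check_performance(model_answer, reference_set)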

Indeed, this potential problem with DL models used in health care, including ChatGPT, was explored in a recent journal article published in Nature. The authors concluded that this technology could be used to deliver therapy services such as cognitive behavioral therapy (CBT), but only if the product is specifically tailored for this purpose.

UK On The Same Page

The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) also confirmed to Medtech Insight that software intended to provide a “medical purpose”, for example diagnosis, monitoring or treatment, and which is marketed for this purpose, would qualify as Software as a Medical Device (SaMD).

If this software utilizes AI to achieve this purpose, it would fall in a subset of SaMD called Artificial Intelligence as a Medical Device (AIaMD).

The MHRA directed manufacturers towards guidance it has produced on determining whether software is a medical device that would be relevant in the context of providing mental health advice, as well as recently published information on crafting an intended purpose for SaMD.

Johan Ordish, the MHRA’s former lead for software and AI, published a blog post last month detailing when a chatbot is likely to be legally considered a medical device. Language models such as ChatGPT, he wrote, are unlikely to be considered medical devices as they are designed for general use purposes.

However, if chatbots based on the same technology “are developed for, or adapted, modified or directed toward specifically medical purposes [they] are likely to qualify as medical devices,” Ordish said.

“Additionally, where a developer makes claims that their [product] can be used for a medical purpose, this again is likely to mean the product qualifies as a medical device.”

Although EU and UK rules at present take a similar approach to regulating AI medtech, this could change over the coming years as the EU introduces its proposed AI Act. The UK recently announced that its own governance framework for AI will not see new legislation introduced, and will instead rely on guidance and principles that sectoral regulatory bodies can implement as they see fit. (Also see "What The UK’s “Non-Statutory” AI Regulatory Framework Means For Medtech" - Medtech Insight, 3 Apr, 2023.)

Case Studies: England’s NHS Uses Wysa and Limbic Access

Mental health AI chatbots come in many forms: some screen patients for mental health disorders, some simulate therapy sessions, and others act as supportive tools for use alongside human interventions.

For example, Wysa has developed an AI system known within England’s National Health Service as the Everyday Mental Health app. This system has a UKCA marking as a class I medical device for England, Wales and Scotland.

The company’s universal app carries a class I CE marking under the former EU medical device directives, and the company says it plans to CE mark its Everyday Mental Health app under the new MDR by late 2024.

The Wysa app, as described on its website, is an AI-based service that responds to the emotions expressed by users, using evidence-based therapeutic techniques as well as meditation, breathing exercises, yoga and other “micro-actions” that can help individuals experiencing low mood, stress or anxiety.

The app is not, however, intended to diagnose, treat or cure patients who use its services.

Within England’s NHS, Wysa is used as an “e-triage” service that guides patients through self-referral, and can help to identify patients who may be at a high risk of suffering a mental health crisis.

Limbic Access is another app used within England’s NHS to provide mental health support to patients. It has a class IIa UKCA certification, and is indicated for a number of mental health conditions including depression, generalized anxiety disorder, panic disorder and obsessive-compulsive disorder (OCD).

The product is described as a cloud-based chatbot that conducts a psychological assessment of users and facilitates self-referral to a given mental health service. It also supports data collection and triaging for service providers within the care system.

At the time of publication, Limbic had not responded to Medtech Insight’s request for confirmation of the product’s EU regulatory status.

Medtech Insight did not discuss any specific app or company with Erik Vollebregt, the MHRA or the other individuals who provided information for this article. The case studies are included for informational purposes, to illustrate how and where AI chatbots are being used as regulated medical devices.
