AI and the ‘Wild West’ of Digital Mental Health

By: Lawrence Quill, Ph.D.

Takeaways

  • Digital mental health apps are not new.
  • Mental health applications fall into two broad types: scripted apps, which have been designed with the oversight of experts for nearly 60 years, and newer companion apps that use Generative AI to produce human-like responses to user prompts and are currently unregulated.
  • The question for policymakers is how to make good decisions informed by best practices: ensuring proper testing of products, increasing transparency, and requiring licensing for mental health applications.
  • Relevant, independent expertise should be called on to counter the self-interest of technology companies, especially when the mental health of Californians is at stake.

Introduction

Over the last few years, digital technologies, including Artificial Intelligence (AI), have been applied with increasing frequency to the field of mental health. Public interest has grown as a result, along with numerous policy initiatives to regulate aspects of AI internationally, nationally, and, in the United States, at the state level.[1] In important ways, however, we are still in the “Wild West” phase of digital mental health provision.

In California, Assembly Bill 2089 came into effect on January 1, 2023. The bill extended the federal protections for the confidentiality of health records already in place under the Health Insurance Portability and Accountability Act (HIPAA) of 1996 to cover medical information collected via mental health applications on mobile devices and Internet websites. It was sponsored by Assemblymember Rebecca Bauer-Kahan, who cited concerns over user privacy and the sale of sensitive information to third parties. Bauer-Kahan’s office noted that more than 20,000 mental health apps are now available, that their use increased markedly during the pandemic, and that many of them transmit the personal information they collect to advertisers.[2]

This initiative was welcome and reflects a growing awareness among policymakers of the need for sensible regulation of digital technologies and, increasingly, of AI-powered technologies, though the latter are not mentioned in this particular bill. Despite these efforts, challenges remain in determining the efficacy of digital technologies, especially in the field of mental health. How, precisely, are policymakers supposed to determine whether a technology is safe, offers value for money, reaches the people it is meant to help, and, crucially, whether those people will actually use it?

These questions are much harder to answer. This short paper is intended to identify some of the more salient difficulties that policymakers face. I will conclude by suggesting some practical proposals concerning how best to move forward.

[AI-generated illustration: a hand holding a cell phone showing a faceless figure, surrounded by abstract spheres, foliage, and speech bubbles.]

Digital Mental Health

Despite claims that digital mental health applications represent a breakthrough in mental health therapy provision, the idea that a computer could replace a therapist has been around for some time. Nearly 60 years ago, MIT Professor Joseph Weizenbaum developed Eliza, a relatively simple piece of natural language software that mimicked the responses of a therapist, modeled on the approach developed by the famed psychotherapist Carl Rogers.[3] Weizenbaum noted several things about the program: it was relatively easy to develop; people (beginning with his secretary) quickly became absorbed in “conversation” with the computer; and they developed strong feelings of trust toward the machine.
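
Eliza worked by matching user statements against simple patterns and reflecting them back as questions. The sketch below, in Python, illustrates the general technique; the patterns, templates, and pronoun swaps are illustrative placeholders, not Weizenbaum’s original script.

    import random
    import re

    # Illustrative Rogerian-style reflection patterns, loosely in the spirit of
    # Eliza; these are not Weizenbaum's original script.
    PATTERNS = [
        (r"\bI feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bI am (.*)", ["How does being {0} make you feel?"]),
        (r"\bmy (.*)", ["Tell me more about your {0}."]),
    ]
    DEFAULT_REPLIES = ["Please go on.", "How does that make you feel?"]

    # Swap first- and second-person words so reflections read naturally.
    PRONOUNS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    def reflect(fragment: str) -> str:
        return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

    def respond(user_input: str) -> str:
        # Return the first matching scripted template, reflected back at the user.
        for pattern, templates in PATTERNS:
            match = re.search(pattern, user_input, re.IGNORECASE)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(DEFAULT_REPLIES)

    print(respond("I feel anxious about my job"))
    # e.g. "Why do you feel anxious about your job?"

Even a script this small can sustain the impression of attentive listening, which is precisely what alarmed Weizenbaum.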

Weizenbaum expressed concern. He worried that an electronic simulacrum of a therapist would replace a human-centered approach to therapy and call into question the value of professional therapeutic help; this would alter the way patients saw therapy and the way therapists understood what they were doing. He was additionally worried that users would take the “wisdom” offered by the software at face value. They would, in short, trust the machine when they ought not to. The development of relatively cheap and ubiquitous computing has transformed our societies since Weizenbaum first expressed these concerns. Yet his warnings remain relevant and continue to be echoed by well-informed skeptics.

Types of Digital Mental Health Applications

Digital mental health apps offer many benefits, especially to policymakers who recognize a crisis in mental health in their communities. Yet it is important to note that they fall broadly into two types.

The first type is scripted mental health apps. These follow the approach of the original Eliza model, drawing on a dataset of pre-scripted responses; examples include Woebot, Wysa, Ellie, and Koa Health.[4] The advantage of these models is that they are designed with the oversight of experts in psychology and possess built-in protocols, or guard-rails, that ensure the responses to user statements are checked. Koa Health’s suite of apps was developed with experts from Oxford University. Woebot was developed by psychologists at Stanford University. Ellie, a virtual therapist developed at the University of Southern California, was designed specifically to treat individuals with post-traumatic stress disorder (PTSD). The designers of these systems note the willingness of human beings to engage with the technology. In fact, one recent study found that a majority of individuals who use Ellie prefer machine-only interactions: they feel more comfortable talking to a machine than to a human therapist.[5] Professional bodies such as the American Psychiatric Association have created an App Evaluation Taskforce, together with an App Evaluation Model, designed specifically to help professionals determine “the efficacy and risks of mobile and online apps.” This is a welcome and important move.[6]
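
To illustrate what a scripted flow with a guard-rail can look like, here is a minimal, hypothetical sketch in Python: incoming messages are screened against a crisis keyword list before any scripted response is selected, and flagged messages are routed to a fixed safety message. The keywords, responses, and safety text are placeholders and are not drawn from Woebot, Wysa, Ellie, or Koa Health.

    # Hypothetical scripted-response flow with a safety guard-rail. All keywords,
    # scripted replies, and the safety message below are illustrative only.
    CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "hurt myself"}

    SCRIPTED_RESPONSES = {
        "anxious": "Let's try a short breathing exercise together.",
        "sleep": "Keeping a regular bedtime can help. Want to set a reminder?",
    }

    SAFETY_MESSAGE = (
        "It sounds like you may be going through something serious. "
        "Please consider contacting a crisis line or a trusted professional."
    )

    DEFAULT_RESPONSE = "Thanks for sharing. Can you tell me more about that?"

    def guarded_reply(message: str) -> str:
        text = message.lower()
        # Guard-rail: screen for crisis language before consulting the script.
        if any(keyword in text for keyword in CRISIS_KEYWORDS):
            return SAFETY_MESSAGE
        # Otherwise, look up a pre-scripted response by topic keyword.
        for topic, reply in SCRIPTED_RESPONSES.items():
            if topic in text:
                return reply
        return DEFAULT_RESPONSE

    print(guarded_reply("I have been feeling anxious lately"))
    # "Let's try a short breathing exercise together."

The point of the sketch is the ordering: every message passes through the safety check before any scripted content is chosen, which is the kind of property that expert oversight and evaluation frameworks are meant to verify.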

According to figures from the Association of American Medical Colleges, more than 60% of practicing psychiatrists are over the age of 55.[7] The prospect of future therapy platforms supporting and, in some cases, replacing human therapists with AI-based models is no longer theoretical. To advocates, the advantages are obvious: “…constant availability, greater access, equity of mental health resources, immediate support, anonymity, tailored content, lower cost, and increasing service capability and efficiency… overcoming geographical barriers to treatment… [together with] engaging traditionally hard-to-reach groups.”[8] Features such as these make the prospect of the AI therapist an appealing one. Scripted apps, especially those developed and evaluated by trained professionals, are unlikely to say anything inappropriate or dangerous to a user, and users can be confident that what they are downloading to their phones has been carefully assessed. The disadvantage of this approach is that the conversation can feel unnatural. Apps in this category may also not be the most attractive or easy to use, and they may incur a financial cost, which further disincentivizes their use.

The second approach uses Generative AI to produce human-like responses to the user’s prompts. A great many apps employ this technology. They have been termed Companion Apps, and much less research has been conducted on their efficacy with respect to mental health. The research that does exist indicates that many people use these apps to discuss their most intimate thoughts, including mental health issues. The CEO of Replika, a Companion App with 2 million users, has noted that its most frequent users struggle with emotional and mental health problems, and that around half are involved in a romantic relationship with their AI avatar.[9] The authors of one recent paper on the subject note that, as a business model, the technology secures long-term customer loyalty and, from the perspective of individuals experiencing loneliness, it appears to fulfill an individual need and address an acute social problem. Yet they also note that unhealthy emotional dependency develops among users who form strong attachments to their AI companions. This becomes a problem when the performance of the avatar is altered by software updates, connection issues, or a change in company policy that removes certain features. The outcome is often a feeling of profound loss or mourning, much as one might mourn the death of a loved one, leading the authors to suggest the following:

Since consumers form such close relationships with AI companions, changes to the apps can result in negative consumer mental health that persists over time. Yet, AI companion apps are currently unregulated “general wellness apps” in the U.S.—which are defined as apps that promote healthy living but do not diagnose, treat, or prevent specific medical conditions (FDA 2022)—perhaps under the assumption that they pose only minimal risk to consumers…[10]

They conclude, however, that the strong emotional attachments formed with AI companions raise real ethical issues concerning the monetization of emotional dependency and pose serious risks to consumer mental health. Recent cases reported in the mainstream press support this contention.[11] They suggest that existing regulatory bodies ought to put additional guardrails in place for these kinds of digital applications. There is, for example, no clear definition currently in place for what would constitute “mental health safety.” And while human therapists undergo rigorous and lengthy training before they are certified, no such requirements exist for AI therapy applications.

Conclusions

The issues surrounding digital mental health apps form an important subset of the issues that scholars and policymakers are now considering as they weigh the costs and benefits of including AI in social policy. Despite the hype and excitement surrounding AI, it remains unclear how best to move forward when there are still debates about the meaning of AI, what it does, how it does it, and what it is for.[12] There are already too many examples of governments and corporations employing AI with disastrous results. California’s own experiment with the digital mental health app 7 Cups of Tea offers an instance of good intentions running up against a technology’s shortcomings.[13]

Sometimes the argument is made that policymakers are unqualified to make decisions about technology because they lack the technological know-how. This is spurious. Policymakers are not experts in most of the fields they consider. The question is how to make good decisions informed by best practices: ensuring proper testing of products, increasing transparency, and requiring licensing for mental health applications. This is no easy task. Many companies require individuals to sign non-disclosure agreements (NDAs) to protect their products, but such agreements should not inhibit understanding, especially when problems arise. In short, relevant, independent expertise should be called on to counter the understandable self-interest of technology companies, especially when the mental health of Californians is at stake.