The Evolution Of AI And Mental Healthcare

Rob Morris, co-founder and CEO, Koko.

People have been using chatbots for decades, well before ChatGPT was released. One of the first chatbots was created in 1966, by Joseph Weizenbaum at MIT. It was called ELIZA, and it was designed to mimic the behaviors of a psychotherapist. Though Weizenbaum had no intention of using ELIZA for actual therapy (in fact, he rebelled against this idea), the concept was compelling.

Now, almost 60 years later, we are still imagining ways in which machines might help provide mental health support.

Indeed, AI offers many exciting new possibilities for mental health care. But understanding its benefits—while navigating its risks—is a complex challenge.

We can explore potential applications of AI in mental health care by looking at two fundamental use cases: those related to the provider and those related to the client.

Provider-Facing Opportunities

Training

AI can be used to help train mental health practitioners by simulating interactions with clients. For instance, ReflexAI uses AI to create a safe training environment for crisis line volunteers. Instead of learning in the moment, with a real caller, volunteers can rehearse and refine their skills with an AI agent.

Quality Monitoring

AI can also help monitor conversations between providers and clients. It has always been difficult to assess whether providers are adhering to evidence-based practices. AI has the potential to provide immediate feedback and suggestions to help improve client-provider interactions. Lyssn applies this concept to various domains, including crisis services like 988.

Suggested Responses

AI can also provide in-the-moment advice, offering suggestions and resources for providers. This could be especially helpful for crisis counselors, who must respond quickly and empathetically.

Detection

There is also research suggesting that some mental health conditions can be inferred from various signals, such as one's tone of voice, speech patterns and facial expressions. These biomarkers have the potential to greatly facilitate screening for mental health conditions. A good example is the work being done by Ellipsis Health.

Administrative

While less exciting and attention-grabbing than other opportunities, the greatest potential for near-term impact might relate to easing administrative burden. Companies like Eleos Health are turning behavioral health conversations into automated documentation and detailed clinical insights.

Client-Facing Opportunities

Chatbots like Woebot and Wysa are already capable of delivering evidence-based mental health support. However, as of this writing, they do not use generative AI. Instead, they guide the user through carefully crafted pre-scripted interactions. Let’s call this ELIZA 2.0.

But there are now several startups exploring something like ELIZA 3.0, where generative AI conducts the entire therapeutic process. Generative AI offers the potential to provide rich, nuanced interactions with the user, ideally improving the effectiveness of online interventions.

Users are also given much more control over the experience, potentially redirecting the chatbot toward different therapeutic approaches as needed. New startups are already seizing this opportunity. For example, Sonia uses large language models to mimic cognitive-behavioral therapy.

Other companies (such as Replika and Character.ai) are providing companion bots that form bonds with end users, offering kind words and support. These AI chatbots are not trained to deliver anything resembling traditional therapy, but some users believe they have therapeutic benefits.

Risks

It seems clear that AI is well-positioned to enhance many elements of mental health care. However, significant risks remain for nearly all of the opportunities described thus far.

For providers, AI could become a crutch—something that is increasingly used without human scrutiny. Current state-of-the-art models, for example, lack the situational awareness of human providers and may fail to recognize complex and potentially dangerous shifts in mood and behavior.

In high-stakes environments, such as crisis helplines, AI could cause dangerous unanticipated consequences. This has happened before. For instance, a simple helpline bot designed to support people with eating disorders was accidentally connected to a generative AI model, without the knowledge of the researchers involved. Unfortunately, the bot then proceeded to advocate unhealthy dieting and exercise for individuals struggling with disordered eating. Here, the root failure was likely a coordination issue between different organizations, but it exemplifies ways in which AI could be dangerous in real-world deployments.

For clients, AI has the potential to mislead users into thinking they are receiving acceptable care. A therapist bot may contend that it has clinical training and an advanced degree. Most users will probably know this is false, but these platforms tend to attract young people, and many may decide not to seek help elsewhere, believing these platforms provide sufficient care.

There are, of course, many other issues to consider, such as data privacy.

As we continue to explore AI in mental health, it’s important to balance its potential benefits with careful consideration of the risks to ensure it truly helps those in need.

