Insights

From our roots in academia to today’s technology, groundbreaking and peer-reviewed research provides the foundation for all that we do.

What Does “Quality” Mean When It Comes to AI in Mental Health and Human Services?


Mental healthcare professionals seeking to use technology to improve their team’s impact without compromising human connection and quality care face a critical challenge: distinguishing between high-quality AI tools and potentially harmful ones.

With the market saturation of general AI technologies like ChatGPT and Claude, it’s become increasingly easy to create AI software that appears tailored for behavioral health or health and human services needs. However, this has led to a flood of low-quality, unsafe, unproven, and biased AI tools in the market.

Take a quick look at this video from our Chief Technology Officer, Dr. Michael Tanana, explaining why off-the-shelf AI is not acceptable for ethical use in mental health settings:

Unlike less critical AI tasks such as planning someone’s travel agenda or recommending a cocktail recipe, the stakes in mental healthcare are incredibly high. A biased or unsafe AI application used by frontline mental healthcare workers, clinicians, or caseworkers can pose serious risks to individuals and communities. 

This raises an important question: Is there a way to define and evaluate quality in AI applications for health and human services without resorting to trial and error?

As the mental healthcare industry grapples with resource constraints and growing demand, the need for reliable AI tools becomes more pressing. However, the challenge lies in developing a framework to assess these tools’ safety, efficacy, and potential impact on vulnerable populations.

The Lyssn team has created a framework specifically designed to identify criteria for evaluating AI tools in mental healthcare and to outline strategies for implementing AI solutions responsibly in high-stakes environments.

Head to our website to download our free framework for assessing AI quality. 

Our expertise in this field is founded on Lyssn’s extensive 16-year research history, which has focused on developing specialized machine learning models for Behavioral Health, Crisis, Wellness, and Health and Human Services. This research has yielded a substantial dataset comprising 1.9 million high-quality recorded therapy sessions, totaling 54 million minutes and 4.3 billion uniquely labeled words.

Want more insights? 

Dr. Zac Imel, Lyssn’s Chief Science Officer, discusses the importance of peer-reviewed studies:

Proven science. Powerful AI.
Profound improvement.

Let Lyssn reduce burnout and transform the way you implement and model fidelity to evidence-based practices.