Insights

From our roots in academia to today’s technology, groundbreaking and peer-reviewed research provides the foundation for all that we do.

Lyssn’s 2023 Bias Report

Therapist engaging with AI-powered technology on a laptop to enhance communication skills.

Unchecked, language models have the potential to reproduce harmful social biases around race, gender, and other cultural and social identities. Any tool, whether it relies on human judgment, simple math, or artificial intelligence, has the potential to introduce this type of bias into important decisions. It is up to us to ensure bias is not replicated in our own use of this powerful technology.

We believe in both carefully monitoring and transparently reporting on any potential bias in our AI tools. This year we are publicly releasing our first annual report on bias in our AI models, specifically related to race and ethnicity — click here for a copy. We plan to repeat this process every year, in addition to expanding the scope of the sources of bias we examine in our system.


Proven science. Powerful AI.
Profound improvement.

Let Lyssn reduce burnout and transform the way you implement and model fidelity to evidence-based practices.