Lyssn’s 2023 Bias Report

October 4, 2023

Unchecked, language models have the potential to reproduce harmful social biases around race, gender, and other cultural and social identities. Any tool, whether it relies on human judgment, simple math, or artificial intelligence, can introduce this kind of bias into important decisions. It is up to us to ensure that bias is not replicated in our own use of this powerful technology.

We believe in both carefully monitoring and transparently reporting on any potential bias in our AI tools. This year we are publicly releasing our first annual report on bias in our AI models, specifically related to race and ethnicity — click here for a copy. We plan to repeat this process every year and to expand the scope of the sources of bias we examine in our system.