The use of AI in child welfare services: 5 common concerns

July 20, 2022

When people first hear about using Artificial Intelligence, or AI, in child welfare settings, they might have some concerns, such as how the technology is used by the provider and whether it is safe for kids and families. At Lyssn, we’re concerned about those things as well. In the 12+ years we’ve been conducting clinical research and building the Lyssn AI platform, we’ve worked hard to address bias, security, privacy, equitable access, and the ways AI can support the humans who help the individuals, children, and families they serve.

We hear five questions and concerns most often. Here’s what we’re doing to address and answer them on an ongoing basis.

1: What can be done to combat bias in AI, and how can we keep AI from perpetuating systemic racism and other biases?

As a creation of humans, any system of Artificial Intelligence will carry forward the biases of the people who built it – unless we intervene to purposefully identify and root out those biases. At Lyssn, we take a number of steps to identify and reduce bias in our AI platform by:

  • Using actual sessions/conversations/recordings from a wide range of settings and populations. This includes not only people of different races, ethnicities, and economic statuses but also different age groups, regions of the country, and more.
  • Using a diverse group of licensed clinicians to “train” the system. These are the trained professionals who first listen to the sessions and rate them on quality metrics. Those ratings are then put into the Lyssn system and serve as a baseline for Lyssn AI. The system learns to match patterns properly, but it does so with ratings and data from a varied group of individuals who are working with session data from diverse populations.
  • Developing our very own Bias Detector! We’re working on this now, it will deploy later this year, and we will describe it in more detail soon!

2: I’m concerned about AI software interacting directly with families or, even worse, making decisions that can affect them! How can we prevent this?

The Lyssn software does not interact with families at all. And, while there are examples where agencies have attempted to use computer algorithms to replace human judgment, we do not believe that any current computer can be trained to do that in the context of child welfare. Instead, Lyssn assesses how clinicians and case workers use evidence-based practices (Motivational Interviewing is a good example) when working with families and provides feedback to help them improve their skills. Even then, sometimes the platform makes mistakes. That’s why we stress that it’s always up to individuals and their supervisors to make the final call – and we give them the detailed information they need to do that.

3: What about access to technology? Many of our clients do not have high speed internet, for example.

The digital divide continues, and it is indeed all too real. However, Lyssn does not depend on families having access to broadband or to computers at all. Lyssn’s AI platform works behind the scenes, using secure recordings of conversations/sessions between clients and case workers. Sessions can be recorded on agency phone systems or even on a case worker’s phone.

4: But is this affordable or the kind of big system that only rich states can afford?

Lyssn was designed to be accessible to all agencies and budgets. This is especially true with Family First programs. The Family First Prevention Services Act requires states to monitor fidelity of evidence-based practices, including Motivational Interviewing, and it ties vital program funding to quality and evaluation. Lyssn has an affordable “per seat” (full-time user) rate that gives smaller agencies the same kind of full-scale services and resources that previously only larger agencies could afford. The Lyssn platform, which can include clinical notes and training features, is being adopted by several states’ Departments of Family Services, including Utah and Wyoming, with multiple other states in discussions now.

5: If there is a recording, how can families, caseworkers, Guardians ad Litem, etc. know that the recording won’t be used against them somehow?

Confidentiality and privacy are really important, and Lyssn has worked with behavioral health colleagues, clinics, and child welfare groups to develop our privacy protocols:

  • Any recording can be deleted, and when it is deleted, it is truly gone. There is no secret back-up in the cloud somewhere.
  • Agencies get to decide how long a recording is available on the Lyssn system. For example, an agency can choose 14 days, and after 14 days, the recording is automatically de-identified and removed from the Lyssn platform.
  • Lyssn is audited every year by an external group for its compliance with HIPAA and SOC 2. (SOC 2 is a set of security standards for cloud-based software.)

We are committed to continuing to make AI tools that support wellness, behavioral health, and social welfare systems in their work. Hearing your concerns, questions, and comments only helps us improve the platform, weed out errors and bias, and fine-tune what will support clinicians, coaches, caseworkers, and other care providers. Please use our contact form to send any additional feedback or questions you might have.

Are you a state or agency interested in how we can help you with your Family First plan? Please contact me, Jenny Cheng, Lyssn’s FFPSA Coordinator, directly at jenny@lyssn.io.

Interested in future updates? Sign up for blog post notifications.