Using Natural Language Processing to detect suicide risk

April 28, 2022

CAN AI SAVE A LIFE?

We sat down with Zac Imel, Lyssn’s Chief Science Officer, to discuss the use of AI in suicide prevention.

The background

For the past two years, Zac Imel and his team at the University of Utah have been working to build a machine learning approach to identify client-expressed indicators of suicide risk, as well as how counselors respond to those indicators, specifically in text-based crisis counseling conversations. The research team wanted to know if natural language processing (NLP) could be used to help identify at-risk clients interacting with crisis counselors via text message.

After hand-labeling hundreds of crisis counseling interactions (and hundreds of thousands of messages) for expressions of suicide risk, they tested an NLP-based model against a baseline model for detecting client suicide risk. Papers will be published soon, but here's the upshot: the NLP-based model was indeed able to detect suicide risk in text-based crisis encounters, in some cases better than human evaluators.
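To make the setup concrete, here is a minimal, purely illustrative sketch of what a lexical baseline for conversation-level risk detection could look like. The data format, labels, and model choices below are assumptions for illustration; they are not the study's actual pipeline or Lyssn's software.

```python
# Illustrative sketch only: a TF-IDF + logistic regression baseline for
# conversation-level risk classification. Not Lyssn's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline


def train_baseline(train_texts, train_labels):
    """Fit a lexical baseline on whole conversations.

    `train_texts` is a list of strings, one per conversation (e.g. the
    concatenated client messages); `train_labels` marks whether annotators
    coded an expression of suicide risk in that conversation.
    """
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000, class_weight="balanced"),
    )
    model.fit(train_texts, train_labels)
    return model


def evaluate(model, test_texts, test_labels):
    """Return AUC: how well the model ranks risk-positive conversations
    above risk-negative ones, independent of any decision threshold."""
    scores = model.predict_proba(test_texts)[:, 1]
    return roc_auc_score(test_labels, scores)
```

An NLP-based model (for example, a fine-tuned transformer producing a conversation-level score) would be evaluated on the same held-out conversations and compared against this kind of baseline.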

Q: Why did you choose to test this approach with text messages?

Text-based crisis counseling is an increasingly standard public health strategy to reduce suicide risk. Given this new modality and the development of new technologies in NLP, we have the opportunity to build new tools for supporting at-risk clients and the counselors who work with them.

Q: Did the NLP-based model you tested accurately predict indicators of suicide risk?

It was very exciting to see how the NLP-based model performed. It was capable of detecting suicide risk at the conversation level in text-based crisis encounters. And our manual analysis indicates that these types of models can learn appropriate indicators of risk over time – meaning throughout the conversation. Although this is very encouraging, further studies need to be done to assess the robustness of these systems.

Q: Was there anything unexpected that came out of your study?

We observed that our model can capture risk in sync with the dynamics of a conversation – we can identify not just that some indicator of risk appeared during a conversation, but the model also showed evidence of noticing when in the conversation those indicators occurred. We are building on these results with new approaches, but these initial results suggest that it may ultimately be possible to augment human supervision and quality assurance processes in crisis lines with NLP-based systems.
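As a rough illustration of that idea, the sketch below scores each client message in order and flags a conversation when any message crosses a threshold. The message-level scoring function is a hypothetical stand-in for whatever model is used; none of the names or thresholds here come from the study.

```python
# Illustrative sketch only: tracing estimated risk over the course of a
# conversation. `message_risk_score` is a hypothetical stand-in for any
# message-level model; it is not Lyssn's API.
from typing import Callable, List


def risk_trajectory(
    client_messages: List[str],
    message_risk_score: Callable[[str], float],
) -> List[float]:
    """Return one risk score per client message, in conversational order,
    so reviewers can see *when* indicators of risk appeared rather than
    only whether they appeared somewhere in the transcript."""
    return [message_risk_score(msg) for msg in client_messages]


def flag_for_review(trajectory: List[float], threshold: float = 0.8) -> bool:
    """Flag a conversation for human review if any single message
    crosses the (illustrative) risk threshold."""
    return any(score >= threshold for score in trajectory)
```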

Q: What implications do these findings have going forward?

These findings suggest that NLP tools, with appropriate validation and study, can be used as part of a suite of tools to support the very human work of crisis counseling. Our crisis counselors are focused on building strong empathic connections with the folks who reach out for support, and I'm hopeful this work might eventually result in tools that could support their efforts in identifying potential indicators of risk or supporting the use of evidence-based interventions like risk assessment. We believe systems like the one we tested could be used very effectively in parallel with, not as a replacement for, provider assessment of risk.

Q: Will suicide prevention become a part of Lyssn’s service?

Lyssn AI already includes metrics that can identify when conversations about suicide have occurred during an interaction. However, we have not yet focused specifically on identifying indicators of risk, or on specific interventions associated with risk reduction. It's certainly on our roadmap to continue building these sorts of tools, both in collaboration with our customers and as part of ongoing R&D efforts supported by the NIH and other funders.

If you're interested in future updates, please sign up for blog post notifications.