Making AI Algorithms for Suicide Prevention (or Any Goal) as Effective as Possible

The use of artificial intelligence (AI) and machine learning in health care and social services recently got a boost of public attention in a New York Times story about the U.S. Department of Veterans Affairs (VA) using AI-assisted algorithms in an effort to lower the rate of suicide among veterans, who face roughly 1.5 times the suicide risk of the average non-veteran adult.


Data and AI have been used in the medical field in various shapes and forms for a number of years now, both for an array of general preventive health purposes and for narrower aims such as suicide prevention specifically. But this is the first time AI algorithms have been employed so extensively as part of daily clinical practice. Fortunately, the VA has the experience and wherewithal to make this project, called REACH VET, as effective as possible. The relative success or failure of REACH VET, however, will depend on the degree to which the VA can implement a number of core principles that apply not just to suicide prevention programs but to just about any instance in which AI and machine learning are used to achieve a desired outcome. Broadly speaking, these factors fall into two overarching categories: data and action.

The Importance of Relevant and Clean Data

First, a bit of context about how the AI algorithms in REACH VET work is helpful. These algorithms flag veterans who may be at elevated risk for suicide based on a wide range of variables that the VA has identified as significant, including past mental health diagnoses, substance abuse, prescribed medications, employment status, and marital status.
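To make this concrete, here is a minimal, purely illustrative sketch of how a risk-flagging model of this general kind might be built. The model family (logistic regression), the feature names, and the toy records are all assumptions made for illustration; they do not reflect the VA's actual model, features, or thresholds.

```python
# Illustrative sketch only: an invented risk-flagging model, not REACH VET's.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical, invented training records (label 1 = later adverse event).
records = pd.DataFrame({
    "prior_mh_diagnosis":  [1, 0, 1, 1, 0, 0, 1, 0],
    "substance_abuse":     [1, 0, 0, 1, 0, 1, 1, 0],
    "opioid_prescription": [0, 0, 1, 1, 0, 0, 1, 0],
    "unemployed":          [1, 0, 0, 1, 1, 0, 1, 0],
    "recently_divorced":   [0, 0, 1, 1, 0, 0, 1, 0],
    "label":               [1, 0, 0, 1, 0, 0, 1, 0],
})

X, y = records.drop(columns="label"), records["label"]
model = LogisticRegression().fit(X, y)

# Score everyone, then surface the highest-scoring cases for clinician review.
records["risk_score"] = model.predict_proba(X)[:, 1]
flagged = records.nlargest(2, "risk_score")
print(flagged[["risk_score"]])
```

The key design point is that the model produces a ranked score rather than a diagnosis: the output is a list of people for trained clinicians to review, not an automated verdict.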

AI algorithms can only be as effective as the data that's fed into them, and the first step is to clearly identify what problem you're trying to solve before getting into any kind of data analysis. For REACH VET, the goal is to reduce the rate of suicide in a population that's at greater risk for suicide overall. You then work backwards and identify what kinds of information and data sets would be needed to help accomplish that goal. For veterans, many of the relevant data sets for suicide risk predictably overlap with those used for the general population, such as past medical diagnoses, substance abuse, and notes from doctor visits. But there may also be variables that are less pertinent for the general population but which, based on research or the VA's experience, may be significant for veterans, such as length of deployment, training, or opportunities for rest and recuperation.
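As an entirely hypothetical illustration, that working-backwards exercise might produce an inventory like the following; the groupings and field names are invented for this sketch, not the VA's actual feature list.

```python
# Illustrative inventory only: working backwards from the goal to candidate
# data sets. Every source and field name here is an invented example.
CANDIDATE_DATA_SETS = {
    "general_population": {
        "clinical": ["past_diagnoses", "visit_notes", "prescribed_medications"],
        "social":   ["employment_status", "marital_status", "substance_abuse"],
    },
    "veteran_specific": {
        "service_history": ["deployment_length_months", "training_history"],
        "recovery":        ["rest_and_recuperation_opportunities"],
    },
}

for population, sources in CANDIDATE_DATA_SETS.items():
    for source, fields in sources.items():
        print(f"{population} / {source}: {fields}")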

Once the data sets are identified, your data science teams analyze how these disparate data sources can be acquired, integrated, and leveraged to build a holistic portrait of an individual or community, and then put it all into a conformed format. This process demands care: when it comes to suicide, or indeed just about anything health related, much of the data is potentially sensitive and therefore subject to greater scrutiny. The importance of clean data for improving the predictive power of algorithms also can't be overstated. Algorithms are worthless if the data fueling them is inaccurate or its integrity isn't as close to pristine as possible.
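A hedged sketch of what conforming disparate sources might look like appears below; the system names, columns, and records are invented, and a real pipeline would be far more involved.

```python
# Illustrative sketch only: merging two hypothetical source extracts into one
# patient-level table, with basic integrity checks before any modeling.
import pandas as pd

diagnoses = pd.DataFrame({
    "patient_id": [101, 101, 102],
    "icd10_code": ["F32.1", "F10.20", "F41.1"],
})
pharmacy = pd.DataFrame({
    "patient_id": [101, 103],
    "drug_class": ["opioid", "ssri"],
})

# Pivot each source to one row per patient, then merge into a conformed table.
dx_flags = pd.crosstab(diagnoses["patient_id"], diagnoses["icd10_code"]).clip(upper=1)
rx_flags = pd.crosstab(pharmacy["patient_id"], pharmacy["drug_class"]).clip(upper=1)
conformed = dx_flags.join(rx_flags, how="outer").fillna(0).astype(int)

# Basic data-integrity checks: no duplicate patients, no impossible values.
assert conformed.index.is_unique, "duplicate patient records after merge"
assert conformed.isin([0, 1]).all().all(), "unexpected value in flag column"
print(conformed)
```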

Next, an ongoing process of feedback and tweaking is critical. Data is a living, breathing phenomenon, so there is always the chance that some of the information fed into the algorithms may lead to false positives or false negatives. Biases, both implicit and explicit, finding their way into the algorithms are also a major concern. So the process must be continually monitored to identify and remove impurities and to keep feeding new insights back into the algorithms. Here, it's wise to have a variety of both technical and clinical staff (e.g., the clinicians and social workers who interact directly with the veterans) review the process and provide feedback. Otherwise, there's a real risk of missing critical insights about what is working and what isn't.
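One rough sketch of what such monitoring could look like in code, using invented review data: compute false positive and false negative rates against observed outcomes, and compare flag rates across a demographic subgroup as one coarse bias signal.

```python
# Illustrative monitoring sketch with invented review data; a real program
# would use proper cohorts, time windows, and clinical adjudication.
import pandas as pd

review = pd.DataFrame({
    "flagged":  [1, 1, 0, 0, 1, 0, 1, 0],
    "outcome":  [1, 0, 0, 1, 1, 0, 0, 0],
    "age_band": ["<40", "<40", "<40", "40+", "40+", "40+", "<40", "40+"],
})

# False positive rate among true negatives; false negative rate among true positives.
negatives = review[review["outcome"] == 0]
positives = review[review["outcome"] == 1]
fpr = (negatives["flagged"] == 1).mean()
fnr = (positives["flagged"] == 0).mean()
print(f"FPR: {fpr:.2f}  FNR: {fnr:.2f}")

# Sharply divergent flag rates across subgroups are one signal worth a joint
# review by technical and clinical staff.
print(review.groupby("age_band")["flagged"].mean())
```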

Actionable Data Is the Key to Success

The other big takeaway is the importance of translating data into action. Organizations using machine learning and AI algorithms must devote as much care and effort to this half of the puzzle if they hope their projects will be successful. Naturally, a lot depends on each organization's operational model and the budget, staffing, and resource limitations it must work with. But it is all too easy to overemphasize the technology components, intentionally or not, and neglect the equally (if not more) important human elements.

Perhaps the biggest challenge when it comes to suicide prevention, as in other areas of health care and social services, is building trust and rapport with veterans and patients. Imagine someone calling you out of the blue and telling you that you've just been flagged as a suicide risk, an opioid overdose risk, or a COVID-19 transmission risk. Such a call could trigger all kinds of alarm bells and may not be well received. Formulaic, telemarketing-style approaches don't work with such sensitive and emotionally charged topics. Establishing trust requires the right people with the right skill sets, and no algorithm or technology can accomplish this. Organizations developing these sorts of programs therefore need to invest in finding and hiring the right personnel, training them, and arming them with the tools and resources they need.

The use of AI and machine learning in preventive health and social services is a bold new frontier full of exciting possibilities. There is little doubt that this technology, when used well, has the potential to improve, and even save, the lives of many. Clearly defining desired outcomes; accurately identifying and acquiring the relevant data sets; making sure those data sets are as large and accurate as possible; continually monitoring what is and isn't working and adjusting accordingly; and investing an equal amount of energy and resources in the human element through qualified personnel: all of these steps help ensure that machine learning and AI algorithms achieve the outcomes organizations are hoping for.


Sanket Shah is a clinical assistant professor of Biomedical and Health Information Sciences at the University of Illinois Chicago.