In the UK, a quarter of people who take their own lives have been in contact with a health professional in the previous week, and most have spoken to someone within the last month. Yet assessing patient suicide risk remains extremely difficult.
There were 5,219 recorded deaths by suicide in England in 2021. While the suicide rate in England and Wales has declined by around 31% since 1981, the majority of this decrease happened before 2000. Suicide is three times more common in men than in women, and this gap has increased over time.
A study published in October 2022, led by the Black Dog Institute at the University of New South Wales, found that artificial intelligence models outperformed clinical risk assessments. It reviewed 56 studies from 2002 to 2021 and found artificial intelligence correctly predicted 66% of people who would experience a suicide outcome and correctly predicted 87% of people who would not. In comparison, traditional scoring methods carried out by health professionals are only slightly better than random.
Artificial intelligence is widely researched in other medical domains such as cancer. However, despite their promise, artificial intelligence models for mental health have yet to be widely used in clinical settings.
Suicide prediction
A 2019 study from the Karolinska Institutet in Sweden found four traditional scales used to predict suicide risk after recent episodes of self-harm performed poorly. The challenge of suicide prediction stems from the fact that a patient’s intent can change rapidly.
The guidance on self-harm used by health professionals in England explicitly states that suicide risk assessment tools and scales should not be relied upon. Instead, professionals should use a clinical interview. While doctors do carry out structured risk assessments, these are used to get the most out of interviews rather than as a scale to determine who receives treatment.
Risk of AI
The study from the Black Dog Institute showed promising results, but if 50 years of research into traditional (non-artificial intelligence) prediction yielded methods that were only slightly better than random, we need to ask whether we should trust artificial intelligence. When a new development gives us something we want (in this case better suicide risk assessments) it can be tempting to stop asking questions. But we can’t afford to rush this technology. The consequences of getting it wrong are literally life and death.
AI models always have limitations, including how their performance is evaluated. For example, using accuracy as a metric can be misleading if the dataset is unbalanced. A model can achieve 99% accuracy by always predicting there will be no risk of suicide if only 1% of the patients in the dataset are high risk.
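As a minimal sketch of this point (using synthetic labels, not patient data), the snippet below shows how a trivial classifier that always predicts "no risk" reaches roughly 99% accuracy on a dataset where only 1% of patients are high risk, while identifying none of them.

```python
# Minimal sketch with synthetic labels (not patient data): on an imbalanced
# dataset, always predicting "no risk" scores ~99% accuracy yet finds
# none of the high-risk patients.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n_patients = 10_000
y_true = (rng.random(n_patients) < 0.01).astype(int)  # ~1% of patients labelled high risk
y_pred = np.zeros(n_patients, dtype=int)               # trivial model: everyone is "no risk"

print(f"Accuracy:    {accuracy_score(y_true, y_pred):.1%}")  # ~99%
print(f"Sensitivity: {recall_score(y_true, y_pred):.1%}")    # 0% of high-risk patients identified
```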
It’s also essential to assess AI models on data different from the data they were trained on. This is to avoid overfitting, where a model learns to predict its training material almost perfectly but struggles with new data. Such a model may appear to work flawlessly during development, yet make incorrect diagnoses for real patients.
For example, artificial intelligence was found to overfit to surgical markings on a patient’s skin when used to detect melanoma (a type of skin cancer). Doctors use blue pens to highlight suspicious lesions, and the artificial intelligence learnt to associate these markings with a higher probability of cancer. This led to misdiagnosis in practice when blue highlighting wasn’t used.
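The same effect can be demonstrated on a small scale. The sketch below (using synthetic data, not medical records) fits an unconstrained decision tree that memorises its training set but performs noticeably worse on a held-out test set.

```python
# Minimal sketch with synthetic data: an unconstrained decision tree memorises
# its training set (near-perfect training accuracy) but does much worse on
# data it has never seen.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)  # noisy synthetic labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)  # no depth limit: free to memorise

print(f"Training accuracy: {tree.score(X_train, y_train):.1%}")  # ~100%
print(f"Held-out accuracy: {tree.score(X_test, y_test):.1%}")    # substantially lower
```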
It can also be difficult to understand what artificial intelligence models have learnt, such as why a model is predicting a particular level of risk. This is a pervasive problem with artificial intelligence systems in general, and has led to a whole field of research known as explainable artificial intelligence.
The Black Dog Institute found 42 of the 56 studies analysed had a high risk of bias. In this context, bias means the model over- or under-predicts the average rate of suicide; for example, the data has a 1% suicide rate, but the model predicts a 5% rate. High bias leads to misdiagnosis, either missing high-risk patients or assigning too much risk to low-risk patients.
These biases stem from factors such as participant selection. For example, several studies had high case-control ratios, meaning the rate of suicides in the study was higher than in reality, so the artificial intelligence model was likely to assign too much risk to patients.
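A minimal sketch of that effect, using a single made-up risk factor rather than real study data: a model fitted to a sample where half the participants are cases assigns far more risk, on average, than the 1% rate found in the wider population.

```python
# Minimal sketch with synthetic data: a model trained on a 50/50 case-control
# style sample over-predicts risk when applied to a population where the true
# rate is only 1%.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, prevalence):
    """One made-up risk factor; outcomes occur at the given prevalence."""
    y = (rng.random(n) < prevalence).astype(int)
    x = rng.normal(loc=y, scale=2.0).reshape(-1, 1)  # weakly informative feature
    return x, y

X_study, y_study = simulate(2_000, prevalence=0.5)    # study sample: half cases, half controls
model = LogisticRegression().fit(X_study, y_study)

X_pop, y_pop = simulate(20_000, prevalence=0.01)      # wider population: ~1% rate
print(f"True rate in population: {y_pop.mean():.1%}")
print(f"Mean predicted risk:     {model.predict_proba(X_pop)[:, 1].mean():.1%}")  # far above 1%
```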
A promising outlook
The models mostly used data from electronic health records. But some also included data from interviews, self-report surveys, and clinical notes. The benefit of using artificial intelligence is that it can learn from large amounts of data faster and more efficiently than humans, and spot patterns missed by overworked health professionals.
While progress is being made, the artificial intelligence approach to suicide prevention isn’t ready to be used in practice. Researchers are already working to address many of the issues with AI suicide prevention models, such as how hard it is to explain why algorithms made their predictions.
However, suicide prediction is not the only way to reduce suicide rates and save lives. An accurate prediction does not help if it doesn’t lead to effective intervention.
On its own, suicide prediction with artificial intelligence is not going to prevent every death. But it could give mental health professionals another tool to care for their patients. It could be as life-changing as state-of-the-art heart surgery if it raised the alarm for overlooked patients.
Joseph Early is a PhD candidate in Artificial Intelligence at University of Southampton.
This article first appeared on The Conversation.