Social media giant Facebook (NASDAQ:FB) has plans to use artificial intelligence to help prevent suicides among users.
The social network has developed algorithms that detect warning signs in users’ posts and in the comments their friends leave in response.
The pattern-recognition algorithms are trained on examples of posts that have previously been flagged, which allows them to recognise when someone might be at risk.
Once a post has been identified, it is sent to the network’s community operations team for rapid review.
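Facebook has not published details of its models, but the approach described here is, in essence, a text classifier trained on previously flagged posts. As a rough, purely illustrative sketch (all data, labels and thresholds below are hypothetical, not Facebook’s), it might look something like this:

```python
# Illustrative sketch only: a toy text classifier trained on examples of
# posts that were previously flagged, in the spirit of the approach
# described above. The training data and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post text paired with whether reviewers flagged it.
posts = [
    "Had a great day at the beach with friends",
    "I can't see any reason to keep going",
    "Excited for the new job next week",
    "Nobody would miss me if I were gone",
]
flagged = [0, 1, 0, 1]  # 1 = previously flagged by human reviewers

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, flagged)

# Score a new post; anything above a chosen threshold would be routed
# to human review rather than acted on automatically.
new_post = ["I feel like giving up on everything"]
risk_score = model.predict_proba(new_post)[0, 1]
print(f"risk score: {risk_score:.2f}")
```

The key design point the article describes is that the model only prioritises posts for human review; the final judgement rests with the community operations team, not the algorithm.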
The new efforts to reach potentially suicidal users follow the case of a 14-year-old foster child in Florida who reportedly broadcast her suicide on Facebook Live in January, according to the New York Post.
“We know that speed is critical when things are urgent,” said Facebook product manager Vanessa Callison-Burch.
The world’s largest social media network said it hopes to integrate its existing suicide prevention tools for Facebook posts into Facebook Live and Facebook Messenger.
The director of the US National Suicide Prevention Lifeline praised Facebook’s efforts.
“It’s something that we have been discussing with Facebook,” said Dr John Draper. “The more we can mobilise the support network of an individual in distress to help them, the more likely they are to get help.
“The question is how we can do that in a way that doesn’t feel invasive. I would say though that what they are now offering is a huge step forward.”
Mark Zuckerberg also announced last month that he hopes to use artificial intelligence algorithms to identify posts by terrorists, among other concerning content.
Facebook has partnered with several US mental health organisations to let users contact them via Facebook Messenger.