Facebook Implements New AI to Prevent Suicide
“We can help connect people who are in distress connect to friends and to organizations that can help them.”

Facebook is introducing new artificial intelligence technology to help users struggling with immediate mental health issues.
Using new “proactive detection” technology, Facebook’s AI system will sift through content from around the globe to identify users in need. The social media giant’s suicide prevention AI will notify at-risk users of available mental health resources and can even contact the appropriate local first-response authorities. “This is about shaving off minutes at every single step of the process, especially in Facebook Live,” said Facebook VP of product management Guy Rosen. During recent testing, Facebook initiated more than 100 “wellness checks,” in which first responders visited at-risk users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”
As other outlets note, Facebook’s new AI is trained to find patterns in “words and imagery” that have previously been reported for “suicide risk.” It also looks for specific comments, such as “Do you need help?” or “Are you OK?”
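To make the pattern-matching idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Facebook’s code or model: the real system relies on trained machine-learning classifiers rather than a hand-written phrase list, and the `flag_post` function and patterns below are hypothetical.

```python
# Hypothetical illustration only -- not Facebook's actual model or code.
# A toy flagger that checks a post's text and its comments for phrases
# like those the article describes ("Are you OK?", "Do you need help?").
import re

# Example phrase patterns; a production system would use trained models,
# not a static keyword list like this.
CONCERN_PATTERNS = [
    r"\bare you ok\b",
    r"\bdo you need help\b",
    r"\bi can'?t go on\b",
]

def flag_post(post_text: str, comments: list[str]) -> bool:
    """Return True if the post text or any comment matches a concern pattern."""
    texts = [post_text, *comments]
    return any(
        re.search(pattern, text, flags=re.IGNORECASE)
        for text in texts
        for pattern in CONCERN_PATTERNS
    )

if __name__ == "__main__":
    # A post whose comments include a concerned reply would be flagged.
    print(flag_post("Feeling really low tonight.", ["Are you OK? Message me."]))
```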
“We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen explains. “This puts Facebook in a really unique position. We can help connect people who are in distress connect to friends and to organizations that can help them.”
In a post from earlier today, Facebook CEO Mark Zuckerberg shared support for his company’s latest major venture. “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”
Facebook’s chief security officer Alex Stamos addressed security concerns with the following tweet:
The creepy/scary/malicious use of AI will be a risk forever, which is why it's important to set good norms today around weighing data use versus utility and be thoughtful about bias creeping in. Also, Guy Rosen and team are amazing, great opportunity for ML engs to have impact. https://t.co/N9trF5X9iM
— Alex Stamos (@alexstamos) November 27, 2017
At present, users will not be able to “opt out” of the program, and the European Union is the only region where the AI will not be deployed.