Facebook is taking a step in the right direction by deploying AI technology to help identify users who are at risk of self-harm. Using algorithms trained to recognize certain keywords in content, Facebook can now spot warning signs in posts and replies. If a user's material raises a red flag, human reviewers are alerted and can reach out to offer help. Previously, Facebook relied on its own user base to flag potentially harmful material; this marks the first time an AI-assisted program will take on that role. Facebook will also be teaming up with charities and helplines to develop ways of connecting those at risk with counsellors and advisors.
The feature will also be rolled out on Facebook Live, especially since there have been troubling instances of users taking their own lives while livestreaming. In the future, AI-assisted technology could also help the social network detect posts that indicate terrorism or other worrying behaviour. Facebook has additionally released a video touting its help options.
In other Facebook news, the social network finally introduced a “dislike” button.