According to the BBC, Facebook has begun developing algorithms that look for warning signs in users’ posts and in the comments other users leave in response. Once a post is flagged, the company’s human review team assesses the situation and, if necessary, contacts the user thought to be at risk of self-harm, suggesting ways in which they can seek help.
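Facebook has not published the details of this system, so the sketch below is only a minimal illustration of a flag-and-review pipeline of the kind the BBC describes. Every name in it is hypothetical, and the simple keyword heuristic stands in for what would in practice be a trained classifier.

```python
# Illustrative sketch only; this is not Facebook's implementation.
# It models the reported pipeline: scan a post and the comments other
# users leave on it for warning signs, then queue flagged items for
# review by a human team.

from dataclasses import dataclass, field

# Hypothetical phrase list; a real system would use a trained model.
WARNING_PHRASES = ("want to die", "end it all", "no reason to live")

@dataclass
class Post:
    author: str
    text: str
    comments: list = field(default_factory=list)

def shows_warning_signs(text: str) -> bool:
    """Crude stand-in for the risk classifier the article describes."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WARNING_PHRASES)

def flag_for_review(post: Post) -> bool:
    """Flag a post if the post itself or any comment looks concerning.

    Per the article, replies from other users are a signal too, so the
    same heuristic is applied to the comment thread.
    """
    texts = [post.text] + post.comments
    return any(shows_warning_signs(t) for t in texts)

if __name__ == "__main__":
    post = Post(author="user123",
                text="I feel like there is no reason to live")
    if flag_for_review(post):
        # In the reported system, the human review team assesses the
        # case and, if necessary, contacts the user with ways to get help.
        print(f"Queued post by {post.author} for human review")
```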
Facebook has stated that this new technology is not only helpful but also critical to the well-being of users. So far the tool is only being tested in the US, but there are hopes that it will soon reach the UK and the other countries where the network is used.
This marks the first use of artificial intelligence to review messages on the platform. Last month, Facebook founder Mark Zuckerberg stated that he wanted to use algorithms to identify posts by terrorists, as well as other concerning content.
The network has also announced new ways to address suicidal behaviour on its Facebook Live broadcast tool, partnering with a number of US mental health organisations to allow vulnerable users to contact them via its Messenger platform.
These efforts follow the death of a teenager in Miami, who livestreamed her suicide on Facebook earlier this year.
Facebook aims to help at-risk users during their broadcast, rather than waiting until the video has ended to review it. Now, when someone watching a livestream clicks a menu option to say they are concerned, Facebook provides the viewer with advice on how they can support the broadcaster.
In addition, the video is flagged for immediate review by the platform’s own team, who can then overlay a message with suggestions for the broadcaster, should this be appropriate.
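As a rough model of that live-video flow (again, purely illustrative: none of these functions or messages come from Facebook), the sequence of viewer report, immediate advice, and reviewer overlay might look like this:

```python
# Hypothetical sketch of the live-video flow described above: a viewer
# reports concern, immediately receives support advice, and the stream
# is flagged for the review team, which may overlay a message for the
# broadcaster. All names and messages here are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LiveStream:
    broadcaster: str
    flagged: bool = False
    overlay: Optional[str] = None

def report_concern(stream: LiveStream) -> str:
    """Handle a viewer selecting the 'I'm concerned' menu option."""
    stream.flagged = True  # queued for immediate human review
    # Advice shown to the reporting viewer, as the article describes.
    return ("Ways you can support the broadcaster: reach out to them "
            "directly, or share a helpline number.")

def review_stream(stream: LiveStream, overlay_needed: bool) -> None:
    """Human reviewer decision: overlay a support message if appropriate."""
    if stream.flagged and overlay_needed:
        stream.overlay = ("Help is available. Would you like to talk "
                          "to someone?")

if __name__ == "__main__":
    stream = LiveStream(broadcaster="user456")
    print(report_concern(stream))   # viewer sees support advice at once
    review_stream(stream, overlay_needed=True)
    print(stream.overlay)           # broadcaster sees the overlaid message
```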
Source:
http://www.bbc.co.uk/news/tech...d=socialflow_twitter