Creepy Zuckerberg Rolls Out AI on Facebook That Senses If You’re Acting Suicidal So Police Can ‘Respond’

Soon the cops will be kicking in your door because Facebook says you’re a danger to yourself … or others.

This is software to save lives? Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.
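The flag-and-triage flow described above, score posts, route anything above a threshold to human moderators, can be sketched in a few lines. This is an entirely hypothetical illustration: the keyword scorer stands in for Facebook's undisclosed model, and the threshold and function names are invented here.

```python
# Hypothetical sketch of a "proactive detection" pipeline. A crude keyword
# score stands in for the real (non-public) classifier; threshold is invented.

RISK_KEYWORDS = {"hopeless", "goodbye", "can't go on", "end it"}

def risk_score(post: str) -> float:
    """Toy stand-in for a trained classifier: fraction of risk keywords hit."""
    text = post.lower()
    hits = sum(1 for kw in RISK_KEYWORDS if kw in text)
    return hits / len(RISK_KEYWORDS)

def triage(posts, threshold=0.25):
    """Send posts at or above the threshold to a moderator queue, riskiest first."""
    scored = [(risk_score(p), p) for p in posts]
    queue = [(s, p) for s, p in scored if s >= threshold]
    return [p for s, p in sorted(queue, reverse=True)]

posts = [
    "Had a great day at the park!",
    "I feel hopeless, this is goodbye.",
]
print(triage(posts))  # only the second post reaches the moderator queue
```

The point of routing to humans rather than acting automatically is the one the article makes: the AI only shortens the time before a moderator (not the software) decides whether to send resources or contact first-responders.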

Facebook previously tested using AI to detect troubling posts and surface suicide reporting options to friends more prominently in the U.S. Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy rules on profiling users based on sensitive information complicate the use of this tech.

Facebook will also use AI to prioritize particularly risky or urgent user reports so moderators address them more quickly, and will deploy tools that instantly surface local-language resources and first-responder contact information. It is also dedicating more moderators to suicide prevention, training them to handle these cases 24/7, and now has 80 local partners, such as Save.org, the National Suicide Prevention Lifeline and Forefront, through which it can provide resources to at-risk users and their networks.

Source: https://techcrunch.com/2017/11/27/facebook-ai-suicide-prevention/
2021 © True Pundit. All rights reserved.