Happy New Year, OOTers! I hope everyone's 2019 is off to a great start. I came across this article in the New York Times over the holidays and wanted to share it with the community to see if we can spark a discussion and get your thoughts.

The article describes how Facebook has developed an algorithm that scans every post that goes up on the platform and assigns it a suicide-risk score. The system flags both high-scoring posts and posts reported by concerned users. Once a post is flagged, it is reviewed by a human specialist, and if the specialist deems the poster to be at risk, Facebook notifies the local authorities.
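For the more technically minded OOTers, here's a rough sketch of what a triage pipeline like the one the article describes might look like. To be clear, this is purely illustrative: the placeholder classifier, the 0.9 threshold, and every function name below are my own assumptions, not anything Facebook has published.

```python
from dataclasses import dataclass

# Hypothetical cutoff for sending a post to human review --
# Facebook's actual threshold is not public.
REVIEW_THRESHOLD = 0.9

@dataclass
class Post:
    post_id: str
    text: str
    user_reported: bool = False  # flagged by a concerned user


def score_risk(post: Post) -> float:
    """Stand-in for the ML classifier; returns a risk score in [0, 1].

    A real system would use a trained text model; this placeholder
    just checks for one obviously alarming phrase.
    """
    return 1.0 if "want to die" in post.text.lower() else 0.0


def triage(posts: list[Post]) -> list[Post]:
    """Queue a post for human review if the model score is high
    OR a concerned user reported it (both paths, per the article)."""
    return [p for p in posts
            if p.user_reported or score_risk(p) >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    queue = triage([
        Post("1", "Great game last night!"),
        Post("2", "I just want to die"),
        Post("3", "Everything is fine", user_reported=True),
    ])
    for post in queue:
        # In the real pipeline, a trained specialist reviews the post
        # here and decides whether to notify local authorities.
        print(f"Post {post.post_id} queued for specialist review")
```

The point of the two-path check in triage() is that the algorithm and user reports feed the same human-review queue, so a person always makes the final call before anyone contacts the police.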
Is Facebook potentially putting itself in a sticky situation by taking on the role of a public health agency? Does this service violate users' privacy? Some mental health experts argue that Facebook's calls to the police can themselves cause harm.
What do you guys think?
In my opinion, Facebook is absolutely doing the right thing. They are making a genuine attempt to fulfill their social responsibility given the position they're in... you know, a huge social media platform with over 2 billion users! It may not be perfect the first time around, but doing something is better than doing nothing. Plus, I would wager that the algorithm and the whole suicide detection system will only get better with time.
Alright, let’s hear it OOTers!