Facebook has just released a new set of services, developed in collaboration with Forefront, to try to tackle suicidal behavior through the social network.
In short, users will be able to report posts (status updates, images, etc.) to Facebook that might suggest suicidal thoughts or behaviors. Facebook and its partners will review the reports, then suggest the best options (automatic messages, referrals, etc.). This is not an entirely new initiative: Facebook has had a suicide risk alert system since 2011, established with organizations such as the Samaritans, along with a set of guidelines that are already available online.
Social networks and suicide: an interconnection
Suicide is one of the most prominent social facts. At the societal level, the suicide rate rises or falls depending on several factors, as Durkheim explained.
In some countries, suicide is treated as public enemy number one; in Japan, it is the leading cause of death for people in their twenties and thirties. It is a bad way of life that needs to be challenged in order to sustain more hopeful generations and the system as a whole.
It is a massive issue that could, in theory, be tackled with the help of social networks. As digital conversations have a growing impact on individuals through their preferred social media and messaging apps, and as our personalities are increasingly mediated by digital platforms, society has a legitimate interest in tapping into this vivid social ground.
Facebook's new functionalities enter the "suicidal ideation" battlefield
Suicidal ideation is defined as "having thoughts of self-injurious behavior with variable suicidal intent" (Goldney, 2008). There is a debate within the scientific community as to whether "passive ideation" is less risky than "active ideation." Facebook's new set of tools targets active ideation, the cases where users publicly leave a digital footprint.
However, as demonstrated by Robert I. Simon, MD (Georgetown University School of Medicine, Washington, DC):
"When a patient reports passive suicidal ideation, active suicidal ideation invariably is present. No bright line separates them. Suicidal ideation, active or passive, contains a dynamic mix of ambivalent thoughts and feelings along a continuum of severity. It reflects ongoing change in the patient's psychiatric disorder"
In other words, the problem should be tackled at a very early stage. Given that only a minority of people express their views through social media, detecting risk only when individuals publicly post a supposedly suicidal statement is highly insufficient. In some cultures, expressing suicidal thoughts is taboo. Only a tiny fraction of people share their deepest feelings on their Facebook profiles, and with teenagers using Facebook less intensively and finding new corners of social media in which to hide, this initiative sounds insufficient. After all, Facebook wants to connect the entire world and is currently working on virtual reality. Before doing so, it needs to get this more basic part of the user experience right if it is to remain legitimate when it asks users to give even more to the network: our fantasies, our physical representation, and the ties of our relationships.
A new set of functions that could be conceptually wrong
What I find disappointing is the systemic, mechanical answer that Facebook provides with the current system. Just as users can report or "flag" content that seems "abusive" (with no tangible scale, since the judgment relies on users' values and perceptions), a user can now flag and report to Facebook that something is wrong with a friend. This is disturbing: when someone sees a friend feeling bad, the first thing to do is probably to communicate directly, to reach out through mutual friends, or to contact a support organization. In Facebook's case, flagged users will receive a very strange, dehumanized message, the sort of content CRM tools send when brands want to entice you to click a link:
One can guess how a user will react to this sort of message. Even worse, the new functionalities could be conceptually wrong.
As two studies describe, the fact that a social network user is active on Facebook or Mixi (Japan) does not mean that they are more connected to people who could care.
"The researchers found that people who have regular thoughts about suicide have about the same number of friends as people in the control group. However, those prone to suicide thoughts are much less likely to be members of friendship triangles, meaning they have fewer friends who also are friends with each other.
In addition, the research found that people prone to suicide thoughts are likely to be members of more community groups than those in the control group, which may be a result of spending more time online and of a desire to interact."
"We found that the number of communities to which a user belongs to, the intransitivity (i.e., paucity of triangles including the user), and the fraction of suicidal neighbors in the social network, contributed the most to suicide ideation in this order".
Asking people who are socially disconnected yet digitally hyperactive to respond positively when someone in their network is "flagged" seems utterly complicated.
Facebook now owns WhatsApp and has invested a lot of money in Messenger. Much as Google reads our email on Gmail to match advertisers to users, why couldn't Facebook automatically browse and analyze more of our content to subtly suggest anti-suicidal content? New patterns could be identified and used to influence users in subtler ways to fight their suicidal tendencies.
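As a deliberately naive illustration of what such pattern analysis might look like at its simplest, here is a sketch in Python; the phrase list, the threshold, and the response are all invented for the example, and any real system would require far more careful modeling, evaluation, and ethical review.

```python
# Sketch: crude keyword-based risk scoring of a message. The patterns
# and threshold are hypothetical, purely for illustration.
import re

RISK_PATTERNS = [
    r"\bcan'?t go on\b",
    r"\bno reason to live\b",
    r"\bwant to disappear\b",
]

def risk_score(text: str) -> int:
    """Count how many hypothetical risk phrases appear (case-insensitive)."""
    return sum(bool(re.search(p, text, re.IGNORECASE)) for p in RISK_PATTERNS)

message = "some days I feel like I can't go on"
if risk_score(message) >= 1:
    # e.g. quietly surface supportive content instead of an ad
    print("surface supportive content")
```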