Facebook is changing the way it reaches people who have encountered misinformation on its platform. The company will now send notifications to anyone who has liked, commented or shared false COVID-19 information that has been removed for violating the platform's terms of service.
It will then connect users with trusted sources in an effort to correct the record. Although researchers believe that additional context could help people better understand their news consumption habits, it may be too late to quell the tide of COVID-19 misinformation.
According to Fast Company, these proactive notifications about misinformation are Facebook's latest attempt to let users know that they came into contact with false information and that it has been removed from the site. The company launched the concept in April through a News Feed post that directed users who had engaged with misinformation to a World Health Organization webpage debunking COVID-19 myths. Now, Facebook is sending notifications directly that say:
We removed a post you liked that had false and potentially harmful information about COVID-19.
Clicking on the notification takes users to a page where they can see a thumbnail image of the offending content, a description of whether they liked, shared or commented on the post, and why it was removed from Facebook. It also offers follow-up actions, such as the option to unfollow the group that originally posted the false information or "see facts" about COVID-19.
What wasn't working
There's a good reason for the change in enforcement: Facebook found that it wasn't clear to users why messages in their news feed urged them to get the facts regarding COVID-19.
People didn't really understand why they were seeing this message. There was no clear link between the message they were seeing on Facebook and the content they had interacted with.
Valerio Magliulo, product manager at Facebook.
The social network redesigned the experience so that people understand exactly what false information they came into contact with, a vitally important piece of context that the original launch format did not include.
The alert itself is written to be informative, not judgmental. To that end, Facebook does not attempt to correct the record; instead, it explains why a given post was removed from the platform. For example, Facebook might note that it does not allow false information suggesting there is a cure or prevention for a disease that could lead someone to harm themselves. But it won't explain how a particular post violated that rule.
The challenge we face and the delicate balance we are trying to achieve is how to provide enough information to give the user context about the interaction we are talking about without re-exposing them to misinformation.
Valerio Magliulo, product manager at Facebook.
The concern Magliulo highlights is that the platform could unintentionally reinforce the misinformation it is trying to debunk.
But according to Alex Leavitt, a misinformation product researcher at Facebook, the backfire effect, or the possibility that correcting misinformation could lead people to cling to it more strongly, is minimal.
That's why the change looks like a half-step. While the new notification is much more specific than the original feature, which gave users no information at all about the false content they had interacted with, it still may not be explicit enough. The feature doesn't actually debunk any false narratives. Although Facebook connects users to a list that dispels the most common myths around COVID-19, it does not address the particular misinformation a person engaged with.
So why not offer a direct rebuttal of the false information a person encountered?
Facebook says that's because it can't show users a post that has already been removed. The company also doesn't want to embarrass the person who posted the misinformation in the first place, since they may have shared it unintentionally.
Additionally, Leavitt says that a single, uniform experience is easier to validate. When a debunking message is tailored to a small group of people who interacted with the same misinformation, "it's harder to try to find really strong effects in experiments or surveys to make sure it really works."
An overwhelming tide
Although this change could be a half-step in the right direction, researchers remain concerned that Facebook's efforts to fight misinformation are too little, too late.
Meanwhile, COVID-19 misinformation, particularly about vaccines, continues to spread rapidly. In May 2020, Nature published a study based on data collected from February to October 2019, before the pandemic began, which found that although anti-vaccination groups have fewer members than pro-vaccination groups, they have more pages, are growing faster, and are more connected to users who are undecided about vaccines.
An October report from the Center for Countering Digital Hate (CCDH), a UK-based nonprofit, indicates that some 31 million people follow anti-vaccination groups on Facebook.