
Following in the footsteps of Elon Musk's decision at X, formerly known as Twitter, Mark Zuckerberg announced on Jan. 7, 2025, his intention to remove third-party fact-checkers from all Meta apps and replace them with a "Community Notes" feature: crowdsourced contextual information that any user can add to another's post and that others can vote on.

Meta, the parent company of Facebook, Instagram and Threads, will make this controversial change on all three platforms, starting in the United States (U.S.) and then expanding to other countries. Social media is already a hub for angry political arguments and opinions. The trend of social media companies eliminating factual context or correction from their sites will soon permit people to say nearly anything they want without consequences, undoubtedly leading to a misinformation free-for-all.

While Meta plans to end third-party fact-checking on its platforms, community and "hateful conduct" guidelines outlining what kinds of content can be removed by moderation will remain. But those, too, have been amended. The "change log" in Meta's Transparency Center shows the specific edits made to this document, including the removal of "do not post" rules that barred referring to women as household objects, referring to non-binary people as "it" and referring to people with any "protected characteristic" as criminals.

The edits also spell out a new allowance for "allegations of mental illness or abnormality when based on gender or sexual orientation." These changes make way for hate speech on the platforms, leaving millions of users around the world feeling excluded and unsafe now that there is no longer any accountability for discriminatory behavior.

In a video posted to Instagram, Zuckerberg stated that the company would be "simplifying our policies and restoring free expression on our platforms." He also cited concerns about fact-checking becoming "too politically biased." Instead of this supposedly biased practice of keeping people informed of the facts, Zuckerberg plans to implement a "Community Notes" feature, which he says will be similar to the one currently on X. However, this change is dangerous because it will prioritize and amplify the loudest voices online over factual information, empowering people to ignore facts when sharing their beliefs.

One of the most troubling parts of this shift is its model, X's "Community Notes," which reports have shown consistently fails to address the spread of misinformation. For example, one investigation found that 74% of accurate notes correcting false or misleading claims about the 2024 presidential election were never displayed. Even when a note was displayed, the original misleading post was shown without it 13 times more often than with it. Evidently, the "Community Notes" feature does little, if anything, to correct misinformation on the platform.

Everyone deserves a space, even online, where they can be themselves without feeling unwelcome or unsafe. These changes to the regulation of misinformation reverse the work that has been done to create that secure space. Hateful speech also does not stay confined to social media platforms; it can promote offline violence. A paper published by two New York University professors, for example, reviewed data from 100 U.S. cities and found a direct link between rises in targeted hate speech and hate crimes in every city in the study.

Misinformation on social media platforms will reach an all-time high if it is left unchecked. With increasing political polarization in the U.S. and incoming policy changes under the Trump administration, the country will only become more unsafe for women, LGBTQ+ people, immigrants and people of color.
