Meta Ditches Fact-Checkers on Instagram and Facebook: Why Parents Should Be Concerned
Today, Mark Zuckerberg introduced big changes to Meta’s content moderation. Here’s what parents need to know.
Meta CEO Mark Zuckerberg announced that the company is ditching its fact-checking program and moving toward a community-driven model, basically putting regular users in charge of flagging misleading posts.
No More Fact-Checking and More “Bad Stuff”
Of course, these changes are a concern for parents whose kids use Instagram, as well as Facebook and Threads, which are less popular with young people. These platforms will now operate more like X (formerly Twitter), which isn't reassuring, since kids can easily come across graphic or violent content on X. The changes not only make it harder to keep kids' online experience safe and age-appropriate, they also raise concerns about the misinformation kids might encounter. Zuckerberg admitted that there will be more "bad stuff" on the platforms now but said it's a trade-off.
In the past, Meta worked with third-party fact-checkers to flag and label posts that were false or misleading. This helped reduce the spread of both inaccurate news and harmful content. But now, Meta is moving away from that and adopting a system similar to X, where the community itself decides if something is true or not. While this could make the platforms feel more open and less censored, it also means there’s less oversight and fewer safeguards to stop misinformation from spreading.
For parents, this is worrisome. Kids, especially younger ones, may have a harder time telling fact from fiction. Where Meta once caught and labeled false information itself, that job now falls to ordinary users, and not all of them will have the skills or motivation to judge accuracy, which could let misinformation slip through.
Only Serious Violations Will Be Addressed
In addition, the new moderation system will focus only on the most serious violations, like content involving terrorism, child exploitation, or drugs, leaving everything else to be flagged by users. That means less obvious but still harmful content could slip through the cracks. Posts that promote unhealthy body standards, spread hate speech, or encourage bullying could go unchecked if they don't meet the company's new high-severity threshold.
Another big change is that Meta is moving its content moderation team from California to Texas and streamlining its policies. Zuckerberg explained that this should reduce mistakes, but it might also mean that some harmful posts don’t get flagged as quickly. For parents, this could lead to kids seeing more content that’s misleading, harmful, or just confusing.
With these changes, parents need to pay closer attention than ever to what their kids are seeing on social media. With Meta scaling back its moderation, kids are likely to run into more harmful or misleading content.
It's a good time for parents to set clear rules about what's okay to share or engage with online, and to help kids learn how to spot false information. In a world full of digital misinformation, one of the best things parents can do is teach their kids to think critically about what they come across online.
Parents can also encourage kids and teens to protect themselves by limiting social media use or blocking accounts when they see disturbing or false content. These steps can help them avoid unnecessary stress and anxiety and protect their mental health.