The advisory group that reviews Facebook and Instagram content moderation decisions issued its first annual report on Wednesday, capping its first year in operation.
The Oversight Board says it received more than a million appeals from Facebook and Instagram users in 2021. Most of those requests asked the board to restore content that Meta had removed for violating its rules against hate speech, violence and intimidation. The board issued full decisions and explanations in 20 cases that it describes as “significant.” In 70 percent of the cases the group reviewed, it overturned Meta’s initial determination.
“Clearly, there was a huge pent-up demand among Facebook and Instagram users for some way to appeal Meta’s content moderation decisions to an organization independent of Meta,” the board writes in the report.
The Oversight Board’s most high-profile decision to date concerned the suspension of former President Donald Trump, who was removed from Facebook after fueling the insurrection at the US Capitol. In that case, the board asked Meta to clarify the rules it used to remove the former president from the platform in the first place. “In applying this sanction, Facebook did not follow a clear and published procedure,” the board wrote at the time, adding that Facebook did not have a rule for “indefinite” suspensions like the one issued to Trump.
Beyond its decisions, which set something of a precedent for future policy enforcement, the board also makes more general recommendations to Meta about how the company should think about particular aspects of content moderation and the rules it should implement.
In less high-profile cases, the board has recommended that Meta tighten Facebook and Instagram rules against doxing, requested that the company publish a dedicated transparency report on how well it has enforced its COVID-19 rules, and asked it to prioritize fact-checking of governments that share health misinformation through official channels.
The Oversight Board made 86 policy recommendations in its first year. Meta has implemented some of the board’s suggestions for improving moderation transparency, including giving users more information when they violate the platform’s hate speech rules and telling them whether AI or a human reviewer made the enforcement decision, while ignoring others completely. Those outcomes are tracked in the annual report, which sheds some light on how much impact the group really has and how often Meta implements or ignores its recommendations.
The Oversight Board reviews content moderation cases from around the world, sometimes catching linguistic and cultural nuances that Meta failed to account for in its moderation decisions, automated or otherwise. Facebook whistleblower Frances Haugen has repeatedly flagged the company’s inability to adequately monitor its social platforms in non-English-speaking markets. According to the report, half of the Oversight Board’s decisions concerned countries in the Global South, including several in Latin America and Africa.
Initially, the board only reviewed cases in which users asked for removed Instagram and Facebook content to be restored, but a few months later it expanded its remit to consider requests to take content down. Still, the Oversight Board’s decision-making is limited to questions about individual posts, not the many other features people use on Instagram and Facebook.
The board writes that it wants to expand the scope of its powers to advise Meta on moderation affecting accounts and groups on all of its platforms, not just individual posts. The Oversight Board is currently “in dialogue” with the company, which still has the final say on what the semi-independent advisory group can actually do.