Report Finds Facebook Parent Meta Harmed Palestinians' Human Rights

For social media platforms, content moderation in languages other than English has always been a difficult nut to crack. In light of the report's findings, Meta is adjusting its practices.

A report published on Thursday found that Facebook's parent company, Meta, made content moderation mistakes during a violent outbreak in the Gaza Strip in May 2021, and that those mistakes had an adverse effect on Palestinians' human rights.

After receiving advice from its oversight board, which evaluates some of Meta's most contentious moderation decisions, the social media company hired Business for Social Responsibility (BSR) to assess the impact of its policies and actions on Palestinians and Israelis.

The report found that Meta's actions deprived Palestinians of their "right to freedom of expression, freedom of assembly, political participation, and nondiscrimination," highlighting the company's ongoing difficulty in policing content written in languages other than English. Meta is the parent company of three of the most popular social media platforms: Facebook, Instagram, and WhatsApp.

According to the report, BSR consulted with affected parties and found that many of them agreed with the statement, “Meta appears to be another powerful entity repressing their voice.”

The report highlights several mistakes Meta made in content moderation during last year's Israeli-Palestinian conflict. The company incorrectly deleted posts made by Palestinians because Arabic-language content "had greater over-enforcement." BSR found that the rate of proactive detection of potentially violating Arabic content was significantly higher than that of potentially violating Hebrew content.

Content written in Hebrew was subject to "greater under-enforcement" because Meta lacked a "classifier" for "hostile speech" in that language. Such a classifier allows the company's AI systems to automatically detect posts that are likely to violate its rules; without one for a given language, posts in that language are not flagged automatically. Meta had also outsourced content moderation and lost Hebrew-speaking employees.
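To make the idea concrete, the sketch below shows the general shape of such a text classifier. It is not Meta's actual system, whose details are not public; the library choice (scikit-learn), the toy training examples, and the review threshold are all assumptions for illustration only.

```python
# Minimal sketch of a "hostile speech" text classifier, for illustration only.
# This is NOT Meta's system; the tiny toy dataset, labels, and threshold below
# are invented purely to show the general shape of such a pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = likely violating ("hostile speech"), 0 = benign.
train_texts = [
    "I hope you all have a great day",      # benign
    "let's meet for coffee tomorrow",       # benign
    "those people deserve to be hurt",      # hostile (toy example)
    "we should attack them in the street",  # hostile (toy example)
]
train_labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new post; in a real system, posts above some threshold would be
# routed to automated enforcement or human review.
post = "we should attack them"
score = model.predict_proba([post])[0][1]
if score > 0.5:  # assumed review threshold
    print(f"flag for review (score={score:.2f})")
else:
    print(f"no action (score={score:.2f})")
```

A production system would be trained on large volumes of labeled content in each language, which is why the absence of a Hebrew model left Hebrew posts comparatively under-enforced.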

Some content that did not technically break Meta’s rules was also mistakenly removed. According to the report, “these errors were more severe given a context where rights such as freedom of expression, freedom of association, and safety, especially for activists and journalists, were of heightened significance.”

Other significant content moderation errors on Meta's platforms were also highlighted in the report. For a short time, Instagram blocked the hashtag #AlAqsa, which is commonly used to refer to the Al-Aqsa Mosque in the Old City of Jerusalem. There was also incitement to violence and anti-Palestinian rhetoric, as well as hostile content directed at Arab Israelis, Jews in Israel, and Jews elsewhere. In addition, Palestinian journalists said their access to WhatsApp was restricted.

While BSR did not find any evidence of intentional bias at Meta or among its employees, it did find "various instances of unintentional bias where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users."

Meta has promised to make adjustments in light of the report's findings. For instance, the company says it will continue developing and deploying Hebrew machine-learning classifiers.

In a blog post, Miranda Sissons, Meta’s director of human rights, said, “We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content.”