Meta urged to reassess 'shaheed' moderation policy by Oversight Board
The Oversight Board has called on Meta, the parent company of Facebook and Instagram, to reassess its moderation policy for the Arabic term 'shaheed.' The appeal follows Meta's request for the board's guidance in drafting new rules after internal efforts to revise the policy stalled. At present, Meta's treatment of 'shaheed' as a term of praise has led to more content removals than any other single word or phrase on its platforms.
The overlooked linguistic complexity of "shaheed"
The Oversight Board says Meta's existing rules rest solely on the 'martyr' definition of 'shaheed,' an interpretation that neglects the term's linguistic intricacies. 'Shaheed' carries multiple meanings and appears in neutral commentary, scholarly discussion, human rights debates, and other passive contexts. The board said there is "strong reason to believe multiple meanings of 'shaheed' results in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions."
Proposed changes to Meta's moderation policy
The Oversight Board proposed that Meta should cease its blanket prohibition on using 'shaheed' in reference to individuals labeled as 'dangerous.' The board advised that posts should only be taken down if they exhibit explicit signs of violence or breach other policies. Furthermore, the board urged Meta to offer a more detailed explanation of how it uses automated systems to enforce these rules, aiming for increased transparency in content moderation.
Potential consequences for Arabic-speaking users and journalism
Should Meta implement the recommendations made by the Oversight Board, the change could significantly affect Arabic-speaking users on its platforms. The board observed that, because of its widespread usage, 'shaheed' likely accounts for more content removals under the Community Standards than any other single word or phrase across Meta's apps. Helle Thorning-Schmidt, co-chair of the board, voiced concern that Meta's approach could harm journalism and civic discourse, as media organizations might refrain from reporting on designated entities to avoid content removal.
Meta's past controversies with Arabic content moderation
This isn't the first time Meta has been criticized for moderation policies that disproportionately impact Arabic-speaking users. A 2022 report commissioned by the company found that Meta's moderators were less accurate when evaluating Palestinian Arabic, resulting in unwarranted strikes on users' accounts. The company also issued an apology after Instagram's automated translations began inserting the word 'terrorist' into some Palestinian users' profiles, further underscoring problems with its moderation practices.