Twitter banning 'dehumanizing language' to prevent offline harm
"You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm" - this could soon be the newest addition in Twitter rules. The 12-year-old microblogging platform is updating its policy in a bid to ban dehumanizing speech. This, as company executives say, will support safer, healthier conversations on Twitter. Here are the details.
Hateful conduct policy in place, but some content passed moderation
The effort, which has been in the works for the past three months, comes in the wake of several incidents where people were targeted with dehumanizing language. Twitter already has a well-structured policy in place to prevent racist, sexist, and other hateful remarks that objectify a person. Despite that, however, people found ways to use dehumanizing language.
How have people been targeted?
Many have been targeted based on their membership in groups identifiable by shared characteristics such as race, ethnicity, or sexual orientation. The remarks, as Twitter described, may not target these people directly, but could still make them 'feel less than human'. This could include demeaning someone by comparing them to animals or objects, and can even lead to violence in the real world.
Effect of dehumanization
Over the years, many experts have flagged the real-world effects of dehumanization. It has been described as a hallmark of dangerous speech: dehumanizing somebody can not only make violence against them seem acceptable but also weaken the forces acting against that violence.
Call for public inputs before final implementation
Though the implementation of this policy is important, Twitter isn't moving ahead with it just yet. As part of its policy development process, the company is calling on the public to provide input through a short survey open until October 9. Remarks and concerns from the public will be taken into account before the policy moves ahead through the regular process.