Now, Twitter will protect religious groups from being dehumanized: Here's how
Finally, Twitter has decided to tackle the dehumanization of its users. The company has announced changes to protect people affiliated with different religious groups from being targeted with hateful language. The move is part of the microblogging site's larger effort to combat the spread of hate and bigotry and to create a safer community for users. Here are the details.
Tweets dehumanizing users will be removed
Twitter has updated its policy to ban tweets that use hateful or offensive language to dehumanize people on the basis of their religious affiliation. This means that any tweet treating religious groups as less than human will be taken down by the platform. This could include posts that deny them human qualities by reducing them to animals or genitalia.
Twitter detailed what kind of tweets will be removed
Though Twitter already has a policy in place to take down threats against individuals or calls for violence, the latest change is meant to shield religious groups from bigotry and foster healthier conversations. The company made the decision after consulting the public and experts, and emphasized that tweets comparing religious groups to "rats," "viruses," "maggots," and "filthy animals" will be removed from the platform.
Notably, this is just the first part of a larger change
Back in 2018, Twitter pitched the idea of banning dehumanizing language against all identifiable groups. The company sought public feedback on the matter, which indicated that it should narrow the policy down to specific groups. Now, with rules set for religious groups, we expect to see further changes combating dehumanization on the basis of race, ethnicity, age, gender, sexual orientation, and other characteristics.
New rules are now in effect
With the latest rules in effect, Twitter will launch a crackdown on dehumanizing tweets aimed at religious groups. The company has said it will remove old tweets that are still on the platform and may even ban accounts that violate the policy repeatedly. However, it won't automate the process with AI, which means offending content will be reviewed only if someone reports it manually.