Twitter tests anti-abuse heads-up prompts for potentially intense conversations
In its latest attempt to combat pervasive harassment on its platform, Twitter is testing an anti-abuse feature that gives users a heads-up before they enter potentially heated conversations. The feature surfaces prompts based on the conversation's "vibes." Calling it "a work in progress," Twitter Support said the feature is currently being tested on both Android and iOS. Here's more.
Feature will promote checking facts, diverse perspectives
The feature being tested will display warnings like "conversations like this can be intense" under relevant Twitter threads. Selecting the prompt reveals reminders to "remember the human" and treat others with respect, and that "facts matter" and "diverse perspectives have value." Users can then opt into the test by selecting "Count Me In." The criteria for labeling conversations as intense will change as the test progresses.
Tweet topic, writer-replier relationship to be judged
For now, Twitter will use the tweet's topic and the relationship between the tweet's author and the replier to gauge the vibe of a conversation. So any thread, whether it contains political misinformation or just a product review, could come under the scanner. "It's an early test, so we may not get it right every time," warned Twitter Support.
Metrics questioned, censorship fears raised on Twitter
Many people are already complaining about the feature, arguing it relies on Twitter algorithms that have little understanding of context. Some fear it could turn into a behavior manipulation or censorship tool, and the prompt has apparently already marked harmless conversations as intense. Last month, Twitter's Product Lead for Conversational Safety, Christine Su, defended anti-abuse features like Safety Mode, saying the platform was trying to reduce the burden on people dealing with disruptions.
Targeted prompts show promise of curbing toxic behavior
Meanwhile, targeted features, like prompting users who are typing potentially harmful replies to reconsider before they hit send, show a lot of promise. Twitter is also exploring new ways of filtering out unwanted replies containing slurs and aggressive emojis. Other features being tested include letting users exit conversations they are mentioned in and auto-blocking accounts similar to those they have already blocked.