Meta to launch deepfake fact-checking helpline on WhatsApp next month
Meta and the Misinformation Combat Alliance (MCA) are teaming up to launch a WhatsApp helpline in March 2024, aimed at tackling the growing problem of deepfakes and AI-generated misinformation. The helpline will let users report suspected deepfakes through a multilingual chatbot that supports English, Hindi, Tamil, and Telugu. The goal is to provide people with accurate information and stop the spread of deceptive content.
What's next?
The MCA is setting up a central Deepfake Analysis Unit (DAU) to manage all inbound messages received on the WhatsApp helpline. The unit will work closely with "member fact-checking organizations as well as industry partners and digital labs." The DAU will "assess and verify" the reported content and, based on its verdict, respond to the messages, with the aim of debunking misinformation and false claims.
Four-pillar approach to tackle deepfakes
Meta's strategy focuses on four key areas: detection, prevention, reporting, and raising awareness about deepfakes. Shivnath Thukral, Director of Public Policy India at Meta, highlighted the importance of collaboration in fighting AI-generated misinformation. He said, "Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections."
DAU and fact-checking partnerships
Bharat Gupta, President of the MCA, applauded the creation of the DAU, which brings together fact-checkers, tech professionals, journalists, and forensic experts supported by Meta. In addition to the upcoming WhatsApp helpline, Meta's fact-checking efforts in India involve partnerships with 11 independent organizations. Users are encouraged to follow dedicated fact-checking channels and use WhatsApp tiplines to verify information and help curb the spread of misinformation.
Meta wants to make elections transparent
Meta is also a signatory to an industry initiative titled the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections." The goal is to deploy technology that counters "harmful AI-generated content meant to deceive voters" participating in elections worldwide this year. Meta, along with other signatories, has pledged to work "collaboratively on tools to detect and address the online distribution of such AI content, drive educational campaigns, and provide transparency."