Centre advises tech companies to tackle deepfakes, half-baked AI tools
Ahead of the upcoming Lok Sabha elections, India's Ministry of Electronics and Information Technology has advised social media giants Meta and X to eliminate deepfakes from their platforms. Social media companies must remove deepfake content as soon as they detect it. According to top government official S Krishnan, tech companies have also been asked not to release "half-developed" AI products that are "subject to hallucination or putting out things that are inaccurate."
PM Modi's criticism of deepfakes
Last November, Prime Minister Narendra Modi described deepfakes as "one of the biggest threats" to India's social system, with the potential to create chaos in society. With the elections approaching, concerns have surfaced that malicious actors could exploit social media platforms to disseminate deepfakes for political advantage. Regulators and law enforcement agencies face immense challenges in tackling AI-generated disinformation, which poses a risk to civil society, because the technology is new and regulatory controls over it remain limited.
Deepfakes covered under IT Act
Krishnan clarified that deepfakes, which are misleading and often defamatory, fall under the IT Act. He stressed the need for swift action to curb the spread of such false information. In December 2023, Meta, X, and Google received a similar advisory, directing them to remove deepfakes and adhere to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.