How Europe plans to fight against AI-generated disinformation
The advent of generative AI has made many things easier. One of them is creating fake images, videos, and audio that can rival the real thing. Many fear AI will usher in a new era of fake news and disinformation. Europe wants to curb this before it is too late by requiring such content to be labeled.
Why does this story matter?
Doctored images and videos existed long before generative AI. However, no matter how good they were, a trained eye or ear could usually spot them. That may no longer be the case. From a viral AI-generated Drake song to Pope Francis in a Balenciaga jacket, AI has already stunned us with its realistic creations. Generative AI will only get better, and its output will look ever more convincing.
EU asked online platforms to label AI-generated content
The European Union (EU) is at the forefront of efforts to regulate AI, and it is also leading the fight against AI-generated disinformation. After a meeting with signatories to the Code of Practice on Online Disinformation, Vera Jourova, the EU's values and transparency commissioner, asked online platforms to label any content generated by AI. The Code is the bloc's voluntary agreement to combat disinformation.
EU proposed two ways to deal with disinformation
According to Jourova, platforms that have integrated generative AI, such as Microsoft's new Bing, should build safeguards to prevent malicious actors from using these services to "generate disinformation." She also asked companies whose services could be used to spread AI-generated disinformation to introduce technology to "recognize such content and clearly label this to users."
EU wants companies to implement labeling immediately
The current version of the Code has no provision for identifying and labeling deepfakes, but the rapid rise of generative AI has made addressing the problem urgent. Jourova asked platforms to implement labeling "immediately." She also urged the Code's 44 signatories, including Google, Meta, and Microsoft, to inform the public of the steps they are taking to tackle AI-generated disinformation.
'Machines don't have freedom of speech'
The ability of AI chatbots, image generators, and voice generators to create complex content and visuals raises "fresh challenges for the fight against disinformation," Jourova said. "I said many times that we have the main task to protect freedom of speech. But when it comes to the AI production, I don't see any right for the machines to have freedom of speech," she added.