Meta to label AI-generated images across Facebook, Instagram, and Threads
Meta is stepping up efforts to help users spot AI-generated images on Facebook, Instagram, and Threads. The move aims to curb misinformation amid concerns about generative AI's influence on major elections in the US and other countries. Nick Clegg, Meta's President of Global Affairs, said the company is working with industry partners on common technical standards for signals that indicate when images, videos, or audio clips are AI-generated.
Detecting invisible signals and collaborating with industry partners
In a Meta Newsroom announcement, Clegg explained that the tools under development detect invisible signals that align with the IPTC and C2PA standards, which will let Meta label AI-generated images from companies such as Google, Adobe, Midjourney, OpenAI, Microsoft, and Shutterstock. However, Clegg noted that AI-generated video and audio do not yet carry invisible signals as widely as images do. Until they do, Meta will ask users to label such content themselves and may apply penalties to those who don't.
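As a rough illustration of what these invisible signals look like at the file level, the sketch below scans an image's raw bytes for the IPTC DigitalSourceType value used for AI-generated media and for strings that typically accompany a C2PA manifest. The helper name and file path are placeholders, and a real detector would parse the embedded XMP and JUMBF structures rather than grep the bytes; this is not Meta's detection tool.

```python
from pathlib import Path

# IPTC's DigitalSourceType value for AI-generated media (appears in XMP as
# .../digitalsourcetype/trainedAlgorithmicMedia).
IPTC_AI_MARKER = b"trainedAlgorithmicMedia"

# C2PA manifests are stored in JUMBF boxes; these strings commonly appear
# in files that carry Content Credentials.
C2PA_MARKERS = (b"c2pa", b"jumbf", b"contentauth")


def has_provenance_metadata(path: str) -> dict:
    """Naive heuristic: report which provenance markers appear in the file."""
    data = Path(path).read_bytes()
    return {
        "iptc_ai_generated": IPTC_AI_MARKER in data,
        "c2pa_manifest": any(marker in data.lower() for marker in C2PA_MARKERS),
    }


if __name__ == "__main__":
    # "example.jpg" is a placeholder path for demonstration purposes.
    print(has_provenance_metadata("example.jpg"))
```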
Addressing challenges and adapting to evolving technologies
Meta cannot yet automatically detect AI-generated content that lacks industry-standard invisible markers, so it is working with partners such as the C2PA on authentication methods. The company is also researching ways to prevent invisible markers from being altered or stripped from generative AI content. Meta's FAIR AI research lab has developed technology that builds a watermarking mechanism directly into the image generation process. As generative AI becomes more widespread, Meta plans to keep collaborating with industry partners and to stay in dialogue with governments and civil society.
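Meta has not published the details of FAIR's approach here. Purely as a generic illustration of how an invisible watermark can be hidden in pixel data, the sketch below embeds a short bit string in the least significant bits of an image array. This is a simple post-hoc technique, not the in-generation watermarking the announcement describes, and unlike robust schemes it does not survive cropping, resizing, or re-encoding.

```python
import numpy as np


def embed_bits(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Hide a bit string in the least significant bits of the pixel values."""
    out = pixels.copy().ravel()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    return out.reshape(pixels.shape)


def extract_bits(pixels: np.ndarray, length: int) -> str:
    """Read back the first `length` hidden bits."""
    flat = pixels.ravel()
    return "".join(str(flat[i] & 1) for i in range(length))


# Example: embed an 8-bit marker in a dummy grayscale image.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_bits(image, "10110001")
assert extract_bits(marked, 8) == "10110001"
```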
Why did Meta make this pledge?
Meta's pledge to expand labeling of AI-generated content came a day after the company's Oversight Board ruled on a video that was misleadingly edited to suggest US President Joe Biden was repeatedly touching his granddaughter's chest. In reality, Biden had placed an "I voted" sticker on her garment after she voted in person. The board found that the video was permitted under Meta's rules on manipulated media but called on the company to update those guidelines.
OpenAI is also watermarking pictures
OpenAI has also begun adding watermarks to images created by its DALL-E 3 model to help users identify AI-generated content. The watermarks comprise an invisible metadata component and a visible CR symbol, which appears in the top-left corner of images generated through the DALL-E 3 API and the ChatGPT website.