OpenAI adds erasable watermarks to DALL-E 3 images: Here's why
OpenAI is adding watermarks to images created by its DALL-E 3 model to help users identify AI-generated content. These watermarks, which combine a visible CR symbol with invisible metadata, will appear on images generated through the DALL-E 3 API and on the ChatGPT website. The move aligns with the Coalition for Content Provenance and Authenticity (C2PA) standard, which is backed by companies like Adobe and Microsoft. Users can check the origin of these images through tools like Content Credentials Verify.
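For readers who want a quick local look at an image, one rough heuristic is to scan the file for the JUMBF container markers that C2PA manifests are packaged in. The sketch below is an illustration under assumptions, not a verification tool: it only checks for the byte signatures ("jumb" box type and "c2pa" manifest-store label) and does not parse or validate the manifest, and the command-line usage is hypothetical.

```python
# Heuristic check for embedded C2PA provenance data. Sketch only: it scans
# raw bytes for the JUMBF box type ("jumb") and the "c2pa" manifest-store
# label, without parsing or validating the actual manifest.
import sys
from pathlib import Path

def looks_like_c2pa(path: str) -> bool:
    """Return True if the file appears to contain a C2PA manifest."""
    data = Path(path).read_bytes()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    for name in sys.argv[1:]:
        status = "provenance markers found" if looks_like_c2pa(name) else "no C2PA markers detected"
        print(f"{name}: {status}")
```

A match only suggests a manifest is present; confirming who issued it still requires a proper verifier such as Content Credentials Verify.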
No effect on the quality of image generation
The visible watermark appears in the top-left corner of each image, and mobile users will see it by February 12. According to OpenAI, adding watermark metadata has a "negligible effect on latency and will not affect the quality of the image generation." However, it will slightly increase image file sizes for certain tasks.
Verifying provenance of AI-generated content
Although watermarking is a step toward establishing content provenance, OpenAI admits that C2PA metadata can be easily removed, whether accidentally or intentionally. Social media platforms often strip metadata from uploaded content, and taking a screenshot omits it as well. Even so, OpenAI believes that adopting these methods, and encouraging users to look for such marks, can increase the trustworthiness of digital information.
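To see how fragile this metadata is, the hedged sketch below mimics what a screenshot or a platform re-encode effectively does: it rebuilds the file from pixel data alone, which discards the embedded manifest. It assumes the Pillow library is installed, and the file names are hypothetical.

```python
# Illustration of accidental metadata loss: re-encoding an image from its
# pixels alone (roughly what a screenshot or platform re-compression does)
# produces a file without the embedded C2PA manifest.
# Assumes Pillow is installed; file names are hypothetical.
from PIL import Image

original = "dalle3_output.png"   # hypothetical image carrying Content Credentials
reencoded = "stripped_copy.png"

with Image.open(original) as img:
    img.save(reencoded)          # only pixel data is written; the manifest is gone
```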
Challenges in ensuring the trustworthiness of digital information
It's important to note that watermarking alone may not be enough to combat misinformation. The US government's executive order on AI emphasizes the need to identify AI-generated content, but more effort is required to ensure the reliability of digital information. By incorporating watermarks, OpenAI is taking a step in the right direction, though there's still work to be done in the fight against misinformation.