OpenAI's new tools use AI to detect AI-generated images and audio
OpenAI, one of the world's leading AI firms, has announced the development of tools designed to verify the authenticity of images produced by its DALL-E AI image generator. The company is also working on improving watermarking techniques to more effectively identify content created by its services. OpenAI disclosed these advancements in a recent blog post.
OpenAI's image verification system: A closer look
The new tools introduced by OpenAI include an image verification system that uses AI to determine whether a photo was created by an AI. According to the company, the system correctly identifies images generated by DALL-E 3 about 98% of the time, even if an image has been cropped, compressed, or had its saturation altered.
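The robustness claim above is about a classifier giving the same verdict before and after common image edits. As a rough illustration of how such robustness could be exercised, here is a sketch that applies the three edits named in the article (cropping, compression, saturation change) and checks that a classifier's verdict is unchanged. The `classify` callable is a hypothetical stand-in, not OpenAI's actual model or API.

```python
from io import BytesIO
from PIL import Image, ImageEnhance


def transformed_variants(image: Image.Image) -> dict:
    """Apply the kinds of edits the classifier is claimed to tolerate:
    cropping, heavy JPEG compression, and reduced saturation."""
    w, h = image.size
    # Crop out the central quarter of the image.
    cropped = image.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))

    # Round-trip through a low-quality JPEG to simulate compression.
    buf = BytesIO()
    image.save(buf, format="JPEG", quality=30)
    buf.seek(0)
    compressed = Image.open(buf)

    # Reduce saturation to 20% of the original.
    desaturated = ImageEnhance.Color(image).enhance(0.2)

    return {"cropped": cropped, "compressed": compressed, "desaturated": desaturated}


def classifier_is_robust(classify, image: Image.Image) -> bool:
    """`classify` is any callable returning a verdict for an image.
    Robustness here means the verdict survives all three edits."""
    original = classify(image)
    return all(classify(v) == original for v in transformed_variants(image).values())
```

A trivial usage example: `classifier_is_robust(lambda im: "ai-generated", some_image)` returns `True`, since a constant classifier agrees with itself on every variant.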
OpenAI's watermarking techniques and affiliation with C2PA
In addition to the image verification tool, OpenAI is developing a robust watermark that can embed invisible signals into content such as audio. This technique is currently being applied to clips from Voice Engine, OpenAI's proprietary text-to-speech platform. Previously, through its membership in the Coalition for Content Provenance and Authenticity (C2PA), OpenAI had integrated content credentials into image metadata; these function as watermarks, providing details about an image's origin and creation process.
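Content credentials of the kind C2PA defines are, at their core, signed manifests: provenance claims are bundled with a hash of the content and cryptographically signed, so any edit to the image or the claims invalidates the credential. The sketch below shows that signed-manifest idea in a deliberately simplified form using only the standard library; the real C2PA format uses JUMBF containers and X.509 certificate chains, not the shared HMAC key assumed here.

```python
import hashlib
import hmac
import json

# Stand-in secret for the demo; real C2PA signing uses X.509 certificates.
SIGNING_KEY = b"demo-signing-key"


def make_credential(image_bytes: bytes, claims: dict) -> dict:
    """Bundle provenance claims (e.g. generating tool, timestamp) with a
    hash of the image, then sign the bundle."""
    manifest = {
        "claims": claims,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """A credential is valid only if the signature checks out AND the
    image bytes still match the hash recorded at creation time."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest.get("signature", ""), expected)
        and unsigned["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    )
```

Because the content hash is inside the signed payload, altering either the image or the claims breaks verification, which is what makes metadata-based credentials useful for provenance (though, unlike an invisible watermark, they are lost if the metadata is stripped).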
OpenAI's ongoing refinements and the importance of user feedback
Both the image verification system and the audio watermarking technique are still being refined by OpenAI. The company has emphasized that user feedback is vital for testing their effectiveness. To aid this process, researchers and nonprofit journalism organizations have been invited to evaluate the image verification system on OpenAI's research access platform. Despite past challenges in identifying AI-generated content, OpenAI remains committed to leading in this field.