
How OpenAI plans to tackle ChatGPT misuse with image watermarking
What's the story
OpenAI is testing a new watermark feature for the image generation model integrated into ChatGPT.
As AI-generated images proliferate, especially with increased free-tier access, the company is seeking ways to mark its content to maintain accountability.
This trial comes at a critical juncture when the misuse of AI for fake documents, ID forgeries, and what some call "AI slop" is raising alarms among policymakers, digital watchdogs, and users.
Discovery
AI researcher discovers watermark testing
AI researcher Tibor Blaho recently found that OpenAI is testing watermarks for images generated using free accounts.
A watermark would make it easier to identify AI-generated images, particularly now that the tool is free for all users and its output is going viral.
OpenAI has not officially confirmed the feature, and its plans could still change.
Purpose
Watermarked images could help curtail AI slop
The rise of low-effort, mass-generated content—dubbed "AI slop"—has led to a flood of poorly regulated images online.
ChatGPT's ability to generate visuals quickly has been both a creative boon and a potential source of clutter.
A watermark could serve as a subtle flag, distinguishing AI-generated material and holding creators accountable for the quality and context in which these images are shared or used.
Fraud prevention
OpenAI's new feature will also tackle fake IDs and bills
One of the more troubling uses of ChatGPT's image tool has been the creation of fake government IDs, such as Aadhaar and PAN cards.
These fabricated visuals can be used for identity fraud or even financial scams.
By embedding a watermark into images generated through ChatGPT, OpenAI aims to provide an authentication marker that reveals the synthetic origin of such images.
The watermark will also help address the misuse of the tool in generating fake bills, invoices, and official letters.
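OpenAI has not disclosed how its watermark works, but the general idea of embedding a provenance marker in an image can be sketched with a classic invisible-watermarking technique: hiding a short tag in the least significant bits of pixel data. The code below is a simplified illustration only, not OpenAI's method; the function names and the flat grayscale pixel buffer are assumptions for the example.

```python
# Illustrative sketch only, NOT OpenAI's (unconfirmed) implementation:
# embed a short provenance tag into the least significant bits (LSBs)
# of a flat grayscale pixel buffer, then recover it.

def embed_tag(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` into the LSB of consecutive pixel bytes."""
    if len(tag) * 8 > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = bytearray(pixels)
    for i, byte in enumerate(tag):
        for bit in range(8):
            idx = i * 8 + bit
            # Clear the pixel's lowest bit, then set it to the tag bit.
            out[idx] = (out[idx] & 0xFE) | ((byte >> (7 - bit)) & 1)
    return out

def extract_tag(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes of tag from the pixel LSBs."""
    tag = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit] & 1)
        tag.append(byte)
    return bytes(tag)

# Example: tag a hypothetical 16x16 grayscale image (flat pixel buffer).
image = bytearray(range(256))
marked = embed_tag(image, b"AI")
assert extract_tag(marked, 2) == b"AI"
```

Because only the lowest bit of each affected pixel changes, the marked image is visually indistinguishable from the original, yet the tag survives and can be checked later. Real provenance schemes (such as C2PA metadata, which OpenAI already attaches to some generated images) are far more robust than this toy example.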