OpenAI's tool can detect ChatGPT-generated content, but it's not public
OpenAI, a leading artificial intelligence research lab, has developed a watermarking system for its chatbot, ChatGPT, but the company is hesitant to release it due to various concerns. According to The Wall Street Journal, the system has been ready for approximately a year and works by subtly adjusting how the AI model selects the words and phrases it predicts next, leaving a statistically detectable pattern in the output text.
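OpenAI has not published the details of its scheme, but the general idea of biasing next-word selection toward a secret, reproducible subset of the vocabulary is well known from the research literature (often called "green-list" watermarking). The toy sketch below is purely illustrative and not OpenAI's actual method: the vocabulary, the 50% green-list split, and the helper names are all assumptions made for the example.

```python
import hashlib
import random

# Tiny stand-in vocabulary for illustration only.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slow"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed an RNG with a hash of the previous token, so anyone who knows
    # the scheme can reproduce the same "green" subset of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, candidates: list, rng: random.Random) -> str:
    # Generator side: prefer candidates from the green list when available.
    greens = [t for t in candidates if t in green_list(prev_token)]
    return rng.choice(greens or candidates)

def green_fraction(tokens: list) -> float:
    # Detector side: measure how often each token falls in its
    # predecessor's green list. Watermarked text scores near 1.0;
    # ordinary text scores near the green-list fraction (here 0.5).
    hits = sum(1 for a, b in zip(tokens, tokens[1:]) if b in green_list(a))
    return hits / max(1, len(tokens) - 1)
```

The detector needs no access to the model itself, only to the hashing rule, which is why paraphrasing or rewording with another model (discussed below) can wash the pattern out.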
Watermarking system could help educators detect AI-generated content
The watermarking system could help educators catch students who submit AI-generated writing assignments. OpenAI's internal testing found that watermarking did not degrade the quality of the chatbot's text output, and a survey commissioned by the Sam Altman-led startup found that people worldwide supported the idea of an AI detection tool by a margin of four to one.
OpenAI's watermarking method is 99.9% effective
In a blog post updated on Sunday, OpenAI confirmed its work on text watermarking, stating that its method is 99.9% effective and resistant to localized tampering such as paraphrasing. The company also highlighted weaknesses: more global transformations, such as rewording the text with another model, could let bad actors strip the watermark easily. OpenAI additionally worried that watermarking could stigmatize the use of AI as a writing aid for non-native English speakers, who rely on such tools heavily.
User apprehension and alternative solutions
OpenAI also expressed apprehension that watermarking could deter people from using ChatGPT: nearly 30% of surveyed users said they would use the software less if watermarking were implemented. Although some employees reportedly still believe watermarking is effective, the company is weighing alternative approaches in light of that user sentiment.
OpenAI explores metadata embedding as potential alternative
OpenAI is currently exploring embedding metadata into text as a potentially less controversial alternative. The approach is unproven, but because the metadata would carry a cryptographic signature, it would produce no false positives. It is still too early to judge its effectiveness or user acceptance, but the exploration reflects OpenAI's effort to balance user trust with the ethical use of its AI technologies.
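OpenAI has not described how such metadata would be structured, but the "no false positives" property follows from standard signed-metadata designs: a verifier either checks a valid signature over the text or reports nothing at all. The sketch below illustrates that idea with an HMAC over a hash of the text; the key name, field names, and token format are all assumptions for the example, not OpenAI's design (a real deployment would likely use public-key signatures so verifiers need no secret).

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider.
SECRET_KEY = b"provider-secret-key"

def sign_text(text: str, model: str) -> str:
    # Bind the metadata to the exact text via its SHA-256 digest,
    # then authenticate the whole payload with an HMAC tag.
    meta = {"model": model, "sha256": hashlib.sha256(text.encode()).hexdigest()}
    payload = json.dumps(meta, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + tag

def verify(text: str, token: str) -> bool:
    # A match proves the text is unmodified and the metadata is genuine;
    # any edit to the text or token makes verification fail, so there
    # are no false positives -- only possible false negatives.
    payload_b64, tag = token.rsplit(".", 1)
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    meta = json.loads(payload)
    return meta["sha256"] == hashlib.sha256(text.encode()).hexdigest()
```

The trade-off relative to watermarking is visible here: the signature is exact, so even a one-character edit breaks verification, whereas a statistical watermark degrades gracefully under light editing.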