OpenAI shutters its AI plagiarism finder: Here's why
Plagiarism has been a constant theme in the debate surrounding artificial intelligence (AI). AI's ability to generate fluent articles and essays has made plagiarism difficult to detect. In January, OpenAI launched a tool called 'AI Classifier' to detect content generated by AI tools. The company has now quietly shut down the tool. Let's see why.
Why does this story matter?
ChatGPT not only revolutionized the tech world but also made educators' jobs harder. AI-generated content has become a significant challenge to academic honesty. With AI evolving at a rapid pace, institutions are struggling to keep up with content produced by advanced AI systems. Now, even OpenAI has given up on detecting content generated by ChatGPT and other AI chatbots.
AI Classifier was meant to differentiate between AI- and human-generated content
OpenAI introduced the AI Classifier to allay the fears of educators and others concerned about rising AI-driven plagiarism. The idea was that the tool could detect whether a piece of text was written by AI or by a human. However, the tool was unreliable from the beginning; OpenAI itself admitted that the "Classifier is not fully reliable."
OpenAI's tool had multiple issues
The AI Classifier correctly identified AI-written text as "likely AI-written" only 26% of the time. At the same time, it mislabeled human-written text as AI-written 9% of the time. Its ability to detect AI-generated text was especially limited on short inputs. Despite these reliability issues, the company released the tool to gauge whether even imperfect tools could be useful in detecting plagiarism.
OpenAI dropped the tool due to poor accuracy
It was this reliability issue that prompted OpenAI to call it quits on the AI Classifier. The company dropped the tool without much fanfare, simply updating the original blog post to announce its withdrawal. OpenAI attributed the decision to the AI Classifier's "low rate of accuracy" and said it is working on better tools.
More robust solutions are required to detect AI-generated content
Multiple tools to detect AI-generated content have come out in the past few months, but they haven't been as successful as one would have hoped. OpenAI's decision to drop the AI Classifier is a testament to how difficult detecting AI-generated text is. AI plagiarism detectors often fail outside their training data, so a more robust solution is needed to address this issue.