OpenAI, the company behind ChatGPT, has released a “classifier” tool that attempts to identify AI-written text, but warns that it should not be relied upon.
ChatGPT’s ability to produce plausible answers to a wide variety of questions has teachers worried that pupils may be passing off AI-generated essays and assignments as their own.
OpenAI’s new classifier could be useful in education by helping to flag AI-generated essays and assignments, offering a possible alternative to restricting the tool outright, as New York City schools have recently done. ChatGPT has also been banned by Australian education authorities.
The classifier might likewise help organisations such as developer Q&A site Stack Overflow, which banned ChatGPT-generated answers after its moderators were inundated with plausible-sounding but only sometimes correct responses.
OpenAI acknowledges that reliably identifying all AI-generated text is difficult and that the classifier has a number of limitations that reduce its usefulness.
The classifier has a true positive rate of just 26%, meaning it correctly labels AI-written text as “likely AI-written” only about a quarter of the time. In 9% of cases, it incorrectly flags human-written text as AI-generated. In other words, it has a fair chance of mislabeling text in either direction, and a good chance of missing AI-generated content submitted by someone who doesn’t disclose its origin. However, OpenAI says the classifier is “much more accurate on text from more contemporary AI systems” than the GPT-2-based detector it previously offered.
OpenAI said in its announcement that it hopes the tool will “start talks on AI literacy” and that it is releasing the classifier publicly to gather feedback on whether imperfect tools like this one are useful.
The classifier is available free online. To use it, simply paste text into the input field and click “submit”. The tool will then rate the text as “extremely improbable”, “unlikely”, “unclear whether it is”, “possible”, or “likely” to have been created by artificial intelligence.
The classifier requires a minimum input of 1,000 characters (roughly 150–250 words). OpenAI cautions that AI-generated text can easily be edited to evade detection. In addition, because the tool was trained mostly on English text written by adults, it is more likely to mislabel writing by children or text in languages other than English.
On the question of text written collaboratively by humans and AI, OpenAI admits it has “not properly examined the performance of the classifier.”
The accompanying guidance warns that the tool “should not be utilised as a major decision-making tool,” but rather as a complement to other methods of determining the origin of a given text.