ChatGPT’s ability to write a paper in seconds has prompted concerns about the future of education: cheating has never been easier. In response, interest in AI-generated text detectors has surged. New research, however, highlights telltale signs that help distinguish ChatGPT-generated material from human-authored content.
In two experiments, the researchers had ChatGPT write restaurant reviews from scratch and reword reviews written by humans. They then trained a machine learning model to search those answers for patterns and characteristics that indicate whether a person created them.
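The article does not describe the model itself, but a human-vs-AI text classifier of this general kind can be sketched with a simple bag-of-words Naive Bayes approach. Everything below is illustrative: the function names, the toy training reviews, and the labels are assumptions, not the study’s actual data or method.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(samples):
    # samples: list of (text, label); returns per-label word counts
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(tokenize(text))
    return counts

def classify(text, counts):
    # Naive Bayes with add-one smoothing over the shared vocabulary
    vocab = set(w for c in counts.values() for w in c)
    best_label, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in tokenize(text)
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data echoing the traits the study reports:
# AI text is polite and generic; human text is personal and specific.
samples = [
    ("the food was absolutely exquisite and the service was courteous", "ai"),
    ("the restaurant offers a truly delightful dining experience", "ai"),
    ("i loved the smoky brisket but our waiter forgot my beer twice", "human"),
    ("honestly the tacos were bland and i won't be back", "human"),
]
counts = train(samples)
print(classify("an absolutely delightful and courteous experience", counts))  # → ai
```

A real system would use far more data and richer features (stylistic markers, first-person usage, rare-word rates), but the pipeline shape — tokenize, count, score — is the same.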
Based on the data collected, ChatGPT turned out to be more focused on recounting events than discussing emotions. It stays away from the first person, sprinkles in a few out-of-the-ordinary terms, and, curiously, never resorts to foul language. It overused phrases like “absolutely exquisite” and “overly courteous,” and reached for words like “inattentive” that are rarely used in everyday writing.
The findings show that ChatGPT’s writing style is notably respectful. Moreover, it cannot deploy analogies, irony, or sarcasm the way a person would. Does this suggest that the main distinction between AI and human intellect lies in how unpleasant we can be to one another?
The study’s authors said, “It is exceedingly courteous, attempting to satisfy many sorts of requests from diverse areas rather effectively copying people, but that still does not have the profoundness of human language (for example, irony, metaphors,…).”
A lack of descriptive detail was another hallmark of ChatGPT’s writing. Instead of including nuances about the restaurant that only a human diner would know, ChatGPT described the experience in broad strokes. According to the data, ChatGPT is also prone to redundancy, with the word “restaurant” appearing many times in the same sentence.
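The redundancy the study describes is easy to quantify. The sketch below counts how often a target word repeats within each sentence of a review; the sample review is invented for illustration, not taken from the study’s data.

```python
import re

def repeats_per_sentence(text, word):
    # Split on sentence-ending punctuation, then count the word per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s.lower().split().count(word.lower()) for s in sentences]

review = ("The restaurant is a lovely restaurant with a restaurant vibe. "
          "The staff were welcoming.")
print(repeats_per_sentence(review, "restaurant"))  # → [3, 0]
```

A high per-sentence repeat count for a topical noun is exactly the kind of simple feature a detector could feed into a classifier.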
Keeping an eye out for these features may be your best bet if you are trying to tell human- and ChatGPT-authored content apart. AI-based text-identification systems do exist, but none of them are yet up to par with what is required.
OpenAI, the research firm behind ChatGPT, published a free text-identification tool on Wednesday, but it is unreliable. OpenAI’s “classifier” assigns a “likely AI-written” label to only 26% of AI-written material, and mislabels human-written text as AI-written 9% of the time.
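Those two figures amount to a recall of 0.26 and a false-positive rate of 0.09, and together they imply how the detector behaves on a mixed corpus. The sketch below works that out for an assumed 50/50 split of AI and human text; the function name and the balanced-mix assumption are mine, not OpenAI’s.

```python
def detector_stats(recall, fpr, ai_share=0.5):
    # On a corpus with the given share of AI text, compute the fraction
    # flagged as AI-written and the precision of those flags.
    tp = recall * ai_share        # AI text correctly flagged
    fp = fpr * (1 - ai_share)     # human text wrongly flagged
    flagged = tp + fp
    precision = tp / flagged if flagged else 0.0
    return flagged, precision

flagged, precision = detector_stats(recall=0.26, fpr=0.09)
print(f"flagged: {flagged:.3f}, precision: {precision:.3f}")
```

Under these assumptions, roughly one flag in four is a false alarm, and nearly three-quarters of AI text slips through unflagged — which is why the article calls the tool unreliable.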
ZDNET also tried several other AI text-detection tools, including GPT-2 Output Detector, Writer AI Content Detector, and Content at Scale AI Content Detection. The findings demonstrated the same lack of reliability across these methods.