OpenAI Is Reportedly Reluctant to Release a ChatGPT Detection Tool That Could Anger Cheaters

The company is reportedly "weighing the risks" of making the tool public, even though it can "reliably" identify AI-generated text that carries its watermark.

OpenAI, the creator of ChatGPT, is busy building new search and voice features, but it also sits on a tool that is reportedly quite effective at identifying the AI-generated text now flooding the internet.

The tool has been in the company's possession for nearly two years, and switching it on would be simple. However, the company, led by Sam Altman, is still weighing whether to release it, since doing so could upset some of its most dedicated users.

This is not the underwhelming AI detection classifier OpenAI released in 2023, but a far more accurate system. According to a Wall Street Journal report published Sunday, citing anonymous sources inside the company, OpenAI is hesitant to launch the tool. It works as an AI watermarking system: ChatGPT embeds subtle patterns into the text it generates, and a companion detector looks for those patterns. Like other AI detectors, OpenAI's tool would scan a document and report a percentage likelihood that it was produced by ChatGPT.
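OpenAI has not disclosed how its watermark actually works, but published research offers a plausible picture. In the well-known "green list" scheme from academic work on LLM watermarking, the generator pseudorandomly favors a subset of the vocabulary at each step, and the detector simply counts how often that subset shows up. The sketch below is a toy version of that public technique, not OpenAI's method; the function names, the hash-based green list, and the fixed 50% green fraction are all illustrative assumptions.

```python
import hashlib
import math

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly pick a 'green' subset of the vocabulary, seeded by the
    previous token. A watermarking generator steers sampling toward green
    tokens; ordinary human text has no such preference.
    (Illustrative scheme, not OpenAI's actual watermark.)"""
    ranked = sorted(
        vocab,
        key=lambda w: hashlib.sha256((prev_token + "|" + w).encode()).hexdigest(),
    )
    return set(ranked[: int(len(ranked) * fraction)])

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Return a z-score: how far the observed share of green tokens deviates
    from what unwatermarked text would produce by chance."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std  # large positive z => likely watermarked
```

Ordinary text lands near a z-score of 0, while watermarked text drifts far above it; over a few hundred tokens, the odds of unwatermarked text hitting that many green tokens by chance become vanishingly small, which is how a statistical scheme like this can plausibly reach the kind of 99.9% figure the WSJ describes.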

OpenAI acknowledged the tool's existence in a Sunday update to a blog post from May. According to internal documents cited by the WSJ, the watermark is caught 99.9% of the time, well ahead of other AI detection software developed in recent years. The company noted that while the watermark holds up against localized tampering, it can be scrubbed out by translating the text back and forth with Google Translate or rephrasing it with another AI generator. OpenAI also said that anyone looking to evade the tool could "insert a special character between each word and then remove that character."
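That last trick is worth unpacking. If a statistical watermark lives in the exact token sequence the model emitted, then prompting the model to put a throwaway character between every word and deleting it afterward leaves fluent text whose token stream the detector never saw. Here is a minimal sketch of the stripping step, with "§" as an arbitrary stand-in for the special character:

```python
import re

def launder(text: str, sep: str = "§") -> str:
    """Remove the separator the model was prompted to insert between words.
    The watermark pattern was embedded over the *separated* token sequence,
    so the cleaned text no longer lines up with it and detection fails.
    ('§' is an arbitrary choice for illustration.)"""
    return re.sub(rf"\s*{re.escape(sep)}\s*", " ", text).strip()

print(launder("The § quick § brown § fox"))  # -> "The quick brown fox"
```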

Advocates inside the company argue the tool would be a major help to teachers trying to spot homework written by AI. But the launch has reportedly been on hold for years over worries that nearly a third of ChatGPT's loyal users would be turned off by it.

OpenAI also faces the risk that releasing the tool widely could let determined users reverse-engineer and defeat the watermarking method. And there are concerns the tool could be biased against non-native English speakers, a problem documented in other AI detection systems.

Google, for its part, has built watermarking for AI-generated images and text under the name SynthID. That system is not available to most consumers either, but the company has at least been open about its existence.

While big tech companies race ahead on generating AI content, the tools for detecting it remain far less capable. Educators in particular struggle to tell whether students are turning in assignments written by ChatGPT. Current AI detection products from companies like Turnitin miss as much as 15% of AI-written text, a rate the firm attributes to tuning its system to avoid false positives.

The stakes extend beyond the classroom. Gizmodo has previously covered cases in which professional writers were falsely accused of using AI to do their jobs and were fired as a result. Researchers have found that the third-party AI detectors used in those cases are often far less reliable than advertised.
