Images generated by artificial intelligence (AI) have been causing a lot of consternation in recent months, with people naturally worried that they could be used to spread misinformation and deceive the public. Now, ChatGPT creator OpenAI is apparently working on a tool that can detect AI-generated images with 99% accuracy.
According to Bloomberg, OpenAI’s tool is designed to detect images created by its own Dall-E 3 image generator. Speaking at the Wall Street Journal’s Tech Live event, OpenAI’s chief technology officer Mira Murati claimed the tool is “99% reliable.” While the technology is being tested internally, there is no release date yet.
If it is as accurate as OpenAI claims, it could help the public determine whether the images they are seeing are real or AI-generated. Still, OpenAI didn’t explain how the tool would actually alert people to AI images, whether through watermarks, text warnings, or something else.
It’s worth noting that the tool is only designed to detect Dall-E images, and it may not be able to recognize fakes generated by rival services like Midjourney, Stable Diffusion, and Adobe Firefly. This may limit its usefulness in the grand scheme of things, but anything it can do to uncover fake images could have a positive impact.
OpenAI has launched tools in the past that were designed to recognize content put together by its own chatbots and generators. Earlier in 2023, the company released a tool that it claimed could detect text generated by ChatGPT, but it was removed a few months later after OpenAI admitted it was highly inaccurate.
Along with the new image-detection tool, Murati also discussed the company’s efforts to reduce ChatGPT’s “hallucinations,” or the tendency to present nonsense and fabricated information as fact. “We have made a lot of progress on the issue of hallucinations with GPT-4, but we are not where we need to be,” Murati said, suggesting that work on GPT-5 – a follow-up to GPT-4, the model that underpins ChatGPT – is well underway.
In March 2023, a group of technology leaders signed an open letter calling on OpenAI to pause work on anything more powerful than GPT-4, warning of “serious risks to society and humanity.” It seems the request has gone unheeded.
Whether OpenAI’s new tool will be more effective than its previous effort, which was canceled due to its unreliability, remains to be seen. What is certain is that despite the obvious risks, development work is continuing at a fast pace.