OpenAI introduced the AI classifier tool with the claim that it was much better than previous versions at picking out AI-authored text. To its credit, the startup also made clear that better didn’t mean good: the tool correctly identified only 26% of AI-written text, while mislabeling 9% of human-written text as AI-generated. Longer inputs, more than 1,000 characters, improved accuracy, though accuracy didn’t rise in any simple proportion to length. Six months later, OpenAI seems to have decided its approach isn’t working well enough, or at least not improving fast enough, for the company to keep supporting it as a publicly available tool.