Anthropic Introduces Choice: Protect Your Chats or Train the AI

Anthropic is introducing significant updates to how it manages user data, requiring all consumer Claude users to opt in to or out of AI training by September 28. Previously, conversations were automatically deleted within 30 days unless they were flagged, in which case they could be retained for up to two years. Under the new policy, data from users who do not opt out may be stored for five years. Business clients, including Claude for Work and Claude Gov, are exempt from these changes.

The company states that training on user data improves Claude's ability to detect harmful content and enhances its performance in coding, reasoning, and analysis. Behind this, however, lies a clear commercial motive: gathering high-quality, real-world data to compete with rivals such as OpenAI and Google and keep Claude models cutting-edge.

Industry trends are pushing companies to revise their data-retention practices, as seen in OpenAI's ongoing court battle over the indefinite storage of ChatGPT conversations. Many users remain unaware of such changes, clicking consent buttons without grasping the broader implications, which highlights how difficult it is to secure meaningful user consent for AI data use.

Anthropic has designed its prompts so that new users choose a preference at signup, while existing users see a pop-up with a prominent "Accept" button and a smaller toggle for the data-training permission, preset to "On." Privacy experts warn that this design may lead users to agree without understanding the consequences. The update reflects the ongoing tension between AI progress, ethical responsibility, and user privacy.