"By default, Anthropic doesn’t train our generative models on your content. This commitment to privacy is core to how we build our products and services."
IIUC, this is actually different from Gemini and ChatGPT, which sets Claude apart.
THEN they act like user data isn't used for training by default. But if you look at the training section of their terms, it says they do train on "Feedback," and the decompiled version of Claude Code reads:

// By using Claude Code, you agree that all code acceptance or rejection decisions you make,
// and the associated conversations in context, constitute Feedback under Anthropic's Commercial Terms,
// and may be used to improve Anthropic's products, including training models.
// - You are responsible for reviewing any code suggestions before use.
This just seems like cover-your-ass bullshit if every single use of Claude Code counts as Feedback they can train on.
ctoth•9h ago
"Pay to train your replacement! We'll give you a discount!"
After all, why do they need the tokens where I tell the AI what to do, if not to teach the AI to tell itself what to do? They're collecting our developer workflows to build the next level of automation. And we're still paying for it!