Anthropic’s recent postmortem described several Claude Code regressions around default reasoning effort, context/thinking retention, and a system prompt change to reduce verbosity. Those seem more likely to explain the “less careful / forgetful / worse follow-through” behavior than the context window alone.
I would compare the same task in a fresh session, with the effort setting fixed, and ideally against a few repeatable tasks from your own codebase. Otherwise it is very hard to tell whether the regression is the model, Claude Code’s harness, context management, or just a stale session.
But even Codex has these super weird time limits. It's really starting to show that these companies must have been losing a ton of money, given all the recent limits and degradation.
I'm still in the camp that most of these unicorns will be F'ed by open and local models in the next few years, at least in these coding/chatbot niches, and then they'll just be perpetually (re)searching for AGI :shrug: