Is the AI itself degrading? Or is it because of product-policy changes, such as system prompt modifications and usage limits? Or is it both?
I sometimes wonder whether degradation is simply an inherent property of LLMs themselves.
Not a criticism of this project (it's a good idea); it just highlights the central question of "how well is this model working?", and I'm not sure answering that is so straightforward.
wonderwhyer•1h ago
Part of it tracks how many tokens you actually get from various subscriptions over time.
Over the past week, multiple people have asked me about it: they'd been hitting their Claude and Codex limits faster than expected.
Ran the tests yesterday. Reran today. Here's what came back:
▸ ChatGPT Plus / GPT-5.5: 95M → 37M tokens/week (−61%)
▸ Claude Max 20× / Sonnet 4.6: 388M → 214M (−45%)
▸ Claude Max 20× / Opus 4.7: 248M → 162M (−35%)
▸ Claude Pro / Sonnet 4.6: 19.6M → 11.4M (−42%)
▸ Claude Pro / Opus 4.7: 15.6M → 10.2M (−35%)
5 of 5 retested plans dropped 35-61% in five days. None went up.
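For anyone who wants to sanity-check the percentages, here's a minimal sketch that recomputes the drops from the before/after weekly token counts listed above (values in millions; the variable and function names are my own, not from the project):

```python
# Before/after weekly token counts, in millions, as quoted above.
measurements = {
    "ChatGPT Plus / GPT-5.5": (95, 37),
    "Claude Max 20x / Sonnet 4.6": (388, 214),
    "Claude Max 20x / Opus 4.7": (248, 162),
    "Claude Pro / Sonnet 4.6": (19.6, 11.4),
    "Claude Pro / Opus 4.7": (15.6, 10.2),
}

def pct_drop(before: float, after: float) -> int:
    """Percentage decrease, rounded to the nearest whole percent."""
    return round((before - after) / before * 100)

drops = {plan: pct_drop(b, a) for plan, (b, a) in measurements.items()}
for plan, d in drops.items():
    print(f"{plan}: -{d}%")
# All five come out between -35% and -61%, matching the figures above.
```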
Anyone else seeing similar in their own usage?
panny•1h ago
>Value over time by provider
It would help if that section were linkable directly via a #fragment at the end of your URL. As it stands, it looks like you're selling me something at the top of the page, so I can see why you were flagged dead.