The larger monthly scale should be the default, or you should get more samples.
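A minimal sketch of why more samples matter, assuming a hypothetical 70% pass rate and a normal-approximation confidence interval (these numbers are illustrative, not from any actual benchmark):

```python
import math

def pass_rate_ci(successes, n, z=1.96):
    """Approximate 95% normal CI for a benchmark pass rate."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error
    return p - z * se, p + z * se

# Hypothetical: the same 70% pass rate observed at two sample sizes.
lo_small, hi_small = pass_rate_ci(7, 10)     # 10 runs
lo_large, hi_large = pass_rate_ci(210, 300)  # 300 runs
print(f"n=10:  [{lo_small:.2f}, {hi_small:.2f}]")
print(f"n=300: [{lo_large:.2f}, {hi_large:.2f}]")
```

With only 10 runs the interval spans tens of percentage points, so day-to-day swings of a few points are indistinguishable from noise; averaging over a month (or many more runs) shrinks it considerably.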
I've been using CC more or less 8 hrs/day for the past 2 weeks, and if anything it feels like CC is getting better and better at actual tasks.
Is CC getting better, or are you getting better at using it? And how do you know the difference?
I'm an occasional user, and I can definitely see improvements in my prompts over the past couple of months.
Thumbs up or down? (could be useful for trends)
Usage growth from the same user over time? (as an approximation)
Tone of user responses? ("Don't do this... this is the wrong path...", etc.)
It's not my fault, they set high standards!
It was probably 3x faster than usual. I got more done in the next hour with it than I do in half a day usually. It was definitely a bit of a glimpse into a potential future of “what if these things weren’t resource constrained and could just fly”.
How do you pay for those SWE-bench runs?
I am trying to run a benchmark, but it is too expensive to do enough runs for a fair comparison.
Thanks!
I would imagine a sort of hybrid of the qualities of volunteer efforts like Wikipedia, new problems like Advent of Code, and benchmarks like this one. The goal? To study, as a collective effort, the effects of usage across the many areas where AI is used.
[MedWatch](https://www.fda.gov/safety/medwatch-fda-safety-information-a...)
[VAERS](https://www.cdc.gov/vaccine-safety-systems/vaers/index.html)
[EudraVigilance](https://www.ema.europa.eu/en/human-regulatory-overview/resea...)
1. The percentage drop is too small, and it oscillates up and down.
2. A baseline for Sonnet 4.5 (the obvious fallback when the GPUs are busy with the next training run) should be established, to see whether Opus at some point drops to Sonnet level. This was not done, but we would likely see a much sharper decline on certain days / periods, and the graph would be dominated by a "square wave" shape.
3. There are much better explanations for this oscillation: A) They have multiple checkpoints and are A/B testing them; Claude Code asks you for feedback about the session. B) Claude Code itself gets updated, so the exact tool versions the agent can use change. C) There is natural variability due to token sampling, which not only makes runs non-deterministic but also means the model sometimes makes suboptimal decisions compared to T=0; this is the price to pay for some variability.
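The token-sampling point can be illustrated with a toy softmax-with-temperature sketch (assumed logits, not any real model's implementation): at T=0 the choice is a deterministic argmax, while at T>0 repeated runs can diverge.

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from raw logits at the given temperature."""
    if temperature == 0:  # greedy decoding: deterministic argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.8, 0.5]  # toy next-token scores
rng = random.Random(0)
greedy = {sample(logits, 0, rng) for _ in range(20)}
sampled = {sample(logits, 1.0, rng) for _ in range(20)}
print(greedy)   # always the same single index
print(sampled)  # typically several distinct indices
```

Over a multi-step agent session these per-token divergences compound, so two runs on the identical task can take quite different paths even with no model change at all.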
Doesn't really work like that. I'd remove the "statistically significant" labelling because it's misleading.
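As a sketch of why the label is questionable at small sample sizes, a two-proportion z-test on hypothetical pass counts: the same 10-point drop is nowhere near significant with 10 runs per side, but clears the 1.96 threshold with 300.

```python
import math

def two_proportion_z(s1, n1, s2, n2):
    """z statistic for the difference between two observed pass rates."""
    p1, p2 = s1 / n1, s2 / n2
    p = (s1 + s2) / (n1 + n2)  # pooled pass rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 70% vs 60% pass rate, small and large n.
print(two_proportion_z(7, 10, 6, 10))        # small n: far below 1.96
print(two_proportion_z(210, 300, 180, 300))  # same rates: above 1.96
```

So unless the run counts behind each data point are reported, "statistically significant" on a plot like this is hard to take at face value.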