I've noticed recently that when I am using Opus at night (Eastern US), it goes down extreme rabbit holes on the same types of requests I put through on a regular basis. It is more likely to undertake refactors that break the code, then iterate on those errors in a sort of spiral. A request that would normally take 3-4 minutes turns into a 10-minute adventure before I revert the changes, call out the mistake, and try again. It will happily admit the mistake, but the pattern seems to be consistent.
I haven't performed a like-for-like test, which would be interesting, but has anyone else noticed the same?
bayarearefugee•1h ago
The most reliable time to see it fall apart is when Google makes a public announcement that is likely to cause a sudden influx of people using it.
And there are multiple levels of failure: first you start seeing iffy responses of obviously lower quality than usual, and then, if things get really bad, you start seeing random errors where Gemini suddenly loses all of its context (even on a new chat) or just fails at the UI level by not bothering to finish answers, etc.
The obvious likely reason for this is that when the models are under high load, they probably engage in a type of dynamic load balancing where they fall back to lighter models or limit the amount of time/resources allowed for any particular prompt.
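To make the speculation concrete, the kind of load shedding described above might look something like the sketch below. Everything here is hypothetical: the model names, thresholds, and the idea that the provider exposes a single load number are all made up for illustration, not drawn from any actual API.

```python
# Hypothetical sketch of load-based model fallback. All names and
# thresholds are invented for illustration; no real provider API is used.

def pick_model(load: float) -> dict:
    """Route a request to a model tier based on current fleet load (0.0-1.0)."""
    if load < 0.7:
        # Normal operation: full-size model with a generous compute budget.
        return {"model": "full", "max_thinking_seconds": 120}
    elif load < 0.9:
        # Elevated load: same model, but cap per-request resources,
        # which could explain "iffy responses of lower quality".
        return {"model": "full", "max_thinking_seconds": 30}
    else:
        # Overloaded: silently fall back to a lighter, cheaper model,
        # which could explain the more dramatic failures.
        return {"model": "lite", "max_thinking_seconds": 15}
```

Under this (unconfirmed) model, users would never see an error at the routing step itself, only the downstream symptoms: shallower answers at moderate load and noticeably different behavior at peak load.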
kevinsync•1h ago
I just assume it went to the bar, got wasted, and needed time to sober up!
scaredreally•1h ago
I jokingly (and not so jokingly) thought that it was trained on data that made it think it should be tired at the end of the day.
But it is happening daily and at night.
stavros•39m ago
Now I don't know what to think.