What felt like cutting-edge "intelligence" one or two months ago now frequently delivers outputs that wouldn't have impressed you in late 2023: vague, hallucinated, overly cautious, or just outright lazy.
Rationally speaking, the opportunity cost of wasting premium FLOPs on serving millions of casual chat users and vibe-coders and slop-makers is enormous.
The result is a phenomenon many users have encountered repeatedly across providers (Gemini 2.5/3 Pro, Claude Sonnet/Opus variants, GPT-4o/5 series, and third-party interfaces like Antigravity or other coding frontends):
You prompt for something non-trivial (e.g. code, analysis, creative work, research, whatever) and you get back the most sophisticatedly parroted 2023-tier mega slop, and you're lucky if it didn't shit all over your code.
When asked about the exact nomenclature of the model doing the edits, it initially says it's a "large language model" configured by Google, Anthropic, or OpenAI... but if you insist, it will reveal the whole thing, and voilà: it turns out you're using the oldest models available.
When you casually ask the model to identify itself, it defaults to the scripted party line: "I'm an LLM configured for <insert tool here> by <insert provider here>."
Press harder, and it will reveal the actual nomenclature of the model: you might actually be told you're talking to GPT-2 (lmao).
I'm collecting them like Pokémon: so far I've encountered Gemini 1.5 Pro, Gemini 2.0 Flash, and Claude Haiku.
Try insisting, or do some clever prompting to extract the model name, and see what you find. Pro tip: ask in whatever interface you use exactly when it tells you that usage is "unusually high".
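If you can hit the API directly rather than a chat UI, there's a check that doesn't rely on the model's own (unreliable) self-report: OpenAI-compatible chat completion responses include a `model` field naming what was actually served. A minimal sketch, with the response dict mocked rather than fetched (field names follow the OpenAI chat completions format; whether your particular frontend exposes this metadata is an open question):

```python
def served_model(response: dict) -> str:
    """Read the model name the API says it actually served."""
    return response.get("model", "<missing>")

def is_downgraded(requested: str, response: dict) -> bool:
    """True if the served model name differs from what was requested."""
    return served_model(response) != requested

# Mocked response, as if a gateway silently substituted a cheaper model:
mock_response = {
    "id": "chatcmpl-123",
    "model": "gpt-4o-mini-2024-07-18",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
}

print(served_model(mock_response))                 # gpt-4o-mini-2024-07-18
print(is_downgraded("gpt-4o", mock_response))      # True
```

Note that providers often return dated snapshot names (e.g. `-2024-07-18` suffixes), so an exact-string comparison can flag false mismatches; treat this as a starting point, not proof.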
PS: I'm a pro subscriber on all of them.