LOL
I don’t feel like I got anything new, I feel like something got taken away.
Upon some digging, it seems that part of the slowdown is due to the gpt-5 models doing some reasoning by default (reasoning effort "medium"), even the nano and mini models. Setting the reasoning effort to "minimal" improves the speed a lot.
However, to be able to set the reasoning effort you have to switch to the new Response API, which wasn't a lot of work, but more than just changing a URL.
That's not true - you can set the reasoning effort in the Chat Completions API as well: https://platform.openai.com/docs/api-reference/chat/create . It's just that in the Chat Completions API it's a flat parameter called "reasoning_effort", while in the Responses API it's a "reasoning" object with an "effort" field inside.
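For illustration, here's a minimal sketch of both forms using the official openai Python SDK (the model name, the prompt, and OPENAI_API_KEY being set are just assumptions for the example):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Chat Completions API: effort is a flat "reasoning_effort" parameter
    chat = client.chat.completions.create(
        model="gpt-5-mini",
        reasoning_effort="minimal",
        messages=[{"role": "user", "content": "One-line summary of HTTP/3, please."}],
    )
    print(chat.choices[0].message.content)

    # Responses API: effort is nested inside a "reasoning" object
    resp = client.responses.create(
        model="gpt-5-mini",
        reasoning={"effort": "minimal"},
        input="One-line summary of HTTP/3, please.",
    )
    print(resp.output_text)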
Code seems to work on the first try more often for me too.
Perhaps my favorite change so far is the difference in verbosity. Some of the responses I get when asking trivial questions are now merely a handful of sentences instead of a dissertation. However, dissertation mode comes back when appropriate, which is also nice.
Edit: Slightly tangential, but I forgot to ask, do any of you all have access to the $200/month plan? If so, how does that model compare to GPT-5?
I’m reminded of that Louis CK joke about people being upset about the WiFi not working on their airplane.
ChatGPT 5 in the web app is a router to lots of different models [1], and the non-reasoning GPT-5 chat model ("gpt-5-chat-latest" in the API) is quite dumb - no significant difference from 4o/4.1. Even if you choose GPT-5 Thinking, there's a chance that your request will be routed to GPT-5 Mini, not to the full GPT-5. The only real way to fix that in ChatGPT is to subscribe to Pro and use GPT-5 Pro, but of course that's very expensive. Otherwise people suggest saying "think hard" in the prompt, which might make the router choose the better model. Even worse, Sam Altman publicly said that on the first day of the GPT-5 release their router didn't work properly. [2]
I'd suggest trying GPT-5 in API or in apps like Cursor/Windsurf if you want to truly test it.
Oh, and from the GPT-5 guide [3] apparently OpenAI considers GPT-5 to be good even at "minimal" reasoning (it's still the full thinking model then, not the chat variant, and will respond much faster):
> The minimal setting performs especially well in coding and instruction following scenarios, adhering closely to given directions
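If you want to see the latency difference for yourself, here's a rough sketch (assuming the Responses API via the Python SDK; the prompt is arbitrary and timings will vary):

    import time
    from openai import OpenAI

    client = OpenAI()

    def timed(effort: str) -> float:
        # Wall-clock seconds for one request at the given reasoning effort
        start = time.perf_counter()
        client.responses.create(
            model="gpt-5",
            reasoning={"effort": effort},
            input="List three prime numbers greater than 100.",
        )
        return time.perf_counter() - start

    for effort in ("minimal", "medium"):
        print(f"{effort}: {timed(effort):.1f}s")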
[1] https://cdn.openai.com/pdf/8124a3ce-ab78-4f06-96eb-49ea29ffb... "Table 1: Model progressions"
[2] https://x.com/sama/status/1953893841381273969 "Yesterday, the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber"
I think the "make everything bigger" approach is plateauing and not yielding the infinite returns that were first suggested and demonstrated from GPT-2 through GPT-4.
I think now it’s harder because they’ve got to focus on value per watt. Smaller good models mean less energy, less complexity to go wrong, but harder to achieve.
The unlock could be more techniques and focused synthetic data from old models used to train new ones, but apparently GPT-5 already uses synthetic data, and this is one of the reasons it isn't necessarily good at real-world tasks.
For me, if we go the synthetic data route, it's important to shoot for quality - good synthetic data distils the useful stuff and discards the noise so useful patterns come through more strongly in training, but I imagine it's hard to distinguish signal from noise when producing good synthetic data.
labrador•6mo ago
I can't speak to other uses such as coding, but as a sounding board GPT-5 is better than GPT-4o, which was already pretty good. GPT-5's personality has definitely shifted to a more professional tone which I like.
I do understand why people miss the more sycophantic personality of GPT-4o, but I'm not one of them.
labrador•6mo ago
GPT-4 -> GPT-4 Home
GPT-5 -> GPT-4 Enterprise
Because my impression after using GPT-5 is that it is designed mainly to satisfy the needs of Microsoft. Microsoft has no interest in making AI therapists or AI companions, probably because of the legal liability. Also, that's outside their core business.
coldtea•6mo ago
They make "this exact comment on here at every gpt release" because every GPT release is touted as revolutionary, and it's increasingly a smaller bump.
torginus•6mo ago
It's decently good at coding and math, beating the current SOTA, Opus 4.1, by a small margin, while being much cheaper and faster to run, hinting that it is likely a much smaller model.
However it's no better at trivia or writing emails or essays, which is what regular people who use ChatGPT through the website actually care about, making this launch come off as awkward.
coldtea•6mo ago
I'd say that points to it being very close to having peaked.
Nobody said anything about a steady 10% year-over-year being the case forever...
coldtea•6mo ago
> why does everyone feel the urge to extrapolate a vibe-based metric based on three points?
Because marketers of AI extrapolate even worse to hype it, and a counter-correction is needed...
anuramat•5mo ago
> marketers of AI
why would I ever consider their opinion to be valuable?
dileeparanawake•6mo ago