frontpage.

Ask HN: What toolchains are people using for desktop app development in 2025?

46•lincoln20xx•4h ago•54 comments

ChatGPT 5 is slow and no better than 4

28•iwontberude•3h ago•22 comments

Ask HN: How can ChatGPT serve 700M users when I can't run one GPT-4 locally?

495•superasn•1d ago•323 comments

Ask HN: OpenAI GPT-5 API seems to be significantly slower – is this expected?

4•tlogan•2h ago•3 comments

Ask HN: In which programming language is it better to make your own language?

7•Forgret•6h ago•15 comments

Why Boring Businesses Outlast AI Hype Cycles

3•Taikhoom10•4h ago•1 comment

Ask HN: How do you find honest tech reviews?

7•bjourne•5h ago•4 comments

Ask HN: What trick of the trade took you too long to learn?

373•unsupp0rted•5d ago•647 comments

Exposing Satcom in the Sky: Aircraft Systems Vulnerable to Remote Attacks

2•hacker_might•8h ago•0 comments

Countries with most GPT-5 users, esp. in advanced computation and reasoning?

2•mzk_pi•8h ago•1 comment

Ask HN: Has any of the Pivotal Tracker replacement attempts succeeded?

44•admissionsguy•5d ago•34 comments

Tell HN: Chrome and Spotify dropping support for macOS 11

8•Kalanos•9h ago•6 comments

Tell HN: Anthropic expires paid credits after a year

270•maytc•4d ago•135 comments

Ask HN: Claude Code vs. Codex vs. GitHub Coding Agent?

2•endorphine•12h ago•1 comment

Ask HN: How would you build second brain in the AI era?

8•divan•1d ago•4 comments

ChatGPT-5 Can't Do Basic Math

14•MarcellusDrum•1d ago•11 comments

GPT-5 streaming requires submission of biometric data

29•binarymax•1d ago•7 comments

Ask HN: Are you running local LLMs? What are your key use cases?

12•briansun•1d ago•12 comments

Ask HN: What do you dislike about ChatGPT and what needs improving?

32•zyruh•3d ago•123 comments

Ask HN: What are you working on this weekend?

10•lagniappe•12h ago•15 comments

Tell HN: Charles Irby has passed away

30•steven123•2d ago•4 comments

Ask HN: Should brain implants be available for everyone as a productivity boost?

2•amichail•23h ago•4 comments

Ask HN: Which processor to pick for learning assembly?

8•shivajikobardan•1d ago•7 comments

White Paper: Contribution-Based Governance for Developer Communities

4•ff12wq111•1d ago•2 comments

Ask HN: Recommendations for specification management software?

7•gusmally•1d ago•1 comment

What's Your Favorite LLM –and Why?

5•zyruh•22h ago•3 comments

Ask HN: Why Did Mercurial Die?:(

29•sergiotapia•4d ago•32 comments

Tell HN: Thing I learned this year was keeping a work journal

11•Muromec•1d ago•7 comments

Flycrypto – Book Flights and Hotels with Bitcoin and Crypto

3•flycrypto•1d ago•3 comments

Ask HN: What change enabled you to consistently finish your side projects?

49•pillefitz•5d ago•39 comments

ChatGPT 5 is slow and no better than 4

28•iwontberude•3h ago
Have general LLMs clearly peaked?

Comments

iwontberude•3h ago
This is intended to be a discussion thread speculating about why ChatGPT 5 is so slow and why it seems to be no better than previous versions.
labrador•3h ago
GPT-5 is better as a conversational thinking partner than GPT-4o. Its answers are more concise, focused, and informative. The conversation flows. GPT-5 feels more mature than GPT-4o, with less juvenile "glazing."

I can't speak to other uses such as coding, but as a sounding board GPT-5 is better than GPT-4o, which was already pretty good. GPT-5's personality has definitely shifted to a more professional tone, which I like.

I do understand why people miss the more sycophantic personality of GPT-4o, but I'm not one of them.

saulpw•2h ago
That sounds 10% better, not 10x better. That's close enough to 'peaked'.
labrador•2h ago
Agreed. Sam Altman definitely over-hyped GPT-5. It's not so much more capable that it deserves a major version number bump.
3836293648•2h ago
Surely a major version bump says more about the internals than the capabilities
labrador•2h ago
I see your point from a software engineering perspective, but unfortunately that's not how the public sees it. The common perception is that we are making leaps towards AGI. I never thought AGI was close, so I'm not disappointed, but a lot of people seem to be. On the other hand, I've seen comments like "I guess my fears of a destructive super-intelligence were overblown."
kjkjadksj•2h ago
People seem to make this exact comment here at every GPT release. I wonder what GPT we ought to actually be on? 1.4.6?
labrador•2h ago
In retrospect I would have named it as follows:

GPT-4 -> GPT-4 Home

GPT-5 -> GPT-4 Enterprise

Because my impression after using GPT-5 is that it is designed mainly to satisfy the needs of Microsoft. Microsoft has no interest in making AI therapists or AI companions, probably because of the legal liability. Also, that's outside their core business.

torginus•56m ago
I still think it's a solid achievement, but weirdly positioned. It's their new poverty-spec model, available to everyone, and likely not too large.

It's decently good at coding and math, beating the current SOTA, Opus 4.1, by a small margin, while being much cheaper and faster to run, hinting that it's likely a much smaller model.

However, it's no better at trivia or at writing emails or essays, which is what regular people who use ChatGPT through the website actually care about, making this launch come off as awkward.

hoppp•51m ago
They're going to release new models like Apple releases iPhones: same stuff, little tweaks and improvements.
al_borland•53m ago
By definition, if something is still getting 10% better each year, it hasn't yet peaked. Not even close.
mikert89•3h ago
Have you used the Pro version? It's incredible.
binarymax•3h ago
My primary use case for LLMs is running jobs at scale over an API, not chat. Yes, it's very slow, and it is annoying. Getting a response from GPT-5-mini for <Classify these 50 tokens as true or false> takes 5 seconds, compared to GPT-4o, which takes about a second.
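
For reference, that kind of call looks roughly like the sketch below, assuming the openai Python SDK's chat completions endpoint; the prompt, model names, and timing are illustrative rather than the commenter's actual pipeline:

    # Hypothetical sketch: time a tiny true/false classification against two models.
    import time

    from openai import OpenAI

    client = OpenAI()

    def classify(model: str, text: str) -> str:
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"Answer only 'true' or 'false': {text}"}],
        )
        print(f"{model}: {time.perf_counter() - start:.2f}s")
        return resp.choices[0].message.content

    # Compare latency of the two models on the same short input.
    for m in ("gpt-4o", "gpt-5-mini"):
        classify(m, "Water boils at 100 C at sea level.")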
jscheel•2h ago
Doing quite a bit of that as well, but I’ve held off moving anything to gpt-5 yet. Guessing it’s a capacity issue right now.
hoppp•45m ago
If it's 5 seconds, maybe you're better off renting a GPU server and running the inference where the data is, without round trips, and you can use gpt-oss.
darepublic•2h ago
They took away o3 on plus for this :(
Buttons840•42m ago
o3 was surprisingly good at research. I once saw it spend 6 full minutes researching something before giving an answer, and I wasn't using "research" or "deep think" or whatever it's called; o3 just decided on its own to do that much research.
gooodvibes•2h ago
Not having the choice to use the old models is a horrible user experience. Taking 4o away so soon was a crime.

I don’t feel like I got anything new, I feel like something got taken away.

hirvi74•1h ago
4o and perhaps a few of the other older models are coming back. Altman has already said so.
pseudo_meta•1h ago
The API is noticeably slower for me, sometimes up to 10x slower.

After some digging, it seems part of the slowdown is due to the gpt-5 models doing some reasoning by default (reasoning effort "medium"), even for the nano and mini models. Setting the reasoning effort to "minimal" improves the speed a lot.

However, to set the reasoning effort you have to switch to the new Responses API, which wasn't a lot of work, but it was more than just changing a URL.
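
A minimal sketch of that change, assuming the openai Python SDK's Responses API; the model choice and prompt here are illustrative:

    # Hedged sketch: request low-latency output by setting reasoning effort to
    # "minimal"; the default effort ("medium") is what adds the extra delay.
    from openai import OpenAI

    client = OpenAI()

    resp = client.responses.create(
        model="gpt-5-mini",  # illustrative model choice
        input="Classify as true or false: the sky is green.",
        reasoning={"effort": "minimal"},
    )
    print(resp.output_text)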

hirvi74•1h ago
I'm noticing significant differences already.

Code seems to work on the first try more often for me too.

Perhaps my favorite change so far is the difference in verbosity. Some of the responses I receive when asking trivial questions are now merely a handful of sentences instead of a dissertation. However, dissertation mode comes back when appropriate, which is also nice.

Edit: Slightly tangential, but I forgot to ask, do any of you all have access to the $200/month plan? If so, how does that model compare to GPT-5?

al_borland•49m ago
It feels slower, but if the quality is better, so that one response does the job instead of multiple follow-up questions, it's still faster overall. It's also still orders of magnitude faster than doing the research manually.

I’m reminded of that Louis CK joke about people being upset about the WiFi not working on their airplane.