
GPT-5 for Developers

https://openai.com/index/introducing-gpt-5-for-developers
197•6thbit•2h ago

Comments

andrewmcwatters•1h ago
I wonder how good it is compared to Claude Sonnet 4, and when it's coming to GitHub Copilot.

I almost exclusively wrote and released https://github.com/andrewmcwattersandco/git-fetch-file yesterday with GPT 4o and Claude Sonnet 4, and the latter's agentic behavior was quite nice. I barely had to guide it, and was able to quickly verify its output.

croemer•1h ago
> GPT‑5 also excels at long-running agentic tasks—achieving SOTA results on τ2-bench telecom (96.7%), a tool-calling benchmark released just 2 months ago.

Yes, but it does worse than o3 on the airline version of that benchmark. The prose is totally cherry-picked.

Fogest•1h ago
How does the cost compare though? From my understanding o3 is pretty expensive to run. Is GPT-5 less costly? If so if the performance is close to o3 but cheaper, then it may still be a good improvement.
low_tech_punk•1h ago
I find it strange that GPT-5 is cheaper than GPT-4.1 on input tokens and only slightly more expensive on output tokens. Is that marketing, or does it actually reflect the underlying compute cost?
AS04•1h ago
Very likely an actual reflection. That's probably their real achievement here, and the key reason they are actually publishing it as GPT-5: more or less the best, or near it, on everything while being one model, and substantially cheaper than the competition.
aliljet•1h ago
Between Opus and GPT-5, it's not clear there's a substantial difference in software development expertise. The metric I can't seem to get past in my attempts to use these systems is context awareness over long-running tasks. Producing a very complex, context-exceeding objective is a daily (maybe hourly) occurrence for me. All I care about is how these systems manage context and stay on track over extended periods of time.

What eval is tracking that? It seems like it's potentially the most important metric for real-world software engineering, and not one-shot vibe prayers.

realusername•1h ago
Personally I think I'll wait for another 10x improvement for coding because with the current way it's going, they clearly need that.
fsloth•1h ago
From my experience, when used through an IDE such as Cursor, the current-gen Claude model enables impressive speedruns over commodity tasks. My context is a CAD application I’ve been writing as a hobby. I worked in that field for a decade, so I have a pretty good sense of how long I would expect tasks to take. I’m using mostly the same software stack as at my previous job, and I'm definitely getting stuff done much faster at home on holiday than I did at that previous job. Of course the codebase is also a lot smaller, there's intrinsic motivation, etc., but still.
42lux•54m ago
How often do you have to build the simple scaffolding though?
bdangubic•1h ago
> context awareness over long-running tasks

don’t have long-running tasks, llms or not. break the problem down into small manageable chunks and then assemble it. neither humans nor llms are good at long-running tasks.

beoberha•1h ago
A series of small manageable chunks becomes a long running task :)

If LLMs are going to act as agents, they need to maintain context across these chunks.

bastawhiz•55m ago
> neither humans nor llms are good at long-running tasks.

That's a wild comparison to make. I can easily work for an hour. Cursor can hardly work for a continuous pomodoro. "Long-running" is not a fixed size.

echelon•39m ago
Humans can error correct.

LLMs multiply errors over time.
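
A toy illustration (invented numbers): if each step succeeds independently with probability p and nothing corrects the misses, reliability decays exponentially with task length.

    # Toy model: per-step success rate p over n sequential steps, no error correction.
    p, n = 0.99, 100
    print(f"P(all {n} steps correct) = {p**n:.1%}")  # ~36.6%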

vaenaes•45m ago
You're holding it wrong
swader999•1h ago
If GPT-5 truly has 400k context, that might be all it needs to meaningfully surpass Opus.
AS04•1h ago
400k context with 100% on the fiction livebench would make GPT-5 indisputably the best model IMHO. Don't think it will achieve that, though, sadly.
simonw•57m ago
It's 272,000 input tokens and 128,000 output tokens.
dimal•45m ago
Even with large contexts there are diminishing returns. Just having the ability to stuff more tokens in context doesn't mean the model can effectively use it. As far as I can tell, they always reach a point at which more information makes things worse.
Byamarro•40m ago
The question is more its tendency toward context rot than the size of its context :) LLMs are supposed to load 3 bibles into their context, but they forget what they were about to do after loading 600 LoC of locales.
andrewmutz•39m ago
Having a large context window is very different from being able to effectively use a lot of context.

To get great results, it's still very important to manage context well. It doesn't matter that the model allows a very large context window; you can't just throw in the kitchen sink and expect good results.

tekacs•30m ago
Coupled with the humungous price difference...
logicchains•18m ago
>Between Opus and GPT-5, it's not clear there's a substantial difference in software development expertise.

If there's no substantial difference in software development expertise then GPT-5 absolutely blows Opus out of the water due to being almost 10x cheaper.

nadis•2m ago
It's pretty vague, but the OP had this callout:

>"GPT‑5 is the strongest coding model we’ve ever released. It outperforms o3 across coding benchmarks and real-world use cases, and has been fine-tuned to shine in agentic coding products like Cursor, Windsurf, GitHub Copilot, and Codex CLI. GPT‑5 impressed our alpha testers, setting records on many of their private internal evals."

risho•1h ago
over the last week or so I have put probably close to 70 hours into playing around with Cursor and Claude Code and a few other tools (it's become my new obsession). I've been blown away by how good and reliable it is now. That said, the reality is that in my experience the only models that actually work in any sort of reliable way are Claude models. I don't care what any benchmark says, because the only thing that actually matters is actual use. I'm really hoping that this new GPT model actually works for this use case, because competition is great and the price is also great.
ralfd•1h ago
Just replying to ask you next week what your assessment on GPT5 is.
throwaway_2898•59m ago
How much of the product were you able to build to say it was good/reliable? IME, 70 hours can get you to a PoC that "works", building beyond the initial set of features — like say a first draft of all the APIs — does it do well once you start layering features?
Centigonal•51m ago
Ditto here, except I'm using Roo, and it's Claude and Gemini 2.5 Pro that work for me.
neuronexmachina•9m ago
> That said the reality is in my experience the only models that actually work in any sort of reliable way are claude models.

Anecdotally, the tool updates in the latest Cursor (1.4) seem to have made tool usage in models like Gemini much more reliable. Previously it would struggle to make simple file edits, but now the edits work pretty much every time.

zarzavat•7m ago
The magic is the prompting/tool use/finetuning.

I find that OpenAI's reasoning models write better code and are better at raw problem solving, but Claude Code is a much more useful product, even if the model itself is weaker.

timhigins•1h ago
I opened up the developer playground and the model selection dropdown showed GPT-5 and then it disappeared. Also I don't see it in ChatGPT Pro. What's up?
Fogest•1h ago
It's probably being throttled due to high usage.
IAmGraydon•4m ago
Not showing in my Pro account either. As someone else mentioned, I’m sure it’s throttling due to high use right now.
sebdufbeau•1h ago
Has the API rollout started? It's not available in our org, even if we've been verified for a few months

EDIT: It's out now

spullara•1h ago
it is not out yet. i poll the api for the models and update this GitHub repo hourly.

https://github.com/spullara/models
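
The polling itself is only a few lines with the openai Python SDK (a sketch; what the list returns depends on your account's access):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    model_ids = sorted(m.id for m in client.models.list())
    print("\n".join(model_ids))  # diff this against the previous snapshot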

low_tech_punk•1h ago
The ability to specify a context-free grammar as an output constraint? This blows my mind. How do you control the autoregressive sampling to guarantee correct syntax?
qsort•1h ago
You sample only from tokens that could possibly result in a valid production for the grammar. It's an inference-only thing.
low_tech_punk•1h ago
ah, thanks!
evnc•49m ago
I assume they're doing "structured generation" or "guided generation", which has been possible for a while if you control the LLM itself, e.g. running an OSS model [0][1]. It's cool to see a major API provider offer it, though.

The basic idea is: at each auto-regressive step (each token generation), instead of letting the model generate a probability distribution over "all tokens in the entire vocab it's ever seen" (the default), only allow the model to generate a probability distribution over "this specific set of tokens I provide". And that set can change from one sampling step to the next, according to a given grammar. E.g. if you're using a JSON grammar, and you've just generated a `{`, you can provide the model a choice of only those tokens that are valid JSON immediately after a `{`, etc.
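
A minimal sketch of that masking step (toy vocabulary; real libraries like the two below compile the grammar into a token-level automaton that produces the allowed set):

    import math

    # Keep only grammar-legal tokens, renormalize, then pick one.
    def constrained_sample(logits: dict[str, float], allowed: set[str]) -> str:
        masked = {t: l for t, l in logits.items() if t in allowed}
        z = sum(math.exp(l) for l in masked.values())
        probs = {t: math.exp(l) / z for t, l in masked.items()}
        return max(probs, key=probs.get)  # greedy, for the sketch

    # Just generated "{" under a JSON grammar: only a key or "}" may follow.
    logits = {'"name"': 2.0, "}": 0.5, "42": 3.1}  # raw model scores
    print(constrained_sample(logits, {'"name"', "}"}))  # -> '"name"', not "42"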

[0] https://github.com/dottxt-ai/outlines [1] https://github.com/guidance-ai/guidance

low_tech_punk•1h ago
Tried using the gpt-5 family with the Responses API and got the error "gpt-5 does not exist or you don't have access to it". I guess they are not rolling out in lockstep with the livestream and blog article?
diggan•1h ago
Seems they're doing rollout over time, I'm not seeing it anywhere yet.
low_tech_punk•26m ago
Can confirm that they are rolling out. It's working for me.
catigula•1h ago
I thought we were going to have AGI by now.
RS-232•51m ago
No shot. LLMs are simple text predictors and they are too stupid to get us to real AGI.

To achieve AGI, we will need to be capable of high fidelity whole brain simulations that model the brain's entire physical, chemical, and biological behavior. We won't have that kind of computational power until quantum computers are mature.

evantbyrne•26m ago
It will be interesting to see if humans can manage to bioengineer human-level general intelligence into another species before computers.
machiaweliczny•23m ago
I call bullshit. No need for simulation. Can be achieved via RL with some twist
bopbopbop7•12m ago
“some twist” is doing a lot of heavy lifting in that statement.
IAmGraydon•6m ago
Not going to happen any time soon, if ever. LLMs are extremely useful, but the intelligence part is an illusion that nearly everyone appears to have fallen for.
skepticATX•1h ago
This was really a bad release for OpenAI, if benchmarks are even somewhat indicative of how the model will perform in practice.
robterrell•29m ago
In what ways?
jumploops•1h ago
If the model is as good as the benchmarks say, the pricing is fantastic:

Input: $1.25 / 1M tokens (cached: $0.125 / 1M tokens)
Output: $10 / 1M tokens

For context, Claude Opus 4.1 is $15 / 1M for input tokens and $75/1M for output tokens.
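
Back-of-envelope on those list prices (the monthly token volumes below are invented, just to show the ratio):

    # $ per 1M tokens, from the prices above; hypothetical 10M in / 2M out per month.
    def monthly_cost(in_price, out_price, in_tok=10e6, out_tok=2e6):
        return (in_tok * in_price + out_tok * out_price) / 1e6

    gpt5 = monthly_cost(1.25, 10)  # -> $32.50
    opus = monthly_cost(15, 75)    # -> $300.00
    print(f"GPT-5 ${gpt5:.2f} vs Opus 4.1 ${opus:.2f} ({opus / gpt5:.1f}x)")  # ~9.2x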

The big question remains: how well does it handle tools? (i.e. compared to Claude Code)

Initial demos look good, but it performs worse than o3 on Tau2-bench airline, so the jury is still out.

addaon•1h ago
> Output: $10 / 1M tokens

It's interesting that they're using flat token pricing for a "model" that is explicitly made of (at least) two underlying models, one with much lower compute costs than the other, and with user ability to at least influence (via prompt) if not choose which model is being used. I have to assume this pricing is based on a predicted split of how often each underlying model gets used; I wonder if that will hold up, if users will instead try to rouse the better model into action more often than expected, or if the pricing is so padded that it doesn't matter.
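
Sketch of that economics, with invented numbers: if a fraction f of traffic lands on the cheap model, the blended cost a flat price has to cover is a simple mix, and it's quite sensitive to the split.

    # Hypothetical per-1M-token serving costs for the two underlying models.
    cheap, expensive = 1.0, 12.0
    for f in (0.9, 0.7, 0.5):  # share of traffic routed to the cheap model
        blended = f * cheap + (1 - f) * expensive
        print(f"{f:.0%} cheap -> blended ${blended:.2f}/1M tokens")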

simianwords•51m ago
> that is explicitly made of (at least) two underlying models

what do you mean?

addaon•26m ago
> a smart and fast model that answers most questions, a deeper reasoning model for harder problems, and a real-time router that quickly decides which model to use based on conversation type, complexity, tool needs, and explicit intent (for example, if you say “think hard about this” in the prompt).

From https://openai.com/index/gpt-5-system-card/

mkozlows•45m ago
That's how the browser-based ChatGPT works, but not the API.
6thbit•1h ago
Seems they have quietly increased the context window up to 400,000

https://platform.openai.com/docs/models/gpt-5

ralfd•1h ago
How does that compare to Claude/GPT4?
6thbit•58m ago
4o - 128k
o3 - 200k
Opus 4.1 - 200k
Sonnet 4 - 200k

So, at least twice the context of those.

hrpnk•58m ago
GPT-4.1 has 1M input and 32k output; Sonnet 4 is 200k/64k.
simianwords•50m ago
but does that apply to the model on chatgpt.com as well?
mehmetoguzderin•1h ago
Context-free grammar and regex support are exciting. I wonder what differences there are, if any, from the Lark-like CFG of llguidance, which powers the JSON schema support of the OpenAI API [^1].
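
For flavor, a Lark-style grammar in that dialect looks roughly like this (a sketch constraining output to key=value lines; the exact syntax accepted, and the API field it's passed through, are assumptions to check against the llguidance docs):

    # Sketch: a Lark-style CFG (the dialect llguidance uses) that would
    # constrain generation to simple key=value config lines.
    grammar = r"""
    start: pair (NEWLINE pair)*
    pair: KEY "=" VALUE
    KEY: /[a-z_]+/
    VALUE: /[0-9]+/
    NEWLINE: "\n"
    """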

[^1]: https://github.com/guidance-ai/llguidance/blob/f4592cc0c783a...

msp26•54m ago
Yeah that was the only exciting part of the announcement for me haha. Can't wait to play around with it.

I'm already running into a bunch of issues with the structured output APIs from other companies like Google, and OpenAI has been doing a great job on this front.

belter•1h ago
We were promised AGI and all we got was code generators...
bmau5•51m ago
It's a logical starting point, given there are pretty defined success/failure criteria
pamelafox•1h ago
I am testing out gpt-5-mini for a RAG scenario, and I'm impressed so far.

I used gpt-5-mini with reasoning_effort="minimal", and that model finally resisted a hallucination that every other model generated.

Screenshot in post here: https://bsky.app/profile/pamelafox.bsky.social/post/3lvtdyvb...

I'll run formal evaluations next.
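
For reference, the call shape for that setting (a sketch via the Chat Completions API; the prompt contents are placeholders for a real RAG setup):

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-5-mini",
        reasoning_effort="minimal",  # the setting mentioned above
        messages=[
            {"role": "system", "content": "Answer only from the provided sources."},
            {"role": "user", "content": "Sources: ...\n\nQuestion: ..."},
        ],
    )
    print(resp.choices[0].message.content)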

potatolicious•33m ago
This honestly feels like the biggest gain/difference. I work on things that do a lot of tool calling, and the model hallucinating fake tools is a huge problem. Worse, sometimes the model will hallucinate a response directly without ever generating the tool call.
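
One mitigation that helps regardless of model (a sketch): validate every tool call against the tools actually declared, and feed an error string back to the model instead of executing anything.

    # TOOLS maps the names actually offered to the model to implementations.
    TOOLS = {"search_docs": lambda q: f"results for {q!r}"}

    def run_tool_call(name: str, args: dict) -> str:
        if name not in TOOLS:
            # Hallucinated tool: return the failure as an observation.
            return f"error: unknown tool {name!r}; available: {sorted(TOOLS)}"
        return TOOLS[name](**args)

    print(run_tool_call("search_docs", {"q": "context windows"}))
    print(run_tool_call("make_coffee", {}))  # hallucinated -> error string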

The new training rewards that suppress hallucinations and tool-skipping hopefully push us in the right direction.

ralfd•8m ago
Q: What does a product manager do?

GPT4: Collaborating with engineering, sales, marketing, finance, external partners, suppliers and customers to ensure …… etc

GPT5: I don't know.

Upon speaking these words, AI was enlightened.

fatty_patty89•54m ago
What the fuck? Nobody else saw the Cursor CEO looking through the GPT-5 generated code, mindlessly scrolling and saying "this looks roughly correct, I would love to merge that"? LOL

You can't make this up

hrpnk•50m ago
The GitHub issue shown in the livestream is getting lots of traction: https://github.com/openai/openai-python/issues/2472

A human had attempted to solve it before, yet it was never merged... With all the great coding models OpenAI has access to, their SDK team still feels too small for the needs.

te_chris•48m ago
https://platform.openai.com/docs/guides/latest-model

Looks like they're trying to lock us into using the Responses API for all the good stuff.

henriquegodoy•44m ago
I don't think there's much difference between Opus 4.1 and GPT-5, probably just the context size. Waiting for Gemini 3.0.
sberens•43m ago
Interesting that there doesn't seem to be any benchmarking on Codeforces.
jaflo•39m ago
I just wish their realtime audio pricing would go down but it looks like GPT-5 does not have support for that so we’re stuck with the old models.
zaronymous1•15m ago
Can anyone explain to me why they've removed parameter controls for temperature and top-p in reasoning models, including gpt-5? It strikes me that it makes it harder to build with these for small tasks requiring high levels of consistency, and in the API, I really value the ability to set certain tasks to a low temperature.

Version Museum: visual history of popular websites and software

https://www.versionmuseum.com/
1•Bogdanp•49s ago•0 comments

Show HN: Student attempt at proving P ≠ NP using geometry and lattices

https://zenodo.org/records/16759468
1•LaghZen•1m ago•0 comments

CQGS, Approved by Robot

https://institute.lot-systems.com
1•vadikmarmeladov•2m ago•0 comments

401(k) Plans Will Get More Fun

https://www.bloomberg.com/opinion/newsletters/2025-08-07/401-k-plans-will-get-more-fun
2•ioblomov•3m ago•1 comments

Ditching the dating apps? How to meet people in real life

https://www.rnz.co.nz/life/relationships/ditching-the-dating-apps-how-to-meet-people-in-real-life
3•colinprince•6m ago•0 comments

Practical Techniques for Enhancing Digital Identity Security During Login

https://guptadeepak.com/staying-secure-when-logging-in-a-practical-guide-to-protecting-your-digital-identity/
1•guptadeepak•7m ago•1 comments

Google and Perplexity give free AI search to win India users

https://restofworld.org/2025/google-perplexity-ai-search-india/
1•donohoe•10m ago•0 comments

Artificial intelligence saves doctors time, but makes mistakes – study

https://www.rnz.co.nz/news/national/569348/artificial-intelligence-saves-doctors-time-but-makes-mistakes-study
5•billybuckwheat•15m ago•1 comments

Building Fast UPDATEs for ClickHouse

https://clickhouse.com/blog/updates-in-clickhouse-2-sql-style-updates
1•sdairs•15m ago•0 comments

Sand Batteries Are a Game Changer for Clean Energy

https://oilprice.com/Energy/Energy-General/Sand-Batteries-Are-a-Game-Changer-for-Clean-Energy.html
1•PaulHoule•15m ago•0 comments

And I thought AI had it hard... White Face and Black Face Optical Illusion

https://www.psy.ritsumei.ac.jp/akitaoka/saishin72e.html
2•zahirbmirza•17m ago•1 comments

Google TV's Uncertain Future

https://www.theverge.com/lowpass-newsletter/724970/google-tv-ads-monetization-problem
2•speckx•18m ago•0 comments

Core – A self-governing AI that modifies its own code via a constitution

https://github.com/DariuszNewecki/CORE
1•d_newecki•19m ago•1 comments

OpenAI's new open-source model is basically Phi-5

https://www.seangoedecke.com/gpt-oss-is-phi-5/
2•emschwartz•19m ago•0 comments

Amazon Web Services gives the Trump admin $1B coupon

https://www.politico.com/news/2025/08/07/amazon-trump-admin-1-billion-coupon-00497009
2•c420•21m ago•0 comments

Ask HN: How did you like GPT-4.5?

1•felipemesquita•21m ago•2 comments

Grok 4 beats GPT-5 on ARC-AGI

https://twitter.com/elonmusk/status/1953512163571904671
1•tosh•21m ago•0 comments

Has the Internet Succumbed to the Tragedy of the Commons?

https://howtosavetheworld.ca/2025/08/05/has-the-internet-succumbed-to-the-tragedy-of-the-commons/
1•freediver•22m ago•0 comments

Show HN: A light GPT-5 vs. Claude Code comparison

https://www.charlielabs.ai/research/gpt-5
2•neom•22m ago•0 comments

Using GitHub as Commenting Platform, 2025 Edition

https://kiko.io/post/Using-GitHub-as-Commenting-Platform-2025-Edition/
1•speckx•24m ago•1 comments

Generation X is officially old

https://johnivison.substack.com/p/generation-x-is-officially-old
3•throw0101a•26m ago•1 comments

Heterogeneous CPU Cores, HDMI and Work Continues for Enhancing FreeBSD on Laptops

https://www.phoronix.com/news/FreeBSD-Laptops-July-2025
1•losgehts•26m ago•0 comments

Historical Myths That Were Eventually Proven True

https://laughingsquid.com/historical-myths-proven-true/
1•Bluestein•27m ago•0 comments

Measuring AI Ability to Complete Long Tasks – METR

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
1•diginova•28m ago•0 comments

How Apple could send democracy to the spam folder

https://www.washingtonpost.com/opinions/2025/08/07/apple-ios-update-spam-polling-democracy/
4•CharlesW•30m ago•4 comments

Lawmakers want an end to HR ghosting during the interview process

https://www.cnbc.com/2025/08/07/lawmakers-want-to-end-to-hr-ghosting-during-the-interview-processheres-how.html
2•rntn•32m ago•0 comments

Community Update #35: Baldur's Gate 3 Turns Two

https://baldursgate3.game/news/community-update-35-baldur-s-gate-3-turns-two_143
2•doener•33m ago•0 comments

A real example of how GPT-5 behaves in Amp

https://twitter.com/beyang/status/1953525665946362180
1•tosh•33m ago•0 comments

Microsoft is cautiously onboarding Grok 4 following Hitler concerns

https://www.theverge.com/notepad-microsoft-newsletter/754647/microsoft-grok-4-roll-out-private-preview-notepad
1•ingve•36m ago•0 comments

Nearly a million more deaths than births in Japan last year

https://www.bbc.com/news/articles/c74dnzr4jdvo
7•Someone•39m ago•5 comments