
Building more with GPT-5.1-Codex-Max

https://openai.com/index/gpt-5-1-codex-max/
118•hansonw•1h ago

Comments

iamronaldo•1h ago
That was quick
bigyabai•1h ago
My first thought was "they must not be seeing as many Claude Code conversions as they hoped"
giancarlostoro•1h ago
Whenever one of them releases a milestone release the rest start publishing big milestones too. I'm waiting for Opus 5 next.
LZ_Khan•1h ago
all i care about is performance on metr benchmark
Reubend•1h ago
OpenAI likes to time their announcements alongside major competitor announcements to suck up some of the hype. (See for instance the announcement of GPT-4o a single day before Google's IO conference)

They were probably sitting on this for a while. That makes me think this is a fairly incremental update for Codex.

Palmik•20m ago
GPT 5.1 / Codex already beats Gemini 3 on SWE Bench Verified and Terminal Bench and this pushes the gap further. Seems like a decent improvement.
bugglebeetle•10m ago
That’s how the game is played. We should be grateful for all the competition that is driving these improvements, not whinging about the realities of what companies have to do to contest each other’s position.
peab•7m ago
it's really getting old
johnwheeler•6m ago
Gemini is eating their lunch, and OpenAI knows it.
spmartin823•1h ago
I still want something no one has, which is the ability to launch agents in different git worktrees simultaneously and check the results out on my main branch for testing when they are finished.
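The per-task worktree workflow described above can be sketched with plain `git worktree` plumbing. This is a minimal illustration, not any existing tool: the `--cwd` flag on the agent command is a hypothetical stand-in for however a given agent accepts a working directory.

```python
import subprocess
from pathlib import Path

def worktree_cmds(repo: Path, branch: str, agent_cmd: list[str]) -> list[list[str]]:
    """Build the commands to run one agent in an isolated git worktree.

    Each agent gets its own branch and working directory, so several can
    run simultaneously without touching the main checkout.
    """
    wt = repo / ".worktrees" / branch
    return [
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(wt)],
        agent_cmd + ["--cwd", str(wt)],  # hypothetical agent flag
        ["git", "-C", str(repo), "worktree", "remove", "--force", str(wt)],
    ]

def launch(repo: Path, tasks: dict[str, list[str]]) -> list[subprocess.Popen]:
    """Start one agent per task in parallel. Results land on per-task
    branches that can later be checked out or merged on main for testing."""
    procs = []
    for branch, agent_cmd in tasks.items():
        add, run, _cleanup = worktree_cmds(repo, branch, agent_cmd)
        subprocess.run(add, check=True)       # create the isolated worktree
        procs.append(subprocess.Popen(run))   # agent runs concurrently
    return procs
```

Checking results back on main is then just `git merge` (or `git diff main..<branch>`) per finished branch.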
bradly•1h ago
Would this be similar to how Charlie and Jules work?
cube2222•1h ago
I think I described how I achieve roughly your desired workflow in a comment yesterday [0].

[0]: https://news.ycombinator.com/item?id=45970668

agentifysh•57m ago
ha! very interesting how slept on jj is

its been essential to my workflow as well

i use both jj and git, and jj is great for just creating a snapshot that i can revert to in case it fails

im still exploring it to see what else i can do with it for agentic use

agentifysh•59m ago
lots of tools do this, and going down this rabbit hole I ended up building something that could just plug in to codex instead of requiring a fork

http://github.com/agentify-sh/10x

does minimal-overhead agent orchestration (it's just bash/typescript). its main focus was adding enhancements to codex: double-redundant checkpoints via git and jj (lessons learned from codex being git reset --hard happy), something like claude skills (just a bunch of mds that steer it toward a specific activity like think, plan, execute), timeout wrappers (to get you unstuck if codex waits a long time), and blacklisted commands during yolo mode (rm -rf and git reset banned even if by some small chance it runs them). MIT licensed

you can work sequentially (subagents launch one after the other) or in parallel (worktrees), but tbh sequential is better because you understand what is going on; parallel might be best for dealing with tests and UI.

poly2it•44m ago
Your link is a 404.
lysecret•57m ago
Cursor has this too
agentifysh•1h ago
so this was arctic fox it seems. lots of us ended up downgrading to codex 5.0 because the token burn was too much. i see codex max is a step up, which is welcome, but i'm still unsure if they solved that github issue around tool use that impacts tokens

going to wait and see, after being burned by 5.1, before i upgrade back to 0.58

gemini 3 has been a letdown tbh; seeing that agentic coding wasn't a top priority, i'm sticking with codex for now and using gemini 3 for frontend

jasonthorsness•1h ago
"Starting today, GPT‑5.1-Codex-Max will replace GPT‑5.1-Codex as the default model in Codex surfaces."

Wow, I spent last weekend using a tag-team of Claude and Codex and found Codex to more often get better results (TypeScript physics/graphics application). I probably only wrote a few hundred lines of code out of many thousands; it did a really good job.

Now I guess I'll ask the new Codex to review the work of the old!

taurath•1h ago
These 2 sentences right next to each other stood out to me:

> a new step towards becoming a reliable coding partner

> GPT‑5.1-Codex-Max is built for long-running, detailed work

Does this not sound contradictory? It’s been the shorter form work that has built what little confidence I have in these as a coding partner - a model that goes off and does work without supervision is not a partner to me.

causal•1h ago
Absolutely contradictory. The long-running tendency for Codex is why I cannot understand the hype around it: if you bother to watch what it does and read its code the approaches it takes are absolutely horrifying. It would rather rewrite a TLS library from scratch than bother to ask you if the network is available.
keeganpoppen•28m ago
these things are actually fixable with prompting. is it easy? no. is it PEBKaC if you don’t do anything to change course as it builds a TLS library? yes, but paperclip maximized! xD
ntonozzi•56m ago
If you haven't, give Cursor's Composer model a shot. It might not be quite as good as the top models, but in my experience it's almost as good, and the lightning fast feedback is more than worth the tradeoff. You can give it a task, wait ten seconds, and evaluate the results. It's quite common for it to not be good enough, but no worse than Sonnet, and if it doesn't work you just wasted 30 seconds instead of 10 minutes.
embirico•39m ago
(Disclaimer: Am on the Codex team.) We're basically trying to build a teammate that can do both short, iterative work with you, then as you build trust (and configuration), you can delegate longer tasks to it.

The "# of model-generated tokens per response" chart in [the blog introducing gpt-5-codex](https://openai.com/index/introducing-upgrades-to-codex/) shows an example of how we're making the model good at both.

simianwords•1h ago
> Compaction enables GPT‑5.1-Codex-Max to complete tasks that would have previously failed due to context-window limits, such as complex refactors and long-running agent loops by pruning its history while preserving the most important context over long horizons. In Codex applications, GPT‑5.1-Codex-Max automatically compacts its session when it approaches its context window limit, giving it a fresh context window. It repeats this process until the task is completed.

Wouldn't the model automatically do that using attention techniques? Why do you need to do it at the token layer and not leave it to the model to automatically decide which tokens are worth paying attention to?
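The compaction loop the quoted passage describes can be sketched as a simple token-budget check: when the transcript nears the limit, fold the older turns into a summary and keep the recent ones verbatim. This is an illustrative sketch only; real systems count tokens with a tokenizer and produce the summary with a model call, both of which are crude stand-ins here.

```python
def compact(history: list[str], limit: int, keep_recent: int = 4) -> list[str]:
    """Prune a transcript when it nears the context limit.

    Old turns collapse into a single summary entry; the most recent turns
    survive verbatim, giving the agent a fresh window to keep working in.
    Whitespace word-count and first-word "summaries" stand in for a real
    tokenizer and a real summarization model call.
    """
    def tokens(turns: list[str]) -> int:
        return sum(len(t.split()) for t in turns)

    if tokens(history) < limit:
        return history  # still fits, nothing to do
    old, recent = history[:-keep_recent], history[-keep_recent:]
    summary = "SUMMARY: " + " | ".join(t.split()[0] for t in old)  # stand-in
    return [summary] + recent
```

An agent loop would call this before every model request, repeating until the task completes, which matches the "repeats this process" behavior in the quote.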

qsort•1h ago
> due to context-window limits
simianwords•58m ago
context window is not some physical barrier but rather the attention just getting saturated. what did i get wrong here?
qsort•52m ago
> what did i get wrong here?

You don't know how an LLM works and you are operating on flawed anthropomorphic metaphors.

Ask a frontier LLM what a context window is, it will tell you.

Palmik•15m ago
It's a fair question, even if it might be coming from a place of misunderstanding.

For example, DeepSeek 3.2, which employs sparse attention [1], is not only faster with long context than normal 3.1, but also seems to be better (perhaps thanks to reducing the noise?).

[1] It still uses a quadratic router, but it's small, so it scales well in practice. https://api-docs.deepseek.com/news/news250929

paradite•3m ago
In theory, auto-regressive models should have no limit on context: they generate the next token conditioned on all previous tokens.

In practice, when training a model, people select a context window so that during inference you know how much GPU memory to allocate for a prompt, and can reject any prompt that exceeds the limit.

Of course there's also degrading performance as context gets longer, but I suspect the memory limit is the primary reason we have context-window limits.

adastra22•52m ago
Attention is quadratic, so you have to pick a cutoff for context window size. In addition, the error/noise in state space increases with longer contexts, resulting in poorer performance. So even if you're willing to take the O(n^2) slowdown of a larger context window, it still won't work.
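The quadratic growth in the comment above is easy to make concrete: a dense attention score matrix has one entry per token pair, so doubling the context quadruples it. A rough back-of-envelope sketch (fp16 scores, naively materializing the full matrix, ignoring optimizations like FlashAttention that avoid doing so):

```python
def score_matrix_bytes(n_tokens: int, n_heads: int = 1, bytes_per: int = 2) -> int:
    """Memory to materialize dense attention scores: heads * n^2 entries."""
    return n_heads * n_tokens * n_tokens * bytes_per

# Doubling the context quadruples the score-matrix memory.
assert score_matrix_bytes(16384) == 4 * score_matrix_bytes(8192)
```

At 128k tokens and one fp16 head this naive matrix alone is 32 GiB, which is why implementations both cap the window and avoid materializing the full matrix.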
hansonw•1h ago
Rest assured that we are better at training models than naming them ;D

- New benchmark SOTAs with 77.9% on SWE-Bench-Verified, 79.9% on SWE-Lancer, and 58.1% on TerminalBench 2.0

- Natively trained to work for many hours across multiple context windows via compaction

- 30% more token-efficient at the same reasoning level across many tasks

Let us know what you think!

agentifysh•1h ago
did you address this https://github.com/openai/codex/issues/6426 ?

how much more token-efficient is this compared to 5.0?

had to use 5.0 because 5.1 was eating tokens like crazy and seemed like a slight incremental improvement, barely noticeable

EnPissant•1h ago
Compaction is just what Claude Code has done forever, right?
enraged_camel•58m ago
I am also trying to understand the difference between compaction and what IDEs like Cursor do when they "summarize" context over long-running conversations.

Is this saying that said summarization now happens at the model level? Or are there other differences?

GardenLetter27•54m ago
I think the point here is not that it does compaction (which Codex also already does) - but that the model was trained with examples of the Codex compaction, so it should perform better when compaction has taken place (a common source for drops in performance for earlier models).
EnPissant•52m ago
Codex previously did only manual compaction, but yeah, maybe some extra training for compaction, too?
iyn•52m ago
Looks like a great change! I'll take it for a spin in a moment.

I really like the "subagent" feature in Claude Code — it's super useful to manage context in complex codebases. Here are some examples of agents that can be useful: https://github.com/humanlayer/humanlayer/tree/main/.claude/a...

Would it make sense to have a similar feature in Codex CLI? I often do "spec-driven development", which is basically a loop of:

    research -> implementation plan -> actual implementation (based on research + plan) -> validation
I have multiple subagents that I use for each phase that (based on subjective judgement) improve the output quality (vs keeping everything, every tool use etc. in the "main" context window).

Codex CLI is great and I use it often but I'd like to have more of these convenient features for managing context from CC. I'm super happy that compaction is now available, hopefully we'll get more features for managing context.
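The research → plan → implement → validate loop described above, where each phase runs in a fresh context and only the previous phase's artifact carries forward, can be sketched generically. `run_phase` here is a hypothetical stand-in for a subagent invocation; nothing below is Codex or Claude Code API.

```python
from typing import Callable

def run_pipeline(
    task: str,
    phases: list[str],
    run_phase: Callable[[str, str], str],
) -> dict[str, str]:
    """Run each phase of spec-driven development in a fresh context.

    Only the prior phase's output artifact (not the whole transcript of
    tool calls) is carried forward, which keeps the "main" context small:
    the point of subagents for context management.
    """
    artifacts: dict[str, str] = {}
    carry = task
    for phase in phases:
        carry = run_phase(phase, carry)  # e.g. a subagent invocation
        artifacts[phase] = carry
    return artifacts
```

Each phase sees only the task plus the previous artifact, mirroring how a subagent's scratch work stays out of the parent window.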

NitpickLawyer•51m ago
Will -minis come for the codex family of models? About two months ago I used 5-mini as a daily driver for a few weeks and quite liked it, it seemed capable enough on small tasks with some hand holding and the speed/price were great as well.
coder543•27m ago
codex-mini was released a couple of weeks ago: https://platform.openai.com/docs/models/gpt-5.1-codex-mini
NitpickLawyer•17m ago
Thanks! I somehow missed that. Will check it out.
qsort•37m ago
Codex is an outstanding product and incremental upgrades are always welcome. I'll make sure to give it a try in the coming days. Great work! :)
robotswantdata•5m ago
Sorry, don't like the max model; it feels like it needs a lot more guiding. The plans it writes, however, are better, so I tried feeding them back in (meta-prompt style) and it's working okay so far. Very large repository.
causal•1h ago
Sigh. Time to try it again I guess. I give OpenAI way more chances than it deserves.
EcommerceFlow•1h ago
Gemini 3 had a great 24 hour SOTA run for coding
croes•1h ago
The new detergent now washes even whiter
bgwalter•33m ago
Come on folks, this is funny. They also have industrial strength laundromats to go with the detergent.
pton_xd•12m ago
I love how programming discussions du jour have basically devolved into "really? my socks definitely smell better after using 2 scoops of last month's soap. what spin cycle are you using?"
SunshineTheCat•1h ago
My observation has been that Codex tends to hit logical/data-driven/back-end tasks out of the park while doing weird, random nonsense with even simple UI tasks. This could be me needing to improve how I phrase my prompts, but it will be interesting to see if it's improved in that arena at all.
cube2222•1h ago
Somewhat related, after seeing the praise for codex in the Sonnet 4.5 release thread I gave it a go, and I must say, that CLI is much worse than Claude Code (even if the model is great, I’m not sure where the issue really lies between the two).

It was extremely slow (like, multiple times slower than Sonnet with Claude Code, though that’s partially on me for using thinking-high I guess) to finish the task, with the back-and-forths being on the order of tens of minutes.

Moreover, the context management seems really weird. I'm not sure how exactly it works, but: 1. It uses very few tokens / fills up the context slowly (good, I guess). 2. It doesn't seem to actually internalize the contents of files you mention to it, or that it edits.

#2 here being the main one - I usually context-dump reference code for Claude Code, and it does a perfect job of adhering to codebase patterns and its architecture, while codex was completely ignorant of the existing code style.

Moreover, it wrote extremely defensive code, even for code where it wrote both ends itself.

All in all, I was really let down after seeing all the praise.

agentifysh•48m ago
sure, claude code has better ux, but honestly it's hard to get any good amount of usage out of the subscriptions vs what codex offers at the same price

with claude i'm constantly hitting rate limits, while codex gives substantially more, and "slow" isn't really a problem for me as long as it keeps working

the only complaint i have is that codex itself has usage limits now (either due to outstanding github issues around tools or throttling on their end) compared to a few months ago

the true magical moment was codex pro letting me run swarms of agents day in and day out without any worries about rate limits; it truly felt unlimited

if claude manages to release a smaller model or some way to deal with the rapidly depleting usage limits (this is the top complaint on reddit, and they eventually just stopped allowing threads about it), it would definitely be used more

but for now codex is clearly the workhorse, with claude used side by side.

cube2222•40m ago
Well as I said, codex didn’t adhere to codebase standards for me and the code quality was worse (very defensive), so even after waiting longer, results weren’t there for me.

But the subscription thing is a non-issue for me as I use the API, and mostly use Claude Code synchronously, with the occasional rare background agent.

tosh•1h ago
Codex CLI 0.59 got released (but has no changelog text)

https://github.com/openai/codex/releases/tag/rust-v0.59.0

bgwalter•57m ago
So they all release before the Nvidia numbers tonight. The real question is: How well can Nvidia hide the circular deals in the books?
amluto•46m ago
I would love to see all the big players put 1% of the effort they put into model training into making the basic process of paying and signing in suck less.

Claude: they barely have a signin system at all. Multiple account support doesn’t exist. The minimum seat count for business is nonsense. The data retention policies are weak.

OpenAI: Make ZDR a thing you can use or buy without talking to sales, already. And for those using containers or a remote system or really anything other than local development with the codex CLI, you really really need to fix this bug. I bet Codex could do at least the client part for you!

https://github.com/openai/codex/issues/2798

(Hint: Claude Code gets this right by default, despite the fact that everything else about Claude sign-in is a joke.)

Google: get all your B2B AI product managers in one room and tell them that they need to make one single product menu on one single webpage with all the pricing on that page and that the Google Cloud people are not permitted to make anything that isn’t actually logically Google Cloud depend on Google Cloud Billing. Your product cannot compete with OpenAI or Anthropic if people need to ask an LLM to figure out what your product is and if your own fancy LLMs can’t give a straight answer. My company pays for a non-Google product primarily because it’s too complicated to pay for the Google product! Right now, trying to use Google’s AI is like trying to ride Bay Area public transit before the Clipper Card.

atonse•35m ago
Agree 1,000%.

I just won’t even waste my time with the google stuff cuz I can’t figure out how to pay with it.

And that’s a problem everywhere at google. Our google play account is suspended cuz I can’t verify the company. It won’t let me cuz it says I’m not the owner. I’ve always been the owner of my company. For 18 years. There is no one else.

Once some error said make sure the owner email matches your profile in google payments and I was like, what is google payments and where do I even begin with that? I’ve never paid for google play so what does payments have to do with anything?

It’s totally random stuff. Get your shit together, google. Make your products and payment systems coherent, rather than it obviously looking like it was designed by a fiefdom full of territorial managers.

nico•22m ago
Can relate. My inactive google ads account all of a sudden got banned. No explanation except some generic link to their terms of service. Appealed, got automatic denial, no reason given. Have retried multiple times, same result
computerex•35m ago
Couldn't agree more about the Google product offerings. Vertex AI? AI Studio? Maker Studio? Gemini? The documentation is fragmented, with redundant offerings making it confusing to determine what is what. GCP billing is complicated to figure out vs OpenAI or Anthropic billing.

Sad part is, Google does offer a ChatML/OpenAI-compliant endpoint for LLM calls, and I believe in an experiment they also reduced friction in getting an API key to start making calls right away, but discoverability remains a challenge with Google services.

hassleblad23•30m ago
Adding to this, Google's models can only be used with GCP, while OpenAI's models can be used with Azure and Anthropic's models with AWS Bedrock, in addition to their own platforms.

I'd love to see the Gemini models made available by other providers :) or a simple prepaid wallet like OpenAI and Anthropic have.

temp0826•4m ago
Didn't realize these stipulations for the models. Looking at devops-y job descriptions the last few months I noticed nearly everyone has some kind of Azure requirement now (which I've mostly avoided because I don't want to end up managing someone's AD), but is openai the actual reason for it?
skerit•5m ago
Last night, just after Gemini 3 was released and became available for Gemini-CLI, I saw Gemini-CLI's team post that you could access Gemini 3 with either an API key OR with _Gemini AI Ultra_, so I thought: great, I'll get that!

Now you CAN NOT get the Google One stuff if your account is part of a workspace. I thought: how awful. I want to pay, but I simply can't?

Oh, but then I noticed: You CAN add a _Gemini AI Ultra_ license via the Google Workspace Admin area, great!

Turns out: you fucking can't. That's _Google AI Ultra FOR BUSINESS_ and that IS NOT supported.

So I had to get the Google One subscription on my personal account after all.

Combine that with the _pathetic_ usage limits: somehow not token-based but a number of requests per 24-hour window (500 for Gemini 3). Add Gemini 3's incredible chattiness (it uses A LOT more requests to get something done compared to Claude), and you hit the usage limits in just 2 hours.

kytazo•41m ago
500 Internal Server Error.
morog•14m ago
ditto. Also OpenAI vector stores are down right now across the board
nakamoto_damacy•39m ago
It’s good but Gemini 3 beats it.
syntaxing•38m ago
I rarely used Codex compared to Claude because it was extremely slow in GitHub Copilot, like maybe 2-5X slower than Claude Sonnet. I really wish they just made their models faster rather than "better".
nartho•36m ago
Have you tried Mistral ? Definitely one of the fastest models
syntaxing•33m ago
My employer doesn’t offer/allow anything besides the “traditional” offerings on GitHub copilot.
levocardia•23m ago
Very interesting to see the range of people's preferences. I would almost always prefer smart over fast; I set all my LLMs to be all-thinking-all-the-time.
syntaxing•10m ago
It's a balance. I haven't felt like Codex provided anything that Sonnet 4.5 didn't, so why wait longer to get the same results?

Though that does bring up an interesting point. Anecdotally, Sonnet does a lot more grep-ing while Codex reads files straight up. That might be the difference in speed, and maybe smarter models will do better. Once this model is on Copilot, I can test it out.

andai•35m ago
Sizeable if veracious!
the__alchemist•35m ago
This is a tangent: Has anyone noticed that GPT-5.0 at some point started producing much faster, crappier answers, then 5.1 made it slower + better again? (Both in Thinking mode)
wincy•28m ago
I did notice that, I thought maybe I’d exceeded my thinking requests
Narciss•32m ago
Here we go again....
johnfn•30m ago
I've been using a lot of Claude and Codex recently.

One huge difference I notice between Codex and Claude Code is that, while Claude basically disregards your instructions (CLAUDE.md) entirely, Codex is extremely, painfully, doggedly persistent in following every last character of them. To the point that I've seen it work for 30 minutes to convolute some solution that was only convoluted because of some sentence I had thrown into the instructions and completely forgotten about.

I imagine Codex as the "literal genie" - it'll give you exactly what you asked for. EXACTLY. If you ask Claude to fix a test that accidentally says assert(1 + 1 === 3), it'll say "this is clearly a typo" and just rewrite the test. Codex will rewrite the entire V8 engine to break arithmetic.

Both these tools have their uses, and I don't think one approach is universally better. Because Claude just hacks its way to a solution, it is really fast, so I like using it for iterative web work, where I need to tweak some styles and want a fast feedback loop. Codex is much worse at that because it takes like 5 minutes to validate everything is correct. Codex is much better for longer, harder tasks that have to be correct -- I can just write some script to verify that what it did works, and let it spin for 30-40 minutes.

nico•18m ago
> Claude basically disregards your instructions (CLAUDE.md) entirely

A friend of mine tells Claude to always address him as “Mr Tinkleberry”, he says he can tell when Claude is not paying attention to the instructions on CLAUDE.md when Claude stops calling him “Mr Tinkleberry” consistently

benzible•14m ago
Yep, it's David Lee Roth's brown M&M trick https://www.smithsonianmag.com/arts-culture/why-did-van-hale...
wilg•21m ago
I have been using GPT 5 High Fast in Cursor primarily over Codex, because Codex seems to take way longer and generally annoy me by doing strange CLI stuff, but hopefully I can switch to this new one. I also tried it against Gemini 3 Pro in Cursor and it's hard to tell but at least in some cases I felt like GPT5 was giving better results.
LZ_Khan•12m ago
Woah, metr results look impressive. Still looking exponential
