
Unrolling the Codex agent loop

https://openai.com/index/unrolling-the-codex-agent-loop/
102•tosh•2h ago•45 comments

New YC homepage

https://www.ycombinator.com/
70•sarreph•4h ago•32 comments

Banned C++ Features in Chromium

https://chromium.googlesource.com/chromium/src/+/main/styleguide/c++/c++-features.md
47•szmarczak•2h ago•18 comments

Proof of Corn

https://proofofcorn.com/
273•rocauc•4h ago•196 comments

Gas Town's agent patterns, design bottlenecks, and vibecoding at scale

https://maggieappleton.com/gastown
202•pavel_lishin•6h ago•231 comments

Route leak incident on January 22, 2026

https://blog.cloudflare.com/route-leak-incident-january-22-2026/
94•nomaxx117•4h ago•15 comments

Mental Models (2018)

https://fs.blog/mental-models/
15•hahahacorn•1h ago•3 comments

Microsoft gave FBI set of BitLocker encryption keys to unlock suspects' laptops

https://techcrunch.com/2026/01/23/microsoft-gave-fbi-a-set-of-bitlocker-encryption-keys-to-unlock...
496•bookofjoe•4h ago•356 comments

TrueVault (YC W14) is hiring a Growth Lead to test different growth channels

https://www.ycombinator.com/companies/truevault/jobs/njvSGDj-growth-lead
1•jason_wang•1h ago

Gold fever, cold, and the true adventures of Jack London in the wild

https://www.smithsonianmag.com/history/gold-fever-deadly-cold-and-amazing-true-adventures-jack-lo...
22•janandonly•5d ago•0 comments

Vargai/SDK – JSX for AI video, declarative programming language for Claude Code

https://varg.ai/sdk
33•alex_varga•1d ago•11 comments

KORG phase8 – Acoustic Synthesizer

https://www.korg.com/us/products/dj/phase8/
156•bpierre•8h ago•82 comments

Notes on the Intel 8086 processor's arithmetic-logic unit

https://www.righto.com/2026/01/notes-on-intel-8086-processors.html
58•elpocko•5h ago•7 comments

Booting from a vinyl record (2020)

https://boginjr.com/it/sw/dev/vinyl-boot/
250•yesturi•12h ago•98 comments

Killing the ISP Appliance: An eBPF/XDP Approach to Distributed BNG

https://markgascoyne.co.uk/posts/ebpf-bng/
55•chaz6•5h ago•15 comments

EquipmentShare (YC W15) goes public

https://www.ycombinator.com/blog/congratulations-to-equipmentshare/
6•subsequent•1h ago•5 comments

Waypoint-1: Real-Time Interactive Video Diffusion from Overworld

https://huggingface.co/blog/waypoint-1
42•avaer•7h ago•13 comments

Nobody likes lag: How to make low-latency dev sandboxes

https://www.compyle.ai/blog/nobody-likes-lag/
46•mnazzaro•5h ago•13 comments

Proton Spam and the AI Consent Problem

https://dbushell.com/2026/01/22/proton-spam/
451•dbushell•15h ago•303 comments

Show HN: Whosthere: A LAN discovery tool with a modern TUI, written in Go

https://github.com/ramonvermeulen/whosthere
182•rvermeulen98•10h ago•66 comments

Floating-Point Printing and Parsing Can Be Simple and Fast

https://research.swtch.com/fp
71•chmaynard•4d ago•1 comment

The tech monoculture is finally breaking

http://www.jasonwillems.com/technology/2025/12/17/Tech-Is-Fun-Again/
82•at1as•7h ago•114 comments

Show HN: Zsweep – Play Minesweeper using only Vim motions

https://zsweep.com
57•oug-t•5d ago•25 comments

Neko: History of a Software Pet (2022)

https://eliotakira.com/neko/
22•mifydev•1h ago•7 comments

Nanotimestamps: Time-Stamped Data on Nano Block Lattice

https://github.com/SerJaimeLannister/nanotimestamp/wiki
5•Imustaskforhelp•4d ago•1 comment

Zotero 8

https://www.zotero.org/blog/zotero-8/
153•bouchard•4h ago•32 comments

Show HN: New 3D Mapping website - Create heli orbits and "playable" map tours.

https://www.easy3dmaps.com/gallery
25•dobodob•5h ago•13 comments

Radicle: The Sovereign Forge

https://radicle.xyz
240•ibobev•9h ago•119 comments

Anthropic Economic Index report: economic primitives

https://www.anthropic.com/research/anthropic-economic-index-january-2026-report
94•malshe•1d ago•53 comments

European Alternatives

https://european-alternatives.eu
544•s_dev•9h ago•294 comments

Unrolling the Codex agent loop

https://openai.com/index/unrolling-the-codex-agent-loop/
102•tosh•2h ago

Comments

mkw5053•1h ago
I guess nothing super surprising or new, but still a valuable read. I wish it were easier/native to reflect on the loop and/or histories while using agentic coding CLIs. I've found some success with an MCP that lets me query my chat histories, but I have to be very explicit about its use. Also, like many things, continuous learning would probably solve this.
MultifokalHirn•1h ago
thx :)
jumploops•1h ago
One thing that surprised me when diving into the Codex internals was that the reasoning tokens persist during the agent tool-call loop, but are discarded after every user turn.

This helps conserve the context window over many turns, but it can also mean some context is lost between two related user turns.

A strategy that's helped me here is having the model write progress updates (along with general plans/specs/debug notes/etc.) to markdown files, acting as a sort of "snapshot" that works across many context windows.
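
As a rough illustration of that snapshot idea (the PROGRESS.md file name, the helper, and the entry fields are invented here for the sketch, not taken from Codex or the comment above):

    # Toy sketch of the "progress snapshot" approach described above.
    # PROGRESS.md and the entry fields are made up for illustration.
    from datetime import datetime
    from pathlib import Path

    def append_snapshot(summary: str, next_steps: list[str],
                        path: str = "PROGRESS.md") -> None:
        """Append a dated progress entry the agent can re-read in a fresh context window."""
        entry = [
            f"## {datetime.now():%Y-%m-%d %H:%M}",
            "",
            f"**Done:** {summary}",
            "",
            "**Next:**",
            *[f"- {step}" for step in next_steps],
            "",
        ]
        with Path(path).open("a", encoding="utf-8") as f:
            f.write("\n".join(entry) + "\n")

    append_snapshot("Extracted retry logic into backoff.py; tests pass.",
                    ["Wire backoff into the HTTP client", "Add jitter"])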

crorella•1h ago
Same here! I think it would be good if the tooling did this by default. I've seen others using SQL for the same purpose, and even a proposal for a succinct way of representing this handoff data in the most compact form.
vmg12•1h ago
I think this explains why I'm not getting the most out of Codex; I like to interrupt and respond to things I see in the reasoning tokens.
behnamoh•44m ago
That's the main gripe I have with Codex; I want better observability into what the AI is doing so I can stop it if I see it going down the wrong path. In CC I can see it easily and stop and steer the model. In Codex, the model spends 20 minutes only for it to do something I didn't agree on. It burns OpenAI tokens too; they could save money by supporting this feature!
zeroxfe•37m ago
You're in luck -- /experimental -> enable steering.
behnamoh•36m ago
I first need to see the real-time AI thoughts before I can steer it, though! Codex hides most of them.
sdwr•1h ago
That could explain the "churn" when it gets stuck. Do you think it needs to maintain an internal state over time to keep track of longer threads, or are written notes enough to bridge the gap?
behnamoh•48m ago
But that's why I like Codex CLI: it's so bare-bones and lightweight that I can build lots of tools on top of it. Persistent thinking tokens? Let me have that using a separate file the AI writes to. The reasoning tokens we see aren't the actual tokens anyway; the model does a lot more behind the scenes, but the API keeps them hidden (all providers do that).
postalcoder•4m ago
Codex is wicked efficient with context windows, with the tradeoff of time spent. It hurts the flow state, but overall I've found it's the best at having long conversations/coding sessions.
CjHuber•37m ago
It depends on the API path. Chat Completions does what you describe, though isn't it legacy?

I've only used Codex with the Responses v1 API, and there it's the complete opposite. Already-generated reasoning tokens even persist when you send another message (without rolling back) after cancelling turns before they have finished the thought process.

Also, with Responses v1, xhigh mode eats through the context window several times faster than the other modes, which does check out with this.

EnPissant•21m ago
I don't think this is true.

I'm pretty sure that Codex uses reasoning.encrypted_content=true and store=false with the responses API.

reasoning.encrypted_content=true - The server will return all the reasoning tokens in an encrypted blob you can pass along in the next call. Only OpenAI can decrypt them.

store=false - The server will not persist anything about the conversation on the server. Any subsequent calls must provide all context.

Combined, the two options above turn the responses API into a stateless one. Without them, it will still persist reasoning tokens in an agentic loop, but it will be done statefully, without the client passing the reasoning along each time.
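
For concreteness, here is a minimal sketch of that stateless setup using the public OpenAI Python SDK; the model id and prompts are placeholders, and this is only a reading of the comment above, not Codex's actual source.

    # Sketch of a stateless Responses API exchange: reasoning comes back as an
    # encrypted blob and the client replays the full history on every call.
    # Model id and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "user", "content": "Refactor utils.py"}]

    first = client.responses.create(
        model="gpt-5.2-codex",                      # placeholder model id
        input=history,
        store=False,                                # nothing persisted server-side
        include=["reasoning.encrypted_content"],    # reasoning returned encrypted
    )

    # Pass every output item (messages plus encrypted reasoning items) back in,
    # so reasoning persists across calls without any server-side state.
    # (If the SDK version won't accept output objects directly as input,
    # dump them to dicts first.)
    history += first.output
    history.append({"role": "user", "content": "Now add unit tests"})

    second = client.responses.create(
        model="gpt-5.2-codex",
        input=history,
        store=False,
        include=["reasoning.encrypted_content"],
    )
    print(second.output_text)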

ljm•2m ago
I’ve been using agent-shell in Emacs a lot, and it stores transcripts of the entire interaction. It’s helped me out a lot of times because I can say ‘look at the last transcript here’.

It’s not the responsibility of the agent to write this transcript, it’s emacs, so I don’t have to worry about the agent forgetting to log something. It’s just writing the buffer to disk.

dfajgljsldkjag•1h ago
The best part about this is how the program acts like a human who is learning by doing. It is not trying to be perfect on the first try; it is just trying to make progress by looking at the results. I think this method is going to make computers much more helpful, because they can now handle the messy parts of solving a problem.
written-beyond•1h ago
Has anyone seriously used Codex CLI? I was using LLMs for code gen, usually through the VS Code Codex extension, Gemini CLI, and Claude Code CLI. The performance of all three of them is utter dog shit; Gemini CLI just randomly breaks and starts spamming content trying to reorient itself after a while.

However, I decided to try Codex CLI after hearing they rebuilt it from the ground up in Rust (instead of JS; not implying Rust == better). Its performance is quite literally insane, and its UX is completely seamless. They even added small nice-to-haves like ctrl+left/right to skip your cursor to word boundaries.

If you haven't, I genuinely think you should give it a try; you'll be very surprised. I saw Theo (YC Ping Labs) talk about how OpenAI shouldn't have wasted their time optimizing the CLI and should have made a better model instead, or something. I highly disagree after using it.

procinct•57m ago
Same goes for Claude Code. It literally has vim bindings for editing prompts if you want them.
behnamoh•37m ago
CC is the clunkiest PoS software I've ever used in the terminal; feels like it was vibe-coded and anthroshit doesn't give a shit.
ewoodrich•55m ago
OpenCode also has an extremely fast and reliable UI compared to the other CLIs. I’ve been using Codex more lately since I’m cancelling my Claude Pro plan, and it’s solid, but I haven’t spent nearly as much time with it as with Claude Code or Gemini CLI yet.

But tbh, OpenAI openly supporting OpenCode is the bigger draw of the plan for me, though I do want to spend more time with native Codex as a basis of comparison against OpenCode when using the same model.

I’m just happy to have so many competitive options, for now at least.

behnamoh•45m ago
Seconded. I find Codex lacks only two things:

- hooks (this is a big one)

- a better UI to show me what changes are going to be made

The second one makes a huge difference, and it's the main reason I stopped using OpenCode (lots of other reasons too). In CC, I am shown a nice diff that I can approve/reject. In Codex, the AI makes lots of changes but doesn't pinpoint what changes it's making or going to make.

written-beyond•41m ago
Yeah, it's really weird with automatically making changes. I read in its chain of thought that it was going to request approval for something from the user; the next message was "approval granted", and it went ahead and did it. Very weird...
williamstein•51m ago
I strongly agree. The memory and CPU usage of codex-cli is also extremely low. That codex-cli is open source is also valuable, because you can easily get definitive answers to any questions about its behavior.

I also was annoyed by Theo saying that.

georgeven•45m ago
I found Codex CLI to be significantly better than Claude Code. It follows instructions and executes the exact change I want without going off on an "adventure" like Claude Code does. Also, the 20-dollar-per-month sub tier gives very generous limits for the most powerful model option (5.2 codex high).

For context, I work on SSL bioacoustic models.

behnamoh•42m ago
Codex the model (not the CLI) is the big thing here. I've used it in CC and with my Claude setup; it can handle things Opus never could. It's really a secret weapon not a lot of people talk about. I'm not even using xhigh most of the time.
copperx•36m ago
When you say CC, is it Codex CLI or Claude Code?
behnamoh•2m ago
Claude Code
wahnfrieden•24m ago
No, the Codex harness is also optimized for the Codex models. I highly recommend using first-party OpenAI harnesses for Codex.
CuriouslyC•41m ago
The problem with Codex right now is that it doesn't have hook support. It's hard to overstate how big of a deal hooks are; the Ralph loop that the newer folks are losing their shit over is like the level-0, most rudimentary use of hooks.

I have a tool that reduces agent token consumption by 30%, and it's only viable because I can hook the harness and catch agents being stupid, then prompt them to be smarter on the fly. More at https://sibylline.dev/articles/2026-01-22-scribe-swebench-be...

estimator7292•2m ago
It's pretty good, yeah. I get coherent results >95% of the time (on well-known problems).

However, it seems to really only be good at coding tasks. For anything even slightly out of the ordinary, like planning dialogue and plot lines, it almost immediately starts producing garbage.

I did get it stuck in a loop the other day. I half-assed a git rebase and asked Codex to fix it. It did eventually resolve all the rebased commits, but it just kept going. I don't really know what it was doing; I think it made up some directive after the rebase completed, and it just kept chugging until I pulled the plug.

The only other tool I've tried is Aider, which I have found to be nearly worthless garbage.

ppeetteerr•47m ago
I asked Claude to summarize the article and it was blocked, haha. Fortunately, I have the Claude plugin installed in Chrome, and it used that to read the contents of the page.
sdwvit•46m ago
Great achievement. What did you learn?
ppeetteerr•44m ago
Nothing particularly insightful, other than to avoid modifying previous messages so as not to invalidate the cache.
rvnx•38m ago
Summary by Claude:

    Codex works by repeatedly sending a growing prompt to the model, executing any tool calls it requests, appending the results, and repeating until the model returns a text response
rvnx•42m ago
Codex agent loop:

    Call the model. If it asks for a tool, run the tool and call again (with the new result appended). Otherwise, done
https://i.ytimg.com/vi/74U04h9hQ_s/maxresdefault.jpg
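Spelled out as code, a rough sketch of that loop; call_model and run_tool here are hypothetical stand-ins for the model API call and the local tool executor, not Codex internals.

    # Sketch of the unrolled agent loop described above. call_model() and
    # run_tool() are hypothetical stand-ins, not Codex's actual code.
    def agent_turn(history, call_model, run_tool):
        """One user turn: keep calling the model, running requested tools and
        appending their results, until the model answers with plain text."""
        while True:
            response = call_model(history)        # model sees the full, growing history
            history.extend(response["output"])    # append whatever the model produced

            tool_calls = [item for item in response["output"]
                          if item.get("type") == "tool_call"]
            if not tool_calls:
                return response["text"]           # no tool requests: we're done

            for call in tool_calls:
                result = run_tool(call["name"], call["arguments"])
                history.append({"type": "tool_result",
                                "call_id": call["id"],
                                "output": result})
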
jmkni•32m ago
I think this should be called the Homer Simpson loop; it seems more apt.
rvnx•30m ago
They sadly renamed the Ralph Wiggum loop due to copyright concerns, so there's little hope for Homer :(

https://github.com/anthropics/claude-plugins-official/commit...

jmkni•28m ago
ha I didn't know that, very interesting
coffeeaddict1•32m ago
What I really want from Codex is checkpoints à la Copilot. There are a couple of issues [0][1] open about this on GitHub, but it doesn't seem to be a priority for the team.

[0] https://github.com/openai/codex/issues/2788

[1] https://github.com/openai/codex/issues/3585

wahnfrieden•27m ago
They routinely mention on GitHub that they heavily prioritize based on "upvotes" (emoji reacts) on GitHub issues, and they close issues that don't receive many. So if you want this, please "upvote" those issues.
postalcoder•2m ago
They reverted it because of how it interacted with git staging. Do other harnesses have the same issues?
tecoholic•26m ago
I use two CLIs - Codex and Amp. Almost every time I need a quick change, Amp finishes the task in the time it takes Codex to build context. I think it’s got a lot to do with the system prompt and the “read loop” as well: Amp will read multiple files in one go and get to the task, but Codex will crawl through the files almost one by one. Has anyone noticed this?
sumedh•20m ago
Which GPT model and reasoning level did you use in Codex and Amp?

Generally, I have noticed GPT-5.2 Codex is slower than Sonnet 4.5 in Claude Code.