Δ-Mem: Efficient Online Memory for Large Language Models

https://arxiv.org/abs/2605.12357
91•44za12•3h ago•18 comments

Accelerando (2005)

https://www.antipope.org/charlie/blog-static/fiction/accelerando/accelerando.html
44•eamag•1h ago•12 comments

Fecal transplants for autism deliver success in clinical trials

https://refractor.io/adhd-autism/fecal-transplants-for-autism-delivers-success-in-clinical-trials/
68•breve•3h ago•33 comments

Futhark by Example

https://futhark-lang.org/examples.html
49•tosh•3h ago•13 comments

Europe built sovereign clouds to escape US control. Forgot about the processors

https://www.theregister.com/systems/2026/05/16/europe-built-sovereign-clouds-to-escape-us-control...
62•beardyw•1h ago•44 comments

SANA-WM, a 2.6B open-source world model for 1-minute 720p video

https://nvlabs.github.io/Sana/WM/
9•mjgil•58m ago•6 comments

Project Gutenberg – keeps getting better

https://www.gutenberg.org/
1009•JSeiko•20h ago•207 comments

Kyber (YC W23) Is Hiring a Founding Marketer

https://www.ycombinator.com/companies/kyber/jobs/1rLQAro-founding-marketer-content-community
1•asontha•1h ago

A Tiny E Reader

https://nthp.me/blog/2026/a-tiny-e-reader/
26•louismerlin•2d ago•6 comments

Nearly 50 Years Later, WKRP in Cincinnati Becomes a Real Radio Station

https://www.openculture.com/2026/05/nearly-50-years-later-wkrp-in-cincinnati-becomes-a-real-radio...
28•bookofjoe•3d ago•15 comments

Frontier AI has broken the open CTF format

https://kabir.au/blog/the-ctf-scene-is-dead
200•frays•6h ago•170 comments

Ploopy Bean: a trackpoint for every computer

https://ploopy.co/shop/bean-pointing-stick/
120•jibcage•3d ago•52 comments

I believe there are entire companies right now under AI psychosis

https://twitter.com/mitchellh/status/2055380239711457578
1497•reasonableklout•16h ago•767 comments

Gaining control of every projector and camera on campus

https://www.edna.land/blogs/posts/scanning/
48•ednaordinary•2d ago•14 comments

The bird eye was pushed to an evolutionary extreme

https://www.quantamagazine.org/how-the-bird-eye-was-pushed-to-an-evolutionary-extreme-20260513/
146•sohkamyung•2d ago•54 comments

Orthrus-Qwen3: up to 7.8× tokens/forward on Qwen3, identical output distribution

https://github.com/chiennv2000/orthrus
136•FranckDernoncou•14h ago•21 comments

The Physics–and Physicality–Of Extreme Juggling (2018)

https://www.wired.com/story/the-physicsand-physicalityof-extreme-juggling/
5•ColinWright•3d ago•0 comments

The main thing about P2P meth is that there's so much of it (2021)

https://dynomight.net/p2p-meth/
147•tomjakubowski•13h ago•166 comments

Additive Blending on the Nintendo 64

https://phoboslab.org/log/2026/05/n64-additive-blending
139•ibobev•22h ago•16 comments

Where to buy a non-Apple, non-Google smartphone

https://www.theregister.com/on-prem/2026/05/01/where-to-buy-a-non-apple-non-google-smartphone/521...
77•_____k•4h ago•51 comments

OpenClaw Creator Spent $1.3M on OpenAI Tokens in 30 Days

https://twitter.com/steipete/status/2055346265869721905
44•eamag•1h ago•48 comments

England Runestones

https://en.wikipedia.org/wiki/England_runestones
64•cl3misch•3d ago•24 comments

The sigmoids won't save you

https://www.astralcodexten.com/p/the-sigmoids-wont-save-you
228•Tomte•1d ago•216 comments

A 0-click exploit chain for the Pixel 10

https://projectzero.google/2026/05/pixel-10-exploit.html
396•happyhardcore•23h ago•215 comments

How to Write to SSDs [pdf]

https://www.vldb.org/pvldb/vol19/p1469-lee.pdf
141•matt_d•14h ago•17 comments

Naturally Occurring Quasicrystals

https://johncarlosbaez.wordpress.com/2026/05/14/naturally-occurring-quasicrystals/
107•lukeplato•1d ago•10 comments

A Meta employee gets real about the horror of working there

https://sfstandard.com/pacific-standard-time/2026/05/15/meta-employee-gets-real-horror-working-ri...
6•forrestbrazeal•45m ago•4 comments

Charity – Categorical programming language (1998)

https://github.com/mietek/charity-lang/blob/master/doc/README.md
11•matteodelabre•3d ago•1 comment

Bill to block publishers from killing online games advances in California

https://arstechnica.com/gaming/2026/05/bill-to-keep-online-games-playable-clears-key-hurdle-in-ca...
513•Lihh27•17h ago•335 comments

EMiX: Emulating Beyond Single-FPGA Limits

https://arxiv.org/abs/2604.27012
14•PaulHoule•2d ago•1 comment

OpenClaw Creator Spent $1.3M on OpenAI Tokens in 30 Days

https://twitter.com/steipete/status/2055346265869721905
44•eamag•1h ago

Comments

mtct88•47m ago
It's a very peculiar way to flex.
Avicebron•39m ago
It's like the nerd equivalent of rolling coal?
discordance•21m ago
I work at a big tech company and we're being measured on how many tokens we consume.

We know it's totally stupid, but unfortunately tokenmaxxing is real. I know our management line isn't that dumb, but this is what you get when the business is the one selling it.

comboy•46m ago
Worth mentioning that OpenAI hired him some time ago.
zxornand•42m ago
And was he 5x more productive in those 30 days than a year's worth of a dev making $200k/yr?

Doubtful lol, dude's killing the environment just for fun at this point.

vessenes•34m ago
If you review the OpenClaw release schedule and code output, you will see that yes, he was. I'm not saying you'll like what you see, but the OpenClaw release schedule is far faster than human ability to assess it.
SecretDreams•30m ago
That's a metric for management to pump AI if I've ever seen one.
rowanG077•29m ago
It's fast for sure. But not 5 years of dev time compressed into 30 days fast.
Philip-J-Fry•23m ago
With a lot of these AI tools, yeah, they release very often. But half the features they add aren't even that useful. They just add shit because they can, and they introduce bugs and change behaviour all the time.

Opencode has the same problems. They often do multiple releases of that app a day, yet within the span of a week or two I have had to update my config because some random change altered the behaviour and broke my permissions. Or I've noticed the way the app renders is suddenly different.

Yet my day-to-day usage has barely changed since the version I installed last year. It's like everything changes but nothing changes.

realusername•18m ago
> the OpenClaw release schedule is far faster than human ability to assess it.

That doesn't sound very positive to me...

risyachka•14m ago
That's the single reason it is faster: just pushing whatever to prod.

All projects can become fast if they drop guardrails.

This does not correlate with a productivity increase.

minraws•12m ago
I am not joking when I say this: if you pay me 1.3 million dollars today, I will get so much more done with just a single $200 Codex sub in 30 days than he has in 30 days. I can promise you that.

I just checked the code and feature output, and I could build all of that in 15 days for 1.3M USD. Fuck it, I would do it for 1M...

Scratch that: if it's 300K, then sure, I could do the same, if you paid me that for 30 days of work. Lmao, the quality and the feature volume are just not worth paying that much money for.

I am not saying this because I don't like LLMs or think AI coding can't work, but folks, whatever OpenClaw has built for that much money is not worth nearly that much...

wiseowise•13m ago
> And was he 5x more productive in those 30d than a years worth of a dev making 200k/yr?

He was, when it comes to marketing. This is what most people don't understand: Peter is a great marketing guy who got hired because of a hype vision, not because he is an outstanding engineer. Think of it like OpenAI hiring the MrBeast of the coding world.

thomasahle•41m ago
He used 600B tokens in 30 days.

I use more than 150B a month with just 15 Codex accounts.

60 accounts is "just" $12,000/month. So Peter could "save" 100x by using monthly accounts.

Of course, he doesn't have to, as he works at OpenAI now.
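
For what it's worth, the arithmetic holds up; a quick back-of-the-envelope sketch in Python, using only the figures quoted in this thread (the per-account throughput is an extrapolation from the 15-account number, so treat it as an assumption):

    # Back-of-the-envelope check of the figures quoted above.
    TOKENS_USED = 600e9               # reported usage over 30 days
    TOKENS_PER_ACCOUNT = 150e9 / 15   # assumed ~10B tokens/month per $200 Codex account
    SUB_PRICE = 200                   # USD per account per month
    API_COST = 1.3e6                  # USD, reported raw API cost

    accounts = TOKENS_USED / TOKENS_PER_ACCOUNT   # 60 accounts
    sub_cost = accounts * SUB_PRICE               # $12,000/month
    savings = API_COST / sub_cost                 # ~108x, i.e. roughly "100x"

    print(f"{accounts:.0f} accounts at ${sub_cost:,.0f}/month -> ~{savings:.0f}x cheaper than raw API")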

MadxX79•38m ago
Sounds like a healthy industry, selling tokens at 1000x below cost.
SecretDreams•29m ago
It's to build a moat, of course!

Narrator: there was no moat

ianm218•36m ago
What do you do with all those accounts?
peteforde•56s ago
What I truly don't understand, as a daily heavy Opus 4.7 user, is how you can coherently prompt 15 different parallel conversations at the same time.

For me it's not even a "what the hell are you working on" so much as a complete inability to understand how you can keep so many different processes working on distinct tasks. It simply doesn't map onto how I use these tools.

I spend most of my day writing extremely detailed prompts and that's how I'm able to get the sort of excellent results that confound skeptics. But I have to be honest with you: I don't think I can write (or think) fast enough to do two of these at a time, much less 15.

I definitely could not review what they are generating with any degree of confidence.

I'm really hoping you can explain what the heck your usage pattern actually looks like, because reading this makes me feel like I'm missing something.

tom1337890•40m ago
After trying OpenClaw a bit myself, no wonder. Without the best models, capabilities drop significantly. And I guess he has a lot of automations and stuff, which explains the $19,000 daily spend. I hit my personal spend limit when it cost like $40 to get Google auth tokens working, which is very complicated when you run OpenClaw on a VPS. And it even broke like a week later. Maybe one could justify the $40 if it saved me time instead, but I was babysitting OpenClaw the whole way anyhow. So I actually spent double: money plus time.

Btw, same frustration for me setting up Signal, WhatsApp, or Slack...

vessenes•31m ago
It’s a moving target for sure. I’m excited for the LTS release series - keeping up with two or three releases a week is not for humans :)
boesboes•40m ago
He should be brought to The Hague XD
wiseowise•38m ago
What a clown. And Twitter bozos will cheer and clap. As far as money spent goes, this is still much better than rounding up and/or bombing brown people, but it shows the insanity of the current market. The saddest part is that bootlickers/temporarily embarrassed AI millionaires will defend this.

And of course I'm just yet another envious hater from "the orange website". Your conscience is clear, AI bros. /s

vessenes•32m ago
OpenClaw is the fastest-growing open source project ever. This isn't clowning.
boxed•29m ago
Both things can be true. The Chinese Communist Party was one of the biggest social movements ever. Millions died.
phpnode•25m ago
Goodness me that’s quite a comparison
wiseowise•29m ago
> OpenClaw is the fastest-growing open source project ever.

By which metrics?

> This isn’t clowning.

Why?

orphea•27m ago
Yep, and surely it has nothing to do with buying GitHub stars. Very organic growth.
backscratches•11m ago
Lol if your only metric is "I say so"
athrow•35m ago
What does he have to show for it?
Nzen•28m ago
tl;dr: Peter Steinberger shared a product demo for CodexBar [0] with a graph of OpenAI token usage. The graph shows over a million dollars spent, a preference for gpt-5.5, and twenty thousand spent today.

[0] https://github.com/steipete/CodexBar

However, I do not see a strong reason to believe that this is his actual, personal usage. It could be all OpenClaw usage, or some subset of OpenAI usage, given that he works there. I suspect it is far more likely to be fake data [1] that exercises the graph library in a visually satisfying way (see the sketch after the footnotes). Notice that it shows no usage for a 'week' after April 15 (a Wednesday), but picks up a bunch later. As marketing copy it needn't have any basis in reality [2]. I should hope OpenAI would put a review procedure in front of their acquired entrepreneurs that prevents accidentally exposing trade secrets [3].

[1] https://github.com/faker-js/faker

[2] https://www.reddit.com/r/proceduralgeneration/comments/lf2n4...

[3] https://tvtropes.org/pmwiki/pmwiki.php/Main/PostingWhatYouSh...
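
For what it's worth, "fake data that exercises the graph library" is trivial to produce. A minimal, purely hypothetical Python sketch (not the linked faker library, which is JavaScript) with the dead week baked in:

    import random
    from datetime import date, timedelta

    # Hypothetical demo data: noisy, growing daily spend with a
    # deliberately dead week after April 15, as seen in the graph.
    random.seed(0)
    start = date(2026, 4, 1)
    dead_week = {start + timedelta(days=15 + i) for i in range(7)}

    series = []
    for day in range(45):
        d = start + timedelta(days=day)
        spend = 0.0 if d in dead_week else random.uniform(5_000, 20_000) * (1 + day / 45)
        series.append((d.isoformat(), round(spend, 2)))

    print(series[:3])  # feed the full series to any charting library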

Tiberium•27m ago
This is quite a misleading title, because this is the raw API cost, but he (obviously) has unlimited usage as an OpenAI employee. Moreover, if you use e.g. the $200 Codex sub, you get roughly $5k-$6k of monthly API-equivalent usage if you exhaust your allowance every week, if not more, which shows that the raw API price is (likely) not what this costs OpenAI, unless they're subsidizing all of it.

He did clarify that it was with fast mode. Without fast mode it'd "only" be $300k in raw API cost, or ~60 $200 Codex subscriptions.
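
A quick sanity check of that equivalence, using only the numbers in this comment (the per-sub figure is an assumption: the low end of the $5k-$6k estimate above):

    # Rough equivalence check, all numbers quoted in this thread.
    api_cost_fast = 1_300_000   # USD raw API cost with fast mode
    api_cost_slow = 300_000     # USD raw API cost without fast mode
    sub_api_value = 5_000       # assumed USD/month of API-equivalent usage per $200 sub

    print(api_cost_slow / sub_api_value)   # 60.0  -> the "~60 Codex subscriptions"
    print(api_cost_fast / api_cost_slow)   # ~4.33 -> fast mode's cost multiplier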

rvz•10m ago
But even going with the $5k-$6k monthly usage on a $200 Codex subscription, going over its limits is unrealistic in the long term, and that is just ONE person.

Let's say I'm at a casino spending a lot on chips, but I also happen to work at the casino. I'm not really losing money whether I win or lose, since I'm using the house's money and there's little risk involved on every press of the button. The risk is far higher if I don't have that level of access and keep spending the same amount of money on tokens.

The same is true here with these agents. Some companies will realize that they can no longer afford to spend millions a month on tokens, and even startups will realize they can't spend $5k-$6k per person per month.

I can only see efficient local models making sense as a way to recover from this unnecessary spending, which amounts to light gambling on tokens.

Terretta•6m ago
Even at an unlimited budget, there is a crossover where outsourcing thinking to the machine costs more than the humans it replaces.

What I mean by this:

1. Intern-, analyst-, junior-, or offshore-level coding is cheaper when done by the machine.

// Side note: there is a good reason the industry invests in suboptimal output from this group (it develops future senior engineers), a benefit that moves to the "cost" column when using an LLM, but nobody's accounting for that.

2. Getting the interns, analysts, juniors, or offshore teams to do the right thing costs a multiple of the coding effort: the PdM/PjM work of course, but also the Stakeholder, Product Owner, Architect, Principal Engineer, QA, and SRE work.

3. If you are not a principal- or staff-level engineer, you are likely unqualified to catch and fix the errors LLMs make across engineering, much less across the rest of the PDLC loop (the product development lifecycle, which includes SDLC and SRE).

4. For LLM output to be useful, your "harness" has to incorporate all of that as well, which, because it's so much harder than transliterating spec to code, balloons token usage exponentially.

5. Today it is faster, more efficient, and cheaper to work with LLMs "XP" (eXtreme Programming) style: pairing with the LLM, actively co-creating and co-reviewing, steering for more effective turns.

So, your options are:

- ship garbage while costing less than a median first world SWE

- pair with the LLM actively for the benefits of XP

- add enough harness and steering that the LLM costs more than SWEs, and it still needs a human in the loop, "move fast and break things to find out what's broken" style

I would expect that within a couple of years, these other disciplines can be baked in well enough that the machine costs less for everything but surprises.
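
As a toy illustration of that crossover (all numbers below are hypothetical, chosen only to show the shape of the argument, not taken from the thread):

    # Hypothetical cost-crossover sketch: token spend grows with how much
    # PDLC work the harness absorbs, while SWE cost stays roughly flat.
    SWE_MONTHLY_COST = 20_000          # hypothetical fully loaded SWE, USD/month

    def llm_monthly_cost(harness_factor: float) -> float:
        """Hypothetical: base coding spend times a harness multiplier."""
        BASE_CODING_SPEND = 2_000      # USD/month, spec-to-code only
        return BASE_CODING_SPEND * harness_factor

    for factor in (1, 3, 10, 30):      # more harness = more tokens
        cost = llm_monthly_cost(factor)
        verdict = "cheaper than" if cost < SWE_MONTHLY_COST else "costs more than"
        print(f"harness x{factor}: ${cost:,.0f}/mo -> {verdict} one SWE")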

faangguyindia•27m ago
How many of those tokens were spent to buy fake stars using fake email signups?
Philip-J-Fry•27m ago
So he's spent $20k in one day. There's not a chance in hell he's actually doing productive work with all these tokens.

Grifters gonna grift. What a state of affairs.

malshe•21m ago
Come on, he is very productive on Twitter /s
malshe•26m ago
AI bros love hyping their insanely inefficient token usage. It's become some sort of dick-measuring contest. And if you work for OpenAI, of course you can claim insane measurements.

Just last week I saw a dude boasting about how they used their $20/month ChatGPT subscription to earn $15 (or some similarly trivial amount) in a bug bounty by running the model all day. Sam Altman replied to that tweet, though not entirely positively.

OpenAI has been removing limits on token usage to take on Anthropic, but I'm sure most of the users they're acquiring are these AI bros burning tokens for the sake of it. Massive price hikes are coming after the OpenAI and Anthropic IPOs, probably an order of magnitude larger than what happened to ride sharing.

vslira•26m ago
Regardless of one's opinion about AI, from a product perspective this seems somewhat similar to a dev using his 48 GB RAM machine and latest iPhone to test an app that will be used by consumers on entry-level devices.
Terretta•24m ago
The menu bar app mentioned is a MITM (man in the middle), and it rightly discloses that it gets all your session creds and uses them, along with Keychain and Full Disk Access:

Privacy: Reuses existing provider sessions — OAuth, device flow, API keys, browser cookies, local files — so no passwords are stored.

macOS permissions: Full Disk Access for Safari cookies, Keychain access for cookie decryption and OAuth flows...

It's excellent that this is disclosed; it's a reminder of how things work and of the tradeoffs you're making to use it.

hansmayer•22m ago
What product or feature did he build with it and how much ARR did it generate for OpenAI?
0gs•19m ago
You have to admit: he is not as difficult to project paratechnical admiration onto as sama is. Maybe the board wants him to be the next CEO.
Robdel12•17m ago
Once you see how much crap they're running to police the agents on the repo, you'll "get" the spend: https://x.com/steipete/status/2055405041843052792

I won't lie: if I had access to this, I'd do the exact same thing.

tedggh•11m ago
Same mindset as Marc Andreessen when working on Mosaic: Design for infinite (Internet) bandwidth.
danpalmer•1m ago
"All that automation allows us to run extremely lean"

He has a different opinion of what it means to be lean than almost everyone else. That's fine, he's allowed to, but it's something you have to understand to make sense of any of his comments. He has a radically different set of values from most people.

wolttam•14m ago
Nobody here is talking about what this represents for demand on these models, if these numbers aren't made up.

One person using 600B tokens in a month. The most I've hit is around 500M tokens, and I thought that was a huge amount.

We're going to have some major compute shortages for a while.
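
To put those volumes in perspective as sustained demand, simple arithmetic on the figures quoted here:

    # Sustained throughput implied by the monthly figures above.
    SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

    for label, tokens in [("600B tokens/month", 600e9), ("500M tokens/month", 500e6)]:
        print(f"{label} ~ {tokens / SECONDS_PER_MONTH:,.0f} tokens/sec, sustained")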

voidfunc•9m ago
500M tokens is easy... I'm burning about 2B a week.
onion2k•4m ago
Jensen Huang was saying humanity is going to need 1000x the current energy production in the future. He might not be wrong.
yodakohl•12m ago
You can look at the output here: https://github.com/steipete
Sample commit from 5 minutes ago: https://github.com/openclaw/crabbox/pull/113
May 2026: 8,826 commits in 94 repositories.