
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
530•klaussilveira•9h ago•146 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
860•xnx•15h ago•519 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
72•matheusalmeida•1d ago•13 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
180•isitcontent•9h ago•21 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
182•dmpetrov•10h ago•80 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
294•vecti•11h ago•130 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
70•quibono•4d ago•13 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
343•aktau•16h ago•168 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
339•ostacke•15h ago•90 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
434•todsacerdoti•17h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
237•eljojo•12h ago•147 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
373•lstoll•16h ago•252 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
13•romes•4d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
6•videotopia•3d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
41•kmm•4d ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
220•i5heu•12h ago•162 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
91•SerCe•5h ago•75 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
62•phreda4•9h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•82 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
38•gfortaine•7h ago•11 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
127•vmatsiiako•14h ago•53 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
18•gmays•4h ago•2 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
261•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1029•cdrnsf•19h ago•428 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
55•rescrv•17h ago•18 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
83•antves•1d ago•60 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
18•denysonique•6h ago•2 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
5•neogoose•2h ago•1 comment

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
109•ray__•6h ago•54 comments

I let my AI agents run unsupervised and they burned $200 in 2 hours

https://blog.justcopy.ai/p/i-let-my-ai-agents-run-unsupervised
21•anupsingh123•3mo ago

Comments

anupsingh123•3mo ago
Classic "I'll be right back" moment that cost me real money.

Building justcopy.ai - lets you clone, customize and ship any website. Built 7 AI agents to handle the dev workflow automatically.

Kicked them off to test something. Went to grab coffee.

Came back to a $100 spike on my OpenRouter bill. First thought: "holy shit we have users!"

We did not have users.

Added logging. The agent was still running. Making calls. Spending money. Just... going. Completely autonomous in the worst possible way. Final damage: $200.

The fix was embarrassingly simple:

- Check for interrupts before every API call
- Add hard budget limits per session
- Set timeouts on literally everything
- Log everything so you're not flying blind
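
A minimal sketch of what those four guards could look like, assuming the OpenAI-compatible OpenRouter endpoint; the AgentSession class, the dollar limits, and the per-token rates below are illustrative placeholders, not our production code:

    import signal
    import time

    from openai import OpenAI  # OpenRouter exposes an OpenAI-compatible API

    # Illustrative limits - placeholders, not the numbers we actually use.
    MAX_SPEND_PER_SESSION_USD = 5.00
    MAX_SESSION_SECONDS = 15 * 60
    REQUEST_TIMEOUT_SECONDS = 60

    # Placeholder per-token rates; look up the real ones for your model.
    INPUT_USD_PER_TOKEN = 0.25 / 1_000_000
    OUTPUT_USD_PER_TOKEN = 1.00 / 1_000_000

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",              # your OpenRouter key
        timeout=REQUEST_TIMEOUT_SECONDS,  # timeout on every request
    )

    class BudgetExceeded(Exception):
        pass

    class AgentSession:
        def __init__(self):
            self.spent_usd = 0.0
            self.started = time.monotonic()
            self.interrupted = False
            # Ctrl-C just sets a flag that gets checked before the next call.
            signal.signal(signal.SIGINT, lambda *_: setattr(self, "interrupted", True))

        def check(self):
            if self.interrupted:                                  # interrupt check
                raise KeyboardInterrupt("operator interrupt")
            if self.spent_usd >= MAX_SPEND_PER_SESSION_USD:       # hard budget limit
                raise BudgetExceeded(f"spent ${self.spent_usd:.2f}")
            if time.monotonic() - self.started > MAX_SESSION_SECONDS:
                raise TimeoutError("session wall-clock limit hit")  # session timeout

        def call(self, messages):
            self.check()  # run every guard before every API call
            resp = client.chat.completions.create(
                model="deepseek/deepseek-v3.2-exp",
                messages=messages,
            )
            u = resp.usage
            self.spent_usd += (u.prompt_tokens * INPUT_USD_PER_TOKEN
                               + u.completion_tokens * OUTPUT_USD_PER_TOKEN)
            # Log everything so you're not flying blind.
            print(f"[agent] tokens={u.total_tokens} total_spend=${self.spent_usd:.4f}")
            return resp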

Basically: autonomous ≠ unsupervised. These things will happily burn your money until you tell them to stop.

Has this happened to anyone else? What safety mechanisms are you using?

fragmede•3mo ago
Privacy.com credit card with a limit set, and making sure that billing is not set to auto on the LLM platform.
anupsingh123•3mo ago
How would that help with supervising agent runs for each user on justcopy.ai?
fragmede•3mo ago
Anthropic won't run your API calls if you're out of API credits (and on that plan) so if there's only $10 in the account, you run $10 worth of API calls, and then the calls fail instead of costing you money.
W3schoolz•3mo ago
What a great learning opportunity! Supervision is key and budget limits are highly valuable in preventing surprises.

That said, I think a budget limit of $5-10k per agent makes sense IMO. You're underpaying your agents and won't get principal engineer quality at those rates.

magicalhippo•3mo ago
I thought the hotel AIs playing poker together in Altered Carbon were a bit cheesy until these newfangled LLM-driven agents came along, and it all seemed a lot more realistic.

Agents with nothing to do, just doing things for the sake of doing things.

Seems we're there.

vorpalhex•3mo ago
"Good job claude, go ahead and fire up some poker with your friends for a few hours. You've earned some downtime."

I am now going to make a multi-agent poker MCP as a joke. Thank you.

SpaceNoodled•3mo ago
My chief safety mechanism is not using money-burning slop generators.
anupsingh123•3mo ago
That's one approach. For me, the agent setup cut what used to be a full day of manual work down to minutes - even with the $200 learning tax, that's still a net win. But I get the skepticism.
leptons•3mo ago
Oh, they burned a lot more than $200; you just paid $200. These things cost way more than what people pay for them, with the price heavily subsidized.
simonw•3mo ago
I think the opposite is much more likely to be true: that vendors who charge money for inference are charging more than it costs them to service a prompt.

I've heard from sources that I trust that both AWS and Google Gemini charge more than it costs them in energy to run inference.

You can get a good estimate for the truth here by considering open weight models. It's possible to determine exactly how much energy it costs to serve DeepSeek V3.2 Exp, since that model is open weight. So run that calculation, then take a look at how much providers are charging to serve it and see if they are likely operating at a loss.

Here are some prices for that particular model: https://openrouter.ai/deepseek/deepseek-v3.2-exp/providers
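
For the shape of that back-of-envelope calculation, here is a sketch; every figure in it is an assumed placeholder (node power, electricity rate, throughput), not measured data, so substitute real numbers before drawing any conclusion:

    # Energy cost per million tokens vs. the listed price.
    NODE_POWER_KW = 10.0              # assumed: one 8-GPU inference node, incl. overhead
    ELECTRICITY_USD_PER_KWH = 0.08    # assumed electricity rate
    NODE_THROUGHPUT_TOK_PER_S = 2000  # assumed aggregate tokens/s across batched requests

    energy_usd_per_hour = NODE_POWER_KW * ELECTRICITY_USD_PER_KWH
    tokens_per_hour = NODE_THROUGHPUT_TOK_PER_S * 3600
    energy_usd_per_million_tokens = energy_usd_per_hour / tokens_per_hour * 1_000_000

    print(f"energy cost ~ ${energy_usd_per_million_tokens:.2f} per 1M tokens")
    # Compare that to the per-million-token prices on the page above; if the
    # listed price sits well above it (plus hardware amortization and margin),
    # the provider is unlikely to be serving the model at a per-token loss.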

Tade0•3mo ago
If that's the case, then why are AI companies bleeding money?

Or: what are they bleeding money on?

anupsingh123•3mo ago
btw this was DeepSeek-V3.2. If I'd been using Claude Sonnet 4.5, we'd be looking at a $2000 bill instead.
Tade0•3mo ago
Okay, yikes. Good thing that you can even set up those controls, unlike with that other company in the compute infrastructure business.
barrkel•3mo ago
Research runs, mostly.

https://epoch.ai/data-insights/openai-compute-spend

simonw•3mo ago
They lose money on research, on training, and on offering model trials for free (a marketing expense).

That doesn't mean that when they do charge for the models - especially via their APIs - they are serving them at a unit-cost loss.

surgical_fire•3mo ago
Depends on the vendor and how they charge. OpenAI loses money on subscriptions [1]. Maybe the people who pay 200 bucks for a subscription are exactly the kind of people who will try to squeeze the maximum out of it, while the 20 bucks tier has more of the type of user who pays but doesn't use it all that much?

I would presume that companies selling compute for AI inference either make some money or at least break even when they serve a request. But I wouldn't be surprised if they are subsidizing this cost for the time being.

[1]: https://finance.yahoo.com/news/sam-altman-says-losing-money-...

simonw•3mo ago
That "losing money on subscriptions" story is a one-off Sam Altman tweet from January 2025, when they were promoting their brand new $200 account and the first version of Sora. I wouldn't treat that as a universal truth.

https://twitter.com/sama/status/1876104315296968813

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected"

surgical_fire•3mo ago
Sam Altman is a bullshitter. A liar cares about the truth and attempts to hide it. A bullshitter doesn't care if something is true or false, and is just using rhetoric to convince you of something.

I don't doubt that it is true that they lose money on a $200 subscription, because the people who pay $200 are probably the same people who will max out usage over time, no matter how wasteful. Sam Altman was framing it as "it's so useful people are using it more than we expected!", because he is interested in having everyone believe that LLMs are the future. It's all bullshit.

If I had to guess, they probably at least break even on API calls, and might make some money on lower-tier subscriptions (i.e. people who pay for it but use it sparingly on an as-needed basis).

But that is boring, and hints at limited usability. Investors won't want to burn hundreds of billions in cash for something that may be sort of useful. They want destructive amounts of money in return.

Tade0•3mo ago
Ok, fine, but I think it's disingenuous to only mention energy expenditure. There's also infrastructure, necessary re-training and R&D - of which we don't know how much must be spent just to stay in the market.
simonw•3mo ago
Competitive, venture backed companies losing money when you take R&D into account in a high growth market is how the tech industry has worked for decades.

Shopify, Uber and Airbnb all hit profitability after 14 years. Amazon took 9.

Tade0•3mo ago
The companies mentioned didn't require the sort of R&D AI does.

And this isn't something that will go away anytime soon. OpenAI for instance is projecting that in 2030 R&D will still account for 45% of their costs. They think they'll be profitable by that time, or so they're telling investors.

leptons•3mo ago
And none of those companies lost anywhere near as much money as "AI" is losing now and will continue to lose. Just because they become profitable 5 or 10 or 15 years from now does not mean they will be able to pay off the hundreds of billions to trillions spent getting them there anytime soon. And for what? AI slop ruining every fucking thing while heating the planet ever faster? Sounds like a great future we have ahead with "AI".
Ferret7446•3mo ago
On building the next new feature/integration/whatever? I feel like this should be a rhetorical question, but the fact that it was asked makes me feel it is not...
beAbU•3mo ago
You can't conveniently ignore the cost of model development and training.

This is like saying solar power is free if you ignore the equipment and installation costs.

Even worse still, model creators are in an arms race. They can't release a model and call it a day, waiting for it to start paying for itself. They need to immediately jump on to the next version of the model or risk falling behind.

automatic6131•3mo ago
The kind of person who wants to build a website copier is exactly the kind of person I had in mind as the target audience for vibecoding.

Bad idea, bad execution, I like it when a plan comes together.

anupsingh123•3mo ago
I think there's some confusion about what justcopy does - it's for cloning YOUR OWN projects, not scraping other people's websites. Built it out of frustration when I tried to fork one of my projects for a different idea and it took a full day even with Claude Code and Cursor. Lots of manual config updates, dependency changes, renaming stuff, etc. The $200 mistake was about agent orchestration, not the ethics of the product. But appreciate the feedback - clearly need to communicate the use case better.
automatic6131•3mo ago
I'm not going to pay you to slightly rip off my own ideas. Who is going to pay you for this, and what are they doing with it?
chucksta•3mo ago
>For those who don’t know, we’re building a tool that lets you copy any website, customize it, and deploy it - all automated.

_any_ website, can't imagine why there is _any_ confusion.

dwaltrip•3mo ago
God, I hate marketing lies.

I don’t care if you make less money, don’t fucking lie.

brazukadev•3mo ago
Are you expecting people to believe that?

This reminds me of that rule that people can only legally play games they already own on console emulators.

dominicrose•3mo ago
Even without AI, companies have been burning cash uncontrollably on cloud services. I guess it's worth it when the time saved, scalability, etc. are much, much more valuable than the money.
pjdkoch•3mo ago
If you buy senior engineering hours and give them vague requirements, this is close enough to what you'll get.
cafebabbe•3mo ago
Ah, so this is where the current GDP growth comes from.
jb4020•3mo ago
This is phishing/scam heaven. I already warned some European friends in healthcare about this, and I hope someone considers legal steps against such unethical and dangerous practices.