
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
135•yi_wang•4h ago•40 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
57•RebelPotato•4h ago•13 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
256•valyala•12h ago•51 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
166•surprisetalk•12h ago•158 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
199•mellosouls•15h ago•353 comments

Total surface area required to fuel the world with solar (2009)

https://landartgenerator.org/blagi/archives/127
22•robtherobber•4d ago•16 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
41•rolph•2h ago•26 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
66•swah•4d ago•120 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
73•gnufx•11h ago•59 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
180•AlexeyBrin•17h ago•35 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
172•vinhnx•15h ago•17 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
320•jesperordrup•22h ago•97 comments

First Proof

https://arxiv.org/abs/2602.05192
135•samasblack•14h ago•79 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
67•chwtutha•3h ago•11 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
17•witnessme•1h ago•6 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
83•momciloo•12h ago•17 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
31•Rygian•2d ago•8 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
63•duxup•2h ago•14 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
105•thelok•14h ago•24 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
40•mbitsnbites•3d ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
580•theblazehen•3d ago•211 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
112•randycupertino•7h ago•235 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
306•1vuio0pswjnm7•18h ago•488 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
233•limoce•4d ago•125 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
156•speckx•4d ago•241 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
144•josephcsible•10h ago•179 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
904•klaussilveira•1d ago•276 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
34•languid-photic•4d ago•16 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
304•isitcontent•1d ago•39 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
189•valyala•12h ago•178 comments

Stevey's Birthday Blog

https://steve-yegge.medium.com/steveys-birthday-blog-34f437139cb5
47•throwawayHMM19•2w ago

Comments

swah•2w ago
Still thinking about https://lucumr.pocoo.org/2026/1/18/agent-psychosis/
leoc•2w ago
https://www.youtube.com/watch?v=15A0F5aOoPM
xyzsparetimexyz•2w ago
Neat!
vessenes•2w ago
Good summary.

Upshot: Steve thinks he’s built a quality task tracker/work system (beads), and is iterating on architectures, and has gotten convinced an architecture-builder is going to make sense.

Meanwhile, work output is going to improve independently. The bet is that leverage on the top side is going to be the key factor.

To co-believe this with Steve, you have to believe that workers can self-stabilize (e.g. with something like the Wiggum loop you can get some actual quality out of them, unsupervised by a human), and that their coordinators can self-stabilize.

If you believe those to be true, then you’re going to be eyeing 100-1000x productivity just because you get to multiply 10 coordinators by 10 workers.

I’ll say that I’m generally bought into this math. Anecdotally, I currently (last 2 months) spend about half my coding-agent time asking for easy inroads to what’s been done; a year ago, I spent 10% specifying and 90% complaining about bugs.

Example, I just pulled up an old project, and asked for a status report — I got a status report based on existing beads. I asked it to verify, and the computer ran the program and reported a fairly high quality status report. I then asked it to read the output (a PDF), and it read the PDF, noticed my main complaints, and issued 20 or so beads to get things in the right shape. I had no real complaints about the response or workplan.

I haven’t said “go” yet, but I presume when I do, I’m going to be basically checking work, and encouraging that work checking I’m doing to get automated as well.

There’s a sort of not-obvious thing that happens as we move from 0.5 9s to say 3 9s in terms of effectiveness — we’re going to go from constant intervention needed at one order of magnitude of work to constant intervention needed at 2.5x that order of magnitude of work — it’s a little hard to believe unless you’ve really poked around — but I think it’s coming pretty soon, as does Steve.
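The nines claim is easy to sanity-check with a back-of-envelope. This assumes "N nines" means a per-task failure rate of 10^-N, which is my reading rather than anything stated above:

```python
# Back-of-envelope: how much work a human can supervise at a given
# reliability level, where "N nines" means a per-task failure rate of 10**-N.
def failure_rate(nines: float) -> float:
    return 10 ** -nines

def work_per_intervention(nines: float) -> float:
    # Average number of tasks completed before a human has to step in.
    return 1 / failure_rate(nines)

low, high = 0.5, 3.0
ratio = work_per_intervention(high) / work_per_intervention(low)
print(f"0.5 nines: ~{work_per_intervention(low):.1f} tasks per intervention")
print(f"3.0 nines: ~{work_per_intervention(high):.0f} tasks per intervention")
print(f"scale-up at constant human load: ~{ratio:.0f}x")  # 10**2.5, about 316
```

Going from 0.5 nines to 3 nines is 2.5 orders of magnitude fewer interventions per task, i.e. roughly 316x more work at the same constant level of human attention.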

Who, nota bene, is working at such a pace that he is turning down 20 VCs a week, selling memecoin earnings in the hundreds of thousands of dollars, and randomly ‘napping’ in the middle of the day. Stay rested Steve, keep on this side of the manic curve please, we need you. I’d say it’s a good sign he didn’t buy any GAS token himself.

slfnflctd•2w ago
> Stay rested Steve, keep on this side of the manic curve please, we need you

This is my biggest takeaway. He may or may not be on to something really big, but regardless, it's advancing the conversation and we're all learning from it. He is clearly kicking ass at something.

I would definitely prefer to see this be a well paced marathon rather than a series of trips and falls. It needs time to play out.

xyzsparetimexyz•2w ago
That something being psychosis
cap11235•2w ago
And crypto
throwup238•2w ago
> He is clearly kicking ass at something.

Publishing unmaintainable garbage code to Github?

Have you looked at the beads codebase? It's a bad joke at our expense.

johnfn•2w ago
I spent some time reading about Gas Town to see if I could understand what Stevey was trying to accomplish. I think he has some good ideas in there, actually - it really does seem like he's thought a bit about what coding in the future might look like. Unfortunately, it's so full of esoteric language and vibecoded READMEs that it is quite difficult to get into. The most concerning thing is that Stevey seems totally unaware of this. He writes about how, when he tried to explain this to people, they just stared at him like they were idiots, and so they must all be wrong -- that's a bit worrying, from a health and psychosis angle.
cgio•2w ago
There’s an acquaintance here in Australia who has built something similar without the crazy terminology, and it is pretty solid.
wewewedxfgdf•2w ago
I instantly read any Steve Yegge blog. Not true of anyone else.
lovich•2w ago
Every time I read another article from this guy I get even more frustrated trying to tell whether he’s grifting or legitimately insane.

Between quotes like these

> I had lunch again (Kirkland Cactus) with my buddies Ajit Banerjee and Ryan Snodgrass, the ones who have been chastising teammates for acting on ancient 2-hour-old information.

and his arguing that this is the future of all productivity while taking time to physically go to a bank to get money off a crypto coin, even as he crows about how he can’t waste time on money.

On top of that, this entire Gas Town thing is predicated on not caring about cost, but AI firms are currently burning money as fast as possible, selling a dollar for 10 cents. How does the entire framework/technique not crash and burn the second infinite investment stops and the AI companies need to be profitable instead of a money hole?

leoc•2w ago
Even if something like Gas Town isn't remotely affordable today it could potentially be a useful glimpse at what can be done in principle and what might be affordable in, say, 10 years. There's a long history of this in computing, of course https://en.wikipedia.org/wiki/Expensive_Typewriter https://en.wikipedia.org/wiki/Sketchpad https://en.wikipedia.org/wiki/Xerox_Alto . OTOH it could certainly make the approach totally unsuitable for VC funding at present, and that's without even considering the other reasons to be wary of Gas Town and Beads.
lovich•2w ago
Nothing I have read about LLMs and related makes it seem like they will be affordable in the future when it comes to software specifically.

I will preface this that I think AI agents can accomplish impressive things, but in the same way that the Great Pyramid of Giza was impressive while not being economically valuable.

Software is constantly updating. For LLMs to be useful they need to stay relatively up to date with that software. That means retraining, and from what I understand training is the vast majority of the cost, with no plausible technical way around it.

Currently LLMs seem amazing for software because AI companies like OpenAI and Anthropic are doing what Uber and Lyft did in their heyday: selling dollars for pennies just to gain market share. Mr. Yegge and friends have made statements to the effect that if cost scares you, you should step away. Even in the article this thread is about, he has this quote

> Jeffrey, as we saw from his X posts, has bought so many Claude Pro Max $200/month plans (22 so far, or $4400/month in tokens) that he got auto-banned by Anthropic’s fraud detection.

And so far what I’ve seen is that he’s developed a system that lets him scale out the equivalent of interns/junior engineers en masse under his tentative supervision.

We already had the option to hire a ton of interns/junior engineers for every experienced engineer. It was quite common 1.5-3 decades ago. You’d have an architect who sketched out the bones of the application down to things like classes or functions, then let the cheap juniors implement.

Everyone moved off that model because it wasn’t as productive per dollar spent.

Mr. Yegge’s Gas Town, to me, looks like someone thought “what if we could get that same gaggle of juniors, for the same cost or more, but they were silicon instead of meat”

Nothing he’s outlined has convinced me that the unit economics of this are going to work out better than just hiring a bunch of bright young people right out of college, which corporations are increasingly loath to do.

If you have something to point to for why that thought is incorrect, in regard to this iteration of AI, then please link it.

leoc•2w ago
But why should one expect no future improvement in training costs (all else being equal) from Moore's Law, never mind any future use of e.g. more efficient algorithms or more specialised hardware?
lovich•2w ago
If you are quoting Moore's Law in 2026 as the reason LLMs will be profitable, I don’t know how to interact with you.

I guess we’ll make up the losses per unit at scale, and grow to infinity.

leoc•1w ago
I'm not making some general claim about LLMs being profitable. To be clear, are you claiming with high confidence that LLM training costs will show no meaningful reduction, for some fixed quality, over roughly the next 10 years?
marcus_holmes•2w ago
> telling if he’s grifting or legitimately insane

or if he's talking to us from 5 years in the future.

Ignoring the financial aspect, this all makes sense - one LLM is good, 100 is better, 1000 is better still. The whole analogy with the industrial revolution makes sense to me.

> AI firms are currently burning money as fast as possible selling a dollar for 10 cents.

The financial aspect is interesting, but we're dealing with today's numbers, and those numbers have been changing fast over the last few years. I'm a big fan of Ed Zitron's writing, and he makes some really good points, but I think condemning all creative uses of LLMs because of the finances is counterproductive. Working out how to use this technology well, despite the finances not making much sense, is still useful.

xyzsparetimexyz•2w ago
Better in what sense? What are we actually building with 1000 LLMs
igor47•2w ago
A system to build more systems with more LLMs, of course!
throwup238•2w ago
> Ignoring the financial aspect, this all makes sense - one LLM is good, 100 is better, 1000 is better still. The whole analogy with the industrial revolution makes sense to me.

How does this make sense? LLMs are doing knowledge work, so they face the same coordination problems that humans do; they're not assembly-line workers. We have no reason to believe the lessons of The Mythical Man-Month don't apply to LLMs too, since the coordination costs, especially when they're touching the same piece of code, are very high.
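For concreteness, the coordination blow-up Brooks described is quadratic. A trivial sketch using the standard n(n-1)/2 pairwise-channels formula, nothing specific to LLMs:

```python
# Brooks' "Mythical Man-Month" observation: with n workers touching the same
# codebase, potential pairwise communication channels grow as n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (1, 10, 100, 1000):
    print(n, channels(n))
# 1000 agents means 499500 potential pairwise conflicts to coordinate,
# which is why "1000 LLMs is better still" isn't automatic.
```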

riwsky•2w ago
“I’m going to go lay down and, uh, think about the problem with my eyes closed”

Oh good, mainstream coders finally catching up with the productivity of 2010s Clojurists and their “Hammock Driven Development”! (https://m.youtube.com/watch?v=f84n5oFoZBc)

barrkel•2w ago
I think there's an interesting idea behind Gas Town (basically, using supervisor trees to make agents reliable, analogous to how Erlang uses them to make processes reliable), but it's lacking a proper quality ratchet (agents often don't mind changing or deleting tests instead of fixing code) and architectural function (agents tend to reinvent the wheel over and over again, the context window simply isn't big enough to fit everything in).
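To make the supervisor-tree-plus-ratchet idea concrete, here is a deliberately toy sketch. `run_agent_task` is a hypothetical stand-in for an LLM worker; none of this is Gas Town's actual code:

```python
# Toy sketch: an Erlang-style supervisor restarts a flaky worker, and a
# "quality ratchet" rejects results that reduce the passing-test count
# (modeling agents that delete tests instead of fixing code).
import random

def run_agent_task(task: str) -> dict:
    # Hypothetical stand-in for an LLM worker; real workers sometimes
    # crash outright and sometimes "succeed" by cutting corners.
    roll = random.random()
    if roll < 0.2:
        raise RuntimeError("worker crashed")
    tests = 95 if roll < 0.4 else 100  # sometimes it deletes tests
    return {"task": task, "tests_passing": tests}

def supervise(task: str, baseline_tests: int, max_restarts: int = 10) -> dict:
    for _ in range(max_restarts):
        try:
            result = run_agent_task(task)
        except RuntimeError:
            continue  # let it crash, restart the worker
        if result["tests_passing"] >= baseline_tests:
            return result  # ratchet holds: never accept fewer passing tests
    raise RuntimeError(f"task {task!r} failed after {max_restarts} restarts")

random.seed(0)
print(supervise("refactor parser", baseline_tests=100))
```

The restart loop is the supervisor-tree part; the `baseline_tests` check is the ratchet the comment says is missing.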

However, Steve Yegge's recent credulous foray into promoting a crypto coin, which (IMO) transparently leveraged his audience and buzz to execute a pump-and-dump scheme, with him as an unwitting collaborator, makes me think all is not necessarily well in Yegge land.

I think Steve needs to take a step back from his amazing productivity machine and have another look at that code, and consider if it's really production quality.

minebreaker•2w ago
> Steve Yegge's recent credulous foray into promoting a crypto coin

I didn't notice that. Can you give me a source?

sandinmyjoints•2w ago
He wrote all about it in https://steve-yegge.medium.com/bags-and-the-creator-economy-...
tom_•2w ago
There's some related discussion here: https://news.ycombinator.com/item?id=46654878
wrs•2w ago
I read this post as saying he won’t take funding from VCs, but he will from (his own word) crypto-bros?
PrayagS•2w ago
> have another look at that code

So true. beads[0] is such a mess. Keeps breaking often with each release. Can't understand how people can rely on it for their day-to-day work.

[0] https://github.com/steveyegge/beads

CharlesW•2w ago
That's been my experience as well. I like the idea of Beads, but it's fallen apart for me after a couple weeks of moderate use on two different projects now. Luckily, it's easy to migrate back to plain ol' Markdown files, which work just as well and have never failed me.
marcins•2w ago
> have another look at that code

That would assume he's even looked at the code in the first place - I think his whole thesis is based on you never looking at the code.

jfultz•2w ago
"Quality ratchet" is such a great name. Thanks for that.
deng•2w ago
Indeed, the Gas-Town token is down 97% from all-time high, see https://coinmarketcap.com/currencies/gas-town/

He's obviously a smart guy, so he definitely should've known better. It's weird how these AI evangelists use AI for everything, yet somehow he didn't ask ChatGPT what all of this means and whether it might cause reputational damage. I just asked whether I should claim these trading fees, and it said:

   Claiming could be interpreted as:

   * Endorsing the token

   * Being complicit if others get rugged later

   This matters if your X account has real followers.
and in the end told me to NOT claim these fees unless I'm OK with being associated with that token.
barrkel•2w ago
When you're under a lot of stress, your internal evaluation function for what is moral can start to break down. It may have been hard for him to turn the money down, especially if he's addicted to the sense of power he's getting from his coding agent spend. As he said, his wife suggested they can't afford it.

There's another thing. A certain type of engineer seems to get sucked into Amazon's pressure culture. They either are, or end up, a bit manic: laid back and relaxed one day (especially after holidays), wound up and under a lot of internal pressure to produce the next, with a lot more of the latter than the former. Something like Gas Town must be a crazy fix when you're feeling that pain. Combined with the vision that if you don't, you're unemployed/unemployable in 12 to 24 months, you might feel you have no choice but to spend every waking minute at it.

It's a bit (more than a bit) rude to analyse someone at a distance. And to be honest, I think something like Gas Town is probably one of the possible shapes of things to come. I don't think what I can observe looks super healthy, is all.

tveita•2w ago
> Indeed, the Gas-Town token is down 97% from all-time high,

What else could possibly have happened? Surely everyone put their money in with the express intention of participating in a pump and dump.

Not taking the money would have been the high road. I don't think basing the economy on gambling and scams is good for society. But who could realistically claim to be a 'victim' here?

mcphage•2w ago
This all reminds me of the offhand comment from an old XKCD: “You own 3D goggles, which you use to view rotating models of better 3D goggles.” He’s got this gigantic agentic orchestration system, which he uses to build… an agentic orchestration system. Okay.
fizx•2w ago
What a fever dream!
fizx•2w ago
Does anyone know how much the cost-per-token is trending down year-over-year for models of similar quality? Seems like whether this idea works really depends on that curve.
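The sensitivity to that curve is easy to see with made-up numbers. The 50%-per-year decline below is an assumption for illustration, not measured data:

```python
# Illustrative only: how a yearly cost-per-token decline compounds.
# The decline factor is a hypothetical assumption, not a measured trend.
def projected_cost(cost_now: float, yearly_decline: float, years: int) -> float:
    """cost_now in $/M tokens; yearly_decline=0.5 means cost halves each year."""
    return cost_now * (yearly_decline ** years)

cost = 10.0  # hypothetical $/M tokens today
for years in range(6):
    print(years, round(projected_cost(cost, 0.5, years), 4))
# At a hypothetical 2x/year decline, cost drops ~32x over five years;
# at no decline, the always-on agent-swarm model stays as expensive as today.
```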
Citizen_Lame•2w ago
Match made in heaven, AI bro turns crypto grifter.