


Orchestrate teams of Claude Code sessions

https://code.claude.com/docs/en/agent-teams
167•davidbarker•2h ago

Comments

bhasi•1h ago
Seems similar to Gas Town
nickorlow•1h ago
yeah, though it seems like a much simpler design (i.e. there only seems to be one 'special/leader' agent, with the rest all workers, vs. Gas Town having something like 8 different roles: mayor, polecats, witnesses, etc.).

Wonder how they compare?

greenfish6•1h ago
i would have to imagine the Gas Town design isn't optimal, though? why 8, and why do there need to be multiple hops of agent communication before two arbitrary agents can talk to each other, as opposed to a single shared filespace?
Ethee•39m ago
I've been using Gas Town a decent bit since it was released. I'd agree with you that its design is sub-optimal, but I believe that's more due to the way the actual agents/harnesses have been designed than to poor software design. The problem you often run into is that agents will sometimes hang, thinking they need human input for the problem they're on, or thinking they're at a natural stopping point. If you're trying to do fully orchestrated agentic coding where you don't look at the code at all (putting aside whether that's good or not for a second), then this is sub-optimal behavior, and so these extra roles have been designed to 'keep the machine going', as it were.

Oftentimes, if I'm only working on a single project or focus, I'm not using most of those roles at all and it's as you describe: one agent divvying out tasks to other agents and compiling reports about them. But because my velocity with this type of coding is now bounded by how fast I can tell that agent what I want, I'm often working on 3 or 4 projects simultaneously, and Gas Town provides the perfect orchestration framework for doing that.

temuze•1h ago
Yeah but worse

No polecats smh

ramesh31•1h ago
>"Seems similar to Gas Town"

I love that we are in this world where the crazy mad scientists are out there showing the way that the rest of us will end up at, but ahead of time and a bit rough around the edges, because all of this is so new and unprecedented. Watching these wholly new abstractions be discovered and converged upon in real time is the most exciting thing I've seen in my career.

bredren•1h ago
The action is hot, no doubt. This reminds me of Spacewar! -> Galaxy Game / Computer Space.
koakuma-chan•1h ago
I don't know what Gas Town is, but Claude Code Agent Teams is what I've been doing for a while now. You use your main conversation only to spawn subagents to plan and execute, allowing you to work for a long time without losing context or compacting, because all token-heavy work is done by subagents in their own context. Claude Code Agent Teams just streamlines this workflow, as far as I can tell.
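[Editor's sketch of the context-isolation pattern described in this comment. Everything below is a toy illustration with made-up function names; none of it is Claude Code's actual API. The orchestrator keeps only one summary line per task in its own context, while each subagent burns tokens in a throwaway context of its own.]

```python
def run_subagent(task: str) -> str:
    """Stand-in for spawning a subagent: does the token-heavy work
    in its own scratch context and returns only a short summary."""
    scratch_context = [f"step {i} of {task}" for i in range(1000)]  # heavy
    return f"{task}: done ({len(scratch_context)} steps)"

def orchestrate(tasks):
    """The main conversation: stays small, holding one line per task."""
    main_context = []
    for task in tasks:
        main_context.append(run_subagent(task))
    return main_context

summaries = orchestrate(["plan auth", "implement auth", "write tests"])
assert len(summaries) == 3           # main context holds 3 summary lines,
assert "1000 steps" in summaries[0]  # not 3000 scratch entries
```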
nprz•55m ago
Gas Town --> https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
rafram•56m ago
I'm not anti-whimsy, but if your project goes too hard on the whimsy (and weird AI-generated animal art), it's kind of inevitable that someone else is going to create a whimsy-free clone, and their version will win because it's significantly less embarrassing to explain to normal people.
reissbaker•14m ago
Where are the polecats, though? What about the mayor's dog?
taikahessu•1h ago
Clean up the team
Retr0id•1h ago
Claude Town
Sol-•1h ago
With stuff like this, might be that all the infra build-out is insufficient. Inference demand will go up like crazy.
kylehotchkiss•1h ago
It'd be nice if CC could figure out all the required permissions upfront and then let you queue the job to run overnight
Der_Einzige•1h ago
Anyone paying attention has known that demand for all types of compute that can run LLMs (i.e. GPUs, TPUs, hell, even CPUs) was about to blow up, and will remain extremely large for years to come.

It's just HN that's full of "I hate AI" or wrong contrarian types who refuse to acknowledge this. They will fail to reap what they didn't sow and will starve in this brave new world.

emp17344•1h ago
This reads like a weird cult-ish revenge fantasy.
RGamma•1h ago
And what about you? Show your "I used AI today" badge, right now!
mrkeen•1h ago
Oh yeah I mean if you're a webdev and you haven't built several data centres already you're basically asking to be homeless.
ffffuuuuuccck•47m ago
i love how your dystopia involves the masses submissively starving rather than expropriating your possessions and raping your wife for funsies
aaaalone•22m ago
If AI progresses slowly enough, we will end up in a society where high unemployment is the norm and we are stuck in capitalism.

And if I think about one 'senior' on my team, I would already prefer an expensive AI subscription over that one person.

Der_Einzige•16m ago
The kind of people who want to politically organize against AI are limp wristers (white collar coastal elite liberals). Good luck, I'll be hiding behind my legally owned arsenal and autonomous turrets cyberpunk style. The right embraced AI and their primarily blue collar work is safe. They have all the guns and the knowledge on how to use them.
emp17344•12m ago
What the fuck is wrong with you? This guy is either a troll or legitimately mentally ill.
RGamma•1h ago
Unlocking the next order of magnitude of software inefficiency!

Though I do hope the generated code will end up being better than what we have right now. It mustn't get much worse. Can't afford all that RAM.

Sol-•4m ago
Dunno, it's probably less energy efficient than a human brain, but being able to turn electricity into intelligence is pretty amazing. RAM and power generation are engineering problems to be solved for civilization to benefit from this.
IhateAI•1h ago
Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor. They aren't there to empower you, they aren't going to enable you to join the ruling class with some vibe-rolled slop SaaS.

Using these things will fry your brain's ability to think through hard solutions. It will give you a disease we haven't even named yet. Your brain will atrophy. Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?

Their main purpose is to convince C-suite suits that they don't need you, or that they're justified in paying you less. This will of course backfire on them, but in the meantime, why give them the training data, why give them the revenue??

I'd bet anything these new models / agentic tools are designed to optimize for token consumption. They need the revenue BADLY. These companies are valued at 200x revenue. Google IPO'd at 10-11x, lmfao. Wtf are we even doing? Can't wait to watch it crash and burn :) Soon!

theappsecguy•1h ago
The crash and burn can't come soon enough.
tjr•1h ago
People often compare working with AI agents to being something like a project manager.

I've been a project manager for years. I still work on some code myself, but most of it is done by the rest of the team.

On one hand, I have more bandwidth to think about how the overall application is serving the users, how the various pieces of the application fit together, overall consistency, etc. I think this is a useful role.

On the other hand, I definitely have felt mental atrophy from not working in the code. I still think; I still do things and write things and make decisions. But I feel mentally out of shape; I lack a certain sharpness that I perceived when I was more directly in tune with the code.

And I'm talking, all orthogonal to AI. This is just me as a project manager with other humans on the project.

I think there is truth to, well, operate at a higher level! Be more systems-minded, architecture-minded, etc. I think that's true. And there are surely interesting new problems to solve if we can work not on the level of writing programs, but wielding tools that write programs for us.

But I think there's also truth to the risk of losing something by giving up coding. Whether if that which might be lost is important to you or not, is your own decision, but I think the risk is real.

IhateAI•1h ago
I definitely think what you're losing is extremely important, and it can't be compensated for with LLMs once it's gone.

Back when automatic piano players came out, if all the world's best piano players had stopped playing and mostly just composed/written music instead, would the quality of the music have increased or decreased? I think the latter.

sathish316•1h ago
I do think there's a real risk of brain atrophy when you rely on AI coding tools for everything, especially while learning something new. About a year ago, I dealt with this problem by using Neovim and setting up shortcuts to easily toggle GitHub Copilot on/off. Now that AI is baked into almost every part of the toolchain in VS Code, Cursor, Claude Code, and IntelliJ, I don't know how newer engineers will learn without AI assistance.
IhateAI•1h ago
I think in-line autocomplete is likely not that dangerous, if it's used in this manner responsibly, it's the large agentic tools that are problematic for your brain imo. But in-line autocompletes aren't going to raise billions of dollars and aren't flashy.
xpct•46m ago
I'd say autocomplete introduces a certain level of fuzziness into the code we work with, though to a lower degree. I used autocomplete for over a year, and initially it did feel like a productivity boost, yet when I later stopped using it, it never felt like my productivity decreased. I stopped because something about losing the explicit intent of my code feels uncomfortable to me.
majormajor•1h ago
It's very difficult to operate effectively at a higher level for a continued period of time without periodically getting back into the lower levels to try new things and learn new approaches or tools.

That doesn't even have to be writing a ton of code, but reading the code, getting intimately familiar with the metrics, querying the logs, etc.

markab21•1h ago
Shaking fist at clouds!!
IhateAI•1h ago
Wow, a bunch of NFT people used to say the same thing.

lmao, please explain to me why these companies should be valued at 200x revenue. They are providing autocomplete APIs.

How come Google's valuation hasn't increased 100-200x? They provide foundation models plus a ton more services, and they're profitable. None of this makes sense; it's destined to fail.

tock•1h ago
Google is valued at 4T. Up from 1.2T in 2022.
OsrsNeedsf2P•1h ago
I like your name, it suggests you're here for a good debate.

Let me start by conceding on the company value front; they should not have such value. I will also concede that these models lower your value of labor and quality of craft.

But what they give in return is the ability to scale your engineering impact to new highs - talented engineers know which implementation patterns work better, and how to build debuggable and growable systems. While each file in the code may be "worse" (by whichever metric you choose), the final product has more scope and faster delivery. You can likewise choose to narrow the scope and increase quality, if that's your angle.

LLMs aren't a blanket improvement - They come with tradeoffs.

hareykrishna•47m ago
it's too late to hateAI!
ramesh31•1h ago
>I'd bet anything these new models / agentic-tools are designed to optimize for token consumption.

You would think, but Claude Code has gotten incredibly more efficient over time. They are doing so much dogfooding with these things at this point that it makes more sense to optimize.

fooker•1h ago
It would be tragically ironic if this post is AI generated.
M4R5H4LL•1h ago
From an economic standpoint this is basically machines doing work humans used to do. We’ve already gone through this many times. We built machines that can make stuff orders of magnitude faster than humans, and nobody really argues we should preserve obsolete tools and techniques as a valued human craft. Obviously automation messes with jobs and identity for some people, but historically a large chunk of human labor just gets automated as the tech gets better. So I feel that arguing about whether automation is good or bad in the abstract is a bit beside the point. The more interesting question imho is how people and companies adapt to it, because it’s probably going to happen either way.
ottah•1h ago
Honestly my job is to ensure code quality and to protect the customer. I love working with claude code, it makes my life easier, but in no way would a team of agents improve code quality or speed up development. I would spend far too much time reviewing and fixing laziness and bad design decisions.

When you hear execs talking about AI, it's like listening to someone talk about how they bought some magic beans that will solve all their problems. IMO the only thing we have managed to do is spend a lot more money on accelerated compute.

spelunker•59m ago
How Butlerian of you.
wantlotsofcurry•58m ago
I agree on all parts. I do not understand why anyone in the software industry would bend over backwards to show their work is worth less now.
hareykrishna•53m ago
Hare Krishnaa, Hare Raam! Breath deep, Stay Calm!
cstrahan•41m ago
> Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor.

Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc? Then, yeah, these tools will lower your value. If your aim is to get things done, and generate value, then no, I don't think these tools will lower your value.

This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product (https://www.trtvault.com/) to support himself and his family.

> Using these things will fry your brain's ability to think through hard solutions.

CNC hasn't made machinists forget about basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.

> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?

We are already dependent on electricity. If the power goes out, we work around that as best as we can. If you can't run your power tool, but you absolutely need to make progress on whatever it is you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.

I really dislike this anti-AI rhetoric. Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement.

We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.

aaaalone•20m ago
When I use Google maps, I learn faster.

And I haven't had to solve really hard problems for ages.

Some people will have problems some will not.

The future will tell.

dangoodmanUT•7m ago
username checks out
greenfish6•1h ago
Excited to try this out. I've seen a lot of working systems on my own computer that share files to talk between different Claude Code agents and I think this could work similarly to that.

(i thought gas town was satire? people in comments here seem to be saying that gas town also had multi-agent file sharing for work tracking)

nkmnz•1h ago
I’m looking for something like this, with Opus in the driver's seat, but with the subagents using different LLMs, such as Gemini or Codex. Anyone know of such a tool? just-every/code almost does this, but the lead/orchestrator is always Codex, which feels too slow compared to Opus or Gemini.
fosterfriends•1h ago
I think this is where future cursor features will be great - to coordinate across many different model providers depending on the sub-jobs to be done
nkmnz•1h ago
What I want is something else: I want them to work in parallel on the same problem, and the orchestrator to then evaluate and consolidate their responses. I’m currently doing this manually, but it’s tedious.
sathish316•1h ago
You can run an ensemble of LLMs (Opus, Gemini, Codex) in Claude Code Router via OpenRouter, or in any agent CLI that supports subagents and isn't tied to a single LLM, like OpenCode. I have an example of this in Pied-Piper, a subagent orchestrator that runs in Claude Code or Claude Code Router and uses a distinct model/role for each subagent:

1. GPT-5.2 Codex Max for planning

2. Opus 4.5 for implementation

3. Gemini for reviews

It’s easy to swap models or change responsibilities. Doc and steps here: https://github.com/sathish316/pied-piper/blob/main/docs/play...
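[Editor's sketch of the role-to-model split this comment describes. The model strings come from the list above, but the mapping shape and the `dispatch` helper are illustrative, not Pied-Piper's real configuration format.]

```python
# One dict assigns a model to each role; swapping models is a one-line edit.
ROLES = {
    "planner":     "gpt-5.2-codex-max",
    "implementer": "opus-4.5",
    "reviewer":    "gemini",
}

def dispatch(role: str, prompt: str) -> str:
    """Route a prompt to the model assigned to this role
    (the f-string stands in for a real model API call)."""
    model = ROLES[role]
    return f"[{model}] {prompt}"

plan = dispatch("planner", "break the feature into tasks")
assert plan.startswith("[gpt-5.2-codex-max]")
```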

knes•1h ago
At Augment we've been working on this: multi-agent orchestration, spec-driven, different models for different tasks, etc.

https://www.augmentcode.com/product/intent

You can use the code AUGGIE to skip the queue. Bring-your-own-agent (powered by Codex, CC, etc.) is coming next week.

nikcub•1h ago
I use Opus for coding and Codex for reviews. I trigger the reviews in each work task with a review skill that calls out to Codex[0].

I don't need anything more complicated than that and it works fine - I also run greptile[1] on PRs.

[0] https://github.com/nc9/skills/tree/main/review

[1] https://www.greptile.com/

morleytj•1h ago
Gas Town decimated by Claude bomb from orbit
greenfish6•1h ago
something i really like from trying it out over the last 10 minutes is that the main agent will continue talking to you while other agents are working, so you don't have to queue a message
ottah•1h ago
I absolutely cannot trust Claude code to independently work on large tasks. Maybe other people work on software that's not significantly complex, but for me to maintain code quality I need to guide more of the design process. Teams of agents just sounds like adding a lot more review and refactoring that can just be avoided by going slower and thinking carefully about the problem.
BonoboIO•1h ago
You definitely have to create some sort of PLAN.md and PROGRESS.md via a command, plus an implement command that delegates work. That is the only way I can get bigger things done, no matter how "good" their task feature is.

You run out of context so quickly and if you don’t have some kind of persistent guidance things go south

koakuma-chan•1h ago
I tried doing that and it didn't work. It still adds "fallbacks" that just hide errors or the fact that there is no actual implementation and "In a real app, we would do X, just return null for now"
ottah•57m ago
It's not sufficient, especially if I am not learning about the problem by being part of the implementation process. The models are still very weak reasoners, writing code faster doesn't accelerate my understanding of the code the model wrote. Even with clear specs I am constantly fighting with it duplicating methods, writing ineffective tests, or implementing unnecessarily complex solutions. AI just isn't a better engineer than me, and that makes it a weak development partner.
nprz•56m ago
There is research[0] currently being done on how to divide tasks among LLMs and combine the answers. This approach lets LLMs reach outcomes (solving a problem that requires 1 million steps) which would be impossible otherwise.

[0]https://arxiv.org/abs/2511.09030

ottah•53m ago
No offense to the academic profession, but they're not a good source of advice for best practices in commercial software development. They don't have the experience or the knowledge sufficient to understand my workplace and tasks. Their skill set and job is orthogonal to the corporate world.
nprz•48m ago
Yes, the problem solved in the paper (Tower of Hanoi) is far more easily defined than 99% of actual problems you would find in commercial software development. Still proof of "theoretically possible" and seems like an interesting area of research.
stpedgwdgfhgdd•39m ago
Exactly: one out of three or four prompts requires tuning, nudging, or just stopping it. However, it takes seniority to see where it goes astray. I suspect that lots of folks don't even notice that CC is off. It works, it passes the tests, so it is good.
aqme28•17m ago
I agree, but I've found that making an "adversarial" model within claude helps with the quality a lot. One agent makes the change, the other picks holes in it, and cycle. In the end, I'm left with less to review.

This sounds more like an automation of that idea than just N-times the work.
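[Editor's sketch of that write/critique cycle, with stub functions standing in for the two agents; no real model calls are made and all names are invented.]

```python
def writer(task, feedback):
    """Drafting agent: produces a solution, revising once per objection."""
    draft = f"solution for {task}"
    return draft + " (+fix)" * len(feedback)

def critic(draft):
    """Adversarial agent: returns objections; empty list means satisfied."""
    return [] if "(+fix)" in draft else ["missing error handling"]

def adversarial_loop(task, max_rounds=5):
    """Alternate writer and critic until the critic finds no holes."""
    feedback = []
    draft = writer(task, feedback)
    for _ in range(max_rounds):
        feedback = critic(draft)
        if not feedback:      # critic found no holes: done
            return draft
        draft = writer(task, feedback)
    return draft

result = adversarial_loop("parse config")
assert "(+fix)" in result  # at least one critique round was incorporated
```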

turtlebits•12m ago
Humans can't handle large tasks either, which is why you break them into manageable chunks.

Just ask claude to write a plan and review/edit it yourself. Add success criteria/tests for better results.

ndesaulniers•56m ago
Subagents are out, put it all on agent teams!
pronik•46m ago
To the folks comparing this to Gas Town: keep in mind that Steve Yegge explicitly pitched agent orchestrators to, among others, Anthropic months ago:

> I went to senior folks at companies like Temporal and Anthropic, telling them they should build an agent orchestrator, that Claude Code is just a building block, and it’s going to be all about AI workflows and “Kubernetes for agents”. I went up onstage at multiple events and described my vision for the orchestrator. I went everywhere, to everyone. (from "Welcome to Gas Town" https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...)

That Anthropic releases Agent Teams now (as rumored a couple of weeks back), after they've already adopted a tiny bit of Beads in the form of Tasks, means that either they were already building them back when Steve pitched orchestrators, or they've decided he was right and it's time to scale the agents. Or they arrived at the same conclusions independently -- it won't matter in the larger scale of things. I think Steve greatly appreciates it existing; if anything, this is a validation of his vision. We'll probably be herding polecats officially in a couple of months.

isoprophlex•41m ago
There seems to be a lot of convergent evolution happening in the space. Days before the gas town hype hit, I made a (less baroque, less manic) "agent team" setup: a shell script to kick off a ralph wiggum loop, and CLAUDE-MESSAGE-BUS.md for inter-ralph communication (Thread safety was hacked into this with a .claude.lock file).

The main claude instance is instructed to launch as many ralph loops as it wants, in screen sessions. It is told to sleep for a certain amount of time to periodically keep track of their progress.

It worked reasonably well, but I don't prefer this way of working... yet. Right now I can't write spec (or meta-spec) files quick enough to saturate the agent loops, and I can't QA their output well enough... mostly a me thing, i guess?
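[Editor's sketch of the lock-file message bus this comment describes. The file names mirror the comment; the exclusive-create lock shown here is one plausible way to get the `.claude.lock`-style mutual exclusion between agents, not the author's actual script.]

```python
import os, time, tempfile

# Shared bus file plus a lock file next to it (temp dir for the demo).
BUS = os.path.join(tempfile.mkdtemp(), "CLAUDE-MESSAGE-BUS.md")
LOCK = BUS + ".lock"

def post(agent: str, msg: str) -> None:
    """Append one message, guarded by an atomically created lock file."""
    while True:
        try:
            fd = os.open(LOCK, os.O_CREAT | os.O_EXCL)  # atomic acquire
            break
        except FileExistsError:
            time.sleep(0.01)  # another agent holds the lock; retry
    try:
        with open(BUS, "a") as bus:
            bus.write(f"[{agent}] {msg}\n")
    finally:
        os.close(fd)
        os.remove(LOCK)  # release

post("ralph-1", "auth module done")
post("ralph-2", "starting on tests")
with open(BUS) as f:
    assert f.read().splitlines()[-1] == "[ralph-2] starting on tests"
```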

pronik•16m ago
> Right now I can't write spec (or meta-spec) files quick enough to saturate the agent loops, and I can't QA their output well enough... mostly a me thing, i guess?

Same for me. However, the velocity of the whole field is astonishing, and things change as we get used to them. We aren't talking that much about hallucination anymore; just 4-5 months ago you couldn't trust coding agents with extracting functionality to a separate file without typos, and now splitting Git commits works almost without a hitch. The more we get used to agents getting certain things right 100% of the time, the more we'll trust them. There are many, many things that I know I won't get right but am absolutely sure my agent will. As soon as we start trusting e.g. a QA agent to do its job, our "project management" velocity will increase too.

Interestingly enough, the infamous "bowling score card" text on how XP works demonstrated inherently agentic behaviour in more ways than one (they just didn't know what "extreme" was back then). You were supposed to write a failing test and then implement just enough functionality for that test to stop failing, even if the intended functionality was broader -- which is exactly what agents reliably do in a loop. Also, you were supposed to pair-drive a single machine, which was incomprehensible to me for decades -- after all, every person has their own shortcuts, hardware, IDEs, window managers, and what not. Turns out all you need is a centralized server running a "team manager agent" and multiple developers talking to it to craft software fast (see the tmux requirement in Gas Town).

aaaalone•27m ago
Honestly, this is one of many ideas I've had as well.

But it shows how much there still is to do in the AI space.

bonesss•21m ago
Compare both approaches to mature actor frameworks and they don't seem to be breaking much new ground. These kinds of supervisor trees and hierarchies aren't new for actor-based systems, and they're obvious applications of LLM agents working in concert.

The fact that Anthropic and OpenAI have gone this long without such orchestration, given the unavoidable issues of context windows and unreliable self-validation, and without matching the basic system maturity you get from a default Akka installation, shows that these leading LLM providers (with more money, tokens, deals, access, and better employees than any of us) are learning in real time. Big chunks of the next-gen hype-machine wunder-agents are fully realizable with cron and basic actor-based scripting. Deterministic, write once, run forever, no subscription needed.

Kubernetes for agents is, speaking as a krappy kubernetes admin, not some leap, it’s how I’ve been wiring my local doom-coding agents together. I have a hypothesis that people at Google (who are pretty ok with kubernetes and maybe some LLM stuff), have been there for a minute too.

Good to see them building this out, excited to see whether LLM cluster failures multiply (like repeating bad photocopies), or nullify (“sorry Dave, but we’re not going to help build another Facebook, we’re not supposed to harm humanity and also PHP, so… no.”).

ttoinou•11m ago
If it was so obvious and easy, why didn't we have this a year ago? Models were mature enough back then to make this work.
ruined•9m ago
what mature actor frameworks do you recommend?
jghn•7m ago
They did mention Akka in their post, so I would assume that's one of them.
segmondy•15m ago
This is nothing new; folks have been doing this since 2023. There are lots of papers on arXiv and lots of code on GitHub with implementations of multi-agent systems.

... the "limit" was that agents weren't as smart then, context windows were much smaller, and RLVR wasn't a thing, so agents were trained just for function calling, not agent calling/coordination.

We have been doing it since then; the difference really is that the models have gotten smart and good enough to handle it.

GoatOfAplomb•40m ago
I wonder if my $20/mo subscription will last 10 minutes.
simlevesque•9m ago
I've had good results with Haiku for certain tasks.
asdev•22m ago
I personally have no use for this type of workflow. I like parallel claude code instances in worktrees but nothing beyond that
avereveard•11m ago
"finish Claude tokens quota in 3 minutes, largely over delegation and result messages instead of code writing"
