It's just HN that's full of "I hate AI" or wrong contrarian types who refuse to acknowledge this. They will fail to reap what they didn't sow and will starve in this brave new world.
Though I do hope the generated code will end up being better than what we have right now. It had better not get much worse. Can't afford all that RAM.
Using these things will fry your brain's ability to think through hard solutions. It will give you a disease we haven't even named yet. Your brain will atrophy. Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?
Their main purpose is to convince C-suite suits that they don't need you, or that they're justified in paying you less. This will of course backfire on them, but in the meantime, why give them the training data, why give them the revenue??
I'd bet anything these new models / agentic tools are designed to optimize for token consumption. They need the revenue BADLY. These companies are valued at 200x revenue; Google IPO'd at 10-11x, lmfao. Wtf are we even doing? Can't wait to watch it crash and burn :) Soon!
I've been a project manager for years. I still work on some code myself, but most of it is done by the rest of the team.
On one hand, I have more bandwidth to think about how the overall application is serving the users, how the various pieces of the application fit together, overall consistency, etc. I think this is a useful role.
On the other hand, I definitely have felt mental atrophy from not working in the code. I still think; I still do things and write things and make decisions. But I feel mentally out of shape; I lack a certain sharpness that I perceived when I was more directly in tune with the code.
And I'm talking about something entirely orthogonal to AI. This is just me as a project manager with other humans on the project.
I think there is truth to "well, operate at a higher level!" Be more systems-minded, architecture-minded, etc. I think that's true. And there are surely interesting new problems to solve if we can work not at the level of writing programs, but of wielding tools that write programs for us.
But I think there's also truth to the risk of losing something by giving up coding. Whether what might be lost is important to you is your own decision, but I think the risk is real.
Back when player pianos came out, if all the world's best pianists had stopped playing and switched to mostly composing/writing music instead, would the quality of the music have increased or decreased? I think the latter.
Staying sharp doesn't even have to mean writing a ton of code; it can mean reading the code, getting intimately familiar with the metrics, querying the logs, etc.
lmao, please explain to me why these companies should be valued at 200x revenue. They are providing autocomplete APIs.
How come Google's valuation hasn't increased 100-200x? They provide foundation models plus a ton of other services as well, and are profitable. None of this makes sense; it's destined to fail.
Let me start by conceding on the company-valuation front: they should not be valued that highly. I will also concede that these models lower the value of your labor and the quality of your craft.
But what they give in return is the ability to scale your engineering impact to new highs. Talented engineers know which implementation patterns work better and how to build debuggable, growable systems. While each file in the code may be "worse" (by whichever metric you choose), the final product has more scope and faster delivery. You can likewise choose to narrow the scope and increase quality, if that's your angle.
LLMs aren't a blanket improvement; they come with tradeoffs.
You would think so, but Claude Code has gotten vastly more efficient over time. They're doing so much dogfooding with these things at this point that it makes more sense to optimize.
When you hear execs talking about AI, it's like listening to someone talk about how they bought some magic beans that will solve all their problems. IMO the only thing we have managed to do is spend a lot more money on accelerated compute.
Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc? Then, yeah, these tools will lower your value. If your aim is to get things done, and generate value, then no, I don't think these tools will lower your value.
This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product (https://www.trtvault.com/) to support himself and his family.
> Using these things will fry your brain's ability to think through hard solutions.
CNC hasn't made machinists forget about basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.
> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?
We are already dependent on electricity. If the power goes out, we work around that as best as we can. If you can't run your power tool, but you absolutely need to make progress on whatever it is you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.
I really dislike this anti-AI rhetoric. Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement.
We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.
(I thought Gas Town was satire? People in the comments here seem to be saying that Gas Town also had multi-agent file sharing for work tracking.)
1. GPT-5.2 Codex Max for planning
2. Opus 4.5 for implementation
3. Gemini for reviews
It’s easy to swap models or change responsibilities. Doc and steps here: https://github.com/sathish316/pied-piper/blob/main/docs/play...
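For illustration, the role-to-model mapping can be as small as a dict. Here's a minimal sketch; the model IDs and the dispatch() helper are my own stand-ins, not the actual pied-piper code:

```python
# Minimal sketch of a role-to-model mapping. The model IDs and the
# dispatch() helper are illustrative stand-ins, not pied-piper's code.
from dataclasses import dataclass

@dataclass
class Role:
    model: str          # which model currently owns this responsibility
    prompt_prefix: str  # how the task is framed for that model

# Swapping a model or reshuffling responsibilities is a one-line change.
ROLES = {
    "planner":     Role("gpt-5.2-codex-max", "Produce a step-by-step plan:"),
    "implementer": Role("claude-opus-4.5",   "Implement this step:"),
    "reviewer":    Role("gemini-3-pro",      "Review this diff for defects:"),
}

def dispatch(role_name: str, task: str) -> str:
    """Route a task to whichever model currently owns the role."""
    role = ROLES[role_name]
    # Stub: a real setup would call the provider's API or CLI here.
    return f"[{role.model}] {role.prompt_prefix} {task}"

if __name__ == "__main__":
    print(dispatch("planner", "add rate limiting to the API"))
```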
https://www.augmentcode.com/product/intent
You can use the code AUGGIE to skip the queue. Bring-your-own-agent (powered by Codex, CC, etc.) is coming to it next week.
I don't need anything more complicated than that and it works fine - I also run greptile[1] on PRs.
You run out of context so quickly, and if you don't have some kind of persistent guidance, things go south.
> I went to senior folks at companies like Temporal and Anthropic, telling them they should build an agent orchestrator, that Claude Code is just a building block, and it’s going to be all about AI workflows and “Kubernetes for agents”. I went up onstage at multiple events and described my vision for the orchestrator. I went everywhere, to everyone. (from "Welcome to Gas Town" https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...)
That Anthropic releases Agent Teams now (as rumored a couple of weeks back), after they've already adopted a tiny bit of Beads in the form of Tasks, means that either they were already building them back when Steve pitched orchestrators, or they've decided that he was right and it's time to scale the agents. Or they've arrived at the same conclusions independently -- it won't matter in the larger scheme of things. I think Steve greatly appreciates it existing; if anything, this is a validation of his vision. We'll probably be herding polecats in a couple of months, officially.
The main Claude instance is instructed to launch as many ralph loops as it wants, in screen sessions. It is told to sleep for set intervals and periodically check on their progress.
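In rough Python, the shape of it is something like this minimal sketch; the spec filenames, the sleep interval, and the exact piped `claude -p` invocation are assumptions on my part, not my actual setup:

```python
# Rough sketch of the setup described above. The spec filenames, sleep
# interval, and the piped `claude -p` invocation are assumptions.
import subprocess
import time

SPECS = ["spec-auth.md", "spec-billing.md", "spec-search.md"]

def launch_ralph_loop(spec: str, session: str) -> None:
    """Start a detached screen session that re-feeds the same spec to the
    agent forever -- the 'ralph loop' pattern."""
    loop = f"while true; do cat {spec} | claude -p --dangerously-skip-permissions; done"
    subprocess.run(["screen", "-dmS", session, "bash", "-c", loop], check=True)

for i, spec in enumerate(SPECS):
    launch_ralph_loop(spec, f"ralph-{i}")

# Supervisor: wake periodically and check which loops are still alive.
# (Ctrl-C to stop; a real supervisor would also read the loops' output.)
while True:
    time.sleep(600)
    subprocess.run(["screen", "-ls"])
```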
It worked reasonably well, but I don't prefer this way of working... yet. Right now I can't write spec (or meta-spec) files quickly enough to saturate the agent loops, and I can't QA their output well enough... mostly a me thing, I guess?
Wonder how they compare?
Oftentimes, if I'm only working on a single project or focus, I'm not using most of those roles at all, and it's as you describe: one agent divvying out tasks to other agents and compiling reports on them. But because my velocity with this type of coding is now bounded by how fast I can tell that agent what I want, I'm often working on 3 or 4 projects simultaneously, and Gas Town provides the perfect orchestration framework for doing this.
No polecats smh
I love that we're in a world where the crazy mad scientists are out there showing the rest of us where we'll end up, ahead of time and a bit rough around the edges, because all of this is so new and unprecedented. Watching these wholly new abstractions be discovered and converged upon in real time is the most exciting thing I've seen in my career.