Parallel AI agents are a game changer

https://morningcoffee.io/parallel-ai-agents-are-a-game-changer.html
56•shiroyasha•3h ago

Comments

epolanski•3h ago
Look, I like AI coding but we're already way past the need for parallelism.

LLMs write so much code in such a short time that the bottleneck is already the human having to review, correct, rewrite.

Parallel agents working on different parts of the application just compound the problem; it's impossible to catch up.

The only far-fetched use case I can see is swarming hundreds of solutions against properly designed test cases and spec documents, and having an agent select the best solutions.

Still, I'm quite convinced humans would be the bottleneck.

rcarr•3h ago
You are the main thread:

https://www.claudelog.com/mechanics/you-are-the-main-thread/

SatvikBeri•2h ago
It really depends on the project. For example, there's a lot of thorny devops debugging where I can just let Claude spin for 30 minutes and it'll solve the problem (or fail) with a relatively short final answer.

The sweet spot for me tends to be running one of these slower projects on a worktree in the background, and one more active coding project.

lawlessone•3h ago
Wouldn't the first "AI" use in coding be the code suggestions that IDEs have already had since before LLMs?

Or UML tools that generate code?

furyofantares•2h ago
The sweet spot for me is 2 agents on different projects. Surprisingly the context switch is easy. It's harder when doing 2 tasks on the same project.
merlincorey•2h ago
> on different projects

This seems like an important caveat the author of the article failed to mention when they described this:

> you can have several agents running simultaneously - one building a user interface, another writing API endpoints, and a third creating database schemas.

If these are all in the same project, then there has to be some required ordering to it, or you get a frontend written against a backend that doesn't have the endpoints it uses, and a backend that uses a different database schema than the separately generated one.

furyofantares•2h ago
On the same project you can use worktrees or otherwise separate clones of the repo - that part is not that bad. My comment was just about my own context switch.
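
For anyone unfamiliar, a minimal sketch of the worktree setup (paths and branch names are illustrative):

    # one worktree per agent: separate working directories, shared .git object store
    git worktree add -b agent-a ../repo-agent-a
    git worktree add -b agent-b ../repo-agent-b
    git worktree list   # main checkout plus one directory per agent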
kasey_junk•2h ago
This is just project management. Teams of software devs have been doing this for decades. And it’s easier with agents because there is no harm in letting one sit idle.
rcarr•1h ago
A technique I have found that works well is to have it working on one feature and then to have another session planning the next. Whilst it's busy generating some code, I open up another instance, tell it the next task and instruct it to create a gherkin feature file with an implementation plan. I then go back and forth between reviewing the code for the current feature and the plan for the next one.
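
For illustration, the planning session's output might be captured as a feature file along these lines (a hypothetical sketch; the feature, path, and steps are invented):

    # hypothetical: save the second session's plan as a Gherkin feature file
    cat > features/csv-export.feature <<'EOF'
    Feature: CSV export for reports
      # Implementation plan (sketch): add an export endpoint, stream rows, add a download button
      Scenario: User downloads a report as CSV
        Given a saved report with at least one row
        When the user clicks "Export CSV"
        Then a CSV file containing the report's rows is downloaded
    EOF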
manveerc•2h ago
When I read the title, I thought you were referring to https://parallel.ai, which also is a game changer in my opinion :)

PS: I have no affiliation with Parallel the company

zzzeek•2h ago
I use Claude every day. When I ask it to write a program that does something straightforward in one file from scratch, it does great. When I have it fix issues or add functionality to small-to-medium-sized apps with a mostly simple design, it does great. When I point it at codebases that are 20 years old and have a lot of indirection in their design due to years of hard lessons learned and a lot (like a LOT) of cases covered, it really struggles (just read my profile to know what codebase this is). I mostly try to get it to write changelog messages, docs and tests, where it works, but I have to really wrestle with it. I can't imagine doing anything on "vibes" and it all seems quite ridiculous if you are working on hardcore library-oriented software with tens of thousands of users.

If we're going to say, who cares, with LLMs we'll never need 20 year old codebases we'll just keep writing new stuff, OK you do you.

localhost•1h ago
One thing that I find works really well is to ask it to research things in the codebase and write a plan first. Codex with GPT-5 is exceedingly good at doing this. Then ask it to write a plan for what it would do with that information, i.e., "I want you to research the codebase for <goal>, then write a plan for how you would achieve <goal> given what you have learned."
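
As a rough sketch of that two-step flow (assuming the Codex CLI's non-interactive exec mode; the file names are illustrative):

    # step 1: research only, no code changes yet
    codex exec "Research the codebase for <goal>. Write your findings to RESEARCH.md."
    # step 2: plan from the research before any implementation
    codex exec "Read RESEARCH.md and write a step-by-step plan for achieving <goal> to PLAN.md."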
zzzeek•16m ago
Claude writes out plans and all that, it's good about that.

Sure would be great if AI agents could learn from conversations. That would really make things better. I tell Claude to capture things in the CLAUDE.md file, but I have to manually tend to that quite a lot.

ambicapter•2h ago
Obviously I'm an AI-tools skeptic, but this is hilarious:

> 1. Prepare issues with sufficient context

> Start by ensuring each GitHub issue contains enough context for agents to understand what needs to be built and how it integrates with the system. This might include details about feature behavior, file locations, database structure, or specific requirements such as displaying certain fields or handling edge cases.

> You can’t do half-hearted prompts and fix as you go, because those fixes come an hour later.

> Skills That Become More Important
> Full-stack understanding
> Problem decomposition
> Good writting skills
> QA and Code Review skills

This is just software engineering?!?

edit: On the other hand, maybe I can convince people in my org to get better at software engineering by telling them it's for the AI to work better.

shiroyasha•2h ago
Yes. AI assisted software engineering is still software engineering. I don't see that part changing anytime soon.
DetroitThrow•2h ago
>This is just software engineering?!?

Absolutely. The existence of vibe coding does not mean production code is going to be engineered without the same principles we've always used - even if we're using AI to generate a lot more code than before.

Any crowd suggesting that this was not the case has lost the plot, imo.

Aeolun•2h ago
People find it a lot more palatable when the AI requires all this information than when software engineers do, though. If I ask for clear requirements, I'm asked to just figure it out. But if the AI implements nonsense without clear requirements, that's the fault of the specs.
lazide•2h ago
Well, that’s because the software engineers are irritating when they push back and say ‘no’ or ‘wtf’.

When the AI does it, it’s being polite and stuff. /s, kinda.

ambicapter•1h ago
You're right. That's an excellent observation! I will make sure to use those language patterns in all my professional communications going forward.

/s but not really?

tjr•2h ago
I am amazed at how suddenly people are on board with writing clear design documentation now that it means AI can generate the code rather than humans.

I wonder how much better humans would be at generating code given the same abundance of clearly-written design documentation?

dzhiurgis•46m ago
My workplace was always pretty good at writing requirements so I call myself a chatgpt wrapper now.
electroglyph•2h ago
lmao, "good writting skills" =)
ScotterC•2h ago
I lol'ed too but then thought - at least he actually wrote this!
shiroyasha•2h ago
Heh, damn. Made a typo at the worst spot
ambicapter•1h ago
[sic]
pvtmert•1h ago
> This is just software engineering?!?

Indeed yes. Although most places have been shipping software in a "software development" and/or "programming" fashion for many years.

Many, many places certainly do not do the engineering part, even though the resulting product is software.

rukuu001•1h ago
Yes, the ability to clearly and unambiguously communicate what's required works on both humans and machines.
wrs•1h ago
Yeah, it’s funny, we may finally have a way to get developers to write documentation for other developers, it’s just that the other developers aren’t human!
skhameneh•1h ago
> On the other hand, maybe I can convince people in my org to get better at software engineering by telling them it's for the AI to work better.

Really good engineering practices are fundamental to getting the most out of AI tooling. Convoluted documentation, code fragmentation, etc. all pollute context when working with AI tools.

In my experience, just having one outdated word (especially if it's a library name) anywhere in code or documentation can create major ongoing headaches.

The worst part of it is trying to avoid negative assertions. When the AI tooling keeps trying to do "the wrong thing", it's sometimes a challenge to rephrase the instruction as a positive assertion about "the right thing".

osn9363739•2h ago
Can this guy, or someone else, post a full day's (4-8 hours, or whatever is spent in the weeds) stream of work to YouTube or something? I just want to watch the process to see what I'm missing. Or if anyone already does that, can they recommend it to me? I would appreciate it.
slig•2h ago
https://youtu.be/xAKVi_jvvg4

Two hours of Web Dev Cody.

shiroyasha•2h ago
Web Dev Cody is great. I recommend him.

I (author) sometimes stream my work here as well https://www.youtube.com/@operatelybackstage.

_345•2h ago
Are you saying that because you're also skeptical? I haven't had the best time switching to agent coding. I mean, for throwaway work it's fine, but it's kind of boring and aider still messes up from time to time.
osn9363739•2h ago
I probably lean on the sceptical side of the spectrum. I'm not against giving it a go if I can get value out of it, but I'm not having the wonderful experience that these people are having.

- The asynchronous nature of it slows me down, and it feels the opposite of what this bloke is saying about getting into a flow.
- I miss things because I'm not thinking it all the way through.
- The issues with errors or hallucinations.
- It does not feel faster (I might blow through a couple of things really fast, but the issues created elsewhere sometimes eat up all that saved time).
- The quality of work is all over the shop. Bigger projects just fall apart after a while.

I also wonder if the way I think is hindering me. I don't like natural language. I struggle to communicate at the best of times. All my emails are dot points. If someone asks me for a diagram, I write it in PlantUML or using a Python library. I work in DevOps and love declarative manifests and templates.
adriand•2h ago
Try as an initial step having the agentic AI improve your prompt for you. I have a "prompt improvement prompt template", which is a standardized document (customized for each project I'm working on), that has a bunch of boilerplate instructions in it, along with a section where I paste in my first-draft prompt. I then feed this document (boilerplate + crappy prompt) into the AI and it creates a way better prompt for me. Then I edit that to ensure it's correct, and then that becomes the prompt I use.
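
A minimal sketch of that flow, assuming Claude Code's non-interactive print mode (-p); the file names are illustrative:

    # combine the per-project boilerplate with the rough first-draft prompt
    cat prompt-template.md draft-prompt.md > combined.md
    # ask the model to rewrite the draft into a better prompt
    claude -p "$(cat combined.md)" > improved-prompt.md
    # review and edit improved-prompt.md, then use it as the real prompt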
kasey_junk•2h ago
Very strange that Devin and Claude Code weren't in the list of systems that support these workflows.
tomlockwood•2h ago
So the solution that gets upvoted during this hype cycle is the one that requires throwing more money at these companies? Curious.
asdev•2h ago
Context switching between more than 2 threads of work is untenable if you want to really review code in depth. And with AI, you need to go through everything with a fine-toothed comb.
Aeolun•2h ago
Yeah, I find I need to interrupt Claude at least once every two turns to prevent it from going off into the wrong rabbit hole.
shiroyasha•2h ago
The same work as any senior software engineer reviewing his team's work, imho.
lazide•2h ago
Eh, with humans you quickly learn who you can trust and who needs the super-skeptical detailed look. With LLMs you have to be super skeptical of everything.
CuriouslyC•1h ago
Counterpoint: you need to develop more robust automated systems so you don't have to go through everything with a fine-toothed comb.
muratsu•2h ago
I find Codex and Claude Code to have different strengths/weaknesses and wanted to be able to use them from a single interface. Currently hacking on https://devfleet.ai to make agent management easier on myself.

Briefly mentioned in the article, but async agents really thrive on small, scoped issues. Imagine hooking them up to your feedback tool (e.g. Canny) and automatically having a PR ready as you review the customer feedback. This would likely not work for large asks, but for smaller asks you can just accept the PR and ship it really fast!

conradkay•2h ago
Cool project! Do you think a lot of Codex's strengths are just from using GPT-5 as the model?
muratsu•2h ago
The Codex model is trained differently than the normal models. It has extra training on how to use the CLI, and I find it to be better at project-scoped tasks (e.g. running tests, migrations, etc.). Whereas in my experience Claude is the better coding model.
modarts•2h ago
It still amuses me how literally people took Karpathy's famous tweet about vibe coding: https://x.com/karpathy/status/1886192184808149383

If people were to actually read beyond the first sentence, it would become clear very quickly that this was meant to be tongue in cheek.

krapp•2h ago
People took it seriously because that's exactly how a lot of LLM users think and exactly what they want 'coding' to be. Honestly I'm not even certain it is satire.
pvtmert•1h ago
Because most people have a context window of 10 tokens, they do not read further than the first sentence (or two).
stavros•1h ago
I don't think it's tongue-in-cheek at all. It refers to a specific type of LLM coding, where you literally don't care about how bad the code is and just code stuff and hope it works. That's how I use the term, and that's why I use it rarely.
tptacek•2h ago
So:

(1) I feel like most people call these async agents, though maybe "parallel" is the term that will stick.

(2) Async is great for reasons other than concurrent execution.

(3) Concurrent execution is tricky, at least for tightly defined projects, because the PRs will step on each other, and (maybe this is just me) I would rather rewrite an entire project than try to pick through a complicated merge conflict.

shiroyasha•2h ago
I agree, async does feel like a better description. I wish I had used that term for the title.
CuriouslyC•2h ago
Nah, I saw this problem a while ago and already spec'd out the solution. First, agents need to be doing atomic commits. Second, you can just have a massive merge queue with bisection: if you're using Bazel, you can handle CI gating on thousands of PRs with very little overhead, and when a merge batch fails you find the bad patch set in O(log n) time and dispatch it to an agent for reconciliation. I even built a prototype; it works great in benchmarks, but I don't have a need for it over merge trains in GitLab yet.
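
A rough sketch of the failing-batch case (branch names and the CI script are illustrative; git bisect run does the O(log n) search):

    # batch-merge the queued PR branches onto a throwaway integration branch
    git checkout -b merge-batch main
    for pr in pr-101 pr-102 pr-103; do git merge --no-ff "$pr"; done
    # if the batch fails CI, bisect to find the offending patch set
    if ! ./ci-test.sh; then
        git bisect start HEAD main      # HEAD is bad, main is known good
        git bisect run ./ci-test.sh     # tests O(log n) commits, prints the first bad one
        git bisect reset
    fi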
mmaunder•2h ago
The author is lying. My team and I are heavy users of Claude code and other agents and it ain’t like this. You need to manage an AI coding agent carefully and course correct frequently. There are cases for parallel agents but they are tasks like parallel document fetches and summarization, and other tasks that don’t require supervision.

The idea of having multiple parallel agents merge pull requests or resolve issues in parallel is still just an idea.

Please don’t post or upvote attention seeking crap like this. It gives a very exciting and promising technology a bad name.

hu3•1h ago
Your comment is disproportionately rude. Just because your team can't leverage multiple coding agents doesn't mean no one else can.

And even if OP also can't, this is a good place to discuss possible problems and solutions for parallel development using coding agents.

Please refrain from gatekeeping.

mmaunder•1h ago
“With this approach, I can manage to have 10–20 pull requests open at once, each handled by a dedicated agent.”

A quote from the post. No, I think my post is calibrated quite well considering what OP's post does to our industry.

hu3•1h ago
Having 20 PRs open at once doesn't necessarily mean managing 20 agents simultaneously.

It can mean, for example, that 2 agents worked for some time through a list of 20 TODO features and produced 20 PRs to be reviewed. They could even have worked overnight.

You're seemingly judging from the least generous interpretation, which is not constructive and is also against HN guidelines fyi.

shiroyasha•1h ago
Exactly! To make it clearer, here is how I approach my day:

9-10am: I comb through our issue list, figuring out which issues are well defined and which need more input or a design decision. => I pick a handful, let's say 10, that I kick off to run in the background, and let's say another 10 for further specification.

10am-2pm: I tinker with the 10 issues to figure out the exact specs and to expand the requirements list.

2pm-6pm: I review the code written by the agents, one by one. I kick off further work on things that need more input, or merge things that look good.

mmaunder•1h ago
I’m not ok with someone self-promoting here at the cost of thousands of people thinking they’re either not smart enough or are doing something incorrectly. We saw this same pattern during the dot-com boom a quarter century ago, with self-promoters creating a “you just don’t get it” culture which eventually collapsed like a house of cards. What we share should be reproducible by others, and we should avoid hand-wavy excitement without substance. Especially here on HN, where many of the next great companies and ideas will be born.
hu3•39m ago
Technology evolves. At some point there are going to be things other people are doing that you can't replicate yet. That doesn't mean you're not smart enough. But it might mean that you are doing something wrong. Often, though, you just have to try different things or wait for methodologies to consolidate and become mainstream.

Even if parallel agents is not something easily done currently, debating about ways to do it is constructive enough for me.

pvtmert•2h ago
Is it just me, or is the post sounding (showing!) like they haven't tried the mentioned approach in real life?

Because in real life, one agent tries to fix a build issue with rm -rf node_modules while the other is already running a server (i.e. npm server), conflicting with each other nearly all the time! (Even if it's not a destructive action, the second npm server will most likely fail due to port-allocation conflicts!)

Meanwhile, what I found helpful is this:

1. Clone the same repo twice or three times.
2. In each terminal (or whatever), `cd` into one clone.
3. Create a branch, then run your ~commands~ prompts (each is its own session with its own repo).
4. Commit/push, then merge/rebase (resolving conflicts if needed; use the LLM again if you want).
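
A minimal sketch of that workflow (remote URL and branch names are illustrative):

    # one clone per agent session, so nothing shares a working directory
    for agent in a b; do
        git clone git@example.com:org/repo.git "repo-$agent"
        git -C "repo-$agent" switch -c "agent-$agent"
    done
    # run each agent in its own clone, then merge/rebase the branches afterwards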

Any other way multiple agents work in harmony in a single repo (filesystem/directory) at the same time is a pipe-dream with the current status of the MCP and agents.

Let alone being aware of each other (agents), they don't even have a proper locking mechanism. As soon as you make a new change, most of the editing (search/replace) functionality in the MCPs fails miserably. Then they re-read the entire file, just creating context rot or over-filling the context with already-existing stuff. Soon you run out of tokens (or just pay extra for no reason).

> edit: comments mentioned that each agent runs in a VM isolated from the others. Kinda makes sense, but still, there will be massive merge conflicts unless each agent runs on a completely different service/code-base (i.e. frontend vs. backend, or a couple of micro-services).

GZGavinZhao•2h ago
git worktrees?
abound•1h ago
The post is about using GitHub's integrated Copilot tooling, where each issue gets its own instance presumably running in a sandbox. This sidesteps the issues you're talking about here.
shiroyasha•1h ago
I don't claim to have lots of experience with this, I've only been doing it for a couple of weeks, but I do feel that some of your comments are disingenuous.

---

> Any other way multiple agents work in harmony in a single repo (filesystem/directory) at the same time is a pipe-dream with the current status of the MCP and agents.

Every agent runs in a separate VM on GitHub.

> Let alone being aware of each other (agents), they don't even have a proper locking mechanism.

Never claimed this. Feels like a strawman argument.

anthem2025•2h ago
Less convincing when you open up by gushing about every other lame AI tech then proceed to insist that this new thing is the real revolution.

Comes across as someone who just wants to shill for AI for some reason.

adriand•1h ago
I'm starting to think that the rather slow nature of Claude Code is a feature. In fact if they suddenly sped things up by 10X, I would want an option to slow it back down. Sometimes I am fine with it working unsupervised while I empty the dishwasher or take a shower, but a lot of the time I watch it work. Not only does this help me stop it from going down rabbit holes / chewing through all of my Opus usage cap, but I have a much better understanding of what it's built, in the same way I might if I was pair-programming with someone and they were driving.

The idea of having multiple instances working in parallel sounds like a nightmare to me. I find this technology works best when it is guided, ideally in real time.

sovietmudkipz•1h ago
To those who have worked with autonomous background agent techniques: can you describe the stack and the workflow?

Has anyone set up a local only autonomous agent, using an open source model from somewhere like huggingface?

Still a bit confused on the details of implementing the technique. Would appreciate any explanations (thanks in advance).

ravila4•17m ago
My experience with parallel agents is that the bottleneck is not how fast we can produce code but the speed at which we can review it and context switch. Realistically, I don’t think most people have the mental capacity to supervise more than one simultaneous task of any real complexity.