frontpage.

Secret Service says it thwarted device network used to threaten U.S. officials

https://www.washingtonpost.com/national-security/2025/09/23/secret-service-cellular-device-network/
1•rbanffy•44s ago•0 comments

Reusable grocery bags durability test

https://www.cbc.ca/lite/story/1.7643243
2•colinprince•1m ago•0 comments

German state investigates drone sightings for possible espionage

https://www.reuters.com/world/europe/german-state-investigates-drone-sightings-possible-espionage...
2•thm•2m ago•0 comments

Technique makes complex 3D printed parts more reliable

https://news.mit.edu/2025/technique-makes-complex-3d-printed-parts-more-reliable-0925
1•rbanffy•3m ago•0 comments

Process Tracing Projects

https://github.com/oils-for-unix/oils/wiki/Process-Tracing-Projects
2•todsacerdoti•4m ago•0 comments

Combine manufacturer shifts production to Europe from U.S.

https://www.cbc.ca/lite/story/1.7643879
1•colinprince•5m ago•0 comments

Handling Negative Feedback

https://www.youtube.com/watch?v=clARvO_AthM
1•todsacerdoti•6m ago•1 comments

A new bystander effect? Aggression can be contagious when observing it in peers

https://medicalxpress.com/news/2025-09-bystander-effect-aggression-contagious-peers.html
2•PaulHoule•7m ago•0 comments

Poll: Fearful or Greedy?

1•surprisetalk•8m ago•0 comments

LocalCode – A Perl-Based AI Coding Agent

https://www.i-programmer.info/news/222-perl/18340-localcode-a-perl-based-ai-coding-agent.html
2•aquastorm•8m ago•0 comments

Taylor TX data center project paused after neighbors sue

https://www.kut.org/energy-environment/2025-09-26/taylor-texas-blueprint-data-centers-lawsuit
1•pavel_lishin•8m ago•1 comments

How to Spot a Genius

https://www.economist.com/finance-and-economics/2025/09/23/how-to-spot-a-genius
2•mathattack•10m ago•2 comments

Updating application icons for macOS 26 Tahoe and Liquid Glass

https://successfulsoftware.net/2025/09/26/updating-application-icons-for-macos-26-tahoe-and-liqui...
1•hermitcrab•10m ago•0 comments

The Right Tool for the Job: An In-Depth Look at JavaScript Array Loops

https://edith.info/blog/javascript-array-loops-in-depth
1•dimboiu•13m ago•0 comments

The Joy of Indexing

https://weidok.al/2025/09/20/the-joy-of-indexing.html
1•speckx•14m ago•0 comments

Ask HN: Do rich people avoid clicking on social media (e.g., reddit) links?

3•amichail•14m ago•3 comments

Show HN: I solved my movie night headache

https://watchnowai.com/
1•joewebber•14m ago•0 comments

An example demonstrating keyboard and mouse input in JavaScript

https://js-input-event.pages.dev/
1•greentec•15m ago•1 comments

To Vibe or Not to Vibe

https://martinfowler.com/articles/exploring-gen-ai/to-vibe-or-not-vibe.html
1•djha-skin•15m ago•0 comments

Study of the longest-lived person reveals rare genes and good bacteria

https://phys.org/news/2025-09-world-longest-person-reveals-rare.html
1•Brajeshwar•15m ago•0 comments

First radar images from satellite showcase Maine coast and North Dakota farmland

https://apnews.com/article/satellite-nasa-india-radar-73f610a8e03fa544db8826b170ce0fcc
1•Brajeshwar•16m ago•0 comments

Renewables blow past nuclear when it comes to cheap datacenter juice

https://www.theregister.com/2025/09/26/renewables_vs_smr_datacenter/
2•rntn•16m ago•0 comments

I vibe-coded an iOS camera app with album management with no experience with iOS

1•dirtide•16m ago•0 comments

Chrome DevTools for Coding Agents

https://github.com/ChromeDevTools/chrome-devtools-mcp
1•nateb2022•17m ago•0 comments

Show HN: A Modern Event Sourcing Database – Meet EventSourcingDB

https://www.thenativeweb.io/products/eventsourcingdb
1•goloroden•18m ago•0 comments

Show HN: I built a website to visualize Terraform code using Claude AI

https://tfvisualizer.com
1•autotune•19m ago•0 comments

New AI Tool Pinpoints Genes, Drug Combos to Restore Health in Diseased Cells

https://hms.harvard.edu/news/new-ai-tool-pinpoints-genes-drug-combos-restore-health-diseased-cells
1•ca98am79•19m ago•0 comments

The Perplexity Search API

https://www.perplexity.ai/hub/blog/introducing-the-perplexity-search-api
1•thm•19m ago•0 comments

Where Is AI Taking Us?

https://mattlacey.com/posts/2025-09-25-what-now/
1•laceysnr•21m ago•1 comments

A React portal component designed for browser extension development

https://github.com/molvqingtai/react-magic-portal
1•molvqingtai•23m ago•1 comments

Context is the bottleneck for coding agents now

https://runnercode.com/blog/context-is-the-bottleneck-for-coding-agents-now
93•zmccormick7•1h ago

Comments

koakuma-chan•1h ago
Has anyone tried making coding agent LoRAs yet, project-specific and/or framework-specific?
CardenB•1h ago
I know it isn’t your question exactly, and you probably know this, but the models for coding-assist tools are generally fine-tunes of models for coding-specific purposes. Example: in OpenAI Codex they use GPT-5-Codex
neutronicus•1h ago
I think the question is, can I throw a couple thousand bucks of GPU time at fine-tuning a model to have knowledge of our couple million lines of C++ baked into the weights instead of needing to fuck around with "Context Engineering".

Like, how feasible is it for a mid-size corporation to use a technique like LoRA, mentioned by GP, to "teach" (say, for example) Kimi K2 about a large C++ codebase so that individual engineers don't need to learn the black art of "context engineering" and can just ask it questions.
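
A rough sketch of what that could look like, assuming the Hugging Face transformers/peft stack and a plain-text dump of source files (the model name, paths, and hyperparameters are illustrative, not a recommendation):

  # Hedged sketch: LoRA fine-tuning an open-weights code model on an internal codebase.
  from datasets import load_dataset
  from peft import LoraConfig, get_peft_model
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer, TrainingArguments)

  base = "Qwen/Qwen2.5-Coder-7B"   # any open-weights code model
  tok = AutoTokenizer.from_pretrained(base)
  tok.pad_token = tok.pad_token or tok.eos_token
  model = AutoModelForCausalLM.from_pretrained(base)

  # Low-rank adapters on the attention projections; the base weights stay frozen.
  model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                           target_modules=["q_proj", "v_proj"],
                                           task_type="CAUSAL_LM"))

  # One text record per source file, produced by a separate dump script.
  ds = load_dataset("text", data_files="codebase_dump/*.txt")["train"]
  ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=2048),
              remove_columns=["text"])

  Trainer(model=model,
          args=TrainingArguments("lora-out", per_device_train_batch_size=1,
                                 gradient_accumulation_steps=16, num_train_epochs=1),
          train_dataset=ds,
          data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()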

pu_pe•1h ago
I'm curious about this too. I think there are two bottlenecks: one is that training a relatively large LLM can be resource-intensive (so people go for RAG and other shortcuts), and the other is that fine-tuning it to your use cases might make it dumber overall.
bhu8•1h ago
IMHO, jumping from Level 2 to Level 5 is a matter of:

- Better structured codebases - we need hierarchical codebases with minimal depth, maximal orthogonality and reasonable width. Think microservices.

- Better documentation - most code documentations are not built to handle updates. We need a proper graph structure with few sources of truth that get propagated downstream. Again, some optimal sort of hierarchy is crucial here.

At this point, I really don't think that we necessarily need better agents.

Set up your codebase optimally, spin up 5-10 instances of gpt-5-codex-high for each issue/feature/refactor (pick the best result according to some criteria), and your life will go smoothly.

lomase•1h ago
Can you show something you have built with that workflow?
hirako2000•1h ago
Of course not.
bhu8•1h ago
Not yet unfortunately, but I'm in the process of building one.

This was my journey: I vibe-coded an Electron app and ended up with a terrible monolithic architecture, and mostly badly written code. Then, I took the app's architecture docs and spent a lot of my time shouting "MAKE THIS ARCHITECTURE MORE ORTHOGONAL, SOLID, KISS, DRY" to gpt-5-pro, and ended up with a 1500+ liner monster doc.

I'm now turning this into a Tauri app and following the new architecture to a T. I would say that it has a pretty clean structure with multiple microservices.

Now, new features are gated based on the architecture doc, so I'm always maintaining a single source of truth that serves as the main context for any new discussions/features. Also, each microservice has its own README file(s) which are updated with each code change.

RedNifre•47m ago
I vibe coded an invoice generator by first vibe coding a "template" command line tool as a bash script that substitutes {{words}} in a libre office writer document (those are just zipped xml files, so you can unpack them to a temp directory and substitute raw text without xml awareness), and in the end it calls libre office's cli to convert it to pdf. I also asked the AI to generate a documentation text file, so that the next AI conversation could use the command as a black box.

The vibe coded main invoice generator script then does the calendar calculations to figure out the pay cycle and examines existing invoices in the invoice directory to determine the next invoice number (the invoice number is in the file name, so it doesn't need to open the files). When it is done with the calculations, it uses the template command to generate the final invoice.

This is a very small example, but I do think that clearly defined modules/microservices/libraries are a good way to only put the relevant work context into the limited context window.

It also happens to be more human-friendly, I think?
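
A minimal sketch of the same {{placeholder}} trick in Python rather than bash (paths and keys are illustrative): an .odt file is a zip archive, and content.xml inside it is plain XML, so raw text substitution works as long as a placeholder isn't split across XML tags.

  import subprocess, zipfile

  def fill_template(template_odt, output_odt, values):
      with zipfile.ZipFile(template_odt) as zin:
          xml = zin.read("content.xml").decode("utf-8")
          for key, val in values.items():
              xml = xml.replace("{{" + key + "}}", val)
          with zipfile.ZipFile(output_odt, "w", zipfile.ZIP_DEFLATED) as zout:
              for item in zin.namelist():
                  data = xml.encode("utf-8") if item == "content.xml" else zin.read(item)
                  zout.writestr(item, data)

  fill_template("invoice_template.odt", "invoice_0042.odt",
                {"number": "0042", "date": "2025-09-26", "total": "1,200.00"})
  # LibreOffice's own CLI does the final PDF conversion.
  subprocess.run(["libreoffice", "--headless", "--convert-to", "pdf", "invoice_0042.odt"],
                 check=True)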

skaosobab•1h ago
> Think microservices.

Microservices should be a last resort for when you’ve either: a) hit technical scale that necessitates them, or b) hit organizational complexity that necessitates them

Opting to introduce them sooner will almost certainly increase the complexity of your codebase prematurely (already a hallmark of LLM development).

> Better documentation

If this means reasoning as to why decisions are made then yes. If this means explaining the code then no - code is the best documentation. English is nowhere near as good at describing how to interface with computers.

Given how long gpt codex 5 has been out, there’s no way you’ve followed these practices for a reasonable enough time to consider them definitive (2 years at the least, likely much longer).

bhu8•59m ago
> Opting to introduce them sooner will almost certainly increase the complexity of your codebase prematurely

Agreed, but how else are you going to scale mostly AI written code? Relying mostly on AI agents gives you that organizational complexity.

> Given how long gpt codex 5 has been out, there’s no way you’ve followed these practices for a reasonable enough time to consider them definitive

Yeah, fair. Codex has been out for less than 2 weeks at this point. I was relying on gpt-5 in August and opus before that.

lomase•55m ago
I understand why you went with microservices; people do that even when not using LLMs, because it looks more organized.

But in my experience a microservice architecture is orders of magnitude more complex to build and understand than a monolith.

If you, with the help of an LLM, struggle to keep a monolith organized, I am positive you will find it even harder to build microservices.

Good luck in your journey, I hope you learn a ton!

bhu8•45m ago
Noted. Thanks!
perplex•1h ago
I've been using claude on two codebases, one with good layering and clean examples, the other not so much. I get better output from the LLM with good context and clean examples and documentation. Not surprising that clarity in code benefits both humans and machines.
ninetyninenine•1h ago
Context is a bottleneck for humans as well. We don’t have full context when going through the code because we can’t hold full context.

We summarize context and remember summarizations of it.

Maybe we need to do this with the LLM. Chain of thought sort of does this, but it’s not deliberate. The system prompt needs to make this a deliberate task: build summaries and notes of the entire code base, with its gotchas and key aspects, and keep that summarized context as permanent context the same way ChatGPT remembers aspects of you.

The summaries can even be sectioned off and have different levels of access. If the LLM wants to drill down into a subfolder, it looks at the general summary and then at another summary for the subfolder. It doesn’t need to load the full summary into context.

Imagine a hierarchy of system notes and summaries. The LLM decides where to go and what code to read, with access to the notes it left previously when going through the code. Like the code itself, it never reads it all; it just accesses the sections of summaries that go along with the code. It’s sort of like code comments.

We also need to program it to update the notes every time it changes the program. And when you change the program without consulting the AI, on every commit the AI also needs to update the notes based on your changes.

The LLM needs a system prompt that tells it to act like us and remember things like us. We do not memorize and examine full context of anything when we dive into code.
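
A rough sketch of that "hierarchy of notes" idea (the summarize() helper is a stand-in for an LLM call; the file extensions and NOTES.md layout are illustrative):

  from pathlib import Path

  def summarize(text: str) -> str:
      # Stand-in for an LLM call that would return a ~5 line summary.
      return " ".join(text.split())[:300]

  def build_notes(root: Path) -> str:
      child_notes = []
      for child in sorted(root.iterdir()):
          if child.is_dir() and not child.name.startswith("."):
              child_notes.append(f"{child.name}/: {build_notes(child)}")
          elif child.suffix in {".py", ".cpp", ".h"}:
              child_notes.append(f"{child.name}: {summarize(child.read_text(errors='ignore'))}")
      # A directory's note summarizes its children's notes, so the context an
      # agent has to load shrinks at every level it climbs.
      note = summarize("\n".join(child_notes)) if child_notes else ""
      (root / "NOTES.md").write_text(note)
      return note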

maerF0x0•1h ago
> remember summarizations

yes, and if you're an engineering manager you retain _out of date_ summarizations, often materially out of date.

ninetyninenine•1h ago
I addressed this. The AI needs to examine every code change going in whether that code change comes from AI or not and edit the summaries accordingly.

This is something humans don’t actually do. We aren’t aware of every change and we don’t have updated documentation of every change, so the LLM will be doing better in this regard.

lomase•1h ago
I mean... have you ever heard of this small tool called GIT that people use to track code changes?
ninetyninenine•1h ago
I’m not talking about git diffs. I’m talking about the summaries of context. On every commit the AI needs to update the summaries and notes it took about the code.

Did you read the entirety of what I wrote? Please read.

Say the AI left a 5 line summary of a 300 line piece of code. You as a human update that code. What I am saying specifically is this: when you make the change, the AI then sees this and updates the summary. So the AI needs to be interacting with every code change, whether or not you used it to vibe code.

The next time the AI needs to know what this function does, it doesn’t need to read the entire 300 line function. It reads the 5 line summary, puts it in the context window and moves on with chain of thought. Understand?

This is what shrinks the context. Humans don’t have unlimited context either. We have vague fuzzy memories of aspects of the code and these “notes” effectively make coding agents do the same thing.
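
A sketch of keeping such per-file summaries in sync on every commit, e.g. from a post-commit hook (git is real; llm_summarize() is a stand-in for a model call, and the notes/ layout is illustrative):

  import subprocess
  from pathlib import Path

  def llm_summarize(source: str) -> str:
      # Replace with a real model call; a naive stand-in keeps the sketch runnable.
      return " ".join(source.split())[:400]

  changed = subprocess.run(["git", "diff", "--name-only", "HEAD~1", "HEAD"],
                           capture_output=True, text=True, check=True).stdout.split()

  for path in changed:
      src = Path(path)
      note = Path("notes", path + ".md")
      if not src.exists():              # file was deleted: drop its note too
          note.unlink(missing_ok=True)
          continue
      note.parent.mkdir(parents=True, exist_ok=True)
      note.write_text(llm_summarize(src.read_text(errors="ignore")))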

wat10000•1h ago
They need a proper memory. Imagine you're a very smart, skilled programmer but your memory resets every hour. You could probably get something done by making extensive notes as you go along, but you'll still be smoked by someone who can actually remember what they were doing in the morning. That's the situation these coding agents are in. The fact that they do as well as they do is remarkable, considering.
multiplegeorges•1h ago
Basically, LLMs are the guy from Memento.
KeatonDunsford•36m ago
This is precisely my existing usage pattern with Cursor: I structure my repo declaratively with a Clojure and Nix build pipeline, so when my context maxes out for a chat session, the repo is self-evident and self-documented enough that a new chat session automatically starts with heightened context

- - kae3g

nsedlet•1h ago
Agreed. As engineers we build context every time we interact with the codebase. LLMs don't do that.

A good senior engineer has a ton in their head after 6+ months in a codebase. You can spend a lot of time trying to equip Claude Code with the equivalent in the form of CLAUDE.MD, references to docs, etc., but it's a lot of work, and it's not clear that the agents even use it well (yet).

hirako2000•1h ago
That is not how the brain does it.

We do take notes, we summarize our writings, that's a process. But the brain does not follow that primitive process to "scale".

anthonypasq•53m ago
You're projecting a deficiency of the human brain onto computers. Computers have advantages that our brains don't (perfect and large memory); there's no reason to think that we should try to recreate how humans do things.

Why would you bother with all these summaries if you can just read and remember the code perfectly?

maerF0x0•1h ago
I've noticed that ChatGPT doesn't seem to be very good at understanding elapsed time. I have some long-running threads, and unless I prompt it with elapsed time ("it's now 7 days later") the responses act like it was 1 second after the last message.

I think this might be a good leap for agents: the ability to not just review a doc in its current state, but to keep in context/understanding the full evolution of a document.

wat10000•1h ago
They have no ability to even perceive time, unless the system gives them timestamps for the current interaction and past interactions.
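
A tiny sketch of giving them that: stamp each message before it goes into the context so the model can at least see elapsed wall-clock time (the role/content dict is the usual chat format; the wrapper is illustrative):

  from datetime import datetime, timezone

  def stamped(role: str, content: str) -> dict:
      now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
      return {"role": role, "content": f"[{now}] {content}"}

  history = [stamped("user", "Draft the weekly status update.")]
  # ...a week later, the new message carries a visibly later timestamp:
  history.append(stamped("user", "Same again, covering this week."))
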
multiplegeorges•1h ago
Which seems like a trivial addition if it's not there?
wat10000•56m ago
It is, but now you're burning a bit of context on something that might not be necessary, and potentially having the agent focus on time when it's not relevant. Not necessarily a bad idea, but as always, tradeoffs.
HankStallone•1h ago
I've noticed the same thing with Grok. One time it predicted an X% chance that something would happen by July 31. On August 1, it was still predicting the thing would happen by July 31, just with lower (but non-zero) odds. Their grasp on time is tenuous at best.
marstall•1h ago
> Level 2 - One commit - Cursor and Claude Code work well for tasks in this size range.

I'll stop ya right there. I've spent the past few weeks fixing bugs in a big multi-tier app (which is what any production software is these days). My output per bug is always one commit, often one line.

Claude is an occasional help, nothing more. Certainly not generating the commit for me!

SparkyMcUnicorn•1h ago
I'll stop you right there. I've been using Claude Code for almost a year on production software with pretty large codebases. Both multi-repo and monorepo.

Claude is able to create entire PRs for me that are clean, well written, and maintainable.

Can it fail spectacularly? Yes, and it does sometimes. Can it be given good instructions and produce results that feel like magic? Also yes.

ljm•45m ago
For finicky issues like that I often find that, in the time it takes to create a prompt with the necessary context, I could have just made the one-line tweak myself.

In a way that is still helpful, especially if the act of putting the prompt together brought you to the solution organically.

Beyond that, 'clean', 'well written' and 'maintainable' are all relative terms here. In a low quality, mega legacy codebase, the results are gonna be dogshit without an intense amount of steering.

agf•54m ago
This is interesting, and I'd say you're not the target audience. If you want the code Claude writes to be line-by-line what you think is most appropriate as a human, you're not going to get it.

You have to be willing to accept "close-ish and good enough" to what you'd write yourself. I would say that most of the time I spend with Claude is to get from its initial try to "close-ish and good enough". If I was working on tiny changes of just a few lines, it would definitely be faster just to write them myself. It's the hundreds of lines of boilerplate, logging, error handling, etc. that makes the trade-off close to worth it.

layer8•21m ago
The parent comment didn’t say anything about expecting the LLM output “to be line-by-line what you think is most appropriate as a human”?
hirako2000•1h ago
And they didn't see that coming?

I gave up building agents as soon as I figured they would never scale beyond the context constraint. The increase in memory and compute costs to grow the context size of these things isn't linear.

lxe•1h ago
Context has been the bottleneck since the beginning
aliljet•1h ago
There's a misunderstanding here broadly. Context could be infinite, but the real bottleneck is understanding intent late in a multi-step operation. A human can effectively discard or disregard prior information as the narrow window of focus moves to a new task; LLMs seem incredibly bad at this.

Having more context, but leaving open an inability to effectively focus on the latest task is the real problem.

ray__•1h ago
This is a great insight. Any thoughts on how to address this problem?
throwup238•58m ago
It has to be addressed architecturally with some sort of extension to transformers that can focus the attention on just the relevant context.

People have tried to expand context windows by reducing the O(n^2) attention mechanism to something more sparse and it tends to perform very poorly. It will take a fundamental architectural change.

buddhistdude•55m ago
Can one instruct an LLM to pick the parts of the context that will be relevant going forward? And then discard the existing context, replacing it with the new 'summary'?
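
That is roughly what "compaction" does; a minimal sketch using the OpenAI Python SDK (the model name and prompt wording are illustrative):

  from openai import OpenAI

  client = OpenAI()

  def compact(history: list[dict]) -> list[dict]:
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=history + [{
              "role": "user",
              "content": "Summarize only the decisions, constraints, and open "
                         "questions still relevant to the task. Be terse.",
          }],
      )
      summary = resp.choices[0].message.content
      # Discard the old context entirely; keep only the distilled summary.
      return [{"role": "system", "content": "Carried-over context:\n" + summary}]
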
magicalhippo•45m ago
I'm not an expert but it seemed fairly reasonable to me that a hierarchical model would be needed to approach what humans can do, as that's basically how we process data as well.

That is, humans usually don't store exactly what was written in a sentence five paragraphs ago, but rather the concept or idea conveyed. If we need details we go back and reread or similar.

And when we write or talk, we form first an overall thought about what to say, then we break it into pieces and order the pieces somewhat logically, before finally forming words that make up sentences for each piece.

From what I can see there's work on this, like this[1] and this[2] more recent paper. Again not an expert so can't comment on the quality of the references, just some I found.

[1]: https://aclanthology.org/2022.findings-naacl.117/

[2]: https://aclanthology.org/2025.naacl-long.410/

yggdrasil_ai•11m ago
>extension to transformers that can focus the attention on just the relevant context.

That is what transformer attention does in the first place, so you would just be stacking two transformers.

aliljet•56m ago
For me? It's simple. Completely empty the context and rebuild focused on the new task at hand. It's painful, but very effective.
atonse•47m ago
Do we know if LLMs understand the concept of time? (Like, I told you this in the past, but what I told you later should supersede it?)

I know there are classes of problems that LLMs can't natively handle (like doing math, even simple addition... or spatial reasoning; I would assume time's in there too). There are ways they can hack around this, like writing code that performs the math.

But how would you do that for chronological reasoning? Because that would help with compacting context to know what to remember and what not.

loudmax•23m ago
LLMs certainly don't experience time like we do. They live in a uni-dimensional world that consists of a series of tokens (though it gets more nuanced if you account for multi-modal or diffusion models). They pick up some sense of ordering from their training data, such as "disregard my previous instruction," but it's not something they necessarily understand intuitively. Fundamentally, they're just following whatever patterns happen to be in their training data.
neutronicus•1h ago
No, I think context itself is still an issue.

Coding agents choke on our big C++ code-base pretty spectacularly if asked to reference large files.

Someone1234•57m ago
Yeah, I have the same issue too. Even for a file with several thousand lines, they will "forget" earlier parts of the file they're still working in, resulting in mistakes. They don't need full awareness of the context, but they need a summary of it so that they can go back and review relevant sections.

I have multiple things I'd love LLMs to attempt to do, but the context window is stopping me.

AnotherGoodName•37m ago
I do take that as a sign to refactor when it happens, though. Even if not for the sake of LLM compatibility with the codebase, refactoring large files cuts down merge conflicts.

In fact I've found LLMs are reasonable at the simple task of refactoring a large file into smaller components, with documentation on what each portion does, even if they can't get the full context immediately. Doing this then helps the LLM later. I'm also of the opinion we should be making codebases LLM compatible. So if it happens, I direct the LLM that way for 10 minutes and then get back to the actual task once the codebase is in a more reasonable state.

bongodongobob•16m ago
Interestingly, this issue has caused me to refactor and modularize code that I should have addressed a long time ago, but didn't have the time or stamina to tackle. Because the LLM can't handle the context, it has helped me refactor stuff (seems to be very good at this in my experience) and that has led me to write cleaner and more modular code that the LLMs can better handle.
atonse•37m ago
I've found situations where a file was too big, and then it tries to grep for what might be useful in that file.

I could see in C++ it getting smarter about first checking the .h files or just grepping for function documentation, before actually trying to pull out parts of the file.

AlGoreRhythm•10m ago
Out of curiosity, how would you rate an LLM’s ability to deal with pointers in C++ code?
bgirard•1h ago
I think that's the real issue. If the LLM spends a lot of context investigating a bad solution and you redirect it, I notice it has trouble ignoring maybe 10K tokens of bad exploration context against my 10 lines of 'No, don't do X, explore Y'.
dingnuts•51m ago
that's because a next token predictor can't "forget" context. That's just not how it works.

You load the thing up with relevant context and pray that it guides the generation path to the part of the model that represents the information you want and pray that the path of tokens through the model outputs what you want

That's why they have a tendency to go ahead and do things you tell them not to do.

also IDK about you but I hate how much praying has become part of the state of the art here. I didn't get into this career to be a fucking tech priest for the machine god. I will never like these models until they are predictable, which means I will never like them.

victorbjorklund•47m ago
You can rewrite the history (but there are issues with that too). So an agent can forget context. Simply don't feed in part of the context on the next run.
dragonwriter•41m ago
This is where the distinction between “an LLM” and “a user-facing system backed by an LLM” becomes important; the latter is often much more than a naive system for maintaining history and reprompting the LLM with added context from new user input, and could absolutely incorporate a step which (using the same LLM with different prompting or completely different tooling) edited the context before presenting it to the LLM to generate the response to the user. And such a system could, by that mechanism, “forget” selected context in the process.
yggdrasil_ai•14m ago
I have been building Yggdrasil for that exact purpose - https://github.com/zayr0-9/Yggdrasil
davedx•39m ago
Yeah I start a new session to mitigate this. Don’t keep hammering away - close the current chat/session whatever and restate the problem carefully in a new one.
cjbgkagh•35m ago
There should be a simple button that allows you to refine the context. A fresh LLM could generate a new context from the inputs and outputs of the chat history, then another fresh LLM can start over with that context.
adastra22•21m ago
/compact in Claude Code.
pulvinar•15m ago
It's easy to miss: ChatGPT now has a "branch to new chat" option to branch off from any reply.
moffkalast•19m ago
That's not how attention works, though; it should be perfectly able to figure out which parts are important and which aren't. The problem is that it doesn't really scale beyond small contexts and works on a token-to-token basis instead of being hierarchical with sentences, paragraphs and sections. The only models that actually do long context do so by skipping attention layers, or doing something without attention or without positional encodings, all leading to shit performance. Nobody pretrains on more than like 8k, except maybe Google, who can throw TPUs at the problem.
jofla_net•17m ago
Relax friend! I can't see why you'd be peeved in the slightest! Remember, the CEOs have it all figured out and have 'determined' that we don't need all those eyeballs on the code anymore. You can simply 'feed' the machine and do the work of forty devs! This is the new engineering! /s
rco8786•47m ago
I think the general term for this is "context poisoning" and is related but slightly different to what the poster above you is saying. Even with a "perfect" context, the LLM still can't infer intent.
sheerun•16m ago
Could be, but it's not. As soon as it becomes infinite, a new brand of solutions will emerge.
tptacek•13m ago
Asking, not arguing, but: why can't they? You can give an agent access to its own context and ask it to lobotomize itself like Eternal Sunshine. I just did that with a log ingestion agent (broad search to get the lay of the land, which eats a huge chunk of the context window, then narrow searches for weird stuff it spots, then go back and zap the big log search). I assume this is a normal approach, since someone else suggested it to me.
simonw•7m ago
This is also the idea behind sub-agents. Claude Code answers questions about things like "where is the code that does X" by firing up a separate LLM running in a fresh context, posing it the question and having it answer back when it finds the answer. https://simonwillison.net/2025/Jun/2/claude-trace/
tptacek•4m ago
I'm playing with that too (everyone should write an agent; basic sub-agents are incredibly simple --- just tool calls that can make their own LLM calls, or even just a tool call that runs in its own context window). What I like about Eternal Sunshine is that the LLM can just make decisions about what context stuff matters and what doesn't, which is a problem that comes up a lot when you're looking at telemetry data.
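
A minimal version of that sub-agent pattern (OpenAI Python SDK; the model name, prompts, and function names are illustrative): the "sub-agent" is just another LLM call with its own fresh message list, so its exploration never touches the parent's context.

  from openai import OpenAI

  client = OpenAI()

  def sub_agent(question: str, corpus: str) -> str:
      # Burn a throwaway context on the broad search; return only the answer.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": "Answer the question from the material "
                                            "provided. Reply with the answer only."},
              {"role": "user", "content": corpus + "\n\nQuestion: " + question},
          ],
      )
      return resp.choices[0].message.content

  # The parent agent's history only ever sees the short answer, not the corpus.
  answer = sub_agent("Where is the retry logic implemented?",
                     open("big_log_dump.txt").read())
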
gtsop•1h ago
> Intelligence is rapidly improving with each model release.

Are we still calling it intelligence?

hatefulmoron•1h ago
I can feel the ground rumbling as thousands approach to engage in a "name the trait" style debate..
fortyseven•54m ago
Just a reminder that language is flexible.
delusional•1h ago
These are such silly arguments. It sounds like people looking at a graph of a linear function crossing an exponential one at x=2, y=2 and wondering why the curves don't fit at x=3, y=40.

"It's not the x value that's the problem, it's the y value."

You're right, it's not "raw intelligence" that's the bottleneck, because there's none of that in there. The truth is that no tweak to any parameter is ever going to make the LLM capable of programming, just like an exponential curve is always going to outgrow a linear one. You can't tweak the parameters out of that fundamental truth.

__alexs•1h ago
IME speed is the biggest bottleneck. They simply can't navigate the code base fast enough.
anthonypasq•56m ago
grok-code-fast-1 is quite nice for this actually; it's fast and cheap enough that you don't feel bad throwing entire threads away and trying again.
_joel•1h ago
I'm making a pretty complex project using Claude. I tried claude flow and some other orchestrators but they produced garbage. I've found that using GitHub issues to track progress as comments works fairly well. The PRs can get large comment-wise (especially if you have Gemini Code Assist, recommended as another code-review judge), so be mindful of that (it will blow the context window). Using a fairly lean CLAUDE.md and a few MCPs (context7, and consult7 with Gemini for longer lookups) works well too, although be prepared to tell it to reread CLAUDE.md a few conversations deep, as it loses it. It's working fairly well so far; it feels a bit akin to herding cats sometimes, and be prepared to actually read the code it's making, or the important bits at least.
asdev•57m ago
I don't think intelligence is increasing. Arbitrary benchmarks don't reflect real world usage. Even with all the context it could possibly have, these models still miss/hallucinate things. Doesn't make them useless, but saying context is the bottleneck is incorrect.
reclusive-sky•29m ago
I agree, I often see Opus 4.1 and GPT5 (Thinking) make astoundingly stupid decisions with full confidence, even on trivial tasks requiring minimal context. Assuming they would make better decisions "if only they had more context" is a fallacy
alchemist1e9•6m ago
Is there a good example you could provide of that? I just haven’t seen that personally, so I’d be interested in any examples on these current models. I’m sure we all remember lots of examples of stupidity being posted in the early days, and it was interesting. It’d be great if people kept doing that, so we could get a better sense of which types of problems they fail on with astounding levels of stupidity.
chankstein38•5m ago
Agreed. I feel like, in the case of GPT models, 4o was better in most ways than 5 has been. I'm not seeing increases in quality of anything between the two; 5 feels like a major letdown, honestly. I am constantly reminding it what we're doing lol
EcommerceFlow•54m ago
This has been the case for a while. Attempting to code API connections via Vibe-Coding will leave you pulling your hair out if you don't take the time to scrape all relevant documentation and include said documentation in the prompt. This is the case whether it's major APIs like Shopify, or more niche ones like warehousing software (Cin7 or something similar).

The context pipeline is a major problem in other fields as well, not just programming. In healthcare, the next billion-dollar startup will likely be the one that cracks the personal health pipeline, enabling people to chat with GPT-6 PRO while seamlessly bringing their entire lifetime of health context into every conversation.
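
A sketch of the "scrape the docs, paste them into the prompt" step (requests and BeautifulSoup are real libraries; the URL and the size cap are illustrative):

  import requests
  from bs4 import BeautifulSoup

  def docs_as_context(url: str, max_chars: int = 40_000) -> str:
      html = requests.get(url, timeout=30).text
      text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
      return text[:max_chars]   # crude cap so it still fits in the context window

  prompt = (
      "API documentation:\n"
      + docs_as_context("https://shopify.dev/docs/api/admin-rest")
      + "\n\nTask: write a client that creates a draft order with these line items..."
  )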

_pdp_•53m ago
also, we are one prompt away from achieving AGI...
simonw•52m ago
"And yet, coding agents are nowhere near capable of replacing software developers. Why is that?"

Because you will always need a specialist to drive these tools. You need someone who understands the landscape of software - what's possible, what's not possible, how to select and evaluate the right approach to solve a problem, how to turn messy human needs into unambiguous requirements, how to verify that the produced software actually works.

Provided software developers can grow their field of experience to cover QA and aspects of product management - and learn to effectively use this new breed of coding agents - they'll be just fine.

lerp-io•51m ago
context and memory have been a bottleneck from like day one
alastairr•50m ago
I agree, and I think the intent behind the code is the most important part of the missing context. You can sometimes infer intent from code, but usually code is a snapshot of an expression of an evolving intent.
AnotherGoodName•26m ago
I've started making sure my codebase is "LLM compatible". This means everything has documentation, and the reasons for doing things a certain way and not another are documented in code. Funnily enough, I do this documentation work with LLMs.

Eg. "Refactor this large file into meaningful smaller components where appropriate and add code documentation on what each small component is intended to achieve." The LLM can usually handle this well (with some oversight of course). I also have instructions to document each change and why in code in the LLMs instructions.md

If the LLM does create a regression i also ask the LLM to add code documentation in the code to avoid future regressions, "Important: do not do X here as it will break Y" which again seems to help since the LLM will see that next time right there in the portion of code where it's important.

None of this verbosity in the code itself is harmful to human readers either, which is nice. The end result is that the codebase becomes much easier for LLMs to work with.

I suspect LLM compatibility may become a metric we measure codebases by in the future as we learn more and more about how to work with them. Right now LLMs themselves often produce code that is poorly LLM-compatible, but with some more documentation in the code itself they can do much better.

999900000999•48m ago
I believe if you create something like a task manager for the coding agents, think something hosted on the web like Jira, you can work around this.

I started writing a solution, but to be honest I probably need the help of someone who's more experienced.

Although to be honest, I'm sure someone with VC money is already working on this.

revel•47m ago
This is one cause, but another is that agents are mostly trained using the same sets of problems. There are only so many open source projects that can be used for training (i.e. benchmarks). There's huge oversampling of a subset of projects like pandas and nothing at all for proprietary datasets. This is a huge problem!

If you want your agent to be really good at working with dates in a functional way or know how to deal with the metric system (as examples), then you need to train on those problems, probably using RFT. The other challenge is that even if you have this problem set in testable fashion, running it at scale is hard. Some benchmarks have 20k+ test cases, each of which can take well over an hour to run. If you ran each test case sequentially it would take over 2 years to complete.

Right now the only company I'm aware of that lets you do that at scale is runloop (disclaimer, I work there).

cuttothechase•42m ago
It is pretty clear that long-horizon tasks are difficult for coding agents, and that is a fundamental limitation of how probabilistic word generation works, whether with a transformer or any other architecture. The errors propagate, multiply, and become open-ended.

However, the limitation can be masked using layering techniques where the output of one agent is fed as input to another, using consensus for verification or other techniques to the nth degree to minimize errors. But this is a bit like the story of the boy with his finger in the dike. Yes, you can spawn as many boys as you like, but there is an associated cost that keeps growing and won't narrow down.

It has nothing to do with contexts or window of focus or any other human centric metric. This is what the architecture is supposed to do and it does so perfectly.

davedx•42m ago
> It needs to understand product and business requirements

Yeah this is the really big one - kind of buried the lede a little there :)

Understanding product and business requirements traditionally means communicating (either via docs and specs or directly with humans) with a bunch of people. One of the differences between a junior and senior is being able to read between the lines of a github or jira issue and know that more information needs to be teased out from… somewhere (most likely someone).

I’ve noticed that when working with AI lately I often explicitly tell them “if you need more information or context ask me before writing code”, or variations thereof. Because LLMs, like less experienced engineers, tend to think the only task is to start writing code immediately.

It will get solved though, there’s no magic in it, and LLMs are well equipped by design to communicate!

KeatonDunsford•38m ago
Here's a project I've been working on for the past 2 weeks. Only yesterday did I unify everything entirely while in Cursor Claude-4-Sonnet-1M MAX mode, and I am pretty astounded with the results. Cursor's usage dashboard tells me many of my prompts are 700k-1M context for around $0.60-$0.90 USD each; it adds up fast, but wow, it's extraordinary

https://github.com/foolsgoldtoshi-star/foolsgoldtoshi-star-p...

_ _ kae3g

wrs•29m ago
Replace “coding agent” with “new developer on the team” and this article could be from anytime in the last 50 years. The thing is, a coding agent acts like a newly-arrived developer every time you start it.
kypro•26m ago
I'm hitting 'x' to doubt hard on this one.

The ICPC is a short (5 hours) timed contest with multiple problems, in which contestants are not allowed to use the internet.

The reason most don't get a perfect score isn't that the tasks themselves are unreasonably difficult, but that they're difficult enough that 5 hours isn't a lot of time to solve so many problems. Additionally, they often require a decent amount of math / comp-sci knowledge, so if you don't have the necessary knowledge you probably won't be able to complete them.

So to get a good score you need lots of math & comp-sci knowledge + you need to be a really quick coder.

Basically the contest is perfect for LLMs because they have a ton of math and comp-sci knowledge, they can spit out code at superhuman speeds, and the problems themselves are fairly small (they take a human maybe 15 minutes to an hour to complete).

Who knows, maybe OP is right and LLMs are smart enough to be super human coders if they just had the right context, but I don't think this example proves their point well at all. These are exactly the types of problems you would expect a supercharged auto-complete would excel at.

ISL•19m ago
If not now, soon, the bottleneck will be responsibility. Where errors in code have real-world impacts, "the agentic system wrote a bug" won't cut it for those with damages.

As these tools make it possible for a single person to do more, it will become increasingly likely that society will be exposed to greater risks than that single person's (or small company's) assets can cover.

These tools already accelerate development enough that those people who direct the tools can no longer state with credibility that they've personally reviewed the code/behavior with reasonable coverage.

It'll take over-extensions of the capability of these tools, of course, before society really notices, but it remains my belief that until the tools themselves can be held liable for the quality of their output, responsibility will become the ultimate bottleneck for their development.

jimbohn•12m ago
I agree. My speed at reviewing tokens <<<< the LLM's speed at producing them. Perhaps an output -> compile -> test loop will slow things down, but will we ever get to a "no review needed" point?

And who writes the tests?

binary132•17m ago
In my opinion human beings also do not have unlimited cognitive context. When a person sits down to modify a codebase, they do not read every file in the codebase. Instead they rely on a combination of working memory and documentation to build the high-level and detailed context required to understand the particular components they are modifying or extending, and they make use of abstraction to simplify the context they need to build. The correct design of a coding LLM would require a similar approach to be effective.
kordlessagain•15m ago
No, it's not. The limitation is believing a human can define how the agent should recall things. Instead, build tools for the agent to store and retrieve context, and then give it a tool to refine and use that recall in whatever way it sees as best fitting the objective.

Humans gatekeep, especially in the tech industry, and that is exactly what will limit us improving AI over time. It will only be when we turn over its choices to it that we move beyond all this bullshit.

maherbeg•12m ago
It's both context and memory. If an LLM could keep the entire git history in memory, and each of those git commits had enough context, it could take a new feature and understand the context in which it should live by looking up the history of the feature area in its memory.
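
A cheap approximation of that, as a sketch (git is real; the path and prompt are illustrative): pull the commit history for the feature area and hand it to the model as context for the new change.

  import subprocess

  def feature_history(path: str, max_commits: int = 50) -> str:
      # Commit messages plus patches for the area the new feature touches.
      return subprocess.run(
          ["git", "log", f"-{max_commits}", "--patch", "--", path],
          capture_output=True, text=True, check=True).stdout

  prompt = ("History of the billing module:\n" + feature_history("src/billing/") +
            "\n\nAdd support for prorated refunds, consistent with past changes here.")
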
keeda•6m ago
While this is sort of true, remember: it's not the size of the context window that matters, it's how you use it.

You need to have the right things in the context, irrelevant stuff is not just wasteful, it is increasingly likely to cause errors. It has been shown a few times that as the context window grows, performance drops.

Heretical I know, but I find that thinking like a human goes a long way to working with AI.

Let's take the example of large migrations. You're not going to load the whole codebase in your brain and figure out what changes to make and then vomit them out into a huge PR. You're going to do it bit by bit, looking up relevant files, making changes to logically-related bits of code, and putting out a PR for each changelist.

This exactly what tools should do as well. At $PAST_JOB my team built a tool based on OpenRewrite (LLMs were just coming up) for large-scale multi-repo migrations and the centerpiece was our internal codesearch tool. Migrations were expressed as a codesearch query + codemod "recipe"; you can imagine how that worked.

That would be the best way to use AI for large-scale changes as well. Find the right snippets of code (and documentation!), load each one into the context of an agent in multiple independent tasks.
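
A sketch of that query-plus-recipe shape (code_search() and run_agent() are stand-ins; the point is one small, independent agent task per matched file):

  from pathlib import Path

  def code_search(query: str) -> list[str]:
      # Stand-in for an internal codesearch service; here, a naive grep over .py files.
      return [str(p) for p in Path(".").rglob("*.py")
              if query in p.read_text(errors="ignore")]

  def run_agent(prompt: str) -> str:
      # Stand-in for an LLM agent call that returns the rewritten file.
      # Returning the input's last section keeps the sketch a runnable no-op.
      return prompt.rsplit("\n\n", 1)[-1]

  MIGRATION_DOC = Path("docs/date-handling-migration.md").read_text()  # illustrative

  for path in code_search("datetime.utcnow("):
      source = Path(path).read_text(errors="ignore")
      new_source = run_agent("Apply the migration described below to this file only.\n\n"
                             + MIGRATION_DOC + "\n\nFile: " + path + "\n\n" + source)
      Path(path).write_text(new_source)   # one small, reviewable changelist per file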

Caveat: as I understand it, this was the premise of SourceGraph's earliest forays into AI-assisted coding, but I recall one of their engineers mentioning that this turned out to be much trickier than expected. (This was a year+ back, so eons ago in LLM progress time.)

Just hypothesizing here, but it may have been that the LSIF format does not provide sufficient context. Another company in this space is Moderne (the creators of OpenRewrite) that have a much more comprehensive view of the codebase, and I hear they're having better success with large LLM-based migrations.