
Optimizing Tool Selection for LLM Workflows with Differentiable Programming

https://viksit.substack.com/p/optimizing-tool-selection-for-llm
122•viksit•7mo ago

Comments

viksit•7mo ago
I was experimenting with how local, learnable routers can reduce token overhead and lower costs, and decided to publish a post about it. The main goal is to delegate tool calls to a PyTorch-based learner, with examples of how to integrate this into a DSPy pipeline. Feedback welcome!
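
here's a minimal sketch of the shape of the idea (illustrative names and dimensions, not the exact code from the post): a small GRU encodes the query, a linear head scores a fixed tool set, and the whole thing trains as an ordinary classifier on (query, tool) pairs.

```
# Minimal sketch of a local, learnable tool router (illustrative, not the
# post's exact code).
import torch
import torch.nn as nn

TOOLS = ["search", "calculator", "calendar"]  # toy tool label set

class ToolRouter(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_tools=len(TOOLS)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tools)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        x = self.emb(token_ids)
        _, h = self.rnn(x)                 # h: (1, batch, hidden)
        return self.head(h.squeeze(0))     # logits over the tool set

router = ToolRouter(vocab_size=10_000)
opt = torch.optim.Adam(router.parameters(), lr=1e-3)

# stand-ins for tokenized queries and gold tool labels
tokens = torch.randint(0, 10_000, (8, 16))
labels = torch.randint(0, len(TOOLS), (8,))

loss = nn.functional.cross_entropy(router(tokens), labels)
loss.backward()
opt.step()
```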
krohling•7mo ago
I think this is a creative approach. I wonder how the success rates for that little RNN compare to the success rates of the primary LLM, especially for complex queries or complex tool calls. At some point you have to scale that network up large enough to get better results. Eventually you've come back around and you might as well use an LLM. I think a similar approach with potentially better results (depends on the application) could be accomplished by using that same dataset to finetune a small language model. It'd be interesting to see some success rate comparisons.
viksit•7mo ago
thank you, appreciate the comment! that's a great point -- as I'm developing this intuition, I'm designing an eval that compares the OpenAI example there, a tool call using a simple RNN, and one that uses an encoder model. would love more feedback (on blog / X etc) when I post.
joe_the_user•7mo ago
My question is whether you have managed to make this work, i.e. perform a specific complex task, in some real-world situation.
viksit•7mo ago
great q. that's coming up as a future post in the series.
ctxc•7mo ago
Nit - code screenshots are a PITA to read on mobile!
viksit•7mo ago
ty for the feedback, yes, balancing bad code blocks on substack vs making it look pretty lol. I'll post code next time.
zitterbewegung•7mo ago
Can you put all of the code into a gist or something?
viksit•7mo ago
yes apologies, the code rendering in substack wasn't great, but I'll put this in a gist!
bGl2YW5j•7mo ago
Creative. You’ve given me some ideas. Thanks!
rybosome•7mo ago
Thanks for the informative and inspiring post! This is definitely cool, and I can imagine very useful.

However I do want to mention that the “recommended” flow these days isn’t to separate out a tool request in the way you have. E.g. instead of asking an LLM to route to a tool, extracting that, running the tool, passing output back to the LLM, etc., you simply pass the tool definitions, prompt, and structured output expectations, and let the LLM (and your caller library) manage the tool use loop.

That’s how these modern LLMs are trained in post-training, and so I suspect it’s likely you’ll get different (and potentially worse?) results in trying to subvert this with a small, local model.

Letting the LLM do this comes with all the downsides you mentioned, but it is also more likely to be in-distribution, and it’s easier to compose multiple tool calls.

Anyway, thanks for sharing! I’d love to see evals on a task where it compares the result when an LLM is involved in tool selection versus when it is handed tool output only - if I’m wrong about quality degradation then there’s a lot to like about your local tool routing.
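
For concreteness, the loop I mean looks roughly like this (a sketch against the OpenAI Python client; the tool name, schema, and stubbed result are made up):

```
# Sketch of the standard "let the LLM drive the tool loop" pattern.
import json
from openai import OpenAI

client = OpenAI()
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
while True:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        break                      # model answered directly; loop ends
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = {"temp_c": 21}    # stand-in for actually running the tool
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
print(msg.content)
```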

viksit•7mo ago
great point, appreciate the comment. totally agree with your framing, though i think there’s still a gap in how tool use is handled today.

quick note: it doesn’t have to be an rnn. i’ve got a follow-up example coming that uses a transformer-style ToolController with self attention, more expressive routing, etc.

but here's the thing: when you rely on few-shot bootstrapping the LLM, you never end up updating the model's priors. even after 100k tool calls, you're still stuck in the same polluted context window, and it's all stateless.

this gets worse fast with more than 3–4 tool calls, especially when there’s branching logic (e.g., if api1 > 5, go left, else right).

what this approach offers is backprop through tool calls: you can tune prompts and update priors across the full workflow, end to end. trying to develop this intuition a bit more, and would love feedback.

thanks for the suggestion on the eval — will post that comparison soon.

rybosome•7mo ago
That’s cool, I’d love to see the advanced ToolController when it’s available!

Great points about not updating priors. I also thought about it a bit more and realized that there’s a way you can largely mitigate the out-of-distribution inference requests after local tool selection, if you wanted to.

The tool use loop in an inference framework builds up history of each interaction and sends that along with each subsequent request. You could create “synthetic history”, where you send the LLM history containing the prompt, your local tool selection masquerading as though the LLM generated it, and the tool response. This would be in-distribution but still rely on your local tool routing.
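
Something like this, roughly (message shapes follow the OpenAI chat format; the call id, tool name, and values are fabricated for illustration):

```
# Sketch of "synthetic history": the locally routed tool call is written into
# the message history as if the model had emitted it, so the follow-up request
# stays in-distribution.
import json

def synthetic_history(user_prompt, tool_name, tool_args, tool_result):
    call_id = "call_local_0"  # fabricated id for the masqueraded call
    return [
        {"role": "user", "content": user_prompt},
        {   # pretend the assistant chose the tool (it was the local router)
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": call_id,
                "type": "function",
                "function": {"name": tool_name,
                             "arguments": json.dumps(tool_args)},
            }],
        },
        {"role": "tool", "tool_call_id": call_id,
         "content": json.dumps(tool_result)},
    ]

messages = synthetic_history(
    "What's the weather in Lisbon?", "get_weather",
    {"city": "Lisbon"}, {"temp_c": 21})
# send `messages` to the LLM so it only has to produce the final answer
```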

If this works well enough, then I think your approach is very powerful once you’ve decided on a task and set of tools and are able to commit to training on that. Definitely want to try this myself now.

Looking forward to seeing more! I take it your substack is the best place to follow along?

viksit•7mo ago
this is really interesting! yes, it's my substack.

also, if you're down, would love to connect and talk more about what use cases / techniques you're using. I'm @viksit on X, DMs if that works.

Garlef•7mo ago
Is selection really the issue?

You'd still need to figure out what payload to give to the tool based on your context.

But I guess depending on your business case it might be worth it. It's not something I'd do from the beginning, though.

phanimahesh•7mo ago
This is a bigger problem than it looks like at first glance. For use cases where LLM + tool calls make more sense compared to, say, LLM-assisted codegen, figuring out the tool arguments is nontrivial. Where it is relatively easy, I think codegen is a better option w.r.t. amortised running costs.
viksit•7mo ago
this is a great point, ty.

in my mind the biggest difference is llms that are invoked during a workflow, and llms that are invoked when _creating_ code (codegen).

for the former, tools can stay well defined while they are small in number, but at some point the system needs to examine a library of tools, understand how to call and integrate them, and at its peak even create new tools to talk to systems not already present in that library (codegen).

viksit•7mo ago
it’s not just about selection. say you’ve got 100k tool calls — in the current hosted llm setup, you don’t actually learn anything new about your data to improve future tool accuracy.

this gets worse when you're chaining 3–4+ tools. context gets noisy, priors stay frozen, and there's prompt soup.

my intuition here is: you can learn the tool routing and the llm prompts before and after the call. (can always swap out the rnn for a more expressive encoder model and backprop through the whole thing).

super useful when you’re building complex workflows -- it gives you a way to learn the full pipeline, not just guess and hope.

tomlue•7mo ago
you could also propagate loss into the tools themselves.
arthurcolle•7mo ago
huge research area
viksit•7mo ago
this is my goal :) appreciate the feedback.
viksit•7mo ago
+1 - you can propagate the loss for a workflow across prompts + tools, which would make it much easier to build resilient workflows. or "agents", as everyone calls them now ;)
shusaku•7mo ago
Yes, I think once you’ve got an LLM in the loop it’s easy to be lazy and just use it to make all decisions. But it’s good to step back and ask whether there is a cheaper way; even some hardcoded logic can do the job.
j45•7mo ago
Very true. Asking a non-deterministic system to make deterministic decisions is also harder for it.

Right tool for the step to the right extent.

Feels like soft skills for software development.

bGl2YW5j•7mo ago
I don’t think the problem is “how to optimise tool selection for the LLM”. I think the real problem is using an LLM to do tool selection at all. This is control flow, and I believe it should be handled with hardcoded rules and/or separation of concerns.

If LLMs could handle determinism better, I’d say having a single chat-based entrypoint into a plethora of services makes sense. But as they stand, it doesn’t make sense. Simpler control flow and constraining the number and type of downstream services that sit behind a single interface I think is the way to go.

That said, I agree we should keep the ambition to move to the one size fits all approach.

viksit•7mo ago
+1 on the control flow point.

I think of an llm as a differentiable interpreter of a program. it should do decision making (tool selection, argument routing), branching logic via weights + gates etc.

so as a differentiable state machine:

- each state == a stage in your workflow

- transitions == tool calls

- encode this as a rnn or graph

and learn transitions and actions via supervision or RL
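
rough sketch of that framing (illustrative dimensions and names; a soft transition over tool-output embeddings keeps each step differentiable):

```
# Sketch of a differentiable state machine: a GRU cell carries workflow state,
# a linear head scores transitions (tool calls), and the soft mixture of
# tool-output embeddings keeps every step differentiable.
import torch
import torch.nn as nn

N_TOOLS, D = 4, 64

class DiffStateMachine(nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(D, D)              # state update per stage
        self.transition = nn.Linear(D, N_TOOLS)   # scores over tool transitions

    def forward(self, query_emb, tool_out_embs, n_steps=3):
        h = query_emb                              # initial state from query
        step_logits = []
        for _ in range(n_steps):
            logits = self.transition(h)            # which tool to call next
            step_logits.append(logits)
            probs = torch.softmax(logits, dim=-1)  # soft transition choice
            x = probs @ tool_out_embs              # mix tool-output embeddings
            h = self.cell(x, h)                    # advance the state machine
        return torch.stack(step_logits, dim=1)     # (batch, steps, n_tools)

machine = DiffStateMachine()
query = torch.randn(2, D)            # stand-in query embeddings
tool_embs = torch.randn(N_TOOLS, D)  # stand-in tool-output embeddings
logits = machine(query, tool_embs)   # supervise with per-step tool labels
```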

apsears•7mo ago
I have been thinking a lot about tool selection lately, and something that I keep repeating to myself is: "the LLM has intuition, but I have data".

I guess that applies when you're not able to fine-tune the LLM you're using. Presumably Anthropic has a lot of data too.

viksit•7mo ago
+1 - the biggest issue is not being able to fine-tune the llm to learn the specifics of how to make a tool call better over time, which is what an approach like this can bring to the table.
nphard85•7mo ago
Very interesting. How does this approach work for complex agentic workflows where the LLM is expected to orchestrate across multiple tools (such as when using MCP)? Or is this mainly for simple cases like the ones presented in the blog post?
lgas•7mo ago
The work described appears as if it would handle a complex set of multiple tools just fine, but you do train the controller on a specific tool set, so you would presumably need to train (or at least something like "fine tune") a controller for each toolset you wanted to use.
viksit•7mo ago
for sure. there's a path here where I think we ought to be able to learn multiple tool calls and prompts together from real-world data. investigating that next.
viksit•7mo ago
+1 thanks for mentioning MCP!

re: different tools (apis vs mcps). in my mind, there should be no real difference in what kind of tool is called at the moment, since I model this as a softmax over a label set of tools.

that said, an idea I want to investigate is whether tools can live in a learned embedding space, where selection isn’t a softmax over discrete labels but a nearest-neighbor or attention mechanism over continuous vectors.
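
rough sketch of that direction (illustrative; assumes each tool already has a learned or encoded embedding vector):

```
# Sketch: score a query against tool embeddings instead of a fixed softmax
# head, so tools can be added without retraining the classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingToolSelector(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.q_proj = nn.Linear(d, d)  # project the query into tool space

    def forward(self, query_emb, tool_embs):
        q = self.q_proj(query_emb)                          # (batch, d)
        # attention-style scaled dot product against every tool vector
        return q @ tool_embs.T / tool_embs.shape[-1] ** 0.5

selector = EmbeddingToolSelector()
query = torch.randn(2, 64)        # stand-in query embeddings
tool_embs = torch.randn(10, 64)   # one vector per tool
scores = selector(query, tool_embs)

choice = scores.argmax(-1)        # nearest tool at inference time
loss = F.cross_entropy(scores, torch.tensor([3, 7]))  # train like a classifier
```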

this is the intuition I'm developing as we speak and in some of my other comments on this thread (see differentiable state machine comment).

viksit•7mo ago
(author here, put the code in a gist here for reference)

https://gist.github.com/viksit/c67d1d960c4cec89488290496defb...

jaksa•7mo ago
Figuring out which tool to call is trivial; passing the correct arguments is the difficult and error-prone part. Smarter agents would even use a varying number of tool calls until they get the desired response.
viksit•7mo ago
there's a world where the model could infer that as well!
crazylogger•7mo ago
I can see this makes sense for simple { user_query -> search -> llm_answer } usage, where tool use is only a means to retrieve background info.

For complex real-world agent flows though, tool use is often the only thing that the LLM is expected to do. Like in a coding agent:

```
User: Develop a program to ...
Agent: Bash("touch main.py") > 0, ""
Agent: Edit("main.py", initial_patch) > 0, ""
Agent: Bash("python main.py") > 1, "SyntaxError: ..."
Agent: Edit("main.py", fix_patch) > 0, ""
Agent: Bash("python main.py") > 0, "OK"
Agent: FINISH
```

Here, tool selection (+ writing the arguments) is actually the whole job. It's also easy to see that if you omit even one of the tool use records in the middle, the agent wouldn't work at all.

pcwelder•7mo ago
You've essentially just trained your own LM instead of using a pretrained large LM.

Speaking generically -- any place in your workflow where you feel the task is not hard, you can use a smaller and cheaper LM.

Smaller LMs come with accuracy reduction, particularly in tail cases. So in the real world this doesn't work out.

Also, is the Gumbel-softmax usage intentional? It looks like a straightforward classifier that just needs regular softmax.
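
For reference, the practical difference: plain softmax gives a dense distribution, which is all a classifier trained with cross-entropy needs; Gumbel-softmax draws an approximately one-hot yet differentiable sample, which matters only if downstream computation must consume a discrete choice.

```
# Plain softmax vs. Gumbel-softmax on the same logits (illustrative).
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4, requires_grad=True)

probs = F.softmax(logits, dim=-1)                      # dense distribution
sample = F.gumbel_softmax(logits, tau=1.0, hard=True)  # ~one-hot, differentiable
```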

digitcatphd•7mo ago
this is smart, but I think NVIDIA's paper on fine-tuning small language models presents a slightly more efficient approach
viksit•7mo ago
would you have a link?
digitcatphd•7mo ago
https://arxiv.org/pdf/2506.02153
bigmadshoe•7mo ago
This is super cool!

From the article:

  Each LLM call incurs latency, cost, and token overhead. More subtly, it compounds context:
  every step includes not only the original query, but intermediate outputs and scratchpad logic from earlier prompts. 
  This creates a growing burden on both inference and model performance.

I was working with agents over a year ago before the common workflows had really been set in stone. At that time we were heavily doctoring the context to give a very streamlined representation of what had occurred during a given run to the LLM. Is this not standard practice?
viksit•7mo ago
yes, AFAIK right now there are no easy ways of "slimming" context, because no one knows what it should contain or how to do it.