frontpage.

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•50s ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
1•Brajeshwar•5m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
2•Brajeshwar•5m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•5m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•8m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•11m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•12m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•12m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•13m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•18m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•22m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•27m ago•1 comment

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•28m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•29m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•35m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•38m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•39m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•40m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•41m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•41m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•42m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•42m ago•1 comment

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•46m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•46m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•47m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•47m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•56m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•56m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•58m ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
2•surprisetalk•58m ago•0 comments

How to Fix Your Context

https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.html
93•itzlambda•5mo ago

Comments

profsummergig•5mo ago
This seems to be an important article.

However, it uses various terms I'm not sure of the definitions for.

E.g. the word "Tool".

How can I learn about the definitions of these words? Will reading prior articles in the series help? Please advise.

nativeit•5mo ago
tool (n)

One who lacks the mental capacity to know he is being used. A fool. A cretin. Characterized by low intelligence and/or self-esteem.

Disclaimer: I only cite definitions from Urban Dictionary, but I remain firmly convinced they are correct definitions in context.

1123581321•5mo ago
You should make a Chrome plugin that fills in Urban Dictionary definitions of first names while you're on LinkedIn.

jcheng•5mo ago
Here's my attempt at explaining tool calling:

https://youtu.be/owDd1CJ17uQ?si=Z2bldI8IssG7rGON&t=1330

It's an _incredibly_ important concept to understand if you have even a passing interest in LLMs. You need to understand it if you want to have any kind of mental model for how LLM-powered agents are even possible.

profsummergig•5mo ago
Thank you, I watched it. The key takeaway I got was that the client (the browser, I suppose) is what actually invokes the tools. The user hands control of these tools to the AI, and the tool use happens in the background, so it can look to the user as if the AI is the one using the tools itself.

tptacek•5mo ago
Ordinarily:

   you> what's going on?
   > It's going great --- how can I help you today?
Tool calls:

   you> [json blob of available "tools": "ls", "grep", "cat"]
   you> what's going on?
   > [json blob selecting "ls"]
   (you) presumably run "ls"
   you> [json blob of "ls" output]
   > [json blob selecting "cat foo.c"]
   (you) dump "foo.c"
   you> [json blob of "cat foo.c"]
   > I can see that we're in a C project that does XYZ...
The key thing: tools are just a message/conversation abstraction that LLMs are trained to adhere to. They know to spit out a standardized "tool call" JSON blob, and they know to carry on multi-round conversations with whatever set of "tools" is made available to them, building up context to answer questions with.

That's the whole thing.
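
To make that concrete, here's a rough sketch of the client-side loop in Python. Everything here is illustrative: fake_llm stands in for a real chat-completion call, and the message and tool-call shapes are made up rather than matching any particular vendor's API.

    import json
    import subprocess

    # The "tools" the client is willing to run on the model's behalf.
    TOOLS = {
        "ls":  lambda args: subprocess.run(["ls", *args], capture_output=True, text=True).stdout,
        "cat": lambda args: subprocess.run(["cat", *args], capture_output=True, text=True).stdout,
    }

    def fake_llm(messages):
        # Stand-in for the real model call. A real model sees the tool schemas
        # and decides what to emit; this stub asks for "ls" once, then answers.
        if not any(m["role"] == "tool" for m in messages):
            return {"tool_call": {"name": "ls", "args": []}}
        return {"content": "Looks like a project directory; I can see the files now."}

    def run(prompt):
        messages = [{"role": "user", "content": prompt}]
        while True:
            reply = fake_llm(messages)
            if "tool_call" not in reply:          # plain answer, we're done
                return reply["content"]
            call = reply["tool_call"]
            output = TOOLS[call["name"]](call["args"])  # the client, not the model, runs it
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "tool", "content": output})

    print(run("what's going on?"))

The model never executes anything itself; it only emits JSON, and everything with side effects happens on the client.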

trjordan•5mo ago
This is all true, and we've prototyped a number of these things at my current startup. You need to be pretty deliberate about implementing them.

For a counter-example, consider Claude Code:

- 1 long context window, with (at most) 1 sub-agent

- same tools available at all times, including to the sub-agent (except spawning a sub-sub-agent)

- Full context stays in conversation, until you hit the context window limit. Compaction is automatic but extremely expensive. Quality absolutely takes a dive until everything is re-established.

- Deterministic lookup of content. Claude reads files with tools rather than pulling in random chunks via RAG cosine similarity.

I could go on. In my experience, if you're going to use these techniques, 1) maybe don't, and 2) turn the determinism up to 11. Get really specific about _how_ you're going to use them, and why, in each specific case.

For example, we're working on code migrations [0]. We have a tool that reads changelogs, migration guides, and OSS source. Those can be verbose, so they blow the context window even on 200k models. But we're not just randomly deleting things from the "plan my migration" context; we're exposing a tool that deliberately lets the model pull out the breaking changes. This is "Context Summarization," but before using it, we had to figure out that _those_ bits were breaking the context, and only _then_ summarize them. All our attempts at generically pre-summarizing content just resulted in poor performance, because we were hiding information from the agent.

[0] https://tern.sh
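
As a toy illustration of the difference from generic summarization, a targeted extraction tool looks something like this (regex-driven and purely hypothetical, not our actual implementation, which lets the model do the extraction as described above):

    import re

    HEADING = re.compile(r"^\s*#+\s")
    BREAKING = re.compile(r"breaking|deprecat|removed|migrat", re.I)

    def extract_breaking_changes(changelog: str) -> str:
        # Keep only the sections that look like breaking changes, plus any
        # stray lines that mention one, instead of summarizing (and possibly
        # losing) the whole document.
        kept, keeping = [], False
        for line in changelog.splitlines():
            if HEADING.match(line):
                keeping = bool(BREAKING.search(line))
            if keeping or BREAKING.search(line):
                kept.append(line)
        return "\n".join(kept)

    sample = """## 3.0.0
    ### Breaking changes
    - connect() now requires a timeout argument.
    ### Features
    - Added retry support.
    """
    print(extract_breaking_changes(sample))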

jasonjmcghee•5mo ago
What do you mean re Claude Code, "at most 1 sub-agent"?

trjordan•5mo ago
It only spawns a single sub-agent (called Task iirc), which can do everything Claude Code can, except call Task().

This is different from a lot of the context-preserving sub-agents, which have fully different toolsets and prompts. It's much more general.

CuriouslyC•5mo ago
This isn't going to be much of a problem for long. I'm wrapping up an agent context manager that gives effectively infinite context while producing ~77% better results than a naive vector+BM25 baseline on my benchmark suite.

rl3•5mo ago
May I ask the rationale for writing your own? Were you using an existing tool that didn't quite fit your needs?

This is an itch I've been wanting to scratch myself, but the space has so many entrants that it's hard to justify the time investment.

CuriouslyC•5mo ago
Existing off-the-shelf IR tools are mid, more recent research is often not productionized, and there are a lot of assumptions that hold for agentic context (at least in the coding realm, which is the one that matters) that you can take advantage of to push performance.

That plus babysitting Claude Code's context is annoying as hell.

rl3•5mo ago
Thanks.

>That plus babysitting Claude Code's context is annoying as hell.

It's crazy to me that—last I checked—its context strategy was basically tool use of ls and cat. Despite the breathtaking amount of engineering resources major AI companies have, they're eschewing dense RAG setups for dirt simple tool calls.

To their credit it was good enough to fuel Claude Code's spectacular success, and is fine for most use cases, but it really sucks not having proper RAG when you need it.

On the bright side, now that MCP has taken off I imagine one can just provide their preferred RAG setup as a tool call.
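
Something along those lines, for example (a sketch that assumes the FastMCP helper from the official Python MCP SDK; search_repo and its naive keyword scoring are just stand-ins for whatever embedding or BM25 index you actually prefer):

    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("rag-search")

    @mcp.tool()
    def search_repo(query: str, k: int = 5) -> str:
        """Return the paths of the k files that best match the query."""
        terms = set(query.lower().split())
        scored = []
        for path in Path(".").rglob("*.py"):
            text = path.read_text(errors="ignore").lower()
            score = sum(text.count(t) for t in terms)
            if score:
                scored.append((score, str(path)))
        scored.sort(reverse=True)
        return "\n".join(p for _, p in scored[:k]) or "no matches"

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio; point your MCP client at it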

CuriouslyC•5mo ago
You can, but my tool actually handles the raw chat context. So you can have millions of tokens in context, and the actual message that gets produced for the LLM is an optimized distillate, re-ordered to take into account LLM memory patterns. RAG tools are mostly optimized for QA anyhow, which has dubious carryover to coding tasks.

olejorgenb•5mo ago
> ... re-ordered to take into account LLM memory patterns.

If I understand you correctly, doesn't this break prefix KV caching?

CuriouslyC•5mo ago
It is done immediately before the LLM call, transforming the message history for the API call.

This does reduce the context cache hit rate a bit, but I'm cache aware so I try to avoid repacking the early parts if I can help it. The tradeoff is 100% worth it though.
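
Roughly, the repacking has this shape (a simplified sketch with made-up names, not the actual tool): keep an untouched prefix so the provider's prefix cache still hits, then rank and trim only the later messages.

    def repack(messages, score, stable_prefix=20, budget=100):
        # Preserve the earliest messages verbatim so the prefix KV cache
        # still hits, then keep only the highest-scoring of the remaining
        # messages, in their original order.
        head, tail = messages[:stable_prefix], messages[stable_prefix:]
        room = max(budget - len(head), 0)
        ranked = sorted(tail, key=score, reverse=True)[:room]
        keep = {id(m) for m in ranked}
        return head + [m for m in tail if id(m) in keep]

Here score is whatever relevance function you like (recency, similarity to the current task, and so on).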

psadri•5mo ago
I'm curious about this project (I'm working on something similar). Any way to get in contact with you?

CuriouslyC•5mo ago
You can click my spam-protected email links on https://sibylline.dev; those should be working now. Any CTA will get me.

barbazoo•5mo ago
> When prompting DeepSeek-v3, the team found that selecting the right tools becomes critical when you have more than 30 tools. Above 30, the descriptions of the tools begin to overlap, creating confusion. Beyond 100 tools, the model was virtually guaranteed to fail their test. Using RAG techniques to select less than 30 tools yielded dramatically shorter prompts and resulted in as much as 3x better tool selection accuracy.

> For smaller models, the problems begin long before we hit 30 tools. One paper we touched on last post, “Less is More,” demonstrated that Llama 3.1 8b fails a benchmark when given 46 tools, but succeeds when given only 19 tools. The issue is context confusion, not context window limitations.

A high number of tools is a bit of a "smell" to me, and often makes me wonder whether the agent has too much responsibility. A bit like a method with so many parameters that it can do almost anything.

Have folks had success with agents like that? I've found the fewer tools the better, e.g. <10 as a ballpark.
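
For concreteness, the "RAG over tool descriptions" idea from the quote is roughly this (a sketch; select_tools and the word-overlap scoring are stand-ins for real embeddings):

    def select_tools(request: str, tools: dict[str, str], k: int = 10) -> list[str]:
        # Score each tool description against the request and expose only
        # the top k, instead of dumping the whole registry into the prompt.
        req = set(request.lower().split())
        def overlap(name: str) -> int:
            return len(req & set(tools[name].lower().split()))
        return sorted(tools, key=overlap, reverse=True)[:k]

    registry = {
        "ls":   "list files in a directory",
        "cat":  "print the contents of a file",
        "grep": "search file contents for a pattern",
        # ...dozens more in a real registry
    }
    print(select_tools("print the contents of a file called main.py", registry, k=2))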

knewter•5mo ago
We have success with 39, but we're introducing more focused agents and a smart router because we see the writing on the wall, among other things (benefits).