
Show HN: Badge that shows how well your codebase fits in an LLM's context window

https://github.com/qwibitai/nanoclaw/tree/main/repo-tokens
42•jimminyx•2h ago
Small codebases were always a good thing. With coding agents, there's now a huge advantage to having a codebase small enough that an agent can hold the full thing in context.

Repo Tokens is a GitHub Action that counts your codebase's size in tokens (using tiktoken) and updates a badge in your README. The badge color reflects what percentage of an LLM's context window the codebase fills: green for under 50%, yellow for 50-70%, red for 70%+. The context window size is configurable and defaults to 200k (the context size of Claude models).

It's a composite action. Installs tiktoken, runs ~60 lines of inline Python, takes about 10 seconds. The action updates the README but doesn't commit, so your workflow controls the git strategy.
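
The measurement itself is easy to reproduce. Here is a minimal sketch of the idea, not the action's actual ~60-line script; the file filtering, encoding choice, and color cutoffs below are illustrative assumptions:

```python
# Sketch: count a repo's tokens with tiktoken and map context-window
# usage to a badge color. Thresholds and file selection are assumptions.
from pathlib import Path

CONTEXT_WINDOW = 200_000  # configurable; 200k is the Claude-sized default

def repo_tokens(root: str, exts=(".py", ".js", ".ts", ".md")) -> int:
    """Sum tiktoken token counts over source files under `root`."""
    import tiktoken  # third-party; the action installs it on the fly
    enc = tiktoken.get_encoding("cl100k_base")
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(encoding="utf-8", errors="ignore")
            total += len(enc.encode(text))
    return total

def badge_color(tokens: int, window: int = CONTEXT_WINDOW) -> str:
    """Green under 50%, yellow to 70%, red above (illustrative cutoffs)."""
    pct = 100 * tokens / window
    if pct < 50:
        return "green"
    if pct < 70:
        return "yellow"
    return "red"
```

From there, updating the README badge is just string substitution on the badge URL, which the action leaves uncommitted so your workflow controls the git strategy.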

The idea is to make token size a visible metric, like bundle size badges for JS libraries. Hopefully a small nudge to keep codebases lean and agent-friendly.

GitHub: https://github.com/qwibitai/nanoclaw/tree/main/repo-tokens

Comments

agentica_ai•1h ago
Smart idea. Token budgets are becoming the new line count metric for the LLM era.
irishcoffee•1h ago
Nah. I can write a whole program using 0 tokens, I can’t write a whole program with 0 lines of code.
collabs•1h ago
This is an interesting concept. Thank you for sharing. I have an export.sh or export.ps1 script that takes the relevant files in my repository and puts them in a `dump.txt` file inside `docs/llm`.

I am not very good with AI though. Is there a quick and easy way to calculate the token count and add it to my dump.txt file, ideally using only simple tools included by default on Linux (bash) or Windows (PowerShell)?

Thank you in advance.

Towaway69•1h ago
What’s the going rate for tokens in terms of dollars? How much are companies spending on “tokens”?

Also kind of ironic that small codebases are now in vogue, just when Google-style monolithic repos were so popular.

c0balt•1h ago
> What’s the going rate for tokens in terms of dollars?

It depends on the provider/model. Pricing is usually quoted in $/million tokens, with input and output tokens priced differently (output tends to be more expensive than input). Some models also charge more per token once the context size exceeds a threshold, and cached tokens may be billed at a reduced rate.

OpenRouter has a good overview of providers and models: https://openrouter.ai/models

The math on what people are actually paying is hard to evaluate. IME, most companies would rather buy a subscription than give their developers API keys, as it makes spending predictable.
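
To make the $/million-token pricing concrete, here is a back-of-the-envelope calculation with made-up rates; the prices, cache discount, and `request_cost` helper are all hypothetical, and real prices vary by model and provider:

```python
# Illustrative token-cost math with hypothetical rates; real prices
# differ per model/provider and change often.
INPUT_PRICE = 3.00    # $ per million input tokens (assumed)
OUTPUT_PRICE = 15.00  # $ per million output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0, cache_discount: float = 0.9) -> float:
    """Cost of one call: cached input tokens are billed at a discount."""
    billable_input = input_tokens - cached_tokens * cache_discount
    return (billable_input * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1e6

# e.g. stuffing a 200k-token codebase into context for a 2k-token reply:
cost = request_cost(200_000, 2_000)  # ≈ $0.63 per uncached request
```

Under these assumed numbers, output tokens dominate for short contexts while input tokens dominate once you start shipping whole codebases per request, which is one reason a visible token-count badge maps fairly directly to money.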

Towaway69•1h ago
API keys with hard limits, I assume?

Are there companies out there that add token counts to ticket “costs”, i.e. are story points being replaced/augmented by token counts?

Or even worse, an exchange rate of story points to tokens used…

jannniii•1h ago
Interesting concept, but will it age well when model context sizes are changing all the time (mostly growing)?
Retr0id•1h ago
max context sizes are probably going to go up, but smaller contexts will always be cheaper/more-efficient than larger ones
nebezb•1h ago
Useful and useless (or good and “less good”) aren’t easily mapped to big and small.

From a purely UX perspective, showing a red badge suggests you're conflating "less good" with size. Who is the target for this? Lots of useful codebases are large.

I do agree, however, that there’s value in splitting up domains into something a human can easily learn and keep in their head after, say, a few days of being deeply entrenched. Tokens could actually be a good proxy for this.

iterateoften•1h ago
> Who is the target for this?

Agents. There are going to be more tools and software targeted at consumption by agents.

adam_arthur•1h ago
Yeah, but a large monorepo can consist of many small subprojects. And arguably this is becoming a best practice.

Just spawn the agent in one of the subprojects

Retr0id•1h ago
Some say that the ideal size of an individual function in a codebase is related to the amount of information you can hold in working memory. Maybe the ideal size for a library is the amount you can fit in an LLM context window?
ai-christianson•1h ago
This is a really interesting metric to track. I agree with the sentiment that token budgets are becoming the new 'lines of code' metric. Even though context windows are constantly expanding (like the 200k default you used for Opus), there's still a tangible benefit to keeping a codebase lean. It's not just about fitting it into the window, but also about the signal-to-noise ratio for the agent. The color-coding based on percentage is a nice touch for a quick visual health check.
kccqzy•1h ago
It’s interesting but I think it’s measuring the wrong thing. Abstraction is a fundamental principle in software. As a human, I’ve worked with classes and modules far larger than what fits in my head, just because I’m only fitting the function signatures and purpose into my head, and not the implementation details. In practice I find Claude really good at extracting useful information in a human-like way from a codebase. It doesn’t usually stuff the entire codebase into its context window.
daxfohl•48m ago
Also, this rewards dynamic languages over typed languages and penalizes comments, descriptive function names, etc. Though frankly, it'd be interesting to see whether AI would work better with a project in JavaScript that barely fits in context, or the same thing in TypeScript that overflows. I could imagine either, but my guess is "it depends". Though "depends on what" would be interesting to know.

Still, this seems useful for being able to see at a glance. I have no idea where most of my own projects would land.

b112•1h ago
It's a fun, in the "style of the time" thing to track, but within a year or two, context window limitations won't be a thing.

Doubt me?

Think back 2 years. Now compare today. Change is at massive speed, and this issue is top line to be resolved in some fashion.

arscan•1h ago
I’m not so sure an increasingly large context window will be seen as a critical enabler (as it was viewed 6 months ago), after watching how amazingly effective subagents and tool calls are at tackling parts of the problem and surfacing just the relevant bits for the task at hand. And if increasing the context window isn’t the current bottleneck, effort will be put elsewhere.
spot5010•1h ago
I agree. My suspicion is that token efficiency is what will drive more efficient tool calls and tool building. And we want that. Agents should rely less on raw intelligence (the ability to hold everything in context) and more on building tools to get the job done.
written-beyond•1h ago
Gemini 1.5 announced the 1-million-token context window in 2024. I admire this forward-looking view of new technologies, especially since looking back through old HN posts/comments shows how bad people can be at predictions.

If we look back 2 years, companies weren't investing so heavily in training their LLMs on code. Any code they got their hands on was whatever was already in the LLMs' training corpus; it's well known that the most recent improvements in LLM productivity came after they spent millions on different labs to produce more coding datasets for them.

So LLMs have gotten a lot better at not needing the entire codebase in context at once, because their weights are already so well tuned to development environments that they can infer and index things as needed. However, I fail to see how the context window limitation would stop being an issue, since it's a fundamental part of the real world. Will we get better and more efficient ways of splitting and indexing context windows? Surely. Will that reduce our fear of soiling our contexts with bad prompt-response cycles? Probably not...

spicyusername•1h ago
I'm not sure that smaller codebases are always better.
unglaublich•49m ago
value/size
KingOfCoders•1h ago
Interesting, but not adding something to my CI for a badge, too paranoid.
ramoz•50m ago
I haven't cared too much about repo tokens in a good while.

But my coolest app was a better context creator. I found it hard to extend to actual agentic coding use. Agentic discovery is generally useful and reliable - the token overhead can be managed by the harness (e.g. Claude Code).

https://prompttower.com/

layer8•25m ago
Maybe it’s useful to dig out the concept of modularization with a distinction between interface and implementation again, and construct agents that are able to make effective use of it.

In the case that interfaces remain unchanged, agents only need to look at the implementation of a single module at a time plus the interfaces it consumes and implements. And when changing interfaces, agents only need to look at the interfaces of the modules concerned, and at most a limited number of implementation considerations.

It’s the very reason why we humans invented modularization: so that we don’t have to hold the complete codebase in our heads (“context windows”) in order to reason about it and make changes to it in a robust and well-grounded way.

sltr•8m ago
Blogged about this very thing a few days ago

https://www.slater.dev/2026/02/relieve-your-context-anxiety-...

We deserve a better streams API for JavaScript

https://blog.cloudflare.com/a-better-web-streams-api/
173•nnx•3h ago•66 comments

We gave terabytes of CI logs to an LLM

https://www.mendral.com/blog/llms-are-good-at-sql
37•shad42•1h ago•27 comments

The Pentagon is making a mistake by threatening Anthropic

https://www.understandingai.org/p/the-pentagon-is-making-a-mistake
102•speckx•2h ago•71 comments

Statement from Dario Amodei on our discussions with the Department of War

https://www.anthropic.com/news/statement-department-of-war
2614•qwertox•18h ago•1397 comments

Tenth Circuit: 4th Amendment Doesn't Support Broad Search of Protesters' Devices

https://www.eff.org/deeplinks/2026/02/victory-tenth-circuit-finds-fourth-amendment-doesnt-support...
159•hn_acker•2h ago•16 comments

Show HN: RetroTick – Run classic Windows EXEs in the browser

https://retrotick.com/
97•lqs_•4h ago•33 comments

F-Droid Board of Directors nominations 2026

https://f-droid.org/2026/02/26/board-of-directors-nominations.html
117•edent•6h ago•63 comments

Can you reverse engineer our neural network?

https://blog.janestreet.com/can-you-reverse-engineer-our-neural-network/
198•jsomers•2d ago•127 comments

Experts sound alarm after ChatGPT Health fails to recognise medical emergencies

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergen...
71•simonebrunozzi•1h ago•57 comments

An interactive intro to quadtrees

https://growingswe.com/blog/quadtrees
144•evakhoury•2d ago•15 comments

Get free Claude max 20x for open-source maintainers

https://claude.com/contact-sales/claude-for-oss
156•zhisme•8h ago•90 comments

The Hunt for Dark Breakfast

https://moultano.wordpress.com/2026/02/22/the-hunt-for-dark-breakfast/
421•moultano•13h ago•159 comments

The normalization of corruption in organizations (2003) [pdf]

https://gwern.net/doc/sociology/2003-ashforth.pdf
208•rendx•11h ago•109 comments

Breaking Free

https://www.forbrukerradet.no/breakingfree/
123•Aissen•7h ago•20 comments

Sprites on the Web

https://www.joshwcomeau.com/animation/sprites/
28•vinhnx•3d ago•9 comments

The quixotic team trying to build a world in a 20-year-old game

https://arstechnica.com/gaming/2026/02/inside-the-quixotic-team-trying-to-build-an-entire-world-i...
79•nxobject•2d ago•15 comments

NASA announces major overhaul of Artemis program amid safety concerns, delays

https://www.cbsnews.com/news/nasa-artemis-moon-program-overhaul/
5•voxadam•52m ago•2 comments

Vibe coded Lovable-hosted app littered with basic flaws exposed 18K users

https://www.theregister.com/2026/02/27/lovable_app_vulnerabilities/
13•nottorp•38m ago•0 comments

Modeling Cycles of Grift with Evolutionary Game Theory

https://www.oranlooney.com/post/grifters-skeptics-marks/
3•ibobev•3d ago•2 comments

How to Allocate Memory

https://geocar.sdf1.org/alloc.html
38•tosh•2d ago•1 comment

Reading English from 1000 AD

https://lewiscampbell.tech/blog/260224.html
60•LAC-Tech•3d ago•24 comments

Ubicloud (YC W24): Software Engineer – $95-$250K in Turkey, Netherlands, CA

https://www.ycombinator.com/companies/ubicloud/jobs/j4bntEJ-software-engineer
1•ozgune•8h ago

What Claude Code chooses

https://amplifying.ai/research/claude-code-picks
546•tin7in•23h ago•208 comments

Working on Pharo Smalltalk: BPatterns: Rewrite Engine with Smalltalk Style

http://dionisiydk.blogspot.com/2026/02/bpatterns-rewrite-engine-with-smalltalk.html
51•mpweiher•8h ago•4 comments

80386 Protection

https://nand2mario.github.io/posts/2026/80386_protection/
112•nand2mario•3d ago•27 comments

AirSnitch: Demystifying and breaking client isolation in Wi-Fi networks [pdf]

https://www.ndss-symposium.org/wp-content/uploads/2026-f1282-paper.pdf
394•DamnInteresting•1d ago•173 comments

What does " 2>&1 " mean?

https://stackoverflow.com/questions/818255/what-does-21-mean
381•alexmolas•21h ago•224 comments

The complete Manic Miner disassembly

https://skoolkit.ca/disassemblies/manic_miner/
50•sandebert•9h ago•7 comments

Layoffs at Block

https://twitter.com/jack/status/2027129697092731343
850•mlex•20h ago•963 comments