frontpage.

Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code

https://github.com/mksglu/claude-context-mode
40•mksglu•1h ago
Every MCP tool call dumps raw data into Claude Code's 200K context window. A Playwright snapshot costs 56 KB; 20 GitHub issues cost 59 KB. After 30 minutes, 40% of your context is gone.

I built an MCP server that sits between Claude Code and these outputs. It processes them in sandboxes and only returns summaries. 315 KB becomes 5.4 KB.

It supports 10 language runtimes, SQLite FTS5 with BM25 ranking for search, and batch execution. Session time before slowdown goes from ~30 min to ~3 hours.

MIT licensed, single command install:

  /plugin marketplace add mksglu/claude-context-mode

  /plugin install context-mode@claude-context-mode

Benchmarks and source: https://github.com/mksglu/claude-context-mode

Would love feedback from anyone hitting context limits in Claude Code.

Comments

handfuloflight•1h ago
You're speaking about context but measuring in kilobytes; can you confirm the token savings data?

And when you say it only returns summaries, does that mean there are LLM calls happening in the sandbox?

mksglu•1h ago
Hey! Thank you for your comment! There are test examples in the README. Could you please try them? Your feedback is valuable.
mksglu•1h ago
For your second question: No LLM calls. Context Mode uses algorithmic processing — FTS5 indexing with BM25 ranking and Porter stemming. Raw output gets chunked and indexed in a SQLite database inside the sandbox, and only the relevant snippets matching your intent are returned to context. It's purely deterministic text processing, no model inference involved.
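The deterministic pipeline described above can be sketched in a few lines with SQLite's built-in FTS5. This is a hypothetical illustration, not Context Mode's actual schema: chunks of raw tool output are indexed with a Porter-stemming tokenizer, and only the BM25-top-ranked snippets are retrieved.

```python
import sqlite3

# Index raw tool output as chunks in an FTS5 table with Porter stemming.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE chunks USING fts5(content, tokenize='porter')")

raw_output = [
    "Issue #12: login fails with 500 when the token is expired",
    "Issue #13: dark mode toggle renders incorrectly on Safari",
    "Issue #14: expired tokens are not refreshed automatically",
]
db.executemany("INSERT INTO chunks VALUES (?)", [(c,) for c in raw_output])

# FTS5 exposes BM25 via the hidden `rank` column: ORDER BY rank returns
# best matches first. Porter stemming lets "expiring" match "expired".
hits = db.execute(
    "SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT 2",
    ("token expiring",),
).fetchall()
```

Only the two matching snippets would enter the conversation context; the unmatched chunk stays in the index.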
handfuloflight•33m ago
Excellent, thank you for your responses. Will be putting it through a test drive.
mksglu•23m ago
Sure, thank you for your comment!
sim04ful•1h ago
Looks pretty interesting. How could I use this with other MCP clients, e.g. OpenCode?
mksglu•1h ago
Hey! Thank you for your comment! Since it's a standard MCP server it should work with other clients, but I haven't tested that yet. I'll look into it as soon as possible. Your feedback is valuable.
nightmunnas•1h ago
Nice, I'd love to see it for Codex and opencode.
mksglu•1h ago
Thanks! Context Mode is a standard MCP server, so it works with any client that supports MCP — including Codex and opencode.

Codex CLI:

  codex mcp add context-mode -- npx -y context-mode

Or in ~/.codex/config.toml:

  [mcp_servers.context-mode]
  command = "npx"
  args = ["-y", "context-mode"]

opencode:

In opencode.json:

  {
    "mcp": {
      "context-mode": {
        "type": "local",
        "command": ["npx", "-y", "context-mode"],
        "enabled": true
      }
    }
  }

We haven't tested yet — would love to hear if anyone tries it!
vicchenai•57m ago
The BM25+FTS5 approach without LLM calls is the right call - deterministic, no added latency, no extra token spend on compression itself.

The tradeoff I want to understand better: how does it handle cases where the relevant signal is in the "low-ranked" 310 KB, but you just haven't formed the query that would surface it yet? The compression is necessarily lossy - is there a raw mode fallback for when the summarized context produces unexpected downstream results?

Also curious about the token count methodology - are you measuring Claude's tokenizer specifically, or a proxy?

mksglu•51m ago
Great questions.

--

On lossy compression and the "unsurfaced signal" problem:

Nothing is thrown away. The full output is indexed into a persistent SQLite FTS5 store — the 310 KB stays in the knowledge base, only the search results enter context. If the first query misses something, you (or the model) can call search(queries: ["different angle", "another term"]) as many times as needed against the same indexed data. The vocabulary of distinctive terms is returned with every intent-search result specifically to help form better follow-up queries.

The fallback chain: if intent-scoped search returns nothing, it splits the intent into individual words and ranks by match count. If that still misses, batch_execute has a three-tier fallback — source-scoped search → boosted search with section titles → global search across all indexed content.
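The tiered fallback described above amounts to "try progressively broader searches until one hits." A minimal sketch, with toy stand-ins for the source-scoped, boosted, and global tiers (all names here are hypothetical, not Context Mode's API):

```python
from typing import Callable, List

def search_with_fallback(
    query: str,
    tiers: List[Callable[[str], List[str]]],
) -> List[str]:
    # Run each progressively broader tier in order and return
    # the first non-empty result set.
    for tier in tiers:
        hits = tier(query)
        if hits:
            return hits
    return []

# Toy corpus and tiers for illustration only.
corpus = ["deploy failed on node 3", "retry succeeded after restart"]
source_scoped = lambda q: []                        # tier 1: scoped search misses
boosted = lambda q: [c for c in corpus if q in c]   # tier 2: broader match hits
global_search = lambda q: list(corpus)              # tier 3: everything indexed

result = search_with_fallback("retry", [source_scoped, boosted, global_search])
```

Because each tier only runs when the previous one returns nothing, the narrowest (cheapest, most precise) result always wins when it exists.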

There's no explicit "raw mode" toggle, but if you omit the intent parameter, execute returns the full stdout directly (smart-truncated at 60% head / 40% tail if it exceeds the buffer). So the escape hatch is: don't pass intent, get raw output.
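The 60% head / 40% tail smart truncation is easy to picture as a standalone function. A hypothetical re-implementation of the behavior described (the function name and marker text are assumptions):

```python
def smart_truncate(text: str, limit: int) -> str:
    # Keep 60% of the budget from the head and 40% from the tail,
    # so both the start of the output and its final lines survive.
    if len(text) <= limit:
        return text
    head = int(limit * 0.6)
    tail = limit - head
    return text[:head] + "\n...[truncated]...\n" + text[-tail:]
```

Output shorter than the buffer passes through untouched; only oversized output loses its middle.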

On token counting:

It's a bytes/4 estimate using Buffer.byteLength() (UTF-8), not an actual tokenizer. Marked as "estimated (~)" in stats output. It's a rough proxy — Claude's tokenizer would give slightly different numbers — but directionally accurate for measuring relative savings. The percentage reduction (e.g., "98%") is measured in bytes, not tokens, comparing raw output size vs. what actually enters the conversation context.
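The two measurements described, the bytes/4 token proxy and the byte-level percentage reduction, can be sketched like this (a Python illustration of the described heuristics, not the project's actual code):

```python
def estimate_tokens(text: str) -> int:
    # Rough ~4-bytes-per-token heuristic over the UTF-8 byte length,
    # mirroring a Buffer.byteLength(text) / 4 estimate.
    return len(text.encode("utf-8")) // 4

def byte_reduction_pct(raw_kb: float, kept_kb: float) -> float:
    # Percentage reduction measured in bytes: compare raw output size
    # against what actually enters the conversation context.
    return 100.0 * (1.0 - kept_kb / raw_kb)
```

For the headline numbers, `byte_reduction_pct(315, 5.4)` comes out around 98%, which is why the reduction figure is byte-based rather than tokenizer-exact.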

rcarmo•5m ago
Nice trick. I’m going to see how I can apply it to tool calls in pi.dev as well

Sovereignty in a System Prompt

https://pop.rdi.sh/sovereignty-in-a-system-prompt/
15•0x5FC3•18h ago•2 comments

I'm helping my dog vibe code games

https://www.calebleak.com/posts/dog-game/
872•cleak•15h ago•253 comments

Cl-kawa: Scheme on Java on Common Lisp

https://github.com/atgreen/cl-kawa
30•varjag•2d ago•5 comments

Show HN: Moonshine Open-Weights STT models – higher accuracy than Whisper Large v3

https://github.com/moonshine-ai/moonshine
225•petewarden•10h ago•47 comments

Mercury 2: Fast reasoning LLM powered by diffusion

https://www.inceptionlabs.ai/blog/introducing-mercury-2
199•fittingopposite•9h ago•87 comments

Pi – A minimal terminal coding harness

https://pi.dev
327•kristianpaul•10h ago•138 comments

Show HN: Quantifying opportunity cost with a deliberately "simple" web app

https://shouldhavebought.com/
4•b0bbi•16h ago•0 comments

Mac mini will be made at a new facility in Houston

https://www.apple.com/newsroom/2026/02/apple-accelerates-us-manufacturing-with-mac-mini-production/
489•haunter•11h ago•479 comments

Turing Completeness of GNU Find: From Mkdir-Assisted Loops to Standalone Comput

https://arxiv.org/abs/2602.20762
13•todsacerdoti•3h ago•2 comments

React just left meta. Here's what that means for developers

https://sulat.com/p/react-just-left-meta-heres-what-that
6•zenoware•1h ago•0 comments

Georgian wine culture dates back, uninterrupted, approximately 8k years

https://www.wsetglobal.com/knowledge-centre/blog/2023/july/05/exploring-georgian-wine-history-gra...
51•Anon84•4d ago•39 comments

Japanese Death Poems

https://www.secretorum.life/p/japanese-death-poems-part-3
13•NaOH•2d ago•0 comments

Hacking an old Kindle to display bus arrival times

https://www.mariannefeng.com/portfolio/kindle/
233•mengchengfeng•12h ago•53 comments

Show HN: Emdash – Open-source agentic development environment

https://github.com/generalaction/emdash
145•onecommit•14h ago•58 comments

I pitched a roller coaster to Disneyland at age 10 in 1978

https://wordglyph.xyz/one-piece-at-a-time
445•wordglyph•19h ago•162 comments

Amazon accused of widespread scheme to inflate prices across the economy

https://www.thebignewsletter.com/p/amazon-busted-for-widespread-price
377•toomuchtodo•7h ago•119 comments

Nearby Glasses

https://github.com/yjeanrenaud/yj_nearbyglasses
305•zingerlio•14h ago•114 comments

Steel Bank Common Lisp

https://www.sbcl.org/
199•tosh•13h ago•77 comments

Cell Service for the Fairly Paranoid

https://www.cape.co/
83•0xWTF•9h ago•91 comments

Show HN: Context Mode – 315 KB of MCP output becomes 5.4 KB in Claude Code

https://github.com/mksglu/claude-context-mode
40•mksglu•1h ago•12 comments

Corgi Labs (YC W23) Is Hiring

https://www.ycombinator.com/companies/corgi-labs/jobs/ZiEIf7a-founders-associate
1•leastsquares•7h ago

Anthropic Drops Flagship Safety Pledge

https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
255•cwwc•7h ago•102 comments

Hugging Face Skills

https://github.com/huggingface/skills
159•armcat•14h ago•44 comments

3D-Printed electric motor via multi-modal, multi-material extrusion

https://www.tandfonline.com/doi/full/10.1080/17452759.2026.2613185
8•westurner•3d ago•7 comments

Aesthetics of single threading

https://ta.fo/aesthetics-of-single-threading/
68•todsacerdoti•3d ago•15 comments

Looks like it is happening

https://www.math.columbia.edu/~woit/wordpress/?p=15500
169•jjgreen•10h ago•115 comments

We installed a single turnstile to feel secure

https://idiallo.com/blog/installed-single-turnstile-for-security-theater
331•firefoxd•2d ago•154 comments

Stripe valued at $159B, 2025 annual letter

https://stripe.com/newsroom/news/stripe-2025-update
197•jez•17h ago•209 comments

IRS Tactics Against Meta Open a New Front in the Corporate Tax Fight

https://www.nytimes.com/2026/02/24/business/irs-meta-corporate-taxes.html
206•mitchbob•19h ago•206 comments

Optophone

https://en.wikipedia.org/wiki/Optophone
68•Hooke•4d ago•12 comments