
The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•gozzoo•1m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•1m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•2m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•2m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•7m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•10m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•13m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•14m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•14m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•15m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•15m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•16m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•17m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•20m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•23m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•23m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•29m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•30m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•30m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•33m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•36m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•36m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•36m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•36m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•38m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•40m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•42m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•45m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•45m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•45m ago•1 comment

Show HN: USST – A protocol to reduce LLM context redundancy by 98.5%

https://gist.github.com/maverick069/06d6f6e89947d621b4905765245a220a
2•mgopanna•2mo ago
I’ve been working on a primitive called User-Segmented Session Tokens (USST).

The Problem: Currently, if a teacher (or lead dev) wants 50 students (or junior devs) to use an LLM with a specific, deep context (e.g., a 50-page curriculum or a complex repo), all 50 users have to re-upload and re-tokenize that context. It’s redundant, expensive, and forces everyone to have a high-tier subscription.

The Solution: USST allows a "Sponsor" (authenticated, paid account) to run a Deep Research session once and mint a signed Context Token. Downstream users (anonymous/free tier) pass this token in their prompt. The provider loads the pre-computed KV cache/context state without re-processing the original tokens.
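To make the flow concrete, here is a minimal mint-and-redeem sketch in Python. Every name in it (mint_context_token, redeem, the kv_cache_id handle, HMAC signing) is illustrative, not the real protocol; the actual wire format is in the gist above.

    import base64, hashlib, hmac, json, time

    PROVIDER_KEY = b"provider-held signing secret"  # never leaves the provider

    def mint_context_token(sponsor_id, kv_cache_id, ttl_s=86400):
        # Sponsor runs the heavy context once; provider signs a pointer to the cached state
        payload = {"sponsor": sponsor_id,   # who paid for the prefill
                   "cache": kv_cache_id,    # opaque handle to the precomputed KV state
                   "exp": int(time.time()) + ttl_s}
        body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
        sig = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def redeem(token):
        # Any downstream user presents the token; no Sponsor credentials involved
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(PROVIDER_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("bad signature")
        payload = json.loads(base64.urlsafe_b64decode(body))
        if payload["exp"] < time.time():
            raise ValueError("token expired")
        return payload  # provider attaches the cached KV state, bills only the delta

    token = mint_context_token("teacher-42", "kv-abc123")
    print(redeem(token)["cache"])  # students pass this token, not the 50-page context

The property that matters: redeem() does no account lookup, so possession of a valid, unexpired token is the entire capability. That is also exactly where the abuse vectors live (leakage, replay, over-broad TTLs), hence the feedback request below.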

    Decouples payment from utility: the Sponsor pays the heavy compute; Users pay the inference.

    Privacy: Users don't need the Sponsor's credentials, just the token.

    Efficiency: removes the "Linear Bleed" of context re-computation.

I wrote up the full architecture and the "why" here: https://medium.com/@madhusudan.gopanna/the-8-6-billion-oppor...

The Protocol Spec / Repo is the main link above.

Would love feedback on the abuse vectors and how this fits with current provider caching (like Anthropic’s prompt caching).

Comments

mgopanna•2mo ago
I wanted to share the economic model that drove me to build this. I call it the "Redundancy Tax."

When you look at the hidden costs of "Per-Seat" architecture in an education setting, the numbers get large very quickly. I broke down the cost of redundant context re-processing:

The Baseline:

    Target: ~20M connected classrooms (secondary/tertiary globally).

    Volume: 1,000 high-value interactions per classroom per year (a conservative estimate for active AI tutoring).

    The Waste: Re-processing a 35k-token context window for every single student query instead of reusing the cached state.

The USST Math: By shifting from "Raw Mode" (everyone tokenizes everything) to "USST Mode" (the Sponsor tokenizes once, students reuse):

    We see a ~98.5% reduction in incremental token load.

    That saves roughly $0.432 per interaction in compute costs.

    $0.432 × 1,000 interactions × 20M classrooms ≈ $8.6 billion annually.

The Grid Impact: Beyond the money, this is an infrastructure stability issue. A simultaneous classroom start (e.g., 10:05 AM) currently looks like a 1 MW spike on the grid. With shared context tokens, that drops to a 15 kW blip (just the inference delta).
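If you want to sanity-check the arithmetic, here it is as runnable Python. The $0.432 per-interaction saving, the 98.5% reduction, and the 1 MW spike are my modeling assumptions, not measurements:

    CLASSROOMS = 20_000_000          # ~20M connected classrooms
    INTERACTIONS_PER_YEAR = 1_000    # high-value interactions per classroom
    SAVING_PER_INTERACTION = 0.432   # USD of avoided context re-processing

    annual = SAVING_PER_INTERACTION * INTERACTIONS_PER_YEAR * CLASSROOMS
    print(f"${annual / 1e9:.2f}B saved per year")       # -> $8.64B saved per year

    # Grid framing: the same 98.5% cut applied to a simultaneous-start spike
    spike_kw = 1_000.0               # 1 MW classroom start at 10:05 AM
    print(f"{spike_kw * (1 - 0.985):.0f} kW residual")  # -> 15 kW residual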

We don't need 100x more chips to solve this; we just need a protocol that stops treating every user session as a blank slate.