Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•33s ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•1m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•3m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
1•Brajeshwar•7m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
2•Brajeshwar•8m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•8m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•11m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•14m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•15m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•15m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•16m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•21m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•25m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•29m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•31m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•32m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•38m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•41m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•42m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•43m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•44m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•44m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•44m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•45m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•49m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•49m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•50m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•50m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•59m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•59m ago•0 comments

Why stop at 1M tokens when you can have 10M?

2•Zen_Sherbert•3mo ago
To start us off, I'm going to make a ridiculous claim.

On my 7800XT gaming GPU, using less than 3GB of VRAM for the buffer, I have built an architecture that can process a 10 million token context.

This is not a joke. You can run it in a Google Colab notebook, on a free T4, and prove it to yourself right now:

The Proteus Playground https://colab.research.google.com/github/Zen-Sherbert/Proteus-Attention/blob/main/TinyPlayground.ipynb

It runs flawlessly on both CUDA and ROCm. It works. With the proof-of-concept out of the way, here are the three core ideas that got me here.

1. DNA - Tokens have value.

My journey started with a simple idea: tokens mean something. They have value. So why don't we use it?

I built a system called DNA, where each attention "gate" learns a "taste" for certain tokens and pulls them in like gravity. The crazy part? On a raw, untrained model, I found that 334 out of 500 tokens were already being caught by this system. It's a natural, emergent behavior.
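In spirit, a single gate looks something like this toy sketch (illustrative only; the class name, the dot-product scoring rule, and the top-k cutoff are my stand-ins, not the actual repo code):

    import torch
    import torch.nn as nn

    class TokenGate(nn.Module):
        """One 'gate' with a learned taste vector; it keeps the tokens it scores highest."""
        def __init__(self, d_model, keep=64):
            super().__init__()
            self.taste = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)
            self.keep = keep

        def forward(self, tokens):
            # tokens: (seq_len, d_model) -- score every token against this gate's taste
            scores = tokens @ self.taste                       # (seq_len,)
            idx = scores.topk(min(self.keep, tokens.shape[0])).indices
            return tokens[idx]                                 # the tokens this gate "caught"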

2. The Alpha Slider - "Why can't I just change my model?"

I hated that I couldn't just switch my model from dense, to sparse, to linear whenever I wanted. So, I built a custom Triton kernel to do exactly that.

The result is a single knob called alpha:

Dense, high-fidelity? alpha = 0.0.

Balanced sub-quadratic? alpha = 0.3.

Screaming-fast linear time? alpha = 1.0 and the attention mechanic goes brrrrrr.
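For intuition, here is a toy PyTorch blend between a dense softmax path and a linear feature-map path. The real knob lives inside the custom Triton kernel and presumably prunes work at intermediate alpha rather than computing both paths in full, so treat this purely as an illustration of what the dial means:

    import torch
    import torch.nn.functional as F

    def alpha_attention(q, k, v, alpha=0.3):
        # q, k, v: (seq_len, d_head); alpha interpolates between dense and linear attention.
        d = q.shape[-1]

        # Dense path: standard softmax attention, O(n^2) in sequence length.
        dense = F.softmax(q @ k.T / d ** 0.5, dim=-1) @ v

        # Linear path: elu+1 feature map, O(n) in sequence length.
        qf, kf = F.elu(q) + 1, F.elu(k) + 1
        linear = (qf @ (kf.T @ v)) / (qf @ kf.sum(dim=0, keepdim=True).T)

        # alpha = 0.0 -> fully dense, alpha = 1.0 -> fully linear.
        return (1 - alpha) * dense + alpha * linear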

3. Chunking & RoPE - "So I got rid of it."

My new systems got me far, but the VRAM bottleneck was still a headache. So I got rid of it.

The idea is simple: chunking. Break a massive context into small pieces, shunt them to system RAM, and use a tiny VRAM buffer for only the most important tokens.

DNA tells us what's important. As a Hail Mary, I added RoPE to preserve where it came from. This combination creates contextual teleportation. It allows the model to create a perfect "highlight reel" and reason over it as if critical facts, separated by thousands of pages, were sitting right next to each other. It's your own little wormhole across data space.
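A minimal sketch of that loop, assuming a learned importance score and a fixed VRAM budget (the function name, chunk size, and scoring rule are stand-ins, not the repo's code):

    import torch

    def build_highlight_reel(cpu_tokens, taste, budget=4096, chunk=8192, device="cuda"):
        # cpu_tokens: (seq_len, d_model) living in system RAM; taste: (d_model,) on the GPU.
        scores, positions = [], []
        for start in range(0, cpu_tokens.shape[0], chunk):
            block = cpu_tokens[start:start + chunk].to(device, non_blocking=True)
            scores.append(block @ taste)                                  # importance per token
            positions.append(torch.arange(start, start + block.shape[0], device=device))
        scores, positions = torch.cat(scores), torch.cat(positions)

        keep = scores.topk(min(budget, scores.numel())).indices
        keep_pos = positions[keep].sort().values                          # original positions, in order
        reel = cpu_tokens[keep_pos.cpu()].to(device)                       # the tiny VRAM buffer
        return reel, keep_pos                                              # apply RoPE with keep_pos, not 0..budget

The detail that matters is the last line: the kept tokens carry their original positions into RoPE, so the highlight reel still knows where each excerpt came from.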

TL;DR: I built an extreme context system that costs less than Minecraft to run. Would love feedback, as I'm still exploring how far it can go.

Github: https://github.com/Zen-Sherbert/Proteus-Attention/tree/main

Comments

Zen_Sherbert•3mo ago
A little bit about the origin story for those who are interested:

This whole thing started with me trying to implement sparsity, and getting it totally wrong. The DNA idea came to me in the middle of the night during my shift as an asset protection officer. The rest of it was just fumbling from one discovery to the next, mostly ignoring the "right" way to do things.

I'm an 8-year veteran, a father of three, and I just finished my bachelor's. I am not an AI researcher. If I can build this, you can do something infinitely better.

Please, try the Colab. Break it. Play with it. I implore you to tell me how it breaks. I'm excited to see what the community thinks.

gus_massa•3mo ago
Clicky: https://colab.research.google.com/github/Zen-Sherbert/Proteu... https://github.com/Zen-Sherbert/Proteus-Attention/tree/main

> The idea is simple: chunking. Break a massive context into small pieces, shunt them to system RAM, and use a tiny VRAM buffer for only the most important tokens.

So, ... you are cherry-picking some tokens to be added to the context?

Zen_Sherbert•3mo ago
In essence, that's exactly the idea.

It's not quite what you might think, though. It's about choosing the right words, in the right places, under the right context.

You submit a 5 million token document of mixed data. It's a jumble of finances, cooking, and old stereo instructions for some reason.

You ask the model what ingredients are in a chicken caprese.

It won't have to read millions of tokens; it will understand the what, the where, and the why.

So chunking specifically isn't about understanding an entire 5-million-token context window in one go.

It's more about working with it in small pieces at inference time.

It is not a replacement, but rather an alternative. An early one at that.
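As a hedged illustration only (the names and the query-averaging trick are mine, not the project's), selection conditioned on the question might look like this:

    import torch

    def select_for_query(doc_tokens, query_tokens, budget=2048):
        # doc_tokens: (doc_len, d); query_tokens: (q_len, d), e.g. the caprese question.
        query_dir = query_tokens.mean(dim=0)               # crude summary of the question
        scores = doc_tokens @ query_dir                     # relevance of each stored token
        idx = scores.topk(min(budget, doc_tokens.shape[0])).indices.sort().values
        return doc_tokens[idx], idx                         # compact, order-preserving subset

Only the tokens that score against the question reach the small buffer, which is why the model never has to "read" the finance or stereo sections at all.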

Thank you for taking the time to read, I appreciate the input and the skepticism too.

If you have more, please share it.

gus_massa•3mo ago
> It's not what you think it is though. It's choose the right words in the right places under the right context.

That's approximately what I thought; I just wanted to be sure. Anyway, the details are very important.

> I appreciate the input and the skepticism too.

Let's say 20% skepticism and 80% it sounds like a good idea. I don't use AI models much, so it's hard for me to evaluate this. Let's hope someone else can give solicited and unsolicited feedback.