frontpage.

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•gozzoo•2m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•2m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•3m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•3m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•8m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•11m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•14m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•15m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•15m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•16m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•16m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•17m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•18m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•21m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•24m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•24m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•30m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•31m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•31m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•34m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•37m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•37m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•37m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•37m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•39m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•41m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•43m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•45m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•46m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•46m ago•1 comment

How to Give Your RTX 4090 Nearly Infinite Memory for LLM Inference

https://medium.com/data-science-collective/how-to-give-your-rtx-gpu-nearly-infinite-memory-for-llm-inference-de2c57af1e82
2•dikobraz•5mo ago

Comments

dikobraz•5mo ago
We explored a network-attached KV-cache for consumer GPUs to offset their limited VRAM. It doesn’t make RTX cards run giant models efficiently. Still, for workloads that repeatedly reuse lengthy prefixes—such as chatbots, coding assistants, and multi-turn threads—it delivers a 2–4× speedup in RPS and time-to-first-token on 7B and 70B models.

How it works: On return visits, instead of re-running the prompt through the model, we fetch previously computed KV blocks from network storage and skip re-computing those tokens (i.e., we avoid re-running prefill on repeated prefixes). This helps whenever VRAM can't hold every session's cache and users pause between messages, which is almost always the case.
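
A minimal sketch of that lookup path (the kv_store, model.prefill, and model.decode names are hypothetical stand-ins for illustration, not our actual interfaces): hash the token prefix, fetch its KV blocks from network storage on a hit, and only pay for prefill on a miss.

```python
# Hypothetical sketch of prefix KV reuse; kv_store, model.prefill, and
# model.decode are illustrative stand-ins, not the real interfaces.
import hashlib

def prefix_key(token_ids: list[int]) -> str:
    # Content-address the prompt prefix so identical prefixes map to the same KV blocks.
    return hashlib.sha256(repr(token_ids).encode()).hexdigest()

def generate(model, kv_store, token_ids: list[int], max_new_tokens: int):
    key = prefix_key(token_ids)
    kv_cache = kv_store.get(key)                # fetch KV blocks from network storage
    if kv_cache is None:                        # cache miss: pay the prefill cost once
        kv_cache = model.prefill(token_ids)
        kv_store.put(key, kv_cache)             # persist for the next turn / return visit
    # Cache hit skips prefill entirely; decoding continues from the stored state.
    return model.decode(kv_cache, max_new_tokens=max_new_tokens)
```

Real systems typically hash per KV block so partially overlapping prefixes can still reuse their shared head; the sketch keys on the whole prefix for brevity.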

Why RTX benefits: Prefill is the computationally intensive part (quadratic attention, numerous reductions, and inter-GPU traffic). Without NVLink, PCIe becomes the choke point in multi-GPU setups. KV-caching cuts repeated prefill, leaving mostly the lighter decoding step—something PCIe-only RTX nodes handle well.
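
To put rough numbers on that (standard approximations, not our measurements): the linear layers cost about 2 FLOPs per parameter per token, attention adds a term that grows with the square of the prompt length, and each decoded token costs roughly one pass over the weights. Reusing a long prefix therefore skips thousands of decode-steps' worth of compute before the first new token.

```python
# Back-of-envelope only: ~2 FLOPs per parameter per token for the linear layers,
# plus a rough O(L^2) attention term. Constants are approximations, not measurements.
def prefill_flops(n_params: float, n_layers: int, d_model: int, prompt_len: int) -> float:
    linear = 2 * n_params * prompt_len
    attention = 4 * n_layers * d_model * prompt_len ** 2
    return linear + attention

def decode_flops_per_token(n_params: float) -> float:
    return 2 * n_params  # each new token is roughly one pass over the weights

# Example: a ~7B model (32 layers, d_model=4096) with a reused 4,096-token prefix.
p = prefill_flops(7e9, 32, 4096, 4096)
d = decode_flops_per_token(7e9)
print(f"prefill ~ {p:.2e} FLOPs, one decode step ~ {d:.2e} FLOPs, "
      f"ratio ~ {p / d:,.0f} decode steps of work skipped on a cache hit")
```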

Results & endpoint:

- 2–4× speedup on multi-turn benchmarks (RPS & TTFT) with RTX 4090.
- We've opened one free public endpoint for demos, not production grade (https://console.cloudrift.ai/inference?modelId=meta-llama%2F...). Ping us at hello@cloudrift.ai if you need a reliable setup.

Technical Notes:

- Works with consumer and data-center GPUs. In theory, you can even split roles: NVLink boxes do prefill, while cheaper RTX pods serve as decoders using stored KV.
- We use special hardware to reduce fetch overhead and offload the CPU, but you can reproduce this at home with a regular NAS (with lower peak performance); a rough sketch follows below.
- For a deeper walkthrough of the math and architecture, see this video from the KV-cache vendor (https://www.youtube.com/watch?si=T69vxku8xPr6p7I0&v=CV4FYMTF...)
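
For the "regular NAS" reproduction, the simplest version is a filesystem-backed store: serialize KV blocks to an NFS/SMB mount and read them back on a hit. Everything below (the mount path, treating the cache as a list of tensors) is an assumption for illustration, not how our service stores blocks.

```python
# Illustrative filesystem-backed KV store for a home NAS setup.
# The mount path and the "list of tensors" cache layout are assumptions.
import os
import torch

class NASKVStore:
    def __init__(self, root: str = "/mnt/nas/kv-cache"):   # hypothetical NFS/SMB mount
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, key: str) -> str:
        return os.path.join(self.root, f"{key}.pt")

    def get(self, key: str):
        path = self._path(key)
        if not os.path.exists(path):
            return None                                     # miss: caller runs prefill
        return torch.load(path, map_location="cuda")        # pull blocks back onto the GPU

    def put(self, key: str, kv_cache) -> None:
        # Write device-independent CPU copies so any node on the network can reuse them.
        torch.save([t.detach().cpu() for t in kv_cache], self._path(key))
```

Plugging this in as the kv_store from the earlier sketch closes the loop; a purpose-built cache adds faster fetch paths and block-level eviction, which is roughly where the special hardware mentioned above comes in.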