frontpage.

Take potentially dangerous PDFs, and convert them to safe PDFs

https://github.com/freedomofpress/dangerzone
29•dp-hackernews•1h ago•8 comments

Show HN: ChartGPU – WebGPU-powered charting library (1M points at 60fps)

https://github.com/ChartGPU/ChartGPU
481•huntergemmer•9h ago•143 comments

Claude's new constitution

https://www.anthropic.com/news/claude-new-constitution
268•meetpateltech•8h ago•256 comments

Golfing APL/K in 90 Lines of Python

https://aljamal.substack.com/p/golfing-aplk-in-90-lines-of-python
41•aburjg•5d ago•7 comments

Show HN: RatatuiRuby wraps Rust Ratatui as a RubyGem – TUIs with the joy of Ruby

https://www.ratatui-ruby.dev/
41•Kerrick•4d ago•4 comments

Skip is now free and open source

https://skip.dev/blog/skip-is-free/
254•dayanruben•9h ago•99 comments

Letting Claude play text adventures

https://borretti.me/article/letting-claude-play-text-adventures
65•varjag•5d ago•23 comments

The WebRacket language is a subset of Racket that compiles to WebAssembly

https://github.com/soegaard/webracket
85•mfru•4d ago•20 comments

Challenges in join optimization

https://www.starrocks.io/blog/inside-starrocks-why-joins-are-faster-than-youd-expect
37•HermitX•7h ago•8 comments

Show HN: Rails UI

https://railsui.com/
98•justalever•5h ago•62 comments

Stevey's Birthday Blog

https://steve-yegge.medium.com/steveys-birthday-blog-34f437139cb5
14•throwawayHMM19•1d ago•3 comments

Jerry (YC S17) Is Hiring

https://www.ycombinator.com/companies/jerry-inc/jobs/QaoK3rw-software-engineer-core-automation-ma...
1•linaz•3h ago

TrustTunnel: AdGuard VPN protocol goes open-source

https://adguard-vpn.com/en/blog/adguard-vpn-protocol-goes-open-source-meet-trusttunnel.html
47•kumrayu•7h ago•10 comments

Waiting for dawn in search: Search index, Google rulings and impact on Kagi

https://blog.kagi.com/waiting-dawn-search
209•josephwegner•6h ago•133 comments

Mystery of the Head Activator

https://www.asimov.press/p/head-activator
12•mailyk•3d ago•1 comment

Three types of LLM workloads and how to serve them

https://modal.com/llm-almanac/workloads
29•charles_irl•8h ago•1 comment

An explanation of cheating in Doom2 Deathmatch (1999)

https://www.doom2.net/doom2/cheating.html
16•Lammy•4d ago•1 comment

Setting Up a Cluster of Tiny PCs for Parallel Computing

https://www.kenkoonwong.com/blog/parallel-computing/
23•speckx•5h ago•11 comments

SIMD programming in pure Rust

https://kerkour.com/introduction-rust-simd
43•randomint64•2d ago•14 comments

Nested code fences in Markdown

https://susam.net/nested-code-fences.html
183•todsacerdoti•11h ago•61 comments

Can you slim macOS down?

https://eclecticlight.co/2026/01/21/can-you-slim-macos-down/
162•ingve•16h ago•201 comments

Tell HN: 2 years building a kids audio app as a solo dev – lessons learned

27•oliverjanssen•10h ago•24 comments

Scientists find a way to regrow cartilage in mice and human tissue samples

https://www.sciencedaily.com/releases/2026/01/260120000333.htm
242•saikatsg•6h ago•65 comments

Slouching Towards Bethlehem – Joan Didion (1967)

https://www.saturdayeveningpost.com/2017/06/didion/
53•jxmorris12•6h ago•4 comments

Open source server code for the BitCraft MMORPG

https://github.com/clockworklabs/BitCraftPublic
28•sfkgtbor•7h ago•9 comments

I finally got my sway layout to autostart the way I like it

https://hugues.betakappaphi.com/2026/01/19/sway-layout/
18•__hugues•15h ago•4 comments

Without benchmarking LLMs, you're likely overpaying

https://karllorey.com/posts/without-benchmarking-llms-youre-overpaying
132•lorey•1d ago•70 comments

JPEG XL Test Page

https://tildeweb.nl/~michiel/jxl/
158•roywashere•7h ago•109 comments

Show HN: TerabyteDeals – Compare storage prices by $/TB

https://terabytedeals.com
57•vektor888•3h ago•49 comments

TeraWave Satellite Communications Network

https://www.blueorigin.com/news/blue-origin-introduces-terawave-space-based-network-for-global-co...
113•T-A•5h ago•83 comments

Show HN: Grov – Multiplayer for AI coding agents

https://github.com/TonyStef/Grov
22•tonyystef•2h ago
Hi HN, I'm Tony.

I built Grov (https://grov.dev/) because I hit a wall with current AI coding assistants: they are "single-player." The moment I kill a terminal pane or close a chat session, the high-level reasoning and architectural decisions generated during that session are lost. If a teammate touches the same code an hour later, their agent has to re-derive everything from scratch or read through documentation files for basically every feature implemented or bug fixed.

I wanted to stop writing piles of docs just to give my agents context, and to stop re-explaining to them what a teammate did and why.

Grov is an open-source context layer that effectively gives your team's AI agents a shared, persistent memory.

Here is the technical approach:

1. Decision-grain memory, not document storage: When you sync a memory, Grov structures knowledge at the decision level. We capture the specific aspect (e.g., "Auth Strategy"), the choice made ("JWT"), and the reasoning ("Stateless for scaling"). Crucially, when your codebase evolves, we don't overwrite memories; we mark old decisions as superseded and link them to the new choice. This gives your team an audit trail of architectural evolution, not just the current snapshot (a rough sketch follows this list).

2. Git-like branches for memories: Teams experimenting with different approaches can create memory branches. Memories on a feature branch stay isolated until you are ready to merge. Access control mirrors Git: main is team-wide, while feature branches keep experimental noise contained. When you merge the branch, those accumulated insights become instantly available to everyone's agents (also sketched below).

3. Two-stage injection (Token Optimization): The expensive part of shared memory isn't storage; it's the context window. Loading 10 irrelevant memories wastes tokens and confuses the model. Grov uses a "Preview → Expand" strategy. Preview: a hybrid semantic/keyword search returns lightweight memory summaries (~100 tokens). Expand: the full reasoning traces (~500-1k tokens) are injected only if the agent explicitly requests more detail. This typically cuts tokens per session by 50-70% compared to raw context dumping (see the last sketch below).
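
To illustrate point 1, here is a rough TypeScript sketch of the idea; it is simplified and the names (DecisionMemory, supersede) are made up for illustration, not taken from the actual codebase:

    // Illustrative only: one memory per decision; old decisions are marked
    // superseded and linked forward instead of being overwritten.
    interface DecisionMemory {
      id: string;
      aspect: string;         // e.g. "Auth Strategy"
      choice: string;         // e.g. "JWT"
      reasoning: string;      // e.g. "Stateless for scaling"
      branch: string;         // memory branch this decision lives on
      supersededBy?: string;  // id of the decision that replaced this one
      createdAt: Date;
    }

    // Recording a new choice links the old one instead of deleting it,
    // preserving the audit trail of how the architecture evolved.
    function supersede(
      store: Map<string, DecisionMemory>,
      oldId: string,
      next: Omit<DecisionMemory, "id" | "createdAt" | "supersededBy">
    ): DecisionMemory {
      const replacement: DecisionMemory = {
        ...next,
        id: crypto.randomUUID(),
        createdAt: new Date(),
      };
      store.set(replacement.id, replacement);
      const old = store.get(oldId);
      if (old) old.supersededBy = replacement.id;
      return replacement;
    }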
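
For point 2, a similarly simplified sketch of the branch-visibility and merge idea (again, illustrative names rather than the real implementation): agents see main plus their own branch, and merging relabels a feature branch's memories as main.

    // Illustrative only: memories are tagged with a branch; agents on a feature
    // branch see main plus that branch, and merging promotes the branch to main.
    interface MemoryRecord {
      id: string;
      branch: string;   // "main" or a feature branch name
      summary: string;
    }

    // What an agent working on `branch` is allowed to retrieve.
    function visibleTo(all: MemoryRecord[], branch: string): MemoryRecord[] {
      return all.filter((m) => m.branch === "main" || m.branch === branch);
    }

    // Merging makes the branch's accumulated insights team-wide.
    function mergeBranch(all: MemoryRecord[], feature: string): void {
      for (const m of all) {
        if (m.branch === feature) m.branch = "main";
      }
    }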
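
And for point 3, a sketch of the preview/expand flow, with hypothetical interfaces standing in for the real retrieval layer:

    // Illustrative only: stage 1 injects ~100-token previews; stage 2 fetches a
    // full reasoning trace (~500-1k tokens) only when the agent asks for it.
    interface MemoryPreview {
      id: string;
      summary: string;  // aspect + choice, no full reasoning trace
    }

    interface MemoryStore {
      search(query: string, limit: number): Promise<MemoryPreview[]>;  // hybrid semantic/keyword
      expand(id: string): Promise<string>;                             // full reasoning trace
    }

    // Stage 1: cheap previews go into the agent's context up front.
    async function previewContext(store: MemoryStore, task: string): Promise<string> {
      const previews = await store.search(task, 5);
      return previews.map((p) => `[memory ${p.id}] ${p.summary}`).join("\n");
    }

    // Stage 2: the agent expands only the memories it actually needs,
    // e.g. via a tool call that resolves a memory id to its full trace.
    async function expandMemory(store: MemoryStore, id: string): Promise<string> {
      return store.expand(id);
    }

The token saving comes from the fact that most retrieved memories never get expanded; only the ones the agent actually cares about pay the full reasoning-trace cost.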

The result: Your teammate's agent doesn't waste 5 minutes re-exploring why you chose Postgres over Redis, or re-reading auth middleware. It just knows, because your agent already figured it out and shared it.

GitHub: https://github.com/TonyStef/Grov

Comments

dang•1h ago
[under-the-rug stub - see https://news.ycombinator.com/item?id=45988611 for explanation]

[guys, don't do this! HN will flame you for it and it will ruin your otherwise fine Show HN thread]

ambersahdev•2h ago
Do you deal with memory compaction yourself or let the models handle it?
tonyystef•1h ago
We let the models handle it; we don't compact for them.
dolevalgam•2h ago
I really need this with all the sessions open
davelradindra•2h ago
Very useful.
sintem•2h ago
dope. let me give it a go.
kristopolous•1h ago
byterover has been doing something similar for a while. amp was initially doing a variation of this and then pivoted. I built a similar tool about 9 months ago and then abandoned it.

The approach seems tempting, but there's something off about it that I think I might have figured out.

indigodaddy•1h ago
exe.dev has pretty much solved this with Shelley