
Mistral OCR 3

https://mistral.ai/news/mistral-ocr-3
109•pember•1d ago•10 comments

TP-Link Tapo C200: Hardcoded Keys, Buffer Overflows and Privacy

https://www.evilsocket.net/2025/12/18/TP-Link-Tapo-C200-Hardcoded-Keys-Buffer-Overflows-and-Priva...
135•sibellavia•2h ago•31 comments

Garage – An S3 object store so reliable you can run it outside datacenters

https://garagehq.deuxfleurs.fr/
316•ibobev•5h ago•66 comments

GotaTun – Mullvad's WireGuard Implementation in Rust

https://mullvad.net/en/blog/announcing-gotatun-the-future-of-wireguard-at-mullvad-vpn
472•km•9h ago•99 comments

Amazon will allow ePub and PDF downloads for DRM-free eBooks

https://www.kdpcommunity.com/s/article/New-eBook-Download-Options-for-Readers-Coming-in-2026?lang...
449•captn3m0•10h ago•241 comments

Performance Hints – Jeff Dean and Sanjay Ghemawat

https://abseil.io/fast/hints.html
33•alphabetting•2h ago•4 comments

Show HN: Stickerbox, a kid-safe, AI-powered voice to sticker printer

https://stickerbox.com/
15•spydertennis•1h ago•11 comments

Brown/MIT shooting suspect found dead, officials say

https://www.washingtonpost.com/nation/2025/12/18/brown-university-shooting-person-of-interest/
21•anigbrowl•17h ago•11 comments

8-bit Boléro

https://linusakesson.net/music/bolero/index.php
28•Aissen•9h ago•5 comments

Believe the Checkbook

https://robertgreiner.com/believe-the-checkbook/
76•rg81•5h ago•30 comments

Reverse Engineering US Airline's PNR System and Accessing All Reservations

https://alexschapiro.com/security/vulnerability/2025/11/20/avelo-airline-reservation-api-vulnerab...
59•bearsyankees•2h ago•25 comments

The FreeBSD Foundation's Laptop Support and Usability Project

https://github.com/FreeBSDFoundation/proj-laptop
103•mikece•6h ago•40 comments

Show HN: TinyPDF – 3kb pdf library (70x smaller than jsPDF)

https://github.com/Lulzx/tinypdf
30•lulzx•1d ago•4 comments

Graphite Is Joining Cursor

https://cursor.com/blog/graphite
102•fosterfriends•5h ago•134 comments

Rust's Block Pattern

https://notgull.net/block-pattern/
36•zdw•16h ago•10 comments

Detailed balance in large language model-driven agents

https://arxiv.org/abs/2512.10047
8•Anon84•3d ago•0 comments

The pitfalls of partitioning Postgres yourself

https://hatchet.run/blog/postgres-partitioning
13•abelanger•3d ago•0 comments

Ask HN: How are you LLM-coding in an established code base?

12•adam_gyroscope•3d ago•7 comments

You can now play Grand Theft Auto Vice City in the browser

https://dos.zone/grand-theft-auto-vice-city/
163•Alifatisk•1h ago•42 comments

Lite^3, a JSON-compatible zero-copy serialization format

https://github.com/fastserial/lite3
91•cryptonector•6d ago•27 comments

NOAA deploys new generation of AI-driven global weather models

https://www.noaa.gov/news-release/noaa-deploys-new-generation-of-ai-driven-global-weather-models
18•hnburnsy•1d ago•2 comments

Show HN: I Made Loom for Mobile

https://demoscope.app
40•admtal•3h ago•27 comments

Wall Street Ruined the Roomba and Then Blamed Lina Khan

https://www.thebignewsletter.com/p/how-wall-street-ruined-the-roomba
98•connor11528•2h ago•57 comments

Building a Transparent Keyserver

https://words.filippo.io/keyserver-tlog/
44•noident•6h ago•16 comments

Show HN: MCPShark Viewer (VS Code/Cursor extension)- view MCP traffic in-editor

19•mywork-dev•2d ago•0 comments

Ask HN: Who here is not working on web apps/server code?

26•ex-aws-dude•1d ago•13 comments

1.5 TB of VRAM on Mac Studio – RDMA over Thunderbolt 5

https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-studio-rdma-over-thunderbolt-5
567•rbanffy•22h ago•209 comments

Prompt caching for cheaper LLM tokens

https://ngrok.com/blog/prompt-caching/
244•samwho•3d ago•59 comments

History LLMs: Models trained exclusively on pre-1913 texts

https://github.com/DGoettlich/history-llms
711•iamwil•22h ago•349 comments

Show HN: Stepped Actions – distributed workflow orchestration for Rails

https://github.com/envirobly/stepped
72•klevo•5d ago•10 comments

Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)

https://github.com/linggen/linggen
16•linggen•3h ago
Hi HN,

Working with multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.

My workflow: I use the Linggen VS Code extension to "init my day." One click calls the Linggen MCP server, which loads the indexed memory and the full architectural context instantly, removing the "cold start" problem.

The Tech:

Local-First: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required.

Team Memory: Index knowledge so teammates' LLMs get context automatically.

Visual Map: See file dependencies and refactor "blast radius."

MCP-Native: Supports Cursor, Zed, and Claude Desktop.
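
For illustration, registering a local MCP server in Claude Desktop's `claude_desktop_config.json` typically looks like the sketch below. The `linggen` command name and `serve-mcp` argument are assumptions for the example, not taken from the repo; check the project's README for the actual invocation.

```json
{
  "mcpServers": {
    "linggen": {
      "command": "linggen",
      "args": ["serve-mcp"]
    }
  }
}
```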

Linggen saves me hours. I’d love to hear how you manage complex system context!

Repo: https://github.com/linggen/linggen

Website: https://linggen.dev

Comments

linggen•2h ago
Hi HN, I’m the author.

Linggen is a local-first memory layer that gives AI persistent context across repos, docs, and time. It integrates with Cursor / Zed via MCP and keeps everything on-device.

I built this because I kept re-explaining the same context to AI across multiple projects. Happy to answer any questions.

Y_Y•1h ago
How can it stay on your device if you use Claude?
linggen•1h ago
Good question. Linggen itself always runs locally.

When using Claude Desktop, it connects to Linggen via a local MCP server (localhost), so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn’t push your data to the cloud.

Claude’s web UI doesn’t support local MCP today — if it ever does, it would just be a localhost URL.

ithkuil•8m ago
Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on its servers.
linggen•2m ago
Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.

The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.

gostsamo•1h ago
How is it better than keeping project documentation and telling the agent to load the necessary parts? Does it compress the info somehow or help with context management?
linggen•1h ago
Compared to plain docs, Linggen indexes project knowledge into a vector store that the LLM can query directly.

The key difference is that it works across projects. While working on project A, I can ask: “How does project B send messages?” and have that context retrieved and applied, without manually opening or loading docs.
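
The cross-project retrieval described above can be sketched with a toy in-memory vector store. This illustrates the general approach, not Linggen's actual code (which is Rust + LanceDB); the character-frequency "embedding" is a deliberately crude stand-in for a real embedding model.

```python
import math

def embed(text):
    # Toy embedding: normalized letter-frequency vector (stand-in for a model).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class LocalMemory:
    """All indexed text stays in this process; only retrieved slices leave."""

    def __init__(self):
        self.docs = []

    def index(self, text):
        self.docs.append((text, embed(text)))

    def retrieve(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Knowledge from several projects lands in one local store:
mem = LocalMemory()
mem.index("project B sends messages over a Redis queue")
mem.index("project A exposes a REST API")
mem.index("deployment uses Kubernetes manifests")

# While working on project A, ask about project B; only the top-k
# slices would be handed to the LLM via MCP:
print(mem.retrieve("how does project B send messages?", k=1))
```

A real implementation would swap in model-generated embeddings and an on-disk vector index, but the scoping property is the same: the store holds everything locally, and the LLM only ever sees the slices that retrieval returns.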