
Show HN: A luma-dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
26•mbitsnbites•3d ago•2 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
48•momciloo•7h ago•9 comments

Show HN: Browser-based state machine simulator and visualizer

https://svylabs.github.io/smac-viz/
8•sridhar87•4d ago•3 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•1h ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
294•isitcontent•1d ago•39 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
44•sandGorgon•2d ago•20 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
362•eljojo•1d ago•218 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
374•vecti•1d ago•171 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
2•davidcondrey•2h ago•1 comment

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
2•latentio•3h ago•0 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
97•antves•2d ago•70 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
85•phreda4•1d ago•17 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
156•bsgeraci•1d ago•65 comments

Show HN: I built a <400ms latency voice agent that runs on a 4GB VRAM GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
2•shubham-coder•6h ago•1 comment

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
29•dchu17•1d ago•12 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
55•nwparker•2d ago•12 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•7h ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
3•xeouz•7h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
23•NathanFlurry•1d ago•11 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
3•anipaleja•9h ago•0 comments

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•11h ago•1 comment

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
9•sakanakana00•12h ago•2 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
27•JoshPurtell•1d ago•5 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•12h ago•1 comment

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: Falcon's Eye (isometric NetHack) running in the browser via WebAssembly

https://rahuljaguste.github.io/Nethack_Falcons_Eye/
7•rahuljaguste•1d ago•1 comment

Show HN: Local task classifier and dispatcher on RTX 3080

https://github.com/resilientworkflowsentinel/resilient-workflow-sentinel
25•Shubham_Amb•2d ago•2 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
22•keepamovin•17h ago•6 comments

Show HN: Linggen – A local-first memory layer for your AI (Cursor, Zed, Claude)

https://github.com/linggen/linggen
36•linggen•1mo ago
Hi HN,

Working with multiple projects, I got tired of re-explaining our complex multi-node system to LLMs. Documentation helped, but plain text is hard to search without indexing and doesn't work across projects. I built Linggen to solve this.

My workflow: I use the Linggen VS Code extension to "init my day." It calls the Linggen MCP server to load memory instantly. Linggen indexes all my docs and recalls them on demand, so one click loads the full architectural context and removes the "cold start" problem.

The Tech:

Local-First: Rust + LanceDB. Code and embeddings stay on your machine. No accounts required. (See the sketch below.)

Team Memory: Index knowledge so teammates' LLMs get context automatically.

Visual Map: See file dependencies and refactor "blast radius."

MCP-Native: Supports Cursor, Zed, and Claude Desktop.
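
To make the indexing idea concrete, here is a minimal sketch using LanceDB's Python API (Linggen itself is Rust; the schema, the on-disk path, and the embed() helper are illustrative assumptions, with a hash-based stand-in for a real local embedding model):

```python
import hashlib
import os

import lancedb


def embed(text: str) -> list[float]:
    # Stand-in for a local embedding model: hash the text into a small
    # deterministic vector so the example runs end to end.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:16]]


db = lancedb.connect(os.path.expanduser("~/.linggen/db"))  # stays on disk
table = db.create_table(
    "memory",
    data=[{
        "vector": embed("Node A publishes events to the message bus."),
        "text": "Node A publishes events to the message bus.",
        "project": "project-a",
        "path": "docs/architecture.md",
    }],
    mode="overwrite",
)

# Retrieval: top matches for a question, entirely local.
hits = table.search(embed("How do nodes communicate?")).limit(3).to_list()
```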

Linggen saves me hours. I’d love to hear how you manage complex system context!

Repo: https://github.com/linggen/linggen
Website: https://linggen.dev

Comments

linggen•1mo ago
Hi HN, I’m the author.

Linggen is a local-first memory layer that gives AI persistent context across repos, docs, and time. It integrates with Cursor / Zed via MCP and keeps everything on-device.

I built this because I kept re-explaining the same context to AI across multiple projects. Happy to answer any questions.

Y_Y•1mo ago
How can it stay on your device if you use Claude?
linggen•1mo ago
Good question. Linggen itself always runs locally.

When using Claude Desktop, it connects to Linggen via a local MCP server (localhost), so indexing and memory stay on-device. The LLM can query that local context, but Linggen doesn’t push your data to the cloud.

Claude’s web UI doesn’t support local MCP today — if it ever does, it would just be a localhost URL.
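
For readers unfamiliar with the plumbing, a local MCP server of this kind can be sketched in a few lines with the official MCP Python SDK (Linggen's actual server is Rust; the tool name recall and the retrieval stub are assumptions for illustration). The client launches the process locally and talks to it over a local transport, so only what a tool returns ever reaches the model:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("linggen")


def search_local_index(query: str, limit: int) -> list[dict]:
    # Stand-in for Linggen's real retrieval (see the LanceDB sketch above).
    return [{"text": f"demo hit for: {query}"}][:limit]


@mcp.tool()
def recall(query: str, limit: int = 5) -> list[str]:
    """Return the top matching memory slices for a query.

    Only this return value is exposed to the LLM; the index itself
    never leaves the machine.
    """
    return [hit["text"] for hit in search_local_index(query, limit)]


if __name__ == "__main__":
    mcp.run()  # stdio transport: the client spawns this process locally
```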

ithkuil•1mo ago
Of course, parts of the context (as decided by the MCP server, based on the context, no pun intended) are returned to Claude, which processes them on its servers.
linggen•1mo ago
Yes, that’s correct — the model only sees the retrieved slices that the MCP server explicitly returns, similar to pasting selected context into a prompt.

The distinction I’m trying to make is that Linggen itself doesn’t sync or store project data in the cloud; retrieval and indexing stay local, and exposure to the LLM is scoped and intentional.

Y_Y•1mo ago
That's fine, but it's a very different claim to the one you made at first.

In particular, I don't know which parts of my data might get sent to Claude, so even if I hope it's only a small fraction, anything could in principle be transmitted.

linggen•1mo ago
That’s true — Linggen can’t control the behavior of Claude or any other cloud LLM.

What it can control is the retrieval boundary: what gets selected locally and exposed to the model. If nothing is returned, nothing is sent.

If a strict zero-exfiltration setup is required, then a fully local model would indeed be the right option.
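
Continuing the LanceDB sketch from the post above, that retrieval boundary can be as simple as a similarity cutoff; an empty result means nothing is sent upstream (MAX_DISTANCE is an assumed tuning knob, not a Linggen setting):

```python
MAX_DISTANCE = 0.5  # assumed cutoff; tune for your embedding model


def scoped_context(query: str) -> list[str]:
    # LanceDB attaches a _distance column to vector search results;
    # drop anything that is not close enough to the query.
    hits = table.search(embed(query)).limit(5).to_list()
    return [h["text"] for h in hits if h["_distance"] <= MAX_DISTANCE]
```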

linggen•1mo ago
I do have a local model path (Qwen3-4B) for testing.

The tradeoff is simply model quality vs locality, which is why Linggen focuses on controlling retrieval rather than claiming zero data ever leaves the device. Using a local LLM is straightforward if that’s the requirement.

gostsamo•1mo ago
How is it better than keeping project documentation and telling the agent to load the necessary parts? Does it compress the info somehow or help with context management?
linggen•1mo ago
Compared to plain docs, Linggen indexes project knowledge into a vector store that the LLM can query directly.

The key difference is that it works across projects. While working on project A, I can ask: “How does project B send messages?” and have that context retrieved and applied, without manually opening or loading docs.
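
As a concrete illustration, and again continuing the earlier LanceDB sketch, a metadata filter scopes the search to another project's indexed docs (the project field name is an assumption from that sketch):

```python
question = "How does project B send messages?"
hits = (
    table.search(embed(question))
    .where("project = 'project-b'")  # SQL-style metadata filter
    .limit(5)
    .to_list()
)
for hit in hits:
    print(hit["path"], "->", hit["text"])
```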