Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
65•yi_wang•2h ago•23 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•57m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
69•momciloo•10h ago•13 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
35•mbitsnbites•3d ago•3 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
7•ujjwalreddyks•5d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
298•isitcontent•1d ago•39 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
366•eljojo•1d ago•218 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
374•vecti•1d ago•172 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
44•sandGorgon•2d ago•21 comments

Show HN: Craftplan – Elixir-based micro-ERP for small-scale manufacturers

https://puemos.github.io/craftplan/
16•deofoo•4d ago•4 comments

Show HN: Django-rclone: Database and media backups for Django, powered by rclone

https://github.com/kjnez/django-rclone
2•cui•4h ago•1 comment

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
98•antves•2d ago•70 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
86•phreda4•1d ago•17 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
2•davidcondrey•5h ago•2 comments

Show HN: Artifact Keeper – Open-Source Artifactory/Nexus Alternative in Rust

https://github.com/artifact-keeper
158•bsgeraci•1d ago•65 comments

Show HN: BioTradingArena – Benchmark for LLMs to predict biotech stock movements

https://www.biotradingarena.com/hn
30•dchu17•1d ago•13 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
2•latentio•7h ago•0 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
55•nwparker•2d ago•12 comments

Show HN: More beautiful and usable Hacker News

https://twitter.com/shivamhwp/status/2020125417995436090
4•shivamhwp•2h ago•1 comment

Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp

https://github.com/rivet-dev/sandbox-agent/tree/main/gigacode
23•NathanFlurry•1d ago•11 comments

Show HN: I built a <400ms latency voice agent that runs on a 4gb vram GTX 1650

https://github.com/pheonix-delta/axiom-voice-agent
2•shubham-coder•9h ago•1 comment

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
18•denuoweb•2d ago•2 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
3•Keyframe•10h ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
3•xeouz•10h ago•1 comment

Show HN: Micropolis/SimCity Clone in Emacs Lisp

https://github.com/vkazanov/elcity
173•vkazanov•2d ago•49 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
2•ivanglpz•12h ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
3•anipaleja•12h ago•0 comments

Show HN: Daily-updated database of malicious browser extensions

https://github.com/toborrm9/malicious_extension_sentry
14•toborrm9•1d ago•8 comments

Show HN: Horizons – OSS agent execution engine

https://github.com/synth-laboratories/Horizons
27•JoshPurtell•2d ago•5 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
3•sam256•14h ago•1 comment

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
65•yi_wang•2h ago
I built LocalGPT over 4 nights as a Rust reimagining of the OpenClaw assistant pattern (markdown-based persistent memory, autonomous heartbeat tasks, skills system).

It compiles to a single ~27MB binary — no Node.js, Docker, or Python required.

Key features:

- Persistent memory via markdown files (MEMORY.md, HEARTBEAT.md, SOUL.md) — compatible with OpenClaw's format
- Full-text search (SQLite FTS5) + semantic search (local embeddings, no API key needed); see the sketch after this list
- Autonomous heartbeat runner that checks tasks on a configurable interval
- CLI + web interface + desktop GUI
- Multi-provider: Anthropic, OpenAI, Ollama, etc.
- Apache 2.0 licensed
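
To give a rough idea of how the search side fits together, here is a simplified Rust sketch of the FTS5 + embedding re-ranking idea (illustrative only; the table names, schema, and rusqlite usage here are made up, not the exact code in the repo):

    use rusqlite::{Connection, Result};

    /// Cosine similarity between two embedding vectors.
    fn cosine(a: &[f32], b: &[f32]) -> f32 {
        let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
        let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
        let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
        if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
    }

    /// Hybrid search: FTS5 narrows candidates, local embeddings re-rank them.
    /// `notes` (an FTS5 table) and `note_embeddings` are a hypothetical schema.
    fn search(conn: &Connection, query: &str, query_emb: &[f32]) -> Result<Vec<(String, f32)>> {
        let mut stmt = conn.prepare(
            "SELECT notes.body, note_embeddings.vector
             FROM notes
             JOIN note_embeddings ON note_embeddings.rowid = notes.rowid
             WHERE notes MATCH ?1
             LIMIT 50",
        )?;
        let mut hits: Vec<(String, f32)> = stmt
            .query_map([query], |row| {
                let body: String = row.get(0)?;
                let blob: Vec<u8> = row.get(1)?; // f32 vector stored as little-endian bytes
                let emb: Vec<f32> = blob
                    .chunks_exact(4)
                    .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
                    .collect();
                Ok((body, cosine(query_emb, &emb)))
            })?
            .collect::<Result<_>>()?;
        hits.sort_by(|a, b| b.1.total_cmp(&a.1)); // best semantic match first
        Ok(hits)
    }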

Install: `cargo install localgpt`

I use it daily as a knowledge accumulator, research assistant, and autonomous task runner for my side projects. The memory compounds — every session makes the next one better.

GitHub: https://github.com/localgpt-app/localgpt
Website: https://localgpt.app

Would love feedback on the architecture or feature ideas.

Comments

ramon156•1h ago
Pro tip (sorry if these comments are overdone), write your posts and docs yourself (or at least edit them).

Your docs and this post are all written by an LLM, which doesn't reflect much effort.

bakugo•1h ago
> which doesn't reflect much effort.

I wish this was an effective deterrent against posting low effort slop, but it isn't. Vibe coders are actively proud of the fact that they don't put any effort into the things they claim to have created.

problynought•46m ago
EE with decades of experience here.

Started my career designing/prototyping expansion cards for telecom blade boards.

I have built homes from foundation up, rebuilt cars from frame up.

I have contributed artisanal code from hardware drivers to GUI.

I have experience with hard things. SWE is not hard, just time-consuming.

Always a laugh when some non-contributor office worker exploiting sweatshop labor calls others low-effort.

For years SWEs have been so many layers of abstraction above the machine, SWE has been basic bitch data entry.

You got used as a political prop/pawn during ZIRP/money printer go brrr era. Womp womp.

Will keep training/leveraging LLMs and look forward to future hardware that further reduces the need to subject myself to useless middle-man SWEs.

It's just electrons in a machine. None of the gibberish SaaS conjured up matters.

0_____0•28m ago
EE with decades of experience here. You have valid points (SWE tedium, LLMs allowing adjacent technical fields to access SW/FW work without involving SWEs) that are completely lost because you're being an asshole for no good reason.
g0h0m3•33m ago
Github repo that is nothing but forks of others' projects and some 4chan utilities.

Professional codependent leveraging anonymity to target others. The internet is a mediocrity factory.

Szpadel•8m ago
counterargument: I always hated writing docs, so most of what I built at my day job didn't have any, and that made it more difficult for others to use.

I was also burnt many times where some software docs said one thing and after many hours of debugging I found out that code does something different.

LLMs are so good at creating decent descriptions and keeping them up to date that I believe docs are the number one thing to use them for. Yes, you can tell a human didn't write them, so what? If they are correct, I see no issue at all.

theParadox42•1h ago
I am excited to see more competitors in this space. OpenClaw feels like a hot mess with poor abstractions. I got bitten by a race condition over the past 36 hours that skipped all of my cron jobs, as did many others, before it got fixed. The CLI is also painfully slow for no reason other than it was vibe coded in TypeScript. And the error messages are poor and hidden, the TUIs are broken… and the CLI has bad path conventions. All I really want is a nice way to authenticate between various APIs and then let the agent build and manage the rest of its own infrastructure.
dvt•1h ago
So weird/cool/interesting/cyberpunk that we have stuff like this in the year of our Lord 2026:

   ├── MEMORY.md            # Long-term knowledge (auto-loaded each session)
   ├── HEARTBEAT.md         # Autonomous task queue
   ├── SOUL.md              # Personality and behavioral guidance
Say what you will, but AI really does feel like living in the future. As far as the project is concerned, pretty neat, but I'm not really sure about calling it "local-first" as it's still reliant on an `ANTHROPIC_API_KEY`.

I do think that local-first will end up being the future long-term though. I built something similar last year (unreleased) also in Rust, but it was also running the model locally (you can see how slow/fast it is here[1], keeping in mind I have a 3080Ti and was running Mistral-Instruct).

I need to re-visit this project and release it, but building in the context of the OS is pretty mindblowing, so kudos to you. I think that the paradigm of how we interact with our devices will fundamentally shift in the next 5-10 years.

[1] https://www.youtube.com/watch?v=tRrKQl0kzvQ

atmanactive•53m ago
> but I'm not really sure about calling it "local-first" as it's still reliant on an `ANTHROPIC_API_KEY`.

See here:

https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...

halJordan•52m ago
You absolutely do not have to use a third party llm. You can point it to any openai/anthropic compatible endpoint. It can even be on localhost.
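
For example, a plain OpenAI-style chat completion against a local Ollama instance is just one HTTP call. Rough sketch (the port, model name, and crate choices are assumptions about a typical local setup, not LocalGPT internals):

    use serde_json::json;

    // Requires reqwest (with the "blocking" and "json" features) and serde_json.
    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Ollama serves an OpenAI-compatible API on localhost:11434 by default.
        let body = json!({
            "model": "llama3.2", // whatever model you have pulled locally
            "messages": [{ "role": "user", "content": "Say hello from localhost." }]
        });
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post("http://localhost:11434/v1/chat/completions")
            .json(&body)
            .send()?
            .json()?;
        println!("{}", resp["choices"][0]["message"]["content"]);
        Ok(())
    }
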
dvt•43m ago
Ah true, missed that! Still a bit cumbersome & lazy imo, I'm a fan of just shipping with that capability out-of-the-box (Huggingface's Candle is fantastic for downloading/syncing/running models locally).
embedding-shape•33m ago
Ah come on, lazy? As long as it works with the runtime you wanna use, instead of hardcoding their own solution, it should work fine. If you want to use Candle and have to implement new architectures with it to be able to use it, you still can; just expose it over HTTP.
AndrewKemendo•1h ago
Properly local too with the llama and onnx format models available! Awesome

I assume I could just adjust the TOML to point to a locally hosted DeepSeek API, right?

applesauce004•57m ago
Can someone explain to me why this needs to connect to LLM providers like OpenAI or Anthropic? I thought it was meant to be a local GPT. Sorry if I misunderstood what this project is trying to do.

Does this mean the inference is remote and only context is local?

halJordan•51m ago
It doesn't need to
vgb2k18•48m ago
If local isn't configured, then it falls back to online providers:

https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...

atmanactive•47m ago
It doesn't. It has to connect to SOME LLM provider, but that CAN also be a local Ollama server (running instance). The choice ALWAYS needs to be present since, depending on your use case, Ollama (a local-machine LLM) could be just right, or it could be completely unusable, in which case you can always switch to data-center-scale LLMs.

The README gives only an Anthropic example, but, judging by the source code [1], you can use other providers, including Ollama, just by changing that one config file line.

[1] https://github.com/localgpt-app/localgpt/blob/main/src%2Fage...

dalemhurley•46m ago
I'm playing with Apple Foundation Models.
dpweb•40m ago
Made a quick bot app (OC clone). For me I just want to iMessage it - but do not want to give Full Disk rights to terminal (to read the imessage db).

Uses MLX for the local LLM on Apple Silicon. Performance has been pretty good for a base-spec M4 mini.

Nor do I want to install little apps when I don't know what they're doing, reading my chat history and Mac system folders.

What I did was create a shortcut on my iPhone to write iMessages to an iCloud file, which syncs to my Mac mini quickly, and a script loop on the mini processes my messages. It works.
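
The loop on the mini can be as simple as polling the synced file for new lines. A rough Rust sketch of the idea (placeholder path, not the actual script):

    use std::{fs, thread, time::Duration};

    fn main() {
        // Placeholder path to the iCloud-synced file the iPhone shortcut appends to.
        let path = "/Users/me/Library/Mobile Documents/com~apple~CloudDocs/imessage-inbox.txt";
        let mut seen = 0; // lines already handled

        loop {
            if let Ok(text) = fs::read_to_string(path) {
                for line in text.lines().skip(seen) {
                    // Hand each new message to the local model / bot here.
                    println!("new message: {line}");
                }
                seen = text.lines().count();
            }
            thread::sleep(Duration::from_secs(5)); // poll every few seconds
        }
    }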

Wonder if others have ideas so I can iMessage the bot; I'm in iMessage and don't really want to use another app.

bravura•4m ago
Beeper API
mraza007•20m ago
I love how you used SQLite (FTS5 + sqlite-vec)

It's fast and amazing for generating embeddings and doing lookups.

thcuk•15m ago
"cargo install localgpt" fails to build under Linux Mint. Git clone and change Cargo.toml by adding "x11":

    # Desktop GUI
    eframe = { version = "0.30", default-features = false, features = [
        "default_fonts",
        "glow",
        "persistence",
        "x11",
    ] }

Then cargo build --release succeeds. I am not a Rust programmer.

DetroitThrow•12m ago
It doesn't build for me unfortunately. I'm using Ubuntu Linux, nothing special.