I’ve been working on CodexLocal — a privacy-first, offline AI coding assistant that runs entirely in your browser (no servers, no tracking, no data sent anywhere).
It’s built with WebLLM and WebGPU, and supports RAG (retrieval-augmented generation) so it can be context-aware — even without internet access. Think of it as a local ChatGPT-style coding tutor, but one that never leaves your machine.
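For anyone curious what "runs entirely in your browser" looks like at the code level, here is a minimal sketch of in-browser inference using WebLLM's OpenAI-style chat API. The model ID, options, and prompts are illustrative only, not CodexLocal's actual configuration:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function demo() {
  // Download (or load from the browser cache) a quantized model and compile it for WebGPU.
  // The model ID is an example from WebLLM's prebuilt list, not necessarily what CodexLocal ships.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // WebLLM exposes an OpenAI-compatible chat API; inference stays entirely on the local GPU.
  const reply = await engine.chat.completions.create({
    messages: [
      { role: "system", content: "You are a concise coding tutor." },
      { role: "user", content: "Explain JavaScript closures with a short example." },
    ],
  });

  console.log(reply.choices[0]?.message.content);
}

demo();
```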
Why I built it
-----------------
Most AI coding assistants today are cloud-based — great for convenience, but not ideal for privacy-sensitive or educational settings. Bootcamps, schools, and dev teams often want to use AI without sending code or student data to third-party servers. CodexLocal aims to fill that gap.
Current MVP features
-----------------
- Works fully offline in your browser (WebLLM + WebGPU)
- Context memory via local RAG (see the sketch after this list)
- No login, tracking, or external API calls
- Works on Chrome, Edge, and soon Safari
- Light and dark themes
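On the RAG side, a fully local pipeline can be as simple as embedding snippets in the browser, keeping the vectors in local storage (e.g. IndexedDB), and ranking them by cosine similarity at question time. The sketch below is an assumption about how such retrieval could look, not CodexLocal's actual implementation; the embedding step is left as a placeholder:

```ts
// Minimal in-browser retrieval step for a local RAG pipeline (illustrative only).
// Query and document vectors are assumed to come from a small embedding model
// running locally; the chunk store could live in IndexedDB so nothing leaves the machine.
type Chunk = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Rank locally stored chunks against the question and return the top-k texts
// to prepend to the prompt before it goes to the in-browser model.
function retrieve(queryVector: number[], store: Chunk[], k = 3): string[] {
  return store
    .map((chunk) => ({ chunk, score: cosine(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((entry) => entry.chunk.text);
}
```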
What’s next
-----------------
- File uploads for RAG context
- Offline model caching
- Classroom/enterprise deployment (commercial tier)
- NPM package / SDK for integrating into private dev environments
The ask
-----------------
Would love feedback on:
- Performance and model responsiveness in your browser
- Whether this kind of privacy-first setup would fit your org or classroom
- Any ideas for improving UX / RAG relevance
Try it: https://codexlocal.com
Demo video: https://www.youtube.com/watch?v=rnDmwW2x16s&feature=youtu.be

Free for personal use, with a commercial tier planned for enterprise deployments. Thanks for reading — feedback very welcome!