LocalCloud runs your entire AI development stack locally:

- LLMs (Llama 3.2, Qwen 2.5, Mistral)
- PostgreSQL with pgvector
- Redis cache
- S3-compatible storage
One command (`lc start`) and everything's running. Works great with Claude Code, Cursor, and other AI assistants.
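To give a feel for the workflow, here's a minimal Go sketch of querying the local pgvector instance once the stack is up. The connection string, `documents` table, and `embedding` column are placeholders, not LocalCloud's actual defaults; use whatever `lc start` reports for your setup.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// Hypothetical credentials -- substitute the ones your local stack uses.
	db, err := sql.Open("postgres",
		"postgres://localcloud:localcloud@localhost:5432/localcloud?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pgvector's <-> operator does L2 nearest-neighbor search.
	var id int
	err = db.QueryRow(
		`SELECT id FROM documents ORDER BY embedding <-> $1::vector LIMIT 1`,
		"[0.1,0.2,0.3]",
	).Scan(&id)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nearest document:", id)
}
```

Because everything speaks standard protocols (Postgres wire, Redis, S3 API), existing client libraries work unchanged.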
We've been dogfooding it for months - we built everything from chatbots to code analysis tools without paying for API calls during development.
Tech: Go, Docker, designed to run on 4GB RAM.
Would love feedback from the HN community! What would you build if AI development had no cost barriers?