It's a self-hosted playground that runs entirely in Docker:

- Local LLM inference (Qwen2.5 3B via Ollama)
- Share prompts via local URLs
- Fork and remix prompts
- Zero API costs
- Markdown rendering with syntax highlighting
Everything is local: your prompts never leave your machine. A single `docker-compose up` and you're running.
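For a rough idea of the setup, here's a minimal sketch of what a compose file for a stack like this could look like. The service names, app port, and build context are assumptions for illustration, not the project's actual file; the Ollama image, its default API port (11434), and its model storage path are standard.

```yaml
services:
  ollama:
    image: ollama/ollama          # local inference server
    ports:
      - "11434:11434"             # default Ollama API port
    volumes:
      - ollama_data:/root/.ollama # persist pulled models (e.g. qwen2.5:3b)
  app:
    build: .                      # the playground web app (hypothetical service)
    ports:
      - "3000:3000"               # assumed web UI port
    depends_on:
      - ollama
volumes:
  ollama_data:
```

With a layout like this, the app talks to Ollama over the internal Docker network and nothing ever leaves the host.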
This is my first real open source project, so feedback is very welcome!