I spent a while building N.E.O. (Native Executable Orchestrator) – an AI-powered tool that turns natural language prompts into compiled, live, running .NET desktop applications. Think of it like ChatGPT Canvas or Claude Artifacts, but for native Windows apps. Not mockups or web previews — actual (optionally sandboxed) binaries.
My core philosophy for this was frictionless setup. The tool requires absolutely no SDKs. All it needs is the standard .NET runtime: no .NET SDK, no Node.js, no manual environment configuration. If your prompt requires Python, N.E.O. automatically downloads and sets up the necessary environment under the hood. From a user's perspective: download the installer, set your API keys, type a prompt, and it works.
I'm a teacher, not a Silicon Valley startup founder — I started this as a fun side project. At some point, I tried to make this a commercial product, but I suck at marketing and it didn't work out. :) But before I let the software collect dust, I wanted to ask HN: is this worth polishing and open-sourcing?
What N.E.O. does:

- Instant Native UI: You type a prompt like "build me a dashboard with live charts" and the tool generates a running native desktop app (WPF or Avalonia) right before your eyes.

- Natural Language Iteration: Describe changes in plain language. The AI applies diffs, recompiles (without an SDK!), and the app updates instantly.

- Hybrid Stack Capabilities: You can mix native (C#), web (React), and Python in a single app. For example: "Build a stock analyzer with a React-based interactive chart, and use Python for the statistical calculations" — N.E.O. will embed the web components and Python environment behind a native UI.
Some technical details that might interest you:

- SDK-less Compilation: Compiles C# on the fly without the .NET SDK. NuGet and Python dependencies are fetched automatically on demand.

- Self-Healing: Errors are fed back to the AI automatically. Even runtime crashes trigger automatic repair loops.

- Visual Click-to-Edit: Click any control in the running app (WPF only), and a basic property editor appears.

- Security & Sandboxing: Dual-layer security (14 heuristic rules + AI semantic risk assessment) before applying generated code. Apps can also be run in an optional Windows AppContainer sandbox to restrict network and file access.

- Branching Undo/Redo: The history is not a linear stack, but a tree. Explore different design directions and easily revert to any branch.

- Cross-Platform Export: Export native WPF apps for Windows, or generate Avalonia UIs that can be exported for Windows, Linux, and macOS.

- Apps Can Call AI: The resulting applications can themselves use Claude/GPT/Gemini/Ollama/LM Studio at runtime via an automatically exposed API.
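For readers curious how compiling C# without the SDK is possible at all: the Roslyn compiler is just a library you can bundle with your app. Here's a minimal sketch of that idea, assuming the Microsoft.CodeAnalysis.CSharp NuGet package is shipped alongside the runtime — the names here are illustrative, not N.E.O.'s actual implementation:

```csharp
// Minimal sketch: compiling and running C# at runtime with Roslyn,
// needing only the .NET runtime plus the bundled Roslyn assemblies.
using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class RuntimeCompiler
{
    static void Main()
    {
        var source = "public static class App { public static string Greet() => \"hello\"; }";
        var tree = CSharpSyntaxTree.ParseText(source);

        // Reference core runtime assemblies already loaded in this process.
        var refs = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location),
            MetadataReference.CreateFromFile(Assembly.Load("System.Runtime").Location),
        };

        var compilation = CSharpCompilation.Create(
            "GeneratedApp",
            new[] { tree },
            refs,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using var ms = new MemoryStream();
        var result = compilation.Emit(ms);
        if (!result.Success)
        {
            // In a self-healing loop, these diagnostics would be
            // fed back to the model for an automatic repair attempt.
            foreach (var d in result.Diagnostics) Console.Error.WriteLine(d);
            return;
        }

        // Load the emitted assembly into the current process and invoke it.
        var asm = Assembly.Load(ms.ToArray());
        var greet = asm.GetType("App")!.GetMethod("Greet")!;
        Console.WriteLine(greet.Invoke(null, null));
    }
}
```

The same Emit call can target a file instead of a memory stream, which is how a generated app could be persisted as a standalone binary.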
What I'm honestly wondering:
Desktop app builders feel like an abandoned category. Everyone is building web-first AI code generators. But aren't there still domains where desktop wins — enterprise internal tools, offline apps, hardware access, kiosk systems? I teach C# and WPF for a living, and one of my main motivations for building N.E.O. was to stress-test the entire C# toolchain: where exactly does an AI-driven native app generator hit a wall due to missing APIs, tooling, or framework support?
Is this worth polishing and open-sourcing as a community project, or should I just move on and try to perfect my brownie recipe?
The code is messy and far from perfect, but the core architecture works. Happy to answer any questions about the implementation!