I think the root problem isn’t the model. It’s that architecture is implicit, disposable, and not machine-readable.
The bigger the codebase, the more entropy accumulates. I tried to fix this.
So I built Archeon:

- a local architecture layer
- a CLI that enforces constraints before code is generated
- a GUI that visualizes intent, relationships, and outcomes
This reduces context size, prevents invalid generations, and lets smaller or local models compete with larger ones.
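To make the idea concrete, here is a minimal sketch of what "machine-readable architecture constraints checked before accepting generated code" can look like. The spec format, field names, and file names below are hypothetical illustrations, not Archeon's actual schema or CLI:

    # Hypothetical sketch: "architecture.json" and its fields are illustrative,
    # not Archeon's real format.
    import json, sys

    def load_spec(path):
        # A machine-readable architecture spec: which module may import which.
        with open(path) as f:
            return json.load(f)  # e.g. {"allowed_imports": {"ui": ["core"], "core": []}}

    def violations(spec, module, imports):
        # Flag any import the architecture does not permit for this module.
        allowed = set(spec["allowed_imports"].get(module, []))
        return [imp for imp in imports if imp not in allowed]

    if __name__ == "__main__":
        spec = load_spec("architecture.json")
        module, imports = sys.argv[1], sys.argv[2:]
        bad = violations(spec, module, imports)
        if bad:
            print("rejected: %s may not import %s" % (module, ", ".join(bad)))
            sys.exit(1)
        print("ok")

Because the constraint check runs before generated code is accepted, the model only ever needs the relevant slice of the spec in context rather than the whole codebase.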
Everything runs locally. No SaaS. No training data. It works alongside existing editors and AI tools.
Repo + demo video:
https://github.com/danaia/archeonGUI
https://www.youtube.com/watch?v=YtNKRKn5FEs
I’m curious whether others see architecture — not prompting — as the missing layer in AI-assisted development.