*The problem:* AI coding assistants produce inconsistent code. Every session yields different implementations, and AI "forgets" rules mid-conversation. Prompt engineering helps, but quality still depends on how well you explain things each time.
*The insight:* Don't ask AI to follow rules—make it impossible to break them.
*The approach:*
1. *Specialized agents with strict boundaries* - Instead of one AI doing everything, split responsibilities. The layout agent creates the JSON UI structure (and never touches data types). The data agent defines bindings (and never writes business logic). The ViewModel agent implements logic (and never edits the JSON).
2. *JSON as single source of truth* - One JSON definition generates iOS native (SwiftUI/UIKit), Android native (Compose/XML), Web (React/Tailwind), tests, and docs. All in sync. Always.
3. *Cross-platform test runner* - Same test JSON runs on XCUITest, UIAutomator, and Playwright.
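To make the single-source idea concrete, here is a rough sketch of what one JSON definition might look like, with a section per agent. All key names are illustrative, not the actual SwiftJsonUI schema:

```json
{
  "view": "LoginScreen",
  "layout": {
    "type": "VStack",
    "children": [
      { "type": "TextField", "id": "email", "placeholder": "Email" },
      { "type": "Button", "id": "submit", "label": "Sign in" }
    ]
  },
  "bindings": {
    "email": { "source": "viewModel.email", "mode": "twoWay" },
    "submit": { "action": "viewModel.signIn" }
  }
}
```

The point of the split: the layout agent edits only `layout`, the data agent edits only `bindings`, and the ViewModel agent implements `signIn` in platform code without touching this file. The boundary is structural, not a prompt instruction.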
*Result:* Spec, implementation, and docs stay in sync because they're generated from the same source. AI agents are productive because they have clear, narrow scopes.
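The cross-platform tests follow the same pattern. A hypothetical test definition (illustrative keys, not the actual jsonui-test-runner format) that each platform driver would translate into XCUITest, UIAutomator, or Playwright calls might look like:

```json
{
  "test": "login_success",
  "screen": "LoginScreen",
  "steps": [
    { "action": "type", "target": "email", "value": "user@example.com" },
    { "action": "tap", "target": "submit" },
    { "assert": "visible", "target": "home" }
  ]
}
```

Because `target` values reference the same IDs as the UI JSON, renaming an element breaks the test at generation time rather than at runtime.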
Still in development. Repos:
- Core: SwiftJsonUI, KotlinJsonUI, ReactJsonUI
- Test runner: jsonui-test-runner (CLI + platform drivers)
- Agents: JsonUI-Agents-for-claude
GitHub: https://github.com/Tai-Kimura
Would love feedback on the agent design approach.