I built an early prototype called GALEN.
Instead of sending long-form speech or text directly to an AI model, GALEN converts it into a compact, structured instruction format first:
speech/text → structured representation → AI
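To make the middle step concrete, here is a toy illustration. This is not the actual GALEN format (that's not something I can share while the patent is pending); the field names, the travel example, and the key=value serialization are all invented for this post, just to show the general shape of the idea:

```python
# Toy illustration only — NOT the real GALEN format. The fields and
# serialization here are invented for this post.

# A verbose spoken request, as it might arrive from speech-to-text.
verbose = (
    "Hi, I'd like to book a flight from London to Berlin next Friday, "
    "returning Sunday evening, economy class, for two adults please."
)

# A hand-written structured equivalent of the same request.
instruction = {
    "intent": "book_flight",
    "from": "LON",
    "to": "BER",
    "depart": "next_friday",
    "return": "sunday_evening",
    "class": "economy",
    "pax": 2,
}

def render(instr: dict) -> str:
    # Serialize to a compact key=value form that can be sent to any model.
    return ";".join(f"{k}={v}" for k, v in instr.items())

compact = render(instruction)
print(compact)
print(f"{len(verbose)} chars -> {len(compact)} chars")
```

The actual savings depend on each model's tokenizer, and the structured form also strips filler that varies between speakers, which is where the consistency gains come from.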
The aim is not only to reduce token usage, but also to improve consistency, determinism and reliability across different AI systems.
So far the prototype appears to:

- reduce AI input overhead by roughly 30–70%
- work across multiple domains (healthcare, legal, finance, travel, etc.)
- allow the same structured instruction to be sent to different models
It's still very early and patent pending. I'm trying to work out whether there is genuinely a useful product here beyond the prototype.
Demo: https://galenvoice.com
Would really appreciate any thoughts or criticism.