This is a small experimental AI project I’ve been building called *ZTGI-AC*.
Most LLMs generate an answer immediately, but ZTGI-AC does something different: before responding, it runs an internal stability check.
It evaluates:

• risk
• jitter
• dissonance
• SAFE/WARN/BREAK modes
• INT/EXT gating (self-monitoring loop)
Only after the internal signals stabilize does it generate a reply.
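In rough terms, the core loop looks something like the Python sketch below. This is a simplified illustration only: the signal names (risk, jitter, dissonance) and modes come from the description above, but the scoring functions, thresholds, and revision step are placeholders, not the actual ZTGI-AC implementation.

    import random

    SAFE, WARN, BREAK = "SAFE", "WARN", "BREAK"

    def measure_signals(prompt, draft):
        # Placeholder scoring; a real system would derive these signals
        # from model internals or auxiliary evaluators, not random numbers.
        return {"risk": random.random(),
                "jitter": random.random(),
                "dissonance": random.random()}

    def mode(signals):
        # Map the worst signal onto SAFE / WARN / BREAK (illustrative thresholds).
        worst = max(signals.values())
        if worst < 0.4:
            return SAFE
        if worst < 0.7:
            return WARN
        return BREAK

    def respond(prompt, generate, max_iters=5):
        draft = generate(prompt)
        for _ in range(max_iters):
            if mode(measure_signals(prompt, draft)) == SAFE:
                return draft  # EXT gate: signals stable, release the reply
            # WARN/BREAK -> INT gate: stay in the internal loop and revise
            draft = generate(prompt + "\n[revise for stability]")
        return "I'm not confident I can answer this stably; could you rephrase?"

For example, respond("hello", lambda p: "hi there") returns immediately once the sampled signals land in SAFE, and otherwise keeps revising internally until the iteration budget runs out.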
This project explores whether self-evaluation loops can reduce chaotic or unstable outputs in LLM-like systems.
*Demo:* https://ztgiai.pages.dev (Non-commercial, early prototype.)
I’d love feedback from the HN community, especially around:

• whether self-monitoring loops are meaningful,
• potential improvements to stability metrics,
• and how this idea compares to classical alignment approaches.
Thanks for taking a look!
capter•1h ago
Happy to answer any questions or discuss the stability loop design. This is an early prototype and I'm exploring:
• whether internal self-monitoring can reduce unstable LLM behaviour
• alternative stability metrics beyond risk/jitter (a toy sketch of one candidate is below)
• how gating (INT/EXT) affects output quality under noisy inputs
• ideas for tests or failure modes worth trying
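As an example of what I mean by an alternative metric, here is a toy sketch (not taken from ZTGI-AC itself) that treats "jitter" as disagreement across repeated samples of the same prompt, using a simple token-level Jaccard distance:

    def jaccard_distance(a, b):
        # Token-set distance between two responses: 0.0 = identical sets, 1.0 = disjoint.
        ta, tb = set(a.split()), set(b.split())
        if not ta and not tb:
            return 0.0
        return 1.0 - len(ta & tb) / len(ta | tb)

    def sample_jitter(prompt, generate, n=4):
        # Average pairwise distance between n independent samples of the same
        # prompt; higher values suggest the model is less stable on that prompt.
        samples = [generate(prompt) for _ in range(n)]
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        return sum(jaccard_distance(samples[i], samples[j]) for i, j in pairs) / len(pairs)

Obviously crude, but it gives a feel for the kind of cheap, model-agnostic signals I'm curious about.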
All feedback, criticism, or ideas are welcome!