Hey HN! Here's Hegelion -- it applies Hegelian dialectics to push LLMs to construct stronger arguments.
The motivation is pretty simple: most LLM answers are confident first drafts. They rarely surface their own contradictions or explore serious alternatives. Hegelion wraps any backend and makes it do three passes: Thesis -- the initial answer; Antithesis -- a targeted self-critique covering contradictions, missing cases, and bad assumptions; Synthesis -- a reconciled, more defensible position.
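To make the three-pass flow concrete, here's a minimal sketch of the dialectical loop over a generic `ask(prompt) -> str` backend. This is illustrative only -- the function name, prompt wording, and return shape are my assumptions, not Hegelion's actual API:

```python
def dialectic(ask, question):
    """Three-pass thesis/antithesis/synthesis loop over any
    ask(prompt) -> str backend. Illustrative sketch, not Hegelion's API."""
    # Pass 1: confident first draft.
    thesis = ask(question)
    # Pass 2: targeted self-critique of that draft.
    antithesis = ask(
        "Critique the following answer. List contradictions, "
        "missing cases, and bad assumptions:\n\n" + thesis
    )
    # Pass 3: reconcile draft and critique into a stronger position.
    synthesis = ask(
        f"Question: {question}\n\nDraft answer: {thesis}\n\n"
        f"Critique: {antithesis}\n\n"
        "Write a reconciled, more defensible answer."
    )
    return {"thesis": thesis, "antithesis": antithesis, "synthesis": synthesis}

# Stub backend so the sketch runs without a real LLM:
result = dialectic(lambda p: f"[model reply, {len(p)} prompt chars]", "Is P == NP?")
print(sorted(result))  # ['antithesis', 'synthesis', 'thesis']
```

The key point is that each pass sees the previous passes' text, so the critique is grounded in the model's own draft rather than a generic rubric.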
The JSON output is designed for researchers and eval work. Each run includes: contradictions (itemized weaknesses the model identified in its own reasoning), research_proposals (testable hypotheses or follow-up questions from the synthesis), and metadata (timings, backend info, prompt hashes, etc.).
The repo includes a CLI and Python API, an MCP server for multiple backends, and a hegelion-bench tool for basic model comparison.
Repo: https://github.com/Hmbown/Hegelion
I'm the creator (hmbown). Curious to hear if this is useful for your own work.