
Symbolic reasoning system with local inference and full auditability

https://signal-zero.ai/examples.html
1•klietus•1mo ago

Comments

klietus•1mo ago
I built a symbolic AI system that synthesizes complex decisions using 1000+ compressed symbols, local inference, and complete traceability.

Demo: https://signal-zero.ai/demo.html

## The Problem

Most AI systems have at least one of these problems:

- Hallucinates confidently (no grounding)
- Requires cloud APIs (privacy/cost issues)
- Lacks auditability (black-box reasoning)
- Struggles with multi-domain synthesis

I wanted something that could handle real complexity while being verifiable, private, and economical.

## The Approach

*Symbolic compression via semiotic triads:*

- Each symbol = concept + semiotic triad
- Example: POWER-DRIFT-TACTIC (paired with its emoji triad)
- 1000+ symbols across domains in ~7k tokens of overhead
- Hierarchical references enable deep retrieval without context explosion

*Uncertainty detection + web grounding:*

- System recognizes knowledge gaps
- Triggers web search for validation
- Integrates results before responding
- No hallucination on factual claims

*Local inference:*

- Runs on an M4 Max (gpt-oss-120B, quantized)
- ~30-second responses
- Zero API costs after hardware
- Complete data privacy
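
A minimal sketch of what the local call path can look like, assuming gpt-oss-120B is served behind an OpenAI-compatible endpoint on localhost (e.g. via llama.cpp or LM Studio); the URL, model name, and payload shape here are assumptions, not the project's actual API.

```python
import requests

LOCAL_URL = "http://localhost:8080/v1/chat/completions"  # assumed local server

def local_infer(system_prompt: str, user_prompt: str, timeout_s: int = 120) -> str:
    """One request to the local model; nothing leaves the machine."""
    payload = {
        "model": "gpt-oss-120b",  # assumed model identifier
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }
    resp = requests.post(LOCAL_URL, json=payload, timeout=timeout_s)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```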

*Full symbolic traceability:*

- Every claim linked to a source symbol
- Complete reasoning chain logged
- Audit trail for regulatory compliance
- 29 domains modeled
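
For a sense of what one audit record could contain, here is a hedged sketch; the field names (claim, symbol_ids, chain, evidence) are illustrative, not the project's schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AuditRecord:
    claim: str              # the statement the model produced
    symbol_ids: List[str]   # e.g. ["POWER-DRIFT-TACTIC", "NAR-LOOP"]
    chain: List[str]        # ordered reasoning steps that led to the claim
    evidence: List[str] = field(default_factory=list)  # URLs from web grounding, if any

    def to_log_line(self) -> str:
        """One line per claim so the full trail can be replayed or audited later."""
        return f"{self.claim} | symbols={','.join(self.symbol_ids)} | steps={len(self.chain)}"
```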

## Example Output

https://signal-zero.ai/examples.html

## Technical Stack

- Base: gpt-oss-120B (local quantized inference)
- Symbols: 1000+ hand-curated across 7+ domains
- Compression: semiotic triads + hierarchical references
- Tools: ephemeral execution with validation retries
- Context: ~15k tokens average (including output)
- Grounding: web search on uncertainty detection

## Why Symbolic?

Vector embeddings are great for retrieval but terrible for reasoning chains. Symbols provide:

1. *Composability* - combine across domains coherently
2. *Traceability* - explicit reasoning paths
3. *Efficiency* - massive compression via references
4. *Verifiability* - audit every claim to its source

The emoji triads act as semantic anchors that survive context compression while remaining human-readable.

## Use Cases Tested

- OSINT / disinformation analysis
- Bioethics committee decisions
- Pharmaceutical regulatory pathways
- Environmental impact assessment
- Academic research synthesis
- Medical triage (flags mental health concerns appropriately)

All demos are live on the site with full outputs.

## Current Status

Still figuring out productization. Core question: is the auditability + local inference + multi-domain synthesis combination valuable enough to matter for production use cases?

Open to feedback on:

1. Architecture improvements
2. Symbol library design
3. Real-world applications
4. Technical tradeoffs

Happy to run test analyses for anyone curious. Looking for validation that this approach has legs beyond being technically interesting.

---

Tech details for the architecture-curious:

*Symbol structure:*

    {
      id: "POWER-DRIFT-TACTIC",
      triad: "",
      domain: "negotiation",
      definition: "Gradual shift of authority...",
      related: ["NAR-LOOP", "SOFT-GRIND-COLLAPSE"]
    }

*Context management:*

- Load symbol stubs (~50 tokens each) into context
- Full definitions retrieved only when a symbol is activated
- Ephemeral tool execution keeps working memory clean
- Triads enable rapid pattern matching while compressing concepts into a very small footprint
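
Roughly what the stub-then-expand pattern can look like in code; the registry layout, stub text, and activate() flow are assumptions for illustration.

```python
from typing import Dict, List

SYMBOL_REGISTRY: Dict[str, dict] = {
    "POWER-DRIFT-TACTIC": {
        "stub": "POWER-DRIFT-TACTIC [negotiation]: gradual authority shift",  # cheap, always loaded
        "definition": "Gradual shift of authority...",                        # full text, loaded lazily
        "related": ["NAR-LOOP", "SOFT-GRIND-COLLAPSE"],
    },
}

def build_context(symbol_ids: List[str]) -> str:
    """Start every request with stubs only, keeping the base overhead small."""
    return "\n".join(SYMBOL_REGISTRY[s]["stub"] for s in symbol_ids)

def activate(symbol_id: str) -> str:
    """Pull the full definition (plus related stubs) only when the model asks for it."""
    entry = SYMBOL_REGISTRY[symbol_id]
    related = [SYMBOL_REGISTRY[r]["stub"] for r in entry["related"] if r in SYMBOL_REGISTRY]
    return entry["definition"] + ("\n" + "\n".join(related) if related else "")
```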

*Validation loop:*

Tool call → Parse → Validate → Retry if malformed (max 3×). Achieves 99%+ compliance vs ~60% without validation.
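
A minimal sketch of that loop; call_model and validate are placeholder names, and feeding the error back into the retry prompt is an assumption about how the retries work.

```python
import json

MAX_RETRIES = 3

def run_tool_call(call_model, prompt, validate):
    """Ask the model for a tool call; re-prompt up to 3 times if the output is malformed."""
    last_error = None
    for attempt in range(MAX_RETRIES):
        retry_note = f"\n\nPrevious output was invalid ({last_error}). Return valid JSON only." if last_error else ""
        raw = call_model(prompt + retry_note)
        try:
            parsed = json.loads(raw)   # Parse
            validate(parsed)           # Validate (required fields, enum values, etc.)
            return parsed
        except (json.JSONDecodeError, ValueError) as exc:
            last_error = str(exc)      # feed the error back on the next attempt
    raise RuntimeError(f"Tool call still malformed after {MAX_RETRIES} attempts: {last_error}")
```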

*Web grounding trigger:*

    if (uncertainty_detected && factual_claim_present):
        web_search(targeted_query)
        integrate_results()
        cite_sources()
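
A self-contained sketch of one way that trigger could work; the hedging-language heuristic, the injected search_web backend, and the citation format are all assumptions (the factual-claim check is omitted for brevity).

```python
import re
from typing import Callable, List, Tuple

HEDGES = re.compile(r"\b(i think|possibly|not sure|unclear|approximately)\b", re.I)

def uncertainty_detected(draft: str) -> bool:
    """Cheap proxy: hedging language in a draft answer signals a knowledge gap."""
    return bool(HEDGES.search(draft))

def ground(draft: str, query: str,
           search_web: Callable[[str], List[Tuple[str, str]]]) -> str:
    """If the draft looks uncertain, fetch (title, url) results and append citations."""
    if not uncertainty_detected(draft):
        return draft
    results = search_web(query)  # injected search backend
    citations = "\n".join(f"- {title}: {url}" for title, url in results[:3])
    return f"{draft}\n\nSources:\n{citations}"
```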

The system knows what it doesn't know.

---

Built this because I was frustrated with AI that couldn't show its work. Turns out symbolic reasoning + modern LLMs + proper engineering = actually useful for complex decisions.

Thoughts?