Hi HN,
I'm Marco — ex-ethologist turned AI systems engineer — and I built something I needed but couldn't find: an open-source framework to wire *cognition like circuits* — not spaghetti prompt chains.
It's called *OrKa*: the *Orchestrator Kit for Agents*.
WHY I BUILT IT
• Tired of black-box LLM chains (LangChain, AutoGPT, etc.)
• Needed a way to fork/join reasoning paths
• Wanted true traceability, memory, and reproducibility
• Inspired by how cognition decays, splits, and reconverges
• Built from scratch with Redis Streams, YAML logic, and local model support
I wanted reasoning I could *see*, *debug*, *version*, and *replay* — like functional circuits.
WHAT ORKA IS
> YAML-defined cognition graphs (versioned, inspectable; see the sketch after this list)
> Fork/Join execution with trace replay
> Router agents w/ confidence-weighted branching
> Redis or Kafka backends
> Scoped memory (episodic, procedural, etc.)
> Visual UI (React-based): https://orka-ui.web.app
> ServiceNodes: RAG, Memory fetch, Writer, Embedder
> Full local+remote LLM support via LiteLLM / OpenAI / Ollama
> 76%+ test coverage, deterministic behavior
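To make "YAML-defined cognition graph" concrete, here is a minimal sketch of a fork/join flow with a router. The field names (orchestrator, agents, type, routes, branches, and so on) are illustrative only, not OrKa's actual schema; the real format is in the examples repo linked below. The point is the shape: route by confidence, fork into parallel paths, reconverge, write.

  # Hypothetical sketch, not OrKa's real schema.
  # Shows a confidence-weighted router, a fork into two parallel
  # reasoning paths, a join that reconverges them, scoped memory,
  # and a local Ollama model.
  orchestrator:
    id: claim_checker
    memory_scope: episodic        # scoped memory attached to the run

  agents:
    - id: classify_claim
      type: router                # picks the next step by confidence
      prompt: "Is this claim empirical or opinion-based?"
      routes:
        empirical: fork_evidence
        opinion: direct_answer

    - id: fork_evidence
      type: fork                  # run both branches in parallel
      branches: [rag_search, memory_lookup]

    - id: rag_search
      type: rag                   # ServiceNode: retrieval-augmented lookup
      model: ollama/deepseek-r1:1.5b

    - id: memory_lookup
      type: memory_fetch          # ServiceNode: scoped memory read
      scope: procedural

    - id: join_evidence
      type: join                  # reconverge the forked branches
      inputs: [rag_search, memory_lookup]

    - id: direct_answer
      type: llm
      model: ollama/deepseek-r1:1.5b

    - id: writer
      type: writer                # ServiceNode: produce the final answer
      inputs: [join_evidence, direct_answer]

Because the whole topology lives in one YAML file, the graph itself can be diffed, versioned, and replayed like any other config, which is what I mean by "inspectable" above.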
BENCHMARK (REAL)
• 1000 orchestration runs (2-agent pipeline)
• DeepSeek-R1 (1.5B) via Ollama on Pop!_OS
• Avg latency: ~7.6s per agent
• Zero agent drift across runs
• Total cost (simulated): ~$0.49 (per-run math just below)
• CPU temp: stable 88–89°C
• RAM: < 5.3 GB
• RESULTS: https://github.com/marcosomma/orka-reasoning/blob/master/docs/orka_linux_stressTest_toShate.zip
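• Back-of-envelope: that's ~$0.0005 per run and, assuming the two agents run sequentially, roughly 2 × 7.6s ≈ 15.2s of model time per run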
LINKS
• PyPI → https://pypi.org/project/orka-reasoning/
• GitHub → https://github.com/marcosomma/orka-reasoning
• Examples → https://github.com/marcosomma/orka-reasoning/tree/master/examples
• UI (Docker)→ https://hub.docker.com/r/marcosomma/orka-ui
FEEDBACK WELCOME
• Is this the right abstraction layer for agentic reasoning?
• Should this remain infra-first or move toward hosted cognition-as-a-service?
• Anyone else frustrated with current LLM toolchains?
Thanks for reading. AMA.