On the one hand, the approach overlaps a lot with my own thinking and has some original tweaks (like the emotionally valenced reward signals). I say that as someone from a robotics/AI background now working in GenAI, with a few years of PhD research in NeuroAI, and an interest in molecular neuroscience and the Free Energy Principle (as conceptualised by Karl Friston and Mark Solms).
On the other:
- this plausibility dilemma is the hallmark of LLMs
- it has every buzzword imaginable
- no code, no raw outputs, no official confirmation (by ARC)
- it's an agentic-AI play with a walled demo page
I might just be too hopeful (and gullible)...
Doug_Bitterbot•1h ago
This paper proposes a solution to the symbol grounding problem by merging neural perception with symbolic logic.
We are currently testing this architecture in a live agent environment to verify the theoretical optimization claims. If you want to see the architecture in action, we have a running implementation here: https://www.bitterbot.ai
Happy to answer questions about the Neuro-Symbolic integration.