On the one hand, the approach overlaps heavily with my own thinking and has some original tweaks (like the emotionally valenced reward signals). I say that as someone with a robotics/AI background now working in GenAI, a few years of PhD research in NeuroAI, and an interest in molecular neuroscience and the Free Energy Principle (as conceptualised by Karl Friston and Mark Solms).
On the other:
- this kind of surface plausibility is the hallmark of LLM output
- it has every buzzword imaginable
- no code, no raw outputs, no official confirmation (by ARC)
- an agentic-AI play with a walled demo page
I might just be too hopeful (and gullible)...
I could even provide you with the logs where we achieved a 70% score on ARC2 and decided NOT to publish the results.
We're a bit guarded right now, hence the minimal presence. We're pre-beta and just starting to get the word out...
That said, try minimizing or closing the paywall overlay. We're still polishing up the application... but we did remove the actual paywall itself, so you should be able to test BitterBot without it.
Please let me know if you can't. I would LOVE for you to test it a bit and play around with it.
Doug_Bitterbot•2mo ago
This paper proposes a solution to the symbol grounding problem by merging neural perception with symbolic logic.
We are currently testing this architecture in a live agent environment to verify the theoretical optimization claims. If you want to see the architecture in action, we have a running implementation here: https://www.bitterbot.ai
Happy to answer questions about the Neuro-Symbolic integration.
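To make the pattern concrete, here's a stripped-down toy sketch of what a neuro-symbolic loop can look like. This is an illustration only, not our production code; the classifier stub, the 0.7 confidence threshold, and the two-rule knowledge base are all placeholders:

    import random

    # Toy neuro-symbolic pipeline (illustrative only, not BitterBot's code).
    # Stage 1 stands in for a neural classifier, stage 2 grounds activations
    # into discrete symbols, stage 3 forward-chains over a symbolic rule base.

    random.seed(0)  # reproducible toy output

    SYMBOLS = ["red", "blue", "circle", "square"]

    def neural_perception(observation):
        """Stand-in for a trained network: maps raw input to
        (symbol, confidence) pairs. Here the confidences are random."""
        return [(s, random.random()) for s in SYMBOLS]

    def ground(percepts, threshold=0.7):
        """Symbol grounding step: keep only symbols the 'network' is
        confident about, so the logic layer never sees raw activations."""
        return {s for s, conf in percepts if conf >= threshold}

    # A tiny propositional rule base: (premise set, conclusion).
    RULES = [
        ({"red", "circle"}, "stop_token"),
        ({"blue", "square"}, "go_token"),
    ]

    def symbolic_inference(facts):
        """Forward-chain over RULES until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    facts = ground(neural_perception(observation=None))
    print(symbolic_inference(facts))

The point is the handoff: the logic layer only ever sees discrete, grounded symbols, never raw activations, which is the crux of the grounding problem the paper addresses.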