I’m Mendrika. I posted AGX v1 a few weeks ago. Honest take: it technically worked, but it didn’t click. Something was missing, and I didn’t end up using it as much as I thought I would.
v2 is an iteration focused on what I actually wanted day-to-day:
1) Runs that pause where it matters (approvals)

AGX treats “side effects” as opt-in. When a run is about to edit files, commit, push, or open a PR, it stops and asks you. You can inspect what it intends to do, tweak the approach, or reject it. The goal is to make agent work feel less like “hope it doesn’t do something weird” and more like “interactive automation.”
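The opt-in side-effect gate can be sketched generically. Everything below (the `Action` type, `runActions`, the set of gated kinds) is a hypothetical illustration of the pattern, not AGX’s actual internals:

```typescript
// Hypothetical sketch: side effects are opt-in, read-only work runs freely.
// These names are illustrative, not AGX's real API.
type Action =
  | { kind: "read"; path: string }
  | { kind: "edit"; path: string; diff: string }
  | { kind: "commit"; message: string }
  | { kind: "push"; remote: string };

// Only these action kinds pause the run and ask for approval.
const SIDE_EFFECTS = new Set(["edit", "commit", "push"]);

type Verdict = "approve" | "reject";

function runActions(
  actions: Action[],
  ask: (a: Action) => Verdict, // in a real CLI this would prompt the user
): Action[] {
  const executed: Action[] = [];
  for (const a of actions) {
    if (SIDE_EFFECTS.has(a.kind) && ask(a) === "reject") continue;
    executed.push(a); // stand-in for actually performing the action
  }
  return executed;
}
```

The useful property of this shape is that read-only work never blocks; only mutations reach the prompt.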
2) Turn discussion into concrete work

Instead of ending with a long chat transcript, AGX turns the conversation into a small set of explicit tasks and then executes them. There’s a short preflight where you can get a couple of different perspectives and steer the direction, then AGX runs: *Extract Tasks → Execute*.
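In rough outline, the Extract Tasks → Execute step could look like this; the names and the line-based extractor are stand-ins for illustration, not AGX’s real pipeline:

```typescript
// Hypothetical sketch of turning a transcript into explicit tasks.
interface Task {
  id: number;
  title: string;
  done: boolean;
}

// Stand-in for the model call that distills discussion into discrete tasks.
function extractTasks(transcript: string): Task[] {
  return transcript
    .split("\n")
    .filter((line) => line.startsWith("- "))
    .map((line, i) => ({ id: i + 1, title: line.slice(2), done: false }));
}

// Placeholder executor: a real run would perform each task's work.
function execute(tasks: Task[]): Task[] {
  return tasks.map((t) => ({ ...t, done: true }));
}
```

The point is the handoff: the run operates on a small, inspectable task list rather than the whole transcript.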
3) Reuse what you already learned

When a run finishes, AGX can save a few structured notes about what happened (gotcha, decision, outcome). Future runs can pull those notes back in, scoped to the repo/project/task, so you don’t re-discover the same pitfalls every time. It’s local, inspectable, and easy to delete.
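One plausible shape for those notes and their scoped recall; the field names here are illustrative, not AGX’s on-disk schema:

```typescript
// Hypothetical note format: kind matches the categories mentioned above.
type NoteKind = "gotcha" | "decision" | "outcome";

interface Note {
  kind: NoteKind;
  scope: string; // e.g. "repo:agx" or "task:release" (illustrative scoping)
  text: string;
}

// Pull back only the notes whose scope matches the current run.
function recall(notes: Note[], scope: string): Note[] {
  return notes.filter((n) => n.scope === scope);
}
```

Keeping notes as plain structured records is what makes them inspectable and easy to delete.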
Try it:
  npm install -g @mndrk/agx
  agx init
  agx chat start
Local-first: keys and data stay on your machine. Provider-agnostic: bring your own models/tools.
Demo (4:07): https://youtu.be/QtqBuf_6dkk
GitHub: https://github.com/ramarlina/agx
npm: @mndrk/agx
I’m looking for reality checks more than compliments. If you have 5 minutes to break it, I’d love to hear what feels unnecessary or where the abstraction fails.