Here’s what’s next in the coming days:
Today: AdaptiQ Core (what you see now)
• CLI-based prompt/agent optimizer
• Reinforcement learning loop (offline Q-table)
• Token/cost/CO₂ tracking per run
• Markdown + badge reporting
• Works with CrewAI + OpenAI (GPT-4, GPT-3.5)
Next week: AdaptiQ ACE – HTTP Proxy Edition
• Drop-in FinOps proxy for Claude, Gemini, GPT
• Rewrites prompts on the fly
• Tracks latency, compile-pass rate, retries
• GitHub Action: block the PR if the cost or test check fails
• RL reward = quality − β·tokens − γ·latency (sketched below)
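For concreteness, here's a minimal sketch of that reward as a scalar, assuming a normalized quality score; `beta` and `gamma` below are illustrative placeholders, not the weights ACE will actually ship with.

```python
# Illustrative sketch of the ACE-style reward described above.
# `quality` (0..1), `beta`, and `gamma` are placeholder assumptions,
# not AdaptiQ's actual metric or tuned weights.

def ace_reward(quality: float, tokens: int, latency_s: float,
               beta: float = 0.001, gamma: float = 0.1) -> float:
    """reward = quality - beta * tokens - gamma * latency"""
    return quality - beta * tokens - gamma * latency_s

# A good, cheap, fast completion scores positively:
print(ace_reward(quality=0.9, tokens=450, latency_s=1.2))  # 0.9 - 0.45 - 0.12 = 0.33
```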
What we're solving:

> Agents fail silently, burn credits, and drift from style guides.
> AdaptiQ gives your LLM prompts a feedback loop.
We’re building this in the open: roadmap, CLI, and future trace-spec are all public.
Questions? Feedback? Want support for LangChain / Autogen / Mistral? Let us know below – we’d love to expand!
Also: if you drop your prompt logs (token usage + outcome), we can pre-train the Q-table for your setup.
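If you're wondering what "token usage + outcome" could look like, here's a hypothetical record shape; the field names are ours for illustration, not a schema AdaptiQ requires.

```python
# Hypothetical prompt-log record; field names are illustrative only.
# One record per agent run/step is enough to seed reward estimates
# for the Q-table.
example_record = {
    "prompt_id": "summarize_ticket_v3",  # which prompt/config variant ran
    "prompt_tokens": 812,
    "completion_tokens": 240,
    "retries": 1,
    "latency_s": 2.4,
    "outcome": "pass",                   # e.g. pass / fail / needs_review
}
```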
Cheers – Wassim / AdaptiQ team
We just open-sourced [AdaptiQ Core](https://github.com/adaptiq-ai/adaptiq), a CLI tool that uses reinforcement learning (Q-learning) to optimize your LLM agents and reduce token usage, retries, and failed outputs.
It observes your agent runs, builds a local Q-table, and learns how to improve prompts/configs — all offline.
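For intuition, the offline loop boils down to replaying logged transitions through a standard tabular Q-learning update. The state/action encoding and numbers below are illustrative, not AdaptiQ's actual internals.

```python
from collections import defaultdict

# Minimal offline (batch) tabular Q-learning over logged agent runs.
# States and actions are illustrative: a state might be (task, prompt variant),
# an action a prompt/config tweak. This is the textbook update the post
# alludes to, not AdaptiQ's exact encoding.

ALPHA, GAMMA = 0.1, 0.9            # learning rate, discount factor
q_table = defaultdict(float)       # (state, action) -> estimated value

def q_update(state, action, reward, next_state, actions):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max((q_table[(next_state, a)] for a in actions), default=0.0)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

# Replay a small batch of logged transitions (state, action, reward, next_state):
actions = ["shorten_system_prompt", "add_output_example"]
logged = [
    (("summarize", "v1"), "shorten_system_prompt", 0.4, ("summarize", "v2")),
    (("summarize", "v2"), "add_output_example",    0.7, ("summarize", "v3")),
]
for s, a, r, s_next in logged:
    q_update(s, a, r, s_next, actions)

# The greedy policy then picks the highest-value tweak for a given state.
```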
---
*What it does:*
• Prompt & agent optimizer (crewAI-compatible) • RL loop (offline Q-learning) • Pre-run cost prediction • FinOps reporting (token, $ and CO₂) • Markdown reports + GitHub badge • Works via CLI (`wizard`, `validate`, `report`)
---
*FinOps in Action*
• Tokens saved: –37%
• Retries reduced: –60%
• Compile-pass rate: +15 pts
GitHub: https://github.com/adaptiq-ai/adaptiq