• Contextual Result Analysis: The LLM receives tool outputs and uses them to inform its next steps, reflecting on progress and adapting as needed. The REFLECTION_THRESHOLD in the client ensures the agent periodically steps back and reviews its overall strategy.
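As a rough illustration of the reflection pattern described above, here is a minimal sketch of an agent loop that injects a strategy review every N tool calls. The names `REFLECTION_THRESHOLD`, `run_plan`, and the placeholder tool results are illustrative assumptions, not the project's actual API:

```python
# Hypothetical sketch of a periodic-reflection loop; REFLECTION_THRESHOLD and
# run_plan() are illustrative names, not ReConClient.py's real implementation.
REFLECTION_THRESHOLD = 5  # assumed value: reflect after every 5 tool calls

def run_plan(steps, threshold=REFLECTION_THRESHOLD):
    """Execute tool steps, injecting a strategy review every `threshold` steps."""
    history = []
    for i, step in enumerate(steps, start=1):
        history.append(f"result of {step}")  # stand-in for a real tool call
        if i % threshold == 0:
            # In the real client this would be an LLM prompt asking the agent
            # to assess progress against the plan and adapt if needed.
            history.append("REFLECTION: review progress and adapt the plan")
    return history

log = run_plan([f"tool_{n}" for n in range(1, 8)])
```

The key design point is that reflection is driven by a simple step counter rather than left to the LLM's discretion, so long tool-heavy runs cannot drift indefinitely without a strategy check.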
• Unique MCP Agent Scaffolding & SSE Framework:
  ◇ MCP-Agent Scaffolding (ReConClient.py): This isn't just a script runner. The scaffolding manages "plans" (assessment tasks), maintains a per-plan conversation history with the LLM, handles tool execution (including caching of results), and manages the LLM's thought process. It's built to be robust, with features like retry logic for both tool calls and LLM invocations.
  ◇ Server-Sent Events (SSE) for Real-Time Interaction (Rizzler.py, mcp_client_gui.py): The FastAPI-based backend communicates with the client (including a Dear PyGui interface) over SSE. This allows for:
    ▪ Live Streaming of Tool Outputs: Watch tools like port scanners or site mappers send back data in real time.
    ▪ Dynamic Updates: The GUI reflects the agent's status, new plans, and tool logs as they happen.
    ▪ Flexibility & Extensibility: The SSE framework makes it easier to integrate new streaming or long-running tools and have their progress reflected immediately. Tool registration in Rizzler.py (@mcpServer.tool()) is designed for easy extension.
We Need Your Help to Make It Even Better! This is an ongoing project, and I believe it has a lot of potential. I'd love for the community to get involved:
  ◇ Try It Out: Clone the repo, set it up (you'll need a GOOGLE_API_KEY and potentially a local SearXNG instance, etc. – see the .env patterns), and run some assessments!
  ◇ GitHub Repo: https://github.com/seyrup1987/ReconRizzler-Alpha