Feeding large time-series datasets directly into LLM prompts is inefficient, expensive, and prone to context-window hallucinations. Frustrated by this, I built sktime-agentic-forecastor as an open-source proof of concept.
Instead of a single-shot prompt where the LLM blindly guesses a model, the agent runs a true ReAct (Reasoning + Acting) loop. It uses the recently introduced Model Context Protocol (MCP) to proxy data safely into an ephemeral environment, so the raw series never has to sit in the prompt itself.
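To make the idea concrete, here is a minimal, self-contained sketch of that pattern. This is not the project's actual API; the function names, the rule-based stand-in for the LLM, and the two forecaster labels are all hypothetical. The point it illustrates is the key design choice: only compact tool outputs (summary statistics), never the raw series, enter the reasoning context.

```python
from statistics import mean, pstdev

def tool_summarize(series):
    """Tool: return a compact summary instead of the raw data."""
    n = len(series)
    half = n // 2
    return {
        "n": n,
        "mean": mean(series),
        "std": pstdev(series),
        # crude trend signal: second-half mean minus first-half mean
        "trend": mean(series[half:]) - mean(series[:half]),
    }

def stub_llm_policy(observation):
    """Stand-in for the LLM's reasoning step: map observation -> action."""
    if observation is None:
        return ("call_tool", "summarize")      # Act: gather evidence first
    if abs(observation["trend"]) > observation["std"]:
        return ("answer", "trend_forecaster")  # Reason: strong trend
    return ("answer", "naive_forecaster")      # Reason: no clear trend

def react_loop(series, max_steps=4):
    """ReAct loop: alternate reasoning and tool calls until an answer."""
    observation = None
    for _ in range(max_steps):
        action, arg = stub_llm_policy(observation)  # Reason
        if action == "call_tool" and arg == "summarize":
            observation = tool_summarize(series)    # Act, then observe
        elif action == "answer":
            return arg
    raise RuntimeError("agent did not converge")

print(react_loop([1, 2, 3, 4, 5, 6, 7, 8]))  # trending series -> trend_forecaster
print(react_loop([5, 5, 5, 5, 5, 5, 5, 5]))  # flat series -> naive_forecaster
```

In the real project the policy is an LLM and the tools run behind MCP, but the control flow is the same: the model reasons over observations, requests actions, and only ever sees distilled evidence rather than the full dataset.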
You can check out the source code, architecture diagrams, and run the quickstart examples here: https://github.com/amruth6002/sktime-agentic-forecastor
I'd love feedback on the overall architecture.