So, I built TurtleSim Agent, an AI agent that turns the classic ROS 2 turtlesim turtle into a creative artist.
With this agent, you can give plain English commands like “draw a triangle” or “make a red star,” and it will reason through the instructions and control the simulated turtle accordingly. I’ve included demo videos on GitHub. Behind the scenes, it uses an LLM to interpret the text, decide what actions are needed, and then call a set of modular tools (motion, pen control, math, etc.) to complete the task.
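To make the tool idea concrete, here is a minimal, ROS-free sketch of what "modular tools" can look like. The state dict, function names, and the triangle plan are all illustrative assumptions, not the project's actual API; in turtlesim_agent the equivalent tools publish ROS 2 messages to the simulator instead of mutating a dict.

```python
import math

# Hypothetical in-memory turtle state standing in for the turtlesim node.
state = {"x": 0.0, "y": 0.0, "heading": 0.0, "pen": "black"}

def move_forward(distance: float) -> str:
    """Advance the turtle along its current heading."""
    state["x"] += distance * math.cos(math.radians(state["heading"]))
    state["y"] += distance * math.sin(math.radians(state["heading"]))
    return f"moved to ({state['x']:.2f}, {state['y']:.2f})"

def turn(angle_deg: float) -> str:
    """Rotate the turtle counter-clockwise by angle_deg degrees."""
    state["heading"] = (state["heading"] + angle_deg) % 360
    return f"heading is now {state['heading']:.1f} deg"

def set_pen_color(color: str) -> str:
    """Change the pen color used while drawing."""
    state["pen"] = color
    return f"pen set to {color}"

# Registry the agent consults when the LLM asks for a tool by name.
TOOLS = {"move_forward": move_forward, "turn": turn, "set_pen_color": set_pen_color}

# A triangle: three equal sides with 120-degree exterior turns,
# which brings the turtle back to where it started.
TOOLS["set_pen_color"]("red")
for _ in range(3):
    TOOLS["move_forward"](2.0)
    TOOLS["turn"](120.0)
```

Keeping each tool a small, single-purpose function is what lets the LLM compose them freely: "draw a red star" just becomes a different sequence over the same registry.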
If you're interested in LLM+robotics, ROS, or just want to see a turtle become a digital artist, I'd love for you to check it out:
GitHub: https://github.com/Yutarop/turtlesim_agent
Looking ahead, I’m also exploring frameworks like LangGraph and MCP (the Model Context Protocol) to see whether they might be better suited for more complex planning and decision-making tasks in robotics. If anyone here is familiar with these or working in this space, I’d love to connect or hear your thoughts.
ponta17•1d ago
In this project, an agent is an LLM-powered system that takes a high-level user instruction, reasons about what steps are needed to fulfill it, and then executes those steps using a set of tools. So it’s more than a single prompted LLM call — the agent maintains a kind of working state and can call external functions iteratively as it plans and acts.
Concretely, in turtlesim_agent, the agent receives an input like “draw a red triangle,” and then:
1. Uses the LLM to interpret the intent,
2. Decides which tools to use (like move forward, turn, set pen color),
3. Calls those tools step-by-step until the task is done.
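The interpret/decide/execute loop above can be sketched as follows. Note the planner here is a hardcoded stub standing in for the real LLM call (which would return tool calls from the model), and the tools are logging stubs; every name and signature is an assumption for illustration, not the project's code.

```python
# Stub tools: in the real agent these drive the turtlesim turtle.
def set_pen_color(color: str) -> str:
    return f"pen={color}"

def move_forward(distance: float) -> str:
    return f"forward {distance}"

def turn(angle_deg: float) -> str:
    return f"turn {angle_deg}"

TOOLS = {"set_pen_color": set_pen_color, "move_forward": move_forward, "turn": turn}

def plan(instruction: str):
    """Stub planner: maps an intent to ordered (tool_name, kwargs) calls.
    In the actual agent, the LLM produces this sequence."""
    steps = []
    if "red" in instruction:
        steps.append(("set_pen_color", {"color": "red"}))
    if "triangle" in instruction:
        for _ in range(3):
            steps.append(("move_forward", {"distance": 2.0}))
            steps.append(("turn", {"angle_deg": 120.0}))
    return steps

def run_agent(instruction: str, tools: dict):
    """Execute the planned tool calls one by one, keeping a transcript
    that an iterative agent would feed back to the LLM as observations."""
    transcript = []
    for name, kwargs in plan(instruction):
        result = tools[name](**kwargs)
        transcript.append((name, result))
    return transcript

log = run_agent("draw a red triangle", TOOLS)
```

A real agent loop differs in one key way: instead of executing a fixed plan, it re-invokes the LLM after each tool result, so the model can adjust its next step based on what actually happened.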
Hope that clears it up a bit!