Demo: https://youtu.be/KyOP9BY0WiY | Website: https://timemachinesdk.dev/
Here is the problem we are trying to solve: imagine it's step 9 of 10 of an agent run. The agent hallucinates a tool call, writes garbage to your database, and crashes. You fix the prompt, re-run, and $1.50 is gone. This happens six more times before lunch. Burning $100+ per day on re-runs is normal once you are running non-trivial workflows in production.
We built Time Machine around one idea: when an agent fails at step 9, you should be able to fork from step 8 and replay only what is downstream.
How: Drop in the TypeScript SDK (or the LangChain callback adapter for zero-code integration) and every step gets recorded — inputs, outputs, LLM calls, tool invocations, full state — persisted to PostgreSQL. The dashboard gives you a timeline and DAG of the execution. At any point, you can fork, change something (swap a model, edit a prompt, tweak an input), replay only the downstream steps, and diff the two runs side by side.
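The fork/replay idea can be sketched in plain TypeScript. To be clear, this is not the actual Time Machine SDK API; the `Recorder`, `runAll`, and `fork` names are illustrative. It just shows the shape of checkpointing every step's state and re-executing only what is downstream of the fork point:

```typescript
// Illustrative sketch only -- not the real Time Machine SDK API.
// Each step's output state is checkpointed; a fork resumes from a
// checkpoint and re-runs only the steps after it, optionally with
// one step swapped out (e.g. an edited prompt or different model).
type Step = { name: string; run: (state: string) => string };
type Checkpoint = { step: string; state: string };

class Recorder {
  checkpoints: Checkpoint[] = [];

  runAll(steps: Step[], initial: string): string {
    let state = initial;
    for (const s of steps) {
      state = s.run(state);                            // execute the step
      this.checkpoints.push({ step: s.name, state });  // persist its state
    }
    return state;
  }

  // Fork from checkpoint `fromIndex` and replay only downstream steps.
  fork(steps: Step[], fromIndex: number, patched?: Step): string {
    let state = this.checkpoints[fromIndex].state;
    for (let i = fromIndex + 1; i < steps.length; i++) {
      const s = patched && patched.name === steps[i].name ? patched : steps[i];
      state = s.run(state);
    }
    return state;
  }
}
```

In this toy model, an agent that fails at step 9 would be forked from the checkpoint after step 8 with a patched step 9, so only steps 9 and 10 re-execute; the real SDK persists checkpoints to PostgreSQL rather than memory.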
The internal framing we keep coming back to: Git for agent execution. Checkpoint, branch, diff, replay.
The closest existing tools are LangSmith, Helicone, and Langfuse. They are good tools, but they are mainly loggers. Observability is necessary but not sufficient when what you actually need is to change something and see what happens, and that is the loop Time Machine makes easy.
We also ship a native Claude Code integration. Install the hook bridge once, and every Claude Code session is automatically captured as a Time Machine execution: tool calls, token counts, file edits, git context, subagent trees. You get full observability over your Claude Code workflows in the same dashboard, with the same timeline and fork tooling, with no manual instrumentation. We are also working on driving Time Machine directly from your terminal, so you can ask Claude Code to pull a failed run, inspect the trace, and suggest a fix without leaving your editor. The intent is that the debugging loop stays where the development loop already lives.
We are also building an eval platform on the same infrastructure. Production runs become test cases automatically. You can run assertions (contains, regex, cosine similarity, LLM-as-judge, latency, and cost constraints) against replayed outputs and plug it into CI/CD so prompt changes get tested before they ship.
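The assertion types above (contains, regex, cosine similarity, latency/cost thresholds) can be expressed roughly like this. The `Assertion` type and helper names are illustrative, not the eval platform's actual API:

```typescript
// Illustrative sketch of eval assertions -- not the actual platform API.
type Assertion = (output: string) => boolean;

// Substring check.
const contains = (needle: string): Assertion =>
  (out) => out.includes(needle);

// Regex check.
const matches = (pattern: RegExp): Assertion =>
  (out) => pattern.test(out);

// Cosine similarity between two embedding vectors. An eval would embed
// the replayed output and a reference answer, then require sim >= threshold.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A test case passes only if every assertion holds on the replayed output.
function runAssertions(output: string, assertions: Assertion[]): boolean {
  return assertions.every((a) => a(output));
}
```

Wired into CI/CD, a prompt change would trigger a replay of recorded production runs and fail the pipeline if any assertion regresses.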
Current status: the MVP is live - execution capture, session replay, fork/replay, and the Claude Code integration. The eval platform is shipping now. The SDK is zero-dependency.
Looking for teams actively debugging production agents who want to be early design partners. Happy to go deeper if this is a problem you are dealing with at scale. We would love for people to get their hands on this, test it against real agent runs, and tell us what would actually take the manual infra and agent-management overhead off your plate, so you can focus on iterating and getting to value quickly.