I just open-sourced Habit Sprint - a different take on habit tracking that works great with OpenClaw.
It’s not a checklist app with a chat wrapper on top.
It’s an AI-native engine that understands:
Weighted habits
“Don’t break the chain” streak logic
Sprint scoring
Category tradeoffs
And how those things interact
The idea started in 2012 with a simple spreadsheet grid to track daily habits.
In 2020, I borrowed the two-week sprint cycle from software development and applied it to personal growth.
Two weeks feels like the sweet spot:
Long enough to build momentum
Short enough to course-correct
Built-in retrospective at the end
What’s new is the interface.
You interact in plain language:
“I meditated and went to the gym today.”
“Log 90 minutes of deep work.”
“How consistent have I been this week?”
“Which category is dragging my score down?”
“Let’s run a habit retro.”
The model translates that into validated engine actions and returns clean markdown dashboards, sprint summaries, streak tracking, and retrospectives.
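To make the translation step concrete, here's a rough sketch of what the engine side of that contract could look like. The field names and the validate function are my own guesses for illustration - the real schema lives in SKILLS.md and may differ:

```python
import json

# Hypothetical action the LLM might emit for "Log 90 minutes of deep work".
# Field names are illustrative, not Habit Sprint's actual schema.
action = json.loads("""
{
  "action": "log_habit",
  "habit": "deep_work",
  "minutes": 90,
  "date": "2024-05-01"
}
""")

ALLOWED_ACTIONS = {"log_habit", "get_dashboard", "run_retro"}

def validate(a: dict) -> dict:
    """Reject anything outside the contract before it reaches the engine."""
    if a.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {a.get('action')!r}")
    if a["action"] == "log_habit":
        if not isinstance(a.get("habit"), str):
            raise ValueError("habit must be a string")
        if not isinstance(a.get("minutes", 0), int) or a.get("minutes", 0) < 0:
            raise ValueError("minutes must be a non-negative integer")
    return a

validated = validate(action)
print(validated["habit"], validated["minutes"])  # deep_work 90
```

The point of the strict contract is that a hallucinated or malformed action fails validation instead of corrupting the database.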
Under the hood:
Habits have weights based on behavioral leverage
Points accumulate based on weekly targets and consistency
Streaks are automatic
Two-week sprints support themes and experiments
Strict JSON contract between LLM and engine
Lightweight Python + SQLite backend
Structured SKILLS.md teaches the LLM the action schema
The user never sees JSON. The assistant becomes the interface.
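For a feel of the mechanics, here's a minimal sketch of weighted sprint scoring and automatic streaks over SQLite. The table layout, the target-capped scoring formula, and the sample habits are assumptions for illustration, not the project's real schema:

```python
import sqlite3
from datetime import date, timedelta

# Illustrative schema: habits carry a weight and a weekly target; logs are one row per completion.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE habits (name TEXT PRIMARY KEY, weight REAL, weekly_target INTEGER);
    CREATE TABLE logs (habit TEXT, day TEXT);
""")
db.execute("INSERT INTO habits VALUES ('meditate', 1.0, 7), ('gym', 2.0, 3)")

def log(habit: str, day: date) -> None:
    db.execute("INSERT INTO logs VALUES (?, ?)", (habit, day.isoformat()))

def sprint_score(start: date) -> float:
    """Weighted completion vs. target over a two-week sprint."""
    end = start + timedelta(days=14)
    total = 0.0
    for name, weight, target in db.execute("SELECT * FROM habits"):
        (done,) = db.execute(
            "SELECT COUNT(*) FROM logs WHERE habit=? AND day>=? AND day<?",
            (name, start.isoformat(), end.isoformat()),
        ).fetchone()
        # Cap each habit at 100% of its two-week target, then weight it.
        total += weight * min(done / (2 * target), 1.0)
    return total

def streak(habit: str, today: date) -> int:
    """Count consecutive days ending today with at least one log."""
    days = {d for (d,) in db.execute("SELECT day FROM logs WHERE habit=?", (habit,))}
    n = 0
    while (today - timedelta(days=n)).isoformat() in days:
        n += 1
    return n

start = date(2024, 5, 1)
for i in range(3):
    log("meditate", start + timedelta(days=i))
log("gym", start)

print(streak("meditate", start + timedelta(days=2)))  # 3
print(round(sprint_score(start), 3))                  # 0.548
```

Because streaks fall out of the log table, nothing has to be "marked" - the engine just counts backward from today.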
It works as an LLM skill for Claude Code, OpenClaw, or any agent that supports structured tool calls.
I’m really interested in what AI-native systems look like when the traditional “app UI” fades away and the assistant becomes the operating layer.
ericblue•2h ago
Curious what people think. Would love feedback.