The backstory: I spent three months on research before writing a line of code. Two things had to be true first: the content had to be genuinely different from the wellness space, and the monetization couldn't contradict what the product was trying to do.
On content: I read research on decision fatigue, attentional residue, and the paradox of choice. Most "thought of the day" apps are fortune cookies with better design. I wanted cards that made people sit with something uncomfortable, not cards that made them feel validated. 1,250 cards written and tagged across 12 content buckets before launch.
On monetization: most wellness apps monetize by maximizing session time, which is directly at odds with a mindfulness tool. I landed on charging for what the app actually does: a seven-day trial, then €1.99 a month or €39.99 lifetime. After the trial, content progressively blurs rather than hitting a hard paywall.
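The post only says content "progressively blurs" after the trial; the curve, cap, and function name below are my invention, just to show the shape of a soft paywall instead of a hard lock:

```typescript
// Hypothetical sketch of a progressive blur after trial expiry.
// The actual ramp and cap in the app are not disclosed in the post.
function blurRadius(daysSinceTrialEnd: number, maxRadius = 12): number {
  if (daysSinceTrialEnd <= 0) return 0;               // still in trial: fully readable
  return Math.min(maxRadius, daysSinceTrialEnd * 2);  // ramp up gradually, then cap
}
```

The point of the design is that a lapsed user still sees that a card exists today, which is a gentler nudge than a locked screen.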
What it does: One thought per day. A headline (max 10 words) and a body (max 120 words). You Carry it or Let Go. The app asks you to close it. Under two minutes. Cards are selected server-side by a Resonance Loop algorithm. Days 1 to 14 cycle through all 12 buckets. From day 15, selection weights toward buckets the person resonates with based on carry/let-go signals, with 20% exploration to prevent filter bubbles and 10% heat (cards with high community share rates).
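To make the selection step concrete, here is a sketch of how a 70/20/10 split between resonance, exploration, and heat could be computed. Only the 20% exploration and 10% heat figures come from the post; the `BucketStats` shape, the Laplace-smoothed carry ratio, and the function names are my assumptions, not the actual Cloud Function:

```typescript
// Hypothetical sketch of the Resonance Loop weighting described above.
interface BucketStats {
  id: string;
  carries: number;  // "Carry" signals for this bucket
  letGos: number;   // "Let Go" signals for this bucket
  heat: number;     // community share rate, 0..1
}

// Probability distribution over buckets:
// 70% resonance (smoothed carry ratio), 20% uniform exploration, 10% heat.
function bucketWeights(buckets: BucketStats[]): Map<string, number> {
  const resonance = buckets.map(
    b => (b.carries + 1) / (b.carries + b.letGos + 2) // Laplace smoothing for new buckets
  );
  const resSum = resonance.reduce((a, x) => a + x, 0);
  const heatSum = buckets.reduce((a, b) => a + b.heat, 0) || 1;

  const weights = new Map<string, number>();
  buckets.forEach((b, i) => {
    weights.set(
      b.id,
      0.7 * (resonance[i] / resSum) +
      0.2 * (1 / buckets.length) +
      0.1 * (b.heat / heatSum)
    );
  });
  return weights;
}

// Sample one bucket from that distribution.
function pickBucket(buckets: BucketStats[], rand: () => number = Math.random): string {
  let r = rand();
  for (const [id, w] of bucketWeights(buckets)) {
    r -= w;
    if (r <= 0) return id;
  }
  return buckets[buckets.length - 1].id; // float-rounding fallback
}
```

The uniform 20% term is what keeps a bucket the user has consistently let go of from ever reaching zero probability, which is the filter-bubble guard mentioned above.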
Built with Claude Code, as a non-engineer: I knew nothing about Swift, SwiftUI, Firebase, or Cloud Functions when I started. I knew what they were, but not from a code perspective. I built the whole thing with Claude Code as a pair programmer.
The unexpected thing: you can ask genuinely stupid questions without fear of judgment. I asked about async/await race conditions four different ways until it clicked. I can now read Cloud Function logs and debug Firestore rules. Not because I studied them, but because building the thing forced me to learn, and I had a patient collaborator the whole time.
The QA lesson: This is what I would tell anyone using AI-assisted development. The code comes fast. The bugs are subtle.
14 builds before App Store approval. A code review found notifications had silently died after day 7 because the scheduler was never called on app open. A widget timezone mismatch gave US users the wrong card. A reinstall sync bug wiped returning users' history every time. None of these were caught by the AI. All were caught by systematic, edge-case-focused testing.
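The widget timezone bug is worth illustrating because it is such a common class of AI-generated error. The actual code isn't in the post; this sketch just shows the pattern, assuming the card for a given day is keyed by a date string. Deriving that key from server UTC hands US evening users tomorrow's card:

```typescript
// Buggy pattern: "today" computed in UTC. At 01:00 UTC it is still
// the previous evening in New York, so the key is one day ahead
// of what the user perceives as today.
function dayKeyUtc(now: Date): string {
  return now.toISOString().slice(0, 10); // e.g. "2025-01-15"
}

// Fixed pattern: format the date in the user's IANA timezone.
// The en-CA locale conveniently renders as YYYY-MM-DD.
function dayKeyLocal(now: Date, timeZone: string): string {
  return new Intl.DateTimeFormat("en-CA", { timeZone }).format(now);
}
```

A unit test that pins a UTC instant near midnight and asserts the key in a US timezone is exactly the kind of edge-case check that caught this class of bug, and not the kind the AI wrote unprompted.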
The underrated skill in AI-assisted development is not better prompting. It is thorough QA.
For a longer read on the research and philosophy behind this: https://onegoodthing.space/blog/the-app-that-asks-you-to-clo...
App Store: https://apps.apple.com/app/one-good-thing/id6759391105
Happy to dig into the algorithm, the SwiftUI patterns, the Cloud Functions architecture, or the Claude Code workflow.
jlongo78•22m ago
counter-intuitive take: this "one thing then leave" model is actually how i use my agent sessions. one focused task, close it, context switch deliberately. treating attention like a finite resource instead of something to be monetized changes how you build tools entirely.
the apps that respect your time end up being the ones you actually trust.