I finally made Spit Notes, a purpose-built iOS app for songwriters that connects your audio to your lyrics so you never lose a song idea again.
For years, my songwriting process left my phone filled with hundreds of untitled voice memos ("New Recording 142") while a completely separate notes app held my lyrics. That clunky, fragmented workflow let sparks of song ideas slip through the cracks. Spit Notes is the app I wished existed to solve this.
What it does:
Unites melodies and lyrics: You can record audio directly next to a specific line of text. This preserves the crucial context between a melody and its corresponding lyric, so you're not trying to match a random voice memo to a note you wrote days ago.
Frictionless capture: The app is built for speed and simplicity. The moment inspiration strikes, you can capture it before it disappears. Just open the app, tap once, and you're recording.
Augments your creativity (doesn't replace it): While other apps are pushing AI-generated content, Spit Notes is deliberately focused on augmenting human creativity. It includes helpful tools for mechanical tasks like AI-powered transcription and a rhyme finder, but it doesn't try to write the song for you.
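For the technically curious, the core idea of recording audio "next to" a line of text boils down to anchoring each audio snippet to a lyric-line ID. Here's a minimal Swift sketch of that data model; these type and property names are my illustration of the concept, not Spit Notes' actual code:

```swift
import Foundation

// A single line of lyrics the user has written.
struct LyricLine: Identifiable, Codable {
    let id: UUID
    var text: String
}

// An audio snippet recorded in place, anchored to one lyric line.
struct AudioSnippet: Identifiable, Codable {
    let id: UUID
    let lyricLineID: UUID   // the anchor that keeps melody and lyric together
    let fileURL: URL
    let recordedAt: Date
}
```

Because every snippet carries a `lyricLineID`, the app can always show the melody alongside the words it belongs to, instead of leaving you to match a random voice memo to a note you wrote days ago.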
These are just a few of the features fueling my own songwriting, and I find I can finish songs more easily now: I can pick up in the middle of a song where the last audio snippet left off, immediately catch the vibe, and work out the next piece. I also added a custom lyric video feature so artists can share parts of their songs, or works in progress, in style, like this: https://www.instagram.com/p/DPRkTvSj3nf/
The most interesting part for the HN crowd might be how it was built. I'm not a native Swift developer, but I built this app in about three months using a "human-assisted" AI workflow. I acted as the architect and product lead, providing the vision and QA, while AI coding agents (a mix of Codex, Gemini, and Claude) handled the bulk of the implementation. A key lesson was maintaining an ARCHITECTURE.md file I could point the agents at, which kept them aligned with the big picture as the codebase grew. This process finally let me build the tool I'd been dreaming of for years.
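If you want to try the same approach, here's a hypothetical skeleton for that kind of file. The sections below are my suggestion for what works, not a quote from my actual ARCHITECTURE.md:

```markdown
# ARCHITECTURE.md

## Overview
One paragraph: what the app does and the major layers
(views -> view models -> services -> persistence).

## Modules
- Recording: audio capture wrapper, file naming, storage paths
- Lyrics: text model, line-level anchors for attached audio
- Transcription: async jobs, provider abstraction
- Sharing: lyric-video rendering pipeline

## Invariants
- Every audio snippet references exactly one lyric line.
- Audio files are never deleted without user confirmation.

## Conventions
- New features get a short design note here before implementation.
```

The point is less the exact sections and more that the agent re-reads a single, always-current source of truth at the start of each session, so its changes stay consistent with decisions made weeks earlier.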
You can download it on the App Store today. I’d love to hear what you think!
A side note on AI costs: I started out paying for Cursor with Opus 4, but after getting insanely good initial results and watching my Cursor costs rise, I remembered this post https://news.ycombinator.com/item?id=44167412 and took the plunge on the $200 Claude Code Max plan. That was incredibly valuable because it let me use Claude basically without limit. However, Gemini still had the biggest context window, and as the project grew I used Gemini to plan big features and find deep bugs across all of the AI-generated modules. But once the Codex CLI became available on Homebrew, it was a wrap. I cancelled my Claude Code Max plan and have been happily using Codex without ever hitting rate limits (other than when an update accidentally reduced rate limits instead of increasing them). Today I pay for ChatGPT Plus and the $20 Gemini plan and can clear most obstacles on the first or second prompt. I haven't tried Opus 4.5, but since I'm not really getting stuck with Codex, I'll stick with it for now.