There's research backing this up (Ludke et al. 2013 — singing foreign vocabulary improves short-term recall vs speaking it). The mechanism seems to be melody acting as a retrieval cue. I wanted to see if that could work as an intentional learning strategy, not just a happy accident.
How it works: you select vocabulary words you're learning, pick a music genre, and the app generates a song with those words in the lyrics. Then you practice with SRS flashcards and take quizzes on the lyrics (karaoke-style: the song pauses and you fill in the missing word).
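To make the quiz step concrete, here's a minimal sketch of a karaoke-style cloze item: blank the vocabulary word out of a lyric line, then check the user's guess. Names here are my own invention for illustration, not the app's actual code:

```typescript
// Hypothetical sketch of a fill-in-the-blank quiz item.
interface ClozeItem {
  prompt: string; // lyric line with the vocab word blanked out
  answer: string; // the missing word
}

function makeCloze(line: string, word: string): ClozeItem {
  // Escape regex metacharacters, then blank the first
  // case-insensitive occurrence of the word.
  const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp(escaped, "i");
  return { prompt: line.replace(re, "____"), answer: word };
}

function checkAnswer(item: ClozeItem, guess: string): boolean {
  // Lenient comparison: ignore surrounding whitespace and case.
  return guess.trim().toLowerCase() === item.answer.toLowerCase();
}
```

For example, `makeCloze("Der Hund läuft schnell", "Hund")` yields the prompt `"Der ____ läuft schnell"` with answer `"Hund"`. The real app presumably syncs the blank to the song's timing; that part is outside this sketch.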
Tech stack:

- React Native + Expo (development build, not Expo Go — needed for react-native-track-player)
- Supabase (auth, Postgres, edge functions, realtime)
- OpenAI gpt-4o-mini / Gemini for lyrics generation (2-stage: lyrics first, then line-by-line translations)
- Suno V4.5 for music generation via a proxy edge function (API key stays server-side)
- SM-2 spaced repetition algorithm with modifications (speed bonuses, combo tracking, leech detection at 8+ lapses)

Some interesting technical challenges:

- Lyric validation: the AI generates lyrics but must include ALL selected vocabulary words, with a retry mechanism (up to 2 retries) if any words are missing
- SRS concurrency: optimistic locking on practice_count to prevent race conditions when multiple sessions update the same word
- Music generation is 2-stage with a webhook callback (no polling)
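The lyric-validation retry loop can be sketched roughly like this. `generateLyrics` is a stand-in for the actual gpt-4o-mini / Gemini call, and all names are hypothetical; the real implementation may tokenize rather than substring-match:

```typescript
// Stand-in for the LLM call; on retry it receives the words it missed.
type LyricsGenerator = (
  words: string[],
  missing?: string[]
) => Promise<string>;

// Which selected vocabulary words are absent from the lyrics?
// (Case-insensitive substring match for simplicity.)
function missingWords(lyrics: string, words: string[]): string[] {
  const haystack = lyrics.toLowerCase();
  return words.filter((w) => !haystack.includes(w.toLowerCase()));
}

const MAX_RETRIES = 2; // matches the "up to 2 retries" above

async function generateValidatedLyrics(
  generate: LyricsGenerator,
  words: string[]
): Promise<string> {
  let lyrics = await generate(words);
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    const missing = missingWords(lyrics, words);
    if (missing.length === 0) return lyrics; // all words present
    // Retry, telling the model exactly which words it dropped.
    lyrics = await generate(words, missing);
  }
  return lyrics; // best effort after exhausting retries
}
```

Telling the model which words it dropped (rather than just regenerating blind) tends to converge faster, which matters when each retry costs a full LLM call.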
Solo dev, been building this for about 6 months. 6 languages supported (EN, DE, ES, FR, TR, IT). Free tier available, no account needed to browse.
Curious what HN thinks about the approach. Also happy to answer questions about the tech or the learning methodology.