I built a small iOS app that performs text rewriting and summarization entirely on-device using Apple’s FoundationModels framework (iOS 26+).
Most AI writing tools rely on cloud APIs. That introduces API costs, privacy concerns, and infrastructure overhead. I wanted to explore what it looks like to build a useful AI writing utility with:
- no backend
- no external APIs
- no server costs
- full offline support
The app can rewrite text in different tones, generate summaries, extract key points, and produce short post-ready output. Everything runs locally on supported devices.
Why I built it
I was curious whether on-device language models are “good enough” for everyday writing tasks like cleaning up emails, summarizing articles, or restructuring notes.
I also wanted to test a different economic model: since inference runs locally, the marginal cost per user is essentially zero. That let me avoid subscriptions and instead offer a simple free tier with an optional one-time unlock.
Technical notes
- SwiftUI main app
- Share Extension for system-wide text processing
- Apple FoundationModels framework for local inference
- No networking layer
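For anyone who hasn’t tried the framework yet, a rewrite call is only a few lines. This is a minimal sketch, not the app’s actual prompts: the instruction text and prompt layout are illustrative assumptions, and you should check `SystemLanguageModel` availability before showing any AI UI (the model can be unavailable on unsupported hardware or while assets download).

```swift
import FoundationModels

/// Rewrites `input` in the requested tone using the on-device model.
/// Sketch only — instruction and prompt wording are assumptions.
func rewrite(_ input: String, tone: String) async throws -> String {
    // Fall back gracefully if the on-device model isn't available
    // (unsupported device, Apple Intelligence disabled, assets downloading).
    guard case .available = SystemLanguageModel.default.availability else {
        return input
    }
    let session = LanguageModelSession(
        instructions: "Rewrite the user's text in the requested tone. Preserve meaning; do not add content."
    )
    let response = try await session.respond(
        to: "Tone: \(tone)\n\nText:\n\(input)"
    )
    return response.content
}
```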
Some interesting constraints compared to cloud LLMs:
- Smaller context window
- More sensitive prompt design
- Device-dependent performance
- Latency perception is critical in extensions
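The smaller context window is the constraint that shaped the code most: long articles have to be segmented before summarization. A rough sketch of the kind of chunking I mean, assuming a chars-per-token heuristic (the 4-chars-per-token figure and the 3000-token budget are illustrative guesses, not framework guarantees):

```swift
/// Splits long input into chunks that fit an assumed token budget,
/// breaking on paragraph boundaries so each chunk stays coherent.
/// The ~4 chars/token heuristic is a rough assumption, not an API value.
func segment(_ text: String, maxTokens: Int = 3000) -> [String] {
    let charBudget = maxTokens * 4  // rough chars-per-token estimate
    var chunks: [String] = []
    var current = ""
    for paragraph in text.components(separatedBy: "\n\n") {
        // Start a new chunk when adding this paragraph would exceed the budget.
        if !current.isEmpty, current.count + paragraph.count > charBudget {
            chunks.append(current)
            current = ""
        }
        current += current.isEmpty ? paragraph : "\n\n" + paragraph
    }
    if !current.isEmpty { chunks.append(current) }
    return chunks
}
```

Each chunk gets summarized independently, and the per-chunk summaries are then combined in a final pass — a standard map-reduce pattern for small context windows.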
On-device models are obviously less capable than large cloud models, but for constrained rewriting tasks they are surprisingly usable in practice.
If anyone is experimenting with FoundationModels or on-device inference, I’d be curious how you’re handling prompt structure and long input segmentation.
App Store link: https://apps.apple.com/us/app/rewrite-text-ai-writing-tool/i...
Happy to answer technical questions.