What it does: Point your phone at the sky and see real-time aircraft info overlaid on the camera feed. ADS-B data from community feeders, WebSocket streaming, and kinematic prediction to smooth positions between updates. No ARKit: just an AVFoundation camera, CoreLocation/CoreMotion, and math. SwiftUI overlays are positioned via GPS/heading projection.
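To make "GPS/heading projection" concrete, here's a rough sketch of the kind of math involved (illustrative Python, not the app's actual Swift; the function names, flat-earth approximation, and linear angle-to-pixel mapping are my assumptions):

```python
import math

def bearing_and_elevation(obs_lat, obs_lon, obs_alt_m,
                          ac_lat, ac_lon, ac_alt_m):
    """Bearing (degrees from true north) and elevation angle (degrees)
    from an observer to an aircraft. Flat-earth approximation, which is
    reasonable at typical plane-spotting distances."""
    m_per_deg_lat = 111_320.0  # approximate meters per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(obs_lat))
    north = (ac_lat - obs_lat) * m_per_deg_lat
    east = (ac_lon - obs_lon) * m_per_deg_lon
    ground_dist = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.atan2(ac_alt_m - obs_alt_m, ground_dist))
    return bearing, elevation

def screen_x(bearing, device_heading, h_fov_deg, screen_w):
    """Horizontal pixel position, assuming a simple linear mapping of
    angle across the horizontal field of view."""
    # signed angle between camera direction and aircraft, in (-180, 180]
    delta = (bearing - device_heading + 180.0) % 360.0 - 180.0
    return screen_w / 2.0 + (delta / h_fov_deg) * screen_w

# Aircraft roughly 10 km due east at 10 km altitude, phone facing east:
b, e = bearing_and_elevation(40.0, -105.0, 1600, 40.0, -104.882, 10_000)
x = screen_x(b, device_heading=90.0, h_fov_deg=60.0, screen_w=390)
```

With the phone pointed straight at the aircraft, `x` lands at the screen center; any heading or FOV error shifts it from there, which is why those two inputs get blamed first when boxes look wrong.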
The humbling part: I spent 2 months debugging "FOV calibration issues." Built an 800-line calibration UI, a Flask debug server, Jupyter notebooks for pipeline analysis, and extensive logging infrastructure. Hung a literal picture on the wall with a black rectangle of a specific size to "calibrate" the FOV reported by my phone. The AI helped me build all of it beautifully.
The actual bug? A UI scale factor on the overlay boxes. Not the FOV math. Not the coordinate projection. Just a scaleEffect() making things the wrong size. Commit message when I found it: "scale may have been fucking me over for a long time for 'fov issues'". Guess where that scaleEffect() call was introduced? That's right: AI-generated. At one point I'd asked it something like "ok, when you draw the boxes around the aircraft, make them smaller when the aircraft is farther away".
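For the curious, the failure mode is easy to reproduce in plain math (illustrative Python, not the app's Swift; the numbers and the shrink function are invented for the example): if the box size already comes from the aircraft's angular size, multiplying in an extra distance-based scale shrinks far boxes a second time, and the mismatch grows with range, which reads exactly like a wrong FOV.

```python
import math

H_FOV_DEG = 60.0    # assumed horizontal field of view
SCREEN_W = 390.0    # screen width in points
WINGSPAN_M = 36.0   # roughly an A320 wingspan

def box_width(dist_m, extra_scale=None):
    """Overlay box width in points for an aircraft at dist_m.
    The angular size already makes far aircraft smaller; an extra
    distance-based scale (the scaleEffect-style bug) shrinks them twice."""
    ang_deg = math.degrees(2 * math.atan2(WINGSPAN_M / 2, dist_m))
    w = (ang_deg / H_FOV_DEG) * SCREEN_W
    if extra_scale is not None:
        w *= extra_scale(dist_m)  # the redundant "smaller when farther" factor
    return w

# A plausible AI-added factor: halve the box roughly every 10 km.
shrink = lambda d: 10_000 / (10_000 + d)

near_ok  = box_width(2_000)          # correct: angular size alone
near_bug = box_width(2_000, shrink)
far_ok   = box_width(20_000)
far_bug  = box_width(20_000, shrink)
```

Because the buggy boxes shrink faster with distance than the correct ones, no single FOV value can make both the near and far boxes line up, which is why calibrating against a rectangle on the wall never converged.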
I went through 2-3 major model releases, testing each with something like "hey, I've been fighting a FOV bug for a while; can you please take a look and let me know if any issues jump out?" Gemini 3 Pro, Opus 4.5: none of them found the "bug".
Takeaways from vibe-coding a full product:
- AI is incredible at building things fast: entire features in minutes. The entire UI, website, logo, etc. were all AI-generated. Claude Opus 4.5 kind of sucks at UI; Gemini 3 cleaned all that up.
- AI will also confidently help you debug the wrong thing for weeks
- Still need to know when to step back and question your assumptions
- Deleted 2,700 lines of debug infrastructure once I found the real bug
- Low performance? Just tell the AI to rewrite it in a more performant language. (I load tested the process with 1,000 connections: with Python/Django, tons of drops and latency spikes to 5,000 ms. Switched to C# and now it handles 1,000 connections while keeping latency under 300 ms.)
Release process: painless, except for a test RevenueCat SDK key causing an instacrash. I hadn't tested the release build locally. Approved in 6 minutes on the 2nd submission.
Question: what are people using to get super accurate heading out of Apple devices? The estimated heading error never drops below 10°. It's about 50/50 between the projections being spot on and not that close.
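For a sense of why 10° matters, here's the back-of-envelope conversion to screen offset (my assumptions: a 60° horizontal FOV, a 390-point-wide screen, and a linear angle-to-pixel mapping):

```python
def heading_error_px(err_deg, h_fov_deg=60.0, screen_w_pts=390.0):
    """Horizontal screen offset caused by a compass heading error,
    under a linear angle-to-pixel mapping across the FOV."""
    return (err_deg / h_fov_deg) * screen_w_pts

off = heading_error_px(10.0)  # 10 degrees of heading error
```

That's 65 points of offset, roughly a sixth of the screen width, so at 10° of heading error a box can easily land next to the wrong aircraft.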
App link: https://apps.apple.com/us/app/skyspottr/id6756084687
blazingbanana•30m ago
> Still need to know when to step back and question your assumptions
Did you ever let the AI question your assumptions? I've found myself in a rut before, and just giving it the issue with as little of my own context as possible has helped surface what I need.
I'm curious how you found the bug in the first place? Was it during a vibe-code session or did you have a lightbulb moment?
Cool app btw.
auspiv•16m ago
And that peeling back was me looking at each function to see what it did (I am a dev, but not for SwiftUI). So yep, can't vibe code it all!