Since starting to work from home, I noticed my motivation to get dressed in the morning had tanked. I'd default to the same sweatpants, which started affecting my mood and productivity. I wanted something to nudge me to dress better—without making it a chore.
That’s why I built Springus, a wardrobe companion for iOS. Instead of manually cataloguing every item, Springus uses a multi-class segmentation model to build your digital closet from fit pics. The recommendation system then suggests outfits from clothing you actually own, aiming to reduce decision fatigue and help you find combinations you might not have considered.
The hardest part was making the segmentation work reliably with real-world photos — messy backgrounds, bad lighting, and all. I ended up training a custom model on hundreds of my own fit pics and some from friends, iterating until it was good enough to share.
I’ve been using Springus every day for the last 2 months. It’s free, and there’s no catch — I plan to monetize later by recommending clothes that fit your style, but right now, it’s just a passion project I wanted to share.
If you’re interested, I’d love feedback — especially on the segmentation accuracy and the outfit recommendations. What would make this genuinely useful for you?
badmonster•10h ago
1. Can the app differentiate one article of clothing from the background / other articles? 2. Can the app group together identical articles of clothing?
geooff_•9h ago
To answer 1: the app performs decently, with a test-set pixel-level mean accuracy of 0.80 and an mIoU of 0.69; the test set is all real-world fit pics from myself and friends. The 0.80 is a bit misleading, though, as the errors often occur at clothing boundaries, so in poor lighting there can be some border gore.
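For anyone curious what those numbers mean, this is roughly how they're computed offline, boiled down to a plain NumPy sketch (not the actual eval script):

    import numpy as np

    def pixel_acc_and_miou(pred, gt, num_classes):
        # pred, gt: 2D integer arrays of class ids, same shape
        acc = float((pred == gt).mean())
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            if union > 0:  # skip classes absent from both masks
                ious.append(inter / union)
        return acc, float(np.mean(ious))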
As for 2: that remains to be seen. Currently, clothing aggregation (grouping together two segmentations of the same shirt) is manual. I'm running some experiments on tuning cosine-similarity thresholds, but I think that long term there may need to be a more robust approach.
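The threshold experiments amount to: embed each segmented garment crop, then merge two crops into one wardrobe item when their cosine similarity clears a cutoff. Rough sketch (NumPy; the 0.85 cutoff is a placeholder rather than a tuned value, and the embeddings would come from whatever feature extractor ends up working):

    import numpy as np

    def cosine_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def group_garments(embeddings, threshold=0.85):
        # Greedy grouping: attach each crop to the first group whose
        # representative embedding clears the similarity threshold.
        groups, reps = [], []
        for i, e in enumerate(embeddings):
            for members, rep in zip(groups, reps):
                if cosine_sim(e, rep) >= threshold:
                    members.append(i)
                    break
            else:
                groups.append([i])
                reps.append(e)
        return groups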