I worked at a game company in Korea doing AI research (graphics, vision, and image generation) and built the in-house image generation service there. While reading generative AI papers, I came across virtual try-on research and had a realization: people will eventually shop by seeing products on themselves, not just by browsing photos of models. I started experimenting on weekends. The early results were rough, but promising enough that I left my job.
The core technical challenge: when you use image generation models to transfer someone's look onto another person, they either lose your identity or drop the style details. You ask it to transfer a specific makeup look and it gives you a completely different face, or an outfit loses its pattern and texture, or the hairstyle comes out flat. A prompt-only approach just isn't precise enough.
So I built a multi-stage pipeline — object detection, inpainting, and several other steps — to preserve your identity while accurately transferring style details.
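The post doesn't spell out the pipeline internals, but the core trick behind detection-plus-inpainting is simple to illustrate: only the detected region (face, garment, hair) is regenerated, and every pixel outside the mask is copied verbatim from the user's photo, which is what preserves identity. A minimal sketch of that compositing step, assuming the detector and the generator are separate stages that hand you a binary mask and a styled image (the function name and arrays here are hypothetical, not the app's actual code):

```python
import numpy as np

def composite_inpaint(original, generated, mask):
    """Paste a generated styled region back onto the original image.

    original, generated: (H, W, 3) float arrays in [0, 1]
    mask: (H, W) float array, 1.0 inside the edit region (e.g. produced
          by an object detector), 0.0 elsewhere. Pixels outside the mask
        come straight from the original, so identity is untouched there.
    """
    m = mask[..., None]  # add a channel axis so the mask broadcasts over RGB
    return m * generated + (1.0 - m) * original

# Toy example: restyle only the top half of a 4x4 image.
original = np.zeros((4, 4, 3))   # stand-in for the user's photo
generated = np.ones((4, 4, 3))   # stand-in for the model's styled output
mask = np.zeros((4, 4))
mask[:2, :] = 1.0                # detector says: edit the top half only

out = composite_inpaint(original, generated, mask)
```

In practice the mask edges would be feathered (a float mask between 0 and 1) to avoid visible seams, and the generator would be conditioned on the reference style rather than a text prompt alone, which is why a prompt-only approach keeps dropping details.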
Unlike preset filters or brand catalog try-ons, users share styles from their own everyday photos and anyone in the community can try that look on themselves with one tap. It works across three categories: beauty (makeup transfer), fashion (outfit try-on), and hair (style and color).
I launched in the US and Korea about a month ago. Still early and plenty to improve — would love honest feedback. Does the try-on quality feel convincing?
Demo: https://youtube.com/shorts/mDLkiV3D4rI
iOS: https://apps.apple.com/app/looktake-share-style-with-ai/id67...
Android: https://play.google.com/store/apps/details?id=io.looktake.ap...