We are very excited to finally be able to give back to this community today and release our first open-source model, Drape1.
We are a small, self-funded startup trying to crack AI for fashion. We started super early, when SD1.4 was all the rage, with the vision of building a virtual fashion camera: a camera that can one day generate visuals directly on online stores, for each shopper. And we tried everything:
Training a LoRA for every product is not scalable.
IP-Adapter was not accurate enough.
Try-on models like IDM-VTON worked OK, but they needed two generations and a lot of scaffolding in a user-facing app, particularly around masking.
We believe the ideal solution should generate an on-model photo from a single photo of the product and a prompt, in less than a second. At the time we couldn't find any solution that did this, so we trained our own:
Introducing Drape1, an SDXL adapter trained on 400k+ pairs of flat lays and on-model photos. It fits in 16 GB of VRAM (and probably less with more optimization). It works with any SDXL model and its derivatives, but we had the best results with Lightning models.
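If you want to try it, here is a rough sketch of the kind of diffusers wiring we have in mind. Treat the IP-Adapter-style loading call and the weight filename as illustrative placeholders, not our actual API; the model card (linked below) has the exact snippet:

    # Rough usage sketch -- the load_ip_adapter call, subfolder, and weight_name
    # are placeholders; see https://huggingface.co/Uwear-ai/Drape1 for the real code.
    import torch
    from diffusers import StableDiffusionXLPipeline
    from diffusers.utils import load_image

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # any SDXL checkpoint; Lightning gave us the best results
        torch_dtype=torch.float16,                   # fp16 keeps memory within ~16 GB of VRAM
    ).to("cuda")

    # Hypothetical weight name -- check the repo for the actual filename.
    pipe.load_ip_adapter("Uwear-ai/Drape1", subfolder="", weight_name="drape1.safetensors")
    pipe.set_ip_adapter_scale(0.8)

    flat_lay = load_image("flat_lay.jpg")  # a single product photo, no mask required

    image = pipe(
        prompt="a model wearing the garment, studio lighting, full body shot",
        ip_adapter_image=flat_lay,
        num_inference_steps=30,  # drop to 4-8 steps with a Lightning checkpoint and matching scheduler
    ).images[0]
    image.save("on_model.png")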
Drape1 got us our first 1,000 paying users and helped us reach our first $10,000 in revenue, but it struggled to capture fine details in clothing accurately.
For the past few months we've been working on Drape2, a FLUX adapter that we're actively iterating on to tackle those tricky small details and push the quality further. Our hope is to eventually open-source Drape2 as well, once we feel it has reached a mature state and we're ready to move on to the next generation.
HF: https://huggingface.co/Uwear-ai/Drape1
Let us know if you have any questions or feedback!