Hi! I made Thumby AI. It lets you create YouTube thumbnails
- from a text prompt
- by modifying existing images
- by swapping a face onto an image
I've been testing AI image generation for a few months now, and when I saw MrBeast launch and then un-launch his AI thumbnail maker, I decided to jump in and make it anyway.
Also, I "vibe-coded" it. I put that in quotes because I did a minor in CS in uni, and also had my brother (a CS grad) help me at times. But for the bulk of the code base, it was generated by AI (gemini 2.5).
That wasn't the hardest part, though. The hardest part was getting good, consistent results without each image generation taking multiple minutes. The model-routing layer that handles this is what I'm most proud of and what I think sets it apart from most AI thumbnail generators (although there's some good competition for sure).
I analyze each prompt and input image with GPT-4o, then choose which model to use based on the prompt and the input selections. Depending on that, the pipeline may also enhance the prompt with 4o before sending it to one of:
- gpt-image-1
- Ideogram v3 / v2
- Imagen 4
- Flux Kontext (trained)
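Roughly, the routing step looks like this (a minimal sketch using the OpenAI Python SDK; the categories, the model mapping, and the classification prompt are illustrative, not my exact production config):

```python
import json
from openai import OpenAI

client = OpenAI()

# Illustrative mapping -- the real routing rules are more involved.
ROUTES = {
    "text_heavy": "ideogram-v3",      # strongest at rendering text
    "photoreal": "imagen-4",
    "edit_existing": "flux-kontext",  # trained variant
    "complex_scene": "gpt-image-1",   # best quality, slowest/priciest
}

def route_prompt(prompt: str) -> str:
    """Ask GPT-4o to classify the request, then map it to a model."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Classify this thumbnail request as one of: text_heavy, "
                "photoreal, edit_existing, complex_scene. Reply as JSON: "
                '{"category": "..."}'
            )},
            {"role": "user", "content": prompt},
        ],
    )
    category = json.loads(resp.choices[0].message.content)["category"]
    return ROUTES.get(category, "gpt-image-1")
```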
Sometimes I have to send an image through multiple models. For example, gpt-image-1 already costs the most and takes the longest, and it only supports three output aspect ratios, none of which are 16:9. So I have to run those images a second time through Ideogram v3's reframe model to expand them.
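The 16:9 workaround, concretely (the gpt-image-1 sizes are real: 1024x1024, 1536x1024, 1024x1536, so no 16:9; the Ideogram call is a stubbed placeholder since I'm not reproducing its exact API here):

```python
from openai import OpenAI

client = OpenAI()

def ideogram_reframe(b64_image: str, width: int, height: int) -> bytes:
    """Placeholder for Ideogram v3's reframe (outpaint) endpoint."""
    raise NotImplementedError

def generate_16x9(prompt: str) -> bytes:
    # Step 1: generate at the widest ratio gpt-image-1 supports (3:2).
    img = client.images.generate(
        model="gpt-image-1", prompt=prompt, size="1536x1024"
    )
    # gpt-image-1 returns base64-encoded image data.
    base = img.data[0].b64_json
    # Step 2: expand the 3:2 result out to 1280x720 (16:9).
    return ideogram_reframe(base, width=1280, height=720)
```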
The final images are also shown to 4o once, to make sure they're not NSFW or obviously broken in some way, before being shown to the user.
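That check is just one more vision call, something like this (the pass/fail prompt wording here is a guess at a reasonable version, not the exact one I use):

```python
from openai import OpenAI

client = OpenAI()

def passes_final_check(image_url: str) -> bool:
    """One GPT-4o vision call: reject NSFW or visibly broken images."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Is this image NSFW, or obviously broken (garbled "
                    "faces, artifacts, corrupt regions)? Answer only "
                    "PASS or FAIL."
                )},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().upper().startswith("PASS")
```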
With all that said, some limitations:
- face swapping is hard. I wanted a simple "add a selfie to get a thumbnail" flow, which rules out training a model on a bunch of faces (which, in my testing, wasn't any better anyway). So far, gpt-image-1 plus 4o scanning both the base and target faces to enhance the prompt gets (very) good results, but it takes up to a minute or more (and needs further processing)
- text. Sometimes it gets messed up, especially when using face swap. You can avoid it by swapping faces first and adding text second, or by fixing an image with broken text afterward. I'm trying to add a check for this to the generation pipeline, but every additional check adds time
- style analysis is best for copying art style, not structure. This is temporary, since I have a pretty good idea for fixing it: feed a channel's recent thumbnails into a vision model, output a JSON description of the layout, and append that as an instruction for the style-copied image (see the sketch after this list)
- GIGO. a vague, nondescriptive prompt will give bad or random results.
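For the structure fix, the sketch I have in mind looks roughly like this (the JSON fields are just my guess at what's useful to extract; nothing here is final):

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_layout(thumbnail_urls: list[str]) -> dict:
    """Summarize a channel's recent thumbnails into a JSON layout spec."""
    content = [{"type": "text", "text": (
        "Describe the shared layout of these thumbnails as JSON: "
        '{"face_position": "...", "text_position": "...", '
        '"text_style": "...", "color_palette": "..."}'
    )}]
    content += [{"type": "image_url", "image_url": {"url": u}}
                for u in thumbnail_urls]
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": content}],
    )
    return json.loads(resp.choices[0].message.content)

# Appended to the style-copy prompt as a structural instruction:
# prompt += "\nMatch this layout: " + json.dumps(extract_layout(recent))
```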
Anyway, I'd love it if you guys could check it out and let me know what you think. It's my first time making anything and I'm sure I've made plenty of mistakes. I don't mind criticism <3
thumbyai.com
- Shahmir