Back in the Windows era, I’d spend hours browsing wallpapers and changing them constantly, even making my own. Same with Winamp skins. At some point I started using 3D software like 3ds Max just to experiment and create visuals for myself.
Later I got into photography and ended up creating a few wallpaper packs, some sold on Etsy, some given away on Unsplash and similar sites. Point is, this isn’t a new interest for me. I’ve spent a lot of time thinking about what makes a wallpaper worth keeping.
When Midjourney started taking off, I thought AI would finally give me the flexibility to create new wallpapers quickly while still keeping a sense of style and intent. That’s what led me to build an AI wallpaper app.
For the first version, I used Google’s Nano Banana models. From a technical standpoint they’re very solid: photography and realism are strong, lighting is consistent, and the output is generally technically correct.
But after a lot of prompt refinement, I kept running into the same issue: the images lacked soul. They were clean and realistic, but rarely something you’d actually want to live with on your lock screen. Good images, boring wallpapers.
On top of that, prompting turned out to be a real problem. Many users would just type things like “dark wallpaper” or “colorful art”. Even with guidance, it was hard to get consistently good results. Retention suffered as a result.
My first attempt to fix this was building a chat-style interface to better extract intent and avoid low-effort prompts. That helped somewhat, but it still didn’t unlock the “wow” factor I was looking for.
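To make the idea concrete, here is a purely illustrative sketch of what that chat-style flow might look like: instead of sending a vague prompt straight to the image model, ask a few follow-up questions and expand the prompt from the answers. Every name and question here is made up for illustration; this is not the app’s actual code.

```python
# Hypothetical sketch: turn a low-effort prompt like "dark wallpaper"
# into a richer one by asking follow-up questions first.
# All questions, keys, and style suffixes below are invented examples.

FOLLOW_UPS = {
    "mood": "Calm, energetic, or moody?",
    "subject": "Abstract shapes, nature, or architecture?",
    "palette": "Dark, pastel, or vivid colors?",
}

def enrich_prompt(user_prompt: str, answers: dict) -> str:
    """Combine the raw prompt with intent extracted from the chat answers."""
    details = ", ".join(f"{k}: {v}" for k, v in answers.items())
    # Append wallpaper-specific constraints so the model frames for a phone.
    return (
        f"{user_prompt}, {details}, "
        "phone wallpaper, 9:16, strong focal point, minimal clutter"
    )

print(enrich_prompt("dark wallpaper",
                    {"mood": "moody", "subject": "nature", "palette": "dark"}))
```

The point of the pattern is that the model never sees the bare two-word prompt; it always gets the enriched version.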
Recently I started experimenting with more stylistic models, particularly LeonardoAI, and the difference was immediate. The images are less realistic, but they have far more personality. They feel designed rather than just generated. For wallpapers, that tradeoff seems to matter more than raw fidelity.
I’m now testing a hybrid approach, mixing the realistic and stylistic models depending on the request, and collecting feedback.
For context, the app I’m building is here: https://tallpaper.app (not a launch post, just sharing since it’s directly related)
At this point, I’m increasingly convinced that model aesthetics matter more than we tend to admit, and that choosing the “best” model is less about benchmarks and more about whether its biases align with the emotional job the product is meant to do.
bored-developer•1h ago