Every AI generation consumes credits, and the results are like opening a blind box: they rarely match specific needs. For example, I might want assets of a specific size and type, or several assets of the same type in a consistent style.
What I'm wondering is: can we constrain a general-purpose AI's image generation to a specific domain? I don't want to see what the AI can come up with; I want the AI to give me exactly what I asked for.
For example, to make a 2D game I need a set of usable game assets, so the tool should offer parameter options: what kind of asset (character, item, or scene background)? What style (hand-drawn, pixel, cartoon, or realistic)?
Or, to create a click-worthy social media image, I'd pick parameters like the target platform, aspect ratio, and copy text. The AI's output would then be constrained by these goals and parameters.
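To make the idea concrete, here is a minimal sketch of what such a constrained parameter layer could look like for the game-asset scenario. All names, option lists, and the prompt template are assumptions for illustration, not any real API; the social-image case would follow the same pattern with platform, aspect ratio, and copy as the fields.

```python
from dataclasses import dataclass

# Hypothetical option lists; a real tool would curate these per scenario.
ASSET_TYPES = {"character", "item", "background"}
STYLES = {"hand-drawn", "pixel", "cartoon", "realistic"}

@dataclass
class AssetRequest:
    asset_type: str   # one of ASSET_TYPES
    style: str        # one of STYLES
    width: int        # pixels
    height: int       # pixels

    def to_prompt(self) -> str:
        """Turn the constrained parameters into a prompt for a general model."""
        if self.asset_type not in ASSET_TYPES:
            raise ValueError(f"unknown asset type: {self.asset_type}")
        if self.style not in STYLES:
            raise ValueError(f"unknown style: {self.style}")
        return (
            f"{self.style} style 2D game {self.asset_type}, "
            f"{self.width}x{self.height}, transparent background, "
            f"single centered subject, no text"
        )

req = AssetRequest(asset_type="character", style="pixel", width=64, height=64)
print(req.to_prompt())
```

The point of the layer is that the user never writes a free-form prompt: they only choose from validated options, and the tool owns the prompt template.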
Also, to reduce the blind-box effect, each generation could first produce a batch of small preview images, several at a time. You pick the one you like, then generate the full-size asset. Afterward, the tool could offer secondary processing: generating more content in the same style from that asset, or even converting the asset to a different style.
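The preview-then-finalize flow above might be sketched like this. `generate_image` is a placeholder for whatever backend is used (here a stub that returns a label so the control flow runs); the sketch assumes the backend accepts a seed, so re-rendering the chosen preview's seed at full size keeps the same composition.

```python
import random

def generate_image(prompt: str, size: int, seed: int) -> str:
    # Stub standing in for a real image-generation call.
    return f"image(prompt={prompt!r}, size={size}, seed={seed})"

def preview_then_finalize(prompt: str, n_previews: int = 8,
                          preview_size: int = 256, full_size: int = 1024):
    # Step 1: many cheap, small previews, each with a different seed.
    seeds = [random.randrange(2**32) for _ in range(n_previews)]
    previews = [generate_image(prompt, preview_size, s) for s in seeds]
    # Step 2: the user picks one (hard-coded here); reusing its seed at
    # full size aims to reproduce the chosen composition.
    chosen = 0  # would come from user input
    final = generate_image(prompt, full_size, seeds[chosen])
    return previews, final
```

Whether the seed trick actually preserves composition at a different resolution depends on the model, so that assumption is one of the things worth checking early.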
These are just my personal thoughts. I'm asking the community whether this is feasible, what problems it might run into, and what other scenarios need to be considered.