vunderba•1w ago
Good article. So this is sort of a tangent, but here's a bit of advice as someone who makes heavy use of GenAI imagery to illustrate the articles I write.
Never use out-of-the-box images of CRT computers - 99% of the time the keyboards are an ergonomic train wreck and the text on the screen is a smeary blurry mess.
See the image here for a good example:
https://mordenstar.com/blog/win9x-hacks
It's a GenAI image from NB Pro layered with a loop of the Win95 startup sequence, combined into a single animated GIF. Notice I sidestepped the inclusion of a keyboard altogether.
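For anyone curious, the compositing step itself is pretty mundane - roughly the following Pillow sketch, where the file names, frame count, and screen coordinates are all placeholders rather than my actual values:

    from PIL import Image

    # Static GenAI render of the CRT (screen left blank) plus the Win95 boot frames.
    base = Image.open("crt_scene.png").convert("RGBA")              # placeholder path
    boot = [Image.open(f"win95_boot_{i:02d}.png").convert("RGBA")   # placeholder frames
            for i in range(12)]

    SCREEN_BOX = (412, 188, 780, 472)   # placeholder corners of the CRT screen (x1, y1, x2, y2)
    screen_size = (SCREEN_BOX[2] - SCREEN_BOX[0], SCREEN_BOX[3] - SCREEN_BOX[1])

    frames = []
    for shot in boot:
        frame = base.copy()
        frame.paste(shot.resize(screen_size), SCREEN_BOX[:2])   # drop the boot frame onto the screen
        frames.append(frame.convert("RGB"))                     # Pillow quantizes to a GIF palette on save

    # Write all composited frames out as a single looping animated GIF.
    frames[0].save(
        "crt_win95.gif",
        save_all=True,
        append_images=frames[1:],
        duration=120,   # ms per frame
        loop=0,         # 0 = loop forever
    )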
Now, more topically: since the actual list of IF games doesn't appear to be a secret, I think it would have been better to feature it more prominently in the article, rather than tucking it away in a side note in the footer.
kqr•1w ago
> Never use out-of-the-box images of CRT computers
Thanks for the feedback! I'm very new to GenAI imagery and still finding my feet.
Seeing the results, I did consider compositing a real photograph of a computer with the rest of the landscape, but decided against it for lack of time.
vunderba•1w ago
Cool - GenAI image generation is a deep rabbit hole that you're about to fall into!
Also super happy that you pitted the LLMs against relatively recent IF to mitigate cheating through pre-existing training data.
FYI, I've been running a SOTA model comparison site for about a year now that looks at prompt adherence across local models (Qwen-Image, Flux) vs. proprietary ones (NB Pro, Seedream). It might help give you an idea of where the capabilities are today:
https://genai-showdown.specr.net
kqr•1w ago
Oh, wow, thanks! I've only been using Midjourney, but the other models you showcase really do adhere much better to the details of the prompt. Do you know how I can get them to adhere to style suggestions better? They seem to be biased toward photorealism, but that's not the vibe I'm going for. (I tried both Gemini 3.0 Pro and Seedream.)
vunderba•1w ago
Style is frustrating, honestly.
You can try manually describing the style (for example, using modifiers or referencing similar art mediums like chiaroscuro, acrylic, etc.), and this can work.
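As a rough illustration of what that looks like with one of the local models (a minimal diffusers sketch with Flux; the prompt and the modifiers themselves are just examples, not a recommendation):

    import torch
    from diffusers import FluxPipeline

    # Example style modifiers tacked onto an otherwise plain prompt.
    style = "gouache illustration, muted palette, visible brushwork, chiaroscuro lighting"
    prompt = f"a beige CRT computer on a desk in a forest clearing, {style}"

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
    image.save("styled_render.png")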
Alternatively, if you have a specific style in mind:
Collect several image examples of the style you like, then either run them through CLIP or feed them into a multimodal model such as gpt-image-1.5. Ask it to generate a list of stylistic modifiers (keywords) that you can then append to prompts for NB / Seedream.
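The CLIP route can be as simple as scoring a list of candidate keywords against your reference images and keeping the winners. A rough open_clip sketch (the candidate list, file paths, and model choice are all placeholders):

    import torch
    import open_clip
    from PIL import Image

    # Load an off-the-shelf CLIP model and its preprocessing transform.
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k"
    )
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    # Candidate style keywords and reference images (placeholders).
    candidates = ["watercolor", "cel shading", "gouache", "chiaroscuro",
                  "flat pastel colors", "ink wash", "film grain", "risograph print"]
    paths = ["ref_01.jpg", "ref_02.jpg", "ref_03.jpg"]

    images = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    text = tokenizer(candidates)

    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(text)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        # Average similarity of each keyword across all reference images.
        scores = (img_feat @ txt_feat.T).mean(dim=0)

    top = scores.topk(4)
    modifiers = [candidates[i] for i in top.indices]
    print(", ".join(modifiers))   # append these to your NB / Seedream prompt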
However, if the style is sufficiently underrepresented in the training data, no amount of prompting will fully overcome that limitation. Here's an example where I tried to see whether NB Pro could recreate some of Yoichi Kotabe's earlier work, but it just defaulted to a pretty generic-looking illustrative style:
https://imgur.com/a/failed-style-transfer-nb-pro-o3htsKn