The interesting part was getting Y2K horror aesthetics right. I'm using Gemini to enhance user prompts with specific visual markers - chromatic aberration, CRT scan lines, low-res video grain, that specific 1999-2003 digital camera look. Then routing to Together AI or Replicate depending on load.
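The enhancement step boils down to wrapping the user's intent with those era-specific markers before the prompt reaches an image backend. A minimal sketch of that shape (the function name and marker list are illustrative, not the production prompt; in the real app Gemini does this rewriting):

```python
# Hypothetical sketch of the prompt-enhancement step. In production a Gemini
# call rewrites the prompt, but the output shape is the same:
# user intent + Y2K-era visual markers.
Y2K_MARKERS = [
    "chromatic aberration",
    "CRT scan lines",
    "low-res digital video grain",
    "early-2000s digital camera look, 1999-2003",
]

def enhance_prompt(user_prompt: str) -> str:
    """Append Y2K-horror visual markers to a raw user prompt."""
    return f"{user_prompt.strip()}, {', '.join(Y2K_MARKERS)}"

enhanced = enhance_prompt("abandoned mall at midnight")
```

The advantage of doing this with an LLM instead of static string concatenation is that Gemini can weave the markers into the scene description rather than just appending them, which tends to produce more coherent images.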
Technical choices:

- Gemini for prompt engineering (cheaper than GPT-4 for this)
- Together AI as primary backend (better pricing)
- Replicate/fal.ai as fallback
- No database needed since everything's stateless
- Output is PNG; generation takes around 10-30 seconds
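The fallback routing is just an ordered retry: try the primary backend, and if it errors out, move down the list. A sketch with stub backends (the function names and signatures are assumptions for illustration; the real code would use the Together AI and Replicate/fal.ai clients with timeouts and narrower exception handling):

```python
# Hypothetical fallback router: call image backends in priority order and
# return the first successful result. Stubs stand in for real API clients.
from typing import Callable

def generate_with_fallback(prompt: str,
                           backends: list[Callable[[str], bytes]]) -> bytes:
    """Try each backend in order; raise only if all of them fail."""
    errors: list[Exception] = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:  # real code: specific errors + timeouts
            errors.append(exc)
    raise RuntimeError(f"all backends failed: {errors}")

# Stub backends for illustration only
def together_ai(prompt: str) -> bytes:
    raise TimeoutError("primary overloaded")

def replicate_fallback(prompt: str) -> bytes:
    return b"PNG bytes for: " + prompt.encode()

image = generate_with_fallback("y2k hallway", [together_ai, replicate_fallback])
```

Keeping the router this dumb is what makes the "no database" choice work: there's no queue or job state to persist, just a request that either returns bytes or fails.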
The hardest part was keeping "horror" from sliding into generic creepypasta. Y2K horror has a specific vibe - early internet anxiety, millennium bug paranoia, that weird transitional tech period. I added reference images from actual early 2000s horror media to ground the prompt engineering.
Current limitations: sometimes the model goes too heavy on the glitch effects. And the horror aspect can be hit or miss - AI models seem to default to either "ghost girl" or "VHS static" when you say horror.
Code isn't open source yet but considering it. Main concern is API costs if it gets hammered.
Try it: https://dreamyy2k.app
Would love feedback on the aesthetic accuracy. Did anyone else live through actual Y2K and remember what digital horror looked like back then?