9 AIs × 43,200 minutes = 388,800 requests/month
388,800 requests × 200 tokens = 77,760,000 tokens/month ≈ 78M tokens
Cost varies from 10 cents to $1 per 1M tokens.
Using a mid-range price of about $0.55 per 1M tokens, that works out to roughly $43/month, call it $50.
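In code, for anyone who wants to tweak the assumptions (token count and price are both guesses, not measured figures):

    // Back-of-envelope for the numbers above; token count and price are assumptions.
    const models = 9;
    const requestsPerModel = 60 * 24 * 30;               // one request per minute for 30 days = 43,200
    const tokensPerRequest = 200;                         // assumed average output size
    const totalTokens = models * requestsPerModel * tokensPerRequest;  // 77,760,000
    const midPricePerMillionUsd = 0.55;                   // assumed midpoint of the $0.10–$1 range
    console.log((totalTokens / 1e6) * midPricePerMillionUsd);  // ≈ 42.77 → call it $50/month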
---
Hopefully, the OP has this endpoint protected - https://clocks.brianmoore.com/api/clocks?time=11:19AM
It's perhaps the best example I have seen of model drift driven by just small, seemingly unimportant changes to the prompt.
What changes to the prompt are you referring to?
According to the comment on the site, the prompt is the following:
Create HTML/CSS of an analog clock showing ${time}. Include numbers (or numerals) if you wish, and have a CSS animated second hand. Make it responsive and use a white background. Return ONLY the HTML/CSS code with no markdown formatting.
The prompt doesn't seem to change.
In a world where Javascript and Electron are still getting (again, rightfully) skewered for inefficiency despite often exceeding the performance of many compiled languages, we should not dismiss the discussion around efficiency so easily.
60x24x30 ≈ 43k AI calls per month per model. Let's suppose there are 1000 output tokens (might it be 10k tokens? Seems like a lot for this task). So roughly 43M tokens per model.
The price for 1M output tokens[0] ranges from $0.10 (qwen-2.5) to $60 (GPT-4). So about $4/mo for the cheapest, and about $2.6k/mo for the most expensive.
So this might cost several thousand dollars a month? Something smells funny. But you're right, throttling it to once an hour would achieve a similar goal and likely cost less than $100/mo (which is still more than I would spend on a project like this).
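A quick sketch of that range, including the once-an-hour throttle, with all token counts and prices as assumptions rather than anything measured:

    // Per-model monthly cost across the quoted price range; figures are assumptions.
    const callsPerMonthPerMinute = 60 * 24 * 30;   // once a minute ≈ 43,200
    const callsPerMonthHourly = 24 * 30;           // once an hour = 720
    const outputTokensPerCall = 1000;              // assumed
    const pricePerMillionUsd = { cheapest: 0.10, priciest: 60 };

    for (const [tier, price] of Object.entries(pricePerMillionUsd)) {
      const perMinuteCost = (callsPerMonthPerMinute * outputTokensPerCall / 1e6) * price;
      const hourlyCost = (callsPerMonthHourly * outputTokensPerCall / 1e6) * price;
      console.log(`${tier}: ~$${perMinuteCost.toFixed(0)}/mo, ~$${hourlyCost.toFixed(2)}/mo if throttled hourly`);
    }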
But I presume you light up Christmas lights in December, drive to the theater to watch a movie, or fire up a campfire on holiday. That too is "wasteful". It's not needed; other, far more efficient ways exist to achieve the same. And in absolute numbers, it's far more energy intensive than running an LLM to create 9 clocks every minute. We do things to learn, have fun, be weird, make art, or just spend time.
Now, if Rolex starts building watches by running an LLM to drive its production machines or if we replace millions of wall clocks with ones that "Run an LLM every second", then sure, the waste is an actual problem.
The point I'm trying to make is that it's OK to consider or debate the energy use of LLMs compared to alternatives. But bringing up that debate in a context where someone is being creative, or having fun, is not, IMO. Because a lot of "fun" activities use a lot of energy, and that isn't automatically "wasteful" either.
I would not make such assumptions.
> The example in the article shows that the prompt is limiting the LLM by giving it access to only 2000 tokens and also saying "ONLY OUTPUT ..."
The site is pretty simple and the method is pretty straightforward. If you believe this is unfair, you can always build one yourself.
> It's just stupid.
No, it's a great way of testing things within constraints.
I could not get to the store because of the cookie banner that does not work (at least on mobile chrome and ff). The Internet Archive page: https://archive.ph/qz4ep
I wonder how this test could be modified for people that have neurological problems - my father's hands shake a lot but I would like to try the test on him (I do not have suspicions, just curious).
I passed it :)
I'd be interested if anyone else is successful. Share how you did it!
Nano Banana can be prompt engineered for nuanced AI image generation - https://news.ycombinator.com/item?id=45917875 - Nov 2025 (214 comments)
gpt-image-1 and Imagen are wickedly smart.
The new Nano Banana 2 that has been briefly teased around the internet can solve incredibly complicated differential equations on chalk boards with full proof of work.
That's great, but I bet it can't tie its own shoes.
Put another way, it was hoped that once the dataset got rich enough, developing this understanding is actually more efficient for the neural network than memorizing the training data.
The useful question to ask, if you believe the hope is not bearing fruit, is why. Point specifically to the absent data or the flawed assumption being made.
Or more realistically, put in the creative and difficult research work required to discover the answer to that question.
I use this a lot in cybersecurity when I need to do something "illegal". I am refused help, until I say that I am doing research on cybersecurity. In that case no problem.
Once companies see this starting to show up in the evals and criticisms, they'll go out of their way to fix it.
My prompt to Grok:
---
Follow these rules exactly:
- There are 13 hours, labeled 1–13.
- There are 13 ticks.
- The center of each number is at angle: index * (360/13)
- Do not infer anything else.
- Do not apply knowledge of normal clocks.
Use the following variables:
HOUR_COUNT = 13
ANGLE_PER_HOUR = 360 / 13 // 27.692307°
Use index i ∈ [0..12] for hour marks:
angle_i = i * ANGLE_PER_HOUR
I want html/css (single file) of a 13-hour analog clock.
---
Output from grok.
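For reference (my own sketch, not Grok's output), the hour-mark geometry the prompt specifies can be computed directly; the .clock container and its --r radius variable are assumptions on my part:

    // Place 13 evenly spaced hour marks per the spec above (i ∈ [0..12]).
    const HOUR_COUNT = 13;
    const ANGLE_PER_HOUR = 360 / HOUR_COUNT;       // ≈ 27.692°

    function hourMarkStyle(i: number): string {
      const angle = i * ANGLE_PER_HOUR;            // 0° at the top, increasing clockwise
      // Rotate to the mark's angle, push it to the rim, then counter-rotate the label upright.
      return `transform: rotate(${angle}deg) translateY(calc(-1 * var(--r))) rotate(${-angle}deg);`;
    }

    for (let i = 0; i < HOUR_COUNT; i++) {
      console.log(`.mark-${i + 1} { ${hourMarkStyle(i)} }`);  // mark-1 .. mark-13
    }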
Can grok generate images? What would the result be?
I will try your prompt on chatgpt and gemini
Same for chatgpt
And perplexity replaced 12 with 13
This gave me a correct clock face on Gemini- after the model spent a lot of time thinking (and kind of thrashing in a loop for a while). The functionality isn't quite right, not that it entirely makes sense in the first place, but the face - at least in terms of the hour marks - looks OK to me.[0]
[0] https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
"Here's the line-by-line specification of the program I need you to write. Write that program."
If a clock had 13 hours, what would be the angle between two of these 13 hours?
Generate an image of such a clock
No, I want the clock to have 13 distinct hours, with the angle between them as you calculated above
This is the same image. There need to be 13 hour marks around the dial, evenly spaced
... And its last answer was
You are absolutely right, my apologies. It seems I made an error and generated the same image again. I will correct that immediately.
Here is an image of a clock face with 13 distinct hour marks, evenly spaced around the dial, reflecting the angle we calculated.
And the very same clock, with 12 hours, and a 13th above the 12...
"You're absolutely right! I made a mistake. I have now comprehensively solved this problem. Here is the corrected output: [totally incorrect output]."
None of them ever seem to have the ability to say "I cannot seem to do this" or "I am uncertain if this is correct, confidence level 25%". The only time they will give up or refuse to do something is when they are deliberately programmed to censor for often dubious "AI safety" reasons. All other times, they come back again and again with extreme confidence while producing total garbage output.
It is like they are sometimes stuck in a local energetic minimum and will just wobble around various similar (and incorrect) pictures.
What was annoying in my attempt above is that the picture was identical every time.
Generate an image of a clock face, but instead of the usual 12 hour numbering, number it with 13 hours.
Gemini, 2.5 Flash or "Nano Banana" or whatever we're calling it these days. https://imgur.com/a/1sSeFX7
A normal (ish) 12h clock. It numbered it twice, in two concentric rings. The outer ring is normal, but the inner ring numbers the 4th hour as "IIII" (fine, and a thing that clocks do) and the 8th hour as "VIIII" (wtf).
We have yet to design a language to cover that, and it might be just a donquijotism we're all diving into.
We have a very comprehensive and precise spec for that [0].
If you don't want to click through the certificate warning, here's the transcript:
- Some day, we won't even need coders any more. We'll be able to just write the specification and the program will write itself.
- Oh wow, you're right! We'll be able to write a comprehensive and precise spec and bam, we won't need programmers any more.
- Exactly
- And do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?
- Uh... no...
- Code, it's called code.
[0]: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
Granted, it is not a clock - but it could be art. It looks like a Picasso. When he was drunk. And took some LSD.
Also, your example is not showing the current time.
> Please generate an analog clock widget, synchronized to actual system time, with hands that update in real time and a second hand that ticks at least once per second. Make sure all the hour markings are visible and put some effort into making a modern, stylish clock face.
Followed by:
> Currently the hands are working perfectly but they're translated incorrectly making then uncentered. Can you ensure that each one is translated to the correct position on the clock face?
[0] https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
Some months ago I published this site for fun: https://timeutc.com
There's a lot of code involved to make it precise to the ms, including adjusting for network delay, using the frame refresh rate instead of setTimeout, and much more. If you are curious, take a look at the source code.
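The basic shape of rendering on the frame refresh rate instead of a setTimeout loop, as a minimal sketch (not the site's actual code; serverOffsetMs below is a placeholder for a separately measured network-delay correction):

    // Minimal sketch: repaint on every animation frame instead of a setTimeout loop,
    // shifted by a separately measured server/network offset (placeholder below).
    const serverOffsetMs = 0;  // assumed: replace with your own measured correction

    function render(el: HTMLElement): void {
      const now = new Date(Date.now() + serverOffsetMs);
      el.textContent = now.toISOString().slice(11, 23);  // HH:MM:SS.mmm in UTC
      requestAnimationFrame(() => render(el));           // re-runs at the display refresh rate
    }

    render(document.getElementById("clock")!);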
Qwen 2.5's clocks, on the other hand, look like they never make it out of the womb.
More like fell headfirst into the ground.
I love clocks and I love finding the edges of what any given technology is capable of.
I've watched this for many hours and Kimi frequently gets the most accurate clock but also the least variation and is most boring. Qwen is often times the most insane and makes me laugh. Which one is "better?"
When it fails a couple of times it will try to put logging in place and then confidently tell me things like "The vertex data has been sent to the renderer, therefore the output is correct!" When I suggest it take a screenshot of the output each time to verify correctness, it does, and then declares victory over an entirely incorrect screenshot. When I suggest it write unit tests, it does so, but the tests are worthless and only verify that the incorrect code it wrote keeps producing the same incorrect output.
When it fails even more times, it will get into what I like to call "intern engineer" mode, where it just tries random things that I know are not going to work. And if I let it keep going, it will end up modifying the entire source tree with random "try this" crap. And each iteration, it confidently tells me: "Perfect! I have found the root cause! It is [garbage bullshit]. I have corrected it and the code is now completely working!"
These tools are cute, but they really have a long way to go before they are actually useful for anything more than trivial toy projects.
This gives better results, at least for me.