KolenCh•1h ago
My wife is doing a freelance project where, basically, humans evaluate how good an LLM-generated prompt is and whether there are any problems to fix. Sometimes the humans are amazed and can't believe the prompts were written by LLMs (other times they are flabbergasted by how bad they are, of course).
So it has definitely been improved in this area, intentionally.
iguana2000•1h ago
Makes sense. This is probably one of the few clear examples of models getting better by learning from online content written about themselves.
vunderba•21m ago
This has been a thing for a while now - particularly around early versions of image generation models. It wasn't entirely uncommon to wire up a small Llama-based LLM in a ComfyUI workflow to enrich and/or rewrite simpler prompts using SD 1.5-style notation ("hyperrealistic, octane render, 8k, etc.").
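For illustration, here is a minimal sketch of that enrichment step: a small local LLM rewrites a terse prompt into SD 1.5-style tags before it goes to the image model. The endpoint URL and model name are assumptions (any OpenAI-compatible local server fronting a Llama model would look similar), and this is a standalone function, not an actual ComfyUI node.

```python
# Sketch of a prompt-enrichment step: ask a small local LLM to expand a
# short prompt into SD 1.5-style tag notation. The endpoint and model name
# below are assumptions, not a specific ComfyUI integration.
import requests

LLM_ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed local server

SYSTEM_PROMPT = (
    "Rewrite the user's image prompt as a comma-separated list of SD 1.5-style "
    "tags. Keep the subject, then add quality/lighting tags such as "
    "'hyperrealistic, octane render, 8k'. Return only the rewritten prompt."
)

def enrich_prompt(simple_prompt: str) -> str:
    """Send a terse prompt to the local LLM and return the tag-style rewrite."""
    resp = requests.post(
        LLM_ENDPOINT,
        json={
            "model": "llama-3-8b-instruct",  # hypothetical local model name
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": simple_prompt},
            ],
            "temperature": 0.7,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(enrich_prompt("a cat sitting on a windowsill at sunset"))
```

The enriched string would then be fed into the image-generation step in place of the user's original prompt.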