This seems to be a misunderstanding. In the original OpenAI article, the "comment" refers to a code review comment, not a comment in the code.
I know neither of them is a journalist -- I'm probably expecting too much -- but Simon should know better.
He was one of the original authors of Django, back when it was a “web framework for journalists with deadlines”.
I decided to risk it. Crucially, OpenAI at no point asked for any influence over my content at all, aside from sticking to their embargo (which I've done with other companies before).
They weren't deceptive about that - the new model IDs were clearly communicated - but with hindsight it did mean that those early impressions weren't an exact match for what was finally released.
My biggest miss was that I didn't pay attention to the ChatGPT router while I was previewing the models. I think a lot of the early disappointment in GPT-5 was caused by the router sending people to the weaker model.
For what it's worth, the GPT-5 I'm using today feels as impressive to me as the one I had during the preview. It's great at code and great at search, the two things I care most about.
I suspect this is smaller than gpt-5, or at least a quantized version of it, similar to what I suspect Opus 4.1 is. That would also explain why it's faster.
"Today, we’re releasing GPT‑5-Codex—a version of GPT‑5 further optimized for agentic coding in Codex."
So yeah, simplifying that to a "fine-tune" is likely incorrect. I just added a correction note about that to my article.