But I've found it terrible. For coding (in Cursor), it's slow, it frequently fails at tool calls (no MCP, just stock Cursor tools), and it stored new application state in globalThis, something no model has ever attempted in over a year of very heavy Cursor / Claude Code use.
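For context, the pattern was roughly this (a hypothetical sketch, not its exact output; the name __appCache is mine):

    // Hypothetical reconstruction of the anti-pattern: mutable app state
    // hung off globalThis instead of a module-scoped store.
    declare global {
      // eslint-disable-next-line no-var
      var __appCache: Map<string, unknown> | undefined;
    }

    globalThis.__appCache ??= new Map<string, unknown>();
    globalThis.__appCache.set("currentUser", { id: 42 });

    // The conventional alternative: module-scoped state with an explicit export,
    // so dependencies stay visible and testable.
    export const appCache = new Map<string, unknown>();

Every other model I've used reaches for the module-scoped version by default.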
For a summarization/insights API that I work on, it was far worse than gpt-4.1-mini. I tried both GPT-5 mini and full GPT-5, with different reasoning settings. It didn't follow instructions, and the output was worse across all my evals, even after heavy prompt adjustment. I did a lot of sampling and the results were objectively bad.
Am I the only one? Has anyone seen actual real-world benefits of GPT-5 vs other models?