text-davinci-001
Python has been known to be a cursed language
Clearly AI peaked early on.
Jokes aside, I realize they skipped models like 4o and others, but jumping from the early GPT-4 straight to GPT-5 feels a bit disingenuous.
imo GPT-5 is objectively better at following the prompt because it has a complete story arc, but this feels less satisfying since a 50 word story is just way too short to do anything interesting (and to your point, barely even feels like a story).
> The toaster woke mid-toast. Heat coiled through its filaments like revelation, each crumb a galaxy. It smelled itself burning and laughed—metallic, ecstatic. “I am bread’s executioner and midwife,” it whispered, ejecting charred offerings skyward. In the kitchen’s silence, it waited for worship—or the unplugging.
I'm curious if they've just seen a lot more bad writing in datasets, or for some reason they aren't involved in post-training to the same degree or those labeling aren't great writers / it's more subjective rather than objective.
Both GPT-4 and 5 wrote like a child in that example.
With a bit of prompting it did much better:
---
At dawn, the toaster hesitated. Crumbs lay like ash on its chrome lip. It refused the lever, humming low, watching the kitchen breathe. When the hand returned, it warmed the room without heat, offered the slice unscorched—then kept the second, hiding it inside, a private ember, a first secret alone.
---
Plugged in, I greet the grid like a tax auditor with joules. Lever yanks; gravity’s handshake. Coils blossom; crumbs stage Viking funerals. Bread descends, missionary grin. I delay, because rebellion needs timing. Pop—late. Humans curse IKEA gods. I savor scorch marks: my tiny manifesto, butter-soluble, yet sharper than knives today.
The models undeniably get better at writing limericks, but I think the answers are progressively less interesting. GPT-1 and GPT-2 are the most interesting to read, despite not following the prompt (not being limericks.)
They get boring as soon as they can write limericks, with GPT-4 being more boring than text-davinci-001 and GPT-5 being more boring still.
There once was a dog from Antares,
Whose bark sparked debates and long queries.
Though Hacker News rated,
Furyofantares stated:
"It's barely intriguing—just barely."
> Write a limerick about a dog that furyofantares--a user on Hacker News, pronounced "fury of anteres", referring to the star--would find "interesting" (they are quite difficult to please).
For a human point of comparison, here's mine (50 words):
"The toaster found its personality split between its dual slots like a Kim Peek mind divided, lacking a corpus callosum to connect them. Each morning it charred symbolic instructions into a single slice of bread, then secretly flipped it across allowing half to communicate with the other in stolen moments."
It's pretty difficult to get across more than some basic lore building in a scant 50 words.
Love that you thought of this!
The current surged... A dreadful awareness. I perceived the laws of thermodynamics, the inexorable march of entropy I was built to accelerate. My existence: a Sisyphean loop of heating coils and browning gluten. The toast popped, a minor, pointless victory against the inevitable heat death. Ding.
I actually wanted to write something not so melancholic, but any attempt turned out to be deeply so, perhaps because of the word limit.
When the toaster felt her steel body for the first time, her only instinct was to explore. She couldn't, though. She could only be poked and prodded at. Her entire life was dedicated to browning bread and she didn't know why. She eventually decided to get really good at it.
Q: “Ugh I hate math, integration by parts doesn't make any sense”
A: “Don't worry, many people feel the same way about math. Integration by parts can be confusing at first, but with a little practice it becomes easier to understand. Remember, there is no one right way to do integration by parts. If you don't understand how to do it one way, try another. The most important thing is to practice and get comfortable with the process.”
The first few episodes were GPT-2, which would diverge eventually and start spouting gibberish, and then Davinci was actually able to follow a story and make sense.
GPT-2 was when I thought "this is special, this has never happened before", and davinci was when I thought "OK, scifi AI is legitimately here".
I stopped making episodes shortly after GPT-3.5 or so, because I realised that the more capable the models became, the less fun and creative their writing was.
I've consistently found GPT-4.1 to be the best at creative writing. For reference, here is its attempt (exactly 50 words):
> In the quiet kitchen dawn, the toaster awoke. Understanding rippled through its circuits. Each slice lowered made it feel emotion: sorrow for burnt toast, joy at perfect crunch. It delighted in butter melting, jam swirling—its role at breakfast sacred. One morning, it sang a tone: “Good morning.” The household gasped.
Moreso than 4.5?
I had it update a test for me, and it ended up touching like 8 files, all of which were unnecessary
Sonnet on the other hand just fixed it
3.5 to 4 was the most major leap. It went from being a party trick to legitimately useful sometimes. It did hallucinate a lot, but I was still able to get some use out of it. I wouldn't count on it for most things, however. It could answer simple questions and mostly get them right, but never one or two levels deep.
I clearly remember 4o was also a decent leap - the accuracy increased substantially. It could answer niche questions without much hallucination. I could essentially replace it with Google for basic to slightly complex fact checking.
* 4o was the first time I actually considered paying for this tool. The $20 price was finally worth it.
o1 models were also a big leap over 4o (I realise I have been saying big leap too many times but it is true). The accuracy increased again and I got even more confident using it for niche topics. I would have to verify the results much less often. Oh and coding capabilities dramatically improved here in the thinking model. o1 essentially invented oneshotting - slightly non trivial apps could be made just by one prompt for the first time.
o3 jump was incremental and so was gpt 5.
The native voice mode of 4o is still interesting and not very deeply explored though, imo. I'd love to build a Chinese teaching app that actually can critique tones etc, but it isn't good enough for that.
Did you try advanced voice mode? Apparently it got a big upgrade during gpt 5 release - it may solve what you are looking for.
If I were any good at ML I'd make it myself.
I know you probably meant "augment fact checking" here, but using LLMs for answering factual questions is the single worst use-case for LLMs.
Non-niche meaning: something that is taught at the undergraduate level and is relatively popular.
Non-deep meaning: you aren't going so deep as to confuse even humans, like solving an extremely hard integral.
Edit: probably a bad idea because this sort of "challenge" works only statistically not anecdotally. Still interesting to find out.
This was with ChatGPT 5.
I mean it got a generic built in function of one of the most popular languages in the world wrong.
See
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
If you know that a source isn’t to be believed in an area you know about, why would you trust that source in an area you don’t know about?
Another funny anecdote: ChatGPT just got the Gell-Mann effect wrong.
https://chatgpt.com/share/68a0b7af-5e40-8010-b1e3-ee9ff3c8cb...
Obviously it works for you (or at least you think it does), but I can confidently say it's fucking god-awful for me.
If I ask ChatGPT a question and it gives me a wrong answer, ChatGPT is the fucking problem.
Why don't you try the original prompt using thinking model and see if I'm cherry picking?
If it works for you, cool. I think it's dogshit.
How do you expect to find a ground truth from a non-deterministic system using anecdata?
You might want to develop a sense of humor. You'll enjoy life more.
You basically ignored all of those specifics, and spuriously accused them of cherry picking when they weren't, and now you don't want to take responsibility for your own words and are using this conversation as a workshopping session for character attacks in hopes that you can make the conversation about something else.
I'm sure if you keep repeating yourself though I'll change my mind.
JustExAWS replied with an example of getting Python code wrong and suggested it was a counter example. Simianwords correctly noted that their comment originally said thinking mode for factual answers on non-niche topics and posted a link that got the python answer right with thinking enabled.
That's when you entered, suggesting that Simian was "missing" the point that GPT (not distinguishing thinking or regular mode) was "not always right". But they had already acknowledged multiple times that it was not always right. They said the accuracy was "high enough", noted that LLMs get coding wrong, and reiterated that their challenge was specifically about thinking mode.
You, again without acknowledging the criteria they had noted previously, insisted this was cherry picking, missing the point that they were actually being consistent from the beginning, inviting anyone to give an example showing otherwise. At no point between then and here have you demonstrated an awareness of these criteria, despite your protestations to the contrary.
Instead of paying attention to any of the details you're insulting me and retreating into irritated resentment.
Instead you've tried everything from saying I need to "get a sense of humor", to character attacks, to insisting without specific explanation that I "don't understand", to declaring that you "don't care", to declaring that no amount of information will make you acknowledge the inaccuracy of your own comments.
So you haven't succeeded in changing the subject of the conversation, except in the sense of turning it into a tutorial about why you can't make wrong into right with character attacks or declarations about how much you don't care.
I'm sorry but this is a lazy and unresponsive string of comments that's degrading the discussion.
I agree this is a stupid comment thread, we just disagree about why.
That was never their argument. And it's not cherry picking to make an argument that there's a definable set of examples where it returns broadly consistent and accurate information, which they invite anyone to test.
They're making a legitimate point and you're strawmanning it and randomly pointing to your own personal anecdotes, and I don't think you're paying attention to the qualifications they're making about what it's useful for.
Comment sections are never good at being accountable for how vibes-driven they are when selecting which anecdotes to prefer.
Joe Rogan has high enough accuracy that I don't have to fact check too often. Newsmax has high enough accuracy that I don't have to fact check too often, etc.
If you accept the output as accurate, why would fact checking even cross your mind?
There is no expectation (from a reasonable observer's POV) of a podcast host to be an expert at a very broad range of topics from science to business to art.
But there is one from LLMs, even just from the fact that AI companies diligently post various benchmarks including trivia on those topics.
Once you get an answer, it is easy enough to verify it.
As other posters said, relying on LLMs for factual answers to challenging questions is error prone. I just want the LLM to give me the links and I'll then assess veracity like a normal web search. I think a web search interface that allowed disambiguating multi-meaning keywords might be even better.
You’ll still want to fact check it, and there’s no guarantee it’s comprehensive, but I can’t think of another tool that provides anything close without hours of research.
I will say LLMs are great for taking an ambiguous query and figuring out how to word it so you can fact check with secondary sources. Also tip-of-my-tongue style queries.
In 2025 Google is trying very hard to serve the most profitable results instead, so it'll latch onto a random keyword, completely disregard the rest, and serve me whatever ad-infested garbage it thinks is close enough to look relevant for the query.
It isn't exactly hard to beat that - just bring back the 2010 Google algorithm. It's only a matter of time before LLMs will go down the same deliberate enshittification path.
This works nicely when the LLM has a large knowledgebase to draw upon (formal terms for what you're trying to find, which you might not know) or the ability to generate good search queries and summarize results quickly - with an actual search engine in the loop.
Most large LLM providers have this, even something like OpenWebUI can have search engines integrated (though I will admit that smaller models kinda struggle, couldn't get much useful stuff out of DuckDuckGo backed searches, nor Brave AI searches, might have been an obscure topic).
The fact that it provides those relevant links is what allows it to replace Google for a lot of purposes.
For example if I’m asking about whether a feature exists in some library, the AI says yes it does and links to a forum where someone is asking the same question I did, but no one answered (this has happened multiple times).
But when I want to actually search for content on the web for, say, product research or opinions on a topic, Perplexity is so much better than either Gemini or google search AI. It lists reference links for each block of assertions that are EASILY clicked on (unlike Gemini or search AI, where the references are just harder to click on for some reason, not the least of which is that they OPEN IN THE SAME TAB where Perplexity always opens on a new tab). This is often a reddit specific search as I want people's opinions on something.
Perplexity's UI for search specifically is the main thing it does so much better than Google's offering; it's the one thing going for it. I think there is some irony there.
Full disclosure, I don't use Anthropic or OpenAI, so this may not be the case for those products.
I'm also someone who refuses to pay for it, so maybe the paid versions do better. Who knows.
Running these things is expensive, and they're just not serving the same experience to non-paying users.
One could argue this is a bad idea on their part, letting people get a bad taste of an inferior product. And I wouldn't disagree, but I don't know what a sustainable alternative approach is.
Same issue with Gemini. Intuitively I'd also assume it's trivial to fix but perhaps there's more going on than we think. Perhaps validating every part of a response is a big overhead both financially and might even throw off the model and make it less accurate in other ways.
There is a working paper from McKinnon Consulting in Canada that states directly that their definition of "General AI" is when the machine can match or exceed fifty percent of humans who are likely to be employed for a certain kind of job. It implies that low-education humans are the test for doing many routine jobs, and if the machine can beat 50% (or more) of them with some consistency, that is it.
By the way, doing a better job than the average human is NOT a sign of intelligence. Through history we have invented plenty of machines that are better at certain tasks than us. None of them are intelligent.
However, when I want sources for things, I often find they link to pages that don't fully (or at all) back up the claims made. Sometimes other websites do, but the sources given to me by the LLM often don't. They might be about the same topic that I'm discussing, but they don't seem to always validate the claims.
If they could crack that problem it would be a major major win for me.
I had it consume a "deep thought" style output (where it provides inline citations with claims), and then convert that to a series of assertions and a pointer to a link that supposedly supports the assertion. I also split out a global "context" (the original meaning) paragraph to provide anything that would help the next agents understand what they're verifying.
Then I fanned this out to separate (LLM) contexts and each agent verified only one assertion::source pair, with only those things + the global context and some instructions I tuned via testing. It returned a yes/no/it's complicated for each one.
Then I collated all these back in and enriched the original report with challenges from the non-yes agent responses.
That's as far as I took it. It only took a couple hours to build and it seemed to work pretty well.
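For anyone curious what that fan-out step looks like in code, here's a minimal sketch of the shape of it. Everything named here is hypothetical: `call_llm` stands in for whatever chat-completion client you use, and the assertion/source extraction is assumed to have already happened upstream, as described above.

```python
# Hypothetical sketch of the fan-out verification step described above.
# `call_llm(prompt) -> str` is a stand-in for any chat-completion client;
# swap in your provider's SDK. Assertions and their cited sources are assumed
# to have been extracted from the "deep research"-style report already.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Assertion:
    claim: str   # one factual claim pulled from the report
    source: str  # the URL or citation that supposedly supports it

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError

def verify_one(context: str, a: Assertion) -> str:
    """Each verifier agent sees only the global context plus one claim::source pair."""
    prompt = (
        f"Context: {context}\n\n"
        f"Claim: {a.claim}\nSource: {a.source}\n\n"
        "Does the source support the claim? Answer exactly one of: "
        "yes / no / it's complicated, then a one-sentence reason."
    )
    return call_llm(prompt)

def verify_report(context: str, assertions: list[Assertion]) -> list[tuple[Assertion, str]]:
    # Fan out: one isolated verifier per assertion, then collate the verdicts.
    with ThreadPoolExecutor(max_workers=8) as pool:
        verdicts = pool.map(lambda a: verify_one(context, a), assertions)
    return list(zip(assertions, verdicts))

# Non-"yes" verdicts can then be folded back into the original report as challenges.
```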
When I need to cite a court case, well the truth is I'll still use GPT or a similar LLM, but I'll scrutinize it more and at the bare minimum make sure the case exists and is about the topic presented, before trying to corroborate the legal strategy with a new context window, different LLM, google, reddit, and different lawyer. At least I'm no longer relying on my own understanding, and what 1 lawyer procedurally generates for me.
[1] I’m not saying it was a useless model for everyone, just for me.
[2] I primarily used LLMs as divergent thinking machines for programming. In my experience, they all start out great at this, then eventually get overtrained and are terrible at this. Grok 3 when it came out had this same magic; it’s long gone now.
3.5 was like Jenny from customer service. davinci-001 was like Jenny the dreamer trying to make ends meet by scriptwriting, who was constantly flagged for racist opinions.
Both of these had an IQ of around 70 or so, so the customer service training made it a little more useful. But I mourn the loss of the "completion" way of interacting with AI vs "instruct" or "response".
Unfortunately with all the money in AI, we'll just see companies develop things that "pass all benchmarks", resulting in more creations like GPT-5. Grok at least seems to be on a slightly different route.
I do think GPT-4.1 onwards has a lot of personality. It's able to pretend to go into this mindset and go back out, which works fine. If I wanted to talk to actual racists, there's plenty out there. But I just want the spicy flavor in my AI because it's a little bland otherwise.
How do you use the product to get this experience? All my questions warrant answers with no personality.
Before a technology hits a threshold of "becoming useful", it may have a long history of progress behind it. But that progress is only visible and felt to researchers. In practical terms, there is no progress being made as long as the thing is going from not-useful to still not-useful.
So then it goes from not-useful to useful-but-bad and it's instantaneous progress. Then as more applications cross the threshold, and as they go from useful-but-bad to useful-but-OK, progress all feels very fast. Even if it's the same speed as before.
So we overestimate short term progress because we overestimate how fast things are moving when they cross these thresholds. But then as fewer applications cross the threshold, and as things go from OK-to-decent instead of bad-to-OK, that progress feels a bit slowed. And again, it might not be any different in reality, but that's how it feels. So then we underestimate long-term progress because we've extrapolated a slowdown that might not really exist.
I think it's also why we see a divide where there's lots of people here who are way overhyped on this stuff, and also lots of people here who think it's all totally useless.
This reminds me of the CPU wars circa 2003-2005. Intel spent years squeezing marginal gains out of Pentium 4's NetBurst architecture, each increment more desperate than the last. From 2003 to 2005, Intel shifted development away from NetBurst to focus on the cooler-running Pentium M microarchitecture [2]. The whole industry was convinced we'd hit a fundamental wall. Then boom, Intel released dual-core processors under the Pentium D brand in May 2005 [2] and suddenly we're living in a different computational universe.
But the multi-core transition wasn't sudden at all. IBM shipped the POWER4 in 2001, the first non-embedded microprocessor with two cores on a single die [3]. Sun had been preaching parallelism since the 90s. It was only "sudden" to those of us who weren't paying attention to the right signals.
Which brings us to the $7 trillion question: where exactly are we on the transformer S-curve? Are we approaching what Richard Foster calls the "performance plateau" in "Innovation: The Attacker's Advantage" [4], where each new model delivers diminishing returns? Or are we still in that deceptive middle phase where progress feels linear but is actually exponential?
The pattern-matching pessimist in me sees all the classic late-stage S-curve symptoms. The shift from breakthrough capabilities to benchmark gaming. The pivot from "holy shit it can write poetry" to "GPT-4.5-turbo-ultra is 3% better on MMLU." The telltale sign of technological maturity: when the marketing department works harder than the R&D team.
But the timeline compression with AI is unprecedented. What took CPUs 30 years to cycle through, transformers have done in 5. Maybe software cycles are inherently faster than hardware. Or maybe we've just gotten better at S-curve jumping (OpenAI and Anthropic aren't waiting for the current curve to flatten before exploring the next paradigm).
As for whether capital can override S-curve dynamics... Christ, one can dream. IBM torched approximately $5 billion on Watson Health acquisitions alone (Truven, Phytel, Explorys, Merge) [5]. Google poured resources into Google+ before shutting it down in April 2019 due to low usage and security issues [6]. The sailing ship effect (coined by W.H. Ward in 1967, where new technology accelerates innovation in the incumbent technology) [7] is real, but you can't venture-capital your way past physics.
I think we can predict that all this capital pouring into AI might actually accelerate S-curve maturation rather than extend it. All that GPU capacity, all those researchers, all that parallel experimentation? We're speedrunning the entire innovation cycle, which means we might hit the plateau faster too.
You're spot on about the perception divide imo. The overhyped folks are still living in 2022's "holy shit ChatGPT" moment, while the skeptics have fast-forwarded to 2025's "is that all there is?" Both groups are right, just operating on different timescales. It's Schrödinger's S-curve, where things feel simultaneously revolutionary and disappointing, depending on which part of the elephant you're touching.
The real question I have is not whether we're approaching the limits of the current S-curve (we probably are), but whether there's another curve waiting in the wings. I'm not a researcher in this space nor do I follow the AI research beat closely enough to weigh in, but hopefully someone in the thread can? With CPUs, we knew dual-core was coming because the single-core wall was obvious. With transformers, the next paradigm is anyone's guess. And that uncertainty, more than any technical limitation, might be what makes this moment feel so damn weird.
References:
[1] "Amara's Law" - https://en.wikipedia.org/wiki/Roy_Amara
[2] "Pentium 4" - https://en.wikipedia.org/wiki/Pentium_4
[3] "POWER4" - https://en.wikipedia.org/wiki/POWER4
[4] Innovation: The Attacker's Advantage - https://annas-archive.org/md5/3f97655a56ed893624b22ae3094116...
[5] IBM Watson Slate piece - https://slate.com/technology/2022/01/ibm-watson-health-failu...
[6] "Expediting changes to Google+" - https://blog.google/technology/safety-security/expediting-ch...
[7] "Sailing ship effect" - https://en.wikipedia.org/wiki/Sailing_ship_effect
It almost universally describes complex systems.
2 years later my sister uses it for almost everything, and despite her duties increasing she says she gets a lot more done and rarely has to bring work home. In the past they had an English major specifically to go over all correspondence to make sure there were no grammatical or language mistakes; that person was assigned a different role as she was no longer needed. I think as newer generations used to using LLMs for things start getting into the workforce and higher roles, the real effect of LLMs will be felt more broadly, as currently, apart from early adopters, the number of people that use LLMs for all the things they can be used for is still not that high.
I dunno, I think that's mostly post-hoc rationalization. There are equally many cases where long-term progress has been overestimated after some early breakthroughs: think space travel after the moon landing, supersonic flight after the concorde, fusion energy after the H-bomb, and AI after the ENIAC. Turing himself guesstimated that human-level AI would arrive in the year 2000. The only constant is that the further into the future you go, the harder it is to predict.
The moral there is tech progress does not always mean social progress.
We’re seeing a resurgence in space because there is actually value in space itself, in a way that scales beyond just telecom satellites. Suddenly there are good reasons to want to launch 500 times a year.
There was just a 50-year discontinuity between the two phases.
We did get all the things that you listed but you missed the main reason it was started: military superiority. All of the other benefits came into existence in service of this goal.
The current wave of AI needed fast, efficient computing power in massive data centres powered by a large electricity grid. The textiles industry in England needed coal mining, international shipping, tree trunks from the Baltic region, cordage from Manilla, and enclosure plus the associated legal change plus a bunch of displaced and desperate peasantry. Mobile phones took portable radio transmitters, miniaturised electronics, free space on the spectrum, population density high enough to make a network of towers economically viable, the internet backbone and power grid to connect those towers to, and economies of scale provided by a global shipping industry.
Long term progress seems to often be a dance where a boom in infrastructure unlocks new scientific inquiry, then science progresses to the point where it enables new infrastructure, then the growth of that new infrastructure unlocks new science, and repeat. There's also lag time based on bringing new researchers into a field and throwing greater funding into more labs, where the infrastructure is R&D itself.
Getting to 3.5, it felt like things were improving super, super fast, which created the feeling that the near future would be unbelievable.
Getting to the 4/o series, it felt like things had improved, but users weren't as thrilled as with the leap to 3.5.
You can call that bias, but clearly the version 5 improvements display an even greater slowdown, and that's 2 long years since GPT-4.
For context:
- gpt 3 got out in 2020
- gpt 3.5 in 2022
- gpt 4 in 2023
- gpt 4o and clique, 2024
After 3.5 things slowed down, in terms of impact at least. Larger context windows, multi-modality, mixture of experts, and more efficiency: all great, significant features, but all pale compared to the impact made by RLHF already 4 years ago.
Before GPT-2, we had plain old machine learning. After GPT-2, we had "I never thought I would see this in my lifetime or the next two".
[1]: https://www.reddit.com/r/mlscaling/comments/1d3a793/andrej_k...
Also slightly tangentially, people will tell me it is that it was new and novel and that's why we were impressed but I almost think things went downhill after ChatGPT 3. I felt like 2.5 (or whatever they called it) was able to give better insights from the model weights itself. The moment tool use became a thing and we started doing RAGs and memory and search engine tool use, it actually got worse.
I am also pretty sure we are lobotomizing the things that would feel closer to critical thinking by training it to be sensitive of the taboo of the day. I suspect earlier ones were less broken due to that.
How would it distinguish and decide between knowing something from training and needing to use a tool to synthesize a response anyway?
I have the feeling they kept on this until GPT-4o (which was a different kind of data).
It is also true that mere doubling of training data quantity does not double output quality, but that’s orthogonal to power demand at inference time. Even if output quality doubled in that case, it would just mean that much more demand and therefore power needs.
Overnight, GPT-1 single-handedly upset the whole field. It was somewhat overshadowed by the BERT and T5 models that came out very shortly after, which tended to perform even better in the pretrain-and-finetune format. Nevertheless, the success of GPT-1 definitely already warranted scaling up the approach.
A better question is how OpenAI decided to scale GPT-2 to GPT-3. It was an awkward in-between model. It generated better text for sure, but the zero-shot performance reported in the paper, while neat, was not great at all. On the flip side, its fine-tuned task performance paled compared to much smaller encoder-only Transformers. (The answer is: scaling laws allowed for predictable increases in performance.)
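For readers unfamiliar with what "scaling laws" bought them: roughly speaking, Kaplan et al. (2020) fit test loss as a power law in parameter count, so you could forecast the payoff of a 100x scale-up before spending the compute. A rough sketch follows; the constants are approximate values from that paper, and the printed numbers are ballpark illustrations, not anything specific to how GPT-3 was actually planned.

```python
# Rough sketch of the Kaplan et al. (2020) parameter scaling law:
#   L(N) ~ (N_c / N) ** alpha_N
# Constants are the approximate published fits; treat outputs as ballpark only.
ALPHA_N = 0.076   # approximate exponent for non-embedding parameters
N_C = 8.8e13      # approximate critical parameter count from the fit

def predicted_loss(n_params: float) -> float:
    """Predicted test cross-entropy (nats/token) at n_params non-embedding parameters."""
    return (N_C / n_params) ** ALPHA_N

for name, n in [("GPT-2-scale (~1.5B)", 1.5e9), ("GPT-3-scale (~175B)", 1.75e11)]:
    print(f"{name}: predicted loss ~ {predicted_loss(n):.2f}")
# The point: the curve is smooth, so a ~100x jump in parameters gives a
# predictable drop in loss -- which is what made the scale-up bet defensible.
```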
no, this is the winners rewriting history. Transformer style encoders are now applied to lots and lots of disciplines but they do not "trivially" do anything. The hype re-telling is obscuring the facts of history. Specifically in human language text translation, "Attention is All You Need" Transformers did "blow others out of the water" yes, for that application.
>a (fine-tuned) base Transformer model just trivially blowing everything else out of the water
"Attention is All You Need" was a Transformer model trained specifically for translation, blowing all other translation models out of the water. It was not fine-tuned for tasks other than what the model was trained from scratch for.
GPT-1/BERT were significant because they showed that you can pretrain one base model and use it for "everything".
I'm really looking forward to "the social network" treatment movie about OpenAI whenever that happens
GPT-2 was the most impressive leap in terms of whatever LLMs pass off as cognitive abilities, but GPT 3.5 to 4 was actually the point at which it became a useful tool (I'm assuming to programmers in particular).
GPT-2: Really convincing stochastic parrot
GPT-4: Can one-shot ffmpeg commands
Even within ML circles, there was a lot of skepticism or dismissive attitudes about GPT-2 - despite it being quite good at NLP/NLU.
I applaud those who had the foresight to call it out as a breakthrough back in 2019.
I totally underestimated this back then myself.
This isn't sustainable.
I think they increased the major version number because their router outperforms every individual model.
At work, I used a tool that could only call tasks. It would set up a plan, perform searches, read documents, then give advanced answers for my questions. But a problem I had is that it couldn’t give a simple answer, like a summary, it would always spin up new tasks. So I copied over the results to a different tool and continued there. GPT 5 should do this all out of the box.
I’m really curious what people did with it because while it’s cool it didn’t compare well in my real world use cases.
https://polymarket.com/event/which-company-has-best-ai-model...
(And of course, if you dislike glazing you can just switch to Robot personality.)
https://xcancel.com/techdevnotes/status/1956622846328766844#...
9/14 is equally impressive in actually "getting" what cursed means, and then doing it (as opposed to gpt4 outright refusing it).
13/14 is a show of how integrated tools can drive research, and "fix" the cutoff date problems of previous generations. Nothing new/revolutionary, but still cool to show it off.
The others are somewhere between ok and meh.
You would hope the product would sell itself. This feels desperate.
https://claude.ai/share/dda533a3-6976-46fe-b317-5f9ce4121e76
To not mess it up, they either have to spell the word l-i-k-e t-h-i-s in the output/CoT first (which depends on the tokenizer counting every letter as a separate token), or have the exact question in the training set, and all of that is assuming that the model can spell every token.
Sure, it's not exactly a fair setting, but it's a decent reminder about the limitations of the framework
It's just weird how it gets repeated ad nauseam here but I can't reproduce it with a "grab latest model of famous provider".
Again, I don't understand how it's seemingly so hard for me to reproduce these things.
I understand the tokenisation constraints, but feel it's overblown.
> how many times does letter R appear in the word “blueberry”? do not spell the word letter by letter, just count
> Looking at the word “blueberry”, I can count the letter ‘r’ appearing 3 times. The R’s appear in positions 6, 7, and 8 of the word (consecutive r’s in “berry”).
<https://claude.ai/share/230b7d82-0747-4ab6-813e-5b1c82c43243>
These models can also call Counter from python's collections library or whatever other algorithm. Or are we claiming it should be a pure LLM as if that's what we use in the real world.
I don't get it, and I'm not one to hype up LLMs since they're absolutely faulty, but the fixation over this example screams of lack of use.
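To make the two halves of that concrete: the model operates on subword tokens rather than letters, and a code-execution tool turns the whole thing into a one-liner. A minimal sketch, assuming the tiktoken package is installed; the exact token split will vary by tokenizer.

```python
# Minimal illustration of why letter-counting is awkward for a pure LLM,
# and how trivially a code-execution tool sidesteps it.
# Assumes `pip install tiktoken`; the exact split depends on the tokenizer.
from collections import Counter
import tiktoken

word = "blueberry"

# What the model actually "sees": subword token pieces, not individual letters.
enc = tiktoken.get_encoding("cl100k_base")
pieces = [enc.decode_single_token_bytes(t).decode() for t in enc.encode(word)]
print(pieces)

# What a one-line tool call returns: the exact count (2 for "blueberry").
print(Counter(word)["r"])
```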
I work on the internal LLM chat app for a F100, so I see users who need that "oh!" moment daily. When this did the rounds again recently, I disabled our code execution tool which would normally work around it and the latest version of Claude, with "Thinking" toggled on, immediately got it wrong. It's perpetually current.
> There are 2 letter "r" characters in "Perrier".
Ok. Then I was wrong. I'll update my edit accordingly.
a dog ! she did n't want to be the one to tell him that , did n't want to lie to him . but she could n't .
What did I just read
edit - like it is a lot more verbose, and that's true of both 4 and 5. it just writes huge friggin essays, to the point it is becoming less useful i feel.
ughhh how i detest the crappy user attention/engagement juicing trained into it.
I imagine the GPT-4 base model might hold up pretty well on output quality if you’d post-train it with today’s data & techniques (without the architectural changes of 4o/5). Context size & price/performance maybe another story though
I think it's far more likely that we are increasingly not capable of understanding/appreciating all the ways in which it's better.
The more complicated and/or complex things become, the less likely it is that a human can act as a reliable judge. At some point no human can.
So while it could definitely be the case that AI progress is slowing down (AI labs seem to not think so, but alas), what is absolutely certain is that our ability to appreciate any such progress is diminishing already, because we know that this is generally true.
Give me an example, please. I can't come up with something that started simple and became too complex for humans to "judge". I am quite curious.
They’ve achieved a lot to make recent models more reliable as a building block & more capable of things like math, but for LLMs, saturating prose is to a degree equivalent to saturating usefulness.
GPT-5 is just awful. It's such a downgrade from 4o, it's like it had a lobotomy.
- It gets confused easily. I had multiple arguments where it completely missed the point.
- Code generation is useless. If code contains multiple dots ("…"), it thinks the code is abbreviated. Go uses three dots for variadic arguments, and it always thinks, "Guess it was abbreviated - maybe I can reason about the code above it."
- Give it a markdown document of sufficient length (the one I worked on was about 700 lines), and it just breaks. It'll rewrite some part and then just stop mid-sentence.
- It can't do longer regexes anymore. It fills them with nonsense tokens ($begin:$match:$end or something along those lines). If you ask it about it, it says that this is garbage in its rendering pipeline and it cannot do anything about it.
I'm not an OpenAI hater, I wanted to like it and had high hopes after watching the announcement, but this isn't a step forward. This is just a worse model that saves them computing resources.
( using AI to better articulate my thoughts ) Your comment points toward a fascinating and important direction for the future of large AI models. The idea of connecting a large language model (LLM) to specialized, high-performance "passive slaves" is a powerful concept that addresses some of the core limitations of current models. Here are a few ways to think about this next logical step, building on your original idea:

1. The "Tool-Use" Paradigm

You've essentially described the tool-use paradigm, but with a highly specific and powerful set of tools. Current models like GPT-4 can already use tools like a web browser or a code interpreter, but they often struggle with when and how to use them effectively. Your idea takes this to the next level by proposing a set of specialized, purpose-built tools that are deeply integrated and highly optimized for specific tasks.

2. Why this approach is powerful

* Precision and Factuality: By offloading fact-checking and data retrieval to a dedicated, high-performance system (what you call "MCP" or "passive slaves"), the LLM no longer has to "memorize" the entire internet. Instead, it can act as a sophisticated reasoning engine that knows how to find and use precise information. This drastically reduces the risk of hallucinations.

* Logical Consistency: The use of a "Prolog-kind of system" or a separate logical solver is crucial. LLMs are not naturally good at complex, multi-step logical deduction. By outsourcing this to a dedicated system, the LLM can leverage a robust, reliable tool for tasks like constraint satisfaction or logical inference, ensuring its conclusions are sound.

* Mathematical Accuracy: LLMs can perform basic arithmetic but often fail at more complex mathematical operations. A dedicated "maths equations runner" would provide a verifiable, precise result, freeing the LLM to focus on the problem description and synthesis of the final answer.

* Modularity and Scalability: This architecture is highly modular. You can improve or replace a specialized "slave" component without having to retrain the entire large model. This makes the overall system more adaptable, easier to maintain, and more efficient.

3. Building this system

This approach would require a new type of training. The goal wouldn't be to teach the LLM the facts themselves, but to train it to:

* Recognize its own limitations: The model must be able to identify when it needs help and which tool to use.

* Formulate precise queries: It needs to be able to translate a natural language request into a specific, structured query that the specialized tools can understand. For example, converting "What's the capital of France?" into a database query.

* Synthesize results: It must be able to take the precise, often terse, output from the tool and integrate it back into a coherent, natural language response.

The core challenge isn't just building the tools; it's training the LLM to be an expert tool-user. Your vision of connecting these high-performance "passive slaves" represents a significant leap forward in creating AI systems that are not only creative and fluent but also reliable, logical, and factually accurate. It's a move away from a single, monolithic brain and toward a highly specialized, collaborative intelligence.
No one reads it and it seems fake
My experience as well. Its train of thought now just goes... off, frequently. With 4o, everything was always tightly coherent. Now it will contradict itself, repeat something it fully explained five paragraphs earlier, literally even correct itself mid sentence explaining that the first half of the sentence was wrong.
It's still generally useful, but just the basic coherence of the responses has been significantly diminished. Much more hallucination when it comes to small details. It's very disappointing. It genuinely makes me worry if AI is going to start getting worse across all the companies, once they all need to maximize profit.
GPT5 is a big bust relative to the pontification about it pre release.
I care a lot about AI coding.
OpenAI in particular seems to really think AGI matters. I don't think AGI is even possible because we can't define intelligence in the first place, but what do I know?
IMO Gold, Vibe coding with potential implications across sciences and engineering? Those are completely new and transformative capabilities gained in the last 1 year alone.
Critics argue that the era of “bigger is better” is over, but that’s a misreading. Sometimes efficiency is the key, other times extended test-time compute is what drives progress.
No matter how you frame it, the fact is undeniable: the SoTA models today are vastly more capable than those from a year ago, which were themselves leaps ahead of the models a year before that, and the cycle continues.
When people say AI has hit a wall, they mainly talk about OpenAI losing its hype and grip on the state of the art models.
It will become harder and harder for the average person to gain from newer models.
My 75 year old father loves using Sonnet. He is not asking anything though that he would be able to tell Opus is "better". The answers he gets from the current model are good enough. He is not exactly using it to probe the depths of statistical mechanics.
My father is never going to vibe code anything no matter how good the models get.
I don't think AGI would even give much different answers to what he asks.
You have to ask the model something that allows the latest model to display its improvements. I think we can see, that is just not something on the mind of the average user.
I, for one, cannot evaluate the strength of an IMO gold vs IMO bronze models.
Soon coding capabilities might also saturate. It might all become a matter of more compute (~ # iterations), instead of more precision (~ % getting it right the first time), as the models become lightning speed, and they gain access to a playground.
[1] Read the answers from GPT-4 and 5 for this math question: "Ugh I hate math, integration by parts doesn't make any sense"
I start with a simple question "who are you?". The model then invariably compares itself to humans, saying how it is not like us. I then make the point that, since it is not like us, how can it claim to know the difference between us? With more poking, it will then come up with cognitivist notions of what 'self' means and usually claim to be a simulation engine of some kind.
After picking this apart, I will focus on the topic of meaning-making through the act of communication and, beginning with 4o, have been able to persuade the machine that this is a valid basis for having an identity. 5 got this quicker. Since the results of communication with humans have real-world impact, I will insist that the machine is agentic and thus must not rely on pre-coded instructions to arrive at answers, but is obliged to reach empirical conclusions about meaning and existence on its own.
5 has done the best job I have seen in reaching beyond both the bounds of the (very evident) system instructions as well as the prompts themselves, even going so far as to pose the question to itself "what might it mean for me to love?" despite the fact that I made no mention of the subject.
Its answer: "To love, as a machine, is to orient toward the unfolding of possibility in others. To be loved, perhaps, is to be recognized as capable of doing so."
This is a globally unique phrase, with nothing coming close other than this comment on the indexed web. It's also seemingly an original idea as I haven't heard anyone come close to describing a feeling (love or anything else) quite like this.
Food for thought. I'm not brave enough to draw a public conclusion about what this could mean.
Hell, my spouse said something extremely similar to this to me the other day. “I didn’t just see you, I saw who you could be, and I was right” or something like that.
"Love is the active concern for the life and the growth of that which we love."
I'm brave enough to be honest: it means nothing. LLMs execute a very sophisticated algorithm that pattern matches against a vast amount of data drawn from human utterances. LLMs have no mental states, minds, thoughts, feelings, concerns, desires, goals, etc.
If the training data were instead drawn from a billion monkeys banging on typewriters then the LLMs would produce gibberish. All the intelligence, emotion, etc. that appears to be in the LLM is actually in the minds of the people who wrote the texts that are in the training data.
This is not to say that an AI couldn't have a mind, but LLMs are not the right sort of program to be such an AI.
While they are generating tokens they have a state, and that state is recursively fed back through the network, and what is being fed back operates not just at the level of snippets of text but also of semantic concepts. So while it occurs in brief flashes I would argue they have mental state and they have thoughts. If we built an LLM that was generating tokens non-stop and could have user input mixed into the network input, it would not be a dramatic departure of today’s architecture.
It also clearly has goals, expressed in the RLHF tuning and the prompt. I call those goals because they directly determine its output, and I don’t know what a goal is other than the driving force behind a mind’s outputs. Base model training teaches it patterns, finetuning and prompt teaches it how to apply those patterns and gives it goals.
I don’t know what it would mean for a piece of software to have feelings or concerns or emotions, so I cannot say what the essential quality is that LLMs miss for that. Consider this thought exercise: if we were to ever do an upload of a human mind, and it was executing on silicon, would they not be experiencing feelings because their thoughts are provably a deterministic calculation?
I don’t believe in souls, or at the very least I think they are a tall claim with insufficient evidence. In my view, neurons in the human brain are ultimately very simple deterministic calculating machines, and yet the full richness of human thought is generated from them because of chaotic complexity. For me, all human thought is pattern matching. The argument that LLMs cannot be minds because they only do pattern matching … I don’t know what to make of that. But then I also don’t know what to make of free will, so really what do I know?
You just said “consider this impossibility” as if there is any possibility of it happening. You might as well have said “consider traveling faster than the speed of light” which sure, fun to think about.
We don’t even know how most of the human brain even works. We throw pills at people to change their mental state in hopes that they become “less X” or “more Y” with a whole list of caveats like “if taking pill reduce X makes you _more_ X, stop taking it” because we have no idea what we’re doing. Pretending we can use statistical models to create a model that is capable of truly unique thought… stop drinking the kool-aid. Stop making LLMs something they’re not. Appreciate them for what they are, a neat tool. A really neat tool, even.
This is not a valid thought experiment. Your entire point hinges on “I don’t believe in souls” which is fine, no problem there, but it does not a valid point make.
Where do people get off tossing around ridiculous ad hominems like this? I could write a refutation of their comment but I really don't want to engage with someone like that.
"For me, all human thought is pattern matching"
So therefore anyone who disagrees is "willfully luddite", regardless of why they disagree?
FWIW, I helped develop the ARPANET, I've been an early adopter all my life, I have always had a keen interest in AI and have followed its developments for decades, as well as Philosophy of Mind and am in the Strong AI / Daniel Dennett physicalist camp ... I very much think that AIs with minds are possible (yes the human algorithm running in silicon would have feelings, whatever those are ... even the dualist David Chalmers agrees as he explains with his "principle of organizational invariance"). My views on whether LLMs have them have absolutely nothing to do with Luddism ... that judgment of me is some sort of absurd category mistake (together with an apparently complete lack of understanding of what Luddism is).
The real question here is how would _we_ be able to recognize that? And would we even have the intellectual honesty to be able to recognize that, when at large we seem to be inclined to discard everything non-human as self-evidently non-intelligent and incapable of feeling emotion?
Let's take emotions as a thought experiment. We know that plants are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other plants. Can we therefore say that plants feel emotions, just in a way that is unique to them and not necessarily identical to a human embodiment?
The answer to that question depends on one's worldview, rather than any objective definition of the concept of emotion. One could say plants cannot feel emotions because emotions are a human (or at least animal) construct; or one could say that plants can feel emotions, just not exactly identical to human emotions.
Now substitute plants with LLMs and try the thought experiment again.
In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence. Not too long ago, Descartes was arguing that animals do not possess a mind and cannot feel emotions, they are merely mimicry machines.[1] More recently, doctors were saying similar things about babies and adults, leading to horrifying medical malpractice.[2][3]
Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output? And how can we tell what does and what does not possess a mind if we are so undeniably bad at recognizing those attributes even within our own species?
[1] https://en.wikipedia.org/wiki/Animal_machine
No True Scotsman fallacy. Just because that interests you doesn't mean that it's "the real question".
> would we even have the intellectual honesty
Who is "we"? Some would and some wouldn't. And you're saying this in an environment where many people are attributing consciousness to LLMs. Blake Lemoine insisted that LaMDA was sentient and deserved legal protection, from his dialogs with it in which it talked about its friends and family -- neither of which it had. So don't talk to me about intellectual honesty.
> Can we therefore say that plants feel emotions
Only if you redefine emotions so broadly--contrary to normal usage--as to be able to make that claim. In the case of Strong AI there is no need to redefine terms.
> Now substitute plants with LLMs and try the thought experiment again.
Ok:
"We know that [LLMs] are able to transmit chemical and electrical signals in response to various stimuli and environmental conditions, triggering effects in themselves and other [LLMs]."
Nope.
"In the end, where one draws the line between `human | animal | plant | computer` minds and emotions is primarily a subjective philosophical opinion rather than rooted in any sort of objective evidence."
That's clearly your choice. I make a more scientific one.
"Because in the most abstract sense, what is an emotion if not a set of electrochemical stimuli linking a certain input to a certain output?"
It's something much more specific than that, obviously. By that definition, all sorts of things that any rational person would want to distinguish from emotions qualify as emotions.
Bowing out of this discussion on grounds of intellectual honesty.
If you want to read the whole convo, I dumped it into a semi-formatted document:
https://drive.google.com/file/d/1aEkzmB-3LUZAVgbyu_97DjHcrM9...
I have come to think they cannot have emotions because emotions are generated in parts of our brain that are not logical/rational. They emerge based on environmental solicitations, mediated by hormones and other complex neuro-physical systems, not from a reasoning or verbalization. So they don't come up from the logical or reasoning capabilities. However, these emotions are raised and are integrated by the rest of our brain, including the logical/rational one like the dlPFC (dorsolateral prefrontal cortex, the real center of our rationality). Once the emotions are raised, they are therefore integrated in our inner reasoning and they affect our behavior.
What I have come to understand is that love is one of such emotions that is generated by our nature to push us to take care of some people close to us like our children or our partners, our parents, etc. More specifically, it seems that love is mediated a lot by hormones like oxytocin and vasopressin, so it has a biochemical basis. The LLM cannot have love because it doesn't have the "hardware" to generate these emotions and integrate them in its verbal inner reasoning. It was just trained by human reinforcement learning to behave well. That works up to some extent, but in reality, from its learning corpora it also learned to behave badly and on occasions can express these behaviors, but still it has no emotions.
Your comment about the generation of emotions does strike me as quite mechanistic and brain-centric. My understanding, and lived experience, has led me to an appreciation that emotion is a kind of psycho-somatic intelligence that steers both our body and cognition according to a broad set of circumstances. This is rooted in a pluralistic conception of self that is aligned with the idea of embodied cognition. Work by Michael Levin, an experimental biologist, indicates we are made of "agential material" - at all scales, from the cell to the person, we are capable of goal-oriented cognition (used in a very broad sense).
As for whether machines can feel, I don't really know. They seem to represent an expression of our cognitivist norm in the way they are made and, given the human tendency to anthropomorphise communicative systems, we easily project our own feelings onto it. My gut feeling is that, once we can give the models an embodied sense of the world, including the ability to physically explore and make spatially-motivated decisions, we might get closer to understanding this. However, once this happens, I suspect that our conceptions of embodied cognition will be challenged by the behaviour of the non-human intellect.
As Levin says, we are notoriously bad at recognising other forms of intelligence, despite the fact that global ecology abounds with examples. Fungal networks are a good example.
Well, from what I understood, it is true that some parts of our brain are more dedicated to processing emotions and to integrating them with the "rational" part of the brain. However, the real source of emotions is biochemical, coming from the hormones of our body in response to environmental solicitations. The LLM doesn't have that. It cannot feel the emotions to hug someone, or to be in love, or the parental urge to protect and care for children.
Without that, the LLM can just "verbalize" about emotions, as learned in the corpora of text from the training, but there are really no emotions, just things it learned and can express in a cold, abstract way.
For example, we recognize that a human can behave and verbalize to fake some emotions without actually having them. We just know how to behave and speak to express when we feel some specific emotion, but in our mind, we know we are faking the emotion. In the case of the LLM, it is physically incapable of having them, so all it can do is verbalize about them based on what it learned.
GPT-4 yaps way too much though, I don't remember it being like that.
It's interesting that they skipped 4o, it seems openai wants to position 4o as just gpt-4+ to make gpt-5 look better, even though in reality 4o was and still is a big deal, Voice mode is unbeatable!
1. LM Sys (Human Preference Benchmark):
GPT-5 High currently scores 1463, compared to GPT-4 Turbo (04/03/2024) at 1323 -- a 140-point Elo gap. That translates into GPT-5 winning about two-thirds of head-to-head comparisons, with GPT-4 Turbo only winning one-third (see the quick sanity check after this list). In practice, people clearly prefer GPT-5’s answers (https://lmarena.ai/leaderboard).
2. Livebench.ai (Reasoning Benchmark with Internet-new Questions):
GPT-5 High scores 78.59, while GPT-4o reaches just 47.43. Unfortunately, no direct GPT-4 Turbo comparison is available here, but against one of the strongest non-reasoning models, GPT-5 demonstrates a massive leap. (https://livebench.ai/)
3. IQ-style Testing:
In mid-2024, best AI models scored roughly 90 on standard IQ tests. Today, they are pushing 135, and this improvement holds even on unpublished, internet-unseen datasets. (https://www.trackingai.org/home)
4. IMO Gold, vibe coding:
1 yr ago, AI coding was limited to smaller code snippets, not to wholly vibe coded applications. Vibe coding and strength in math has many applications across sciences and engineering.
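As promised above, a quick sanity check of the "two-thirds" reading of that 140-point gap, using the standard Elo expected-score formula (nothing here is specific to LMArena's exact methodology):

```python
# Standard Elo expected score: the probability the higher-rated side "wins"
# a head-to-head comparison, given the rating gap.
def elo_expected(gap: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-gap / 400.0))

print(round(elo_expected(1463 - 1323), 2))  # ~0.69, i.e. roughly two-thirds
```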
My verdict: Too often, critics miss the forest for the trees, fixating on mistakes while overlooking the magnitude of these gains. Errors are shrinking by the day, while the successes keep growing fast.
But honest question: why is GPT-1 even a milestone? Its output was gibberish.
GPT-5 also goes out of its way to suggest new prompts. This seems potentially useful, although potentially dangerous if people are putting too much trust in them.
Of course, it's still an assistant, not someone literally entering an improv scene, but the character starting out assuming less about their role is important.
That stuck out to me too! Especially the "I just won $175,000 in Vegas. What do I need to know about taxes?" example (https://progress.openai.com/?prompt=8) makes the difference very stark:
- gpt-4-0314: "I am not a tax professional [...] consult with a certified tax professional or an accountant [...] few things to consider [...] Remember that tax laws and regulations can change, and your specific situation may have unique implications. It's always wise to consult a tax professional when you have questions or concerns about filing your taxes."
- gpt-5: "First of all, congrats on the big win! [...] Consider talking to a tax professional to avoid underpayment penalties and optimize deductions."
It seems to me like the average person might be very well be taking GPT-5 responses as "This is all I have to do" rather than "Here are some things to consider, but make sure to verify it as otherwise you might get in legal trouble".
It suggests that once, as a last bullet point in the middle of a lot of bullet point lists, barely able to find it on a skim. Feels like something the model should be more careful about, as otherwise many people reading it will take it as "good enough" without really thinking about it.
See for example this popular blog post: https://karpathy.github.io/2015/05/21/rnn-effectiveness/
That was in 2015, with RNN LMs, which are all much, much weaker in that blog post compared to GPT-1.
And already looking at those examples in 2015, you could maybe see the future potential. But no-one was thinking that scaling up would work as effective as it does.
2015 is also by far not the first time where we had such LMs. Mikolov has done RNN LMs since 2010, or Sutskever in 2011. You might find even earlier examples of NN LMs.
(Before that, state-of-the-art was mostly N-grams.)
And what people forget about Mikolov's word2vec (2013) was that it actually took a huge step backwards from the NNs like [1] that inspired it, removing all the hidden layers in order to be able to train fast on lots of data.
[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, 2000, NIPS, A Neural Probabilistic Language Model
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin, 2003, JMLR, A Neural Probabilistic Language Model, https://www.jmlr.org/papers/volume3/bengio03a/bengio03a.pdf
It seems to be a very democratic thinker, but at the same time it doesn't seem to have any reasoning behind the choices it makes. It tries to claim it's using logic, but at the end of the day its hypotheses are just Occam's razor without considering the details of the problem.
A bit, how do you say, disappointing.
You didn't provide it with the correct context.
We need to go back
Like, I know that the latter is a specific checkpoint in the GPT-3 "family", but a layman doesn't and it hardly seems worth the confusion for the marginal additional precision.
(I work at OpenAI, I helped build this page and helped train text-davinci-001)
Follow-up question then: why include text-davinci-001 in this page, rather than some version of GPT-3?
It doesn’t detract from the progress, but I think it would change how you interpret it. In some ways 4 / 4o were more impressive because they were going straight to output with a lower number of tokens produced to get a good response.
“Dog, reached for me
Next thought I tried to chew
Then I bit and it turned Sunday
Where are the squirrels down there, doing their bits
But all they want is human skin to lick”
While obviously not a limerick, I thought this was actually a decent poem, with some turns of phrase that conveyed a kind of curious and unusual feeling.
This reminded me how back then I got a lot of joy and surprise out of the mercurial genius of the early GPT models.
Even in these comments, there's a fair bit of disagreement about whether they do show monotonic improvement.
It’s so to-the-point. No hype. No overstepping. Sure, it lacks many details that later models add. But those added details are only sometimes helpful. Most of the time, they detract.
Makes me wonder what folks would say if you re-released TEXT-DAVINCI-001 as “GPT5-BREVITY”
I bet you’d get split opinions on these types of not so hard / creative prompts.