Maybe the author should check, before pressing "Publish", whether the info in the post is already outdated.
ChatGPT passed the image generation test mentioned: https://chatgpt.com/share/68171e2a-5334-8006-8d6e-dd693f2cec...
This case is especially egregious because there were probably two different models involved. I assume Marcus's images came from some AI service that followed what was, until very recently, the standard pattern: you ask an LLM to generate an image; the LLM fluffs out your text and passes it to a completely separate diffusion-based image generation model, which has only a rudimentary understanding of English grammar. So of course his request for "words and nothing else" was ignored. That is a real limitation of the image generation model, but it has no relevance to the strengths and weaknesses of the LLM itself. And 'AI will replace humans' scenarios typically focus on text-based tasks that use the LLM itself.
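To make the hand-off concrete, here's a minimal sketch of that older two-stage pattern. The function names are hypothetical stand-ins, not any real service's API; the point is only that the user's constraint ends up as loose caption text that the diffusion model can't really parse:

    # Sketch of the old two-stage pattern: an LLM "fluffs out" the request,
    # then a separate diffusion model renders the caption. Stand-in functions
    # only, not any specific service's API.

    def rewrite_prompt(user_request: str) -> str:
        # Stand-in for the LLM step. A constraint like "words and nothing else"
        # survives only as caption text, not as an enforced instruction.
        return f"A vivid, detailed illustration. {user_request}"

    def diffusion_generate(caption: str) -> str:
        # Stand-in for the diffusion model, which keys on nouns like "fruit"
        # and largely ignores grammatical structure such as negation.
        return f"<image rendered from caption: {caption!r}>"

    def old_style_image_request(user_request: str) -> str:
        # The LLM never sees the pixels and the diffusion model never sees the
        # conversation, so neither component can enforce the original constraint.
        return diffusion_generate(rewrite_prompt(user_request))

    print(old_style_image_request("Words and nothing else, no pictures of fruit"))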
Arguably AI services are responsible for encouraging users to think of what are really two separate models (LLM and image generation) as a single 'AI'. But Marcus should know better.
And so it's not surprising that ChatGPT was able to produce dramatically better results now that it has "native" image generation, which supposedly uses the native multimodal capabilities of the LLM (though rumors are that that description is an oversimplification). The results are still not correct. But it's a major advancement that the model now respects grammar; it no longer just spots the word "fruit" and generates a picture of fruit. Illustration or no, Marcus is misrepresenting the state of the art by not including this advancement.
If Marcus had used a recent ChatGPT output instead, the comparison would be more fair, but still somewhat misleading. Even with native capabilities, LLMs are simply worse at both understanding and generating images than they are at understanding and generating text. But again, text capability matters much more. And you can't just assume that a model's poor performance on images will correlate with poor performance on text.
The thing is, I tend to agree with the substance of Marcus's post, including the claim that portrayals of current AI capabilities are suspect because they don't pass the 'sniff test': they don't take into account how LLMs continue to fall down on some very basic tasks. I just think the proper tasks for this evaluation should be text-based. I'd say the original "count the number of 'r's in strawberry" task is a decent example, even if it's been patched, because it really showcases the 'confidently wrong' issue that continues to plague LLMs.
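For reference, the ground truth here is trivially checkable outside of any model, which is exactly what makes the confidently wrong answers so jarring. In plain Python:

    # Counting letters is a deterministic one-liner; the interest of the
    # benchmark is that models used to get it wrong with total confidence.
    print("strawberry".count("r"))  # prints 3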
The problem is that AI doesn't think, and if a task is totally new it doesn't produce the correct answer.
So do that, then, and pick the tasks where humans do best. I doubt that doing so would show all progress to be illusory.
But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.
Still "Count the R's" apparently.
I suspect this is because our proxies are predicated on a task set that inherently includes the physical world, which at some level connects all tasks and creates links between capabilities that generally pervade our environment. LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.
This will probably change gradually with robotics, as the competencies required to exist and function in the physical world will (I postulate) generalize to other tasks in a way that more closely matches the pattern our assumptions are based on.
Of course, if we segregate intelligence into isolated modules for motility and cognition, this will not be the case, since we won't be taking advantage of that generalization. I think that would be a big mistake, especially in light of the hypothesis that the massive leap in LLM capabilities came largely from training on things we weren't specifically trying to achieve: the bulk of seemingly irrelevant data that turned simple language processing into reasoning and world modeling.
Perhaps not the mainstream models, but DeepMind has been working on robotics models with simulated and physical RL for years: https://deepmind.google/discover/blog/rt-2-new-model-transla...
This was before o3, but another tweet I saw (I don't have the link) suggests it's also completely incapable of getting it.
Why am I not surprised?
- Gary says: This is just the task length that the models were able to solve in THIS dataset. What about other tasks?
Yeah, obviously. The point is that models are improving on these tasks in a predictable fashion. If you care about software, you should care how good AI is at software.
- Gary says: Task length is a bad metric. What about a bunch of other factors of difficulty which might not factor into task length?
Task length is a pretty good proxy for difficulty; that's why people estimate a bug in days. Of course many factors contribute to such an estimate, but averaged over many tasks, time is a great metric for difficulty.
Finally, Gary just ignores that, despite his view that the metric makes no sense and is meaningless, it has extremely strong predictive value. This should give you pause: how can an arbitrary metric with no connection to the true difficulty of a task, and no real way of comparing its validity across tasks or across task-takers, produce such a retrospectively smooth curve and so closely predict the recent data points from Sonnet and o3? Something IS going on there, and it cannot fit into Gary's ~spin~ narrative that nothing ever happens.
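For anyone wondering what the metric actually is: roughly, each task carries a human completion time, and a model's 'horizon' is the task length at which its success rate crosses 50%, which you can estimate by fitting success against log task length. Here's a rough sketch of that computation; the numbers are made up for illustration, and this is my reading of the approach rather than the exact published methodology:

    # Rough sketch of a "50% time horizon" calculation. Illustrative data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # (minutes of human time, did the model succeed?)
    tasks = [(2, 1), (5, 1), (10, 1), (20, 1), (45, 0), (60, 1),
             (90, 0), (120, 0), (240, 0), (480, 0)]

    X = np.log([[t] for t, _ in tasks])    # log task length as the only feature
    y = np.array([ok for _, ok in tasks])  # pass/fail labels

    clf = LogisticRegression().fit(X, y)

    # The 50% horizon is where the fitted logit crosses zero:
    # coef * log(t) + intercept = 0  =>  t = exp(-intercept / coef)
    horizon = np.exp(-clf.intercept_[0] / clf.coef_[0][0])
    print(f"~50% success horizon: {horizon:.0f} minutes of human time")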