The author, for whatever reason, treats it as a foregone conclusion that every dollar spent this way is a waste of time and resources, but I wouldn't view any of it as wasted investment at all. It's no different from any other trend; by this logic, we may as well view the cloud/SaaS craze of the last decade as a waste of time. After all, that decade was also fueled by lots of unprofitable companies, speculative investment and so on, and it never reached any pie-in-the-sky, civilization-altering Renaissance. Was it all a waste of time?
It's ultimately just another thing the industry is doing as demand keeps evolving. There is demand for building out the current AI stack, and demand for improving it. None of it seems wasted.
https://www.youtube.com/watch?v=DtePicx_kFY https://www.bbc.com/news/articles/cy7e7mj0jmro
Also, if anyone wants to know what a real effort to waste a trillion dollars can buy ... https://costsofwar.watson.brown.edu/
One thing to keep in mind is that most of the people who go around spreading unfounded criticism of LLMs, "Gen-AI" and AI in general usually don't understand computer science very deeply, and understand science itself even less. In their minds, if someone runs an experiment and it doesn't pan out, that means "science itself failed", because they literally don't know how research and science work in practice.
I’m quite critical, but I think we have to grant that he has plenty of credentials and understands the technical nature of what he’s critiquing quite well!
It's all about scale.
If you spend $100 on something that didn't work out, that money wasn't wasted if you learned something amazing. If you spend $1,000,000,000,000 on something that didn't work out, the expectation is that you learn something close to 10,000,000,000x more than from the $100 spend. If the value of the learning is several orders of magnitude less than the level of investment, there is absolutely tremendous waste.
For example: nobody counts spending a billion dollars on a failed project as valuable if all you learned was how to avoid future paper cuts.
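A quick back-of-the-envelope in Python, just to make the ratio above explicit (this is only the comment's own arithmetic, not data about actual returns):

    # How much more "learning" the same cost-benefit logic demands
    # from a $1T spend than from a $100 spend.
    small_spend = 100
    big_spend = 1_000_000_000_000
    print(f"{big_spend / small_spend:,.0f}x")  # 10,000,000,000x, i.e. 1e10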
Of course, he includes enough weasel phrases that you could never nail him down on any particular negative sentiment; LLMs aren’t bad, they just need to be “complemented”. But even if we didn’t have context, the whole thesis of the piece runs completely counter to this — you don’t “waste” a trillion dollars on something that just needs to be complemented!
FWIW, I totally agree with his more mundane philosophical points about the need to finally unify the work of the Scruffies and the Neats. The problem is that he frames it like some rare insight that he and his fellow rebels found, rather than something that was being articulated in depth by one of the field's main leaders 35 years ago[1]. Every one of the tens of thousands of people currently working on "agential" AI knows it too, even if they don't have the academic background to articulate it.
I look forward to the day when Mr. Marcus can feel like he’s sufficiently won, and thus get back to collaborating with the rest of us… This level of vitriolic, sustained cynicism is just antithetical to the scientific method at this point. It is a social practice, after all!
[1] https://www.mit.edu/~dxh/marvin/web.media.mit.edu/~minsky/pa...
naveen99•30m ago
Ilya should just enjoy his billions raised with no strings.
philipwhiuk•24m ago
Yes, indeed, that is why all we have done since the 90s is scale up the 'expert systems' we invented ...
That's such an ahistorical take it's crazy.
* 1966: failure of machine translation
* 1969: criticism of perceptrons (early, single-layer artificial neural networks)
* 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
* 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
* 1973–74: DARPA's cutbacks to academic AI research in general
* 1987: collapse of the LISP machine market
* 1988: cancellation of new spending on AI by the Strategic Computing Initiative
* 1990s: many expert systems were abandoned
* 1990s: end of the Fifth Generation computer project's original goals
Time and time again, we have seen that each wave of academic research begets a degree of progress, improved by the application of hardware and money, but is ultimately only a step towards AGI, and ends with the realisation that there's a missing cognitive ability that can't be overcome by absurd amounts of compute.
LLMs are not the final step.
CuriouslyC•5m ago
Read about the No Free Lunch Theorem. Basically, the reason we need to "scale" so hard is because we're building models that we want to be good at everything. We could build models that are as good as LLMs at a narrow fraction of the tasks we ask of them, at probably 1/10th the parameters.
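For reference, the Wolpert–Macready formulation of the theorem says that, averaged over all possible objective functions, no search/optimization algorithm outperforms any other:

    \sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

where f ranges over all objective functions, m is the number of evaluations, d_m^y is the observed sequence of cost values, and a_1, a_2 are any two algorithms. The connection to the point above, loosely: performance gained on one class of problems is paid for on others, which is why a model specialized to a narrow task distribution could get away with far less capacity.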