On the other hand, if you're telling your investors that AGI is about two years away, then you can only do that for a few years. Rumor has it that such claims were made. Hopefully no big investors actually believed them.
The real question to be asking is, based on current applications of LLMs, can one pay for the hardware to sustain it? The comparison to smartphones is apt; by the time we got to the "Samsung Galaxy" phase, where only incremental improvements were coming, the industry was making a profit on each phone sold. Are any of the big LLMs actually profitable yet? And if they are, do they have any way to keep the DeepSeeks of the world from taking it away?
What happens if you built your business on a service that turns out to be hugely expensive to run and not profitable?
For me, AGI would mean something truly at least human-level, as in "this clearly has a consciousness paired with knowledge", a.k.a. a person. In that case, what do the investors expect? Some sort of slave market of virtual people to exploit?
bpodgursky•53m ago
It doesn't matter whether they are lying. People want to hear it. It's comforting. So the market fills the void, and people get views and money for saying it.
Don't use the fact that people are saying it as evidence that it is true.
righthand•43m ago
The inverse can be true too: the fact that people ARE saying AGI is coming isn't evidence that it's true either.
bpodgursky•37m ago
"AI is getting better rapidly" is the current state of affairs. Arguing "AI is about to stop getting better" is the argument that requires strong evidence.
bpodgursky•27m ago
You've frog-boiled yourself into timelines where "no WORLD-SHAKING AI launches in the past 4 months" means "AI is frozen". In 4 months, you will be shocked if AI doesn't have a major improvement every 2 months. In 6 months, you will be shocked if it doesn't have a major update every month.
It's hard to see an exponential curve while you're on it, so I'm not trying to fault you here. But it's really important to stretch yourself to try.
th0ma5•19m ago
There's been the obvious notion that digitizing the world's information is not enough and that hasn't changed.
righthand•5m ago
You assume everyone is “impressed”.
backpackviolet•23m ago
… is it? I hear people saying that. I see “improvement”: the art generally has the right number of fingers more often, the text looks like text, the code agents don’t write stuff that even the linter says is wrong.
But I still see the wrong number of fingers sometimes. I still see the chat bots count the wrong number of letters in a word. I still see agents invent libraries that don’t exist.
I don’t know what “rapid” is supposed to mean here. It feels like Achilles and the Tortoise and also has the energy costs of a nation-state.
righthand•7m ago
LLMs getting better != a path to AGI.
SalmoShalazar•42m ago
There are interesting and well-thought-out arguments for why AGI is not coming with the current state of technology; dismissing those arguments as propaganda/clickbait is not warranted. Yannic is also an AI professional and expert, not someone to be offhandedly dismissed because you don't like the messaging.
TheCraiggers•25m ago
Telling us all to remember that there's potential for bias isn't so bad. It's a hot button issue.
TheOtherHobbes•38m ago
It's not the AGI sceptics who are getting $500bn valuations.