The Premise: There has been a lot of talk lately about the possibility that AI development (as we currently know it) is approaching a plateau. While I don't personally agree with this hypothesis, it is undeniably a common sentiment in the industry right now, so it’s worth investigating.
We have seen that increasing the number of parameters or "scaling up" a neural network doesn't always yield immediate linear improvements. With certain versions of ChatGPT, many users perceived a degradation in performance despite the underlying network complexity presumably being increased.
My Theory: Is it possible that we are seeing a "complexity dip"? In other words, could there be a phase where increasing complexity initially causes a drop in performance, only to be followed by a new phase where that same complexity allows for superior emergent properties?
To simplify, let’s imagine a hypothetical scale where we compare "Complexity" (parameters/compute) vs. "Performance." For example:
LLM: ChatGPT 3 // Complexity Level 1 // Performance 0.2
LLM: ChatGPT 3.5 // Complexity Level 10 // Performance 0.5
LLM: ChatGPT 4 // Complexity Level 100 // Performance 0.75
LLM: ChatGPT 4.2 // Complexity Level 1000 // Performance 0.6 (The "False Plateau" / Performance degradation)
LLM: ChatGPT 4.2X // Complexity Level 10000 // Performance 0.5 (Further degradation due to unmanaged complexity)
LLM: ChatGPT 6 // Complexity Level 100000 // Performance 0.8 (The "breakthrough": new abilities emerge)
LLM: ChatGPT 7 // Complexity Level 1000000 // Performance 0.99 (Potential AGI / Peak performance)
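To make the "dip" concrete, here is a minimal Python sketch that encodes the hypothetical numbers above (they are illustrative, not real benchmarks, and the model names past 4 are invented) and flags the region where performance falls below the best level already reached:

```python
# Hypothetical complexity-vs-performance figures from the scale above.
# All numbers and later model names are illustrative, not real benchmarks.
models = [
    ("ChatGPT 3",    1,         0.20),
    ("ChatGPT 3.5",  10,        0.50),
    ("ChatGPT 4",    100,       0.75),
    ("ChatGPT 4.2",  1_000,     0.60),  # start of the "false plateau"
    ("ChatGPT 4.2X", 10_000,    0.50),  # further degradation
    ("ChatGPT 6",    100_000,   0.80),  # the "breakthrough"
    ("ChatGPT 7",    1_000_000, 0.99),  # hypothetical peak
]

# A model is "in the dip" if its performance is below the best seen so far.
best = 0.0
dip = []
for name, complexity, perf in models:
    if perf < best:
        dip.append(name)
    best = max(best, perf)

print(dip)  # -> ['ChatGPT 4.2', 'ChatGPT 4.2X']
```

The point of the sketch is that a dip is only visible in hindsight: while you are inside it, the same data looks like a permanent decline.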
The Risk: The real problem here is economic and psychological. If we are currently in the "GPT-4.x" phase of this example, the industry might stop investing because the returns look negative. We might never reach the "GPT-6" level simply because we mistook a temporary dip for a permanent ceiling.
I’m curious to hear your thoughts. Have we seen similar "dips" in other complex systems before a new level of organization emerges? Or is the plateau a hard physical limit?
chrisjj•4w ago
Perhaps the cause is simply the presumption?
funkyfiddler69•3w ago
IMO, neither the plateau nor the perception of "a wrong path" are real. There are too many paths and we have too few humans with adequately capable brains.
Companies talk up their own agendas for the shock value. It's a marketing thing.
AI R&D is basically thinking out loud nowadays. It's just the pace of the news.
I believe that most AI development has reached the flat end of a logarithmic curve. The humans working on it will catch up, and then we'll see faster growth again. It takes time to get from one edge to the other, or to walk along it, or to explore the area.
Progress is still there, but it's vanishingly small compared to past years, when it was relatively easy to get better results over and over. Nobody will notice it unless they are sensitized to it.
What kind of major leap in performance do you expect? What do others expect? Be specific and people will tell you whether there is a plateau or not enough hands on deck working on specific problems.