I love this. I think there's a tendency to extrapolate past performance gains into the future, even though the primary driver of those gains (scaling) has proven to be dead. Continued improvements seem to be coming from rapid breakthroughs in RL training methodologies and, to a lesser degree, architectures.
People should see this as a significant shift. With scaling, the path forward was more certain than what we're seeing now. That means you probably shouldn't build in anticipation of future capabilities, because it's uncertain when they will arrive.
nbstme•7h ago