The scribble model kind of hints at what a better forecast would've done: you start from the scribbles and ask "what would it take to get that line, and how would we get there?" And I love that the initial set of scribbles will, amongst other things, expose your biases (because you draw the set of scribbles that seems plausible to you, a priori).
The fact that it can both guide you towards exploring alternatives and expose your biases, while being extremely simple - marvellous work.
Definitely going to incorporate this into my reasoning toolkit!
If everything goes "perfectly", then the logic works (to an extent, but the increasing rate of returns is a suspicious assumption baked into it).
But everything must go perfectly for that to happen, including all the productivity multipliers being independent and the USA deciding to take this genuinely seriously (not fake-seriously, in the form of politicians saying "we're taking this seriously" and then not doing much), and therefore making a no-expenses-spared rush at the target as if it were actually an existential threat. I see no way this would be a baseline scenario.
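To make that concrete, here's a toy sketch (all multipliers invented for illustration, not taken from the forecast) of how much the compounding depends on the independence assumption:

```python
import math

# Hypothetical headline speedups from separate sources (invented numbers).
multipliers = [1.5, 1.3, 1.4, 1.2, 1.6]

# "Everything goes perfectly": the multipliers are independent, so they multiply.
independent = math.prod(multipliers)

# Overlapping gains: each multiplier only delivers half of its log-gain,
# a crude stand-in for the speedups not being independent.
overlapping = math.exp(sum(0.5 * math.log(m) for m in multipliers))

print(f"independent: {independent:.2f}x, overlapping: {overlapping:.2f}x")
# ~5.24x vs ~2.29x: the "perfect" scenario is carried largely by the
# independence assumption, not by any single multiplier.
```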
You _should_ expect to see roughly comparable results, but often you don't, and when you don't it can reveal hidden assumptions or flawed thinking.
Predicting AI is more or less impossible because we have no idea about its properties. With other technologies, we can reason about how small or how big a component can get, and that gives us physical limitations we can observe. With AI, we throw in data and we either are or aren't surprised by the behavior the model exhibits. From the few datapoints we have, it seems that more compute and more data usually lead to better performance, but that is more or less everything we can say about it; there is no theory behind it that would guarantee us the gains for the next 10x.
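A minimal sketch of that last point, with made-up (compute, loss) datapoints: you can fit a scaling curve to what you have, but the fit is an empirical extrapolation, not a theory that guarantees the next 10x.

```python
import numpy as np

# Made-up datapoints: training compute (FLOPs) and the loss observed at each scale.
compute = np.array([1e20, 1e21, 1e22, 1e23])
loss = np.array([2.9, 2.5, 2.2, 1.95])

# Fit loss ~ a * compute**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

# Extrapolate one more 10x of compute.
predicted = a * (1e24) ** b
print(f"extrapolated loss at 10x more compute: {predicted:.2f}")
# The curve-fit says ~1.7 for these toy numbers, but nothing in the fit rules
# out the curve bending (in either direction) over the next 10x.
```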
The scribble method is, of course, quite sensitive to the number of hypotheses you choose to consider, as it effectively treats them all as equally probable, but it also surfaces a lot of interesting interactions between different hypotheses that have nothing to do with each other yet still make effectively the "same" prediction at various points in time. And I don't see any reason you can't just be thoughtful about what "shapes" you choose to include and in what quantity - basically a meta-subjective model of which models are most likely, or something, haha. That said, there's also some value in the low-res aspect of just drawing the line: you can articulate exactly what path you are thinking of without having to pin that thinking to some model that doesn't actually add anything to the prediction other than fitting the same shape as what is in your mind.
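For what it's worth, here's a rough sketch of that sensitivity (trajectory shapes and weights all invented): a scribble ensemble where the forecast at a given year is just a weighted average across shapes, so adding or dropping a single shape moves the equal-weight answer.

```python
import numpy as np

t = np.linspace(0, 10, 101)  # years into the future

# A handful of hand-drawn "scribble" shapes (all invented for illustration).
scribbles = {
    "plateau": 1 - np.exp(-0.5 * t),
    "steady_linear": 0.1 * t,
    "late_takeoff": np.where(t < 6, 0.02 * t, 0.12 + 0.3 * (t - 6)),
    "s_curve": 1 / (1 + np.exp(-(t - 5))),
}

# Equal weights (what the bare method implies) vs. a subjective reweighting
# (the "meta-subjective model of which models are most likely").
equal = {name: 1 / len(scribbles) for name in scribbles}
subjective = {"plateau": 0.4, "steady_linear": 0.3, "late_takeoff": 0.1, "s_curve": 0.2}

def forecast(weights, year):
    i = int(np.searchsorted(t, year))
    return sum(w * scribbles[name][i] for name, w in weights.items())

for year in (2, 5, 8):
    print(year, round(forecast(equal, year), 2), round(forecast(subjective, year), 2))
# Adding or removing one scribble shifts the equal-weight forecast, which is
# exactly the sensitivity to how many hypotheses you choose to include.
```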