Although, people creating a digital god would be pretty blasphemous. Tough one.
How can it not, tho? Like it can't get worse (at least for the open models: you have them now, and the underlying models don't change). So the only obvious answer is that they either stagnate or get better. And a brief look at history shows that nothing truly stagnates.
Keep in mind that we're not even 3 years into this new paradigm. We're still scratching the surface of how we can use these things, which are unreasonably good given they were only trained for next-token prediction (NTP).
That's my main argument against an "AI winter". We have so many things to try that it'll take decades to fully realise the potential of this tech, even if all research into foundation models stopped now. And it surely won't: there's just too much capital allocated here, and that attracts a lot of smart people to work on the problem. Yeah, it's reasonable to say things will get better.
This seems so false to me that we must be looking at different histories. Almost everything stagnates; we just over-index on the things that don't.
Many of them appear to be somewhere on the rationalist/EA (rationalist as in LessWrong enthusiasts, not the literal meaning of the term) spectrum, and probably wouldn't consider _themselves_ religious, though many would consider those things to be religions in their own right.
It has this longtermism-adjacent feel where it makes very strange albeit "rational" arguments about the likelihood of certain outcomes, and then over-indexes on net present value to make decisions. Sure, if there are hundreds of trillions of dollars to be made it's worth spending hundreds of billions of dollars for a 1 percent chance it happens, but it's still a 1 percent chance.
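A quick back-of-the-envelope sketch of that expected-value logic, treating the figures in the comment ($100T upside, $100B spend, 1% chance) purely as illustrative assumptions:

```python
# Illustrative expected-value arithmetic for the argument above.
# All numbers are assumptions taken from the comment, not real estimates.
payoff = 100e12      # assumed upside if the bet pays off, in dollars (~$100T)
probability = 0.01   # assumed 1% chance of that outcome
stake = 100e9        # assumed spend, in dollars (~$100B)

expected_value = probability * payoff - stake
print(f"Expected value of the bet: ${expected_value:,.0f}")      # positive "in expectation" (~$900B)
print(f"Chance the bet pays off at all: {probability:.0%}")      # still only 1%
```

The point of the sketch is just that a positive expected value and a 99% chance of losing the stake can both be true at once, which is exactly the tension the comment is pointing at.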