That is, is the economic value that can be unlocked by today’s AI enough to justify the valuations, or are the valuations all predicated on AI becoming much, much better than it is today (AGI, ASI, etc.)?
I assume it’s a bit of both, just trying to get a sense of the balance between the two.
jimbo808•1d ago
We know which common types of human error to check for, and those mistakes stick out. But LLMs make mistakes no human would ever make, and they make them with a confidence that creates a false expectation of correctness.