The news seems to be very good, but I cannot understand how people from MIT itself can write a horrible mess like
> Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars
which conflates AI, AGI, NNs and LLMs. It is like writing «[software] systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as [software] systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars». The system that outputs «plausible-sounding answers» is not the one supposed to «develop drugs», and the one that «synthesize[s] information» is not the one supposed to «drive autonomous cars». And it is normally malicious sophistry to argue in the pattern of "wheels wear out on roads, and «that problem can have huge consequences as» wheels are fundamental for pottery".
I cannot understand what happened, nor which explanation is worse: that the writer responsible for the article did not understand what they wrote? That they decided an audience presumed to be adults should be addressed the way one addresses children when one wants them to remain mentally children? Or that something else steered the piece toward that form of communication without anyone being aware of it?