Absolutely.
But I argue the more urgent issue is simple misunderstanding.
LLMs were designed to summarize web pages while sidestepping copyright issues --- they are fundamentally *language* prediction engines. You don't have to take my word for it; it's right in the name --- Large *LANGUAGE* Model.
But what people seem to be expecting is a *logical* deduction engine. This is a total misapplication.
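To make the "prediction engine" point concrete, here is a minimal sketch of what text generation actually is under the hood --- a loop that repeatedly asks "which token is most likely next?" (This assumes the Hugging Face `transformers` library and the small `gpt2` model purely for illustration; neither is mentioned above.) Note that there is no deduction step anywhere in the loop:

```python
# Minimal sketch: autoregressive generation is just repeated next-token prediction.
# Assumes `transformers` and the small "gpt2" checkpoint, chosen only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Can AI run your business? Short answer:"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                                        # extend by 20 tokens
        logits = model(ids).logits[:, -1, :]                   # scores for the next token only
        next_id = torch.argmax(logits, dim=-1, keepdim=True)   # greedy pick: most probable token
        ids = torch.cat([ids, next_id], dim=-1)                # append and repeat

print(tokenizer.decode(ids[0]))
# No reasoning, no world model consulted --- just "which token is most likely next?"
```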
Examples are posted here on a daily basis --- "Can AI run your business?"
Short answer --- no, not now and not anytime soon. Expecting LLMs to actually *understand*, *reason*, or *make decisions* is a misapplication, and one I expect will carry significant legal liability once that becomes painfully clear.
What's especially concerning is that, even with those limitations, these systems are already being deployed inside workflows that influence decisions... I assume hiring (and maybe firing down the road?), content moderation, surveillance, targeting, triage. It's often done on the assumption that "the model is good enough" or that human oversight will catch errors later; in practice, that oversight tends to erode once systems scale.
I'm wondering how organizations are handling that gap: what the models can do vs. what they're implicitly trusted to do once integrated into real systems.
Similar to the way management handles most things (in the USA at least) --- by going with the flow until the errors and the pain of doing so become unbearable.