> The Juno Award-winning musician said he learned of the online misinformation last week after a First Nation north of Halifax confronted him with the summary and cancelled a concert planned for Dec. 19.
Cool, so we’ve reached the brain-rot stage where people are not only taking AI summaries as fact (we’ve been here for a while now, even before LLMs, with Google’s quick-answer stuff) but are _citing_ them as proof. Fuck me. I know that’s a little much for HN, but still, it’s just insane that at no point did anyone think to check a primary source before canceling the show.
The complete abdication of thinking, and of even the most minor research, is depressing. I use LLMs daily, but I always check the sources and verify the claims. They are great for surfacing info, but that’s just the first step. I’ve lost track of how many times an LLM has confidently stated something, citing sources, and when I check those sources they say nothing of the sort.
This won't end well in my judgment.
bell-cot•1mo ago
"AI" makes for a clickier story, but you don't need it to have that kinda screw-up.
Actually, you don't even need the web. Back in the '90s, a young coworker of mine was denied a mortgage. He requested his credit report - and learned that he'd already bought a house. In another city. At age 5. Based on income from the full-time job at Ford Motor he'd had since age 4. And several other laughable-in-retrospect hallucinations.
bell-cot•1mo ago
OTOH - yes, I get that "the AI said" is the new "dog ate my homework" excuse, for ignoble humans trying to dodge any responsibility for their own lazy incompetence.
jqpabc123•1mo ago
Your analogy is bad.
"Some little on-line forum" with a few angry users is not really comparable to a mega-corp with billions of users.
Lawyers could go after a few misguided individual users for slander, but they're unlikely to - as they say, you can't get blood out of a rock. A mega-corp is a much more tempting target.
Legal liability for bad AI is just getting started, but I expect lawyers are giddy with anticipation.
bell-cot•1mo ago
How does that play out? IANAL, but I'm thinking Facebook says "Sorry, but Section 230 covers our ass" - and that's about it. Still no consequences.
jqpabc123•1mo ago
But AI slop is not "user generated content" - it is content that the website itself is generating with AI and publishing. As such, the site becomes wholly responsible for that content (in my opinion).
jqpabc123•1mo ago
If individuals on Facebook post it, the individuals are responsible under Section 230 of US law.
But if AI owned and operated by Facebook posts it, Facebook is responsible (in my opinion). There is no one else to blame for it.
Once corps start being held legally liable for their AI-generated slop, I wouldn't be surprised if they start banning this "new technology" over liability concerns.
LLM-based AI is inherently flawed and unreliable, and anyone with half a brain knows it. Using technology that is widely known to be flawed for any sort of "serious" work is a textbook example of negligence. And slander can be "serious". Lawyers live for this sort of thing.
bell-cot•1mo ago
> Once corps start being held legally liable for their AI generated slop...
While I personally agree with your ideal - in the current legal, regulatory, and political environment, I see precious little chance of any such corporation actually being held responsible for the output of its AI.
jqpabc123•1mo ago
For example, are medical and legal malpractice laws going to be voided just so incompetent AI can be applied?
I doubt it. The USA is ruled by lawyers.