Friendly reminder that the entire output from an LLM is fabricated.
On edit: that is to say, the content of the citations might be fabulated, while the rest is merely fabricated.
I think "confabulated" is more appropriate: "To fill in gaps in one's memory with fabrications that one believes to be facts."
This is an opportunity for brands to sell verifiability, i.e., that the content they are selling has been properly vetted, which was obviously not the case here.
A similar approach should work w/ a DOI.
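Rough sketch of that DOI check, assuming Python with the `requests` library (doi.org answers a registered DOI with a redirect and an unknown one with a 404; the second DOI below is made up purely for illustration):

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if doi.org can resolve the DOI, i.e. it points at a registered work."""
    # doi.org replies with a redirect (3xx) for registered DOIs and 404 for unknown ones.
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=10)
    return 300 <= resp.status_code < 400

# The Springer book linked later in this thread.
print(doi_exists("10.1007/978-3-031-37345-9"))    # True
# A made-up DOI, purely for illustration.
print(doi_exists("10.1000/this-does-not-exist"))  # False
```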
I work on Veracity https://groundedai.company/veracity/ which does citation checking for academic publishers. I see stuff like this all the time in paper submissions. Publishers are inundated with them.
That sounds... counterproductive
So the better 'idea' would be to produce a CYA citation assistant that, for a given paper, adds all the remotely plausible references for all the known potential reviewers of a journal or conference. I honestly think this is not a hard problem, but I doubt even that can be commercialized beyond Google Ads monetization.
Or did they take a human-written text and ask a machine to generate references/citations for it?
And maybe the authors were over-confident in the capabilities of current AI.
First check if the citation references a real thing. Then actually read and summarize the referenced text and give a confidence level that it says what was claimed.
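A minimal sketch of those two steps, assuming Python, the `requests` library, and the public Crossref API (https://api.crossref.org/works/<doi>). The word-overlap score is only a stand-in for the hard part; a real checker would read the full text and use an entailment model or an LLM judge:

```python
import re
from typing import Optional

import requests

CROSSREF = "https://api.crossref.org/works/"

def fetch_metadata(doi: str) -> Optional[dict]:
    """Step 1: does the citation point at a real, registered work?"""
    resp = requests.get(CROSSREF + doi, timeout=10)
    return resp.json()["message"] if resp.status_code == 200 else None

def support_score(claim: str, meta: dict) -> float:
    """Step 2 (crude stand-in): fraction of the claim's vocabulary that appears
    in the work's title/abstract. A real checker would read the referenced text
    and use a proper entailment model instead of word overlap."""
    text = " ".join([
        " ".join(meta.get("title", [])),
        re.sub(r"<[^>]+>", " ", meta.get("abstract", "")),  # Crossref abstracts are JATS XML
    ]).lower()
    words = set(re.findall(r"[a-z]{4,}", claim.lower()))
    return sum(w in text for w in words) / max(len(words), 1)

meta = fetch_metadata("10.1007/978-3-031-37345-9")  # the Springer book linked below
if meta is None:
    print("citation does not resolve to a real work")
else:
    claim = "a textbook covering machine learning methods"  # hypothetical claimed summary
    print(f"confidence that the source supports the claim: {support_score(claim, meta):.2f}")
```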
But no, we have AI that are compounding the problem. That says something about unaligned incentives.
Also one of the things AI is likely the least suited for.
The best I could imagine an AI doing is offering sources for you to check for a given citation.
I agree, if we are using the current idea of AI as language models.
But that’s very limiting. I’m old enough to remember when AI meant everything a human could do. Not just some subset that is being deceptively marketed as potentially the whole thing.
Lol, that answer sounds suspiciously like it was LLM-generated as well...
It's also true that if you have fake CITATIONS in your work, such algorithms aren't necessary to know the work is trash - either it was written by AI or you knowingly faked your research, and it doesn't really matter which.
veltas•8h ago
gammalost•7h ago
https://link.springer.com/book/10.1007/978-3-031-37345-9
WillAdams•5h ago
Magazines are even worse --- David Pogue claimed in one of his columns that Steve Jobs used Windows 95 on a ThinkPad, when a moment's reflection and a check of the approved models list at NeXT would have made it obvious it was running NeXTstep.
Even books aren't immune. A recent book on a tool cabinet held up as an example of perfection:
https://lostartpress.com/products/virtuoso
misspells H.O. Studley's name as "Henery" on the inside front cover, makes many other typos, is full of bad breaks and pedestrian typesetting which poorly presents numbers and dimensions (failing to use the multiplication symbol or primes), and the publisher's unwillingness to fix a duplicated photo is enshrined in the excerpt which they publish online:
https://blog.lostartpress.com/wp-content/uploads/2016/10/vir...
where what should be a photo of an iconic pair of jeweler's pliers on pg. 70 is replaced with that of a pair of flat pliers from pg. 142. (Any reputable publisher would have done a cancel and fixed that.)
Sturgeon's Law: 90% of everything is crap. I would be a far less grey, and far younger, person if I had back all the time and energy I spent fixing files mangled by Adobe Illustrator, or where the wrong typesetting tool was used for the job (the six weeks spent re-setting a book the vendor had set in QuarkXPress when it needed to be in LaTeX were the longest of my life).
EDIT: by extension, I guess it's now 90% of everything is AI-generated crap, 90% of what's left is traditional crap, leaving 1% of worthwhile stuff.
cess11•3h ago
It was, in part, Springer that enabled Robert Maxwell.
antegamisou•4h ago
AIMA is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.
PRML is Pattern Recognition and Machine Learning by Christopher Bishop.
ESL is The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman.