
Springer Nature book on machine learning is full of made-up citations

https://retractionwatch.com/2025/06/30/springer-nature-book-on-machine-learning-is-full-of-made-up-citations/
139•ArmageddonIt•7mo ago

Comments

veltas•7mo ago
Unfortunately not surprising; the quality of a lot of textbooks has been bad for a long time. Students aren't discerning, and lecturers often don't try the book out themselves.
gammalost•7mo ago
I agree. I feel that Springer is not doing enough to uphold their reputation. One example is a book on RL that I found[1]. It is clear that no one seriously reviewed the content of this book, yet despite its clear flaws they are charging 50+ euro.

https://link.springer.com/book/10.1007/978-3-031-37345-9

WillAdams•7mo ago
Yeah, ages ago, when I was doing typesetting, it was disheartening how unaware authors were of the state of things in the fields they were writing about --- I'm still annoyed that when I pointed out that an article in an "encyclopedia" on the history of spreadsheets failed to mention Javelin or Lotus Improv, it was not updated to include those notable examples.

Magazines are even worse --- David Pogue claimed Steve Jobs used Windows 95 on a ThinkPad in one of his columns, when a moment's reflection, and a check of the approved models list at NeXT would have made it obvious it was running NeXTstep.

Even books aren't immune, a recent book on a tool cabinet held up as an example of perfection:

https://lostartpress.com/products/virtuoso

mis-spells H.O. Studley's name as "Henery" on the inside front cover, as well as containing many other typos, myriad bad breaks, and pedestrian typesetting which poorly presents numbers and dimensions (failing to use the multiplication symbol or primes). Their unwillingness to fix a duplicated photo is enshrined in the excerpt which they publish online:

https://blog.lostartpress.com/wp-content/uploads/2016/10/vir...

where what should be a photo of an iconic pair of jeweler's pliers on pg. 70 is replaced with that of a pair of flat pliers from pg. 142 (any reputable publisher would have done a cancel and fixed that).

Sturgeon's Law: 90% of everything is crap. And I would be a far less grey, and far younger, person if I had back all the time and energy I spent fixing files mangled by Adobe Illustrator, or where the wrong typesetting tool was used for the job (the six weeks spent re-setting a book the vendor had set in QuarkXPress when it needed to be in LaTeX were the longest of my life).

EDIT: by extension, I guess it's now 90% of everything is AI-generated crap, 90% of what's left is traditional crap, leaving 1% of worthwhile stuff.

cess11•7mo ago
What reputation would that be?

It was, in part, Springer that enabled Robert Maxwell.

antegamisou•7mo ago
Admittedly I'm becoming a bit dogmatic, but I'll say it again: AIMA/PRML/ESL are still the best reference textbooks for foundational AI/ML and will be for a long time.

AIMA is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.

PRML is Pattern Recognition and Machine Learning by Christopher Bishop.

ESL is Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman.

ludicrousdispla•7mo ago
>> LLM-generated citations might look legitimate, but the content of the citations might be fabricated.

Friendly reminder that the entire output from an LLM is fabricated.

amelius•7mo ago
Fabricate is a word with an ambiguous meaning. It can mean "make up", but also simply "produce".
dwayne_dibley•7mo ago
I fabricated this reply out of my brain.
ktallett•7mo ago
I think in this situation both meanings apply: it produced made-up content.
xg15•7mo ago
Technically yes, but not all of it has lost grounding with reality?
mapleoin•7mo ago
You could say that about Alice in Wonderland.
bryanrasmussen•7mo ago
probably a better word here would be fabulated.

on edit: that is to say the content of the citations might be fabulated, while the rest is merely fabricated.

leereeves•7mo ago
I didn't realize "fabulated" was a word. TIL, thank you. But in this case it doesn't sound like the right word; it means: "To tell invented stories, often those that involve fantasy, such as fables."

I think "confabulated" is more appropriate: "To fill in gaps in one's memory with fabrications that one believes to be facts."

bryanrasmussen•7mo ago
There are of course a number of related words from the same roots. While fabulate does have a connection to fantasy, since fabulation is something a fabulist or fabulator does, it also has wider applications, as can easily be seen in confabulate: the con- prefix means "with" or "together", so confabulation is essentially "with fabulation".

That said I guess the version of fabulation I was using is pretty antiquated, probably due to my reading too much 19th century fiction where describing a passel of lies as a pure fabulation would be something people would do on the regular.

techas•7mo ago
I saw this recently in some congress abstracts. I think it is just AI-generated content. The references look real but don't exist.
PicassoCTs•7mo ago
To imagine this driving a singularity, when meanwhile it's putting the final nail in science's coffin, together with paper spam and declining research rewards. They are going to hang us tech-priests from the lamp-posts when the consequences of this bullshit artistry hit home.
haffi112•7mo ago
You would think that Springer would have done due diligence here, but what is the value of a brand such as Springer if they let AI slop like this slip through the cracks?

This is an opportunity for brands to sell verifiability, i.e., that the content they are selling has been properly vetted, which was obviously not the case here.

cess11•7mo ago
Why would one think that? All of the big journal publishers have had paper mills and fraudsters and endless amounts of "tortured phrases" under their names for a long, long time.
WillAdams•7mo ago
Back when I was doing academic publishing I'd use a regex to find all the hyperlinks, then a script (written by a co-worker, thanks again Dan!) to determine whether they were working or not.

A similar approach should work w/ a DOI.
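That two-part approach (regex extraction, then a liveness check) can be sketched in Python. This is a minimal illustration, not the script described above: the regex is Crossref's recommended pattern for modern DOIs (older DOIs can deviate from it), and the resolver check is just a HEAD request against doi.org.

```python
import re
import urllib.request

# Crossref's recommended pattern for modern DOIs; older or unusual
# DOIs can deviate from this shape, so treat matches as candidates.
DOI_RE = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_dois(text: str) -> list[str]:
    """Pull candidate DOIs out of free text, e.g. a reference list."""
    return DOI_RE.findall(text)

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if https://doi.org/<doi> resolves (network call)."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False
```

Usage would be `[d for d in extract_dois(manuscript) if not doi_resolves(d)]` to list references that don't resolve at all, which is the cheapest possible filter.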

RossBencina•7mo ago
In the past I've had GPT4 output references with valid DOIs. Problem was the DOIs were for completely different (and unrelated) works. So you'd need to retrieve the canonical title and authors for the DOI and cross check it.
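Guarding against that failure mode means comparing the metadata registered for the DOI against what the reference claims. A sketch, assuming the public Crossref REST endpoint (`https://api.crossref.org/works/<doi>`) and an arbitrary 0.8 similarity threshold of my own choosing:

```python
import json
import urllib.request
from difflib import SequenceMatcher

def fetch_crossref_metadata(doi: str) -> dict:
    """Fetch the registered metadata for a DOI from the public
    Crossref API (network call); the registered title is the list
    under message["title"]."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["message"]

def titles_match(cited_title: str, registered_title: str,
                 threshold: float = 0.8) -> bool:
    """Fuzzy-compare the title a reference claims against the title
    registered for its DOI; below the threshold, flag it for a human."""
    ratio = SequenceMatcher(
        None, cited_title.lower(), registered_title.lower()).ratio()
    return ratio >= threshold
```

A fuzzy match rather than equality absorbs casing, punctuation, and subtitle differences while still catching the "valid DOI, unrelated work" case.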
thoroughburro•7mo ago
And then make sure the arguments and evidence it presents are as the LLM represented them to be.
ofjcihen•7mo ago
At which point it’s more of a hassle to use an LLM than not.
rixed•7mo ago
And then check that the cited article was not itself an AI piece that managed to get published.
cyclecycle•7mo ago
A classic case.

I work on Veracity (https://groundedai.company/veracity/), which does citation checking for academic publishers. I see stuff like this all the time in paper submissions. Publishers are inundated.

rbanffy•7mo ago
Don’t publishers ban authors who attempt such shenanigans?
bumby•7mo ago
Not all journals require a DOI link for each reference. Most good ones do seem to have a system to verify the reference exists and is complete; I assume there’s some automation to that process but I’d love to hear from journal editorial staff if that’s really the case.
Vinayak_A_B•7mo ago
If I make a citation verifier, will conference/journal people pay for it? First verify that the citation is legit, i.e. that the paper actually exists; after that, another LLM reads the cited paper and gives a rating out of 10 for whether it fits the context. [ONLY FOR LIT SURVEY]
SiempreViernes•7mo ago
No, they aren't paying the reviewers in the first place.
codewench•7mo ago
So given that the output of an LLM is unreliable at best, your plan is to verify that a LLM didn't bullshit you by asking another LLM?

That sounds... counterproductive

thoroughburro•7mo ago
You’re offering to doublecheck measurements made with a bad ruler by using that same ruler.
PeterStuer•7mo ago
Given that the existence of a reference is fairly trivial to check, I'd wager the authors would not care enough to pay for this. As for 'fit', this is very much in the eye of the beholder, and a paper can be cited for its most trivial part. Overcitation is usually not seen as a problem; omitting citations the reviewer considers 'essential', often from their own lab or circles, is seen as non-negotiable.

So the better 'idea' would be to produce a CYA citation assistant that for a given paper adds all the remotely plausible references for all the known potential reviewers of a journal or conference. I honestly think this is not a hard problem, but doubt even that can be commercialized beyond Google Ads monetization.

amelius•7mo ago
So was the entire text machine-generated?

Or did they take a human-written text and ask a machine to generate references/citations for it?

passwordoops•7mo ago
Why would anyone write a book then ask for citations?
amelius•7mo ago
Because collecting/formatting citations is not the most fun part of the writing process (?)

And maybe the authors were over-confident in the capabilities of current AI.

ktallett•7mo ago
Many write first and then find citations to fit what they said, rather than writing based on what citable sources suggest.
zero_k•7mo ago
Springer? You mean the publisher we are currently fighting so they won't mess up our peer-reviewed research paper that we wrote, and paid for the privilege of having them mess up (ehm, sorry, "publish")? Colour me surprised.
dandanua•7mo ago
We are approaching publishers' heaven, where AI reviewers review AI written books and articles (with AI editors fixing their style), allowing publishers to keep collecting billions from essentially mandatory subscriptions from institutions.
flohofwoe•7mo ago
It's fine, because human readers will also be replaced with AIs which produce a quick summary ;)
rbanffy•7mo ago
Or answer specific questions when needed.
dwayne_dibley•7mo ago
'Based on a tip from a reader, we checked 18 of the 46 citations in the book.' Why not just check them all?
maweki•7mo ago
They didn't just click a link. They contacted the supposed authors for comment. That would be a reason for not checking all of them.
b00ty4breakfast•7mo ago
This seems like the very thing that AI advocates would want to avoid. It certainly doesn't fill me, as an outsider to the whole thing, with much confidence for the future of AI-generated content but maybe I'm not the target sucker....err, I mean target demographic
Isamu•7mo ago
One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

First check if the citation references a real thing. Then actually read and summarize the referenced text and give a confidence level that it says what was claimed.

But no, we have AIs that are compounding the problem. That says something about misaligned incentives.
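Those two checks separate cleanly: existence is mechanical, while "does the source actually say that" is the open problem. A sketch with hypothetical names of my own (the in-memory registry stands in for a real DOI lookup):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CitationCheck:
    doi: str
    claim: str                        # what the citing text says the source shows
    exists: Optional[bool] = None     # step 1: does the reference exist?
    supported: Optional[bool] = None  # step 2: does it support the claim?

def verify_citation(check: CitationCheck, registry: set[str]) -> CitationCheck:
    """Step 1 is a mechanical lookup (here against a toy in-memory
    registry standing in for doi.org/Crossref). Step 2 -- whether the
    source really supports the claim -- is deliberately left unset:
    it requires reading and understanding the source, which is exactly
    the part current LLMs can't be trusted to do unsupervised."""
    check.exists = check.doi in registry
    return check
```

The point of the split is that step 1 can be fully automated today, while a `supported` verdict should at most be a confidence score routed to a human reviewer.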

pyrale•7mo ago
> One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

Also one of the things AI is likely the least suited for.

The best I could imagine an AI doing is offering sources for you to check for a given citation.

Isamu•7mo ago
>Also one of the things AI is likely the least suited for.

I agree, if we are using the current idea of AI as language models.

But that’s very limiting. I’m old enough to remember when AI meant everything a human could do. Not just some subset that is being deceptively marketed as potentially the whole thing.

pyrale•7mo ago
The thing is, we have plenty of examples where systems that are not labeled as ai vastly outperform humans at searching knowledge bases.

A google-like search tool would probably be all you need for citations if 1) Google hadn’t killed accuracy in favor of ad placement and 2) the rest of the world hadn’t poisoned the dataset in hopes of getting picked up by Google as a first page result.

Now the hard problem isn’t to search so much as to curate an un-polluted dataset.

misja111•7mo ago
"He (the author) did not answer our questions asking if he used an LLM to generate text for the book. However, he told us, “reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’ This challenge is only expected to grow, as LLMs … continue to advance in fluency and sophistication.”

Lol, that answer sounds suspiciously like it was LLM-generated as well...

DebtDeflation•7mo ago
It's true that "AI detection algorithms" are not particularly reliable.

It's also true that if you have fake CITATIONS in your work, such algorithms aren't necessary to know the work is trash: either it was written by AI or you knowingly faked your research, and it doesn't really matter which.

MengerSponge•7mo ago
My "Plagiarism Machine #1 Fan" shirt has people asking a lot of questions already answered by my shirt.
PeterStuer•7mo ago
Would it be possible to 'squat' the non-existent references and turbo-boost oneself into 'most cited author' territory? :)
rbanffy•7mo ago
Use AI to do it, then write a paper about what you did.
MarlonPro•7mo ago
Bad news for old-school people who still love books as a learning resource.
alok-g•7mo ago
The next development would be people building and using citation checkers. That would fix just that one problem; the deeper underlying quality problem, statements in the text often remaining unverified or incorrect, would remain unfixed.

If authors had to manually and genuinely compile their citations, chances would be higher that they are familiar with the cited work and that the statement for which the work is cited is actually corroborated by the citation.