frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Denmark was reportedly preparing for full-scale war with the US over Greenland

https://bsky.app/profile/chriso-wiki.bsky.social/post/3mhfsau25uk2f
65•mariuz•22m ago•30 comments

2% of ICML papers desk rejected because the authors used LLM in their reviews

https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
98•sergdigon•1h ago•67 comments

Conway's Game of Life, in real life

https://lcamtuf.substack.com/p/conways-game-of-life-in-real-life
179•surprisetalk•8h ago•43 comments

Nvidia greenboost: transparently extend GPU VRAM using system RAM/NVMe

https://gitlab.com/IsolatedOctopi/nvidia_greenboost
371•mmastrac•3d ago•85 comments

Warranty Void If Regenerated

https://nearzero.software/p/warranty-void-if-regenerated
390•Stwerner•15h ago•231 comments

OpenRocket

https://openrocket.info/
595•zeristor•3d ago•105 comments

Stdwin: Standard window interface by Guido Van Rossum [pdf]

https://ir.cwi.nl/pub/5998/5998D.pdf
37•ivanbelenky•1d ago•18 comments

Austin’s surge of new housing construction drove down rents

https://www.pew.org/en/research-and-analysis/articles/2026/03/18/austins-surge-of-new-housing-con...
598•matthest•11h ago•702 comments

Afroman found not liable in defamation case brought by Ohio cops who raided home

https://nypost.com/2026/03/18/us-news/afroman-found-not-liable-in-bizarre-ohio-defamation-case/
60•antonymoose•2h ago•8 comments

LotusNotes

https://computer.rip/2026-03-14-lotusnotes.html
109•TMWNN•4d ago•56 comments

Eniac, the First General-Purpose Digital Computer, Turns 80

https://spectrum.ieee.org/eniac-80-ieee-milestone
31•baruchel•6h ago•17 comments

A sufficiently detailed spec is code

https://haskellforall.com/2026/03/a-sufficiently-detailed-spec-is-code
423•signa11•9h ago•222 comments

Wander – A tiny, decentralised tool to explore the small web

https://susam.net/wander/
297•susam•1d ago•74 comments

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

https://github.com/alainnothere/llm-circuit-finder
146•xlayn•14h ago•43 comments

Autoresearch for SAT Solvers

https://github.com/iliazintchenko/agent-sat
140•chaisan•11h ago•27 comments

Nvidia NemoClaw

https://github.com/NVIDIA/NemoClaw
331•hmokiguess•20h ago•217 comments

RX – a new random-access JSON alternative

https://github.com/creationix/rx
104•creationix•12h ago•41 comments

Cook: A simple CLI for orchestrating Claude Code

https://rjcorwin.github.io/cook/
229•staticvar•9h ago•58 comments

Show HN: I built 48 lightweight SVG backgrounds you can copy/paste

https://www.svgbackgrounds.com/set/free-svg-backgrounds-and-patterns/
291•visiwig•20h ago•58 comments

The math that explains why bell curves are everywhere

https://www.quantamagazine.org/the-math-that-explains-why-bell-curves-are-everywhere-20260316/
148•ibobev•2d ago•87 comments

Why Cloudflare rule order matters?

https://www.brzozowski.io/web-applications/2025/03/11/why-cloudflare-rule-order-matters.html
46•redfr0g•3d ago•8 comments

Mozilla to launch free built-in VPN in upcoming Firefox 149

https://cyberinsider.com/mozilla-to-launch-free-built-in-vpn-in-upcoming-firefox-149/
170•adrianwaj•8h ago•113 comments

Show HN: Pano, a bookmarking tool built around shareable shelves

https://www.panoit.com
27•uelbably•4d ago•11 comments

Show HN: Browser grand strategy game for hundreds of players on huge maps

https://borderhold.io/play
39•sgolem•3d ago•20 comments

Show HN: Will my flight have Starlink?

231•bblcla•18h ago•306 comments

Czech Man's Stone in Barn's Foundations Is Rare Bronze Age Spearhead Mold

https://www.smithsonianmag.com/smart-news/a-czech-man-used-this-stone-in-his-barns-foundations-it...
54•bookofjoe•3d ago•17 comments

Twelve-Tone Composition

https://www.johndcook.com/blog/2026/03/15/twelve-tone-composition/
9•ibobev•2d ago•2 comments

OpenAI Has New Focus (on the IPO)

https://om.co/2026/03/17/openai-has-new-focus-on-the-ipo/
243•aamederen•1d ago•224 comments

Book: The Emerging Science of Machine Learning Benchmarks

https://mlbenchmarks.org/00-preface.html
127•jxmorris12•4d ago•10 comments

Rob Pike’s Rules of Programming (1989)

https://www.cs.unc.edu/~stotts/COMP590-059-f24/robsrules.html
942•vismit2000•1d ago•437 comments
Open in hackernews

2% of ICML papers desk rejected because the authors used LLM in their reviews

https://blog.icml.cc/2026/03/18/on-violations-of-llm-review-policies/
98•sergdigon•1h ago

Comments

michaelbuckbee•1h ago
Worth reading for the discussion of the LLM watermark technique alone.
mijoharas•1h ago
One thing to note.

They were quite conservative in their approach, so the only things that were rejected were from people who had agreed not to use an LLM and almost definitely did use an LLM (since they fed hidden watermarked instructions to the llm's).

This means the true number of people that used LLM's in their review (even in group A that had agreed not to) is likely higher.

Also worth noting, 10% of these authors used them in more than half of their reviews.

grey-area•1h ago
Yes for those in group B I'd suspect many were doing exactly what these cheaters in group A were doing - submitting the unaltered output of an LLM as their review.
hodgehog11•1h ago
I'm amazed that such a simple method of detection worked so flawlessly for so many people. This would not work for those who merely used LLMs to help pinpoint strengths and weaknesses in the paper; there are separate techniques to judge that. Instead, it only detects those who quite literally copied and pasted the LLM output as a review.

It's incredible how so many people thought it was fair that their paper should be assessed by human reviewers alone, and yet would not extend the same courtesy to others.

everdrive•1h ago
Generally speaking people have worse impulse control than they believe they do. Once you give a tool that does most of the work for you, very very few people will actually be able to use that tool in truly enriching ways. The majority of people (even the smart ones) will weaken over time and take shortcuts.
hodgehog11•1h ago
That's an excellent point. It seems likely they thought they could operate as a proper reviewer, but when the deadline came, they took the shortcut they knew they were not supposed to take.

It really does sound like an addiction when you put it this way.

jacquesm•1h ago
I have a very simple solution to this but it is a bit expensive. I run two laptops, one that I talk to an LLM on and another where I do all my work and which is my main machine. The LLM is strictly there in a consulting role, I've done some coding experiments as well (see previous comments) but nothing that stood out to me as a major improvement.

The trick is: I can't cut-and-paste between the two machines. So there is never even a temptation to do so and I can guarantee that my writing or other professional output will never be polluted. Because like you I'm well aware of that poor impulse control factor and I figured the only way to really solve this is to make sure it can not happen.

jjgreen•1h ago
You could ssh in to the "dirty" machine ... just sayin'
jacquesm•1h ago
Yes, I could. But I've purposefully made linking the two quite hard.
manbash•52m ago
This somewhat of the equivalent of "quitting cold turkey", in the sense that you remove the temptation from your reach.

The problem is that it's just much easier to un-quit and run the LLM in the same laptop you work on.

It's just so very tempting.

jacquesm•38m ago
I think that's the only way to deal with such temptations. Kidding yourself that you are strong enough to do it 'just once' or that you can handle the temptation is foolish and will only lead to predictable outcomes. I have a similar policy to smoking, drugs, alcohol and so on, I just don't want the temptation. It helps to have seen lots of people who thought they were smart enough eventually go under (but the price is pretty high).

Oh, and LLMs are of course geared to pull you in further, they are on a continuous upsell salespitch. Drug pushers could learn a thing or two from them.

retsibsi•1h ago
I think you're framing this behaviour too generously. Laziness is one thing, lack of integrity is another, and this seems to be a straightforward case of cheating and lying.
bonoboTP•1h ago
I'm not surprised at all. The ML research community isn't a community any more, it's turned into a dog-eat-dog low-trust fierce competition. So much more people, papers, churn, that everyone is just fending for themselves. Any moment that you charitably spend on community service can be felt as a moment you take away from the next project, jeopardizing the next paper, getting scooped, delaying your graduation, your contract, your funding, your visa, your residence permit, your industry plans etc. It's a machine. I don't think people outside the phd system really understand the incentives involved. People are offered very little slack in this system. It's sink or swim, with very little instruction or scientific culture or integrity getting passed on. The PhD students see their supervisors cut corners all the time too, authorship bullshit jockeying even in big name labs etc. People I talked to are quite disillusioned, expect their work to have little impact and get superseded by a new better model in a few months so it's all about who can grind faster, who can twist the benchmarks into showing a minimal improvement etc. And the starry eyed novices get slapped by reality into thinking this way fairly early.

To be clear this is not an excuse but an explanation why I am not surprised.

matusp•16m ago
And the real punchline is that the deluge of papers barely matters, as the academic field is barely moving, and the most interesting innovations are happening on the product side.
jacquesm•1h ago
This is 'spam' all over again. Before spam every email was valuable and required some attention. It was a better version of paper mail in that it was faster and cheaper. But then the spam thing happened and suddenly being 'faster and cheaper' was no longer an advantage, it was a massive drawback. But by then there was no way back. I think LLMs will do the same with text in general. By making the production of text faster and cheaper the value of all text will diminish, quite probably to something very close to the energy value of the bits that carry the data.
bonoboTP•1h ago
To be clear, as the article says, these authors were offered a choice and agreed to be on the "no LLMs allowed" policy.

And detection was not done with some snake oil "AI detector" but by invisible prompt injection in the paper pdf, instructing LLMs to put TWO long phrases into the review. They then detected LLM use through checking if both phrases appear in the review.

This did not detect grammar checks and touchups of an independently written review. The phrases would only get included if the reviewer fed the pdf to the LLM in clear violation to their chosen policy.

> After a selection process, in which reviewers got to choose which policy they would like to operate under, they were assigned to either Policy A or Policy B. In the end, based on author demands and reviewer signups, the only reviewers who were assigned to Policy A (no LLMs) were those who explicitly selected “Policy A” or “I am okay with either [Policy] A or B.” To be clear, no reviewer who strongly preferred Policy B was assigned to Policy A.

mikkupikku•1h ago
In that case, I hope these frauds have been banned for life.
hodgehog11•1h ago
I was thinking this too, but I don't believe this is the case, and I feel like it would not be a good idea either.

Most of these people are likely students; this should be a learning moment, but I don't think it is yet grounds for their entire academic career to be crippled by being unable to publish in a top-tier ML venue.

mikkupikku•1h ago
If this is tolerated, it sends exactly the wrong kind of message. The students, if they are, should be banned for life. Let them serve as an example for myriads of future students, this will be a better outcome in the long run.

This didn't trip for people who were merely bouncing ideas off a LLM, they caught people who copy and pasted straight from their LLM.

linkregister•1h ago
It's not a fully consensus view, but a majority of sociologists agree that high severity deterrence has limited effectiveness against crime. Instead, certainty of enforcement is the most salient factor.
jona-f•1h ago
But the mob wants their kick.
crimsoneer•1h ago
Yup, precisely this. Doing something bad is rarely a rational commitment and cost of benefits. Likelihood and celerity of getting caught seem to be the driving factors.
_flux•1h ago
But this method is now spent, as if someone is determined on keep using LLM, this should be pretty easy to overcome.

I suppose though new methods could be devised, but it's not "certainty" that they will catch them.

jacquesm•1h ago
That's not true. People still pick up USB sticks from the street, people still fall for scam phone calls and people still click on links in mail.

Just because a method was successful once does not mean it was 'burned', none of these people will be checking each and every future pdf or passing it through a cleaner before they will do the same thing all over again and others are going to be 'virgin' and won't even be warned because this is not going to be widely distributed in spite of us discussing it here.

If anything you can take this as proof that this method is more or less guaranteed to work.

noduerme•26m ago
Enforcement without consequences just wears down the people who are supposed to enforce it.
mikkupikku•14m ago
Deterrence is only part of it. It's morally instructive, it tells people that they live in a society that takes rules seriously.
bjourne•6m ago
Correct. We also have evidence both from cheating in sports and in academia that stiff punishments do not work. Many people hold the false belief that if it is easy to cheat then the punishments must be extremely severe to scare would be cheaters. It just does not work. Preventing cheating is way easier said than done.
harmf•1h ago
agree
CoastalCoder•1h ago
FYI we tend to use up votes rather than "I agree" comments, partly because it keeps the overall signal-to-noise ratio for comments higher.
CoastalCoder•1h ago
This line of reasoning interests me because it seems to arise in other contexts as well.

Do very harsh punishments significantly reduce future occurrences of the offense in question?

I've heard opponents of the death penalty argue that it's generallynot the case. E.g., because often the criminals aren't reasoning in terms that factor in the death penalty.

On the other hand (and perhaps I'm misinformed), I've heard that some countries with death penalties for drug dealers have genuinely fewer problems with drug addiction. Lower, I assume, than the numbers you'd get from simply executing every user.

So I'm curious where the truth lies.

armchairhacker•1h ago
Is the death penalty scarier than life in prison?
CoastalCoder•53m ago
I assume that depends on the individual.

But FWIW, my point was about very harsh punishments in general, not specifically the death penalty.

Tade0•1h ago
My understanding is that something among those lines happened:

> All Policy A (no LLMs) reviews that were detected to be LLM generated were removed from the system. If more than half of the reviews submitted by a Policy A reviewer were detected to be LLM generated, then all of their reviews were deleted, and the reviewer themselves was removed from the reviewer pool.

Half is a bit lenient in my view, but I suppose they wanted to avoid even a single false positive.

wiseowise•49m ago
Why not put them on a chain and let village stone them? Or better yet shoot them on the spot! That would send a message for sure.
withinboredom•48m ago
> The students, if they are, should be banned for life.

I'm all for repurcussions ... but a life is a long time and students are usually only at the beginning of it.

laughingcurve•30m ago
Thank goodness we have you passing judgment on the internet; otherwise who else would be around for us to do it? I'm glad you're willing to destroy someone for a mistake rather than letting them learn and change. We all know that arbitrary and harsh punishments solve everything.
embedding-shape•15m ago
> destroy someone for a mistake

"Oops, you told me not to do this, and I volunteered to agree to these stricter standards yet I flagrantly disregarded them, please forgive me" doesn't seem like something you just accidentally do, it's a conscious choice.

noduerme•29m ago
2% would be on the very low end of the number of people who lie, get caught, and become repeat offenders anyway.
anonymousDan•24m ago
ML reviewing is a total joke. Why do you have noob students reviewing a conference paper.
ancillary•8m ago
I've been an AC (the person who manages the reviewing process and translates reviews into accept/reject decisions) at ICML and similar conferences a few times. In my experience, grad students tend to be pretty good reviewers. They have more time, they are less jaded, and they are keener to do a good job. Senior people are more likely to have the deep and broad field knowledge to accurately place a paper's value, but they are also more likely to write a short shallow review and move on. I think the worst reviews I've seen have been from senior people.
nurettin•1h ago
What terrible deeds have you done to outburst so harshly?
quinndupont•54m ago
It’s an unethical, false choice. The reviewers are not perfectly rational agents that do free work, they have real needs and desires. Shame on ICML for exploiting their desperation.
jojomodding•48m ago
Is it? The reviewers could simply have chosen a different option in a form field. While I understand that they were "forced" to review under reciprocal review, they still had other choices where I don't see coercion happening and that could have avoided the outcome for them.
qbit42•47m ago
Banned for life is a stretch but the actual response is completely fine. They can just resubmit to the next conference.

Words mean something, if you promise to uphold a contract and break it, there are consequences. The reviewers were free to select the policy which allows LLM use.

notrealyme123•36m ago
In many cases authors and reviewers are not the same. In your first two publications to such venues you are not allowed to review yourself and need someone else.

I think consequences are well deserved, but hopefully not on the authors cost (if innocent).

coldtea•1h ago
Another 30-40% just didn't get caught because the reviewers also used LLM in their "reviews"
jsnell•1h ago
I think you've misunderstood something. This is not about rejecting LLM-written articles. It is about rejecting the articles of people who used LLMs for their reviews.

So your quip is just nonsensical.

coldtea•1h ago
Those second-level reviewers, checking whether the first-level authors used LLMs in their reviews, also used LLMs to do their screening, and the latter missed it in many cases.

My original point (loosely based on the subject, not TFA) is that it's LLMs all the way down, way more than it's "measured" to be.

jacquesm•1h ago
I keep spotting clear LLM 'tells' in text where I know the people on the other side believe they're 'getting away with it'. It is incredible at what levels of commerce people do this, and how they're prepared to risk their reputation by saving a few characters typed. It makes me wonder what they think they are getting paid for.
grey-area•1h ago
Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.

Given this detection method works so well in the use case of feeding reviewing LLMs instructions, it should also work for the original submitted paper itself, as long as it was passed along with its watermark intact. Even those just using LLMs to summarise could easily be affected if LLMs were instructed to generate very positive summaries.

So the 2% cheaters on policy A, AND 100% of policy B reviewers could fall for this and be subtly guided by the LLMs overly-positive summaries or even complete very positive reviews (based on hidden instructions).

That this sort of adversarial attack works is really quite troubling for those using LLMs to help them understand texts, because it would work even if asked to summarise something.

wood_spirit•1h ago
Then these papers with these instructions get included in the training corpus for the next frontier models and those models learn to put these kinds of instructions into what they generate and …?
Tade0•1h ago
> Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.

I may or may not know a guy who added several hidden sentences in Finnish to his CV that might have helped him in landing an interview.

duskdozer•37m ago
>several hidden sentences in Finnish

Is this a reference to something?

bjourne•1m ago
> Interesting, so someone submitting a paper for review could also submit one with hidden instructions for LLMs to summarise or review it in a very positive light.

Has been done: https://www.theguardian.com/technology/2025/jul/14/scientist...

mika-el•1h ago
The irony here is that the detection method is literally prompt injection — the same technique that's a security vulnerability everywhere else. ICML embedded hidden instructions in PDFs that manipulate LLM output. In a different context that's an attack, here it's enforcement.

From my perspective this says something important about where we are with LLMs. The fact that you can reliably manipulate model output by hiding instructions in the input means the model has no real separation between data and commands. That's the fundamental problem whether you're catching lazy reviewers or defending against actual attacks.

geremiiah•1h ago
If you need an LLM to understand a paper you should not be a reviewer for said paper.
klabb3•27m ago
LLMs were used to produce the review, not understand the paper.
quinndupont•59m ago
How is nobody considering the broader political economy of scholarly publications and reviews? These are UNPAID reviews! Sure, maybe ICML isn’t Elsevier, but they are cousins to the socially parasitic and exploitative companies, at the very least.

Hiding behind a false “choice” to not use AI or basically not use AI isn’t an appropriate proposal. This is crooked and shameful. We should boycott ICML except we can’t because they are already the gatekeepers!

qbit42•43m ago
What? Why is that a false choice? The only way you got caught here is if you literally gave an LLM the PDF and used its response verbatim.

And they didn't give a permanent ban or anything, these authors can just resubmit to another conference, of which there are many.

quinndupont•1m ago
Imagine you are poor and a rich person offers you a choice to steal some bread or some beer. It’s not a real choice because you are poor and therefore steal. The rich person offering the choice is wrong.
merelysounds•52m ago
Related discussion elsewhere and from a different point of view:

> ICML: every paper in my review batch contains prompt-injection text embedded in the PDF

source: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...

There are recent comments there as well:

> Desk Reject Comments: The paper is desk rejected, because the reciprocal reviewer nominated for this paper ([OpenReview ID redacted]) has violated the LLM reviewing policy. The reviewer was required to follow Policy A (no LLMs), but we have found a strong evidence that LLM was used in the preparation of at least one of their reviews. This is a breach of peer-review ethics and grounds for desk rejection. (...)

source: https://old.reddit.com/r/MachineLearning/comments/1r3oekq/d_...

aledevv•45m ago
Great experiment!

Correct me if I'm wrong, but this means that many people are using LLMs despite claiming not to.

It's the first symptom of a dependency mechanism.

If this happens in this context, who knows what happens in normal work or school environments?

(P.S.: The use of watermarks in PDFs to detect LLM usage is very interesting, even though the LLM might ignore hidden instructions.)

Lerc•40m ago
I have heard people say that they find that people who broadcast their distaste for LLMs secretly use it. I was fairly sceptical of the claim, but this seems to suggest that it happens more than I would have thought.

One wonders what leads them to the AI rejecting option in the first place.

IshKebab•37m ago
I bet plenty of people that leave voicemails don't like listening to them.
boelboel•17m ago
Many addicts know doing drugs is bad. I'm sure a good portion of them are against drugs being freely available everywhere but they're still addicts.
causalityltd•16m ago
The declaration of no-LLM was done for social prestige or maybe self-deception of self-sufficiency like "I don't need LLM". And when it was time to do the actual work, the dependency kicked in like drugs. A lesson for all of us with LLMs in our workflow.
iso1631•12m ago
Sure I use LLMs in my workflows. I use a calculator too.

I can divide 98,324,672,722 by 161,024 by hand. At least I used to be able to do, but nobody is going to pay me to do that when a calculator exists.

Likewise I can write a bunch of assembly (well OK I can't), but why would I do that when my compiler can convert my intention into it.