With AI, dishing out massive amounts of research in these simulation-heavy fields is trivial, and it doesn't even require empire building anymore, where you have to work your way up through funding for your personal army. Just give an LLM the right context and examples, and you can prompt your way through a complete article, experimental validation included. That's the real skill/brilliance now. If you have the decency to read and refine the final outcome, at least you can claim you retained some ethical standards. Or maybe you have AI review it (spoiler alert: program committees do that already), so that it comes up with ideas, feedback, and suggestions for improvements. And then you implement those. Or actually, you have the AI implement those. And then you review it again. Or the AI does. Maybe you put that in an adversarial for loop and collect your paper just in time to submit before the deadline -- if you don't already have an agent setup doing that for you.
Measuring the actual impact of research outside of bibliometrics has always been next to impossible, especially for high-velocity domains like CS. We're at an age where, barring ethical standards, the only deterrent preventing a researcher from using an army of LLMs to publish in his name is the fear of getting completely busted by the community. The only currency here is your face and your credibility. Five years ago you still had to come up with an idea and implement and test it; then it just didn't work and kept not working despite endless redesigns, so eventually you cooked the numbers so you could submit a paper with a non-zero chance of getting published (and accumulate a non-zero chance of not perishing). Now you don't even need to cook the numbers, because the marginal cost of producing a paper with an LLM is so low that you can effortlessly iterate and expand. Negative results? Weak storyline? Uninteresting problem? By sheer chance, some of your AI-generated stuff will get through. You're even in for the best paper award if the actual reviewers use the same LLM you used in your adversarial review loop!
ChrisArchitect•2mo ago
Over fifty new hallucinations in ICLR 2026 submissions
https://news.ycombinator.com/item?id=46181466