What is an elliptic curve? (2019)

https://www.johndcook.com/blog/2019/02/21/what-is-an-elliptic-curve/
68•tzury•3h ago•4 comments

RCE via ND6 Router Advertisements in FreeBSD

https://www.freebsd.org/security/advisories/FreeBSD-SA-25:12.rtsold.asc
28•weeha•2h ago•17 comments

GitHub Actions for Self-Hosted Runners Price Increase Postponed

https://pricetimeline.com/news/189
23•taubek•1h ago•13 comments

Egyptian Hieroglyphs: Lesson 1

https://www.egyptianhieroglyphs.net/egyptian-hieroglyphs/lesson-1/
52•jameslk•4h ago•10 comments

Gemini 3 Flash: Frontier intelligence built for speed

https://blog.google/products/gemini/gemini-3-flash/
967•meetpateltech•17h ago•520 comments

Coursera to combine with Udemy

https://investor.coursera.com/news/news-details/2025/Coursera-to-Combine-with-Udemy-to-Empower-th...
515•throwaway019254•21h ago•309 comments

I got hacked: My Hetzner server started mining Monero

https://blog.jakesaunders.dev/my-server-started-mining-monero-this-morning/
390•jakelsaunders94•12h ago•262 comments

Working quickly is more important than it seems (2015)

https://jsomers.net/blog/speed-matters
162•bschne•3d ago•91 comments

Gut bacteria from amphibians and reptiles achieve tumor elimination in mice

https://www.jaist.ac.jp/english/whatsnew/press/2025/12/17-1.html
399•Xunxi•11h ago•92 comments

Don MacKinnon: Why Simplicity Beats Cleverness in Software Design [audio]

https://maintainable.fm/episodes/don-mackinnon-why-simplicity-beats-cleverness-in-software-design
42•mooreds•2d ago•9 comments

Ask HN: Those making $500/month on side projects in 2025 – Show and tell

234•cvbox•8h ago•191 comments

Judge hints Vizio TV buyers may have rights to source code licensed under GPL

https://www.theregister.com/2025/12/05/vizio_gpl_source_code_ruling/
90•pabs3•5h ago•5 comments

AWS CEO says replacing junior devs with AI is 'one of the dumbest ideas'

https://www.finalroundai.com/blog/aws-ceo-ai-cannot-replace-junior-developers
920•birdculture•17h ago•475 comments

Building a High-Performance OpenAPI Parser in Go

https://www.speakeasy.com/blog/building-speakeasy-openapi-go-library
8•subomi•3d ago•1 comment

Developers can now submit apps to ChatGPT

https://openai.com/index/developers-can-now-submit-apps-to-chatgpt/
141•tananaev•11h ago•83 comments

Show HN: I built a fast RSS reader in Zig

https://github.com/superstarryeyes/hys
64•superstarryeyes•1d ago•16 comments

'Ghost jobs' are on the rise – and so are calls to ban them

https://www.bbc.com/news/articles/clyzvpp8g3vo
111•1659447091•5h ago•106 comments

Ask HN: Does anyone understand how Hacker News works?

95•jannesblobel•10h ago•121 comments

OBS Studio Gets a New Renderer

https://obsproject.com/blog/obs-studio-gets-a-new-renderer
248•aizk•13h ago•53 comments

A Safer Container Ecosystem with Docker: Free Docker Hardened Images

https://www.docker.com/blog/docker-hardened-images-for-every-developer/
319•anttiharju•17h ago•73 comments

Jonathan Blow has spent the past decade designing 1,400 puzzles for you

https://arstechnica.com/gaming/2025/12/jonathan-blow-has-spent-the-past-decade-designing-1400-puz...
10•furcyd•6d ago•1 comment

Tell HN: HN was down

561•uyzstvqs•17h ago•305 comments

The Number That Turned Sideways

https://zuriby.github.io/math.github.io/the-number-that-turned-sideways.html
48•tzury•4d ago•29 comments

Cloudflare Radar 2025 Year in Review

https://radar.cloudflare.com/year-in-review/2025
89•ksec•12h ago•36 comments

TikTok unlawfully tracks shopping habits and use of dating apps?

https://noyb.eu/en/tiktok-unlawfully-tracks-your-shopping-habits-and-your-use-dating-apps
190•doener•9h ago•104 comments

Zmij: Faster floating point double-to-string conversion

https://vitaut.net/posts/2025/faster-dtoa/
131•fanf2•3d ago•18 comments

More than half of researchers now use AI for peer review, often against guidance

https://www.nature.com/articles/d41586-025-04066-5
46•neilv•4h ago•27 comments

Oasis: Pooling PCIe Devices over CXL to Boost Utilization

https://dl.acm.org/doi/10.1145/3731569.3764812
11•blakepelton•5d ago•2 comments

How SQLite is tested

https://sqlite.org/testing.html
288•whatisabcdefgh•15h ago•78 comments

Inside PostHog: SSRF, ClickHouse SQL Escape and Default Postgres Creds to RCE

https://mdisec.com/inside-posthog-how-ssrf-a-clickhouse-sql-escaping-0day-and-default-postgresql-...
94•arwt•13h ago•27 comments

More than half of researchers now use AI for peer review, often against guidance

https://www.nature.com/articles/d41586-025-04066-5
46•neilv•4h ago

Comments

D-Machine•4h ago
Duplicate: https://news.ycombinator.com/item?id=46281961
croes•3h ago
6 points and no comments vs 18 points and 2 comments.

Faster isn't the metric here.

D-Machine•3h ago
Am I missing something here? I am new to posting at HN, despite being a long-time reader.

I get that HN has a policy to allow duplicates so that duplicates that were missed for arbitrary timing reasons can still gain traction at later times. I've seen plenty of "[Duplicate]" tagged posts, and have just seen this as a sort of useful thing for readers (duplicates may have interesting info, or seeing that the dupe did or did not gain traction also gives me info). But maybe I am missing something here, particularly etiquette-wise?

kachapopopow•3h ago
A better title is most often the reason for it. Looking at it, the em dash probably caused people to dismiss it as an AI bot.
D-Machine•3h ago
If that's the simplistic heuristic people here are using...
kachapopopow•3h ago
If you turn on "showdead" you will see a lot of spam / AI comments.
D-Machine•3h ago
Yeah, I have that on, so I see those all the time. I was more wondering why I got a strange comment about tagging a duplicate, and whether I was breaching some kind of etiquette.
kachapopopow•3h ago
People with 100k+ karma often breach the etiquette they preach, so I wouldn't worry too much about it. Worst case, you get downvoted to -5 and it'll become dead.
D-Machine•3h ago
Ok, I basically figured that, but I very much appreciate the confirmation. Thanks!
layer8•2h ago
It’s certainly okay to link to a previous discussion, but “duplicate” implies that you think the present submission shouldn’t exist, and the previous submission doesn’t actually provide any discussion.

The fact that a previous submission didn’t gain traction isn’t usually interesting, because it can be pretty random whether something gains traction or not, depending on time of day and audience that happens to be online.

D-Machine•2h ago
Okay, I don't in general see "duplicate" as implying this, but I take your point, and was wondering if that might be the etiquette here.

I also think, on reflection, that you are right in this particular case (given there are no comments on the previous duplicate) so, thank you also for clarifying.

I suppose in the future a "[Previous discussion]" tag would be more appropriate, provided comments were made; otherwise, just say nothing and leave it to HN.

N_Lens•4h ago
News: Half of researchers lied on this survey
vinni2•2h ago
Which half?
bpodgursky•3h ago
Journals need to find a way to give guidance on what is and isn't appropriate, and to let reviewers explain how they used AI tools... because, like, you aren't going to nag people out of using AI to do UNPAID work 90% faster while producing results in the 90th-plus percentile of review quality (let's be real, there are a lot of bad flesh-and-blood reviewers).
kachapopopow•3h ago
I think it's interesting that AI is probably unintuitively good at spotting fraud in papers, due to its ability to hold more context than the majority of humans. I wish someone explored this to see if it can spot academic fraud that isn't already in its training data.
BDPW•1h ago
LLMs still routinely make stuff up about things like this, so no, there's no way this is a reliable method.
kachapopopow•1h ago
It doesn't have to be reliable! It just has to flag things: "hey, these graphs look like they were generated using (formula)" or "these graphs do not seem to represent realistic values / real-world entropy". It just has to be a tool that stops very advanced fraud from slipping through when it has already passed human peer review.

The only reason this is helpful is that humans have natural biases, and AI's biases are roughly their inverse, which lets it find patterns that might just be the same graph scaled up 5 to 10 times.
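As one illustration of the kind of cheap "flag, don't judge" screen described above (my own sketch, not something proposed in the thread), a Benford's-law first-digit check is a classic screen for fabricated numbers: in many kinds of genuine real-world data, the first significant digit d occurs with probability log10(1 + 1/d). The Python below is a minimal, hypothetical version; the function names and the 0.05 threshold are assumptions, and a flag is only a prompt for human follow-up.

    import math
    from collections import Counter

    def first_digit(x: float) -> int:
        """First significant digit of a nonzero number."""
        x = abs(x)
        while x >= 10:
            x /= 10
        while x < 1:
            x *= 10
        return int(x)

    def benford_screen(values, threshold=0.05):
        """Mean absolute deviation of first-digit frequencies from
        Benford's law, P(d) = log10(1 + 1/d). Returns (score, flagged)."""
        nonzero = [v for v in values if v != 0]
        if not nonzero:
            return 0.0, False
        counts = Counter(first_digit(v) for v in nonzero)
        n = len(nonzero)
        score = sum(abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
                    for d in range(1, 10)) / 9
        return score, score > threshold  # a flag for a human, not a verdict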

ratg13•1h ago
Nobody should be using AI as the final arbiter of anything.

It is a tool, and there always needs to be a user that can validate the output.

D-Machine•3h ago
Guidance needs to be more specific. Failing to use AI for search often means you are wasting a huge amount of time; ChatGPT 5.2 Extended Thinking with search enabled speeds up research obscenely, and I'd be more concerned if reviewers were NOT making use of such tools in reviews.

Seeing the high percentage of AI use for composing reviews is concerning. But peer review is also an unpaid racket which seems basically random anyway (https://academia.stackexchange.com/q/115231), and it probably needs to die given alternatives like arXiv, OpenPeerReview, etc. I'm not sure how much I care about AI slop contaminating an area that might already be mostly human slop in the first place.

jltsiren•3h ago
That's the wrong way of using AI in peer review. A key part of reviewing a paper is reading it without preconceptions. After you have done the initial pass, AI can be useful for a second opinion, or for finding something you may have missed.

But of course, you are often not allowed to do that. Review copies are confidential documents, and you are not allowed to upload them to random third-party services.

Peer review has random elements, but that's true for all other situations (such as job interviews) where the final decision is made using subjective judgment. There is nothing wrong with that.

D-Machine•2h ago
> A key part of reviewing a paper is reading it without preconceptions

I get where you are coming from, but, in my opinion, no, this is not part of peer review (where expertise implies preconceptions), nor really of anything humans do. If you ignore your preconceptions and/or priors (which are formed from your accumulated knowledge and experience), you aren't thinking.

A good example from peer review (which I have done): I see a paper where I have some expertise in the technical/statistical methods used, but not in the very particular subject domain. I can use AI search to help me find papers in that subject domain faster than I could on my own, and then more quickly see whether my usual preconceptions about the statistical methods are relevant to the paper I have to review. I still have to check things, but previously this took a lot more time and clever crafting of search queries.

Failing to use AI for search in this way harms peer review, because in practice you do less searching and checking than AI does (since you simply don't have the time, peer review being essentially free slave labor).

jltsiren•1h ago
By "without preconceptions", I mean that your initial review should not be influenced by anyone else's opinions. In CS, conference management software often makes this explicit by requiring you to upload your review before you can see other reviews. (You can of course revise your review afterwards.)

You are also supposed to review the paper and not just check it for correctness. If the presentation is unclear, or if earlier sections mislead the reader before later sections clarify the situation, you are supposed to point that out. But if you have seen an AI summary of the paper before reading it, you can no longer do that part. (And if a summary helps to interpret the paper correctly, that summary should be a part of the paper.)

If you don't have sufficient expertise to review every aspect of the paper, you can always point that out in the review. Reading papers in unfamiliar fields is risky, because it's easy to misinterpret them. Each field has its own way of thinking that can only be learned by exposure. If you are not familiar with the way of thinking, you can read the words but fail to understand the message. If you work in a multidisciplinary field (such as bioinformatics), you often get daily reminders of that.

hurturue•3h ago
Researchers use it to write the papers themselves: https://www.science.org/content/article/far-more-authors-use...
Animats•1h ago
Then on top of that there's the slop that comes from the university's PR department, where they turn "New possibly-interesting lab result in surface chemistry" into "Trillion dollar battery technology launched".

(Now that I think about it, I haven't seen much battery hype lately. The battery hype people may have pivoted to AI. Lots of stuff is going on in batteries, but mostly by billion-dollar companies in China quietly building plants and mostly shutting up about what's going on inside.)

baalimago•3h ago
They should do a study on this.
zeofig•1h ago
This is because peer review has become a bullshit mill and AI is good at churning through/out bullshit.
TomasBM•44m ago
The reasons listed in TFA - "confidentiality, sensitive data and compromising authors' intellectual property" - make sense as grounds to discourage reviewers from using cloud-based LLMs.

There are also reasons for discouraging the use of LLMs in peer review at all: it defeats the purpose of the "peer" in peer review; hallucinations; criticism not relevant to the community; and so on.

However, I think it's high time to reconsider what scientific review is supposed to be. Is it really important to have so-called peers as gatekeepers? Are there automated checks we can introduce to verify claims or ensure quality (like CI/CD for scientific articles), and leave content interpretation to the humans? A sketch of one such check follows below.

Let's make the benefits and costs explicit: what would we be gaining or losing if we just switched to LLM-based review, and left the interpretation of content to the community? The journal and conference organizers certainly have the data to do that study; and if not, tool providers like EasyChair do.
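As one concrete instance of the "CI/CD for scientific articles" idea above, the GRIM test (Brown & Heathers, 2016) checks whether a reported mean of integer-valued data (e.g. Likert responses) is arithmetically possible given the sample size. The Python below is a minimal, hypothetical sketch; the function name and interface are my assumptions, not an existing tool's API.

    def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
        """GRIM test: a mean of n integer-valued responses must equal some
        integer total divided by n, after rounding to the reported precision."""
        reported = round(reported_mean, decimals)
        base = round(reported_mean * n)
        # Also check neighbouring totals to be safe about rounding behaviour.
        return any(round(total / n, decimals) == reported
                   for total in (base - 1, base, base + 1))

    # A mean of 3.50 is achievable with 18 integer responses (63/18 = 3.50),
    # but no integer total divided by 18 rounds to 3.41.
    assert grim_consistent(3.50, 18)
    assert not grim_consistent(3.41, 18)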