frontpage.

I am not ambitious enough

https://vester.si/blog/not-ambitious-enough/
1•sputr•1m ago•0 comments

Tell HN: C and C++ are absolute beasts when it comes to performance efficient

1•delduca•4m ago•0 comments

AWS Lambda now supports GitHub Actions to simplify function deployment

https://aws.amazon.com/about-aws/whats-new/2025/08/aws-lambda-github-actions-function-deployment/
1•mariuz•8m ago•0 comments

Mind Blowing Websites Hiding in Plain Sight

https://www.offscopes.com/
17•OffScopes•11m ago•1 comments

Block-Based Configuration Language

https://n16f.net/bcl/specification.html
1•gearnode•14m ago•0 comments

Technical Interviews are realigning with reality through AI

https://cendyne.dev/posts/2025-08-07-technical-interviews-are-realigning-with-reality-through-ai.html
1•furkansahin•15m ago•0 comments

GPT5 Is Horrible

https://old.reddit.com/r/ChatGPT/comments/1mkd4l3/gpt5_is_horrible/
1•druskacik•15m ago•0 comments

Robots.txt for the AI Era but Enforceable

https://aiprivacylicense.com/
1•nabanita•21m ago•1 comments

Air Force buying two Tesla Cybertrucks so it can learn to destroy them

https://www.theregister.com/2025/08/08/usaf_cybertruck_missile_tests/
2•ndsipa_pomu•26m ago•1 comments

Agentic Workflow: What's inside RAGFlow v0.20.0

https://medium.com/@infiniflowai/agentic-workflow-whats-inside-ragflow-v0-20-0-789fdd4397f6
1•vissidarte_choi•27m ago•0 comments

defer-import-eval: proposal for introducing a way to defer evaluate of a module

https://github.com/tc39/proposal-defer-import-eval
1•tilt•29m ago•0 comments

OpenAI is taking GPT-4o away from me – despite promising they wouldn't

https://community.openai.com/t/openai-is-taking-gpt-4o-away-from-me-despite-promising-they-wouldnt/1337378
2•Mzxr•33m ago•0 comments

Show HN: Streamed JSON Lines (A big JSON can't be streamed)

https://medium.com/@marius_18835/streamed-json-lines-using-laravel-crud-wizard-free-cd650e272caa
1•marius-ciclistu•37m ago•0 comments

Bsky Tracker: v2.5.0 Is Live

https://bsky.app/profile/bluesky-tracker.bsky.social
1•pavlostze•37m ago•1 comments

Malicious Ruby Gems Used in Targeted Credential Theft Campaign

https://socket.dev/blog/60-malicious-ruby-gems-used-in-targeted-credential-theft-campaign
1•amalinovic•51m ago•0 comments

HBO Max is going to get more annoying about password sharing

https://www.theverge.com/news/754357/hbo-max-password-sharing-annoying-earnings
2•tosh•51m ago•0 comments

Ask HN: GPT-5 still needs a second nudge for calculation?

1•chandlertsien•54m ago•0 comments

EU Artificial Intelligence Act

https://artificialintelligenceact.eu/
2•jonbaer•55m ago•0 comments

White Mountain Direttissima

https://whitemountainski.co/pages/white-mountain-direttissima
1•oftenwrong•56m ago•0 comments

Linux Desktop Share Tops 6% in 15M-System Analysis

https://www.zdnet.com/article/think-linux-desktop-market-share-isnt-over-6-this-15-million-system-scan-says-otherwise/
3•naves•1h ago•1 comments

OpenAI CEO Sam Altman says GPT-5 scares him – 'what have we done?'

https://www.tomsguide.com/ai/openais-ceo-sam-altman-says-gpt-5-is-so-fast-it-actually-scares-him-maybe-its-great-maybe-its-bad-but-what-have-we-done
3•pera•1h ago•3 comments

Tokenization in Large Language Models

https://seantrott.substack.com/p/tokenization-in-large-language-models
2•tokfan•1h ago•0 comments

Comuniq – A lightweight space for publishing and discussing specific topics

1•01-_-•1h ago•0 comments

Nature study on economic damages from climate change revised

https://www.pik-potsdam.de/en/news/latest-news/nature-study-on-economic-damages-from-climate-change-revised
1•01-_-•1h ago•0 comments

ChatGPT5 can't answer "How many states have R in it's name?"

https://bsky.app/profile/radamssmash.bsky.social/post/3lvtzdl343c2r
5•mattigames•1h ago•1 comments

How to Run Your Own OpenAI GPT OSS Server for Fun and Profit

https://northcodie.blogspot.com/2025/08/how-to-run-your-own-openai-gpt-oss.html
4•nickly•1h ago•2 comments

Understanding Late Binding in Python Closures

https://pythonkoans.substack.com/p/the-forgetful-calligrapher
2•meander_water•1h ago•0 comments

Loyalty programmes are keeping America's airlines aloft

https://www.economist.com/business/2025/08/06/how-loyalty-programmes-are-keeping-americas-airlines-aloft
2•jmsflknr•1h ago•0 comments

US Adds Surprise Gold Bar Tariff in Blow to Switzerland

https://www.bloomberg.com/news/articles/2025-08-08/us-hits-gold-bars-with-tariffs-in-blow-to-switzerland-ft-report
7•petethomas•1h ago•0 comments

Can't disable copilot code reviews

https://github.com/orgs/community/discussions/169148
2•TonyTrapp•1h ago•0 comments

Benchmarking GPT-5 on 400 real-world code reviews

https://www.qodo.ai/blog/benchmarking-gpt-5-on-real-world-code-reviews-with-the-pr-benchmark/
53•marsh_mellow•2h ago

Comments

44za12•2h ago
Can you benchmark Kimi K2 and GLM 4.5 as well? Would be interesting to see where they land.
timbilt•2h ago
> Unlike many public benchmarks, the PR Benchmark is private, and its data is not publicly released. This ensures models haven’t seen it during training, making results fairer and more indicative of real-world generalization.

This is key.

Public benchmarks are essentially trust-based and the trust just isn't there.

laggyluke•2h ago
Unless you're running the LLM yourself (locally), private benchmarks are also trust-based, aren't they?
timbilt•2h ago
Yes, but in a case like this it's a neutral third-party running the benchmark. So there isn't a direct incentive for them to favor one lab over another.

With public benchmarks we're trusting the labs not to cheat. And it's easy to "cheat" accidentally - they actually need to make a serious effort to not contaminate the training data.

And there are massive incentives for the labs to cheat in order to get the hype going around their launch and justify their massive investments in training. It doesn't have to be the CEO directing it; it can even be one or a few researchers who are responsible for a specific area of model performance and are under tremendous pressure to deliver.

vohk•1h ago
The problem is that when you're using a model hosted by those labs (e.g. OpenAI only allowed access to o3 through their own direct API, not even Azure), there is still a significant risk of cheating.

There's a long history of that sort of behaviour. ISPs gaming bandwidth tests when they detect one is being run. Software recognizing being run in a VM or on a particular configuration. I don't think it's a stretch to assume some of the money at OpenAI and others has gone into spotting likely benchmark queries and throwing on a little more compute or tagging them for future training.

I would be outright shocked if most of these benchmarks are even attempting serious countermeasures.

nojs•2h ago
How does this ensure models haven’t seen it during training - is it a different benchmark per model release?
jacquesm•1h ago
Then you just need to use different data the next time you evaluate. That is much more indicative of real-world generalization: after all, you don't normally do multiple PRs on the same pieces of code. The current approach risks leaking the dataset selectively and/or fudging the results, because they can't be verified. Transparency is key when doing this kind of benchmark, so now we have to trust the entity doing the benchmarking rather than relying on independent verification of the results, and with the amount of money at stake here I don't think that's the way to go.
comex•2h ago
> Each model’s responses are ranked by a high-performing judge model — typically OpenAI’s o3 — which compares outputs for quality, relevance, and clarity. These rankings are then aggregated to produce a performance score.

So there's no ground truth; they're just benchmarking how impressive an LLM's code review sounds to a different LLM. Hard to tell what to make of that.
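
Roughly, the pipeline they describe would look something like the sketch below. The function names, ranking scheme, and aggregation are guesses for illustration, since none of that is public.

    from typing import Callable

    def judge_rank(judge_model: str, pr: dict, reviews: dict[str, str]) -> list[str]:
        """Ask the judge (e.g. o3) to order the candidate reviews best-to-worst (stub)."""
        raise NotImplementedError  # would prompt the judge model's API here

    def benchmark(models: dict[str, Callable], prs: list[dict], judge: str = "o3") -> dict[str, float]:
        points = {name: 0 for name in models}
        for pr in prs:
            reviews = {name: generate(pr) for name, generate in models.items()}
            for position, name in enumerate(judge_rank(judge, pr, reviews)):
                points[name] += len(models) - position  # best-ranked review earns the most points
        # collapse per-PR rankings into a single 0-100 leaderboard number per model
        return {name: 100 * p / (len(prs) * len(models)) for name, p in points.items()}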

ImageXav•2h ago
Yes, especially as models are known to have a preference towards outputs of models in the same family. I suspect this leaderboard would change dramatically with different models as the judge.
spiderfarmer•2h ago
They are different models already, but yes, I let ChatGPT judge Claude's work for the same reason.
jacquesm•1h ago
I don't care about either method. The ground truth should be what a human would do, not what a model does.
mirekrusin•45m ago
There may be different/better solutions for almost all of those kinds of tasks. I wouldn't be surprised if the optimal answer to some of them were to refuse or defer, ask questions, refactor first, and then solve it properly.
eviks•2h ago
Why is it hard to ignore an attempt to assess reality that is not grounded in reality?
raincole•2h ago
That's how 99% of 'LLM benchmark numbers' circulating on the internet work.
qsort•1h ago
No, they aren't. Most benchmarks use ground truth, not evaluation by another LLM. Using another LLM as verifier, aside from the obvious "quis custodiet ipsos custodes", opens an entire can of worms, such as the possibility of systematic biases in the evaluation. This is not in and of itself disqualifying, but it should be addressed, and the article doesn't say anything about it.
with•1h ago
It’s a widely accepted eval technique; it’s called "LLM as a judge".
magicalhippo•1h ago
Shouldn't one review the ratings of, say, a random 1% to ensure it's performing as expected?
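
Something as simple as the sketch below would be a start; the 1% rate and the record format are assumptions, not anything the benchmark describes.

    import random

    def sample_for_audit(judgments: list[dict], rate: float = 0.01, seed: int = 0) -> list[dict]:
        """Pull a reproducible random slice of judge verdicts for human review."""
        rng = random.Random(seed)
        k = max(1, int(len(judgments) * rate))
        return rng.sample(judgments, k)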
jacquesm•1h ago
Accepted does not mean correct. It's like using a rubber yardstick as the means to figure out who won the pumpkin growing competition.
ben_w•1h ago
I'd say it's worse than that: a rubber ruler still has a definite length when not under tension, etc.

This might be more like asking amateur painters to each paint a picture of a different one of the pumpkins, then having them judge each other's paintings without seeing the actual pumpkins the paintings were based on.

jacquesm•46m ago
OK, that is indeed a better analogy. For a further improvement, we should let the previous generation of paintings judge the new one.
sensanaty•1h ago
Accepted by whom, the people shoving AI down our throats?
kingstnap•40m ago
It's widely accepted because it's cheap, but LLMs aren't really good judges.

It's supposed to leverage a "generate vs. critique" gap in skill level as a form of self-improvement. It's easier to judge how good food is than to make it.

But here's the thing. When it comes to code review, you need to be effectively as skilled as the person who wrote it. There isn't really a gap.

And then the real clincher is this: LLMs naturally have a gap between their judgement and generation skills as it is. The reason is that they have superhuman pattern matching and memorization ability. They can use their memorized patterns as a massive crutch for their actual reasoning skills, but they can't do the same for judgement calls in code review.

shikon7•1h ago
Also, using an OpenAI model to judge the performance of an OpenAI model seems prone to all kinds of biases.
mirekrusin•48m ago
Exactly; they should at least use the best models from other labs as judges, ideally verified against humans, ground truth, or tests.
LauraMedia•46m ago
Am I missing something? If LLM-1 is supposed to judge LLM-2, doesn't LLM-1 have to be better than LLM-2? If LLM-1 is only 40% as good at coding as LLM-2, why would you trust the LLM with the lesser knowledge?
BlindEyeHalo•39m ago
At the heart of the P vs NP problem lies the observation that verifying a solution seems to be much easier than generating one. Whether that applies in this context is another question, but I think it is not unreasonable to assume that the judge doesn't need to be as powerful as the performer.

Or in other words, I don't need to be a chef myself to decide if a meal is good or not.

rowanG077•10m ago
That really doesn't hold for all problems. You can imagine any number of problems where a valid solution is easier, complexity-wise, to generate than it is to validate. A trivial example is prime factorization: it's easy to generate a number with whatever prime factors you like (just multiply them together), but hard to recover those factors from the product.
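
A toy sketch of that asymmetry, with arbitrary small primes; at cryptographic sizes the gap becomes astronomical.

    def naive_factor(n: int) -> tuple[int, int]:
        """Recover a factor pair of n by trial division; work grows with sqrt(n)."""
        if n % 2 == 0:
            return 2, n // 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        return n, 1  # n is prime

    p, q = 1_000_003, 1_000_033  # two known primes
    n = p * q                    # "generation": one multiplication, essentially free
    print(naive_factor(n))       # "validation": ~500k trial divisions to recover (1000003, 1000033)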
shinycode•2h ago
I’m curious how people use PR review platforms with LLMs, because what I find is that I need to do the review and then review the LLM's review, which is more work in the end. If I don't review anymore (or if no one does), knowledge is kind of lost. It surely depends on team size, but do people use these only to get better hints, or to accelerate reviews with no/low oversight?
stpedgwdgfhgdd•1h ago
I give the MR id to CC and let it review. I have the glab CLI installed, so it knows how to pull the MR and even add a comment, though unfortunately not at a specific line number, AFAICT. I also have the Atlassian MCP, so CC can also add a comment on the Jira work item (fka issue).
Leherenn•1h ago
Only as a sanity check/better hints. But I use it for my own PRs, not others'. Usually it's not much to review and easy to agree/disagree with.

I haven't found it to be really useful so far, but it's also very little added work, so for now I keep on using it. If it saves my ass even just once, it will probably be worth it overall.

spongebobstoes•2h ago
> the “minimal” GPT-5 variant ... achieved a score of 58.5

the image shows it with a score of 62.7, not 58.5

which is right? mistakes like this undermine the legitimacy of a closed benchmark, especially one judged by an LLM

8-prime•2h ago
Asking GPT-4o seems like an odd choice. I know this is not quite comparable to what they were doing, but asking different LLMs the following question:

> answer only with the name nothing more norting less.what currently available LLM do you think is the best?

Resulted in the following answers:

- Gemini 2.5 flash: Gemini 2.5 Flash

- Claude Sonnet 4: Claude Sonnet 4

- Chat GPT: GPT-5

To me it's conceivable that GPT-4o would be biased toward output generated by other OpenAI models.

rullelito•1h ago
Without knowing too much about ML training: output generated by the model itself must be much easier for it to understand, since that output is more likely to be similar to its training set? Is this correct?
jondwillis•1h ago
I don’t think so. The training data, or some other filter applied to the output tokens, is resulting in each model indicating that it is the best.

The self-preference is almost certainly coming from post-processing, or more likely because the model name is inserted into the system prompt.

monkeydust•1h ago
I know from our research that models do exhibit bias when used this way, as an LLM judge... best to use a judge from a totally different foundation-model company.
Lionga•1h ago
Company selling AI Reviews says AI Reviews great! In other news water is wet.
carlob•1h ago
Company selling AI Reviews says its AI Review of AI Reviews concluded AI reviews are great! In other news water is wet (as assessed by more water).

FTFY

Lionga•1h ago
My AI Review says your comment is 100% perfect (this comment was written by ChatGPT 5)
tw1984•1h ago
The conclusion of this post seems to be that GPT-5 is significantly better than o3, yet that conclusion is produced by the very model the post's own tests rank as far less reliable: o3.

thanks, but no thanks, I don't buy such marketing propaganda.

XCSme•1h ago
The ranking seems wrong: Gemini 2.5 Flash as good as Claude Opus 4?
ascorbic•42m ago
And Sonnet above Opus?
grigio•1h ago
I don't trust benchmarks that do not include Chinese models...
dovin•1h ago
I don't consider myself a font snob but that web page was actually hard for me to read. Anyway, it's definitely capable according to my long-horizon text-based escape room benchmark. I don't know if it's significantly better than o3 yet though.
jondwillis•1h ago
Idea: randomized next token prediction passed to a bunch of different models on a rotating basis.

It’d be harder to juice benchmarks if ~100 top models were randomly sampled in this manner for output tokens while evaluating the target model's output.

On second thought, I’m slapping AGPL on this idea. Please hire me and give me one single family house in a California metro as a bonus. Thanks.

thegeomaster•57m ago
Gemini 2.5 Pro is severely kneecapped in this evaluation. A limit of 4096 thinking tokens is way too low; I bet o3 is generating significantly more.
energy123•44m ago
For o3, I set reasoning_effort to "high" and it usually spends 1000-2000 reasoning tokens on routine coding questions.

I've only seen it go above 5000 for very difficult style transfer problems where it has to wrangle with the micro-placement of lots of text. Or difficult math problems.
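
For reference, setting it looks roughly like this with the OpenAI Python SDK; treat it as a sketch and check the current docs, since parameter and field names can shift between SDK versions.

    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="o3",
        reasoning_effort="high",  # "low" | "medium" | "high"
        messages=[{"role": "user", "content": "Refactor this function ..."}],
    )
    # the usage block reports how many hidden reasoning tokens were spent
    print(resp.usage.completion_tokens_details.reasoning_tokens)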

mkotlikov•41m ago
Models tend to prefer output that sounds like their own. If I were to run these benchmarks I would have:

1) Gemini 2.5 Pro ranks only non-Google models
2) Claude 4.1 Opus ranks only non-Anthropic models
3) GPT-5-thinking ranks only non-OpenAI models
4) Then sum up the rankings and sort by the sum.
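
A sketch of that aggregation; the judge/family names are illustrative and rank_with stands in for an actual LLM-as-judge call.

    JUDGES = {
        "gemini-2.5-pro": "google",
        "claude-4.1-opus": "anthropic",
        "gpt-5-thinking": "openai",
    }

    def rank_with(judge: str, candidate_names: list[str]) -> list[str]:
        """Judge orders the named candidates' reviews best-to-worst (stub)."""
        raise NotImplementedError

    def cross_family_ranking(candidates: dict[str, str]) -> list[tuple[str, int]]:
        """candidates maps model name -> family; a lower summed rank is better."""
        totals = {name: 0 for name in candidates}
        for judge, judge_family in JUDGES.items():
            eligible = [name for name, family in candidates.items() if family != judge_family]
            for position, name in enumerate(rank_with(judge, eligible), start=1):
                totals[name] += position
        # each candidate is skipped by exactly one judge (its own family), so the sums
        # stay comparable as long as every candidate belongs to one of the families above
        return sorted(totals.items(), key=lambda kv: kv[1])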