frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
102•theblazehen•2d ago•23 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
654•klaussilveira•13h ago•190 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
944•xnx•19h ago•550 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
119•matheusalmeida•2d ago•29 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
38•helloplanets•4d ago•38 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
48•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
228•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
14•kaonwarb•3d ago•18 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
219•dmpetrov•14h ago•114 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
329•vecti•16h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
378•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
487•todsacerdoti•21h ago•241 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
286•eljojo•16h ago•167 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
409•lstoll•20h ago•276 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
21•jesperordrup•4h ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
87•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
59•kmm•5d ago•4 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
4•speckx•3d ago•2 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
31•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
251•i5heu•16h ago•194 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
15•bikenaga•3d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
56•gfortaine•11h ago•23 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1062•cdrnsf•23h ago•444 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
144•SerCe•9h ago•133 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
180•limoce•3d ago•97 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•41 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
147•vmatsiiako•18h ago•67 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
72•phreda4•13h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•9h ago•12 comments

Qodo CLI agent scores 71.2% on SWE-bench Verified

https://www.qodo.ai/blog/qodo-command-swe-bench-verified/
139•bobismyuncle•5mo ago

Comments

gronky_•5mo ago
I’ve been running a bunch of coding agents on benchmarks recently as part of consulting, and this is actually much more impressive than it seems at first glance.

71.2% puts it at 5th, which is 4 points below the leader (four points is a lot) and just over 1% lower than Anthropic's own submission for Claude Sonnet 4, the same model these guys are running.

But the top rated submissions aren’t running production products. They generally have extensive scaffolding or harnesses that were built *specifically for SWE bench*, which kind of defeats the whole purpose of the benchmark.

Take for example Refact, which is at #2 with 74.4%: they built a ~2k-line framework around their agent specifically for SWE-bench (https://github.com/smallcloudai/refact-bench/). It's pretty elaborate, orchestrating multiple agents, with a debug agent that kicks in if the main agent fails. The debug agent analyzes the failure and gives insights to the main agent, which tries again, so it's effectively multiple attempts per problem.
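
A minimal sketch of what that kind of orchestration looks like (hypothetical Python, not Refact's actual framework; main_agent, debug_agent, and run_tests are stand-ins):

    # Hypothetical sketch of a debug-agent retry loop, not Refact's actual code
    def solve_with_debug_agent(problem, main_agent, debug_agent, run_tests, max_rounds=3):
        hints = []
        patch = None
        for _ in range(max_rounds):
            patch = main_agent.solve(problem, hints=hints)   # main agent proposes a patch
            report = run_tests(patch)                        # run the repo's test suite
            if report.passed:
                return patch
            # debug agent analyzes the failure and feeds insights back to the main agent
            hints.append(debug_agent.analyze(problem, patch, report))
        return patch                                         # last attempt, even if it failed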

If the results can be reproduced “out-of-the-box” with their coding agent like they claim, it puts it up there as one of the top 2-3 CLI agents available right now.

szundi•5mo ago
In your experience with this model, is it just trained for the benchmark, or do these scores actually reflect its performance?
energy123•5mo ago
What are the typical context lengths in SWE-bench problems? Does it partly measure performance in the 64-128k context range?
dimitri-vs•5mo ago
IIRC the SWE-bench dataset gives you the full repo snapshot plus the issue text; the evaluation pipelines typically run some kind of retriever (e.g. grep, BM25) to pick a subset of files to place in the model's context. The provided context is usually limited to ~50k tokens.
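
A rough sketch of that retrieval step (assuming the rank_bm25 package, naive whitespace tokenization, and a crude ~4 chars/token estimate):

    from rank_bm25 import BM25Okapi   # assumed dependency: pip install rank-bm25

    def select_context_files(issue_text, files, token_budget=50_000):
        # files: {path: source_text}; rank files by BM25 relevance to the issue text
        paths = list(files)
        bm25 = BM25Okapi([files[p].lower().split() for p in paths])
        scores = bm25.get_scores(issue_text.lower().split())
        ranked = sorted(zip(paths, scores), key=lambda pair: -pair[1])

        picked, used = [], 0
        for path, _score in ranked:
            est_tokens = len(files[path]) // 4        # crude ~4 chars per token
            if used + est_tokens <= token_budget:
                picked.append(path)
                used += est_tokens
        return picked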
whymauri•5mo ago
This is what the rows look like:

https://huggingface.co/datasets/princeton-nlp/SWE-bench_Veri...

It's up to your retrieval system/model to selectively hunt for relevant context. Here are a few critiques of the benchmark:

https://x.com/brhydon/status/1953648884309536958

thinkingtoilet•5mo ago
This is classic Goodhart's law. "When a measure becomes a target, it ceases to be a good measure"

https://en.wikipedia.org/wiki/Goodhart%27s_law

ambicapter•5mo ago
It's really not that hard to avoid building a custom bench setup to game the benchmark and to just use your product straight out of the box, though.
jasonjmcghee•5mo ago
Right. Building a custom setup is blatant; that will wildly overfit.

But let's say a group uses it as a metric as part of CI and each new idea/feature they create runs against SWE-bench. Maybe they have parameterized bits and pieces they adjust, maybe they have multiple candidate datasets for fine-tuning, maybe they're choosing between checkpoints.

This will also end up overfitting - especially if done habitually. It might be a great metric and result in a more powerful overall model. Or it might not.

VikingCoder•5mo ago
Right, other than financial pressure. Which is, of course, immense.
clutchdude•5mo ago
Also see the VW dieselgate and numerous other "gaming the system" examples.
kelipso•5mo ago
A specific setup for the benchmark is just plain cheating, not Goodhart’s law.
eddd-ddde•5mo ago
I think multiple attempts are completely understandable and even expected? How is that defeating the purpose of the benchmark?
gronky_•5mo ago
It’s a pass@1 benchmark. When submitting you need to check a box that there was only 1 attempt per problem. See here for example: https://github.com/SWE-bench/experiments/pull/219

Building multiple attempts into your agent is stretching the rules, even if technically it's acceptable.

terminalshort•5mo ago
From my perspective as a potential user the number of attempts is the number of times I have to tell it what to do. If you have an agent that makes a single attempt and is 60% accurate vs another that makes 5 attempts and is 80% accurate, why would you care that each individual attempt of the 2nd model is less accurate than the first?
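
Back-of-the-envelope, assuming attempts were independent (which real agent retries aren't, since failures tend to correlate):

    # pass@k under the (unrealistic) assumption of independent attempts
    def pass_at_k(p_single, k):
        return 1 - (1 - p_single) ** k

    print(pass_at_k(0.60, 1))   # 0.60
    print(pass_at_k(0.60, 5))   # ~0.99, so a real 5-attempt agent at 80% has heavily correlated failures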
mcintyre1994•5mo ago
I think it depends on "But the top rated submissions aren't running production products". It sounds like they're shipping a product without the debug-agent/try-again logic, and that it's just for the benchmark, so as a user you wouldn't get the performance they report.
gronky_•5mo ago
This ok from your perspective then?

def make_pass_at_1_agent(agent, n):    # "@" is not valid in a Python identifier
    def retry_agent(problem):
        result = None
        for _ in range(n):             # up to n internal attempts per problem
            result = agent(problem)
            if result.success:
                return result
        return result                  # last (failed) attempt
    return retry_agent
gronky_•5mo ago
Keep in mind that this isn't about users: the top agents on the leaderboard aren't running an actual product on the benchmark.

If they are running their production product as is, then of course whatever is built into the product is fine.

DougBTX•5mo ago
Absolutely fine, as long as the success flag is predicted by the model ensemble under test. That’s how Claude Code works for example, it will continue to iterate until success (or it will give up with failure at a certain point).
terminalshort•5mo ago
Definitely wouldn't have written the code that way, but yes, if (and this is a massive "if") the agent has an accurate and meaningful way to determine which way to set the success boolean. The obvious caveat would be if n needed to be large enough to set the costs higher than I am willing to pay for the additional performance or it makes it take longer than I'm willing to wait.

Think of the agent like an employee. If he delivers the code within the expected time and to the expected quality standards, his process of getting there means almost nothing. Do I care if he tried 4 different approaches along the way and threw out the first 3? Not a bit.

whymauri•5mo ago
Papers have been doing rollouts that involve a model proposing N solutions and then self-reviewing to choose the best one (prior to the verifier). So far, I think that's been counted as one pass.
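
Roughly, that rollout pattern is best-of-N with self-review before submission (a sketch with made-up propose/review_and_pick calls, not any specific paper's code):

    # Sketch: N proposals, model self-reviews and submits one answer, scored as a single pass
    def best_of_n_single_pass(model, problem, n=5):
        candidates = [model.propose(problem) for _ in range(n)]   # N independent solutions
        best = model.review_and_pick(problem, candidates)         # self-review, no external verifier
        return best                                               # only this answer is submitted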
radarsat1•5mo ago
I was thinking about this recently with respect to how many agent systems now let you specify a smaller/faster model for easier tasks and a bigger model for harder tasks.

It's interesting to think about what the trade-offs are. Assuming the system can properly classify a task as easy or hard (big "if" but I guess there are ways), there is nonetheless more to think about, depending on your pricing plan.

For subscription pricing, I guess you don't really care which model runs and in fact it's hard to find a reason to ever run the smaller model, so choosing between the models is more in the provider's interests for cost efficiency.

But for pay-per-use pricing, if you have a bigger model that gets the answer right 80% of the time, and a smaller model that handles smaller changes, gets things right 60% of the time, but can correct its mistakes, then the system should try to run the smaller one on as many tasks as possible to save you money. But if it ends up having to make a lot of corrections, you may end up needing more total requests than with the larger model. In that case maybe it's actually cheaper to run the larger model, if it takes fewer requests.

So I wonder how that kind of trade-off could be effectively calculated. I guess if you can figure out when "retries" happen you can count them and do some statistics on which model is more likely to work out in fewer shots. It's pretty complicated though, when you start to think about it in detail.

I do wonder if even having BOTH the smaller and bigger model make hypotheses, and try the smaller model's idea first, then if it fails, try the bigger model's idea, might be the way to go.
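
One way to put rough numbers on it (a toy sketch with made-up prices and success rates, again assuming independent retries):

    # Toy expected-cost comparison: cheap model with retries vs. one-shot expensive model
    def expected_cost(success_rate, cost_per_attempt, max_attempts):
        cost, p_unsolved = 0.0, 1.0
        for _ in range(max_attempts):
            cost += p_unsolved * cost_per_attempt    # pay for an attempt only if still unsolved
            p_unsolved *= (1 - success_rate)
        return cost, 1 - p_unsolved                  # (expected cost, overall success probability)

    print(expected_cost(0.60, 0.05, 3))   # small model, up to 3 tries -> ~(0.078, 0.936)
    print(expected_cost(0.80, 0.25, 1))   # large model, single try    -> (0.25, 0.8)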

oblio•5mo ago
https://github.com/auchenberg/volkswagen
Roritharr•5mo ago
Finally someone mentions Refact, I was in contact with the team, rooting for them really.
bluelightning2k•5mo ago
Just looked them up. Their pricing is built around buying "coins" with no transparency as to what that gets you. Hard pass.
Roritharr•5mo ago
You realize that you can self-host their stuff? https://github.com/smallcloudai/refact
terminalshort•5mo ago
Is there something in this multi-agent approach that makes the setup more specific to just the test at hand and less general to real engineering tasks? If not, then this multi-agent system will just become what you get out of the box in a future product. Multiple attempts per problem (as long as there's no human intervention or selection between them) is a perfectly fine approach for agents because that's not an issue from the perspective of an engineer using the product. A single agent is already a multi-step usage of LLMs and it sounds like this is just another meta level of that.
ai-christianson•5mo ago
One thing with SWE bench is making sure there's zero leakage of information into the LLM context.

I.e. the agent cannot even know which tests are failing.

It has to both fix the issue based just on the issue text and fix it in the specific way the unit test, which it cannot see, expects.

For this reason I find the benchmark a little disconnected from the reality of software engineering.

orangebread•5mo ago
I've been using Warp for the past few weeks and it's been incredibly impressive over other agentic coding services/platforms. Curious how Qodo stacks up.
lightbendover•5mo ago
When I tried warp I was convinced that was where the industry was going (agents as terminal replacement), but it felt a bit too heavy to me so I haven’t been using it lately. Still think all things will converge on terminal and browser replacement.
rs186•5mo ago
So this is from the same company that wrote a blog post with sentences that don't even make sense:

https://news.ycombinator.com/item?id=44833929, my comment https://news.ycombinator.com/item?id=44835939

khalic•5mo ago
We need some international body to start running these tests… I just can't trust these numbers any longer. We need a platform for this, something where we can at least get some peer review.
redman25•5mo ago
That sounds like an interesting idea to me. It would at least resolve the problem of companies gaming the metric.

Another option might be the LiveBench approach, where new tests are released on a regular basis.

jcorco•5mo ago
I’m working on this at STAC Research and looking to connect with others interested in helping. Key challenges are ensuring impartiality (and keeping it that way), making benchmarks ungameable, and guaranteeing reproducibility. We’ve done similar work in finance and are now applying the same principles to AI.
khalic•5mo ago
That sounds amazing, mind telling us a little more?
jcorco•5mo ago
Sure! STAC Research has been building and running benchmarks in finance for ~18 years. We've had to solve many of the same problems I think you're highlighting here, e.g. tech and model providers tuning specifically for the benchmark, results that get published but can't be reproduced outside the provider's lab, etc.

The approach is to use workloads defined by developers and end users (not providers) that reflect their real-world tasks. E.g. in finance, delivering market snapshots to trading engines. We test full stacks, holding some layers constant so you can isolate the effect of hardware, software, or models. Every run goes through an independent third-party audit to ensure consistent conditions, no cherry-picking of results, and full disclosure of config and tuning, so that the results are reproducible and the comparisons are fair.

In finance, the benchmarks are trusted enough to drive major infrastructure decisions by the leading banks and hedge funds, and in some cases to inform regulatory discussions, e.g. around how the industry handles time synchronization.

We're now starting to apply the same principles to the AI benchmarking space. I'd love to talk to anyone who wants to be involved.

khalic•5mo ago
Thank you, it's quite brilliant to transfer those skills like this.

So the business model would be AI foundries contracting you for evaluating their models?

Do you envision some kind of freely accessible platform for consulting the results?

mupuff1234•5mo ago
I'm curious how these LLM wrapper companies think they'll survive long term, especially coding-related wrappers.

I could understand focusing on a niche business use case, but coding is a main focus of the foundation models themselves.

M4R5H4LL•5mo ago
Labeling them as “wrappers” and “niche business” indicates a strong cognitive bias already. Value can be created on both sides of the equation.
dgfitz•5mo ago
How so? They are wrappers, and it is niche.
choilive•5mo ago
"Wrapper" is a bit pejorative and reductive; everything is a wrapper around something else.
dgfitz•5mo ago
If everything is a wrapper around something else, how can the description be a pejorative?
riku_iki•5mo ago
I think those wrappers could create some potentially complex workflow around the LLM API, with various trees of decisions, integrations, evals, rankers, raters, etc., and this is their added value.
itamarcode•5mo ago
Unlike most SWE-bench submissions, the Qodo Command one uses the product directly.

I think the next step is getting an official "checked" mark from the SWE-bench team.

whymauri•5mo ago
I feel like the bash-only SWE-bench Verified (a.k.a. model + mini-swe-agent) is the closest thing to measuring the inherent ability of the model vs. the scaffolding.

https://github.com/SWE-agent/mini-swe-agent

NitpickLawyer•5mo ago
There's swe-rebench, where they take "bugs/issues" by date, and you can drag a slider on their top scores to see issues solved after the model was released (obviously only truly working for open models).
OldGreenYodaGPT•5mo ago
Was using their bot for code review for the last 2 years but just dropped it for BugBot.
esafak•5mo ago
If Qodo is reading: please compare your efficiency too. Run some tasks on various agents using the same models, and report the cost.
zuzuen_1•5mo ago
Does anyone have a benchmark on the effectiveness of using embeddings to map bug reports to code files, as opposed to the extensive grepping that Qodo, Cursor, and a number of other tools I use do to localize faults?
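
The embedding variant is easy to prototype for comparison (a sketch; embed is a placeholder for whatever embedding model you use):

    import numpy as np

    def rank_files_by_similarity(bug_report, files, embed):
        # files: {path: source_text}; embed: text -> 1-D numpy vector (placeholder model)
        query = embed(bug_report)
        scores = {}
        for path, text in files.items():
            vec = embed(text)
            scores[path] = float(np.dot(query, vec) /
                                 (np.linalg.norm(query) * np.linalg.norm(vec) + 1e-9))
        return sorted(scores, key=scores.get, reverse=True)   # most likely files first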
zuzuen_1•5mo ago
I would be more interested in Qodo's performance on the SWE-bench Multilingual benchmark. SWE-bench Verified only includes bugs from Python repositories.

The best submission on SWE-bench Multilingual is Claude 3.7 Sonnet, which solves ~43% of the issues in the dataset.

OldfieldFund•5mo ago
Do we know anything about the size of the model? I can't find the answer.
khalic•5mo ago
It's Sonnet behind the scenes.
afro88•5mo ago
If Qodo are reading this: please introduce a plan that isn't for teams or enterprise. A "pro" plan for individuals who want more than 250 credits per month.
raylad•5mo ago
If it's really better than Claude Code while using Sonnet 4.0, then I'd pay a monthly fee for it, but only if I can use my Claude subscription the same way Claude Code does.

I do not want to pay API charges or be limited to a fixed number of "credits" per month.

lirantal•5mo ago
Slick. This applies to the new Qodo Command CLI, yes?

I updated to the latest version last night. Enjoyed seeing the process permission toggle (rwx). It was a refreshing change that keeps the security-minded folks a little less panicked about all the agentic coding adoption :-)