frontpage.
Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
152•yi_wang•5h ago•48 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
73•RebelPotato•5h ago•18 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
267•valyala•13h ago•51 comments

Total surface area required to fuel the world with solar (2009)

https://landartgenerator.org/blagi/archives/127
30•robtherobber•4d ago•28 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
207•mellosouls•15h ago•355 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
170•surprisetalk•12h ago•163 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
75•swah•4d ago•130 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
76•gnufx•11h ago•59 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
183•AlexeyBrin•18h ago•35 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
176•vinhnx•16h ago•17 comments

Why there is no official statement from Substack about the data leak

https://techcrunch.com/2026/02/05/substack-confirms-data-breach-affecting-email-addresses-and-pho...
30•witnessme•2h ago•7 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
328•jesperordrup•23h ago•98 comments

The Architecture of Open Source Applications (Volume 1) Berkeley DB

https://aosabook.org/en/v1/bdb.html
8•grep_it•5d ago•0 comments

First Proof

https://arxiv.org/abs/2602.05192
138•samasblack•15h ago•81 comments

Wood Gas Vehicles: Firewood in the Fuel Tank (2010)

https://solar.lowtechmagazine.com/2010/01/wood-gas-vehicles-firewood-in-the-fuel-tank/
35•Rygian•2d ago•9 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
86•momciloo•13h ago•17 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
77•chwtutha•3h ago•20 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
109•thelok•15h ago•24 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
593•theblazehen•3d ago•212 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
41•mbitsnbites•3d ago•5 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
114•randycupertino•8h ago•241 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
314•1vuio0pswjnm7•19h ago•502 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
235•limoce•4d ago•125 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
907•klaussilveira•1d ago•277 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
160•speckx•4d ago•244 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
36•languid-photic•4d ago•17 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
304•isitcontent•1d ago•39 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
498•lstoll•1d ago•332 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
447•ostacke•1d ago•114 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
314•dmpetrov•1d ago•158 comments

Building a better Bugbot

https://cursor.com/blog/building-bugbot
44•onurkanbkrc•3w ago

Comments

skrebbel•3w ago
Few things give me more dread than reviewing the mediocre code written by an overconfident LLM, but arguing in a PR with an overconfident LLM that its review comments are wrong is up there.
makingstuffs•3w ago
I can’t agree more. I’m torn on LLM code reviews. On the one hand, review is a place where they make a lot of sense: they can quickly catch silly human errors like misspelled variables and whatnot.

On the other hand the amount of flip flopping they go through is unreal. I’ve witnessed numerous instances where either the cursor bugbot or Claude has found a bug and recommended a reasonable fix. The fix has been implemented and then the LLM has argued the case against the fix and requested the code be reverted. Out of curiosity to see what happens I’ve reverted the code just to be told the exact same recommendation as in the first pass.

I can foresee this becoming a circus for less experienced devs, so I turned off the auto code reviews and put them in request-only mode with a GH action, so that I can retain some semblance of sanity and keep the PR comment history from becoming cluttered with overly verbose comments from an agent.
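A "request-only" setup like the one described could be sketched as a GitHub Actions workflow that fires only when a maintainer asks for a review in a PR comment. This is a hypothetical illustration, not the commenter's actual config: the `/bugbot review` trigger phrase and the final trigger step are invented placeholders for whatever CLI or API call your review bot uses.

```yaml
# Hypothetical workflow: run the LLM review only on demand,
# when someone comments "/bugbot review" on a pull request,
# instead of automatically on every push.
name: on-demand-llm-review

on:
  issue_comment:
    types: [created]

jobs:
  review:
    # issue_comment fires for both issues and PRs;
    # github.event.issue.pull_request is only set for PR comments.
    if: >
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/bugbot review')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder: substitute the actual command that triggers
      # your review bot for this PR.
      - run: echo "Requesting review for PR #${{ github.event.issue.number }}"
```

The key point is the `if:` guard: the bot never comments unless explicitly summoned, which keeps the PR history quiet by default.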

ramraj07•3w ago
The purpose of these reviewers is to flag the bug to you. You still need to read the surrounding code and see if it's valid, serious, and worth a fix. Why does it matter if it then says the opposite after the fix? Did that even happen often, or is this just an anecdote of a one-time thing?
ljm•2w ago
It’s like a linter with conflicting rules (can’t use tabs, rewrite to spaces; can’t use spaces, rewrite to tabs). Something that runs itself in circles and can also block a change unless the comment is resolved simply adds noise, and a bot that contradicts itself does not add confidence to a change.
dgxyz•3w ago
The battle I am fighting at the moment is that our glorious engineering team, the lowest-bidding external outsourcer, make the LLM spew look pretty good. The reality of course is they are both terrible, but no one wants to hear that, only that the LLM is better than the humans. And that's only because it's the narrative they need to maintain.

Relative quality is better but the absolute quality is not. I only care about absolute quality.

ramraj07•3w ago
Do you have actual experience with bugbot? It's live in our org and is actually pretty good: almost none of its comments are frivolous or wrong, and it finds genuine bugs most reviewers miss. This is unlike Graphite and Copilot, so no one's glazing AI for AI's sake.

Bugbot is now a valuable part of our SD process. If you have genuine examples showing that we are just being delusional, or that we simply haven't hit a roadblock yet, I would love to know.

skrebbel•3w ago
I assume this is the same as when Cursor spontaneously decides to show code review comments in the IDE as part of some upsell? In that case, yes, I'm familiar, and they were all subtly wrong.
ljm•2w ago
I have no problem accepting the odd comment that actually highlights a flaw and dismissing the rest, because I can use discretion and have an understanding of what it has pointed out and if it’s legit.

The dread is explaining this to someone less experienced, because it’s not helpful to just say to use your gut. So I end up highlighting the comments that are legit and pointing out the ones that aren’t to show how I’m approaching them.

It turns out that this is a waste of time, nobody learns anything from it (because they’re using an LLM to write the code anyway) and it’s better to just disable the integration and maybe just run a review thing locally if you care. I would say that all of this has made my responsibility as a mentor much more difficult.

agent013•3w ago
The biggest problem with LLM reviews for me is not false positives, but authority. Younger devs are used to accepting bot comments as the ultimate truth, even when they are clearly questionable.
jaggederest•3w ago
Yes, I've found some really interesting bugs using LLM feedback, but it's about a 40% accuracy rate, mostly because it highlights things that are noncritical (for example, we don't need to worry about portability in a single-architecture app that runs on a specific OS).
ljm•2w ago
I alluded to it in a separate comment but the problem I have here is that it is really hard to get through to them on this too.

Upskilling a junior dev used to require spending time in the code together, sharing knowledge, pairing, and such like. LLMs have abstracted a good part of that away and in doing so broken a line of communication, and while there are still many other topics that can be tackled as a mentor, the one most relevant to an upstart junior is effective programming, and they will now more likely disappear into Claude Code for extended lengths of time than reach out for help.

This is difficult to work with because you’ll need to do more frequent check-ins, akin to managing. And coaching someone through a prompt and a fancy MCP setup isn’t the same as walking through a codebase, giving context, advising on idiomatic language use and such like.

nolanl•3w ago
I've found Bugbot to be shockingly effective at finding bugs in my PRs. Even when it's wrong, it's usually worth adding a comment, since it's the kind of mistake a human reviewer would make.