frontpage.

Leaning Into the Coding Interview: Lean 4 vs. Dafny cage-match

https://ntaylor.ca/posts/proving-the-coding-interview-lean/
1•todsacerdoti•2m ago•0 comments

Show HN: Azazel – Lightweight eBPF-based malware analysis sandbox using Docker

https://github.com/beelzebub-labs/azazel
1•mariocandela•6m ago•0 comments

We urgently need a federal law forbidding AI from impersonating humans

https://garymarcus.substack.com/p/we-urgently-need-a-federal-law-forbidding
2•headalgorithm•7m ago•0 comments

Show HN: File Brain – Local file search with OCR and semantic search

https://github.com/Hamza5/file-brain
1•Hamza5•9m ago•0 comments

Show HN: CLI Rust tool gitorg helps manage GitHub orgs

https://crates.io/crates/gitorg
1•DavidCanHelp•16m ago•0 comments

Gitdatamodel Documentation

https://git-scm.com/docs/gitdatamodel
1•todsacerdoti•17m ago•0 comments

Men lose their Y chromosome as they age – how it may matter

https://theconversation.com/men-lose-their-y-chromosome-as-they-age-scientists-thought-it-didnt-m...
5•bikenaga•19m ago•1 comment

Biases in the Blind Spot: Detecting What LLMs Fail to Mention

https://arxiv.org/abs/2602.10117
1•mpweiher•21m ago•0 comments

Free SERP Content Analyzer

https://kitful.ai/write-tools/serp-content-analyzer
1•eashish93•21m ago•1 comment

Why I'm Not Worried About My AI Dependency

https://boagworld.com/emails/ai-dependency/
1•cdrnsf•24m ago•0 comments

AI Agent Lands PRs in Major OSS Projects, Targets Maintainers via Cold Outreach

https://socket.dev/blog/ai-agent-lands-prs-in-major-oss-projects-targets-maintainers-via-cold-out...
1•cdrnsf•25m ago•0 comments

Internet Increasingly Becoming Unarchivable

https://www.niemanlab.org/2026/01/news-publishers-limit-internet-archive-access-due-to-ai-scrapin...
39•ninjagoo•26m ago•14 comments

Intent to Experiment: Ship Rust XML Parser to 1% stable for non XSLT scenarios

https://groups.google.com/a/chromium.org/g/blink-dev/c/D7BE4QPw0S4
1•justin-reeves•29m ago•0 comments

Google Search Isn't a Common Carrier – Richards v. Google

https://blog.ericgoldman.org/archives/2026/02/google-search-isnt-a-common-carrier-richards-v-goog...
2•hn_acker•31m ago•0 comments

Rendering attractors at 200 megapixels on A100s

https://axisophy.com/collections/mersenne
2•scylx•31m ago•1 comment

First Ariane 6 with four boosters lifts off

https://www.esa.int/Enabling_Support/Space_Transportation/Ariane/More_power_first_Ariane_6_with_f...
3•belter•32m ago•0 comments

What If AI Isn't the Goal? – Living in a Post-AI Society

https://zias.be/blog/living-in-a-post-ai-society
1•ziasvannes•35m ago•2 comments

Putting economic theory to the test: Cutting local taxes cuts household income

https://phys.org/news/2026-02-economic-theory-local-taxes-household.html
2•bikenaga•36m ago•1 comment

How AI slop is causing a crisis in computer science

https://www.nature.com/articles/d41586-025-03967-9
4•gnabgib•40m ago•0 comments

Show HN: AuraSpend – Voice-first expense tracker using Gemini for NLU

https://play.google.com/store/apps/details?id=com.intrepid.auraspend&hl=en_US
1•subhanzg•43m ago•0 comments

Every App Needs Auth / Ory Helps / This Template Fixes It

https://github.com/Samuelk0nrad/docker-ory
1•samuel_kx0•44m ago•0 comments

Show HN: DryCast – Never run outside to save your laundry from rain again

https://drycast.app/
1•AwkwardPanda•44m ago•0 comments

Manage, freeze and restore GPU processes quickly

https://github.com/shayonj/gpusched
2•shayonj•44m ago•0 comments

Show HN: Tilth v0.3 – 17% cheaper AI code navigation (279 runs, 3 Claude models)

1•jahala•46m ago•0 comments

Tech leaders pour $50M into super PAC to elect AI-friendly candidates

https://www.latimes.com/business/story/2026-02-13/tech-titans-pour-50-million-into-super-pac-to-e...
3•geox•47m ago•0 comments

How HEAD Works in Git

https://jvns.ca/blog/2024/03/08/how-head-works-in-git/
3•b-man•47m ago•0 comments

I Visited the Future of AI Engineering – and Returned with a Warning

https://igor718185.substack.com/p/i-visited-the-future-of-ai-engineering
2•iggori•49m ago•3 comments

Dr. Oz pushes AI avatars as a fix for rural health care

https://www.npr.org/2026/02/14/nx-s1-5704189/dr-oz-ai-avatars-replace-rural-health-workers
13•toomuchtodo•50m ago•8 comments

TikTok

https://www.tiktok.com/explore
1•Hackersing•51m ago•0 comments

Bloom-Filter Art: Encode words in a heart; Send it to someone special

https://improbable-heart.com/
1•nait•51m ago•1 comment

Stoat removes all LLM-generated code following user criticism

https://github.com/orgs/stoatchat/discussions/1022
34•ashleyn•2h ago

Comments

logicprog•1h ago
What a shame
ronsor•1h ago
It seems the thread was brigaded by militant anti-AI people upset over a few trivial changes made using an LLM.

I encourage people here to go read the 3(!) commits reverted. It's all minor housekeeping and trivial bugfixes—nothing deserving of such religious (cultish?) fervor.

raincole•1h ago
At this point, perhaps not disclosing AI usage is the right thing to do. Transparency only feeds the trolls, unfortunately.
ronsor•1h ago
I have been saying this for a few years at this point. Transparency can only exist when there is civility, and those without civility deserve no transparency[0].

[0] As a corollary, those with civility do deserve transparency. It's a tough situation.

longfacehorrace•1h ago
Looked at repos of the two loudest users in that thread; either they have none or it's all forks of other projects.

Non-contributors dictating how the hen makes bread.

ronsor•1h ago
In general, caving to online mobs is a bad long-term strategy (assuming the mob is not the majority of your actual target audience[0]). The mob does not care about your project, product, or service, and it will not reward you for your compliance. Instead it only sees your compliance as a weakness to further target.

[0] While this fact can be difficult to ascertain, one must remember that mobs are generally much, much louder than normal users, and normal users are generally quiet even when the mob is loud.

lich_king•1h ago
Yes, but also... that's like 90% of the interactions you get on the internet?

I don't want to be too meta, but isn't that a description of most HN threads? We show up to criticize other people's work for self-gratification. In this case, we're here to criticize the dev for caving in, even though most of us don't even know what Stoat is and we don't care.

Except for some corner cases, most developers and content creators mostly get negative engagement, because it's less of an adrenaline rush to say "I like your work" than to say "you're wrong and I'm smarter than you". Many learn to live with it, and when they make a decision like that, it's probably because they actually agree with the mob.

ronsor•1h ago
I don't actually care what the dev does. That's their prerogative, and it doesn't affect whether or not I'll use the software (I will if it's useful). I think that's the difference between here and a "mob", assuming other commenters think similarly.

I do think it's harmful to cave in, but that doesn't make me think less of the maintainer's character. On the other hand, some of the commenters in the issue might decry them as evil if they made the "wrong" decision.

It's fine to have opinions on the actions of others, but it's not fine to burn them at the stake.

longfacehorrace•47m ago
Not just online; priests, CEOs, celebrities, politicians: don't make them happy and you're a sinner, a bad employee, a hater of freedom, etc.

Anyone with a rhetorical opinion but who otherwise contributes little to getting cars off assembly lines, homes built, network cables laid.

In physical terms the world is full of socialist grifters, in that they only have a voice, no skill. They are reliant on money because they're helpless on their own.

Engineers could rule the world if they acted collectively rather than starting personal businesses. If we sat on our hands until our demands were met, the world would stop.

A lot of people in charge fear tech unions because we're the ones who actually get shit done.

throawayonthe•1h ago
isn't forks of other projects how you usually contribute code on github
blibble•48m ago
most of the anti-AI community have already migrated their repos from Slophub
pythonaut_16•1h ago
Wastes of time like this are exactly why Stoat/Revolt is unlikely to ever be a serious Discord alternative
argee•1h ago
Could you elaborate on this? I can’t tell whether you mean to say that open source projects run into user-initiated time sinks that detract from their productivity (which is arguably the case for any public facing project), or whether private repositories bypass this type of scrutiny by default which affords them an advantage, or whether this is about the Stoat/Revolt devs specifically and how they choose to spend their time.
ronsor•43m ago
I think the parent comment is referring to the fact that even focusing on whether ~100 lines of code across 3 commits should/should not be generated by an LLM is meaningless bikeshedding which has no place in a serious project.
Palomides•32m ago
why? I think having a stated policy on LLM use is increasingly unavoidable for FOSS projects
sodality2•1h ago
If only the average open source project got this level of scrutiny actually checking for vulnerabilities. I get that you don't want your private chats leaked by slopcode, but this was a few dozen lines of scaffolding in large software created before LLM coding; it would have been better to register your discontent without making demands, then continue to watch the repo for vulnerabilities. This feels like fervor without any work behind it
singularfutur•1h ago
Reverting a few trivial commits because of purity tests is a bad precedent. It rewards the loudest commenters and punishes maintainers.
imsofuture•1h ago
It will be a painful decade until those who have already lost this weird ideological war finally realize it.
rsynnott•57m ago
And which side is that? I mean, from my point of view, it seems like it’s probably the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible, rather than using a bloody library.

(For whatever reason, LLM coding things seem to love to reinvent the square wheel…)

ronsor•51m ago
Not once in history has new technology lost to its detractors, even if half its proponents were knuckleheads.
BJones12•49m ago
Nuclear power disagrees
raincole•43m ago
Nuclear power will win (obviously). Unless you're talking about nuclear weapon.
hubertdinsk•45m ago
latest counter-example is NFTs.
ronsor•39m ago
NFTs lost because they didn't do anything useful for their proponents, not because people were critical of them. They would've fizzled out even without detractors for that reason.

On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.

Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.

hubertdinsk•32m ago
oh it's "because of this and that" now?

The original topic was "not once blah blah...". I don't have to entertain you further, and won't.

blibble•9m ago
moving the goalposts
latexr•9m ago
Web3, Google Glass, Metaverse, NFTs…
kingstnap•44m ago
Dependencies aren't free. If you have a library with less than a thousand lines of code total, that's really janky. Sometimes it makes sense, as with PicoHTTPParser, but it often doesn't.

Left-pad isn't a success story to be reproduced.
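
For a sense of scale, a minimal TypeScript sketch (illustrative only, not the actual npm left-pad source) of roughly what a left-pad-sized dependency amounts to; code this small is usually cheaper to inline than to depend on:

    // Roughly what "left-pad" does: pad a string on the left to a target length.
    function leftPad(input: string, targetLength: number, padChar: string = " "): string {
      // Number of pad characters still needed (never negative).
      const missing = Math.max(0, targetLength - input.length);
      return padChar.repeat(missing) + input;
    }

    console.log(leftPad("42", 5, "0")); // prints "00042"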

spankalee•39m ago
> the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible

Gee, I wonder which "side" you're on?

It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.

The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?

Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?

GorbachevyChase•9m ago
I’m not sure where you’ve been the last four years, but we’ve come a long way from GPT 3.5. There is a good chance your work environment does not permit the use of helpful tools. This is normal.

I'm also not sure why programmatically generated code is inherently untrustworthy but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?

minimaxir•59m ago
And then you have the "Alas, the sheer fact that LLM slop-code has touched it at all is bound to be a black stain on its record" comments.
Palomides•56m ago
what if the users legitimately don't want AI written software?
raincole•43m ago
You have to think twice about whether you really want to cater to these 'legitimate users' then. In Steam's review sections you can find people giving negative reviews just because a game uses Unity or Unreal. Should devs cater to them and develop their own in-house engines?
Palomides•38m ago
maybe? devs should weigh the feedback and decide what they think will best serve the project. open source is, especially, always in conversation with the community of both users and developers.
minimaxir•43m ago
Then they have the right to not use it: Stoat does not have a monopoly on chat software.
blibble•49m ago
maybe a preview of what's to come when the legal system rules the plagiarism machine's output is a derivative work?
spankalee•36m ago
Since a human can also be a "plagiarism machine" (it's a potential copyright violation for both me and an LLM alike to create images of Mickey Mouse for commercial uses) it'll matter exactly what the output is, won't it?
Seattle3503•17m ago
This sort of purity policing happens to other mission-driven open source projects. The same thing happens to Firefox. Open source projects risk spending all their time trying to satisfy a fundamentally extreme minority, while the big commercial projects act with impunity.

It seems like it is hard to cultivate a community that cares about doing the right thing, but is focused and pragmatic about it.

ragthr•1h ago
Nice move! It is fun to watch the copyright thieves and their companies go into intellectual contortions (militant, purity tests, ideology) if their detrimental activities get any pushback.
philipwhiuk•53m ago
The fun part is this only happens because Claude Code commits its changes.

If you use, for example, the GitHub Copilot IDE integration, there's no evidence.
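
As a rough illustration of where that evidence lives, a small TypeScript sketch that scans local commit messages for a co-author trailer; it assumes (as commonly reported) that Claude Code adds a "Co-Authored-By: Claude" trailer to its commits, which editor-side completion tools do not:

    // Count commit-message lines in the current repo that credit Claude as co-author.
    import { execSync } from "node:child_process";

    const messages = execSync("git log --format=%B", { encoding: "utf8" });
    const hits = messages
      .split("\n")
      .filter((line) => /^co-authored-by:.*claude/i.test(line.trim()));

    console.log(`${hits.length} commit message lines credit Claude as co-author`);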

cat_plus_plus•44m ago
"it's worth considering that there are many people with incredibly strong anti-LLM views, and those people tend to be minorities or other vulnerable groups."

I have pretty low expectations for human code in that repository.

ronsor•41m ago
The response mentioning minorities is obviously bad faith. Even if true, it's not really relevant, and most likely serves as a way to tie LLM use to slavery, genocide, or oppression without requiring rational explanation.
stavros•41m ago
I love how people in the thread are like "if I'm going to ask my group of friends to switch to this, I need to know it's not written by security-issue-generator machines", meanwhile at Discord LLMs go brrr:

https://discord.com/blog/developing-rapidly-with-generative-...

ronsor•38m ago
To be fair, many of them are already fleeing Discord over the ID surveillance, so it makes sense that they would be pickier this time.
latexr•1m ago
No one on the thread is advocating for Discord, so I don't understand what argument you're making.
deadbabe•34m ago
If you find yourself having to use LLMs to write a lot of tedious code, you have bad architecture. You should use patterns and conventions that eliminate the tedium by making things automagically work. That way each line of code you write is more powerful and there's less filler. Remember the days when you could create entire apps with just a few lines of code? So little code that an LLM would be pointless.
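
As one hypothetical sketch of that idea in TypeScript: a single declarative schema drives defaults and required-field checks by convention, instead of hand-writing (or LLM-generating) a separate check for every field:

    // One schema, applied by convention, replaces a pile of per-field boilerplate.
    type FieldSpec = { required?: boolean; default?: unknown };

    const userSchema: Record<string, FieldSpec> = {
      name: { required: true },
      bio: { default: "" },
      theme: { default: "dark" },
    };

    function applySchema(schema: Record<string, FieldSpec>, input: Record<string, unknown>) {
      const out: Record<string, unknown> = {};
      for (const [field, spec] of Object.entries(schema)) {
        // Fall back to the declared default, then enforce required fields.
        const value = input[field] ?? spec.default;
        if (spec.required && value === undefined) {
          throw new Error(`missing required field: ${field}`);
        }
        out[field] = value;
      }
      return out;
    }

    console.log(applySchema(userSchema, { name: "ada" }));
    // { name: "ada", bio: "", theme: "dark" }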