
Diverse perspectives on AI from Rust contributors and maintainers

https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
95•weinzierl•1h ago•41 comments

PC Gamer recommends RSS readers in a 37mb article that just keeps downloading

https://stuartbreckenridge.net/2026-03-19-pc-gamer-recommends-rss-readers-in-a-37mb-article/
257•JumpCrisscross•5h ago•122 comments

The gold standard of optimization: A look under the hood of RollerCoaster Tycoon

https://larstofus.com/2026/03/22/the-gold-standard-of-optimization-a-look-under-the-hood-of-rolle...
142•mariuz•5h ago•49 comments

The future of version control

https://bramcohen.com/p/manyana
371•c17r•8h ago•219 comments

Reports of code's death are greatly exaggerated

https://stevekrouse.com/precision
210•stevekrouse•12h ago•193 comments

LLMs Predict My Coffee

https://dynomight.net/coffee/
58•surprisetalk•4d ago•23 comments

Why I love NixOS

https://www.birkey.co/2026-03-22-why-i-love-nixos.html
165•birkey•6h ago•126 comments

GrapheneOS will remain usable by anyone without requiring personal information

https://grapheneos.social/@GrapheneOS/116261301913660830
162•nothrowaways•2h ago•34 comments

Project Nomad – Knowledge That Never Goes Offline

https://www.projectnomad.us
345•jensgk•11h ago•98 comments

Flash-MoE: Running a 397B Parameter Model on a Laptop

https://github.com/danveloper/flash-moe
290•mft_•12h ago•103 comments

Five Years of Running a Systems Reading Group at Microsoft

https://armaansood.com/posts/systems-reading-group/
108•Foe•7h ago•28 comments

Iran war energy crisis is a renewable energy wake-up call

https://apnews.com/article/middle-east-wars-renewable-energy-asia-4b5fe0693ce5816472c905db85f7da6e
83•mooreds•2h ago•66 comments

MAUI Is Coming to Linux

https://avaloniaui.net/blog/maui-avalonia-preview-1
141•DeathArrow•8h ago•66 comments

How to Attract AI Bots to Your Open Source Project

https://nesbitt.io/2026/03/21/how-to-attract-ai-bots-to-your-open-source-project.html
54•zdw•1d ago•11 comments

Windows native app development is a mess

https://domenic.me/windows-native-dev/
309•domenicd•14h ago•327 comments

Theodosian Land Walls of Constantinople

https://turkisharchaeonews.net/object/theodosian-land-walls-constantinople
20•bcraven•3d ago•4 comments

First and Lego Education Partnership Update

https://community.firstinspires.org/first-lego-education-partnership-update
11•jchin•3d ago•3 comments

Building an FPGA 3dfx Voodoo with Modern RTL Tools

https://noquiche.fyi/voodoo
149•fayalalebrun•10h ago•32 comments

What Young Workers Are Doing to AI-Proof Themselves

https://www.wsj.com/economy/jobs/ai-jobs-young-people-careers-14282284
55•wallflower•5h ago•55 comments

Show HN: Codala, a social network built on scanning barcodes

https://play.google.com/store/apps/details?id=com.hsynkrkye.codala&hl=en
22•hsynkrkye•4d ago•11 comments

Teaching Claude to QA a mobile app

https://christophermeiklejohn.com/ai/zabriskie/development/android/ios/2026/03/22/teaching-claude...
57•azhenley•5h ago•4 comments

Cloudflare flags archive.today as "C&C/Botnet"; no longer resolves via 1.1.1.2

https://radar.cloudflare.com/domains/domain/archive.today
364•winkelmann•20h ago•262 comments

Palantir extends reach into British state as it gets access to sensitive FCA data

https://www.theguardian.com/technology/2026/mar/22/palantir-extends-reach-into-british-state-as-i...
170•chrisjj•6h ago•46 comments

More common mistakes to avoid when creating system architecture diagrams

https://www.ilograph.com/blog/posts/more-common-diagram-mistakes/
134•billyp-rva•12h ago•50 comments

Vectorization of Verilog Designs and its Effects on Verification and Synthesis

https://arxiv.org/abs/2603.17099
20•matt_d•3d ago•3 comments

They're Vibe-Coding Spam Now

https://tedium.co/2026/02/25/vibe-coded-email-spam/
16•raybb•1h ago•9 comments

OpenClaw is a security nightmare dressed up as a daydream

https://composio.dev/content/openclaw-security-and-vulnerabilities
277•fs_software•6h ago•192 comments

25 Years of Eggs

https://www.john-rush.com/posts/eggs-25-years-20260219.html
242•avyfain•4d ago•70 comments

The IBM scientist who rewrote the rules of information just won a Turing Award

https://www.ibm.com/think/news/ibm-scientist-charles-bennett-turing-award
93•rbanffy•12h ago•7 comments

A review of dice that came with the white castle

https://boardgamegeek.com/thread/3533812/a-review-of-dice-that-came-with-the-white-castle
124•doener•3d ago•36 comments

Diverse perspectives on AI from Rust contributors and maintainers

https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html
95•weinzierl•1h ago

Comments

_pdp_•1h ago
AI ultimately breaks the social contract.

Sure, people are not perfect, but there are established common values that we don't need to convey in a prompt.

With AI, despite its usefulness, you are never sure if it understands these values. That might be somewhat embedded in the training data, but we all know these properties are much more swayable and unpredictable than those of a human.

It was never about the LLM to begin with.

If Linus Torvalds makes a contribution to the Linux kernel without actually writing the code himself, instead assigning it to a coding assistant, for better or worse I will 100% accept it at face value. This is because I trust his judgment (while accepting that he is as fallible as any other human). But if an unknown contributor does the same, even if the code produced is ultimately high quality, I would think twice before merging.

I mean, we already see this in various GitHub projects. There are open-source solutions that whitelist known contributors and it appears that GitHub might be allowing you to control this too.

https://github.com/orgs/community/discussions/185387
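
A minimal sketch of the kind of allowlist gating those projects use, assuming a simple triage bot; the contributor names, PR fields, and routing labels here are all hypothetical:

```python
# Hypothetical sketch: route incoming PRs based on whether the author is on
# a known-contributor allowlist. The names and dict fields are illustrative,
# not any specific project's setup.

KNOWN_CONTRIBUTORS = {"alice", "bob", "nikomatsakis"}

def triage(pr: dict) -> str:
    """Route a PR (author, title) based on whether the author is known."""
    if pr["author"] in KNOWN_CONTRIBUTORS:
        return "review-queue"   # trusted: goes straight to human review
    return "needs-vetting"      # unknown: held for extra scrutiny

print(triage({"author": "alice", "title": "Fix overflow in parser"}))
print(triage({"author": "new-bot-4921", "title": "Refactor everything"}))
```

A real setup would pull the author from the forge's PR API instead of a dict, but the gating logic is the same.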

yabutlivnWoods•1h ago
Generational churn breaks the social contract.

You all using Latin and believing in the old Greek gods to honor the dead?

Muricans still owning slaves from Africa?

All ways in which old social contracts were broken at one point.

We are not VHS cassettes with an obligation to play out a fuzzy memory of history.

bluefirebrand•1h ago
> AI ultimately breaks the social contract

Business schools teach that breaking the social contract is a disruption opportunity for growth, not a negative.

The Hacker in Hacker News refers to "growth hacking" now, not hacking code

_pdp_•55m ago
It depends who you ask.

You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing, although I am sure some will find opportunities for growth.

After all, the phoenix must burn to emerge, but let's not romanticise the fire.

bluefirebrand•48m ago
> You cannot say that breaking the social contract (the fabric of society, if you will) is generally a good thing

I am not saying it's a good thing, just that it's a common attitude here

I suppose it didn't come through in my original post, but I was trying to be critical

throwaway27448•1h ago
An agent is still attached to an accountable human. If it is not, ignore it.
jojomodding•56m ago
How do you figure out which is the case, at scale?
throwaway27448•22m ago
You don't.
_pdp_•1h ago
I forgot to mention why I brought up the idea of who is making the contribution rather than how (i.e., through an LLM).

Right now, the biggest issue open-source maintainers face is an ever-increasing supply of PRs. Before coding assistants, most of those PRs never got pushed, not because they were never written (though there were certainly fewer of them), but because contributors were conscious of how their contributions might be perceived. In many cases, the changes never saw the light of day outside of the fork.

LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

So I don't think the question is whether machine-generated code is low quality at all, because that is hard to judge, and frankly coding assistants can certainly produce high-quality code (with guidance). The question is who made the contribution. With rising volumes, we will see an increasing number of rejections.

By the way, we do this too internally. We have a script that deletes LLM-generated PRs automatically after some time. It is just easier and more cost-effective than reviewing the contribution. Also, PRs get rejected for the smallest of reasons.

If it doesn't pass the smell test moments after the link is opened, it gets deleted.
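
A cleanup script of the kind described above might look something like this sketch; the `llm-generated` label, the PR structure, and the 14-day window are assumptions for illustration, not the actual internal tooling:

```python
# Hedged sketch: find PRs tagged as LLM-generated that have sat unreviewed
# past a cutoff, so they can be closed automatically.
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=14)  # assumed cutoff, not the real policy

def stale_llm_prs(prs, now):
    """Return PRs labeled llm-generated that are older than STALE_AFTER."""
    return [
        pr for pr in prs
        if "llm-generated" in pr["labels"]
        and now - pr["opened_at"] > STALE_AFTER
    ]

now = datetime(2026, 3, 23)
prs = [
    {"id": 1, "labels": ["llm-generated"], "opened_at": datetime(2026, 3, 1)},
    {"id": 2, "labels": ["llm-generated"], "opened_at": datetime(2026, 3, 20)},
    {"id": 3, "labels": ["bugfix"], "opened_at": datetime(2026, 2, 1)},
]
for pr in stale_llm_prs(prs, now):
    print(f"closing PR #{pr['id']}")  # a real script would call the forge API here
```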

pear01•33m ago
> LLMs don't second-guess whether a change is worth submitting, and they certainly don't feel the social pressure of how their contribution might be received. The filter is completely absent.

Of course you could have an agent on your side do this, so I take you to mean that an LLM which submits a PR without being instructed to reflect will not do so intrinsically, the way a human does as a necessary side effect of submitting in the first place (though one might be surprised).

It would be curious to have an API that attempts to validate some attestation about how the submitting LLM's contribution was derived, i.e., force that reflection at submission time with some reasonable guarantees of veracity, even if it had yet to be considered. Perhaps some future API can enforce such a contract among the various LLMs.

pear01•43m ago
Prioritizing or deferring to existing contributors happens in pretty much every human endeavor.

As you point out this of course predates the age of LLM, in many ways it's basic human tribal behavior.

This does have its own set of costs and limitations however. Judgement is hard to measure. Humans create sorting bonds that may optimize for prestige or personal ties over strict qualifications or ability. The tribe is useful, but it can also be ugly. Perhaps in a not too distant future, in some domains or projects these sorts of instincts will be rendered obsolete by projects willing to accept any contribution that satisfies enough constraints, thereby trading human judgement for the desired mix of velocity and safety. Perhaps as the agents themselves improve this tension becomes less an act of external constraint but an internal guide. And what would this be, if not a simulation of judgement itself?

You could also do it in stages, i.e., have a delegated agent promote people to some purgatory where there is at least some hope of human intervention to attain the same rights and privileges as pre-existing contributors, that is, if said agent deems your attempt worthy enough. Or, to fight spam, maybe an earnest contributor will have to fork over some digital currency, essentially paying the cost of requesting admission.

All of these scenarios are rather familiar in terms of the history of human social arrangements.

That is just to say, there is no destruction of the social contract here. Only another incremental evolution.

ghosty141•1h ago
The title is misleading. It says in one of the first sentences:

> The comments within do not represent “the Rust project’s view” but rather the views of the individuals who made them. The Rust project does not, at present, have a coherent view or position around the usage of AI tools; this document is one step towards hopefully forming one.

So calling this "Rust Project Perspectives on AI" is not quite right.

JoshTriplett•1h ago
Correct. This is one internal draft by someone quoting some other people's positions, but not speaking for any positions beyond those.
chriscbr•1h ago
Maybe "Rust maintainers' perspectives on AI" or "Rust contributors' perspectives on AI" would be better?
eholk•59m ago
I took it as meaning "perspectives of people in the Rust Project about AI."
andai•1h ago
>It takes care and careful engineering to produce good results. One must work to keep the models within the flight envelope. One has to carefully structure the problem, provide the right context and guidance, and give appropriate tools and a good environment. One must think about optimizing the context window; one must be aware of its limitations.

In other words, one has to lean into the exact opposite tendencies of those which generally make people reach for AI ;)

throwaway27448•1h ago
I'm not sure there is a "normal" tendency to reach for AI. But there is certainly a parallel in that, say, JavaScript and PHP have a reputation of being preferred by barely able people who make interesting and useful things with atrocious code.
olalonde•1h ago
I feel bad for people who reject LLMs on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
monkaiju•53m ago
> They'll likely fall behind

So far this doesn't seem to be the case, despite it being repeated endlessly over the last few years.

>while also having to live in a world increasingly built around something they see as immoral

Should people just decide that things they think are immoral are actually fine and get over it? Doesn't really seem coherent...

ronsor•28m ago
When the moral perspective isn't that sound and isn't that important, yeah, they usually do. Everyone gets tired of complaining.
YorickPeterse•52m ago
This is just the typical FOMO nonsense pushed by AI fans.

It's the same pattern as many past hype cycles, and every time the result is a lot more nuanced than those fans claim. It wasn't that long ago that people were claiming MongoDB was going to revolutionize the world and make relational databases obsolete, or that cryptocurrencies were going to change the world, or NFTs, and the list goes on.

pton_xd•51m ago
I don't necessarily agree with the LLM moral objection, but this point of view is unconvincing. Change the topic to, say, slavery, and the "I feel bad for those who reject slavery on moral grounds, they'll fall behind..." argument becomes fairly absurd.

You're essentially saying the very concept of a moral objection is to be pitied. Maybe you believe that's true but I'd say that reflects poorly on our values today.

muglug•39m ago
No, he's saying this specific moral objection is to be pitied.

When I say "I feel bad for people who feel a need to own guns", I'm not saying I feel bad for people who feel a need to lock their doors at night.

pton_xd•24m ago
The whole point of a moral objection is to give up some real or perceived benefit due to moral or ethical concerns. There's nothing to be pitied.
forgetfulness•45m ago
LLMs are very easy to pick up; the whole point of them, for their makers, is to commoditize skill and knowledge. You can't be left behind in learning to use them, and AI providers have no economic incentive to make them into anything other than appliances.

The people more at risk of being left behind are the ones that don't learn when not to trust their output.

duskwuff•10m ago
> The people more at risk of being left behind are the ones that don't learn when not to trust their output.

Or the ones who fall out of practice writing software themselves because they've been relying on AI to do all the work.

(Or the same, but with "long-form English text" instead of "software".)

bluefirebrand•42m ago
I feel bad for people who accept AI. They're going to wind up just as replaced by it as I will, but it will somehow come as a surprise to them despite the writing being on the wall for ages

I imagine there will be a lot of regret in the future from people who were early adopters and eventually got pushed out by the AI they love so much

kellpossible2•29m ago
There must be plenty of people who "accept" it in a fatalistic manner, where the final result will not be a surprise.
exfalso•28m ago
Regret? Of what? The tech is here. You won't slow it down by not using it. People need to either adapt by moving to more and more niche areas, or become the person to be retained when the efficiency gains materialize. We still don't have the proper methodology figured out, but people are working on it.

That said, I'd agree that people who currently claim 20x speedups will indeed be replaced.

ares623•21m ago
> You won't slow it down by not using it.

Then why is it forced into everywhere and everyone and everything?

bluefirebrand•6m ago
If enough people refuse to use it then we can absolutely slow it down

So I'm doing that. Even if I don't expect to "win" in the end, I'm doing what I think is right

Maybe one day I'll be vindicated

deadbabe•41m ago
Are the people who aren't born yet or haven't even entered the workforce also falling behind?
tayo42•33m ago
Yeah, that's why you go to school, learn, get trained, etc.
ptnpzwqd•35m ago
On the falling behind:

I strongly doubt that is going to be the case - picking up these tools is not rocket science, even if you want to be able to use them fairly effectively. In addition, there is so much churn in AI tooling these days that an early investment might not really be worth a lot in the longer run.

On the other hand, hands-on experience in programming and architecture is currently a must-have to use the tools effectively - and continuing without AI in the short term might just buy an inexperienced engineer some time to learn, and postpone skill atrophy for an experienced engineer.

Of course, who can know what the future looks like, but I doubt a "wait and see" approach is that dangerous to anyone's career.

Kerrick•26m ago
Why would anybody who rejects them on moral grounds pick them up later? It isn't a discussion of lateness, it's a discussion of opting out.
yonran•47m ago
Seems like a lot of people's problems with AI come from talking to the dumber models and not getting sufficient proof that a bug was actually fixed. Maybe instead of banning AI, projects should set a minimum smarts level, e.g. to contribute, you must use gpt-5.4-codex high or better for either writing the code or reviewing it.
henry_bone•42m ago
The industry and the wider world are full steam ahead with AI, but the following takes (from the article) are the ones that resonate with me. I don't use AI directly in my work for reasons similar to those expressed here[1].

For the record, I'll use it as a better web search or as an intro to a set of ideas or a topic. But I no longer use it to generate code or solutions.

1. https://nikomatsakis.github.io/rust-project-perspectives-on-...

ysleepy•24m ago
I enjoyed reading these perspectives; they are reasoned and insightful.

I'm undecided about my stance on gen AI in code. We can't just look at the first-order, immediate effects; we also have to consider the social, architectural, power, and responsibility aspects.

For another area, prose, literature, emails, I am firm in my rejection of gen AI. I read to connect with other humans, the price of admission is spending the time.

For code, I am not as certain; nowadays I don't regularly see it as artwork or human expression. It is a technical artifact where craftsmanship can be visible.

Will gen AI be the equivalent of a compiler and in 20 years everyone depends on their proprietary compiler/IDE company?

Can it even advance beyond patterns/approaches that we have built until then?

I have many more questions and few answers and both embracing and rejecting feels foolish.

tracerbulletx•16m ago
I'm worried about a few big companies owning the means of production for software and tightening the screws.
kvirani•10m ago
This is my immediate concern as well. Sam said in an interview that he sees "intelligence" as a utility that companies like OpenAI would own and rent out.
arcanemachiner•2m ago
Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it, since the end product ("intelligence") can be swapped out with little concern over who is providing it.