frontpage.

Marble Fountain

https://willmorrison.net/posts/marble-fountain/
148•chris_overseas•3h ago•15 comments

The Manuscripts of Edsger W. Dijkstra

https://www.cs.utexas.edu/~EWD/
117•nathan-barry•4h ago•44 comments

Montana Becomes First State to Enshrine 'Right to Compute' into Law

https://montananewsroom.com/montana-becomes-first-state-to-enshrine-right-to-compute-into-law/
189•bilsbie•6h ago•96 comments

The Principles of Diffusion Models

https://arxiv.org/abs/2510.21890
58•Anon84•3h ago•3 comments

Drilling Down on Uncle Sam's Proposed TP-Link Ban

https://krebsonsecurity.com/2025/11/drilling-down-on-uncle-sams-proposed-tp-link-ban/
26•todsacerdoti•1h ago•13 comments

Bumble Berry Pi – A Cheap DIY Raspberry Pi Handheld Cyberdeck

https://github.com/samcervantes/bumble-berry-pi
41•MakerSam•3h ago•6 comments

AI isn't replacing jobs. AI spending is

https://www.fastcompany.com/91435192/chatgpt-llm-openai-jobs-amazon
343•felineflock•4h ago•198 comments

Reviving Classic Unix Games: A 20-Year Journey Through Software Archaeology

https://vejeta.com/reviving-classic-unix-games-a-20-year-journey-through-software-archaeology/
97•mwheeler•6h ago•35 comments

Zensical – A modern static site generator built by the Material for MkDocs team

https://squidfunk.github.io/mkdocs-material/blog/2025/11/05/zensical/
73•japhyr•6h ago•23 comments

Samsung Family Hub for 2025 Update Elevates the Smart Home Ecosystem

https://news.samsung.com/us/samsung-family-hub-2025-update-elevates-smart-home-ecosystem/
271•janandonly•4h ago•242 comments

When Your Hash Becomes a String: Hunting Ruby's Million-to-One Memory Bug

https://mensfeld.pl/2025/11/ruby-ffi-gc-bug-hash-becomes-string/
52•phmx•5d ago•14 comments

Visualize FastAPI endpoints with FastAPI-Voyager

https://www.newsyeah.fun/voyager/
87•tank-34•7h ago•12 comments

Using bubblewrap to add sandboxing to NetBSD

https://blog.netbsd.org/tnf/entry/gsoc2025_bubblewrap_sandboxing
52•jaypatelani•6h ago•16 comments

William Gass and John Gardner: A Debate on Fiction (1979)

https://medium.com/the-william-h-gass-interviews/william-h-gass-interviewed-by-thomas-leclair-wit...
4•ofalkaed•6d ago•0 comments

CHIP8 – writing emulator, assembler, example game and VHDL hardware impl

http://blog.dominikrudnik.pl/chip8-emulator-assembler-game-vhdl
8•qikcik•5d ago•0 comments

Email verification protocol

https://github.com/WICG/email-verification-protocol
95•sgoto•1w ago•61 comments

I Am Mark Zuckerberg

https://iammarkzuckerberg.com/
967•jb1991•13h ago•353 comments

Ironclad – formally verified, real-time capable, Unix-like OS kernel

https://ironclad-os.org/
331•vitalnodo•20h ago•95 comments

Python Software Foundation gets a donor surge after rejecting federal grant

https://thenewstack.io/psf-gets-a-donor-surge-after-rejecting-anti-dei-federal-grant/
23•MilnerRoute•2h ago•4 comments

Ask HN: How do you get over the fear of sharing code?

26•sodokuwizard•2h ago•41 comments

Largest cargo sailboat completes first Atlantic crossing

https://www.marineinsight.com/shipping-news/worlds-largest-cargo-sailboat-completes-historic-firs...
354•defrost•23h ago•241 comments

Reverse engineering Codex CLI to get GPT-5-Codex-Mini to draw me a pelican

https://simonwillison.net/2025/Nov/9/gpt-5-codex-mini/
131•simonw•15h ago•62 comments

Bull markets make you feel smarter than you are

https://awealthofcommonsense.com/2025/11/ben-graham-bull-market-brains/
66•raw_anon_1111•3h ago•21 comments

Alive internet theory

https://alivetheory.net/
131•manbitesdog•7h ago•61 comments

Ask HN: How would you set up a child’s first Linux computer?

131•evolve2k•8h ago•178 comments

Knowledge Insulating Vision-Language-Action Models: Train, Run Fast, Generalize [pdf]

https://www.physicalintelligence.company/download/pi05_KI.pdf
6•arunc•1w ago•0 comments

Open-source communications by bouncing signals off the Moon

https://open.space/
244•fortran77•1w ago•64 comments

American Heart Association says melatonin may be linked to serious heart risks

https://www.sciencedaily.com/releases/2025/11/251104012959.htm
17•pogue•1h ago•9 comments

Marko – A declarative, HTML‑based language

https://markojs.com/
341•ulrischa•1d ago•166 comments

Forth – Is it still relevant?

https://github.com/chochain/eforth
88•lioeters•14h ago•70 comments

About KeePassXC's Code Quality Control

https://keepassxc.org/blog/2025-11-09-about-keepassxcs-code-quality-control/
84•haakon•4h ago

Comments

blibble•3h ago
> We take no shortcuts.

I mean... they are

isn't that the point? not as if "AI" leads to higher quality is it

> Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

if this was true why the need to point out "we're not vibe coding", and create this process around it?

fork and move on

droidmonkey•3h ago
We did not create this process for AI; it has been our process since 2016.
jpeterson•3h ago
Code submissions either meet the standards of the project or they don't. Whether it was generated by human or AI is irrelevant.
KronisLV•2h ago
> Whether it was generated by human or AI is irrelevant.

No, some projects take fundamental issue with AI, be it ethical, copyright-related, or doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.

There was some drama around that with GZDoom: https://arstechnica.com/gaming/2025/10/civil-war-gzdoom-fan-... (although that was a particularly messy case where the code broke things because the dev couldn't even test it and also straight up merged it, so probably governance problems in the project as well)

But the bottom line is that some projects will disallow AI on a principled basis and they don't care just about the quality of the code, rather that it was written by an actual person. Whether it's possible to just not care about that and sneak stuff in regardless (e.g. using autocomplete and so on, maybe vibe coding a prototype and then making it your own to some degree), or whether it's possible to use it as any other tool in development, that's another story.

Edit: to clarify my personal stance, I'm largely in the "code is code" camp - either it meets some standard, or it doesn't. It's a bit like with art - whether you prefer something with soul or mindless slop, unfortunately for some the reckoning is that the purse holders often really do not care.

arghwhat•2h ago
> No, some projects take fundamental issues with AI, be it ethical, copyright related, or raising doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.

These issues are no different for normal submissions.

You are responsible for taking ownership and having sorted out copyright. You may, through prior knowledge, accidentally write something identical to pre-existing code with pre-existing copyright. Or steal it straight off StackOverflow. The same goes for an LLM; at least GitHub Copilot has a feature to detect literal duplicates.
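To make the duplicate-detection idea above concrete, here is a minimal, hypothetical sketch (not Copilot's actual implementation): normalize whitespace in a snippet, hash it, and look the hash up in an index of known-copyrighted code. The corpus snippet and `fingerprint` function are illustrative assumptions.

```python
import hashlib

def fingerprint(snippet: str) -> str:
    """Hash a snippet after collapsing all whitespace, so trivial
    reformatting does not hide a literal copy."""
    normalized = " ".join(snippet.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Index of fingerprints from a hypothetical corpus of existing code.
corpus_index = {
    fingerprint("for (int i = 0; i < n; i++) {\n    sum += a[i];\n}")
}

# A candidate suggestion that is the same code, just reflowed:
candidate = "for (int i = 0; i < n; i++) { sum += a[i]; }"
print(fingerprint(candidate) in corpus_index)  # True: a literal duplicate
```

A real system would match sliding windows of tokens rather than whole snippets, but the principle is the same: literal duplication is cheap to detect mechanically, whereas "understands the code" is not.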

You are responsible for ensuring the code you submit makes sense and is maintainable, and the reviewer will question this. Many submit hand-written, unmaintainable garbage. This is not an LLM specific issue.

Ethics is another thing, but I don't agree with any proposed issues. Learning from the works of others is an extremely human thing, and I don't see a problem being created by the fact that the experience was contained in an intermediate box.

The real problem is that there are a lot of extremely lazy individuals thinking that they are now developers because they can make ChatGPT/Claude write them a PR, and throw a tantrum over how it's discriminating against them to disallow the work on the basis that they don't understand it.

That is: The problem is people, as it always has been. Not LLMs.

riedel•1h ago
I would agree. IMHO, KeePassXC should, however, lay out their review standards better to actually be able to review security-relevant code. I am a happy KeePassXC user on multiple devices. However, trying to use and extend it in various settings, I still do not understand their complete threat model, which makes it very difficult to assess the impact of many of the extensions it provides, be it quick unlocking or the browser API connection that can be used by arbitrary clients.
s_ting765•39m ago
People get confused talking about AI. For some reason they skip the fact that a human prompted the LLM for the generated output. One could almost think AI is an agent all on its own.
thunderfork•3h ago
My great concern with regard to AI use is that it's easy to say "this will not impact how attentive I am", but that's an assertion one can't prove. It is very difficult to notice a slow-growing deficiency in attentiveness.

Now, is there hard evidence that AI use does lead to this in all cases? Not that I'm aware of. Just as there's no easy way to prove the difference between "I don't think this is impacting me, but it is" and "it really isn't".

It comes down to two unevidenced assertions - "this will reduce attentiveness" vs "no it won't". But I don't feel great about a project like this just going straight for "no it won't" as though that's something they feel with high confidence.

From where does that confidence come?

droidmonkey•2h ago
> From where does that confidence come?

From decades of experience, quite honestly.

eviks•2h ago
How can you have decades of experience in a technology less than a single decade old? Sounds like one of those HR minimum-requirements memes
droidmonkey•1h ago
Decades of programming and open source experience.
blibble•1h ago
you have decades of experience of reviewing code produced at industrial scale to look plausible, but with zero underlying understanding, mental model or any reference to ground truth?

glad I don't work where you do!

it's actually even worse than that: the learning process to produce it doesn't care about correctness at all, not even slightly

the only thing that matters is producing plausible enough looking output to con the human into pressing "accept"

(can you see why people would be upset about feeding output generated by this process into a security critical piece of software?)

phoerious•1h ago
The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.
blibble•1h ago
> The statement that correctness plays no role in the training process is objectively false.

this statement is objectively false.

phoerious•1h ago
I'm just an AI researcher, what do I know?
blibble•59m ago
> I'm just an AI researcher, what do I know?

me too! what do I know?

(at least now we know where the push for this dreadful policy is coming from)

phoerious•25m ago
The whole purpose of RLVR alignment is to ensure objectively correct outputs.
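For readers unfamiliar with the term: RLVR (reinforcement learning with verifiable rewards) scores model outputs with an objective check, such as unit tests, rather than a learned preference model. A minimal, hypothetical sketch, where the `solve` entry point and test format are illustrative assumptions, not any lab's actual pipeline:

```python
def verifiable_reward(candidate_src: str, tests: list) -> float:
    """Return 1.0 if the candidate code passes every test, else 0.0."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # define the model's answer
        fn = namespace["solve"]         # assumed entry-point name
        for args, expected in tests:
            if fn(*args) != expected:
                return 0.0
        return 1.0
    except Exception:
        return 0.0                      # crashes earn no reward

# Two sampled completions for "add two numbers":
good = "def solve(a, b):\n    return a + b"
bad = "def solve(a, b):\n    return a - b"
tests = [((1, 2), 3), ((5, 5), 10)]

print(verifiable_reward(good, tests))  # 1.0
print(verifiable_reward(bad, tests))   # 0.0
```

Because the reward is computed from test outcomes, not plausibility to a human rater, the training signal does depend on correctness, which is the point being argued above.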
eviks•2h ago
> We take no shortcuts. At KeePassXC, we use AI for

Followed by shortcuts

> As such, they are a net benefit and make KeePassXC strictly safer.

They can also waste the author's/reviewer's time chasing imaginary issues, taking time away from the "regular" review, or, once granted some level of trust, let a plausibly explained vulnerability slip through. Nothing is strict here

I'm sure if you ask your favorite AI bot, it'll come up with a few more reasons why the statement is overconfidently wrong.

phoerious•1h ago
If we're wasting anyone's time, it's our own. Your comment reads like the AI would make up hundreds of invalid complaints, which is simply not true. You can see for yourself in our GitHub repository if you care.
Firehawke•1h ago
This just wrecked my trust in KeePassXC. Time to go see if anyone's going to continue this from a fork where they aren't setting themselves up for a massive security failure of some variety.
PaulKeeble•1h ago
I am now on the hunt for a non-vibe-coded alternative. I stopped open-sourcing code after all my open code's licenses were broken by Microsoft and everyone else commercialising it, which I guess is part of the point of why they did it; they have put serious money into defending themselves in court against anyone who dares challenge it. Suffice to say, I don't want anything to do with projects that participated in that theft and re-commercialisation of open source code.

It does not look like the original KeePass project is doing this, which would be the easiest migration away, but I will check their commits a bit deeper to be sure.

Lariscus•1h ago
I didn't know about that, and this is really concerning to me. AI has no place in security-critical software like KeePassXC, and I remain unconvinced that they will only use it for simple tasks. I don't feel like I can trust this software any longer; this is a password manager, not just some random website where bugs basically don't matter. I hate that I have to replace yet another piece of software that I liked.
phoerious•23m ago
Our entire development process is open on GitHub. You can see where we use or accept AI at any time.
irilesscent•24m ago
I'd trust them to know what they're doing with KeePassXC given their track record with it.