
Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
1•Keyframe•1m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•1m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
1•valyala•2m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•3m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•4m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
3•randycupertino•6m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•8m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•9m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•10m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•11m ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•11m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•12m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•14m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•15m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•19m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•20m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•20m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•21m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•22m ago•1 comments

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•23m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
2•nicholascarolan•25m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•25m ago•1 comments

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•26m ago•2 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•27m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
6•mindracer•28m ago•0 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•28m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
2•Brajeshwar•28m ago•0 comments

Why Claude Cowork is a math problem Indian IT can't solve

https://restofworld.org/2026/indian-it-ai-stock-crash-claude-cowork/
3•Brajeshwar•28m ago•0 comments

Show HN: Built an space travel calculator with vanilla JavaScript v2

https://www.cosmicodometer.space/
2•captainnemo729•29m ago•0 comments

Why a 175-Year-Old Glassmaker Is Suddenly an AI Superstar

https://www.wsj.com/tech/corning-fiber-optics-ai-e045ba3b
1•Brajeshwar•29m ago•0 comments

About KeePassXC's Code Quality Control

https://keepassxc.org/blog/2025-11-09-about-keepassxcs-code-quality-control/
112•haakon•3mo ago

Comments

blibble•3mo ago
> We take no shortcuts.

I mean... they are

isn't that the point? not as if "AI" leads to higher quality is it

> Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.

if this was true why the need to point out "we're not vibe coding", and create this process around it?

fork and move on

droidmonkey•3mo ago
We did not create this process for AI; it has been our process since 2016.
jpeterson•3mo ago
Code submissions either meet the standards of the project or they don't. Whether it was generated by human or AI is irrelevant.
KronisLV•3mo ago
> Whether it was generated by human or AI is irrelevant.

No, some projects take fundamental issues with AI, be it ethical, copyright related, or raising doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.

There was some drama around that with GZDoom: https://arstechnica.com/gaming/2025/10/civil-war-gzdoom-fan-... (although that was a particularly messy case where the code broke things because the dev couldn't even test it, and it was also straight-up merged; so probably governance problems in the project as well)

But the bottom line is that some projects will disallow AI on a principled basis and they don't care just about the quality of the code, rather that it was written by an actual person. Whether it's possible to just not care about that and sneak stuff in regardless (e.g. using autocomplete and so on, maybe vibe coding a prototype and then making it your own to some degree), or whether it's possible to use it as any other tool in development, that's another story.

Edit: to clarify my personal stance, I'm largely in the "code is code" camp - either it meets some standard, or it doesn't. It's a bit like with art - whether you prefer something with soul or mindless slop, unfortunately for some the reckoning is that the purse holders often really do not care.

arghwhat•3mo ago
> No, some projects take fundamental issues with AI, be it ethical, copyright related, or raising doubts over whether people even understand the code they're submitting and whether it'll be maintainable long term or even work.

These issues are no different for normal submissions.

You are responsible for taking ownership and having sorted out copyright. You may accidentally through prior knowledge write something identical to pre-existing code with pre-existing copyright. Or steal it straight off StackOverflow. Same for an LLM - at least Github Copilot has a feature to detect literal duplicates.

You are responsible for ensuring the code you submit makes sense and is maintainable, and the reviewer will question this. Many submit hand-written, unmaintainable garbage. This is not an LLM specific issue.

Ethics is another thing, but I don't agree with any proposed issues. Learning from the works of others is an extremely human thing, and I don't see a problem being created by the fact that the experience was contained in an intermediate box.

The real problem is that there are a lot of extremely lazy individuals thinking that they are now developers because they can make ChatGPT/Claude write them a PR, and throw a tantrum over how it's discriminating against them to disallow the work on the basis that they don't understand it.

That is: The problem is people, as it always has been. Not LLMs.

riedel•2mo ago
I would agree. IMHO, KeePassXC should, however, lay out their review standards better to actually be able to review security-relevant code. I am a happy KeePassXC user on multiple devices. However, trying to use and extend it in various settings, I still do not understand their complete threat model, which makes it very difficult to understand the impact of many of the extensions it provides, be it quick unlocking or the API connection to browsers that can be used by arbitrary clients.
s_ting765•2mo ago
People get confused talking about AI. For some reason they skip the fact that a human prompted the LLM for the generated output. One could almost think AI is an agent all on its own.
Barrin92•2mo ago
>Whether it was generated by human or AI is irrelevant.

No. These systems are still so mind-bogglingly bad at anything that involves manual memory management and pointers that even entertaining the idea of using them for something as critical as a large, non-trivial C++ codebase, for a password manager no less, is nuts. It displays a lack of concern for security and a propensity for shortcuts, and I don't want to touch anything by people who even remotely consider this appropriate.

phoerious•2mo ago
Then it's good that we're not doing manual memory management.
Sincere6066•2mo ago
It is extremely relevant. I refuse to touch it if it uses AI.
beefnugs•2mo ago
Yes, absolutely relevant, especially in this software's case. There is no requirement for mass amounts of boilerplate code to be written here, just supposedly smart and correct cryptography and as little code as possible to do the job right... so if someone is using AI, that is a huge red flag.

An obvious sign that something is going horribly wrong in this project.

In fact I think this kind of news is enough to garner a huge influx of international hackers all targeting this package now, if they weren't already. They will be looking closely at the supply chain, phishing the hell out of the developers, attempting physical intrusions where they can. It's a hint that the developers might be stressed and making poor decisions, with a huge payoff for infiltrating.

phoerious•2mo ago
Untrue. KeePassXC has large parts of UI boilerplate code and test cases. The cryptographic routines are the smallest part. They are pretty stable and don't change much. It's also not where we would be using AI.
thunderfork•3mo ago
My great concern with regards to AI use is that it's easy to say "this will not impact how attentive I am", but... that's an assertion that one can't prove. It is very difficult to notice a slow-growing deficiency in attentiveness.

Now, is there hard evidence that AI use does lead to this in all cases? Not that I'm aware of. Just as there's no easy way to prove the difference between "I don't think this is impacting me, but it is" and "it really isn't".

It comes down to two unevidenced assertions - "this will reduce attentiveness" vs "no it won't". But I don't feel great about a project like this just going straight for "no it won't" as though that's something they feel with high confidence.

From where does that confidence come?

droidmonkey•3mo ago
> From where does that confidence come?

From decades of experience, quite honestly.

eviks•3mo ago
How can you have decades of experience in a technology less than a single decade old? Sounds like one of those HR minimum-requirements memes.
droidmonkey•2mo ago
Decades of programming and open source experience.
blibble•2mo ago
you have decades of experience of reviewing code produced at industrial scale to look plausible, but with zero underlying understanding, mental model or any reference to ground truth?

glad I don't work where you do!

it's actually even worse than that: the learning process to produce it doesn't care about correctness at all, not even slightly

the only thing that matters is producing plausible enough looking output to con the human into pressing "accept"

(can you see why people would be upset about feeding output generated by this process into a security critical piece of software?)

phoerious•2mo ago
The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.
blibble•2mo ago
> The statement that correctness plays no role in the training process is objectively false.

this statement is objectively false.

phoerious•2mo ago
I'm just an AI researcher, what do I know?
blibble•2mo ago
> I'm just an AI researcher, what do I know?

me too! what do I know?

(at least now we know where the push for this dreadful policy is coming from)

phoerious•2mo ago
The whole purpose of RLVR alignment is to ensure objectively correct outputs.
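For readers unfamiliar with the term: RLVR ("reinforcement learning with verifiable rewards") scores candidate outputs with an objective checker rather than a plausibility judgment. A minimal illustrative sketch, with made-up names and a toy checker; nothing here reflects any actual training pipeline:

```python
def verifiable_reward(candidate_fn, test_cases):
    """Return 1.0 only if the candidate passes every test case, else 0.0.

    The reward comes from an objective verifier (here: input/output tests),
    not from how plausible the code looks.
    """
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0.0
        except Exception:
            return 0.0
    return 1.0

# Two hypothetical model outputs for "add two numbers":
good = lambda a, b: a + b
bad = lambda a, b: a - b   # plausible-looking, but wrong

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
```

The point is only that in this phase of training the reward signal comes from a verifier (unit tests, exact-match answers, proof checkers), so "correctness plays no role" does not hold for that part of the process.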
eviks•3mo ago
> We take no shortcuts. At KeePassXC, we use AI for

Followed by shortcuts

> As such, they are a net benefit and make KeePassXC strictly safer.

They can also waste author's/reviewer's time chasing imaginary ends, taking time away from the "regular" review, or with some level of trust add some plausibly explained vulnerability. Nothing is strict here

I'm sure if you ask your favorite AI bot, it'll come up with a few more reasons why the statement is overconfidently wrong.

phoerious•2mo ago
If we're wasting anyone's time, it's our own. Your comment reads like the AI would make up hundreds of invalid complaints, which is simply not true. You can see for yourself in our GitHub repository if you care.
Firehawke•2mo ago
This just wrecked my trust in KeePassXC. Time to go see if anyone's going to continue this from a fork where they aren't setting themselves up for a massive security failure of some variety.
PaulKeeble•2mo ago
I am now on the hunt for a non-vibe-coded alternative. I stopped open-sourcing code after all my open code's licenses were broken by Microsoft and everyone else commercialising it. Which I guess is part of the point of why they did it, and they have put serious money into defending themselves in court against anyone who dares challenge it. Suffice to say I don't want anything to do with projects that participated in that theft and re-commercialisation of open source code.

It does not look like the original KeePass project is doing this, which would be the easiest migration away, but I will check their commits a bit more deeply to be sure.

AlexErrant•2mo ago
The original Keepass project has 11 CVEs. XC has 3, and has disputed all of them with e.g. "the vendor disputes this because memory-management constraints make this unavoidable in the current design and other realistic designs", etc.
droidmonkey•2mo ago
Additionally, the original KeePass project has no public development or public review process for their code. They do everything behind the scenes and only publish code when a release is made. KeePass is "code available" open source.
Lariscus•2mo ago
I didn't know about that and this is really concerning to me. AI has no place in security critical software like KeePassXC, and I remain unconvinced that they will only use it for simple tasks. I don't feel like I can trust this software any longer this is a password manager not just some random website where bugs basically don't matter. I hate that I have to replace yet another piece of software that I liked.
phoerious•2mo ago
Our entire development process is open on GitHub. You can see where we use or accept AI at any time.
Lariscus•2mo ago
That's all nice but I still don't want slop code in an application as security critical as a password manager. The correct percentage of slop code for a password manager is 0% and it’s pants on head crazy to claim otherwise.

I have dug around a bit and found a Mastodon thread that doesn't inspire confidence [1]. KeePassXC seems completely untrustworthy at this point: not only have they jumped on the AI bandwagon, they also seemingly don't know what a zero-day is. I genuinely liked KeePassXC and used it for years; now I am spending my Sunday evening researching alternatives.

[1] https://fosstodon.org/@2something@transfem.social/1148367097...

irilesscent•2mo ago
I'd trust them to know what they're doing with KeePassXC given their track record with it.
ysleepy•2mo ago
Tell yourself what you want, but this sort of AI positive proclamation will make your project seem less trustworthy to many people.

I choose not to use a vibe coded password manager, rigorous review or not, to protect my entire digital existence, monetary assets and reputation.

It's the pinnacle of safety requirements, memory unsafe language, cryptography, incredibly high stakes.

I have the distinct displeasure of having to review LLM output in pull requests, and unfailingly it contains code the submitter doesn't fully understand.

AlexErrant•2mo ago
Y'know how there's "security theater"? https://en.wikipedia.org/wiki/Security_theater

I think there's an analogous subset: "llm-security theater".

There's so much pearl-clutching, pedantry, and noise from people who are obviously 1) not contributing to KeePassXC AND 2) never would contribute AND 3) are unaware of EXISTING bugs/issues/CVEs with KeePassXC. All they provide are vague abstract arguments from their own experience with LLMs, and they argue with the maintainers of KeePassXC without giving specifics, as though they have the right to tell others how to run their repo when they're unable to link a single concrete problematic issue or PR.

Instead, all they have are "vibes", which is ironic.

0x_rs•2mo ago
There's no way to determine whether a contributor used LLMs in part or full, not without them being honest about it. With that in mind, this seems like a reasonable position. Been using KeePassXC since forever and will continue to do so. It might feel wrong to some, but these changes are inevitable and it's best to be prepared and become acquainted with that now rather than later.
superdisk•2mo ago
> There's no way to determine whether a contributor used LLMs in part or full, not without them being honest about it.

Oh, you can tell.

cadamsdotcom•2mo ago
> we still code ourselves for work and for fun. This will not suddenly go away because we have another tool in our belts.

AI is just another way to write code. At the end of the day code is just text. It still needs to be reviewed - nothing about that is changing.

snowwrestler•2mo ago
I feel like a lot of the comments here do not understand how KeePassXC actually works. It’s a client application that works with a standard encrypted file format. The file format is the basis for security, not the client application.

KeePassXC does not store any data. Nor does it receive connections from the Internet, like a server. Thus the risk is structurally lower than a commercial client-server application like LastPass or 1Password, which is actually in possession of your password data.

I use 1Password at work for its excellent collaboration features and good-enough security. For most people it replaces a post-it note or Excel file. It’s way better than those.

But for my passwords I use KeePass (the file format) and a variety of clients including KeePassXC. This statement about AI won’t change that, unless someone can give me a reason other than vague “AI bad” or “no vibe coding” like most comments so far.
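To make the "file format is the security boundary" point concrete, here is a toy sketch in the same spirit. This is not the real KDBX format; the parameters, layout, and the stand-in cipher are invented for illustration. The idea is that any client which derives the key and authenticates the blob the same way is interchangeable, so trust attaches to the format, not to one particular app:

```python
# Toy "encrypted vault" format: salt || HMAC tag || ciphertext.
# Illustrative only -- the XOR keystream stands in for a real cipher;
# do not use this for actual secrets.
import hashlib
import hmac
import os

def seal(master_password: bytes, plaintext: bytes) -> bytes:
    """Derive a key from the password and produce an authenticated blob."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000)
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    tag = hmac.new(key, salt + ct, "sha256").digest()
    return salt + tag + ct

def open_sealed(master_password: bytes, blob: bytes) -> bytes:
    """Re-derive the key; refuse to decrypt if the tag doesn't verify."""
    salt, tag, ct = blob[:16], blob[16:48], blob[48:]
    key = hashlib.pbkdf2_hmac("sha256", master_password, salt, 600_000)
    expected = hmac.new(key, salt + ct, "sha256").digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong password or tampered file")
    stream = hashlib.sha256(key).digest() * (len(ct) // 32 + 1)
    return bytes(c ^ s for c, s in zip(ct, stream))
```

The real KDBX format uses proper primitives (Argon2 or AES-KDF for key derivation, AES-256 or ChaCha20 for encryption), but the structural point is the same: the vault file defends itself, and the client is just a viewer.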

sharts•2mo ago
I think a lot of folks end up copying their encrypted file to shared storage like Dropbox anyway. This doesn’t seem all that different from using 1pass.
cassianoleal•2mo ago
I can see a few differences.

Pushing Keepass vault to cloud storage:

* No per-item synchronisation

* Full control over encryption of the database

* Choice of cloud storage to trust with vault

* Free as in beer if not using cloud (or using a free/already-paid-for offering)

1Password:

* Per-item sync and collaboration

* Full trust on the (closed-source) client apps over encryption of vault

* No choice of cloud

* No choice of encryption

* Mandatory paying subscription

beefnugs•2mo ago
Sorry but that is nonsense. "The file format is the basis for security, not the client application" is so wrong, any messing with the application is game over.

Hell if you leave your computer unlocked, a rubber-ducky could replace your executable and middleman your master password.

phoerious•2mo ago
There is actually very little we can do about local attackers, with or without AI. All we can do is mitigate.