
Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•3m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•4m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•9m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
1•mooreds•9m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•10m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•12m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•16m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•18m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•18m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•19m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•21m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•21m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•22m ago•0 comments

Show HN: RMA Dashboard fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•22m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•27m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•29m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•30m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•31m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•32m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•32m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•34m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•35m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•35m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•36m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•37m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•39m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•39m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•40m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•41m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
2•mooreds•41m ago•0 comments

CodeMender: an AI agent for code security

https://deepmind.google/discover/blog/introducing-codemender-an-ai-agent-for-code-security/
199•ravenical•4mo ago

Comments

blibble•4mo ago
what an annoying page

pointless videos, without enough time to read the code

sobiolite•4mo ago
I wonder if we're going to end up in an arms race between AIs masquerading as contributors (and security researchers) trying to introduce vulnerabilities into popular libraries, and AIs trying to detect and fix them.
sublinear•4mo ago
Why would it be like that instead of the way we already handle low-trust environments?

Projects that get a lot of attention already put up barriers to new contributions, and the ones that get less attention will continue to get less attention.

The review process cannot be left to AI because it will introduce uncertainty nobody wants to be held responsible for.

If anything, the people who have always seen code as a mere means to an end will finally come to a forced decision: either stop fucking around or get out of the way.

An adversarial web is ultimately good for software quality, but less open than it used to be. I'm not even sure if that's a bad thing.

sobiolite•4mo ago
What I'm suggesting is: what if AIs get so good at crafting vulnerable (but apparently innocent) code that human review cannot reliably catch it?

And saying "ones that get less attention will continue to get less attention" is like imagining that only popular email addresses get spammed. Once malice is automated, everyone gets attention.

courseofaction•4mo ago
Significantly easier to detect than create? Not quite NP, but intuitively an AI which can create such an exploit could also detect it.

The economics is more about how much the defender is willing to spend in advance protection vs the expected value of a security failure
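That cost-benefit framing can be sketched as a toy expected-value check (the numbers below are purely hypothetical, chosen only to illustrate the comparison):

```python
# Toy expected-value comparison: spend on advance protection only when it
# costs less than the expected loss from a security failure.
def worth_defending(defense_cost, breach_probability, breach_loss):
    """Return True when up-front defense beats the expected breach cost."""
    expected_loss = breach_probability * breach_loss
    return defense_cost < expected_loss

# Hypothetical numbers: a $50k audit vs. a 10% chance of a $1M incident.
print(worth_defending(50_000, 0.10, 1_000_000))  # expected loss $100k -> True
print(worth_defending(50_000, 0.01, 1_000_000))  # expected loss $10k -> False
```

The same defender math applies whether the attacker is human or automated; what automation changes is the breach probability, not the structure of the decision.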

cookiengineer•4mo ago
I think the issue I have with this argument is that it's not a logical conclusion that's based on technological choice.

It's an argument about affordability and the economics behind it, which puts more burden on the (open source) supply chain which is already stressed to its limit. Maintainers simply don't have the money to keep up with foreign state actors. Heck, they don't even have money for food at this point, and have to work another job to be able to do open source in their free time.

I know there are exceptions, but they are veeeery marginal. The norm is: open source is unpaid, tedious, and hard work to do. It will get harder if you just look at the sheer amount of slopcode pull requests that plague a lot of projects already.

The trend is likely going to be more blocked pull requests by default rather than having to read and evaluate each of them.

torginus•4mo ago
If you are doing security of all things - why wouldn't you verify the provenance of your tooling and libs?
zb3•4mo ago
DeepMind = not available for use
esafak•4mo ago
It's lost its charm.
mmaunder•4mo ago
Can we just flag this since it’s not actually a thing available to anyone?
sigmar•4mo ago
4.5 million lines of code for one fix is impressive for an LLM agent, but there's so little detail in this post otherwise. Perhaps this is a tease to what will be released on Thursday...
wrs•4mo ago
That's how I read it at first too, but I think the more probable interpretation is that it was a fix to a project that has 4.5M lines of code.
sigmar•4mo ago
oh, that would definitely make more sense.
narmiouh•4mo ago
Not a fan of future products being announced as if they are here when they are basically still in the "internal research" stage. I'm not sure who this really helps, except to create unnecessary anticipation; we all know we're in this loop lately of "yes, it works great, but".
bgwalter•4mo ago
So it is a secret tool, they will "gradually reach out to interested maintainers of critical open source projects with CodeMender-generated patches", then they "hope to release CodeMender as a tool that can be used by all software developers".

Why is everything in "AI" shrouded in mystery, hidden behind $200 monthly payments, and wrapped in glossy announcements? Just release the damn thing and let us test it. You know, like the software we write and that you steal from us.

philipwhiuk•4mo ago
It could instead be used to automate the finding of zero-days.

And $200 payments are probably revenue-neutral against the actual cost of running this stuff.

nickpinkston•4mo ago
I'm optimistic that it's easier to find/solve vulnerabilities via auto pen-testing / patching, and other security measures, than it will be to find/exploit vulnerabilities after - ie defense is easier in an auto-security world.

Does anyone disagree?

This is purely my intuition, but I'm interested in how others are thinking about it.

All this with the mega caveat of this assuming very widespread adoption of these defenses, which we know won't be true and auto-hacking may be rampant for a while.

manquer•4mo ago
In open source codebases, perhaps, if big tech is generous enough to run these tools and generate PRs (if they are welcome) for those issues.

In proprietary/closed source, it depends on the ability to spend the money these tools will end up costing.

As there are more and more vibe-coded apps, there will be more security bugs, because app owners just don't know better or don't care to fix them.

This happened when the rise of WordPress and other CMSes and their plugin ecosystems, or languages like early PHP (or, for that matter, even C), opened up software development to wider communities.

On average we will see more issues not less.

courseofaction•4mo ago
I've also thought this for scam perpetration vs mitigation. An AI listening to grandma's call would surely detect most confidence or pig butchering scams (or suggest how to verify), and be able to cast doubt on the caller's intentions or inform a trusted relative before the scammer can build up rapport. Security and surveillance concerns notwithstanding.
Joel_Mckay•4mo ago
In general, most modern vulnerabilities are initially identified with fuzzing systems under abnormal conditions. Whether these issues may be consistently exploited can be probabilistic in nature, and thus repeatability with a POC dataset is already difficult.

That being said, most modern exploits are already auto-generated through brute force, as nothing more complex is required.
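The fuzzing loop described above can be sketched in a few lines; this is a toy illustration (both the deliberately buggy parser and the fuzz harness are hypothetical), showing how random inputs surface parser failures under abnormal conditions:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Deliberately buggy parser: trusts the length byte without a bounds check."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")  # the failure a fuzzer hunts for
    return payload

def fuzz(target, runs=1000, seed=0):
    """Throw short random byte strings at the target and collect failing inputs."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(blob)
        except ValueError as exc:
            failures.append((blob, str(exc)))
    return failures

crashes = fuzz(parse_length_prefixed)
print(f"{len(crashes)} failing inputs out of 1000 runs")
```

Real fuzzers (AFL++, libFuzzer) add coverage guidance and input mutation on top of this basic loop, which is what makes them find the deeper, exploitable cases.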

>Does anyone disagree?

CVE agents already pose a serious threat vector in and of themselves.

1. Models can't currently be made inherently trustworthy, and the people claiming otherwise are selling something.

"Sleeper Agents in Large Language Models - Computerphile"

https://www.youtube.com/watch?v=wL22URoMZjo

2. LLMs can negatively impact logical function in human users. However, people feel 20% more productive, and that makes their contributed work dangerous.

3. People are already bad at reconciling their instincts and rational evaluation. Adding additional logical impairments is not wise:

https://www.youtube.com/watch?v=-Pc3IuVNuO0

4. Auto-merging vulnerabilities into open source is already a concern, as it falls into the ambiguous "malicious sabotage" or "incompetent noob" classifications. How do we know someone's or some model's intent? We can't, and thus the codebase could turn into an incoherent mess for human readers.

Mitigating risk:

i. Offline agents should only have read-access to advise on identified problem patterns.

ii. Code should never be cut-and-pasted, but rather evaluated for its meaning.

iii. Assume a system is already compromised, and consider how to handle the situation. In this line of reasoning, the policy choices should become clear.

Best of luck, =3

closeparen•4mo ago
If you can compromise an employee desktop and put a too-cheap-to-meter intelligence equivalent to a medium-skilled software developer in there to handcraft an attack on whatever internal applications they have access to, it's kind of over. This kind of stuff isn’t normally hardened against custom or creative attacks. Cybersecurity rests on bot attacks having known signatures, and sophisticated human attackers having better things to do with their time.
squigz•4mo ago
Why not put a more powerful agent in there to handcraft defences?
NitpickLawyer•4mo ago
> I'm optimistic that it's easier to find/solve vulnerabilities via auto pen-testing / patching, and other security measures, than it will be to find/exploit vulnerabilities after - ie defense is easier in an auto-security world.

I somewhat share the feeling that this is where it's going, but I'm not sure fixing will be easier. In "meatbag" red vs. blue teams, red has it easier: they only have to be right once, while blue has to be right every time.

I do imagine something adversarial being the new standard, though. We'll have red vs blue agents that constantly work on owning the other side.

dotancohen•4mo ago
In many small companies (e.g. startups), the attackers are far more experienced and skilled than are the defenders. For attacking specific targets, they also have the leisure of choosing the timing of the attack - maybe the CTO just boarded a four hour flight?
Yoric•4mo ago
Does anybody know how such LLMs are trained/fine-tuned?
summarity•4mo ago
If you want to get reliable automated fixes today, I'd encourage you to enable code scanning on your repo. It's free for open-source repos and includes Copilot Autofix (also for free).

We've already seen more than 100,000 fixes applied with Autofix in the last 6 months, and we're constantly improving it. It's powered by CodeQL, our deterministic and in-depth static analysis engine, which also recently gained support for Rust.

To enable it, go to your repo -> Security -> Code scanning.

Read more about how autofix works here: https://docs.github.com/en/code-security/code-scanning/manag...
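For repos that prefer configuring this in-tree rather than through the UI, enabling CodeQL scanning roughly amounts to committing a workflow like the following sketch (the branch name and `languages` value are assumptions for an example Python repo, not part of the comment above):

```yaml
# .github/workflows/codeql.yml
name: "CodeQL"
on:
  push:
    branches: [main]
  pull_request:
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # required to upload scanning results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: python
      - uses: github/codeql-action/analyze@v3
```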

And stay tuned for GitHub Universe in a few weeks for other relevant announcements ;).

Disclaimer: I'm the Product lead on detection & remediation engines at GitHub

inemesitaffia•4mo ago
Please tell your people about 2FA SMS delivery issues to certain West African countries. I'd rather have it via email or have the option of WhatsApp

I was fine before 2FA and I'm willing to pay to go without. Same username

Can't scan my code if I can't access my account

philipwhiuk•4mo ago
If this is released publicly it will be immediately used to find zero-days in software by black hats.
sitkack•4mo ago
Remember kids! Everything is dual use.