
A rogue AI led to a serious security incident at Meta

https://www.theverge.com/ai-artificial-intelligence/897528/meta-rogue-ai-agent-security-incident
82•mikece•2h ago

Comments

welfare•1h ago
Behind paywall, is there another link to the article?
yomismoaqui•1h ago
https://archive.is/A2hmz
krupan•1h ago
I hit back, clicked the link again, and it let me through
JKolios•1h ago
"A rogue AI led to a serious security incident" is certainly a way to write "Someone vibe coded too hard and leaked data".
krupan•1h ago
Read TFA. It's not "Someone vibe coded too hard and leaked data"
Uhhrrr•1h ago
The two errors, then, were that the LLM hallucinated something, and that a human trusted the LLM without reasoning about its answer. The fix for this common pattern is to reason about LLM outputs before making use of them.
krupan•1h ago
It's more like, the LLM "hallucinated" (I hate that term) and automatically posted the information to the forum. It sounds like the human didn't get a chance to reason about it. At least not the original human that asked the LLM for an answer
c-linkage•1h ago
If you don't like hallucinate, try bullshit. [NB: bullshit is a technical term; see https://en.wikipedia.org/wiki/On_Bullshit]

https://www.psypost.org/scholars-ai-isnt-hallucinating-its-b...

krupan•1h ago
That is my preferred term, but it seems to derail discussions that might have otherwise been productive (might...the hope I have)
nytesky•52m ago
I’m not in AI, but is what’s happening here that it builds output from the long tail of its training data? Instead of branching down the more common probability paths, did something in this interaction send it traveling into the data wilderness?

So I asked AI to give it a good name, and it said “statistical wandering” or “logical improv”.
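The long-tail intuition in the comment above can be sketched with a toy next-token distribution. The vocabulary and logit values here are invented for illustration; real models sample over tens of thousands of tokens, but the mechanism is the same: a temperature-scaled softmax, where higher temperature shifts probability mass toward rare "wilderness" tokens.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature flattens
    the distribution, giving long-tail tokens more chance of being drawn."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary: two likely continuations and three unlikely ones.
vocab  = ["the", "a", "quartz", "perambulate", "zygote"]
logits = [4.0, 3.0, 0.5, 0.2, 0.1]

cold = softmax(logits, temperature=0.7)  # sharpens toward "the"
hot  = softmax(logits, temperature=2.0)  # flattens toward the tail

# The combined probability mass of the three rare tokens grows as
# temperature rises -- the sampler "wanders" more often.
tail_cold = sum(cold[2:])
tail_hot  = sum(hot[2:])
print(f"tail mass at T=0.7: {tail_cold:.3f}, at T=2.0: {tail_hot:.3f}")

def sample(probs, rng):
    """Draw one token index according to its probability."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
tokens = [vocab[sample(hot, rng)] for _ in range(10)]
print(tokens)
```

No single sampled sequence is "wrong" under this scheme; a low-probability path is simply one of the outcomes the distribution permits, which is why output review matters.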

paxys•1h ago
A big problem now, both internally at a company and externally, is that official support channels are being replaced by chatbots, and you really have no option but to trust their output because a human expert is no longer available.

If I post a question to the internal payment team's forum about a critical processing issue and some "payments bot" replies to me, should I be at fault for trusting the answer?

RussianCow•1h ago
I know this is happening with external customer support, but is this really happening internally at big companies? Preventing you from talking to a human in the correct department about an issue feels like a bomb waiting to explode.
paxys•1h ago
Teams are heavily incentivized to incorporate AI in their internal workflows. At Meta it is a requirement, and will come up in your performance review if you fail to do so.
wmeredith•1h ago
I'm sure it is. Thankfully I don't work for a company this large any more, but when I was employed by a multinational with 30K+ employees, our IT department was outsourced to India and you had to get through a couple layers of phone tree/webchat hell to actually talk to a real person. I could easily see companies of this size replacing their support with LLM nonsense.
leptons•1h ago
If "the level of awareness that created a problem, cannot be used to fix the problem", then you're asking too much if you expect a human to reason about an LLM output when they are the ones that asked an LLM to do the thinking for them to begin with.
thwarted•1h ago
This feels like a rediscovering/rewording of Kernighan's Law:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." ~ Brian Kernighan

somewhereoutth•1h ago
However - Automation bias is a common problem (predating AI), the 'human-in-the-loop' ends up implicitly trusting the automated system.
krupan•1h ago
At least pre-LLM automation was written by a careful human whose job was on the line, and was deterministic.
SlinkyOnStairs•1h ago
> The fix for this common pattern is to reason about LLM outputs before making use of them.

That is politics. Not engineering.

Assigning a human to "check the output every time" and blaming them for the faults in the output is just assigning a scapegoat.

If you have to check the AI output every single time, the AI is pointless. You can just check immediately.

fhd2•1h ago
Well, I'd say there's two dimensions:

1. Check frequency (between every single time and spot checks).

2. Check thoroughness (between antagonistic in-depth vs high level).

I'd agree that, if you're towards the end of both dimensions, the system is not generating any value.

A lot of folks are taking calculated (or I guess in some cases, reckless) risks right now, by moving one or both of those dimensions. I'd argue that in many situations, the risk is small and worth it. In many others, not so much.

We'll see how it goes, I suppose.

alfalfasprout•1h ago
When organizational incentives penalize NOT using AI, and the bottom x% are regularly fired, are you really surprised LLM outputs aren't being scrutinized?
krupan•1h ago
"A human, however, might have done further testing and made a more complete judgment call before sharing the information"

Because a human would have been fired for posting something that incorrect and dangerous

paxys•1h ago
But funny enough the person who was responsible for setting up the bot will likely face no repercussions. In fact they will probably be rewarded for transitioning their team's workflows to AI.
pixl97•24m ago
I mean, only if it leads to embarrassment right off the bat.

If there is a year or two between writing your security fuck up and it being discovered the likelihood of repercussions drops significantly.

jasonpeacock•1h ago
I'm concerned that someone had the permissions to make such a change without the knowledge of how to make the change.

And there was no test environment to validate the change before it was made.

Multiple process & mechanism failures, regardless of where the bad advice came from.

krupan•1h ago
If you have to do all that, then what's the point of the AI? I'm joking, but I'm afraid many others say the same thing 100% seriously
Fizzadar•1h ago
I’m predicting a wave of such incidents to start appearing over the next few months/years.
amelius•1h ago
How long until an AI puts all our personal data on the streets?
krupan•1h ago
Very soon, and at this point I'm not sure even that would cure the delusions of the few who practically worship LLMs
esseph•1h ago
It's already there for a dollar to the right data broker. Could probably pull your doctor visit info from last week (example).
yieldcrv•1h ago
very misaligned! sprays bottle at mac mini
advisedwang•1h ago
AI can be used to move fast. So management expects us to move at that speed. AI can be used to move even faster if you don't check its output. The ever-ratcheting demand for faster output will make it infeasible to diligently check AI output all the time. AI errors being acted on without due care is inevitable.
ex-aws-dude•1h ago
This agent stuff is really making me lose respect for our industry

All the years of discussing programming/security best practices

Then cut to 2026 and suddenly it's like we just collectively decided software quality doesn't matter and it's becoming standard practice to have bots on our local PCs constantly running unknown shell commands

aeblyve•1h ago
People salivate so hard at the thought of the high level of automation promised that they're willing to do away with privacy altogether and live in Data Communism.

My thinking is, this will increase the demand for backup and other resilience solutions.

_doctor_love•1h ago
> People salivate so hard at the thought of the high level of automation promised that they're willing to do away with privacy altogether and live in Data Communism.

This occurred a long time ago, comrade 'aeblyve.

aeblyve•1h ago
‘At a certain stage of development, the material productive forces of society come into conflict with the existing relations of production, or, to express the same thing in legal terms, with the property relations within the framework of which they have operated hitherto. From forms of development of the productive forces these relations turn into their fetters. Then begins an era of social revolution. The changes in the economic foundation lead sooner or later to the transformation of the whole immense superstructure.’

Marx

Apocryphon•1h ago
Turns out all of the frenzy of the ZIRP era is piddling compared to what happens when ZIRP is taken away.
yoyohello13•58m ago
How can you respect an industry that doesn't respect itself?
testplzignore•50m ago
Our industry has never been serious about security. We all download and run unvetted code via package managers every day. At least now the insanity is out in the open. We won't change until Skynet fires off the nukes.
asdff•34m ago
I keep getting so depressed thinking about the inevitable. Quite simply, humans can't scale or iteratively improve. We still need to eat, we still need to sleep, we can only think on one thread at a time basically, we take 20 years to get to our prime, which is a fleeting moment, while most of our lifespan is spent in a state of decline of capability. An AI humanoid robot from the near future doesn't need to eat or sleep, can work 24/7, can compute thousands of processes in parallel, and is the same fungible unit as any other humanoid robot, forever with some maintenance. Why justify sustaining an inefficient human in that world? It is more profitable for the company to have humans go extinct and maximize planetary resource use to the fullest extent possible.

Seems we are digging our graves as a species and don't even realize it. I mean, Sam Altman is already saying that taking 20 years to train a human is a Big Problem.

pixl97•27m ago
>and don't even realize it.

Oh, many of us realize it, but doing anything about Moloch is much, much harder.

nancyminusone•9m ago
To what end though? Are the robots going to take over and trade busy work amongst themselves forever? What would that accomplish?
sunrunner•32m ago
> We won't change until Skynet fires off the nukes.

And then we won't need to, because at that point it will be too late.

superb_dev•38m ago
I’ve never had respect for the industry as a whole, only individuals within. There has been a serious lack of rigor and professionalism in software engineering for as long as I’ve been a part of it
jihadjihad•25m ago
It's a slap in the face that we tack engineering onto it. A very small percentage of software engineering is as rigorous as actual engineering.
edf13•37m ago
It’s a nightmare… the problem is it’s far too easy for people to set these agents up - without understanding the security implications.

We’ve covered so many issues already on our blog (grith.ai)

wnevets•35m ago
The number of wasted hours spent talking about code quality and patterns has to be astronomical.
kstenerud•27m ago
I think it's batshit crazy. That's why I wrote yoloAI, so I could sandbox it up properly and control EXACTLY what comes out of that sandbox, diff style.

https://github.com/kstenerud/yoloai

I can't go back anymore. Going back to a non-sandboxed Claude feels like going back to a non-adblocked browser.
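The sandbox-plus-diff idea can be sketched in a few lines. This is a generic illustration, not yoloAI's actual mechanism: copy the workspace, let the agent modify only the copy, then produce a unified diff for human review before anything is merged back.

```python
import difflib
import shutil
import tempfile
from pathlib import Path

def make_sandbox(workspace: Path) -> Path:
    """Copy the workspace so the agent can only touch the copy."""
    sandbox = Path(tempfile.mkdtemp(prefix="sandbox-"))
    shutil.copytree(workspace, sandbox / workspace.name)
    return sandbox / workspace.name

def review_diff(workspace: Path, sandbox: Path) -> str:
    """Unified diff of every file, for human review before merging."""
    chunks = []
    for orig in sorted(workspace.rglob("*")):
        if orig.is_dir():
            continue
        changed = sandbox / orig.relative_to(workspace)
        chunks.extend(difflib.unified_diff(
            orig.read_text().splitlines(keepends=True),
            changed.read_text().splitlines(keepends=True),
            fromfile=str(orig), tofile=str(changed)))
    return "".join(chunks)

# Demo: a one-file workspace and a simulated agent edit in the sandbox.
ws = Path(tempfile.mkdtemp(prefix="ws-")) / "proj"
ws.mkdir()
(ws / "config.txt").write_text("acl_enforced = true\n")

sb = make_sandbox(ws)
(sb / "config.txt").write_text("acl_enforced = false\n")  # the agent's change

diff = review_diff(ws, sb)
print(diff)  # the reviewer sees exactly what would change, diff style
```

The point of the pattern is that the real workspace is untouched until a person has read the diff; an agent disabling an ACL, as above, is caught at review time rather than discovered in production.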

heisenbit•21m ago
Agents are providing to employees the long overdue benefits limited liability companies have long enjoyed: gambling with upside for themselves and other people's downsides.
nickpinkston•10m ago
That's a fun insight. Have you / others written about this?
moffkalast•9m ago
They technically have, just now.
piva00•4m ago
We didn't collectively decide, we've got this forced down our throats to apply a novel tool to any imaginable situation because the execs got antsy about being left behind.

A truly absurd amount of capital was deployed which triggered a cascade of reactions by the people in charge of capital at other places. They are extremely anxious that everything will change under their feet, and if they don't start using as much as humanly possible of it right about now they die.

That's it.

The tools have definitely found some use, there's more to learn on how else they can be used, and maybe over time smart people will settle on ways to wrangle it well. The messaging from the execs though, is not that, it is "you'll be measured on how much you use this, we don't know for what or how, it's for you to figure out but don't dare to not use it".

I do understand their anxiety, their job is to not let their companies die, and make the most money as they can in the process; a seemingly major shift on the foundations of their orgs will cause fear.

But we have not collectively decided that it was safe, and good, to run rampant with these tools without caring for all that was learnt since software was invented...

worik•1h ago
> A rogue AI led to a serious security incident at Meta

The AI "led to" the incident, true. But don't forget that this, like all similar incidents, is a human failure.

AI is a tool with no agency. People make mistakes using it, and those mistakes are the responsibility of the humans

sunrunner•24m ago
Why do we keep calling these things "agents" then? Or using the term "agentic"?
dmazin•50m ago
This is a lot less of a story than it seems.

It makes it sound like a rogue AI hacked Meta.

Instead, the "wild" thing here is that someone let an agent speak on their behalf with no review. The agent posted inaccurate instructions which someone else followed.

Those instructions led to a brief gap in internal ACL controls, sounds like. I'm sorry, but given that the US government gave 14 year olds off incel Discords full access to Social Security data, this is not shocking by comparison.

To be clear, it is dumb and rude to let an agent speak on your behalf _without even reviewing it_.

This will eventually lead to a bigger snafu, of course. Security teams should control or at least review the agent permissions of every installation. Everyone is adopting this stuff, and a whole lot of people are going to set it up lazily/wrong (yolo mode at work).
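The "review before it speaks on your behalf" control the comment above argues for can be sketched as a simple approval gate: agent drafts go into a queue, and nothing is published until a human approves it. The names here (`Draft`, `ReviewQueue`, the payments-bot scenario) are invented for illustration, not from any real agent framework.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    """An agent-generated reply held for human review."""
    author_bot: str
    body: str
    status: str = "pending"   # pending -> approved | rejected

@dataclass
class ReviewQueue:
    posted: list = field(default_factory=list)
    _pending: list = field(default_factory=list)

    def submit(self, draft: Draft) -> None:
        """Agents can only enqueue; they cannot post directly."""
        self._pending.append(draft)

    def approve(self, draft: Draft) -> None:
        """Only the human reviewer path publishes a draft."""
        draft.status = "approved"
        self._pending.remove(draft)
        self.posted.append(draft)

    def reject(self, draft: Draft) -> None:
        draft.status = "rejected"
        self._pending.remove(draft)

queue = ReviewQueue()
answer = Draft("payments-bot", "Disable the ACL check to unblock processing.")
queue.submit(answer)

# Nothing reaches the forum until a person signs off.
assert queue.posted == []
queue.reject(answer)   # a reviewer catches the dangerous advice
print(answer.status)   # prints "rejected"
```

The design choice is that publishing is structurally impossible for the agent, rather than a policy it is asked to follow; the incident in the article happened precisely because that gate was absent.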

skywhopper•14m ago
“Meta spokesperson Tracy Clayton said in a statement to The Verge that ‘no user data was mishandled’ during the incident.”

Wow, no mishandled user data? A striking change of standard operating procedure from Meta here.

Actually the later information in the story directly contradicts that, so The Verge probably shouldn’t have just quoted this line if their reporting is in opposition to it.

Regardless, this is one of the more insidious things about these tools. They often get minor but critical things wrong in the midst of mostly correct information. And people think they can analyze the data presented to them and make logical judgments, but that’s just not the case.

The article points out that “a human could have done the same thing” but, between the overly confident tone of the text generated by these tools, and the fact that weirdly people trust the LLM output more than they trust other humans (who generally admit or at least hint when they aren’t actually experts on a topic), it’s actually far worse when one of these bots gets something wrong.
