HIV infection reprogrammes CD4 T cells for quiescence and entry into latency

https://www.nature.com/articles/s41564-025-02128-y
1•PaulHoule•1m ago•0 comments

Tente, a Spanish Lego Competitor in the 70s

https://en.wikipedia.org/wiki/Tente_(toy)
1•wslh•2m ago•0 comments

Physics and Engineering of orbital data centers

https://twitter.com/Andercot/status/1981465914400002550
1•measurablefunc•4m ago•0 comments

Building a programming language using SQLite's VM – Pt 1

https://el-yawd.github.io/blog/2025/carrot/
1•sebg•5m ago•0 comments

United Kingdom Gov Reuse Library (Web Components)

https://digitalgovernmenthub.org/library/united-kingdom-gov-reuse-library/
1•rmason•9m ago•0 comments

How much of individual success is attributable to free time?

https://greyenlightenment.com/2025/01/28/how-much-of-individual-success-due-to-having-free-time/
1•paulpauper•9m ago•0 comments

Triton Developer Conference 2025 Talks [video]

https://www.youtube.com/watch?v=s30WoZ7lx3w
1•matt_d•9m ago•0 comments

White House List of Donors for President Trump's $300M Ballroom

https://www.nytimes.com/2025/10/23/us/politics/trump-ballroom-donors-list.html
2•reaperducer•10m ago•0 comments

X-ray glasses? All the 'mind-boggling' allegations in the FBI NBA gambling probe

https://www.cbc.ca/news/world/nba-gambling-probe-9.6950164
4•Jocund•11m ago•0 comments

Show HN: I built Kumi – a typed, array-oriented dataflow compiler in Ruby

https://kumi-play-web.fly.dev/
2•goldenCeasar•12m ago•0 comments

About Writing

https://www.ssp.sh/brain/writing/
3•articsputnik•13m ago•0 comments

Show HN: FlowLens – MCP server for debugging with Claude Code

https://magentic.ai/flowlens/
1•mzidan101•16m ago•0 comments

Guillermo del Toro: 'I'd rather die' than use generative AI

https://www.npr.org/2025/10/23/nx-s1-5577963/guillermo-del-toro-frankenstein
2•cdata•23m ago•0 comments

Why /dev/null Is an ACID-Compliant Database

https://jyu.dev/blog/why-dev-null-is-an-acid-compliant-database/
3•swills•24m ago•0 comments

Show HN: Hacker News sans AI content

https://tokyo-synth-1243_4mn1lfqabzpz.vibesdiy.app/
1•neom•24m ago•0 comments

Migrating to LibreWolf

https://ratfactor.com/cards/librewolf
1•Curiositry•24m ago•0 comments

When is it better to think without words?

https://www.henrikkarlsson.xyz/p/wordless-thought
2•Curiositry•25m ago•0 comments

Meaningful Work

https://fastpaced.com/articles/meaningful-work/
1•davnn•25m ago•0 comments

Poland's birth rate is in freefall. The cause? A loneliness epidemic

https://www.theguardian.com/commentisfree/2025/oct/23/polands-birth-rate-is-in-freefall-the-cause...
6•amichail•28m ago•1 comment

My gf thinks videogames cannot be art. What to show her?

2•Gerard0•29m ago•7 comments

Show HN: New Version 3.1.0 of Hmpl

https://github.com/hmpl-language/hmpl/releases/tag/3.1.0
1•aanthonymax•29m ago•0 comments

Nexperia's China unit resumes chip sales to domestic distributors

https://www.reuters.com/world/china/nexperias-china-unit-resumes-chip-sales-domestic-distributors...
1•markus_zhang•30m ago•0 comments

A React Native Implementation of the U.S. Web Design System

https://usmds.blencorp.com/
1•mjheadd•33m ago•0 comments

SVG in GTK

https://blogs.gnome.org/gtk/2025/10/23/svg-in-gtk/
1•atilimcetin•34m ago•0 comments

Netflix Earnings, KPop Demon Hunters and Netflix Hit Production

https://stratechery.com/2025/netflix-earnings-kpop-demon-hunters-and-netflix-hit-production/
1•feross•34m ago•0 comments

Performance optimizations for storing web server access logs in ClickHouse

https://clickhouse.com/blog/log-compression-170x
1•krizhanovsky•37m ago•1 comment

The Boring Co. breaks ground in Nashville for 'exploratory borings'

https://www.usatoday.com
2•geox•38m ago•0 comments

The World Is a Cloud: A Reader's Guide to David Chapman

https://maxlangenkamp.substack.com/p/the-world-is-a-cloud
1•yichab0d•38m ago•0 comments

Docker overrides UFW rules

https://github.com/chaifeng/ufw-docker
1•chwonl•40m ago•1 comment

We only have one life. Let's stop wasting it on YouTube shorts

https://github.com/CaptainYouz/FocusTube
3•youz•40m ago•0 comments

Armed police swarm student after AI mistakes bag of Doritos for a weapon

https://www.dexerto.com/entertainment/armed-police-swarm-student-after-ai-mistakes-bag-of-doritos-for-a-weapon-3273512/
287•antongribok•3h ago

Comments

tencentshill•3h ago
"rapid human verification." at gunpoint. The Torment Nexus has nothing on these AI startups.
palmotea•2h ago
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
drak0n1c•2h ago
The dispatcher relaying the alert and the responding officers should at least have ready access to a screen showing the raw footage that triggered the AI alert. If it's a false alarm, they can see that and react accordingly; if it's a real threat, they'll better understand the initial context and who may have been involved.
ggreer•2h ago
According to a news article[1], a human did review the video/image and flagged it as a false positive. It was the principal who told the school cop, who then called other cops:

> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.

What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?

1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...

wat10000•2h ago
Sounds like a "better safe than sorry" approach. If you ignore the alert on the basis that it's a false positive, and then it turns out it really was a gun and the person shoots somebody, you're going to get sued into the ground, fired, your name plastered all over the media, etc. On the other hand, if you call in the cops and there wasn't a gun, you're fine.
spankibalt•1h ago
> "On the other hand, if you call in the cops and there wasn't a gun, you're fine."

Yeah, cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle's got "your" name on it.

wat10000•26m ago
Good luck suing somebody for calling the police.
mothballed•6m ago
For reports on child welfare, it is often illegal to release the name of the tipster. That's commonly taken advantage of by disgruntled exes or in custody disputes.
Zigurd•1h ago
Ask a black teenager about being fine.
Etheryte•2h ago
Next up, a captcha that verifies you're not a robot by swatting you and checking at gunpoint.
duxup•3h ago
>Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.

/s

Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".

cranberryturkey•3h ago
Gestapo
walkabout•3h ago
We got our cyberpunk future, except none of it's cool and everything's extremely stupid.
duxup•3h ago
They could at least have thrown in some good music and cute girls with colored hair to make us feel better :(
irilesscent•2h ago
You get Grok Lady for the latter.
forgetfulness•2h ago
I’ve got great news for you: there are more girls with colored hair than ever before, and we got the Synthwave revival, just try to find the right crowd and put on Timecop1983 in your headphones

Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking

surgical_fire•2h ago
> there are more girls with colored hair than ever before

The ones I see don't tend to lean cute.

forgetfulness•2h ago
Well the "hackers" jacking in to the "Hacker News" discussion board, where we talk about the oppression brought in by the corrupt AI-peddling corporations employed by the even more corrupt government, probably aren't all looking like Zero Cool, Snake Plissken, Officer K, or the like, though a bunch may be.
JKCalhoun•2h ago
Pretty sure cyberpunk was always this dark.
walkabout•2h ago
Dark, yes, but also cool, and with a fair amount of competence in play, including among powerful actors.

We got dark, but also lame and stupid.

surgical_fire•2h ago
The AI singularity will happen, but with Mother Brain as a complete moron. It will extinguish humans not as part of a grand plan for machines to take over, but by making horrible mistakes while trying to make things better.
mrguyorama•1h ago
If any of you had actually paid attention to the source media, you would have noticed that they were explicitly dystopias. They were always clearly and explicitly hell for normal people trying to live life.

Meanwhile, tons of you watched Star Trek and apparently learned(?) that the "bright future" it promised us was.... talking computers? And not, you know, post-scarcity and enlightenment that allowed people to focus on the things that brought them joy or that they were good at, and the entire elimination of the concepts of "capitalism", personal profit, and resource disparity that could leave people unable to afford something while some asshole in the right place at the right time gets to take a percentage cut of the entire economy for their personal use.

The primary "technology" of Star Trek was socialism lol.

walkabout•53m ago
Oh of course they were dystopias. But at least they were cool and there was a fair amount of competence floating around.

My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.

> The primary "technology" of star trek was socialism lol.

Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)

cookiengineer•41m ago
I recommend rewatching the trilogy of Brazil, 12 Monkeys and Zero Theorem.

It's sadly the exact future that we are already starting to live in.

stockresearcher•2h ago
The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.
dredmorbius•2h ago
The only thing that can stop a bag guy with Doritos is ...
rolph•3h ago
Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

Prioritize your own safety by not attending any location fitted with such a system, or deemed such a dangerous environment that such a system is desired.

The AI "swatted" someone.

etothet•2h ago
The corporate version of "It's a feature, not a bug."
bilbo0s•2h ago
Calling it today. This company is going to get innocent kids killed.

How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?

First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities kept using it anyway.

This is a really bad idea right now. The technology is just not there yet.

mothballed•2h ago
And then there are plenty of bullies who might put a sticker with a picture of a gun on someone's back, knowing it will set off the image recognition. It's only a matter of time until they figure that out.
withinboredom•21m ago
When I was a kid, we made rubber-band guns all the time. I’m sure that would set it off too.
mrguyorama•1h ago
>First time it happens, there will be an explosion of protests.

Why do you believe this? In the US, cops will cower outside a school while an armed gunman actively murders children, forcibly detain parents who want to go in when the cops won't, and then voters will re-elect everyone involved.

In the US, an entire segment of the population will send you death threats claiming you are part of some grand (Democrat, of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, Republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.

Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.

Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.

Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.

nyeah•26m ago
Clearly it did not prioritize human safety.
ben_w•3h ago
Memories of Jean Charles de Menezes come to mind: https://en.wikipedia.org/wiki/Killing_of_Jean_Charles_de_Men...
Gibbon1•2h ago
That was my first thought as well. The worry is that police officers make mistakes, which leads to hapless people getting terrorized, harmed, or killed. The bad thing about AI is that it'll allow police to escape responsibility. Where a human who realizes they made a mistake can admit it and everything is okay, AI won't walk that back. AI said he had a gun. But when we checked, he didn't have it anymore.

In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.

dredmorbius•2h ago
And Amadou Diallo:

<https://en.wikipedia.org/wiki/Killing_of_Amadou_Diallo>

proee•3h ago
Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - there is nothing in them, but the scanner doesn't like them.

Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.

There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.

How does this not spiral out of control?

walkabout•3h ago
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.

(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)

hinkley•1h ago
I was getting pulled out of line in the '90s for having long hair. I didn't dress in shitty clothes or fancy ones, and I didn't look funny; it was just the hair, which got regular compliments from women.

I started looking at people trying to decide who looked juicy to the security folks and getting in line behind them. They can’t harass two people in rapid succession. Or at least not back then.

The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well, and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But her day would probably have been better for not getting searched than mine was.

dheera•3h ago
The TSA scanners also trigger easily on crotch sweat.
hsbauauvhabzb•3h ago
I enjoy a good grope, so I’ll keep that in mind the next time I’m heading into the us.
jason-phillips•3h ago
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
oceanplexian•2h ago
If the system used any kind of logic whatsoever, a CCW permit would not only allow you to bypass airport security but also to carry in the airport (speaking as both a pilot and a permit holder).

That would probably eliminate the need for the TSA security theater, so it will probably never happen.

some_random•2h ago
The point of the security theater is to assuage the 95th-percentile scared-of-everything crowd; they're the same people who want "no guns" signs in public parks.
jerlam•2h ago
That may have been true 25 years ago. All the rules are now mostly an annoyance and don't reassure anyone.

There weren't a lot of people voicing opposition to TSA's ending of the shoes off policy earlier this year.

bediger4000•2h ago
You're right, not a lot of people objected to the TSA ending the no-shoes safety rule, and it's a shame. I certainly objected and tried to make my objections known, but apparently 23 or 24 years of the iconic custom of taking shoes off went to waste because the TSA decided to slack off.
mrguyorama•1h ago
No.

Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.

Several of the hijackers were literally given extended searches by security that day.

A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cockpit door, which was always doable and has never been breached. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be normal, and it could also stop "inside man" style hijackings like the one with a disgruntled FedEx pilot. It was nearly free to implement, is always available, harms no one's rights, doesn't turn airport security into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and law enforcement arm in a new American agency whose goal is suppressing domestic problems and which has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists protected themselves from being stopped, and it's the reason Flight 93 couldn't be recovered.

TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.

mothballed•2h ago
You can carry in the airport in AZ without a permit, in the unsecured areas. I think there was only one brouhaha, because some particularly bold guy did it openly with a rifle (can't remember if there's more to the story).
more_corn•3h ago
Why don’t you pay the bribe and skip the security theater scanner? It’s cheap. Most travel cards reimburse for it too.
proee•2h ago
I'm sure CLEAR is already having giddy discussions about how they can charge you for pre-verified access to walk around in public. We can all wear CLEAR-certified dog tags so the cops can hassle the non-dog-tagged people.
malux85•2h ago
Speak up, citizens!

Email your state congressman and tell them what you think.

Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.

Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease its effectiveness, but even past that point it's better than where we are now: silent public apathy.

anigbrowl•2h ago
If that's the case, why do people in Congress keep voting for things their constituents don't like? When they get booed at town halls they just dismiss it as being the work of paid activists.
actionfromafar•1h ago
Yeah, Republicans hide from townhalls. Most of them have one constituent, Trump.
mpeg•2h ago
To be fair, at least you can choose not to wear the cargo pants.

A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side so we could walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...

stavros•2h ago
How is it fair to say that? That's some "why did you make me hurt you"-level justification.
mpeg•2h ago
No, it's not.

I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.

Yes, in an ideal world we should all travel without all the security theatre, but that's not the world we live in, I can't change the way airports work, but I can wear clothes that make it faster, I can put my liquids in a clear bag, I can have bags that make it easy to take out electronics, etc. Those things I can control.

But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.

stavros•1h ago
That's true, if you're saying "I can at least avoid being assaulted by the shitty system", I just want to point out that it is a shitty system.
mpeg•1h ago
I fully agree with you on that, it is a shitty system :)
franktankbank•2h ago
> guess his ethnicity...

Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.

JustExAWS•2h ago
Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lie down on the ground, where one false move will get you shot by a trigger-happy cop who probably won't face any consequences - especially if the innocent victim is a Black male.

In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.

proee•2h ago
I wasn't implying TSA-cargo-pant-groping is comparable. My point is to show escalation in public facing systems. We have been dealing with TSA. Now we get AI Scanners. What's next?

Also, no need to escalate this into a race issue.

JustExAWS•2h ago
Yes, because I'm sure that if a White female had been flagged by AI as carrying a gun, it would have been handled the same way.
proee•2h ago
You have no evidence to suggest this, just bias. Unless you know the AI algorithm, it's a pointless discussion that only causes strife and conjecture.
JustExAWS•2h ago
It’s not the AI algorithm, it’s the police response I’m questioning would be different.
proee•2h ago
How many police-audit videos have you seen on YouTube? There is an insufferable number of "white" people getting destroyed by the cops. If you replaced the "white" people in these videos with "black" ones, 99% of viewers would assume the cops are hardcore racists, when in fact they are just bad cops - very bad cops with some deep psychological issues, probably rooted in a traumatic childhood.
JustExAWS•1h ago
https://www.sentencingproject.org/reports/one-in-five-dispar...

https://scholar.harvard.edu/files/fryer/files/empirical_anal...

https://www.prisonpolicy.org/blog/2022/12/22/policing_survey...

hinkley•1h ago
But it was a black man they harassed.
leptons•3h ago
This is only the beginning of AI-hallucinated policing. Not a good start, and I don't think it's going to end well for citizens.
4ndrewl•3h ago
"end well for citizens."

That ship has long sailed buddy.

throwaway173738•2h ago
Yeah ask all those citizens getting “detained” by ICE how it worked out for them.
phkahler•3h ago
>> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

No. If you're investigating someone and have existing reason to believe they're armed, then this kind of false positive might be prioritizing safety. But in general surveillance of a public place, IMHO you need to prioritize accuracy, since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, though you should be catching the blatantly obvious ones at scale.

blueflow•2h ago
The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over - this is the kind of stuff that terminates the social contract. I'd want a new state that protects me from such things.
Havoc•3h ago
>the system “functioned as intended,”

Behold - a real-life example of a "Not a hotdog" system, except this one is gun / not-a-gun.

Except the fictional one from the series was more accurate...

idontwantthis•3h ago
If these AI video-based gun detectors are not a massive fraud, I will eat one.

How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?

VTimofeenko•2h ago
The brochure linked from TFA has a screenshot of a combination of segmentation and object-recognition models, which are fairly standard in NVRs. A quick skim of the vendor website seems to confirm this[1]; they also claim they are not analyzing gait.

[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...

walkabout•2h ago
The whole idea, even granting that the core premise is OK to begin with, needs the same kind of analysis applied to it that medical tests get: will there be enough false positives, causing enough harm, that this is actually worse than doing nothing? Weighed against the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.

Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place, it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that it is. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good the system does even when it's not false-positiving.

So I'm sure that analysis was either deliberately never performed, or was performed and then ignored and not publicized. So, yes, it's a fraud.

(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools: shooters would just avoid loitering with a firearm where the cameras can see them and count on starting things very soon after arriving. Once you factor in second-order effects, there's just no hope of this standing up to real scrutiny.)
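
To make the medical-test analogy above concrete, here is a minimal back-of-envelope sketch in Python. Every rate in it is a hypothetical assumption chosen for illustration, not a vendor figure:

    def ppv(sensitivity: float, false_positive_rate: float, base_rate: float) -> float:
        # Positive predictive value, P(real gun | alert), via Bayes' rule.
        true_alerts = sensitivity * base_rate
        false_alerts = false_positive_rate * (1 - base_rate)
        return true_alerts / (true_alerts + false_alerts)

    # Assume 1 frame in 10 million actually shows a gun, the detector
    # catches 95% of real guns, and it false-alarms on 1 frame in 100,000.
    print(ppv(sensitivity=0.95, false_positive_rate=1e-5, base_rate=1e-7))
    # ~0.0094: fewer than 1 alert in 100 would involve a real gun.

Under those assumptions, a detector that sounds excellent on paper still produces over 99 false alarms for every real detection, purely because the event it is looking for is so rare.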

15155•31m ago
The real issue is that they obviously can't detect what's in a backpack or similar large vessel.

Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.

gdulli•3h ago
The only way we could have foreseen this was immediately.
AuthAuth•3h ago
What is happening in the world? There should be some liability for this, but nothing will happen.
SoftTalker•3h ago
Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.
wat10000•2h ago
A settlement paid by the taxpayers with no impact at all on anyone actually responsible.
tamimio•2h ago
Law enforcement officers, judicial officials, social workers, and the like generally maintain qualified immunity from liability in the course of their work. Take this case, for example, in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting otherwise. The child was ultimately placed in the mother's care and was later killed execution-style (not due to negligence).

https://www.youtube.com/watch?v=wzybp0G1hFE

eastbound•2h ago
Not applicable - as a society we've chosen countless times to favour the right of the mother to keep children above the rights of other humans. Most children are killed in the home of the mother (i.e. either by the mother, or where a different custody choice would have avoided it while the father was available), or even worse, as in the Anders Breivik situation (father available, with a stable job and prospects in life, but custody refused; the child grew up a mass murderer).
StopDisinfo910•2h ago
The world is doing fairly OK, thank you. The US, however, I'm not so sure about, as people here are apparently more concerned with the AI malfunction than with the idea that it's somehow sensible to live-monitor high schools for gun threats.
JustExAWS•2h ago
So you're okay with trigger-happy cops forcing a teenager to the ground because he had a bag of Doritos?
StopDisinfo910•2h ago
No, I think it's crazy that people somehow think it's rational to video-monitor kids and worry that they have actual firearms.

I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.

The cop thing is just icing on the cake.

AuthAuth•53m ago
It's not just the US. China runs the same level of surveillance, and it's being implemented throughout Europe, Africa, and Asia. This is becoming the norm.
j45•3h ago
Sad for the student.

Imagine the head-scratching going on among execs who are surprised when probabilistic software used for deterministic purposes fails, without realizing that by its nature there's a gap between the two.

doublerabbit•2h ago
> Imagine the head-scratching going on among execs

I can't. The execs won't care, and will probably, in their sadistic way, cheer.

j45•47m ago
Fair. Only a matter of time until it's big enough that it can't be avoided.
6stringmerc•3h ago
Feed the same system an image of an Asian kid and it will think the bag of chips is a calculator /s

Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...

AndrewKemendo•3h ago
Using humans for training guarantees bad outcomes, because humans cannot demonstrate sociality at the same scale as antisociality.
mentalgear•3h ago
Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
seanhunter•3h ago
The article is about Omnilert, not Palantir, but don't let the facts get in the way of your soapbox rant.
mzajc•2h ago
Same fallible systems, same end goal of mass surveillance.
MiiMe19•2h ago
I might be missing something, but I don't think this article isn't about Palantir or any of their products
yifanl•2h ago
You're absolutely right, Palantir just needs a different name and then they'd have no issues.
joomla199•1h ago
This comment has a double negative, which makes it a false positive.
macintux•3h ago
I think the most amazing part is that the school doubled down on the mistake by parroting the corporate line.

I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”

JKCalhoun•2h ago
Lawyer's advice?
macintux•2h ago
I would think "no comment" would be safer/smarter than "yeah, your kids are at risk of being shot by police by attending our school, deal with it".
JKCalhoun•2h ago
Good point.
malux85•3h ago
Poor kid, and what an incompetent police department, not to use their own judgement…

But…

Doritos should definitely use this in an advertisement: "Doritos - the only weapon of mass deliciousness," or something like that.

And of course pay the kid, so something positive can come out of the experience for him.

rkomorn•3h ago
"Armed and delicious" ? "Right to bear snacks" ?
tartoran•2h ago
"You'd die for a bag of Doritos"
Dilettante_•9m ago
"Stop resisting...the flavor"
fritzo•3h ago
> “They didn’t apologize. They just told me it was protocol. I was expecting at least somebody to talk to me about it.”

I wonder how effective an apology and explanation would have been? Just some respect.

cool_man_bob•3h ago
Effective at what? No one is facing any consequences anyway.
throwaway173738•2h ago
Except for the kids who experienced the “rapid human verification” firsthand.
mothballed•2h ago
Not a bad point, but a fake apology is worse than none.
eastbound•2h ago
Maybe an apology from the AI?
hn_go_brrrrr•2h ago
More's the pity. The school district could use some consequences.
chasd00•47m ago
The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.
hsbauauvhabzb•3h ago
I would certainly be curious to test ethnicity with this system. Will white students with a bag of Doritos be flagged, or only if they're black?
12_throw_away•2h ago
Exactly. I wonder if this is a purpose-built image-recognition system, or a lowest-possible-effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.
more_corn•3h ago
Wait… AI hallucinated and the police overreacted to a black kid who actually posed no threat?

I thought those two things were impossible?

kirykl•2h ago
Wouldn't have thought AI assessment of a security image is enough for probable cause.
nickdothutton•2h ago
"Omnilert Gun Detect delivers instant gun detection, near-zero false positives".
dgacmu•2h ago
If it's analyzing 30 frames per second, it's processing 86,400 × 30 ≈ 2.6 million frames per day per camera. So even when it causes enormous, unnecessary trauma to one student per week, the company can "rightfully" claim it has less than a 1 in 10 million false positive rate.

(* see also "how to lie with statistics").
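
A quick sanity check of that arithmetic (assuming 30 frames per second and one false alarm per camera per week, as the numbers above imply):

    # 30 frames/second x 86,400 seconds/day = frames analyzed per camera per day
    frames_per_day = 30 * 86_400          # 2,592,000 (~2.6 million)
    frames_per_week = frames_per_day * 7  # ~18.1 million
    print(f"{frames_per_day:,} frames/day; one false alarm a week is "
          f"1 in {frames_per_week:,} frames")
    # One traumatized student per week still rounds to a
    # "less than 1 in 10 million" per-frame false positive rate.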

froobius•2h ago
Stuff like this feels like some company has managed to monetize an open source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)

We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?

[1] https://arxiv.org/abs/1506.02640
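
For a sense of how little might be involved, here is a minimal sketch of wrapping an off-the-shelf detector with the open source ultralytics package. The weights file, the "gun" class name, and the confidence threshold are all illustrative assumptions; nothing here is claimed to be Omnilert's actual pipeline:

    # Hypothetical "gun detector" wrapping a stock YOLO model.
    from ultralytics import YOLO

    model = YOLO("gun-detector.pt")  # assumed fine-tuned weights, not a real file

    def frame_alerts(image_path: str, threshold: float = 0.5) -> list[dict]:
        """Return any 'gun' detections above the confidence threshold."""
        alerts = []
        for result in model(image_path):  # run inference on one frame
            for box in result.boxes:
                label = result.names[int(box.cls)]
                confidence = float(box.conf)
                if label == "gun" and confidence >= threshold:
                    alerts.append({"label": label, "confidence": confidence})
        return alerts

    print(frame_alerts("hallway_frame.jpg"))

If the product really is a thin layer like this, the questions above about training data and false positive rates are exactly the right ones to ask.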

EdwardDiego•12m ago
And it feels like they missed the "human in the loop" bit. One day this company is likely to find itself on the receiving end of a wrongful death lawsuit.
jmcgough•2h ago
The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
vezycash•2h ago
With this level of hallucination, cops need to use tranquilizers more. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
throw7•2h ago
If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
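
A minimal sketch of the vetting flow that comment describes, where nothing reaches dispatch until a human confirms the frame (the names and structure are illustrative, not any vendor's actual design):

    from dataclasses import dataclass

    @dataclass
    class Detection:
        camera_id: str
        frame_path: str
        confidence: float

    def human_confirms(d: Detection) -> bool:
        # Stand-in for a trained reviewer looking at the raw frame
        # (and ideally the surrounding video) before anything escalates.
        answer = input(f"Weapon visible in {d.frame_path}? [y/N] ")
        return answer.strip().lower() == "y"

    def handle(d: Detection) -> None:
        if not human_confirms(d):
            print("Logged as a false positive; nothing sent to police.")
            return
        # Forward the image itself, so responders see what the reviewer saw.
        print(f"Confirmed threat on {d.camera_id}: escalating with frame attached.")

    handle(Detection("cam-12", "hallway_frame.jpg", 0.91))
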
shaky-carrousel•2h ago
He could easily have been murdered. It's far from the first time that a bunch of overzealous cops have murdered a kid. I would never ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.
ggreer•2h ago
I think the reason the school bought this silly software is that it's a dangerous school, and they're grasping at straws to try to fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]

1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...

2. https://www.si.com/high-school/maryland/baltimore-county-hig...

3. https://www.wbaltv.com/article/knife-assault-rossville-juven...

4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...

5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...

evanelias•1h ago
That certainly sounds bad, but it's all relative; keep in mind this school is in Baltimore County, which is distinct from the City of Baltimore and has a much different crime profile. This school is in the exact same town as Eastern Tech, literally the top high school in Maryland.
tartoran•2h ago
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"

Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.

akoboldfrying•2h ago
> Make them pay money for false positives instead of direct support and counselling.

Agreed.

> This technology is not ready for production

No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).

Zigurd•1h ago
In the US, cops kill more people than terrorists do. Fine, as long as your quantified values take that into account.
neuralRiot•52m ago
I think I've said this too many times already, but the core problem here, and with the "AI craze" generally, is that nobody really wants to solve problems; what they want is a marketable product. AI seems to be the magic wrench that fits all the nuts, and since most people don't really know how it works or what its limitations are, they happily buy the "magic dust".
dekken_•1h ago
> Make them pay money

It already costs money: the time and resources misappropriated here were paid for.

There need to be resignations, or jail time.

SAI_Peregrinus•48m ago
The taxpayers collectively pay the money, the officers involved don't (except for that small fraction of their income they pay in taxes that increase as a result).
neilv•2h ago
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”

It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.

And some teen may be traumatized. Again, unsafe.

Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.

bilbo0s•2h ago
>And some teen may be traumatized.

Um. That's not really the danger here.

The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.

This tech is not supposed to be used in this fashion. It's not ready.

wat10000•2h ago
I fully agree, but we also really need to get to a place where drawing the attention of police isn't an axiomatically life-threatening situation.
Zigurd•1h ago
If the US wasn't psychotic, not all police would have to be armed, and not every police response would be an armed response.
akoboldfrying•2h ago
> The danger is that it's as clear as day that in the future someone is gonna be killed.

This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.

So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)

GuinansEyebrows•1h ago
> This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.

Huh, I can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.

neilv•59m ago
Did you want to emphasize or clarify the first danger I mentioned?

My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.

When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.

I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.

omnipresent12•2h ago
https://www2.ljworld.com/news/schools/2025/aug/07/lawrence-s...

Another false positive from one of these leading content filters schools use - the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, which claims it never intended its system to be used that way.

These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), which actually provides a paid service where humans review all the alerts before they are forwarded to the school or authorities. It's a paid add-on, though.

random3•1h ago
It's actually "AI swarmed", since no human reasoning, only execution, was exerted - basically an AI directing resources.
actionfromafar•1h ago
Reverse Centaur. MANNA.
janalsncm•1h ago
In any system, there are false positives and false negatives. In some situations (like high-recall disease detection), false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous screening.

But in this case both are bad. If it were a false negative, students might need therapy for a far more tragic reason.

Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
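
One way to make that trade-off explicit is to weight each failure mode by its cost when choosing the alert threshold. A toy sketch, where every number is a made-up assumption:

    # Expected cost per frame = P(false alarm) * its cost + P(missed gun) * its cost.
    def expected_cost(fp_rate, fn_rate, base_rate, cost_fp, cost_fn):
        return (fp_rate * (1 - base_rate) * cost_fp
                + fn_rate * base_rate * cost_fn)

    # A looser threshold false-alarms more but misses less, and vice versa.
    loose = expected_cost(fp_rate=1e-5, fn_rate=0.05, base_rate=1e-7,
                          cost_fp=1.0, cost_fn=1e6)
    strict = expected_cost(fp_rate=1e-7, fn_rate=0.20, base_rate=1e-7,
                           cost_fp=1.0, cost_fn=1e6)
    print(loose, strict)  # which threshold "wins" depends entirely on the assumed costs

The point is not the specific numbers; it's that the cost of a false positive (a kid held at gunpoint) has to appear in the math at all, which a headline accuracy figure never captures.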

nkrisc•27m ago
In this case false positives are far, far worse than false negatives. A false negative in this system does not mean a tragedy will occur, because there are many other preventative measures in place. And never mind the fact that this country refuses to even address the primary cause of gun violence in the first place: the ubiquity of guns in our society. Systems like this are what we end up with when we refuse to address the problem of guns and choose instead to deal with its downstream effects.
lelandfe•27m ago
I was swatted once. Girlfriend's house. Someone called 911 and said they'd seen me kill a neighbor, drag their body into the house, and was now holding my gf's family hostage.

We answered the screams at the door to guns pointed at our faces, and countless cops.

It was explained to us that this was the restrained version. We got a knock.

Unfortunately, I understand why these responses can't be neutered too much. You just never know.

collingreen•23m ago
In this case, though, you COULD know, COULD verify with a human before pointing guns at people, or COULD not deploy a half finished product in a way that prioritizes your profit over public safety.
SoftTalker•19m ago
Do the cops not ever get tired of being fooled like this? Or do they just enjoy the chance to go out in their army-surplus armored cars and pretend to be special forces?
adaml_623•2m ago
This is a really good question. Sadly, the answer is that they think this is how the system is meant to work. At least, that seems to be the answer coming from police spokespeople.
jawns•2h ago
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.

He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.

My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.

But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.

cyanydeez•2h ago
Someday there'll be a lawyer in court telling us how strong the AI evidence was because companies are spending billions of dollars on it.
mothballed•2h ago
Or they'll tell us police have started shooting because an acorn fell, so they shouldn't be expected to be held to higher standards - and the AI is possibly an improvement.
hinkley•1h ago
Is use of force without justification automatically excessive force or is there a gray area?
jawns•56m ago
See https://en.wikipedia.org/wiki/Graham_v._Connor
whycome•2h ago
Can someone write the novel

“Computer says die”

mchannon•2h ago
In 1987, Paul Verhoeven predicted exactly this in the original RoboCop.

ED-209 mistakenly views a young man as armed and blows him away in the corporate boardroom.

The article even included an homage to:

“Dick, I’m very disappointed in you.”

“It’s just a small glitch.”

1970-01-01•1h ago
Very ripe for a lawsuit. I would expect lawyers to be calling daily.
nullbyte808•1h ago
I would get my GED at that point. Screw that school.
satisfice•1h ago
To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.
einrealist•1h ago
Let's hope that, thanks to AI, the young man will now have a healthier diet! /s
gnarlouse•1h ago
And so begins the ending of the "unfinished fable of the sparrows"
programjames•59m ago
The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
BeetleB•47m ago
The "AI mistake" part is a red herring.

The real question is: would this have happened in an upper/middle-class school?

The student has dark skin and is attending a school in a crime-ridden neighborhood.

Were it a white student in a low-crime neighborhood, would they have approached him with guns drawn?

The AI failure is masking the real problem - bad police behavior.

ratelimitsteve•44m ago
The best part of the technocracy is that they're not actually all that good at anything. The second-best part is that when their mistakes end in someone dead, there will be some way that they're not responsible.
neverkn0wsb357•37m ago
It’s unsurprising, since this kind of classification is only as good as the training data.

And police do this kind of stuff all the time (or at the very least, you hear about it a lot if you grew up in a major city).

So if you're gonna automate broken systems, you're going to see a lot more of the same.

I'm not sure what the answer is, but I definitely feel that "security" systems like this, as purchased and rolled out, need to be highly regulated and coupled with extreme accountability and consequences for false positives.

adam12•36m ago
This is what we get instead of reasonable gun control laws.
15155•10m ago
You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.

What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?

I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?

kayge•29m ago
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.

[0] https://www.youtube.com/watch?v=sIAnQwiCpRc

crazygringo•28m ago
It sounds like the police mistook it as well:

> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”

So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.

Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.

The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
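
The multi-frame idea is easy to sketch: escalate only if the detector fires on k of the last n frames, so a single frame with a weird shadow can't trigger a police response. The window size and vote count here are assumptions:

    from collections import deque

    class TemporalGate:
        """Escalate only when enough recent frames agree."""
        def __init__(self, window: int = 30, votes_needed: int = 20):
            self.recent = deque(maxlen=window)
            self.votes_needed = votes_needed

        def update(self, frame_is_positive: bool) -> bool:
            self.recent.append(frame_is_positive)
            return sum(self.recent) >= self.votes_needed

    gate = TemporalGate()
    # A single spurious hit (a Doritos-bag shadow, say) never escalates:
    for hit in [False] * 25 + [True] + [False] * 25:
        assert gate.update(hit) is False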

jmyeet•28m ago
There are two basic ways AI can be used:

1. To enhance human productivity; or

2. To replace humans.

Companies, particularly in the US, very much want to go with (2), and part of the reason they can is that there are zero consequences for incidents like this.

A couple of examples spring to mind:

1. the UK Post Office scandal (the Horizon system), where a faulty system accused postmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false, and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and

2. The Hertz case where people who had returned cars were erroneously flagged as car thieves and report was made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.

Now, these aren't AI, but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check whether the car was actually stolen.

In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated as bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before a criminal complaint is filed.

And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.

We are way too tolerant of black-box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation behind any output of such systems.

zkmon•26m ago
At least there is a check done by humans, in a human way. What if this human check is removed in the future, once AI decisions are deemed to no longer require human inspection?
blindriver•13m ago
How is this not slander? I would absolutely sue the fuck out of a system that puts people's lives in danger.