It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living in "computer said you did it, prove otherwise, at gunpoint".
Cyberpunk always sucked for all but the corrupt corporate, political and military elites, so it’s all tracking
The ones I see don't tend to lean cute.
We got dark, but also lame and stupid.
Meanwhile, tons of you watched Star Trek and apparently learned(?) that the "bright future" it promised us was... talking computers? And not, you know, post-scarcity and enlightenment that allowed people to focus on things that brought them joy or that they were good at, and the entire elimination of the concepts of "capitalism", personal profit, and resource disparity, which let people be unable to afford something while some asshole in the right place at the right time gets to take a percentage cut of the entire economy for their personal use.
The primary "technology" of Star Trek was socialism lol.
My point is exactly that we got the dystopia but also it's not even a little cool, and it's very stupid. We could have at least gotten the cool dystopia where bad things happen but at least they're part of some kind of sensible longer-term plan. What we got practically breaks suspension of disbelief, it's so damn goofy.
> The primary "technology" of Star Trek was socialism lol.
Yep. Socialism, and automatic brainwashing chairs. And sending all the oddball non-conformists off to probably die on alien planets, I guess. (The "Original Series" is pretty weird)
It's sadly the exact future that we are already starting to live in.
Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
The AI "swatted" someone.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
The first time it happens, there will be an explosion of protests. Especially now that the public knows the system wasn't working but the authorities kept using it anyway.
This is a really bad idea right now. The technology is just not there yet.
Why do you believe this? In the US, cops will cower outside a school while a gunman actively murders children, forcibly detain parents who want to go in when the cops won't, and then everyone involved gets re-elected.
In the US, an entire segment of the population will send you death threats claiming you are part of some grand (democrat of course) conspiracy for the crime of being a victim of a school shooting. After every school shooting, republican lawmakers wear an AR-15 pin to work the next day to ensure you know who they care about.
Over 50% of the country blamed the protesting students at Kent State for daring to be murdered by the National Guard.
Cops can shoot people in broad daylight, in the back, with no justification, or reasonable cause, or can even barge into entirely the wrong house and open fire on homeowners exercising their basic right to protect themselves from strangers invading their homes, and as long as the people who die are mostly black, half the country will spout crap like "They died from drugs" or "they once sold a cigarette" or "he stole skittles" or "they looked at my wife wrong" while the cops take selfies reenacting the murder for laughs and talk about how terrified they are by BS "training" that makes them treat every stranger as a wanted murderer armed to the teeth. The leading cause of death for cops is still like heart disease of course.
Trump sent unmarked forces to Portland and abducted people off the street into unmarked vans and I think we still don't really know what happened there? He was re-elected.
The technology literally can NEVER be there. It is completely impossible to positively identify a bulge in clothing as a handgun. But that doesn’t stop irresponsible salesmen from making the claim anyway.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
I started looking at people, trying to decide who looked juicy to the security folks, and getting in line behind them. They can't harass two people in rapid succession. Or at least they couldn't back then.
The one I felt most guilty about, much later, was a Filipino woman with a Philippine passport. Traveling alone. Flying to Asia (super suspect!). I don't know why I thought they would tag her, but they did. I don't fly well, and more stress just escalates things, so anything that makes my day a tiny bit less shitty and isn't rude, I'm going to do. But her day would probably have been better for not getting searched than mine was.
That would probably eliminate the need for the TSA security theater, so it will probably never happen.
There weren't a lot of people voicing opposition to TSA's ending of the shoes off policy earlier this year.
Right from the beginning it was a handout to groups who built the scanning equipment, who were basically personal friends with people in the admin. We paid absurd prices for niche equipment, a lot of which was never even deployed and just sat in storage.
Several of the hijackers were literally given extended searches by security that day.
A reminder that what actually stopped hijackings (like, nearly entirely) was locking the cockpit door, which was always doable, and which has never been breached. Not only did this stop terrorist hijackings, it stopped the more casual hijackings that used to be normal, and it could also stop "inside man" style hijackings like the one with a disgruntled FedEx pilot. It was nearly free to implement, is always available, harms no one's rights, doesn't turn airport security into a juicy bombing target, doesn't slow down an important part of the economy, and doesn't invent a massive bureaucracy and LEO arm of a new American agency whose goal is suppressing domestic problems and which has never done anything useful. Keep in mind, shutting the cockpit door is literally how the terrorists protected themselves from being stopped, and is the reason Flight 93 couldn't be recovered.
TSA is utterly ineffective. They have never stopped an attack, regularly fail their internal audits, the jobs suck, and they pay poorly and provide minimal training.
Not even. It's that they rarely pass the audits. Many of the audits have a 90-95% "missed suspect item/s" result.
Email your state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective approach is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease its effectiveness, but even in that state it's better than where we are now: silent public apathy.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
I have shoes that I know always beep on the airport scanners, so if I choose to wear them I know it might take longer or I might have to embarrassingly take them off and put them on the tray. Or I can not wear them.
Yes, in an ideal world we would all travel without the security theatre, but that's not the world we live in. I can't change the way airports work, but I can wear clothes that make screening faster, I can put my liquids in a clear bag, I can use bags that make it easy to take out electronics, etc. Those things I can control.
But people can't change their skin color, name, or passport (well, not easily), and those are also all things you can get held up in airports for.
Not sure, but I bet they were looking at their feet kinda dusting off the bottoms, making awkward eye contact with the security guard on the other side of the one way door.
In fact, they will probably demonize the victim to find an excuse for why he deserved to get shot.
Also, no need to escalate this into a race issue.
Note: Precheck is incredibly quick and easy to get; GE is time-consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
Sure, can't argue with that. But doesn't it bug you just a little that (paying a fee to avoid harassment) doesn't look all that dissimilar from a protection racket? As to whether it's a few bucks or many, now you're just a mark negotiating the price.
It actually doesn't! Plenty of people never fly at all, and many fly incredibly rarely. The Precheck and GE programs cost money to administer, since they have to do background checks and conduct interviews. This accomplishes actual security goals, since it allows them to flag risky behavior and examine it.
Who benefits from these programs? Primarily heavy travelers (and optimizers like me who value their time saved more than the $24 a year). These programs also actually make everything better for everyone since I'm no longer taking up a space in the slower-moving, shoes-off line, and TSA/CBP get an actual background check done on me.
The way it is now, heavy travelers who can easily afford it, pay the full costs of the program.
Would you rather:
1. Precheck is free and paid for by all taxpayers, even though a lot of people will never bother to enroll (you have to assume so -- the cost is so low today that it can't be the barrier for almost anyone who can afford to fly; it seems a ton of people just can't be bothered to follow simple instructions and go get fingerprinted at Staples)
2. Precheck is eliminated and everyone has to go back to the dumb liquids-out, shoes-off thing
3. Precheck is eliminated and we just treat everyone like the Precheck people today, without doing any background checks. Basically like pre-9/11.
Also, my partner has told me that my armpits apparently sometimes smell of weed or beer, despite my not having come into contact with either of those for a very long time, so now I definitely don't want to get taken into a small room by a TSA person. (After some googling: apparently those smells can be associated with high stress.)
That ship has long sailed buddy.
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
Behold - a real life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
Except the fictional one from the series was more accurate...
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
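To make that concrete, here's a back-of-the-envelope sketch of the base-rate problem; every number in it is invented for illustration, since no real figures are published:

    # Back-of-the-envelope base-rate sketch. All numbers are
    # invented for illustration; the vendor publishes none.
    students = 2000        # people on campus each day
    scans_per_person = 20  # camera passes per person per day
    days = 180             # school days per year
    fp_rate = 1e-4         # generous 1-in-10,000 false positives per scan

    scans = students * scans_per_person * days
    false_alarms = scans * fp_rate
    print(f"{scans:,} scans/year -> {false_alarms:,.0f} false alarms/year")
    # 7,200,000 scans/year -> 720 false alarms/year, against at most
    # a handful of real events. Armed responses to false alarms would
    # dominate real detections by orders of magnitude.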
Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.
What?
This case was a horrifying failure of the entire system that up until that point had fairly decent results for children who end up having to be taken away from their parents and later returned once the Mom/Dad clean up their act.
I think that’s a level of f-ed up which is so far removed from questioning AI that I wonder why people even tolerate it and somehow seem to view the premise as normal.
The cop thing is just icing on the cake.
Imagine the head-scratching that's going on with execs who are surprised things don't work, because probabilistic software is being used for deterministic purposes without anyone realizing there is, by nature, a gap between the two.
I can't. The execs won't care and will probably, in their sadistic way, cheer.
Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...
tl;dr if you want to make a broad point, make the effort to put it in context so people can appreciate it properly.
I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”
> Baltimore County Public Schools echoed the company’s statement in a letter to parents, offering counseling services to students impacted by the incident.
(Emphasis mine)
But ...
Doritos should definitely use this as an advertisement, Doritos - The only weapon of mass deliciousness, or something like that
And of course pay the kid, so something positive can come out of the experience for him.
I wonder how effective an apology and explanation would have been? Just some respect.
I thought those two things were impossible?
(* see also "how to lie with statistics").
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing -- any database will have a percentage of erroneous data; it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will produce erroneous conclusions. It's been observed over and over again, and governments can't help telling themselves "this time it will be different because <fill in the blank>", e.g. AI.
1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...
2. https://www.si.com/high-school/maryland/baltimore-county-hig...
3. https://www.wbaltv.com/article/knife-assault-rossville-juven...
4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...
5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...
I skimmed through all the articles linked in GP and found them pretty relevant to whatever decision might have been made to deploy the AI system (which is not at all to comment on how badly the bad tip was acted on).
Hailing from and still living in N. California, you could tell me that this school is located in Beverly Hills or Melrose Place, and it would still strike me as a piece of trivia. If anything, it'd just be ironic?
So my point was that while the list of incidents is definitely not great, it's still way less severe than many inner-city schools in Baltimore. And honestly these same types of incidents happen at many "safe" large suburban high schools in "nice" areas throughout the US... generally less often than at this particular school, but not an order-of-magnitude difference.
Basically, I'm saying that GP's assertion of it being a "dangerous school" is entirely relative to what you're comparing to. There are much worse schools in that metro area.
I also know several high school teachers and the worst things they've complained about are disruptive/stupid students, not violence. And my friends who are parents would never send their kids to a school that had incidents like the ones I linked to. I think this sort of violence is limited to a small fraction of schools/districts.
No, definitely not. I went to a decently-well-ranked suburban school district, and still witnessed violent incidents... no weapon used, but still multiple cases where the victim got a concussion. And there were arrests, a gun found in a kid's locker, etc. This stuff was unfortunately relatively normal, at least in the 90s. Not quite as often as at the school in the article, but still.
There were fights, but no one was ever harmed with a weapon to my memory.
The crime stats seem fine to me. In a city like Baltimore, the numbers you've presented are shockingly low. When I was going through school, it was quite common for bullies to rob kids... even on campus. Teachers pretty much never did anything about it.
[0] Maybe the guy is a rapist, and maybe he isn't. If he is, that's godawful and I hope he goes to jail and gets his shit straight.
Make them pay money for false positives, instead of just offering support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't consider that a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
In your example, if terrorists were the only people killing people in the US, and police (a) were the only means of stopping them, and (b) did not benefit society in any other way, the equation would be simple: get rid of the police. There wouldn't need to be any value judgements, because everything cancels out. But in practice it's not that easy, since the vast majority of killings in the US occur at the hands of people who are neither police nor terrorists, and police play a role in reducing those killings too.
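As a toy illustration of that point (all weights below are subjective and invented), the trade-off is at least computable once real rates exist:

    # Toy expected-value comparison. All weights are subjective
    # and invented; the point is only that the trade-off can be
    # computed once real rates are published.
    p_shooting_prevented = 0.01   # expected true positives per year
    false_alarms = 50             # expected false armed responses per year
    value_prevention = 1_000_000  # subjective value of preventing one shooting
    cost_false_alarm = 5_000      # subjective cost of one wrongful gunpoint stop

    net = p_shooting_prevented * value_prevention - false_alarms * cost_false_alarm
    print(f"net value per year: {net:,.0f}")
    # net value per year: -240,000 -- negative under these assumptions,
    # positive under others; that's exactly why the real rates matter.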
I agree, but this isn't specific to AI -- it's what makes capitalism work to the extent that it does. Nobody wants to clean a toilet or harvest wheat or do most things that society needs someone to do, they want to get paid.
It already cost money: the time and resources that were misappropriated.
There needs to be resignations, or jail time.
Decision-maker accountability is the only thing that halts bad decision-making.
This assumes no human verification of the flagged video. Maybe the bag DID look like a gun. We'll never know, because modern journalism has no interest in such things. They obtained the required emotional quotes and moved on.
Those journalists, just trying to get (unjustified, dude, unjustified!!) emotes from kids being mistakenly held at gun point, boy they are terrible.... They're just covering up how necessary those mistakes are in our pursuit of teh crime...
Are you suggesting it's not?
Nobody saw a gun. We know this because there was no gun.
(Ok, ok, with thin, skin-tight, light-colored pants, maybe -- maybe -- it could work. But if it mistook a crumpled-up Doritos bag as a gun, clearly that was not the case here.)
That quote sorta suggests that the police got the alert, looked at the photo, and was like "yeah, that could be a gun, let's go".
Still dumb.
The superintendent approved a system they 100% knew would hallucinate guns on students. You assert that if the superintendent had required a human in the loop before calling the police, the superintendent would be absolved of deploying that system on students.
You are wrong. The superintendent is the person who decided to deploy a system that would lead to swatting kids and they knew it before they spent taxpayer dollars on that system.
The superintendent also knew that there is no way a school administrator is going to reliably NOT dial SWAT when the AI hallucinates a gun. No administrator is going to err on the side of "I did not see an actual firearm so everything is great even though the AI warned me that it exists." Human-in-the-loop is completely useless in this scenario. And the superintendent knows that too.
In your lifetime, there will be more dead kids from bad AI/police combinations than <some cause of death we all think is too high>. We are not close to safely betting lives on it, but people will do it immediately anyway.
Maybe, but that won't stop the kind of people that watch cable news from saying "if it stops one crime" or "if it saves one life".
If there’s no feedback mechanism, verification doesn’t matter.
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
This argument can be made about almost every technology, including fire, electricity, and even the base "technology" of just letting a person watch a camera feed.
I'm not downplaying the risks. I'm saying that we should remember that almost everything has risks and benefits, and as a society we decide for or against using/doing them based mostly on the ratio between those two things.
So we need some data on the rates of false vs. true detections here. (A value judgment is also required.)
huh, i can think of at least one recent example of a popular figure using this kind of argument to extreme self-detriment.
My original comment, currently sitting at -4, has so far attracted one guilt-by-association plus implied-threat combo, and no other replies. To remind readers: My horrifying proposal was to measure both the risks and the benefits of things.
If anyone genuinely thinks measuring the risks and benefits of things is a bad idea, or that it is in general a good idea but not in this specific case, please come forward.
No, which is why your comment was downvoted - the following is a fallacy:
> This argument can be made about almost every technology,
That's the continuum fallacy.
I'm not claiming that a continuous range exists, and that one end cannot be distinguished from the other because the slope between those points is gradual. I'm claiming that there is a category, called technology, and everything in that category is subject to that argument.
If you want to dispute that, it's incumbent on you to provide evidence for why some technology subcategories should not be subject to that argument.
Specifically: You need to present a case for why AI devices like the one discussed in TFA should not be evaluated in terms of their risks and benefits to society.
Good luck with that argument.
My read of the "Um" and the quoting, was that you thought I missed that first danger, and so were disagreeing in a dismissive way.
When actually we're largely in agreement about the first danger. But people trying to follow a flood of online dialogue might miss that.
I mentioned the second danger because it's also significant. Many people don't understand how safety works, and will think "nobody got shot, so the system must have worked, nothing to be concerned about". But it's harder for even those people to dismiss the situation entirely, when the second danger is pointed out.
Hell, it wouldn't even move the needle on racial bias much because LLMs have already shown themselves to be prejudiced against minorities due to the stereotypes in their training data.
[0] Even though no other free society has to pay that price. But whatever.
To the argument that then only criminals have guns - in India at least, criminals have very limited access to guns. They have to resort to unreliable handmade guns which are difficult to procure. Usually criminals use knives and swords due to that.
This would not be the case in the US.
Another false positive from one of these leading content filters schools use: a kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental-health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where they have humans review all the alerts before being forwarded to the school or authorities. This is a paid addon, though.
All he wanted was a Pepsi. Just one Pepsi. And they wouldn't give it to him.
Yeah, there's a shop near me that sells bongs "intended" for use with tobacco only.
> Gaggle’s CEO, Jeff Patterson, said in an interview that the school system did not use Gaggle the way it is intended. The purpose is to find early warning signs and intervene before problems escalate to law enforcement, he said. “I wish that was treated as a teachable moment, not a law enforcement moment,” said Patterson.
It's entirely predictable that schools will call law enforcement for many of these detections. You can't sell to schools that have "zero tolerance" policies and pretend that your product won't trigger those policies.
But alas, we don't live in that world. We live in a world where there will be firings, civil, and even criminal liability for those who make wrong judgments. If the AI says "possible gun", the human running things who alerts a SWAT team faces all upside and no downside.
Hmm, maybe this generation's version of "nobody ever got fired for buying IBM" will become "nobody ever got fired for doing what the AI told them to do." Maybe humanity is doomed after all.
In fact, one might say that what the communist parties did in the 1910s was pretty much that. Ubiquitous surveillance is the problem here, not AI. Communist states used tens of thousands of "agents" who would just walk around, listen in on random conversations, and arrest (and later torture and deport) people. Of course, communist states that still exist, like China, have started using AI to do this, but it is nothing new for China and its people.
And, of course, what these communist states are doing is protecting the rich and powerful in society, and enforcing their "vision", using far more oppressive means than even the GOP dares to dream about. Including against "socialist causes", like LGBTQ. For starters, using state violence against people for merely talking about problems, for example.
> far more oppressive means than even the GOP dares to dream about
That seems to be exactly what they are dreaming about. Something like China's authoritarianism, minus the wise stewardship of the economy, plus killer drones.
My thought when posting was: if the schools already have surveillance cameras that human security guards are watching, adding an AI that merely alerts them to items of interest wasn't bad. But maybe you've changed my mind. The AI pays more invasive attention to every stream, whereas a guard may be watching 16 feeds at once and barely paying attention, and no one may ever even view the feed unless a crime occurs and they go looking for evidence.
Regardless, this setup was way worse! The article said the AI:
> ... scans existing surveillance footage and alerts police in real time when it detects what it believes to be a weapon.
Wow, the system was designed with no human in the loop - it automatically summons armed police!
That "we could" is doing some very heavy lifting. But even if we pretend it's remotely feasible, do we want to take an institution that already trains submission to authority and use it to normalize ubiquitous "for your own good" surveillance?
But can we at least talk about also holding the school accountable for the absolutely insane response?
You talk about not selling to schools that have "zero tolerance" policies as if those are an immutable fact of nature that can never be changed, but they are a human thing that has very obvious negative effects. There is no reason we actually have to have "zero tolerance" policies that traumatize children who genuinely did nothing wrong.
"Zero tolerance" for bringing deadly weapons to school, I can understand. So long as what's being checked for is actual deadly weapons, and not just "anything vaguely gun-shaped", or "anything that one could in theory use as a deadly weapon" (I mean, that would include things like "pens" and "textbooks", so...).
"Zero tolerance" for particular kinds of language is much less acceptable. And I say this as someone who is fully in favor of eliminating things like hate speech or threats of violence—you don't do it by coming down like the wrath of God on children for a single instance of such speech, whether it was actually hate speech or not. They are in school; that's the perfect place to be teaching them a) why such speech is not OK, b) who it hurts, and c) how to express themselves without it, rather than just treating them like terrorists.
Holy shitballs. In my experience such paid addons have very cheap labor attached to them, certainly not what you would expect based on the sales pitch.
Is there some legal way to sue a pair of actors (Gaggle and school) then let them sue each other over who has to pay what percentage?
For this civilian use case, the next step is AR goggles worn by police, with that AI projecting onto the goggles where the teenager has his "gun" (kind of Black Mirror style), and the step after that is obviously excluding humans even from the execution step.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
I would have gone with “a normalized sense of hopelessness and indignity which causes people to feel like violence is the only way they can have any agency in life” considering “gun” is the adjective and “violence” is the actual thing you're talking about.
Improvement in either area would be a net positive for society. Improvement in both areas is ideal but solving proliferation seems a lot more straightforward than fixing the generally miserable society problem.
https://klimapedia.nl/wp-content/uploads/2020/01/Dweilen_met...
(for others: the Dutch expression is "Dweilen met de kraan open", "Mopping with the tap open")
We answered the screams at the door to guns pointed at our faces, and countless cops.
It was explained to us that this was the restrained version. We got a knock.
Unfortunately, I understand why these responses can't be neutered too much. You just never know.
I hate to say this but I get it. Imagine a scenario happens where they decide "sounds phony. stand down." only for it to be real and people are hurt/killed because the "cops ignored our pleas for help and did nothing." which would be a horrible mistake they could be liable for, never mind the media circus and PR damage. So they treat all scenarios as real and figure it out after they knock/kick in the door.
But let's think about their analogy a little more: sheepdogs and wolves are both canines. Hmm.
Also "funny" how quickly they can reclassify any person as a "wolf", like this student. Hmm.
A sheepdog that bites a sheep for any reason is killed.
Yeah. They were happy to take their sweet time assessing everything safely outside the buildings at Uvalde.
Everyone knows swatting is a real thing that happens and that it's problematic, so why don't police departments have procedures in place which include that possibility? Who benefits from hyped-up police responses to false claims of criminal activity?
My daughter was swatted, but at the time she lived in a town where the cops weren't militarized goon squads. What happened was two uniformed cops politely knocked on her door, had a chat with her, and asked if they could come in and look around. She allowed them, they thanked her and the issue was resolved.
This is the way. Investigate, even a little, before deploying great force.
Establishing the probable trustworthiness of a report isn't black magic. Ask the caller for details, question the neighbours, look in through the windows, or just send two plainclothes officers pretending to be salesmen to knock on the door first. Continuously adjust the approach as new information comes in. This isn't rocket science, ffs.
Even with multiple major discrepancies, police still decided they should go in, no-knock.
"I can see them in the upstairs window" - of a single story home.
"The house is red brick" - it was dark grey wood.
"No cars in the driveway" - there was two.
Cops still said "hmm, still could be legit" and battered down the front door, deployed flashbangs.
As for bad cops, they look for any reason to go act like aggro billy badasses.
uh-huh
> if they don't the outcome is on them personally and potentially legally.
Bullshit, they're rarely held accountable when they straight up murder people, and even then "accountable" is "have to go get a different job". https://en.wikipedia.org/wiki/Killing_of_John_T._Williams
ACAB
It just means the police force is an instrument of terror.
always had been dot jpeg.
Edit: I should add, sorry to hear that.
I don't understand the connection here
Given the probability of police officers in the USA treating any action as hostile and then ending up shooting him, a false positive here is the same as swatting someone.
The system here sent the police off to kill someone.
I'd suspect kids would take guns to 'be cool', show friends, make threats without intention to actually use them. Also, intention to harm that wasn't followed through; intention to defend themselves if threatened; other reasons?
Probably no sound stats, but I'm curious about it, so asked.
The reality is there are guns in schools every day. "Solutions" like this aren't making anyone safer. School shooters don't fit this profile - they are planners, not impulsive people hanging out at a social event.
More disturbing is the meh attitude of both the company and the school administration. They almost engineered a tragedy through incompetence, and learned nothing.
Oh look, a corporation refusing to take responsibility for literally anything. How passé.
Human car crash? Human punishment. Corporate-owned car crash? A fine which reduces salaries some negligible percent.
If this company was a sole proprietorship, the only recourse this kid would have is to sue the owner, up to bankruptcy.
Since it's a corporation, his recourse is to sue the company, up to bankruptcy.
As for corporations having rights, I can explain it further if necessary but the key understanding is that the singular of "corporations are people" is "a corporation is people" not "a corporation is a person".
Even when Boeing knowingly caused the deaths of hundreds (especially the second crash was entirely preventable if they would have been honest after the first one), all they got were some fines. Those just end up being charged back to their customers, a big one being the government who fined them in the first place.
If the government decides to prosecute the matter as a civil infraction, or doesn't even bother prosecuting but just has an executive agency hand out a fine, that's not a matter of the corporation shielding people, that's a matter of the government failing to prosecute or secure a conviction.
Since corporations aren't people, Boeing didn't know anything.
Did someone at Boeing have all of that knowledge?
Don't forget that hiding MCAS from pilots and the FAA was a conscious decision. It wasn't something that 'just happened'. The decision to not make it depend on redundant AoA sensors by default too.
My point is, I can imagine that the MCAS suicidal side-effect was something unexpected (it was a technical failure edge-case in a specific and rare scenario) and I get that not anticipating it could have been a mistake, not a conscious decision. But after the first crash they should have owned up to it and not waited for a second crash.
Extenuating circumstances, at best.
you have to recognize that a statement like this means that decision-makers at boeing either knew or were negligent in their duties.
Before the first one yeah, you could claim that they might have had no idea this could possibly happen. I don't think that is the case but ok. The second crash however really shouldn't have happened.
It really isn't -- we're talking about a category of activities that involves only financial liability or civil torts in the first place, regardless of whether the parties involved are organizations or individuals. You can't put people in prison for civil torts.
Prison is irrelevant to 98% of the discussion here. And the small fraction of cases in the status quo that do involve criminal liability -- even within organizations -- absolutely do assign that liability to specific individuals, and absolutely can involve criminal penalties including jail time. Actual criminal conduct is precisely where the courts "pierce the veil" and hold individuals accountable.
> Even when Boeing knowingly caused the deaths of hundreds (especially the second crash was entirely preventable if they would have been honest after the first one), all they got were some fines.
All anyone would ever get in a lawsuit is some fines. The matter is inherently a civil one. And if there were any indications of criminal conduct, criminal liability can be applied -- as it often is -- to the individuals who engaged in it regardless of whether they are operating within an organization or on their own initiative.
The only real difference is that when you sue a large corporation, you're much more able to actually collect the damages you win than you would be if you were just suing one guy operating by himself. If the aim of justice is remunerative, not just punitive, then this is a much superior situation.
> Those just end up being charged back to their customers, a big one being the government who fined them in the first place.
Who would be paying to settle the matter in your preferred situation? It sounds like the most likely outcome is that the victims would just eat the costs they've already incurred, since there'd be little chance of collecting damages, and taxpayers would bear the burden of paying for the punishment of whomever ends up holding the hot potato after all the scapegoating and blame deflection plays out.
This gets even more perverse. If you're an individual you actually can't just set up an LLC to limit your own liability. There's no manner for an individual to say "I'm putting on a hat and acting solely as the LLC" - rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability. In other words, the very design of corporations/LLCs encourages avoiding responsibility.
You're correct with the nitpick about the Supreme Court's justification, but that justification is still poor reasoning. Corporations are government-created liability shields. How they can direct their employees should be limited, to avoid trampling on those individuals' own natural rights. A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / general partnership.
I'm sure it will. But how do you collect $30M in damages from a single individual whose entire net worth is e.g. $1M? What if the sole proprietor actually owns no assets whatsoever, because he's set up a bunch of arrangements where he leases everything from third parties, and contracts out his business operations to a different set of third parties, etc.?
I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model. All of those motivations come from the human beings involved -- they were always present and always will be -- and those same human beings will manipulate whatever rules or institutions are involved to the greatest extent that they can.
Blaming a particular organizational model for the malicious intentions of the people who are just using that model as a tool is a deep, deep error.
> If you're an individual you actually can't just set up an LLC to limit your own liability.
What are you talking about? Of course you can. People do it all the time.
> rather as the owner you need to find and employ enough judgement-proof patsies that the whole thing becomes a "group project" and you can say you personally weren't aware of whatever problem gave rise to liability.
You're conflating entirely unrelated concepts of liability here. Limited liability as it relates to LLCs and corporations is for financial liability. It means that the organization's debts are not the shareholders' debts. It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal.
The kind of liability protection that you think corporations enjoy but single-member LLCs don't -- protection from the liability for individual criminal behavior -- does not exist for anyone at all.
> A person or group of people who want to exercise their personal natural rights through hired employees can always forgo the government-created liability shield and go sole proprietorship / gen partnership.
The ownership structure of a business has nothing at all to do with how it hires employees and directs their activities. The same law of agency and doctrine of vicarious liability applies to all agent-principal relationships regardless of whether the principal is a corporation or a sole proprietorship.
It's not about getting made whole from damages, it's about the incentives for the business owner. A sole proprietor has their own skin fully in the game, whereas an LLC owner does not (only modulo things customarily shielded from bankruptcy like retirement savings and primary dwelling, and asset protection strategies for the extremely rich, like charitable foundations)
> I don't get why so many people are so intent on trying to attribute the motivations to maximize one's own take, deflect blame for harm away from themselves, and cover up their questionable activities to some specific organizational model
Because this specific legal structure (not organizational model, that is orthogonal) is a powerful tool for deflecting blame.
> You're conflating entirely unrelated concepts of liability here... It has nothing to do with legal liability for one's own purposeful conduct, whether tortious or criminal
The point is that these concepts are quite intertwined for small businesses, and only become distinct when there are enough people involved to make a nobody's-fault "group project". Let's say I want to own a piece of rental property and think putting it in an LLC will protect my personal life from all the random things that might happen playing host to other people's lives. Managing one property doesn't take terribly much time so I do it myself. Now it snows, the tenant does a crappy job of shoveling, and someone slips on the sidewalk up front, gets hurt, and sues. Since I'm personally involved in supervising the condition of the property, there is now a theory of personal liability for me that I should have been aware of the poor conditions of the sidewalk. (This same liability applies to the tenant, or anyone that was hired to shovel, but they're usually judgement proof, sympathetic, etc).
Same thing with making repairs to the property, etc - any direct involvement (supplying anything but investment capital) opens up avenues for personal liability, negating the LLC protections.
> The same law of agency and doctrine of vicarious liability applies
The point is that LLC/corporate structures allow for much higher levels of scaling, allowing them to apply higher levels of coercion to their employees. Since these limited liability structures are purely creations of government (rather than something existing outside of government), it's straightforwardly justifiable to regulate what activities they may engage in to mitigate this coercion.
Just bring back fucking pistol duels. I have a better chance of defending myself there.
Versus all the natural people at the highest echelons of our political economic system valiantly taking responsibility for fuckall?
We can at least hold them responsible.
We don’t. (We can also hold corporations responsible. We seldom do.)
The problem isn’t in the form of legal entity fraud and corruption wears.
Jail is a great deterrent for natural persons.
In some ways, yes. In most ways, no. In most cases, a massive fine aligns interests. Our problem is we've become weak kneed at levying massive fines on corporations.
Unlike a person, you don't have to house a corporation to punish it. Your fine simply wipes out the owners. If the enterprise is a going concern, it's born under new ownership. If it's not, its assets are redistributed.
> Jail is a great deterrent for natural persons
Jail works for executives who defraud. We just, again, don't do it. This AI could have been sold by a billionaire sole proprietor, I doubt that would suddenly make the rules more enforceable.
Jail isn't on the table for financial liability or civil torts in the first place, and since pretty much all the forms of liability involving commercial conduct we're discussing here are financial liability or civil torts, it's not really relevant to the discussion.
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
The authorities are just itching to have their brains replaced by dumb computer logic, without regard for community safety and wellbeing.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
But the fact that the police showed the photo does suggest that maybe they did manually review it before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there were no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that the AI flagging it overrode their own judgment to some degree.
“Computer says die”
ED-209 mistakenly viewed a young man as armed and blew him away in the corporate boardroom.
The article even included an homage to:
“Dick, I’m very disappointed in you.”
“It’s just a small glitch.”
The real question is: would this have happened in an upper/middle-class school?
The student has dark skin. And is attending a school in a crime ridden neighborhood.
Were it a white student in a low crime neighborhood, would they have approached him with guns drawn?
The AI failure is masking the real problem - bad police behavior.
And police do this kind of stuff all the time (or at the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I'm not sure what the answer is, but I definitely feel that "security" systems like this that are purchased and rolled out need to be highly regulated and coupled with extreme accountability and consequences for false positives.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
The current interpretation of 2A is actually a fairly recent invention; in the past, it's been interpreted much more narrowly. And if SCOTUS can overturn Roe v. Wade's precedent, they can do the same with their interpretation of 2A. They won't of course, at least not until some of its members age out and get -- hopefully -- replaced with people who aren't idiots.
But I'd be fine if 2A was amended away. Let the states make whatever gun laws they want, and we can see whether blue or red states end up with lower levels of gun violence as a result.
> The current interpretation of 2A is actually a fairly recent invention;
What was Miller, nearly a century ago? Is that "recent?"
> hopefully -- replaced with people who aren't idiots.
See you in 30 years!
> But I'd be fine if 2A was amended away.
Good luck! I'll keep milling and printing in the meantime!
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
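For what it's worth, a hedged sketch of what multi-frame checking could look like; detect_gun_prob is a hypothetical stand-in for whatever per-frame model the vendor actually runs:

    from collections import deque

    WINDOW = 30      # ~1 second of 30 fps footage
    MIN_HITS = 24    # require 80% of the window to agree
    THRESHOLD = 0.9  # per-frame confidence cutoff

    def detect_gun_prob(frame) -> float:
        # Hypothetical stand-in for the vendor's per-frame detector.
        return 0.0

    recent = deque(maxlen=WINDOW)

    def should_escalate(frame) -> bool:
        # A single shadow-distorted still can't trigger an alert;
        # the detection has to persist across consecutive frames.
        recent.append(detect_gun_prob(frame) >= THRESHOLD)
        return len(recent) == WINDOW and sum(recent) >= MIN_HITS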
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
This is quite frankly absurd. The fact that the AI flagged it is bonkers, and the fact that a human doing manual review still believed it was a gun... I mean, just, wow. The level of dangerous incompetence here is staggering.
And I wouldn't be surprised if, minutes (or even seconds) before the video frame the AI flagged, the full video showed the kid finishing the bag and stuffing it in his pocket. AIs suck at context; a human watching the full video would not have made the same mistake. But in mostly taking the human out of the loop, all they had for verification was a single frame of video, captured as a context-free still image.
It is frankly mind-boggling that you or anyone else can defend this crap.
It's not totally clear -- we haven't seen the picture. The point is, it seemed to look like a gun. Shadows and reflections do funny things. For you to say with such confidence that this is absurd and bonkers, is itself absurd without us seeing the image(s) in question.
> It is frankly mind-boggling that you or anyone else can defend this crap.
That's not appropriate. Please see HN guidelines:
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
There are two ways to use AI:
1. To enhance human productivity; or
2. To replace humans.
Companies, particularly in the US, very much want to go with (2) and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. the UK Post Office scandal, where the faulty Horizon system falsely accused subpostmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false and the system was shown to be at fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. The Hertz case where people who had returned cars were erroneously flagged as car thieves and report was made to police. This created hell for people who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be run against the existing accounting system, and discrepancies between the two need to be investigated as bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before a criminal complaint is filed.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
Because that's not what slander is.
I hope they sue the police department over this.
I hope this kid gets what he deserves.
What a tragedy. I'm sure racial profiling on behalf of the AI and the police had absolutely nothing to do with it.
Abolish SWAT teams. Do away with the idea that the state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
The alert should carry the model's uncertainty. E.g. not "this student has a gun" but "this model says the student has a gun with a probability of 60%".
If an AI can't quantify its degree of confidence, it shouldn't be used for this sort of thing.
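Most detection models already produce a score internally; the point is to surface it instead of collapsing it to "gun: yes". A minimal sketch, with the threshold and wording as illustrative assumptions:

    ALERT_THRESHOLD = 0.90  # assumed policy: below this, log but don't page anyone

    def format_alert(label, confidence):
        # Surface the model's uncertainty, not a bare accusation.
        return (f"Model flagged possible {label} "
                f"(confidence {confidence:.0%}); human review required.")

    def maybe_alert(label, confidence):
        if confidence < ALERT_THRESHOLD:
            return None  # a 60% score is a guess, not a dispatch
        return format_alert(label, confidence)

    print(maybe_alert("handgun", 0.60))  # -> None
    print(maybe_alert("handgun", 0.97))  # -> flagged for human review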
I wanna see the frames too.
This problem doesn't exist in Europe or Japan because guns aren't nearly as ubiquitous there, so the police have time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
>Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
...and you are correct.
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
""" The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight. """
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the AI model that flagged this sort of thing in the first place (and did an objectively terrible job of it) should take most of the blame.
"Sorry, that's Nacho gun"
Instead of:
1. AI detects gun on surveillance
2. Dispatch armed police to location
It should be:
1. AI detects gun on surveillance
2. Human reviews the pictures and verifies the threat
3. Dispatch armed police to location
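As a sketch, the difference is just where the human gate sits in the pipeline (function and field names here are hypothetical):

    def handle_detection(clip, reviewer, dispatch, audit_log):
        """Armed dispatch is gated on an affirmative human
        verification, not on the AI's flag plus a rubber stamp."""
        verdict = reviewer.verify(clip)  # human sees the full clip with context
        if verdict.confirmed_weapon:
            dispatch(verdict.location, verdict.notes)
        else:
            audit_log.record("false_positive", clip, verdict)  # feed back into evaluation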
I think the latter version is likely what already took place in this incident, and it was actually a human that also mistook a bag of Doritos for a gun. But that version of the story is not as interesting, I guess.
That would have been bold
Edit: And racism. Just watched the video.
"Omnilert" .. "You Have 10 Seconds To Comply"
-now targeting Black children!
Q: What was the name of the Google AI ethicist who was fired for raising the concern that AI overwhelmingly frames non-white humans as threats? A: Timnit Gebru.
https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with a breathless "Wwwwellll, if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
It's about the question. The answer will become very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
Fuck you.
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out that all he had in his hands was a bag of Doritos, maybe politely asked to see the contents of his bag, explained that the search was triggered by an autodetection system that occasionally makes errors, and wished him a good day.
No. Trusting AI is clearly the issue.
If there was a 9-1-1 call to the police that there was an active shooter at your kids school, how would you want the police to show up?
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
The proofs are there.
Philosophers mulled this over long ago and made clear statements as to why AI can't work.
Though I don't for a second misunderstand that it's "all in" for AI, and we all get to go for the 100-trillion-dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck ya we can!
But anything that requires understanding is forever out of reach, which unfortunately is also what's lacking in the people pushing this thing now.
* the student was black
Is that really a coincidence?
It's just a matter of time before this or something worse happens.
Then, just sit back and enjoy as the lawsuit unfolds.
It's pretty clearly documented how it works here:
https://www.omnilert.com/solutions/gun-detection-system
https://www.omnilert.com/solutions/ai-gun-detection
https://www.omnilert.com/solutions/professional-monitoring
But really this is typical cop overreaction: escalation and ego rather than calm, legal, reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must use reasonableness and restraint to defend the vestiges of their impartiality and community confidence, asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: the rough NYC false arrest of a father, in front of his kid, to retrieve a mis-delivered package, where an egomaniacal bully cop aggressively lectures the guy over the cop's own mistake, blaming the victim to cover his ego: https://youtu.be/LXd-4HueHYE
* Even hundreds of cops in full body armor, armed with automatic weapons, will not dare to engage a single "lone wolf" shooter on a killing spree in a school; the heartless cowards may even prevent parents from going inside to rescue their kids: the Uvalde school shooting incident
* A cop on an ego trip will shoot down a clearly harmless kid calmly eating a burger in his own (not stolen) car: the Erik Cantu incident
* Cops are not there to serve society, they are not there to ensure safety and peace in the neighborhood; they are merely an armed militia protecting the rich and powerful elites: https://www.alternet.org/2022/06/supreme-court-cops-protect-...
Doesn't mean they are perfect or shouldn't be criticised, but claiming that's all they are doing isn't reasonable either.
If you look at actual per capita statistics you will easily see this.
https://en.wikipedia.org/wiki/Lists_of_killings_by_law_enfor...
In the United States, law enforcement officers shoot and kill more than 1,100 civilians each year, with a significant number of these incidents involving unarmed individuals, particularly among Black Americans who are disproportionately affected. The FBI has begun collecting data on these use-of-force incidents to provide better insights into the circumstances surrounding police shootings.
Police killed more than 1,300 people in the U.S. last year, an estimated 0.3% increase in police killings per million people. The increase makes 2024 the deadliest year for police violence by a slim margin since Mapping Police Violence began tracking civilian deaths more than a decade ago.
https://www.usatoday.com/story/news/nation/2025/02/26/police...
There is no national database that documents police killings in the U.S., and the report comes days after the Justice Department removed a database tracking misconduct by federal law enforcement. Researchers spent thousands of hours analyzing more than 100,000 media reports to compile the Mapping Police Violence database.
https://en.wikipedia.org/wiki/List_of_mass_shootings_in_the_...
In 2025, the U.S. has experienced significant gun violence, with 11,197 shooting deaths reported through September 30, along with 20,425 nonfatal injuries. The year has seen a total of 341 mass shootings, resulting in 331 fatalities and 1,499 injuries.
Shootings have happened in all 50 states, at all times of day, and in locations as varied as schools, gas stations, gyms, Walmarts, and homes. Some involved handguns, others rifles or shotguns.
10.3 million guns have been sold across the U.S. in 2025 through September 30.
https://www.thetrace.org/2025/10/shooting-gun-violence-data-...
Mass shootings in the United States are incidents where one or more individuals use firearms to kill or injure multiple people, typically in public settings. The frequency and definitions of these events can vary, but they have been a significant concern in recent years, with the U.S. experiencing more mass shootings than any other country.
GVA has recorded 325 mass shootings in the U.S. this year through three quarters. Those have resulted in 309 deaths and 1,490 injuries.
Mass shootings in the last quarter included the high-profile shooting at a New York skyscraper, as well as the shooting of 29 people, 26 of them children, at a church in Minneapolis. Two children, aged 8 and 10, were killed in that incident.
Someone being unarmed doesn't mean they can't be deadly. They could be driving a vehicle at somebody or otherwise physically assaulting them, which would justify deadly force.
Otherwise, 1,100 is actually quite low compared to total gun deaths, in which blacks are overrepresented per capita compared to all other groups. That includes black-on-black and black-on-cop violence.
> The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon. I contacted our school resource officer (SRO) and reported the matter to him, and he contacted the local precinct for additional support. Police officers responded to the school, searched the individual and quickly confirmed that they were not in possession of any weapons.
What's unclear to me is the information flow. If the Department of School Safety and Security recognized it as a false positive, why did the principal alert the school resource officer? And what sort of telephone game happened to cause local police to believe the student was likely armed?
1. https://www.wbaltv.com/article/student-handcuffed-ai-system-...
Yeah, 'cause cops have never shot somebody unarmed. And you can bet your ass that the possible follow-up lawsuit to such a debacle has "your" name on it.
https://legalclarity.org/false-report-under-the-texas-penal-...
Furthermore, anyone who files a false report can be sued in civil court.
I can make a system that flags stuff, too. That doesn't mean it's any good. If they can show there was no reasonable cause then they've got a leg to stand on.
Which may very well be accurate, but I can't imagine the law ever punishing someone on that basis.