Don't security cameras have universal motion-detection triggers you can use to make sure everything gets captured? Why pre-screen only for human silhouettes?
Because camera-side AGI is very far away: a universal trigger produces an insane number of false positives, and creative camouflage workarounds slip right past current "smart" algorithms.
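A minimal sketch of what such a "universal" trigger looks like in practice, assuming OpenCV (cv2) is available; the threshold constants are made-up values. It also shows why nobody ships this alone: it fires on rain, shadows, headlights, and cats just as readily as on intruders, which is exactly why vendors pre-screen for human silhouettes.

    import cv2

    MOTION_THRESHOLD = 25      # per-pixel gray-level change that counts as "different"
    MIN_CHANGED_PIXELS = 5000  # how many changed pixels count as "motion"

    cap = cv2.VideoCapture(0)  # any camera or video source
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no video source")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)  # change vs. the previous frame
        _, mask = cv2.threshold(diff, MOTION_THRESHOLD, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MIN_CHANGED_PIXELS:
            print("motion detected")    # fires on people, rain, headlights, cats...
        prev = gray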
Rotations? Like how the military holds perimeter security?
>And, really, how would you feel if that was YOUR job?
If I couldn't get a better job to pay my bills, then that would be amazing. Weird of you to assume that it would somehow be the most dehumanizing job in existence.
I’m reminded of the Skyrim shopkeepers with a basket on their head.
Humans will see that they are screwing up and reformulate the action plan.
AI will keep screwing up until it is stopped, and apparently will gaslight you when you attempt to realign it at the prompt.
Humans realize when results are not desirable.
AI just keeps generating output until the plug is pulled.
Therein lies the rub.
And that is an entirely different problem, isn’t it?
In simple terms: The AI doesn’t need to say, "something unusual is happening because I saw walking trees and trees usually cannot walk", but merely "something unusual is happening because what I saw was unusual, care to take a look?"
I bet they’d have similar luck if they dressed up as bears. Or anything else non-human, like a triangle.
It isn't about knowing that trees don't walk, but about knowing that trees behave in certain ways and being "surprised" when they fail to behave in the predicted ways, where "surprise" is something like "this is a very low-probability output of my model of the next frame". It isn't necessary to enumerate all the ways the next frame was low-probability; it is enough to observe that it was not high-probability.
In a lot of cases this isn't necessarily that useful, but in a security context having a human take a look at a "very low probability series of video frames" will, if nothing else, teach the developers a lot about the real capability of the model. If it spits out a lot of false positives, that is itself very informative about what the model is "really" doing.
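A minimal sketch of that "surprise" idea, with a deliberately trivial predictor ("the next frame looks like the current one") standing in for a real learned video model; the warm-up length and the outlier factor k are made-up values:

    import numpy as np

    class SurpriseDetector:
        """Flag frames that are statistical outliers under a next-frame predictor."""

        def __init__(self, k=4.0, warmup=30):
            self.k = k            # how many std-devs of surprise counts as "unusual"
            self.warmup = warmup  # frames to observe before flagging anything
            self.scores = []      # history of per-frame surprise scores
            self.prev = None

        def step(self, frame):
            frame = frame.astype(np.float32)
            if self.prev is None:
                self.prev = frame
                return False
            # Surprise = prediction error of the trivial "no change" model.
            # A real system would use -log p(frame | past) from a video model.
            surprise = float(np.mean((frame - self.prev) ** 2))
            self.prev = frame
            self.scores.append(surprise)
            if len(self.scores) < self.warmup:
                return False      # still learning what "normal" looks like
            mean = np.mean(self.scores)
            std = np.std(self.scores) + 1e-9
            # No need to enumerate WHY the frame was low-probability;
            # "care to take a look?" only needs a yes/no outlier test.
            return surprise > mean + self.k * std

When step() returns True, you page a human with the clip. Every false positive that comes back tells the developers something about what the model treats as "normal", which is the feedback loop described above.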
duxup•5mo ago
I'm working on some AI projects at work, and there's no magic code I can look at to know what it's going to do ... or even, sometimes, why it did what it did. Letting it loose in an organization like that seems unwise at best.
Sure, they could tell the AI to watch out for boxes, but then every time some poor guy moves some boxes he's going to set something off.
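A sketch of that naive rule, layered on a hypothetical detect_objects(frame) that returns (label, confidence) pairs; both the function and the label names are stand-ins for whatever model the camera actually runs:

    SUSPICIOUS_LABELS = {"box", "cardboard_box"}  # hypothetical label names

    def should_alert(frame, detect_objects):
        for label, confidence in detect_objects(frame):
            if label in SUSPICIOUS_LABELS and confidence > 0.5:
                # Fires on an intruder hiding under a box AND on the poor
                # guy restocking the warehouse; the rule can't tell them apart.
                return True
        return False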
mlinhares•5mo ago
Delivery guy shows up carrying boxes, gets shot.
collingreen•5mo ago
The surface area of these issues is really fun.