There will be no hiding from these things and no possibility of evasion.
They’ll have agility exceeding that of champion drone pilots and be too small to even see or hear until it’s far too late.
Life in the Donbass trenches is already hell. We’ll find a way to make it worse.
https://www.youtube.com/watch?v=HipTO_7mUOw
Now that we've seen the use of drones in the Ukraine war, 10k+ drone light shows, Waymo's autonomous cars, and tons of AI advancements in signals processing and planning, this seems obvious.
I don't want to live on this planet anymore.
We’ve already achieved the capacity for complete destruction.
Drones don't change much. It's potentially better for us civilians if drones are used for far more targeted attacks (think Putin).
This should lead to narrower policies, which might be less aggressive.
Putin is well protected, far better than US presidents and candidates. With lower prices and barriers, it could actually be you, or any low-profile target. Luckily, real terrorists are mostly uneducated.
https://github.com/AILab-CVC/YOLO-World
https://huggingface.co/spaces/stevengrove/YOLO-World/tree/ma...
Slightly unrelated, how does AGPL work when applied to model weights? It seems plausible that a service could be structured to have pluggable models on the backend. Would that be sufficient to avoid triggering it?
But honestly, by the time AI is proficiently writing large quantities of code reliably and without human intervention, it's unclear how much significance human labor in general will have. Software licensing is the least of our concerns.
For higher quality at a higher cost of VRAM, you can also use Flux.1 Fill to do inpainting.
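A rough sketch of that flow with diffusers, assuming its FluxFillPipeline and the FLUX.1-Fill-dev weights; the file names and prompt are placeholders, and this is where the extra VRAM goes at bf16:

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Placeholder inputs: the mask is white where you want new content painted.
image = load_image("input.png")
mask = load_image("mask.png")

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

result = pipe(
    prompt="a wicker basket of apples",  # placeholder prompt
    image=image,
    mask_image=mask,
    height=1024,
    width=1024,
    guidance_scale=30,
    num_inference_steps=50,
).images[0]
result.save("output.png")
```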
Lastly, Flux.1 Kontext [dev] is going to be released soon and it promises to replace the entire flow (and with better prompt understanding). HN thread here: https://news.ycombinator.com/item?id=44128322
I don’t know if our project sponsor ever got the company off the ground, but the basic idea was an automated system to scare geese off of golf courses without also activating in the middle of someone’s backswing.
We had motion-triggered sprinklers that worked great, but they did not differentiate between foxes and 4-year-old children if I forgot to turn them off, haha.
We have more or less 360-degree CCTV coverage of the garden via 7 or 8 CCTV cameras, so the rough plan is to run basic motion/pixel detection to flag frames with something happening, fire only those frames off for inference (rather than trying to stream all video feeds through the model 24/7), and turn the sprinklers on.

Hope to get to about 500 ms end-to-end latency from detection to sprinklers/tap activated, to cement the "causality" in the foxes' brains between stepping into the garden and ~immediately getting soaked and scared. Most of the latency will be the water physically moving to get the sprinklers started, but that is another issue really.
Probably will use an RPi 5 + AI HAT as the main local inference provider, and a ZigBee-controlled solenoid valve on the hose tap.
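Something like this minimal sketch, assuming OpenCV for the cheap motion gate and the ultralytics YOLO-World wrapper for classification; the camera URL, pixel threshold, and trigger_sprinkler() stand-in for the ZigBee call are all placeholders:

```python
import cv2
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")  # open-vocabulary detector
model.set_classes(["fox"])             # prompt it with the target class

cap = cv2.VideoCapture("rtsp://camera-1/stream")  # placeholder URL
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

def trigger_sprinkler():
    # Placeholder: send the ZigBee "open valve" command here.
    print("Sprinkler on!")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    # Cheap gate: only run inference when enough pixels changed.
    if cv2.countNonZero(mask) < 5000:
        continue
    results = model.predict(frame, verbose=False)
    for box in results[0].boxes:
        if results[0].names[int(box.cls)] == "fox" and float(box.conf) > 0.5:
            trigger_sprinkler()
```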
I got a cheap MLX90640 off aliexpress for target detection and a grove vision AI V2 module to use with IR cam for classification/object tracking. Esp32 for fusion and servo/solenoid actuation.
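The MLX90640 can gate things cheaply before the vision module ever wakes up. A rough CircuitPython sketch, assuming the Adafruit adafruit_mlx90640 driver; the temperature threshold and the wake_vision_module() handoff are placeholders:

```python
import board
import busio
import adafruit_mlx90640

i2c = busio.I2C(board.SCL, board.SDA, frequency=800_000)
mlx = adafruit_mlx90640.MLX90640(i2c)
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_4_HZ

HOT_PIXEL_C = 25.0  # placeholder: tune above local ambient temperature

def wake_vision_module():
    # Placeholder for handing off to the Grove Vision AI V2 for classification.
    pass

frame = [0.0] * 768  # the MLX90640 is a 32x24 thermal array
while True:
    try:
        mlx.getFrame(frame)
    except ValueError:
        continue  # occasional bad frames; just retry
    if max(frame) > HOT_PIXEL_C:
        wake_vision_module()  # warm body in view: classify before actuating
```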
Collab?
This work presents a version of YOLO that can detect new categories without needing to retrain the algorithm: instead of a fixed class list, it keeps a real-time "dictionary" of category prompts that you can seamlessly update. Seems like a very useful algorithm to me.
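For example, via the ultralytics wrapper (one common way to run YOLO-World; the checkpoint name below is one of the published ones, and the image path is made up), updating that dictionary at runtime is roughly:

```python
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")  # pretrained open-vocabulary checkpoint
# Swap the runtime "dictionary": new text prompts, no retraining.
model.set_classes(["delivery van", "traffic cone"])
results = model.predict("street.jpg")
results[0].show()
```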
Edit: apologies, I misread your comment; I thought it was asking why this is different from regular YOLO.
For what it's worth, YOLO has been a standard in image processing for ages at this point, with dozens of variations on the algorithm (YOLOv3, YOLOv5, YOLOv6, etc.), and this is yet another new one. Looks great though.
SAM wouldn't run in under 1000 ms per frame for most reasonable image sizes.