
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
632•klaussilveira•13h ago•187 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
20•theblazehen•2d ago•2 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
930•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
34•helloplanets•4d ago•26 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
110•matheusalmeida•1d ago•28 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
43•videotopia•4d ago•1 comment

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
10•kaonwarb•3d ago•10 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
213•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
323•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
372•ostacke•19h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•234 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
275•eljojo•15h ago•164 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
404•lstoll•19h ago•273 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
16•jesperordrup•3h ago•9 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•189 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
13•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•10h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
141•vmatsiiako•18h ago•64 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
281•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1060•cdrnsf•22h ago•436 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
133•SerCe•9h ago•119 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
177•limoce•3d ago•96 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•23 comments

YOLO-World: Real-Time Open-Vocabulary Object Detection

https://arxiv.org/abs/2401.17270
148•greesil•8mo ago

Comments

ed•8mo ago
Neat. Wonder how this compares to Segment Anything (SAM), which also does zero-shot segmentation and performs pretty well in my experience.
ipsum2•8mo ago
SAM doesn't do open vocabulary, i.e. it segments things without knowing the name of the object, so you can't ask it to "highlight the grapes"; you have to give it an example of a grape first.
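For intuition, the open-vocabulary idea in YOLO-World boils down to matching detected-region embeddings against text embeddings of whatever class names you supply at inference time, so swapping the vocabulary needs no retraining. A toy sketch of that matching step (pure Python; the names, 2-D embeddings, and function shapes here are invented for illustration, not the paper's actual code, which uses CLIP-style text features):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def label_regions(region_embs, vocab):
    """Assign each region embedding the vocabulary entry with the
    highest cosine similarity. `vocab` maps class name -> text
    embedding and can be swapped at inference time."""
    out = []
    for emb in region_embs:
        name, score = max(((n, cosine(emb, t)) for n, t in vocab.items()),
                          key=lambda p: p[1])
        out.append((name, score))
    return out

# A region embedding close to the "grape" text vector gets labeled "grape";
# change the vocab dict and the same regions get relabeled, no retraining.
labels = label_regions([[0.9, 0.1]],
                       {"grape": [1.0, 0.0], "dog": [0.0, 1.0]})
```

The point of the sketch is the last comment: the vocabulary is just data fed to the matching step, which is why new categories don't require touching the detector weights.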
stevepotter•8mo ago
Try this: https://github.com/luca-medeiros/lang-segment-anything
ipsum2•8mo ago
This uses GroundingDINO, a separate model, for open vocabulary. Useful nonetheless, but it means you're running a lot of model inference for a single image.
RugnirViking•8mo ago
YOLO is way faster. We used to run both, with YOLO finding candidate bounding boxes and SAM segmenting just those.

For what it's worth, YOLO has been a standard in image processing for ages at this point, with dozens of variations on the algorithm (yolov3, yolov5, yolov6, etc) and this is yet another new one. Looks great tho

SAM wouldn't run under 1000ms per frame for most reasonable image sizes
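The cascade described above (a fast detector proposes boxes, the expensive segmenter runs only on confident candidates) can be sketched generically; `detect` and `segment` here are stand-ins for whatever models you plug in (e.g. a YOLO variant and SAM), not real library calls:

```python
def detect_then_segment(frame, detect, segment, min_conf=0.5):
    """Run a cheap detector over the full frame, then invoke the
    expensive segmenter only on boxes above a confidence threshold.

    detect(frame)        -> iterable of (box, confidence)
    segment(frame, box)  -> mask for that box
    """
    masks = []
    for box, conf in detect(frame):
        if conf >= min_conf:
            masks.append((box, segment(frame, box)))
    return masks
```

With SAM costing on the order of a second per full frame, gating it behind detector confidence like this is what makes the combination usable in near real time.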

euazOn•8mo ago
Just as a quick demo, here is an example of YOLO-World combined with EfficientSAM: https://youtu.be/X7gKBGVz4vs?t=980
aunty_helen•8mo ago
We used MobileSAM because of this; it was about 250ms on CPU. Useful for our use case
AndrewKemendo•8mo ago
We’ve tested this in our production environment on mobile robots (think quadcopter and ground UGV) and it works really nicely
TechDebtDevin•8mo ago
If this is military-related, I'm terrified of the future. Sci-fi movies with crazy drones from back then are no longer that cute.
jiggawatts•8mo ago
The truly scary part is that it’s a straightforward evolution from this to 1000 fps hyperspectral sensors.

There will be no hiding from these things and no possibility of evasion.

They’ll have agility exceeding champion drone pilots and be too small to even see or hear until it’s far too late.

Life in the Donbass trenches is already hell. We’ll find a way to make it worse.

MoonGhost•8mo ago
Then it should be possible to use them to counter and defend. Think of AI powered interceptor drones patrolling the area, anti-drone light machine guns.
collingreen•8mo ago
As long as you keep paying your gemini anti-drone bill and don't set account limits you'll be fine! </s>
AndrewKemendo•8mo ago
You’re right to be terrified
echelon•8mo ago
7 years ago, this felt like science fiction:

https://www.youtube.com/watch?v=HipTO_7mUOw

Now that we've seen the use of drones in the Ukraine war, 10k+ drone light shows, Waymo's autonomous cars, and tons of AI advancements in signals processing and planning, this seems obvious.

yard2010•8mo ago
This is important.

I don't want to live on this planet anymore.

Flemlo•8mo ago
We have nuclear weapons.

We already achieved complete destruction potential.

Drones don't change much. It's potentially better for us civilians if drones get used to attack a lot more targeted (think Putin).

This should lead to narrow policies which might be less aggressive

MoonGhost•8mo ago
> potentially better for us civilians if drones get used to attack a lot more targeted (think Putin)

Putin is well protected, way better than US presidents and candidates. With lower prices and barriers it could actually be you, or any low-profile target. Luckily, real terrorists are mostly uneducated.

bevenky•8mo ago
Is this OSS?
T-A•8mo ago
https://github.com/AILab-CVC/YOLO-World
fc417fc802•8mo ago
Unclear exactly what you're asking. The linked paper describes an algorithm (patent status unclear). That paper happens to link to a GPL-licensed implementation whose authors explicitly solicit business licensing inquiries. The related model weights are available on Hugging Face (license unclear). Notably, the HF readme file contains conflicting claims: the metadata block specifies Apache while the body specifies GPL.

https://github.com/AILab-CVC/YOLO-World

https://huggingface.co/spaces/stevengrove/YOLO-World/tree/ma...

sigmoid10•8mo ago
The paper says it is based on YOLOv8, which uses the even stricter AGPL-3.0. That means you can use it commercially, but all derived code (even in a cloud service) must be made open source as well.
fc417fc802•8mo ago
I assume they refer to the academic basis for the algorithm rather than the implementation itself.

Slightly unrelated, how does AGPL work when applied to model weights? It seems plausible that a service could be structured to have pluggable models on the backend. Would that be sufficient to avoid triggering it?

kouteiheika•8mo ago
They probably mean the algorithm, but nevertheless the YOLO models are relatively simple, so if you know what you're doing it's pretty easy to reimplement them from scratch and avoid the AGPL license for the code. I did so once for the YOLOv11 model myself, so I assume any researcher worth their salt would be able to do so too if they wanted to commercialize a similar architecture.
sigmoid10•8mo ago
You don't just need to reimplement the architecture (which is trivial even for non-researcher-level devs), you need to re-train the weights from scratch. According to the legal team behind YOLO, weights (including modifications via fine-tuning) fall under the AGPL as well, and you need to contact their sales team for a custom license if you want to deviate from AGPL.
kouteiheika•8mo ago
At least for the Ultralytics YOLO models this is also relatively easy (I've done it too). These models are tiny by today's standards, so training them from scratch even on consumer hardware is doable in reasonable time. The only tricky part is writing the training code which is a little more complicated than just reimplementing the architecture itself, but, again, if a random scrub like me can do it then any researcher worth their salt will be able to do it too.
sigmoid10•8mo ago
You don't just need the training algorithm, but also the training data. Which in turn might have additional license requirements.
kouteiheika•8mo ago
AFAIK their pretrained models just use publicly available datasets. From their README:

> YOLO11 Detect, Segment and Pose models pretrained on the COCO dataset are available here, as well as YOLO11 Classify models pretrained on the ImageNet dataset.

jimmydoe•8mo ago
Does the GPL still mean anything if you can ask an AI to read code A and reimplement it as code B?
msgodel•8mo ago
If that's legal then copyright is meaningless, which was the original intention of the GPL.
MoonGhost•8mo ago
So, uncopyrightable AI generated code is actually a good thing from open source community standpoint?
fc417fc802•8mo ago
Presumably it depends on the impacts. It's an ideology that seeks user freedom. If you need access to the source code to use as a template, that clearly favors proprietary offerings. But if you can easily clone proprietary programs, that favors the end user.
fc417fc802•8mo ago
The standard for humans is a clean room reimplementation so I guess you'd need 2 AIs, one to translate A into a list of requirements and one to translate that list back into code.

But honestly by the time AI is proficiently writing large quantities of code reliably and without human intervention it's unclear how much significance human labor in general will have. Software licensing is the least of our concerns.

dragonwriter•8mo ago
How would this kind of mechanical translation fail to be a violation of copyright?
silentsea90•8mo ago
Q: Do any of you know models that do well at deleting objects from an image, i.e. inpainting with a mask with the intention of replacing the mask with background? Whatever I've tried so far leaves a smudge (e.g. LaMa)
GaggiX•8mo ago
There are plenty of Stable Diffusion based models that are capable of inpainting, of course they are heavier to run than LaMa.
silentsea90•8mo ago
My question wasn't about inpainting in general but about eraser inpainting models. Most inpainting models replace objects instead of erasing them, even though the prompt states an intent to delete.
jokethrowaway•8mo ago
You can build a pipeline where you use: GroundingDINO (description to object detection) -> SAM (segmenting) -> a Stable Diffusion model (inpainting; I mainly do real photos, so I like to start with realisticVisionV60B1_v51HyperVAE-inpainting and then swap if I have some special use case)

For higher quality at a higher cost of VRAM, you can also use Flux.1 Fill to do inpainting.

Lastly, Flux.1 Kontext [dev] is going to be released soon and it promises to replace the entire flow (and with better prompt understanding). HN thread here: https://news.ycombinator.com/item?id=44128322
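One practical detail when chaining segmentation masks into an inpainting model, and a common cause of the "smudge" complaint above: dilating the mask a few pixels before inpainting, so the model repaints slightly past the object's edge instead of blending against leftover pixels. This is a general trick, not something the commenters above prescribe. A minimal pure-Python sketch on a binary grid (a real pipeline would do the same thing with `cv2.dilate` on a numpy mask):

```python
def dilate(mask, r=1):
    """Grow a binary mask (list of lists of 0/1) by r pixels in
    Chebyshev distance, so inpainting covers the object's halo too."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # Set every pixel within r of a masked pixel.
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w:
                            out[yy][xx] = 1
    return out
```

A single masked pixel in a 3×3 grid dilates (with r=1) to the full 3×3 neighborhood; in practice a few pixels of dilation is usually enough to avoid edge artifacts without erasing too much background.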

silentsea90•8mo ago
Thanks! I do use GroundingDino + SAM2, but haven't tried realisticVisionV60B1_v51HyperVAE-inpainting! Will do! And will try flux kontext too. Thanks!
pavl•8mo ago
This looks so good! Will it be available on replicate?
greesil•8mo ago
I've got big plans for this for an automated geese scaring system
zachflower•8mo ago
Funnily enough, that was my computer science capstone project back in 2010!

I don’t know if our project sponsor ever got the company off the ground, but the basic idea was an automated system to scare geese off of golf courses without also activating in the middle of someone’s backswing.

greesil•8mo ago
If someone could sell it for $100 they'd make some serious money. The birds are fouling my pool, and the plastic owl does nothing. Right now I'm thinking it should make a loud noise, or launch a tennis ball randomly. The best part is I can have it disarm if it sees a person.
joshwa•8mo ago
My thought is just to rent it out to rich folks with lawns for a few hundred bucks a week. My contraption will have thermal detection, AI target discrimination, and precision targeting with a laminar-flow water stream. That's the plan, anyways.
mattlondon•8mo ago
Same here but for urban foxes.

We had motion-triggered sprinklers that worked great, but they did not differentiate between foxes and 4-year-old children if I forgot to turn them off, haha.

We have more or less 360-degree CCTV coverage of the garden via 7 or 8 CCTV cameras, so the rough plan is to use basic motion pixel detection to find frames with something happening, then fire those frames off for inference (rather than trying to stream all video feeds through the algorithm 24/7) and turn the sprinklers on. I hope to get to about 500ms end-to-end latency from detection to sprinklers/tap activated, to cement the "causality" of stepping into the garden and then ~immediately getting soaked and scared in the foxes' brains. Most of the latency will be the water physically moving to make the sprinklers start, but that is another issue really.

Probably will use a RPi 5 + AI Hat as the main local inference provider, and ZigBee controlled valve solenoid on the hose tap.
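The motion-gating step described above can be sketched with simple frame differencing: compare consecutive grayscale frames and only forward a frame to the (expensive) detector when enough pixels changed. Pure Python on flat pixel lists for illustration; the thresholds are arbitrary and a real setup on the RPi would use OpenCV on actual camera frames:

```python
def motion_score(prev, cur, thresh=25):
    """Fraction of pixels whose absolute grayscale difference
    between two frames exceeds `thresh` (0-255 scale)."""
    changed = sum(1 for p, c in zip(prev, cur) if abs(p - c) > thresh)
    return changed / len(cur)

def should_infer(prev, cur, min_fraction=0.01, thresh=25):
    """Gate: send the frame to the detector only if at least
    `min_fraction` of pixels changed since the previous frame."""
    return motion_score(prev, cur, thresh) >= min_fraction
```

This keeps the AI accelerator idle on static scenes, which is the whole point of gating inference behind cheap pixel differencing instead of streaming every feed through the model.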

akshitgaur2005•8mo ago
Brb, using this for the local tigers
joshwa•8mo ago
Likewise but for raccoons. Are you precision targeting or just broad sprinkler coverage? I need to make sure my cat doesn’t get hosed :-/

I got a cheap MLX90640 off AliExpress for target detection and a Grove Vision AI V2 module to use with an IR cam for classification/object tracking. ESP32 for fusion and servo/solenoid actuation.

Collab?

serf•8mo ago
not to be a grump, but why was this posted recently? Has something changed? Yolo-world has been around for a bit now.
3vidence•8mo ago
The drawback of YOLO architectures is that they use predefined object categories baked in during training. If you want to adapt YOLO to a new domain, you need to retrain it with your new category labels.

This work presents a version of YOLO that can work on new categories without needing to retrain the algorithm, instead using a real-time "dictionary" of examples that you can seamlessly update. Seems like a very useful algorithm to me.

Edit: apologies, I misread your comment. I thought it was asking why this is different than regular YOLO.

greesil•8mo ago
It was new to me, serf. And judging by the number of upvotes, it was new to a few other people too.
jimmydoe•8mo ago
This is one year old. Wonder why it was posted now.
MoonGhost•8mo ago
Old stuff is often reposted here to attract attention. It mostly goes unnoticed.
saithound•8mo ago
Needs (2024) in the title.