Nowadays it looks like YOLO absolutely dominates this segment. Can any data scientists chime in?
I think people have continued to work on it. There's no single dominant lab or developer; the comparison metrics are usually plotted on the speed/mAP plane.
One nice thing is that even with modest hardware, it’s low enough latency to process video in real time.
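To make "low enough latency for real-time video" concrete, here is a back-of-the-envelope sketch in plain Python. The latency figure and frame rates are illustrative assumptions, not measured benchmarks of any particular model.

```python
# Back-of-the-envelope check: does a detector's per-frame inference
# latency fit a real-time video budget? Numbers are illustrative
# assumptions, not benchmarks.

def realtime_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

def fits_realtime(latency_ms: float, fps: float = 30.0) -> bool:
    """True if inference fits within the per-frame budget."""
    return latency_ms <= realtime_budget_ms(fps)

# A hypothetical small detector at ~20 ms/frame clears a 30 FPS
# budget (~33.3 ms/frame) but not a 60 FPS one (~16.7 ms/frame).
print(fits_realtime(20.0, fps=30.0))  # True
print(fits_realtime(20.0, fps=60.0))  # False
```

The point is just that "real time" is a latency budget, so modest hardware works as long as inference stays under roughly 33 ms per frame at 30 FPS.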
But the exciting new research is moving beyond the narrow task of segmentation: it's not just about new models that get better scores, but about building larger multimodal systems, broader task definitions, etc.
I gave MXNet a bit of an outsized score in hindsight, but outside of that I think I got things mostly right.
Has anyone else had similar experiences?
CaptainOfCoit•3h ago
Seems they were pretty spot on! https://trends.google.com/trends/explore?date=all&q=pytorch,...
But to be fair, it was kind of obvious around ~2023 without having to look at metrics/data, you just had to look at what the researchers publishing novel research used.
Any similar articles that are a bit more up to date, maybe even for 2025?
Legend2440•3h ago
Unless you’re working at Google, then maybe you use JAX.
mattnewton•2h ago
fleahunter•1h ago
In my experience, a lot of it comes down to the community and ease of use. Debugging in PyTorch feels way more intuitive, and I wonder if that's why so many people are gravitating toward it. I've seen countless tutorials and workshops pop up for PyTorch compared to TensorFlow recently, which speaks volumes about how quickly things can change.
But then again, TensorFlow's got its enterprise backing, and I can't help but think about the implications of that. How long can PyTorch ride this wave before it runs into pressure from industry demands? And as we look toward 2025, do you think we'll see a third contender emerge, or will it continue to be this two-horse race?
CaptainOfCoit•58m ago
PyTorch has a huge collection of companies, organizations, and other entities backing it; it's not gonna suddenly disappear, that much is clear. Take a look at https://pytorch.org/foundation/ for a sample.
kenjackson•34m ago
bonoboTP•25m ago
All the graph building and session running was way too complex: too much global state, and variable sharing was complicated, built on naming conventions, variable scopes, name scopes, and so on.
It was an okay try, but that design simply didn't work so well for quick prototyping, iterating, debugging that's crucial in research.
PyTorch was much closer to just writing straightforward numpy code. TensorFlow 2 then tried to catch up with "eager mode", but in the background it was still a graph and tracing often broke and you had to write the code very carefully and with limitations.
In the end, PyTorch also developed proper production and serving tools as well as graph compilation, so now there's basically no reason to go with TensorFlow. Not even Google researchers use it (they use JAX). I guess some industries still use it, but at some point I expect Google to shut down TF and focus on the JAX ecosystem, with some kind of conversion tooling for TF.