
Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•4m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•5m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•5m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
3•bookofjoe•5m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•6m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•7m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•8m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•8m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•8m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•8m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•9m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•10m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•10m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•11m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•15m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•15m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•16m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•16m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•18m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•18m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•19m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•19m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•19m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•20m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•20m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•21m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•23m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•23m ago•1 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•29m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•29m ago•0 comments

The State of Machine Learning Frameworks in 2019

https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
40•jxmorris12•3mo ago

Comments

CaptainOfCoit•3mo ago
> In 2019, the war for ML frameworks has two remaining main contenders: PyTorch and TensorFlow. My analysis suggests that researchers are abandoning TensorFlow and flocking to PyTorch in droves.

Seems they were pretty spot on! https://trends.google.com/trends/explore?date=all&q=pytorch,...

But to be fair, it was kind of obvious around ~2023 without having to look at metrics/data, you just had to look at what the researchers publishing novel research used.

Any similar articles that are a bit more up to date, maybe even for 2025?

Legend2440•3mo ago
It’s still all pytorch.

Unless you’re working at Google, then maybe you use JAX.

mattnewton•3mo ago
JAX is quite popular in many labs outside of Google doing large-scale training runs, because up until recently the parallelism ergonomics were way better. PyTorch core is catching up (maybe already with the latest release, haven't used it yet), and there are a lot of PyTorch-using projects to study though.

jonas21•3mo ago
I feel like it was all pretty obvious by late 2017. Prototyping and development in PyTorch was so much easier - it felt just like writing normal Python code. And the supposed performance benefits of the static computation graph in TensorFlow didn't materialize for most workloads. Nobody wanted to use TensorFlow - though you often had to when working on existing codebases.

I think the only thing that could have saved TensorFlow at that point would have been some sort of enormous performance boost that would only work with their computation model. I'm assuming Google's plan was to make it easy to run the same TensorFlow code on GPUs and TPUs, and then swoop in with TPUs that massively outperformed GPUs (at least on a performance-per-dollar basis). But that never really happened.
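
The ergonomic difference jonas21 describes is easy to see in a toy example. This is a generic sketch assuming PyTorch is installed, not code from the article or thread:

```python
import torch

# Eager execution: ordinary Python control flow operates on concrete tensors,
# so a step reads like normal Python -- no graph construction, no session.
def step(x: torch.Tensor) -> torch.Tensor:
    if x.sum() > 0:        # a plain `if` on a real value, evaluated immediately
        return x * 2
    return x - 1

out = step(torch.tensor([1.0, 2.0]))
print(out)  # tensor([2., 4.])
```

In TF1's static-graph model, the equivalent branch had to be expressed as graph ops (`tf.cond`) and run through a session, which is what made prototyping feel so much heavier.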

oceansky•3mo ago
In 2019 I delivered an instance segmentation project using Mask R-CNN and TensorFlow.

Nowadays it looks like YOLO absolutely dominates this segment. Any data scientists care to chime in?

deepsquirrelnet•3mo ago
I haven’t used RCNN, but trained a custom YOLOv5 model maybe 3-4 years ago and was very happy with the results.

I think people have continued to work on it. There's no single lab or developer behind it; the metrics used for comparison are usually focused on the speed/mAP plane.

One nice thing is that even with modest hardware, it’s low enough latency to process video in real time.

bonoboTP•3mo ago
SAM (Segment Anything Model) by Meta is a popular go-to choice for off the shelf segmentation.

But the exciting new research is moving beyond the narrow task of segmentation. It's not just about having new models that get better scores but building larger multimodal systems, broader task definitions etc.

jszymborski•3mo ago
lil' self promo but I made a similar blog post in 2018.

I gave mxnet a bit of an outsized score in hindsight, but outside of that I think I got things mostly right.

https://source.coveo.com/2018/08/14/deep-learning-showdown/

jph00•3mo ago
We knew in 2017 that PyTorch was the future, so moved all our research and teaching to it: https://www.fast.ai/posts/2017-09-08-introducing-pytorch-for...

Scene_Cast2•3mo ago
I found out that in the embedded world (think microcontrollers without an MMU), TensorFlow Lite is still the only game in town (pragmatically speaking) for vendor-supported hardware acceleration.

leviliebvin•3mo ago
I recently tried to port my model to JAX. I got it all working the "JAX way", and I believe I did everything correctly, with one neat top-level jax.jit() applied to the training step. Unfortunately I could not replicate the performance boost of torch.compile(). I have not yet delved under the hood to find the culprit, but my model is fairly simple, so I was sort of expecting JAX's JIT to perform just as well as torch.compile(), if not better.

Has anyone else had similar experiences?

yberreby•3mo ago
JAX code usually ends up being way faster than equivalent torch code for me, even with torch.compile. There are common performance killers, though, notably using Python control flow (if statements, loops) instead of jax.lax primitives (where, cond, scan, etc.).

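
A minimal sketch of that failure mode, assuming JAX is installed (not code from the thread):

```python
import jax
import jax.numpy as jnp

# Branching expressed as data: jnp.where stays inside one compiled graph.
@jax.jit
def relu_where(x):
    return jnp.where(x > 0, x, 0.0)

# Python control flow on a traced array forces JAX to concretize the tracer,
# which raises an error under jit rather than silently compiling.
@jax.jit
def relu_if(x):
    if (x > 0).all():   # plain `if` on a traced value: not allowed inside jit
        return x
    return jnp.zeros_like(x)

x = jnp.array([-1.0, 2.0])
print(relu_where(x))  # [0. 2.]
try:
    relu_if(x)
except Exception as e:
    print(type(e).__name__)
```

For genuine data-dependent branches under jit, `jax.lax.cond` plays the role of the `if`.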
leviliebvin•3mo ago
Interesting. Thanks for your input. I tried to adhere to the JAX paradigm as laid out in the documentation, so I already have a fully static graph.

pama•3mo ago
I would test how much of the total FLOP capability of your hardware you are actually using. Take the first-order terms of your model and estimate how many FLOPs you need per data point (a good guide is 6*params for training if you mostly have large matrix multiplies and nonlinearity/norm layers), then compare the measured throughput for a given input size against the theoretical peak of the GPU (e.g. ~1e15 FLOP/s for bfloat16 on an H100 or H200). If you are already over 50%, it is unlikely you can get big gains without very considerable effort, and most likely plain JAX or PyTorch is not sufficient at that point. If you are in the 2–20% range, there are probably some low-hanging fruit left, and the closer you are to using only 1%, the easier it is to see dramatic gains.

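
That back-of-the-envelope check is a few lines of arithmetic. The model size, tokens per step, and step time below are made-up placeholders; the ~1e15 peak is the bfloat16 H100 figure cited above:

```python
# Rough FLOP utilization estimate, using the 6 * params rule of thumb
# for training FLOPs per token/data point.

def flop_utilization(params, tokens_per_step, step_time_s, peak_flops_per_s):
    """Fraction of the hardware's theoretical peak FLOP/s actually achieved."""
    achieved = 6.0 * params * tokens_per_step / step_time_s  # FLOP/s
    return achieved / peak_flops_per_s

# Hypothetical run: 1B-param model, 0.5M tokens per step, 8 s per step, one H100.
u = flop_utilization(params=1e9, tokens_per_step=5e5, step_time_s=8.0,
                     peak_flops_per_s=1e15)
print(f"{u:.1%}")  # 37.5% -> above the 20% band, still short of the 50% mark
```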
AndrewKemendo•3mo ago
Tensorflow was a revelation when it came out and Jeff & Sanjay were heralded as gods

Just goes to show that even when you’ve got everything going for you, perfect team filled with nice people, infinite resources (TPUs anyone?), perfect marketing, your own people will split off and take over the market.

Second place seems to always win the market