
Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•42s ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•2m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
3•codexon•3m ago•1 comment

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•4m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•7m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•8m ago•0 comments

The Optima-l Situation: A deep dive into the classic humanist sans-serif

https://micahblachman.beehiiv.com/p/the-optima-l-situation
2•subdomain•8m ago•0 comments

Barn Owls Know When to Wait

https://blog.typeobject.com/posts/2026-barn-owls-know-when-to-wait/
1•fintler•9m ago•0 comments

Implementing TCP Echo Server in Rust [video]

https://www.youtube.com/watch?v=qjOBZ_Xzuio
1•sheerluck•9m ago•0 comments

LicGen – Offline License Generator (CLI and Web UI)

1•tejavvo•12m ago•0 comments

Service Degradation in West US Region

https://azure.status.microsoft/en-gb/status?gsid=5616bb85-f380-4a04-85ed-95674eec3d87&utm_source=...
2•_____k•12m ago•0 comments

The Janitor on Mars

https://www.newyorker.com/magazine/1998/10/26/the-janitor-on-mars
1•evo_9•14m ago•0 comments

Bringing Polars to .NET

https://github.com/ErrorLSC/Polars.NET
3•CurtHagenlocher•16m ago•0 comments

Adventures in Guix Packaging

https://nemin.hu/guix-packaging.html
1•todsacerdoti•17m ago•0 comments

Show HN: We had 20 Claude terminals open, so we built Orcha

1•buildingwdavid•17m ago•0 comments

Your Best Thinking Is Wasted on the Wrong Decisions

https://www.iankduncan.com/engineering/2026-02-07-your-best-thinking-is-wasted-on-the-wrong-decis...
1•iand675•17m ago•0 comments

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•18m ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•20m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•22m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•27m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•28m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•28m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
2•Anon84•32m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•34m ago•1 comment

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•35m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•42m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
2•shervinafshar•43m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•48m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
10•mooreds•49m ago•4 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•50m ago•0 comments

Dual RTX 5060 Ti 16GB vs. RTX 3090 for Local LLMs

https://www.hardware-corner.net/guides/dual-rtx-5060-ti-16gb-vs-rtx-3090-llm/
14•pietrushnic•8mo ago

Comments

supermatt•8mo ago
What is the difference like with batching?

It seems all these tests only compare a single prompt at a time, which for the most part is throttled by memory bandwidth (faster on the 3090) and clock speed (faster on the 5060).

The 3090 has almost 3x the cores of a 5060, so I'm guessing it will absolutely wipe the floor with the dual 5060 setup for batched inference, which is increasingly essential for agentic workflows and complex tool use.
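
Back-of-envelope sketch of why (assumed, approximate numbers, not measurements): single-stream decoding re-reads every weight per token, so it's bandwidth-bound; batching amortizes those reads across requests until compute becomes the limit.

    # Sketch: aggregate decode throughput vs. batch size; all numbers approximate.
    # Assumes ~2 FLOPs per parameter per token, an ~18 GB 4-bit 32B model,
    # ~936 GB/s bandwidth and ~71 dense FP16 TFLOPS (3090-class figures).
    def decode_tok_s(batch, weight_gb=18, bw_gbs=936, flops_tf=71, params_b=32):
        bandwidth_cap = bw_gbs / weight_gb                      # tok/s per stream
        compute_cap = flops_tf * 1e12 / (2 * params_b * 1e9)    # tok/s across streams
        return min(bandwidth_cap, compute_cap / batch) * batch  # aggregate tok/s

    for b in (1, 8, 32):
        print(b, round(decode_tok_s(b)))  # 1: ~52, 8: ~416, 32: ~1109

At batch 1 the bandwidth cap (~52 tok/s here) dominates no matter how many cores you have; by batch 32 the same weight reads serve 32 streams and raw compute takes over, which is where the 3090's extra cores should tell.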

Havoc•8mo ago
One substantial downside is other uses: e.g. I also use my desktop for gaming, and a 3090 beats a 5060 easily there, by a sizable margin (~33% on some games).

Not sure I'd trade more LLM VRAM for that.

esafak•8mo ago
Reading this gave me flashbacks to the 80s, when tinkerers tried to move utilities into the upper and extended memory areas to free up precious conventional memory, 640KB of which we were told ought to have been "enough for anyone". All this because we were saddled with a 16-bit OS. This is not an LLM problem -- 32GB of memory is peanuts in 2025 -- this is an Intel and AMD problem.

zamadatix•8mo ago
As the article highlights, the problem is really twofold: you need enough VRAM to load the model at all, but there also needs to be enough bandwidth that accessing all of that memory is fast enough to be worthwhile. It'd be "easy" to slap 2 TB of "slow" DDR5 onto a GPU, but it wouldn't perform much better than a high-core-count CPU running LLMs with the same memory.
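
To put rough, illustrative numbers on that (not figures from the article): each decoded token re-reads the full weight set, so throughput is capped by bandwidth divided by model size, and a huge-but-slow pool barely helps.

    # Ceiling: tok/s <= memory bandwidth / bytes read per token (illustrative).
    model_gb = 18  # ~32B params at 4-bit
    for name, bw_gbs in [("RTX 3090", 936), ("hypothetical 2 TB of slow DDR5", 60)]:
        print(f"{name}: <= {bw_gbs / model_gb:.0f} tok/s")  # ~52 vs ~3
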
omneity•8mo ago
I am not entirely surprised by the relative equivalence for the sparse model: the combined bandwidth of 2x 5060 Ti ≃ 1x 3090. There are inefficiencies in multi-GPU setups that are less negligible at larger model dimensions, which is why the dense 32B model performs significantly worse on the dual 5060 setup.

For reference, I am getting ~40 output tok/s on a 4090 (450W) with Qwen3 32B and a context window of 4096.
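
Both figures roughly check out on spec sheets (a sketch: the bandwidth numbers are published specs quoted from memory, and the ~18 GB quantized model size is my assumption):

    # Published memory bandwidths in GB/s (approximate, quoted from memory).
    bw = {"RTX 5060 Ti 16GB": 448, "RTX 3090": 936, "RTX 4090": 1008}
    print(2 * bw["RTX 5060 Ti 16GB"], "vs", bw["RTX 3090"])   # 896 vs 936 GB/s
    print(round(bw["RTX 4090"] / 18))  # ~56 tok/s ceiling at ~18 GB; ~40 observed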

> Ultimately, as the user note aptly put it, the decision largely boils down to how much context you anticipate using regularly.

Hah. (emphasis mine)