
Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•3m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•4m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•4m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•5m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•5m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•6m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•7m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•10m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•13m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•13m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•19m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•20m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•20m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•23m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•26m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•26m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•26m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•26m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•28m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•30m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•32m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•34m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•35m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•35m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•38m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•41m ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•44m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
4•Tehnix•44m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•46m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•46m ago•6 comments

GPU Price Tracker

https://www.unitedcompute.ai/gpu-price-tracker
54•ushakov•9mo ago

Comments

throwawayffffas•9mo ago
What about AMD cards?
kubb•9mo ago
The website has an ".ai" domain. It's about people wanting to run inference and maybe mine cryptocurrency, and for some reason only NVIDIA cards are used for that.
IshKebab•9mo ago
Some reason: CUDA
throwawayffffas•9mo ago
You can run inference on AMD cards; ROCm[1] is a thing. I am running inference on AMD cards locally. Plus, the highest-performing cards for computational workloads are AMD's[2], though of course you can't buy those on Amazon.

1. https://rocm.docs.amd.com/en/latest/index.html 2. https://www.amd.com/en/products/accelerators/instinct/mi300....
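
A minimal sketch of what that local setup can look like, assuming a ROCm build of PyTorch (the model here is a throwaway placeholder; ROCm builds expose the AMD GPU through the usual "cuda" device alias):

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # torch.version.hip is a version string on ROCm builds and None on CUDA builds.
    print("ROCm/HIP build:", torch.version.hip)
    if device.type == "cuda":
        print("Device:", torch.cuda.get_device_name(0))

    # Tiny stand-in model; a real checkpoint would be moved to the device the same way.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device).eval()

    with torch.inference_mode():
        x = torch.randn(8, 512, device=device)
        logits = model(x)
    print(logits.shape)  # torch.Size([8, 10])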

the__alchemist•9mo ago
CUDA.
frognumber•9mo ago
I'm curious what's responsible for the current uptick.
Alifatisk•9mo ago
I own Nvidia stock; I believe it will still climb higher than ever before. But at home, my setup has AMD components because they are better value.

I am more into AMD cards than anything, and I wish this site tracked AMD prices as well.

thebruce87m•9mo ago
> I believe they will still climb higher than ever before.

I think this expectation is already priced in. I invested when I saw LLMs kicking off with no reflection in the NVIDIA share price and made 10x when the market caught up.

Now, with tariff uncertainty and Trump signalling to China (via Russia) that there would be no repercussions for invading Taiwan, I'm less convinced there is growth there, and there's the possibility of massive loss. In the days of meme stocks this might not matter, of course.

Note that an invasion of Taiwan would have huge implications everywhere, but any company that needs leading-edge semiconductors to directly sell its products would be screwed more than others.

Alifatisk•9mo ago
Really? You think Nvidia will go downhill from here?
amelius•9mo ago
Why is memory stuck at such low values while applications clearly demand more?
mckirk•9mo ago
My guess would be 'artificial scarcity for the purpose of market segmentation', because people probably wouldn't buy that many of the expensive professional cards if the consumer cards had a ton of VRAM.
amelius•9mo ago
Ok, time to start supporting another brand folks.
cubefox•9mo ago
Nvidia indeed does this with the *60 cards, which are limited to 8 GB. They probably copied this upselling strategy from Apple laptops.
nacs•9mo ago
Except now Apple, with its shared VRAM/RAM model, has better deals than Nvidia, especially past 24GB of VRAM (for inference at least).

A MacBook or Mac Mini with 32GB for the whole system is now cheaper than a 24GB Nvidia card.

cubefox•9mo ago
That's an interesting point about unified memory. But I assume most people will use their graphics card for video games rather than machine learning inference. And most non-console games are programmed for Windows and non-unified memory.
YetAnotherNick•9mo ago
HBM is in very limited supply and Nvidia tries to buy all the stock it can find at any price[1][2]. So the memory literally can't be increased.

[1]: https://www.nextplatform.com/2024/02/27/he-who-can-pay-top-d...

[2]: https://www.reuters.com/technology/nvidia-clears-samsungs-hb...

WithinReason•9mo ago
How about GDDR?
YetAnotherNick•9mo ago
If bandwidth is not the issue, you could directly use system memory via PCIe[1]. No need for on-chip memory.

[1]: https://developer.nvidia.com/gpudirect
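
A minimal sketch of that idea, assuming PyTorch and any CUDA/ROCm device. This is plain pinned-host-memory streaming over PCIe, not GPUDirect itself: the large buffer stays in system RAM and chunks are copied into a small VRAM staging tensor as needed.

    import torch

    device = torch.device("cuda")  # on ROCm builds this alias still targets the AMD GPU

    # The large buffer lives in pinned system RAM rather than VRAM (~256 MB here).
    host_data = torch.randn(64, 1024, 1024).pin_memory()

    gpu_chunk = torch.empty(1024, 1024, device=device)  # small staging buffer in VRAM
    total = torch.zeros((), device=device)

    for i in range(host_data.shape[0]):
        # Asynchronous host-to-device copy over PCIe from pinned memory.
        gpu_chunk.copy_(host_data[i], non_blocking=True)
        total += gpu_chunk.sum()  # stand-in for real GPU work on the chunk

    torch.cuda.synchronize()
    print(total.item())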

WithinReason•9mo ago
Bandwidth is almost always an issue, but not enough to be worth buying dedicated HW for.
karmakaze•9mo ago
I'm surprised the RTX cards don't have Terms of Use that prohibit running CUDA on them. They already removed NVLink from the 40-series onward. Maybe running 8K VR could use the 32GB on the 5090, but I can't imagine much else that's not compute.

I'm looking forward to newer APUs with onboard 'discrete' GPUs, quad-channel or wider LPDDR5X+, and 128GB+ of unified memory that cost less than an M3 Ultra.

sokoloff•9mo ago
Applications that consumers use (games and desktop) work fine with the amount of memory that consumer GPUs have.

GPUs targeting more RAM-hungry applications exist, but they’re quite a bit more expensive, so people who play games buy gaming GPUs while people who need more VRAM buy cards targeting that application.

Why would a consumer want to pay for 40GB of VRAM if 12GB will do everything they need?

Const-me•9mo ago
> work fine with the amount of memory that consumer GPUs have

Most consumers buy GPUs to play videogames. Recently, nVidia launched two flavors of the 5060 Ti consumer GPU, with 8GB and 16GB of memory; the cards are otherwise identical.

Apparently, the 8GB version is only good for 1080p resolution with no DLSS. In many games, the difference between the two versions is very substantial. The article says the 8GB version was deprecated right at launch: https://videocardz.com/newz/nvidia-geforce-rtx-5060-ti-with-...

sokoloff•9mo ago
It looks like the 8GB cards are about $60 (10-12%) cheaper than the 16GB cards.

I sure don't want a world where we only have 32GB 5090s and nVidia reaching farther down the price-vs-performance curve to offer consumers a more affordable (but lower performing) choice seems like a good, rather than a bad, thing to me. (I genuinely don't see the controversy here.)

Const-me•9mo ago
> 8GB cards are about $60 (10-12%) cheaper

nVidia says the launch prices for them are $400 and $500, respectively.

> seems like a good, rather than a bad, thing to me

The bad thing is that the most affordable current-generation card capable of playing modern videogames in decent quality now costs $500. That's too expensive for most consumers, I'm afraid.

The Steam hardware survey says 7 out of the 10 most popular cards are nVidia [234]060 models, which sold for around $300. Although most of them also have 8GB of VRAM, when consumers bought these cards a few years ago 8GB was good enough for the videogames released at that time.

sokoloff•9mo ago
If you're defining 4K@60Hz or 4K@120Hz as the left extreme of "decent quality", then sure.

Legacy Blu-Ray discs maxed out at 1080p30. Most people would consider those discs to be "decent quality" (or more realistically even "high quality") video, and a $400 video card is well capable of playing modern games at (or even above) that resolution and framerate. The entry-level 5060 cards are also good enough for video games released at this time, in either memory trim.

Const-me•9mo ago
> If you're defining 4K@60Hz or 4K@120Hz as the left extreme of "decent quality", then sure.

The 8GB version struggles at 1440p and only delivers playable framerates at 1080p with some combination of in-game settings. Here's the original article: https://www.techspot.com/review/2980-nvidia-geforce-rtx-5060...

I agree with the author: that level of performance for $400 is inadequate. BTW, I remember buying an nVidia 960 for about $200 ten years ago. For the next couple of years, the performance stayed pretty good even for newly released games.

os2warpman•9mo ago
There are supply constraints at almost every single step in the GPU supply chain.

An earthquake three months ago, production issues, and insatiable demand mean that every single GDDR/HBM chip being made at factories already operating at maximum capacity has been sold to a waiting customer.

If Nvidia wanted to double the amount of VRAM on their products, the only thing that would happen is the supply of finished products would be halved.

No amount of money can fix it, only time.

dsign•9mo ago
Maybe Moore's law is dead, but its sister, the yearly doubling of computing hunger, seems alive and well. And wait until we get bored of using GPUs to make artists starve and finally leverage AI for something fundamentally useful (or terrible), like helping billionaires live forever...
tossandthrow•9mo ago
Not really a GPU tracker - more like an Nvidia card comparison.

AMD has some really interesting things on the drawing board, and Apple should definitely be in the mix.

krzyk•9mo ago
Not even all of Nvidia - it's missing e.g. the 3060.
_ea1k•9mo ago
Yeah, Apple devices and Nvidia Spark would make for interesting additions. I wish we had something like a performance/$ comparison between architectures.
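
A sketch of what such a performance/$ comparison could look like; every number and card name below is a made-up placeholder, not a real benchmark or price:

    # Illustrative only: rank devices by inference throughput per dollar.
    cards = {
        "hypothetical NVIDIA card": {"tokens_per_sec": 120.0, "price_usd": 2000},
        "hypothetical AMD card":    {"tokens_per_sec": 100.0, "price_usd": 1400},
        "hypothetical Apple SoC":   {"tokens_per_sec":  60.0, "price_usd": 1600},
    }

    def perf_per_dollar(spec):
        return spec["tokens_per_sec"] / spec["price_usd"]

    for name, spec in sorted(cards.items(), key=lambda kv: perf_per_dollar(kv[1]), reverse=True):
        print(f"{name:>26}: {perf_per_dollar(spec):.3f} tokens/sec per $")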
RolandTheDragon•9mo ago
Would love to see this become an arena where I can let my local GPU "fight" against other ones.
Obertr•9mo ago
It would be nice to have something like a score that indicates how powerful a card is relative to its price, to see which one is the best value.

Nit: when clicking on the name, I would like to be redirected to Amazon. The link on the far right was hard to find. :)

casey2•9mo ago
LAWL "It's not a bubble"
prhn•9mo ago
A price tracker should be more sophisticated than just pulling a single listing from Amazon and dumping whatever price a random 3rd party reseller is listing it for.

The prices here are not indicative of the market at all. They're a single sample from the most egregious offender.

More data points for a 5090 Founders:

   - Amazon $4,481
   - StockX $3,447
   - eBay   $3,500-$4,000

I hope whatever product "United Compute" is selling is more thoughtfully constructed than this.
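
A sketch of the aggregation being asked for, using the prices quoted above; the sources and data layout are illustrative, not the site's actual design:

    from statistics import median

    # Prices are the figures quoted in the comment above.
    listings = {
        "RTX 5090 Founders": {
            "Amazon": [4481.0],
            "StockX": [3447.0],
            "eBay":   [3500.0, 4000.0],
        },
    }

    for card, sources in listings.items():
        prices = [p for source_prices in sources.values() for p in source_prices]
        print(f"{card}: median ${median(prices):,.0f} across {len(prices)} listings "
              f"from {len(sources)} sources (range ${min(prices):,.0f}-${max(prices):,.0f})")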
AtheistOfFail•9mo ago
It's a .ai domain... they're selling a "Wrapped LLM".
flessner•9mo ago
Furthermore, the menu is missing a close button, the components look like shadcn or AI generated, and overall it's not well optimized for mobile.

Also, listing Coca-Cola in the team section without any indication of a partnership or investment (likely as a joke) is not a smart move.

It looks like - and probably is - a random assortment of projects from a single person; the "branding" simply doesn't reflect this.

bcraven•9mo ago
I find skinflint[0] is good for this sort of long-term tracking.

[0] https://skinflint.co.uk/?cat=gra16_512&view=gallery&pg=1&v=e...

cyberpower1•9mo ago
I think this is exactly what you are referring to: https://gpuprices.ai/
_ea1k•9mo ago
It really is amazing how much these have increased. An NVidia 3090 for almost as much as the MSRP of a 5090? Incredible!
the__alchemist•9mo ago
This is now giving me a scarcity mindset; when prices go back to normal, I'll only buy top tier, so it lasts longer before needing an upgrade. Screwed that one up last time; bought a 4080 when the window opened for a few weeks. (You could just buy them direct from Nvidia's page for a bit.)
mciancia•9mo ago
A 3090 is like 600-800 USD used, with basically no new stock.

They have shit data, since Amazon doesn't really sell most of those cards and they do no validation.

jerryseff•9mo ago
Any plans to make this available as an API, or to add common purchase links for each card to back the "live" pricing data?
ein0p•9mo ago
Strange specs table - it seems to ignore the tensor core FLOPs, which is what you'd be using most of the time if you're interested in computational throughput.
disqard•9mo ago
I typed in "RTX A2000" and got zero results.

I guess this website is for folks who're trying to equip a data center in their backyard with H100s?