frontpage.

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•1m ago•1 comment

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•1m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
1•nick007•2m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•3m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•4m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•6m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•7m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
1•momciloo•8m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•8m ago•1 comment

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•8m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•8m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•8m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•9m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•12m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•12m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•13m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•14m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•15m ago•0 comments

New wave of GLP-1 drugs is coming–and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
4•randycupertino•17m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•19m ago•0 comments

Show HN: Tasty A.F.

https://tastyaf.recipes/about
2•adammfrank•20m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•21m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•22m ago•1 comment

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•22m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•23m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•25m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•26m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•30m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•31m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•31m ago•0 comments

Nvidia Kicks Off the Next Generation of AI with Rubin

https://nvidianews.nvidia.com/news/rubin-platform-ai-supercomputer
55•TSiege•1mo ago

Comments

TSiege•1mo ago
Extreme Codesign Across NVIDIA Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU and Spectrum-6 Ethernet Switch Slashes Training Time and Inference Token Generation Cost

Technical details available here https://developer.nvidia.com/blog/inside-the-nvidia-rubin-pl...

Groxx•4w ago
... it took a couple searches to figure out that "extreme codesign" wasn't actually code-signing, but "co-design" like "stuff that was designed to work together"
alfalfasprout•4w ago
Same, I was so confused.
pyuser583•4w ago
Me too. Good style says to avoid creating words with dashes - it’s Un-American. But clarity matters more than rules.
gilrain•4w ago
Is there any American style guide that insists hyphens be avoided even when a closed compound would cause ambiguity? I follow Chicago, but I imagine other style guides also already emphasise clarity.
mortehu•4w ago
Wouldn't "code sign" be two words in English? And "code signing" rather than "code sign"?
Groxx•4w ago
Mostly yes, and I prefer it that way, but it does get smashed into a single word sometimes. "co-design" I've mostly only seen hyphenated, though I don't see it often enough or in broad enough contexts to really claim anything about the frequency in a general sense.

Maybe it's caused by `codesign` tools? Like `codesign --extreme` which probably requires two signers to sign one thing?

utopiah•4w ago
Even << "co-design" like "stuff that was designed to work together" >> sounds strange to me. Typically when I read about co-design, it's stuff that was designed together, by more than one party.
metalliqaz•4w ago
Elon's emoji-filled blurb for that press release is the most cringe thing I've seen this week.
cinntaile•4w ago
I find all the blurbs weird, do they usually include that? If not, why now? It doesn't look professional.
bredren•4w ago
I think it is interesting. Is there any other company in a position today that could put together endorsement quotes from such high ranking people across tech?

Also: Tim Cook / Apple is noticeably absent.

utopiah•4w ago
That's because of financial links. They are so intertwined propping up the same bubble that they will absolutely share quotes instantly. FWIW, I just skimmed through it, and the TL;DR sounds to me like "Look at the cool kid, we play together, we are cool too!" without any obvious information, anything meaningful or insightful, just boring marketing BS.
mrandish•4w ago
> They are so intertwined propping up the same bubble they are absolutely going to share quotes instantly.

Reading this line, I had a funny image form of some NVidia PR newbie reflexively reaching out to Lisa Su for a supporting quote and Lisa actually considering it for a few seconds. The AI bubble really has reached a level of "We must all hang together or we'll surely hang separately".

XorNot•4w ago
Why is that interesting?
bredren•4w ago
It could be an indicator that Apple is not as leveraged up on NVIDIA as to provide a quote. Cook did make a special one of a kind product for the current POTUS, so he is nothing if not pragmatic.
saaaaaam•4w ago
Quotes from known names in a boring corporate press release are absolutely standard. It gives journalists a hook to build a story. “Elon Musk says new Nvidia tech is…”
cinntaile•3w ago
You're right, they usually do this; I checked some press releases from last year. The big difference is that it's now the CEO who had to write the blurb instead of, e.g., a vice president of product.
saaaaaam•3w ago
Yeah I imagine that when the stakes are as high as they are with Nvidia they pull out the biggest names possible, partly to drive media but also as social proof. “All these important CEOs are prepared to go on the record - not just corporate droids who have to because it’s their job”.
dataking•4w ago
Because standing out gets attention?
saaaaaam•4w ago
I wonder what the significance of a green heart is, in Elon-world.
dannersy•4w ago
Riveting.
codyb•4w ago
If their new platform reduces inference token cost by 10x, does that play well or not with the recently extended GPU depreciation schedules companies have been using to reduce projected cost outlays?

For context, my understanding is that companies have recently moved their expected GPU depreciation cycles from 3 years to as high as 6, which has a huge impact on projected expenditures.

I wonder what the step was from the previous generation to the Blackwell platform. Is this one slower, which might indicate that the slower depreciation cycle is warranted, or faster?
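The accounting effect behind those extended schedules can be sketched with straight-line depreciation (the $30k per-GPU price and zero residual value below are hypothetical, for illustration only):

```python
def annual_straight_line(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense recognized each year."""
    return cost / useful_life_years

gpu_cost = 30_000.0  # hypothetical per-GPU price, not a real quote

expense_3yr = annual_straight_line(gpu_cost, 3)  # 10000.0 per year
expense_6yr = annual_straight_line(gpu_cost, 6)  # 5000.0 per year

# Stretching the schedule from 3 to 6 years halves the annual expense,
# flattering near-term earnings without changing the total cash spent.
print(expense_3yr, expense_6yr)
```

The total written off is identical either way; only the timing of the reported expense moves.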

m3kw9•4w ago
But the tokens required for quality generation may increase just as much very soon.
codyb•4w ago
Yea, definitely a good point. Going to be interesting to see how it plays out. I definitely do not have the expertise to answer the question
UltraSane•4w ago
Companies are playing games with GPU depreciation.
causal•4w ago
Unsure why you were downvoted; I'm curious to understand this comment. Playing finance and accounting games I presume you mean.
UltraSane•4w ago
Yes, they are depreciating GPUs over longer-than-usual periods, like 6 years.
cmxch•4w ago
The only thing learned from structured finance was to lock regular people out.
drexlspivey•4w ago
No way you throw away Blackwell GPUs after just 3 years. Google runs 8 year old TPUs still at 100% utilization. Why would you depreciate them in just 3 years?
ryanmcgarvey•4w ago
The conversation around GPU lifecycles seems to be conflating the various wear rates within the data center. My layman understanding is that the old 3-year replacement cycle had more to do with some component, not necessarily the memory or the processor, going wrong in half of the units by year 3, at which point GPUs were cheap enough and advancing fast enough that it was more cost-effective to upgrade than to fix. However, that calculus changes completely when the GPU and the HBM are orders of magnitude more expensive than the rest of the system. I suspect we will see repairs done on the various brittle bits of the system, while the actual core expensive components continue to operate much longer than 3 years.
Animats•4w ago
Their own CPU, too - 88 ARM cores.

So it's an all-NVidia solution - CPU, interconnects, AI GPUs.

tibbydudeza•4w ago
Afaik MediaTek helped them with the CPU part.
2OEH8eoCRo0•4w ago
Rebuild all the data centers!
metalliqaz•4w ago
lol haven't even started building half the Blackwell datacenters yet
mk_stjames•4w ago
Whenever I see press on these new 'rack scale' systems, the first thing I think is something along the lines of: "man I hope the BIOS and OS's and whatnot supporting these racks are relatively robust and documented/open sourced enough so that 40 years from now when you can buy an entire rack system for $500, some kid in a garage will be able to boot and run code on these".
wmf•4w ago
The firmware is UEFI and Vera should have good upstream support. The GPU driver is proprietary though, so you'll have to dig up the last supported version from 2036.
criemen•4w ago
What's the power hookup to just boot one rack? I'd imagine that's more than you get anywhere in residential areas for a single house.
embedding-shape•4w ago
Hopefully in 40 years we'll all be running miniature cold fusion power or something, so we can avoid burning the planet to the ground.
MisterTea•4w ago
Depends on the residence. I have personally seen a large house in Brooklyn with dual 200-amp 120/208-volt three-phase services (two meters, each feeding a panel). I have seen someone set up an old SGI rack-scale Origin 3000 system in their garage. I think they even had an electrician upgrade their service to accommodate it.
wmf•4w ago
170 kW
pureagave•4w ago
100% this. But don't forget the garden hose running full blast so you can cool it! It's not impossible to get up and running for fun for an hour, but this isn't a run 24/7 kinda setup any more than getting an old mainframe running in one's garage is practical.
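For scale, the line current implied by a ~170 kW rack can be estimated with the standard balanced three-phase power formula. The 480 V and 208 V service voltages and the unity power factor below are illustrative assumptions, not rack specs:

```python
import math

def three_phase_amps(power_w: float, line_voltage_v: float,
                     power_factor: float = 1.0) -> float:
    """Line current for a balanced three-phase load:
    I = P / (sqrt(3) * V_line * PF)."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)

rack_power_w = 170_000.0  # ~170 kW per rack, per the figure in the thread

amps_480v = three_phase_amps(rack_power_w, 480.0)  # ~204 A at 480 V
amps_208v = three_phase_amps(rack_power_w, 208.0)  # ~472 A at 208 V
print(amps_480v, amps_208v)
```

By comparison, a typical US home's 200 A service at 240 V single-phase tops out near 48 kW, well short of one rack.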
exacube•4w ago
Does anyone know how well this 5x petaflop improvement translates to real-world performance?

I know that memory bandwidth tends to be a big limiting factor, but I'm trying to understand how this factors into its overall perf compared to Blackwell.
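One rough way to frame the bandwidth-vs-compute question is a roofline model: achievable throughput is capped by the smaller of peak compute and memory bandwidth times arithmetic intensity (FLOPs performed per byte moved). All numbers below are hypothetical placeholders, not Rubin or Blackwell specs:

```python
def attainable_flops(peak_flops: float, mem_bw_bytes_per_s: float,
                     arithmetic_intensity: float) -> float:
    """Roofline model: throughput is the lesser of peak compute and
    memory bandwidth * arithmetic intensity (FLOPs per byte)."""
    return min(peak_flops, mem_bw_bytes_per_s * arithmetic_intensity)

# Hypothetical numbers for illustration only:
peak = 5e16      # 50 PFLOPS peak compute
bw = 1e13        # 10 TB/s memory bandwidth
low_ai = 100.0   # bandwidth-bound workload, e.g. decode-heavy inference
high_ai = 1e4    # compute-bound workload, e.g. large-batch training

print(attainable_flops(peak, bw, low_ai))   # 1e15: limited by memory
print(attainable_flops(peak, bw, high_ai))  # 5e16: limited by compute
```

The takeaway: a headline peak-FLOPS multiplier only shows up in full for workloads with high arithmetic intensity; bandwidth-bound workloads scale with the memory system instead.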

wmf•4w ago
The blog post has more technical details and fewer quotes from customers: https://developer.nvidia.com/blog/inside-the-nvidia-rubin-pl...
mrandish•4w ago
That link was somewhat clearer, thanks.

As a software guy who follows chip evolution more at a macro level (new design + process node enabling better cores/tiles/units/clocks, plus new architecture enabling better caches, busses, and I/O, equals better IPC, bandwidth, latency, and throughput at a given budget of cost, watts, heat, and space), I've yet to find anything that gives a sense of Rubin's likely lift over the prior generation grounded in macro-but-concrete specs (cores, tiles, units, clocks, caches, busses, IPC, bandwidth, latency, throughput).

Edit: I found something a bit closer after scrolling down on a sub-link from the page you linked (https://developer.nvidia.com/blog/inside-the-nvidia-rubin-pl...).

alecco•4w ago
For dev info we'll need to wait for GTC 2026 March 16–19. CES is just hype.
wmf•4w ago
They're intentionally drip-feeding information over time until the actual release.