
Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•3m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•3m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•4m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•6m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•10m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•22m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•27m ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•33m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•34m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
1•akagusu•34m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•37m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•42m ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•45m ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
5•DesoPK•49m ago•0 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•51m ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
33•mfiguiere•57m ago•19 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•59m ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
4•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

Why Are There Still So Many Jobs? The History and Future of Workplace Automation [pdf]

https://economics.mit.edu/sites/default/files/inline-files/Why%20Are%20there%20Still%20So%20Many%...
2•oidar•1h ago•0 comments

The Search Engine Map

https://www.searchenginemap.com
1•cratermoon•1h ago•0 comments

Show HN: Souls.directory – SOUL.md templates for AI agent personalities

https://souls.directory
1•thedaviddias•1h ago•0 comments

Real-Time ETL for Enterprise-Grade Data Integration

https://tabsdata.com
1•teleforce•1h ago•0 comments

Economics Puzzle Leads to a New Understanding of a Fundamental Law of Physics

https://www.caltech.edu/about/news/economics-puzzle-leads-to-a-new-understanding-of-a-fundamental...
3•geox•1h ago•1 comments

The G in GPU is for Graphics damnit

https://ut21.github.io/blog/triton.html
209•sebg•4mo ago

Comments

nomadygnt•4mo ago
This is cool! I love this kind of simulation GPU programming stuff. Reminds me of this awesome talk from Peter Whidden: https://youtu.be/Hju0H3NHxVI?si=V_UZugPSL9a8eHEM

Not as technical, but similarly cool.

Ocerge•4mo ago
This is awesome. It also brought back some anxiety from >10 years ago in college that reminds me that computer graphics and my brain do not agree whatsoever.
fragmede•4mo ago
Everything's just triangles and numbers, and my brain's no good with numbers. Linear algebra I can do though.
mouse_•4mo ago
It's not the numbers that freak me out, it's what they do to each other...
fragmede•4mo ago
eww...
Keyframe•4mo ago
Not always. Disregarding CSGs and parametrics, Nvidia itself was almost buried for not adhering to that philosophy with their first product: https://en.wikipedia.org/wiki/NV1

Funny side note: SEGA invested $5m in Nvidia then, after the fiasco, to keep them alive. They sold that equity for roughly $15m when Nvidia IPO'd. Had they kept it, it would be worth $3b today, while SEGA's market cap is around $4b.

spaceballbat•4mo ago
Funny, Nvidia's first 3D accelerator used quadratic surfaces instead of triangles.
throwaway808081•4mo ago
People who love linear algebra reserve a special space, of either fondness or of hate, for Euclidean space.
Animats•4mo ago
Graphics is trivial until you get to shadows and lighting. Then all the simple tricks stop working.
GuB-42•4mo ago
Global illumination is the hard part. The math isn't that hard, but even the best render farms don't have enough computing power for a straightforward implementation.

So what follows is an endless series of tricks. Path tracing is one of the purest approaches, and it is actually a simple algorithm to implement, but if you don't want a noisy mess on anything but the simplest scenes, now we are talking PhDs and rock-star developers.
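
To put numbers on the "noisy mess": a path tracer estimates each pixel with a Monte Carlo average, and the error of such an estimate only falls like 1/sqrt(N), so halving the noise costs four times the samples. A minimal Python illustration of that scaling (a toy integrand standing in for a pixel estimate, not any particular renderer):

    import numpy as np

    rng = np.random.default_rng(0)

    def pixel_estimate(n_samples):
        # Toy Monte Carlo estimate: the average of a random integrand,
        # standing in for the light-transport integral a path tracer
        # evaluates per pixel.
        return np.mean(rng.uniform(0.0, 1.0, n_samples) ** 2)

    for n in (16, 256, 4096):
        runs = np.array([pixel_estimate(n) for _ in range(500)])
        # The spread (the noise) drops by roughly 4x for every 16x more samples.
        print(n, round(runs.std(), 4))

That 1/sqrt(N) wall is why all the tricks (importance sampling, denoising, and so on) matter so much.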

Lwerewolf•4mo ago
Would you mind linking some articles, or hinting at techniques used to "coerce" the choice of ray sample directions so that noise is minimized even in very "specular" scenes? Sorry for the lack of proper terminology on my end; I've been out of the loop for a very long time, but I assume that's where the majority of the tricks are. I suppose the rest is mostly intersection-test acceleration (i.e. BVHs).
omcnoe•4mo ago
The modern state of the art for realtime is ML denoisers: they take noisy pixel data from multiple frames, plus other associated data (e.g. velocity vectors of the geometry, depth data, etc.), and use it to produce a perfectly denoised image.
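
Leaving the ML model aside, the simplest building block of the "multiple frames" part is temporal accumulation: an exponential moving average of (reprojected) frames. A rough Python sketch of just that idea; real denoisers also use motion vectors, depth, and history rejection:

    import numpy as np

    def temporal_accumulate(history, noisy_frame, alpha=0.1):
        # Blend a little of the new noisy frame into the accumulated history.
        # The longer a pixel stays put, the more effective samples it has,
        # which is why static shots look cleaner than moving ones.
        return (1.0 - alpha) * history + alpha * noisy_frame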
Sohcahtoa82•4mo ago
"perfectly" is doing some heavy lifting here.

Right now, I'm heavily into Cyberpunk 2077. I've got an RTX 5090, so I can turn all the details, including the ray tracing, to their maximum settings. It's absolutely gorgeous (Especially on my 4K HDR OLED monitor), but if you look really closely, you can still see the evidence of some shortcuts being taken.

Some reflections that are supposed to be a bit rough (like a thin puddle in the road) may appear a bit blurry as I'm walking, but will come into better focus when I stop. My guess is that as I'm moving, the angles of the rays being reflected change with every frame, making the data very noisy. Once I stop, they become consistent, so the reflection becomes clear.

namibj•4mo ago
Metropolis light transport is a big one.
nomel•4mo ago
> but even the best render farms don't have enough computing power to support a straightforward implementation.

Back in 2014, Disney's Hyperion farm supported 10-20 software bounces! [1] Does this count, or does it still require "cheats"?

[1] https://www.engadget.com/2014-10-18-disney-big-hero-6.html

Sohcahtoa82•4mo ago
> Graphics is trivial until you get to shadows and lighting

And reflections and refractions.

Raster-based reflections are a simple shortcut: just take the rendered image of what's being reflected and invert it.

But that doesn't work when the reflected object is off screen. As a result, if you're over water that's reflecting a city skyline or something in the distance and then pitch the camera down, the reflection vanishes as the skyline goes off screen.

Alternatively, you can create an environment-mapped texture, but then the reflection doesn't show what's actually there, just an approximation of it.

I find it incredibly distracting in games. It's like bad kerning: Once you know what it looks like, you see it EVERYWHERE.
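
A deliberately crude Python/numpy sketch of that "mirror the framebuffer" shortcut (not how a real engine's screen-space reflections work, which march per pixel against the depth buffer, but it shows why off-screen content can never appear in the reflection):

    import numpy as np

    def fake_water_reflection(frame, waterline):
        # frame: (H, W, 3) rendered image; waterline: row where the water starts.
        out = frame.copy()
        height = min(waterline, frame.shape[0] - waterline)
        # Mirror the rows just above the waterline down into the water.
        out[waterline:waterline + height] = frame[waterline - height:waterline][::-1]
        # Anything never rendered on screen (say, a skyline the camera has
        # pitched away from) has no pixels here to mirror, so it vanishes.
        return out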

rob74•4mo ago
The thought expressed in the title came to my mind when I saw Nvidia described as an "AI company" in the press recently...
chii•4mo ago
To be fair, the percentage of their revenue derived from AI-related sales is much higher now than before. Why is that not accurate?
shootingoyster•4mo ago
https://www.wheresyoured.at/the-case-against-generative-ai/
ffsm8•4mo ago
GN did a video a few weeks ago showing a slide from Nvidia's shareholder meeting, and it showed that gaming was a tiny part of Nvidia's revenue.

Basically, almost half of their revenue is pure profit and all of that comes from AI.

While the slide looked a lot nicer, the data is also available on their site https://nvidianews.nvidia.com/news/nvidia-announces-financia...

dylan604•4mo ago
Just because customers use their hardware for AI does not mean the hardware maker is an AI company.
naasking•4mo ago
https://news.ycombinator.com/item?id=45487334

When more of their revenue comes from AI than graphics, and they're literally removing graphics output from their hardware...

dotnet00•4mo ago
There's a lot of software involved in GPUs, and NVIDIA's winning strategy has been that the software is great. They maintained a stable ecosystem across most of their consumer and workstation/server stack for many years before crypto, AI and GPU-focused HPC really blew up. AMD has generally better hardware but poor enough software that "fine wine" is a thing (i.e. the software takes many years post-hardware-launch to properly utilize the hardware). For example, they only recently got around to making AI libraries usable on the pre-COVID 5700 XT.

NVIDIA basically owns the market because of the stability of the CUDA ecosystem. So, I think it might be fair to call them an AI company, though I definitely wouldn't call them just a hardware maker.

hansvm•4mo ago
*barely passable software while their competitors literally shit the bed, but I take your point.
pklausler•4mo ago
Literally?
mrguyorama•4mo ago
"Literally" as an intensifier predates the United States.

You aren't even "dying on this hill", people like you are inventing a hill made out of dead bodies.

pklausler•4mo ago
Literally inventing a hill made out of dead bodies, or figuratively?
dotnet00•4mo ago
As someone who codes in CUDA daily: putting out and maintaining so many different libraries implementing complex multi-stage GPU algorithms efficiently, at many different levels of abstraction, without a ton of edge-case bugs everywhere, alongside maintaining all of the tooling for debugging and profiling, and still shipping regular updates, is quite a bit beyond "barely passable". It's a feat only matched by a handful of other companies.
Almondsetat•4mo ago
An object is what it does. NVIDIA is making the most money through AI, so that's what it is now to the market
Pooge•4mo ago
Nvidia is selling hardware. What the buyers are doing with it doesn't change anything about Nvidia.

A company selling knives is not considered a butcher or cook, despite the main uses of knives being just that.

matthewmacleod•4mo ago
But it clearly does, as NVIDIA rolls out hardware and software optimised for deployment as AI compute.
Pooge•4mo ago
You have a point. Then it's a "compute" company.
swiftcoder•4mo ago
A company that sells knives, and also invests heavily in restaurants, might be considered to be in the restaurant business, however

Nvidia spends a lot of money investing in downstream AI companies, in what feels like a rather incestuous circle

OJFord•4mo ago
I'm not sure if this is what you mean too, but by the same logic it's not a 'graphics company' nor gaming etc. either. 'Chipmaker' as they say, specialising in highly parallel application-specific compute.
calaphos•4mo ago
The hardware is heavily optimized for low-precision matrix math, which is pretty much only used for AI.
bobsmooth•4mo ago
And in graphics rasterization.
namibj•4mo ago
Those parts (the tensor cores) aren't used for rasterization.
david-gpu•4mo ago
While raster units are separate from tensor cores, both can be leveraged for image rendering. The simplest example of this is Nvidia's DLSS.
touisteur•4mo ago
Not since the Ozaki scheme appeared. Good high-precision performance from low-precision tensor units has unlocked some very interesting uses of low-FP64-perf GPUs.
phkahler•4mo ago
Just like it was a crypto company. It's a computational fad chaser.

Next up: quantum. And that will be the end of them.

tengbretson•4mo ago
They're just selling. That's it. If 50 Pittsburgh Steelers fans show up to your bar every Sunday, congrats, you're a Steelers bar now.
tripplyons•4mo ago
I don't think they will fall for the quantum hype. Jensen has publicly stated that quantum computing is at least decades away from being useful.
CamperBob2•4mo ago
We were saying the same thing about AI less than one decade ago, of course... and then the Vaswani paper came out. What if it turns out that when it comes to quantum computing, "Used Pinball Machine Parts Are All You Need"?
cantor_S_drug•4mo ago
This is similar to evolution. Evolution repurposes old systems for newer tasks. The GPU name is stuck but it has been deployed for AI.
larodi•4mo ago
Indeed, why would they not call themselves NvidAI to begin with? This company has twice already been super lucky to have their products used for the "wrong" thing (given GPUs were created to accelerate graphics, not mining or inference).
Tuna-Fish•4mo ago
3 times, if you count the physics GPGPU boom that Nvidia rode before cryptocurrencies.

And other than maybe the crypto stuff, luck had nothing to do with it. Nvidia was ready to support these other use cases because in a very real way they made them happen. Nvidia hardware is not particularly better for these workloads than competitors. The reason they are the $4.6T company is that all the foundational software was built on them. And the reason for that is that JHH invested heavily in supporting the development of that software, before anyone else realized there was a market there worth investing in. He made the call to make all future GPUs support CUDA in 2006, before there were heavy users.

indoordin0saur•4mo ago
I don't think the physics processing units were ever big. This was mostly just offloading some of their physics processes from the CPU to the GPU. It could be seen as a feature of GPUs for games, like ray-tracing acceleration.
touisteur•4mo ago
They still had a boom of being used for a lot of HPC loads, even non-AI supercomputers, although it was quickly dwarfed by their other markets.
Tuna-Fish•4mo ago
That's not what I was referring to. I was talking about NV selling GPGPUs for HPC loads, starting with the Tesla generation. They were mostly used for CFD.
indoordin0saur•4mo ago
Ah, you're right. Thanks for the correction. But seems like they have applications far beyond CFD if they are what's put in the biggest supercomputers.
Tuna-Fish•3mo ago
CFD is what 90+% of non-AI supercomputer time is spent on. Whether you are doing aerodynamic simulations for a new car chassis, weather forecasting, testing nuclear weapons in silico, or any of literally hundreds of other interesting applications, the computers basically run the same code, just with different data inputs.
aurareturn•4mo ago
Or that parallel computing is immensely useful in general and that more use cases will be found for it in the future beyond AI.

At some point, maybe it isn’t luck anymore but a general trend towards parallel computing.

dotancohen•4mo ago

  > parallel computing.
Maybe because the acronym PCU would invite too many toilet jokes.

"No, I see the pee" and at least another that I'd rather not express in polite company ))

svara•4mo ago
I don't think it's luck. They invested in CUDA long before the AI hype.

They quietly (at first) developed general purpose accelerators for a specific type of parallel compute. It turns out there are more and more applications being discovered for those.

It looks a lot like visionary long term planning to me.

I find myself reaching for Jax more and more where you would have done numpy in the past. The performance difference is insane once you learn how to leverage this style of parallelization.

namibj•4mo ago
Are you able to share a bit more, enough for others doing similar work to tell whether this "Jax > numpy" advantage applies to their work (and thus whether they'd be well off learning enough Jax to make use of it themselves)?
svara•4mo ago
This should be a good starting point:

https://docs.jax.dev/en/latest/jax.numpy.html

A lot of this really is a drop-in replacement for numpy that runs insanely fast on the GPU.

That said you do need to adapt to its constraints somewhat. Some things you can't do in the jitted functions, and some things need to be done differently.

For example, finding the most common value along some dimension in a matrix on the GPU is often best done by sorting along that dimension and taking a cumulative sum, which sort of blew my mind when I first learnt it.
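
For anyone curious what that looks like, here is one loop-free way to take the most common value along an axis in JAX using sort plus a cumulative op. This is my own sketch of the idea (it uses a cumulative max of run starts rather than a literal cumulative sum), not necessarily the exact formulation meant above:

    import jax
    import jax.numpy as jnp

    @jax.jit
    def mode_along_last_axis(x):
        s = jnp.sort(x, axis=-1)          # equal values become contiguous runs
        n = s.shape[-1]
        idx = jnp.arange(n)
        # True where a new run of equal values starts.
        new_run = jnp.concatenate(
            [jnp.ones(s.shape[:-1] + (1,), dtype=bool), s[..., 1:] != s[..., :-1]],
            axis=-1,
        )
        # Index where each element's run started, via a cumulative max.
        run_start = jax.lax.cummax(jnp.where(new_run, idx, 0), axis=s.ndim - 1)
        count = idx - run_start + 1       # how long the current run is so far
        # The longest run peaks at its last element; take that value.
        best = jnp.argmax(count, axis=-1)
        return jnp.take_along_axis(s, best[..., None], axis=-1)[..., 0]

    print(mode_along_last_axis(jnp.array([[3, 1, 3, 2, 3, 1]])))  # [3]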

fennecfoxy•4mo ago
I mean afaik the consumer GPUs portion of their business has always been tiny in comparison to enterprise (except to begin with right at the start of the company's history, I believe).

In a way it's the scientific/AI/etc enterprise use of Nvidia hardware that enables the sale of consumer GPUs as a side effect (which are just byproducts of workstation cards having a certain yield - so flawed chips can be used in consumer cards).

kokada•4mo ago
No, gaming was historically NVIDIA's largest revenue segment (up until 2023). Only with the recent AI boom did this change.

Source (I am not sure how reliable this is because I got this from ChatGPT, but I remember seeing something similar from other sources): https://www.fool.com/investing/2024/02/12/gaming-was-nvidias....

trenchpilgrim•4mo ago
Nvidia started as a gaming company and gaming was the majority of their business until the last 5-10 years.
moomoo11•4mo ago
Generator
Animats•4mo ago
Do most GPUs made for AI even have a graphical output buffer and a video output any more?
Kokonico•4mo ago
I know that the NVIDIA H100 chips don't; beyond those, however, I'm not too sure. I'd assume that's the case though, since there's no point adding extra tech you aren't going to use in a big datacenter.
wtallis•4mo ago
They've been aggressively removing or reducing hardware that's vestigial from the perspective of AI. NVIDIA's Hopper has no display outputs, no raytracing hardware, no video encoders, and only one of the eight GPCs has raster graphics functionality; the rest are compute-only. With their newer Blackwell parts, going from B200 to B300 they cut out almost all FP64 and INT8 capabilities so they could squeeze in more FP4 throughput.
augment_me•4mo ago
Yes, still, but perhaps not needed in the next iteration, when we just approximate the graphics pipeline with matrix multiplications.
GuB-42•4mo ago
You can game on H100 GPUs, though it is terrible. Someone has tested it, and it is on the level of a Radeon 680M, i.e. the performance of a typical business laptop.

https://www.youtube.com/watch?v=-nb_DZAH-TM

riknos314•4mo ago
Naming may provide useful hints about some utility of a tool but naming does not bound the utility of a tool.
cmxch•4mo ago
Then who actually delivers on that front aside from AMD? Intel does deliver but only on the low to mid range.
diegoperini•4mo ago
> And here are my clearly unimpressed “friends” >:(

These friends don't get it!

bblb•4mo ago
Room for new competitors then? Surely Nvidia/AMD/Intel are not the only graphics vendors? Or is the tech too hard to even enter the market?
swiftcoder•4mo ago
There are a whole raft of other GPU companies out there (Broadcom, MediaTek, PowerVR, Samsung, Qualcomm, ...), but none of them interested in the classic PC gaming space.

And I'm not sure that space has been economical for a long time. Integrated GPUs have more-or-less reached a point where they can handle PC games (albeit not at the latest-and-greatest resolutions/frame-rates/ray-tracing/etc), and the market for multi-thousand-dollar dedicated GPUs just isn't very big

jacquesm•4mo ago
> the market for multi-thousand-dollar dedicated GPUs just isn't very big

What market research underpins this?

swiftcoder•4mo ago
Steam hardware survey suggests <15% of gaming PCs are running a high-end GPU. I'm defining "high-end" as "outperforms a top tier integrated GPU", which makes it 3070/4070/5070 or above on the NVidia side of things.

That's a rough upper bound of 20 million on the whole market, and NVidia + AMD already have it buttoned up - a newcomer can expect to attract a tiny fraction thereof

But I think more importantly, you can see this in NVidia's focus. Most of their profits are not in the gaming space anymore.

Sohcahtoa82•4mo ago
The Steam Hardware/Software Survey [0] answers this.

Look up the MSRPs of the most common GPUs. Very, very few are over $1,000. If you interpret "multi-thousand" to mean >= $2,000, then the answer becomes VERY small (less than 0.5%), as the highest-MSRP gaming GPUs are the RTX 5090 and RTX 3090 Ti, both of which technically have an MSRP of $1,999, though that typically only applies to the "Founders Edition" releases done by NVIDIA. Third-party AIBs (MSI, Gigabyte, Zotac, etc.) typically charge a bit more.

[0] https://store.steampowered.com/hwsurvey/Steam-Hardware-Softw...

StopDisinfo910•4mo ago
Dedicated GPUs are dead for general computing. The whole market converged on APUs because they are simply more efficient.

There is plenty of competition there: Qualcomm, Samsung, Apple, MediaTek and of course Intel and AMD, and things are moving fast. The best phone APUs nowadays are more powerful than my not-so-old MacBook Air M1.

dijit•4mo ago
General computing has not required a dedicated GPU for nearly 20 years. I would argue that the continued persistence of Windows hinges on a handful of productivity applications and, for ordinary people, crucially, games. So judging the market solely on "general" computing is too shallow.

> The best phone APUs nowadays are more powerful than my not so old MacBook Air M1.

Which is, itself, an APU.

The question is, is it better than a 2020 era dGPU and CPU combo (at any thermal/power envelope).

The answer is complicated unfortunately, but a 3090 (a 5 year old card) has 4x the memory bandwidth of an M4 Pro and also about 4x the FP32 performance.

So on the high end, discrete graphics cards are still going to be king for gaming. (I know that a 3090 isn't common, but 5080s are more powerful than 3090s).

StopDisinfo910•4mo ago
> for ordinary people, crucially, games

PC gaming is an incredibly small niche. Ordinary people don't play games on their PC, if they have one in the first place. Most PCs nowadays are laptops, and they are mostly bought by companies, sometimes by individuals, and mostly to do work.

If you look at the respective market size, gaming is mostly done on smartphones and dedicated consoles and they all use APUs.

ruszki•4mo ago
Are there fewer people gaming on PC than, say, 20 years ago, or did the market just become larger, with the new people playing on something else?
shagie•4mo ago
It's changing. Bellular News: Gamers Are Dying Out(*) https://youtu.be/_80DVbedWiI

https://www.gamesindustry.biz/ign-launches-gaming-trends-pla...

> The prominence of mobile among younger players probably won't be a huge surprise to anyone reading this – 93% of Gen Alpha prefer playing on mobile, according to IGN's segmentation study. But preference for mobile is actually growing for Millennials, too, with 32% calling it their preferred device.

> ...

> Daily concurrent user numbers have grown in Roblox from 3.8 million in June 2022 to more than 25 million in June 2025. Over the same period, Fortnite has grown from 1.2 million to 1.77 million concurrents – with occasional blips, like when 15.3 million players logged on for the Marvel Galactus event.

Steam charts: https://store.steampowered.com/charts show 36,647,144 online now (as I write this)

ruszki•4mo ago
These are still percentages. I asked for absolute values. If 100 million people played on PC in 2005, and now 100 million play on PC but 2 billion play on mobile, then the percentages change, but you still have the same number of people playing on PC. Btw, "playing" is a very loose term, because almost everybody played Snake on their phone even 20 years ago. This is why the only indicator that matters is absolute values.
mastazi•4mo ago
Do you have any links with regard to these market segments? I know that nowadays many people are mobile-only, but I struggle to estimate percentages. I guess it's going to be very different in developed vs developing economies, based on personal observations, but again I would like to see stats. I was able to find things like personal computer sales figures, but nothing about, e.g., desktops vs laptops, whether the laptop is for work or personal use, and, in the latter case, general vs gaming-focused use.
keyringlight•4mo ago
I think the challenge is that "uses for a PC", or even just "PC gaming", is such a wide net that it's hard to make anything but the most vague/general readings from that audience. When the monthly Steam hardware survey results come out there's always a crowd of enthusiasts putting their spin on what should or shouldn't be there, covering everything from people playing simple low-requirement games all the way through to reality simulators. For non-gaming uses, I think the most significant step was Vista, which moved to GPU acceleration for drawing windows (with a software 'basic' fallback), video decode acceleration and, to a lesser extent, encode for any device with a camera, although I'd say mobile devices likely exercise encode capability more than desktops generally do.
suddenlybananas•4mo ago
>gaming is mostly done on smartphones

I kinda feel that most games on smartphones are so fundamentally different to like the sweaty PC-gamer type games that they really should be considered a different market.

StopDisinfo910•4mo ago
Should it?

Take a look at the statistics for Minecraft and Fortnite, both games I would consider typical PC games, both massively successful. Mobile is always between 45% and 50%. PC has between 25% and 30%, roughly on par with console.

PC gaming is mostly an Asian thing nowadays, largely propped up by esports. The market is certainly big enough for GPUs to still make sense as a product (my "incredibly small" comment is admittedly a bit too extreme), but probably not big enough for someone to try to dislodge the current duopoly unless they get a product "for free" as an offshoot of something else.

suddenlybananas•4mo ago
I don't think Minecraft or Fortnite are very typical PC games at all. They have a very different userbase and demographic that they appeal to.
StopDisinfo910•4mo ago
You are discounting two of the most played PC games by far. I think there is only Roblox which is somehow comparable.

I'm therefore curious. What do you think PC gaming is?

usrusr•4mo ago
I'd say PC gaming is huge, but the subset that cares about teasing out both fps and maximum detail like it's 1999 is tiny: an expensive niche hobby like classic cars (though not quite as expensive). For almost the entire PC gaming market, GPU performance just isn't a pressing issue anymore. Some games might get a little prettier when you pick the more expensive option when choosing a new system, but that's it; it's not much of a driving factor in buying motivation.
astrea•4mo ago
Holy tangents, Batman! This whole post was a million interrelated topics woven into one semi-coherent textbook.
hollowonepl•4mo ago
A nice texture generator came out of this, with what seem to be perfectly looped images! Well done!
geldedus•4mo ago
Since when does a name dictate function?
andunie•4mo ago
I read somewhere that CPUs are better at generating graphics than GPUs (although I imagine much slower). Is that true? Does that explain why GUI libraries like Egui are so much uglier than, for example, Iced?
indoordin0saur•4mo ago
What exactly does "better" mean if not faster?
Sohcahtoa82•4mo ago
I imagine the answer is "Higher quality" or "Better customization". You can get extremely precise control over the render pipeline on a CPU since you can calculate pixels however you want.

But...with today's world of pixel shaders (Really, a world that's existed for 10+ years now), I'd be surprised if there's actually any benefit to be had these days. With a proper pixel shader, I doubt there's anything you could do on a CPU that you couldn't do on a GPU, and the GPU would be massively parallel and do it much faster.

indoordin0saur•4mo ago
You give my understanding in your last sentence there. I don't think there's any "higher quality" graphics which could be rendered on a CPU that couldn't be rendered on a GPU. Since they are equivalent in their possible actions, the only differential would be speed, which is what GPUs are designed for.

But to play devil's advocate against myself, I have heard that programming for GPUs can be harder for many things. So maybe usability and developer-friendliness is what is meant by CPUs being better?

Sohcahtoa82•4mo ago
GPUs are TERRIBLE at executing code with tons of branches.

Basically, GPUs execute instructions in lockstep groups of threads. Each group executes the same instruction at the same time. If there's a conditional, and only some of the threads in a group have state that satisfies the condition, then the group is split and the two paths are executed serially rather than in parallel. The threads following the "true" path execute while the threads that need to take the "false" path sit idle. Once the "true" threads complete, they sit idle while the "false" threads execute. Only once both paths complete do the threads reconverge and continue.

They're designed this way because it greatly simplifies the hardware. You don't need huge branch predictors or out-of-order execution engines, and it allows you to create a processor with thousands of cores (The RTX 5090 has over 24,000 CUDA cores!) without needing thousands of instruction decoders, which would be necessary to allow each core to do its own thing.

There ARE ways to work around this. For example, it can sometimes be faster to compute BOTH sides of a branch, but then merely apply the "if" on which result to select. Then, each thread would merely need to apply an assignment, so the stalls only last for an instruction or two.

Of course, it's worth noting that this non-optimal behavior is only an issue with divergent branches. If every thread decides the "if" is true, there's no performance penalty.
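
That "compute both sides, then select" pattern is essentially what an element-wise select gives you (a ternary or predicated instruction in CUDA proper). A small Python/JAX sketch of the idea, not of the CUDA-level mechanics themselves:

    import jax
    import jax.numpy as jnp

    @jax.jit
    def branchless(x):
        # Both expressions are evaluated for every element; jnp.where only
        # picks which result each lane keeps, so no lane stalls waiting for
        # the other side of the branch.
        return jnp.where(x > 0, jnp.sqrt(x), -0.5 * x)

    print(branchless(jnp.array([-4.0, 9.0])))  # [2. 3.]

For short branch bodies, CUDA compilers often emit predicated instructions that do exactly this automatically.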

sjsdaiuasgdia•4mo ago
The main context where I've seen claims that CPUs are 'better' at graphics is where a render that looks precisely right and has the desired image quality is more important than a fast render.

Even that is more about control over the rendering process than what silicon is doing the work. With a lot of (older?) graphics APIs, you're implicitly giving the GPU's driver a lot of control over the output and how it is generated. This is how we got events like [0] where GPU vendors would make their drivers 'cheat' on benchmarks by trading image quality for speed when certain criteria were detected.

I imagine that tradeoff has changed somewhat as the industry has moved towards graphics APIs intended to give the programmer more direct control of the hardware.

[0] https://www.reddit.com/r/quake/comments/168btin/does_anyone_...

Levitating•4mo ago
What does "better" mean?

The simple calculations typically used for rendering graphics can easily be parallelized on the GPU, hence it's faster. But the result should be identical if the same calculations are done on the CPU.

Also, GUI frameworks like iced and egui typically support multiple rendering backends. I know iced is renderer-agnostic and can use a number of backends, including the GPU graphics APIs Vulkan, DX12 and Metal.

gbolcer•4mo ago
They used to refer to it as GPGPU (general purpose) but they just shortened it maybe 10 years ago?
larrydag•4mo ago
Should change the name to Matrix Processing Units
_ache_•4mo ago
What does Bechara mean? DuckDuckGo doesn't help me much. "Poor thing" maybe?

Anyway, I think you should change friends. Or study subjects. Either way.

Btw, these look like parasites or worms under a microscope.

eternityforest•4mo ago
I wonder if they'll ever start doing the graphics with AI.

Could you render the scene in the simplest, most efficient way possible, then train a special model that takes that picture, along with the coordinates of lights, depth maps, text descriptions of materials, etc., and adds detail?

dpoloncsak•4mo ago
Isn't this DLSS and Frame Gen in a nutshell?

DLSS - Render at low resolution, use an ML model to upscale

Frame Gen - Render a frame, use an ML model to generate the next frame. Render the next, gen the next... and so on.

(I think in practice, Frame Gen is more complicated due to FPS fluctuations, but that's another can of worms.)

eternityforest•4mo ago
They're pretty much just upscaling and interpolating, right? They can't fill in anything that's not actually there, like you could with a model that understands "Hey, this tree is actually just a PS2-era low-poly model, but here's a picture of what it should look like, make it fit somehow".
rdsubhas•3mo ago
The G now stands for GenAI.