But how about a practical argument instead? Enabling raytracing in games tends to suck: the graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware became mainstream.
I think the most damning evidence of this is the just-released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. The new release doesn't support it and doesn't intend to.
And in a world where basically every AAA release is panned for performance problems, BF6 has articles like this: https://www.pcgamer.com/hardware/battlefield-6-this-is-what-...
Pretty much this. Even in games that have good ray tracing, I can't tell whether it's on or off (except for the FPS hit). I cared so little that I bought a card not known to be good at it (a 7900 XTX), because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
(sorry if obvious / already done)
On a more subjective note, you get less interesting art styles because studios somehow have to cram raytracing in there as a value proposition.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce multiplies the number of rays, so the cost grows exponentially with bounce depth. The control flow is also very branchy once you get into the wilder indirect lighting scenarios, and GPUs prefer straight SIMD flows, not wild, hierarchical rabbit-hole exploration. Disney still uses CPU-based render farms. There's no way you are reasonably emulating that experience in <16 ms.
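As a rough, illustrative sketch (the resolution, sample count, and branching factor below are assumptions, not measurements from any real engine), here's how ray counts blow up once every hit spawns secondary rays:

```python
# Back-of-the-envelope: rays per frame if every hit spawns secondary rays.
# All numbers are illustrative assumptions, not measurements.

WIDTH, HEIGHT = 1920, 1080      # target resolution
SAMPLES_PER_PIXEL = 1           # one primary ray per pixel
BRANCH_FACTOR = 4               # hypothetical secondary rays per hit (diffuse, specular, shadow, ...)

primary_rays = WIDTH * HEIGHT * SAMPLES_PER_PIXEL

for bounces in range(1, 6):
    # Geometric series: primaries plus every generation of secondary rays.
    total_rays = primary_rays * sum(BRANCH_FACTOR ** b for b in range(bounces + 1))
    print(f"{bounces} bounce(s): {total_rays:,} rays per frame")
    # Each of these rays is an incoherent BVH traversal -- exactly the
    # branchy, divergent workload that SIMD hardware dislikes.
```

With these toy numbers you're already at billions of rays per frame by five bounces, which is why production path tracers cap depth aggressively and lean on denoisers.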
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
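For concreteness, here's a toy sketch of that bake/runtime split. The scene tracing is stubbed out and every function name is made up; it only shows the shape of the idea, not any engine's actual API:

```python
# Toy sketch of the offline-bake vs. runtime split behind lightmapping.
import math, random

def trace_sky_visibility(origin, direction):
    # Stand-in for a real ray cast against scene geometry:
    # pretend anything pointing sufficiently "up" sees the sky.
    return 1.0 if direction[2] > 0.2 else 0.0

def bake_texel(origin, num_samples=4096):
    """Offline step: spend as many rays as you like per lightmap texel."""
    total = 0.0
    for _ in range(num_samples):
        # Uniform direction on the upper hemisphere.
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        total += trace_sky_visibility(origin, (r * math.cos(phi), r * math.sin(phi), z))
    return total / num_samples

# Bake: hours of this offline, results stored in textures shipped with the game.
lightmap = [bake_texel((x, 0.0, 0.0)) for x in range(8)]

# Runtime: lighting becomes a single texture fetch -- no rays cast per frame.
def shade(texel_index):
    return lightmap[texel_index]

print(shade(3))
```

The whole point is that the expensive integral is paid once at build time, so the per-frame cost is just a lookup.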
The human brain has a built-in TAA solution that excels as frame latencies drop into single-digit milliseconds.
I would say the closest we can get is workarounds like radiance cascades. But everything other than raytracing is just an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times, and having to store those results (leading to massive game sizes), are a huge negative.
Funnily enough, raytracing is also just an approximation of the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
AI upscaling is an amazing technology that can provide excellent graphics with minimal overhead. I always play with DLSS on because it is simply the superior experience. Of course there's a group of devs that will excuse poor programming because gamers can just run at a lower resolution and upscale, but this complaint shows up literally every time there's a new technique for doing something.
Seems they didn't learn from the PS3: exotic architectures don't drive sales. Gamers don't give a shit, and devs won't choose it unless they have a lucrative first-party contract.
Now, shackling yourself to AMD and expecting a miracle... that I cannot say is a good idea. Maybe Cerny has seen something we haven't, who knows.
Every year, PlayStation ranks very high when it comes to GOTY nominations.
Just last year, PlayStation had the most nominations for GOTY: https://x.com/thegameawards/status/1858558789320142971
Not only that, but the PS5 has more games than Microsoft's Xbox Series X|S:
1053 vs 812 (and the Xbox number got inflated by the recent Activision acquisition).
https://en.wikipedia.org/wiki/List_of_PlayStation_5_games
https://en.wikipedia.org/wiki/List_of_Xbox_Series_X_and_Seri...
It's important to check the facts before spreading random FUD
The PS5 had the strongest lineup of games this generation, which is why it sold this many consoles.
Still today, consumers are attracted to the PS5's lineup, and this is corroborated by facts and data: https://www.vgchartz.com/
In August, for example, the PS5 outsold the Xbox 8:1 and sold almost as many units as the new Nintendo Switch 2, and the console is almost five years old!
You say "underwhelming"; people are saying otherwise.
Also, to my knowledge, the PS5 still lags behind the PS4 in terms of sales, despite the significant boost that COVID-19 provided.
Each generation has around half the number of games as the previous. This does get a bit murky with the advent of shovelware in online stores, but my point remains.
I think this only proves that games are now ridiculously expensive to create while meeting the expected quality standards. Maybe AI will improve this in the future. Take-Two has confirmed that GTA6's budget has exceeded US$1 billion, which is mind-blowing.
The main goal of Direct3D 12, and subsequently Vulkan, was to allow for better use of the underlying graphics hardware as it drifted further and further from its fixed-pipeline roots.
So maybe the time is ripe for a rethink, again.
The frame generation features in particular, upscaling and frame interpolation, have promise, but I think they need to be integrated in a different way to really be of benefit.
You aren't seeing them adopted much because the hardware still isn't deployed at a scale where games can count on it being available, and so there's no feedback loop improving the developer experience of adopting them.
However, I'm pessimistic about how this can keep evolving. RT already takes a non-trivial amount of the transistor budget, and the high-end AI solutions now require another considerable chunk of it. If we are already reaching the limits of what non-generative AI upscaling and frame-gen can do, I can't see where a PS7 can go other than using generative AI to interpret a very crude, low-detail frame and generate a highly detailed photorealistic scene from it. But that will, I think, require many times more transistor budget than will likely ever be economically achievable for a whole PS7 system.
Will that be the end of consoles? Will everything move to the cloud, with a power-guzzling 4 kW machine taking care of rendering your PS7 game?
I can really only hope there is a breakthrough in miniaturization and we can go back to a pace of improvement that actually gives us a new generation of consoles (and computers) that makes the transition from an SNES to an N64 feel quaint.
Even in the latest versions of Unreal and Unity you will find the classic tools. They just won't be advertised, and the engine vendor might even frown upon them during a tech demo to make their fancy new temporal slop solution seem superior.
The trick is not to get taken for a ride by the tools vendors. Real-time lights, "free" anti-aliasing, and sub-pixel triangles are the forbidden fruits of game dev. It's really easy to get caught up in the devil's bargain of trading unlimited art detail for unknowns on the end customer's hardware.
The idea of the radiance cores is pretty neato
The era of console giants, their fiefdoms, and the big game studios is coming to an end.
And there are AAA titles that make, and will continue to make, good money with graphics front and center.
I'm just not sold.
Do I really think that BG3 being slightly prettier than, say, Dragon Age / Skyrim / etc. made it a more enticing game? Not to me, certainly. Was Cyberpunk prettier than The Witcher 3? Did it need to be for me to play it?
My query isn't about whether you can get people to upgrade to play new stuff (always true), but whether they'd still upgrade if they could play it on the old console with worse graphics.
I also don't think anyone is going to suddenly start playing video games because the graphics improve further.
There is a reason there are so many complaints on social media about how obvious it is to gamers which engine a game was built on.
It used to be that game development quality was taken more seriously, back when games were sold on storage media and there was a deadline to burn those discs/cartridges.
Now they just ship whatever is done by the deadline, and updates come later via DLC, if at all.
But the way it is framed as a revolutionary step and as a Sony collab is a tad misleading. AMD is competent enough to do this by itself, and it will definitely show up in PCs and the competing Xbox.
I'm rooting for something unique because I haven't owned a console for 20 years and I like interesting hardware. But hopefully they've learned a lesson about developer ergonomics this time around.
Why not also give it a mini AMD EPYC CPU with 32 cores? That way games would start to get much better at using multiple cores.
jpalawaga•2h ago
Games written for the PlayStation exclusively get to take advantage of everything, but there is nothing to compare the release to.
Alternatively, if a game is released cross-platform, there's little incentive to tune the performance past the benchmarks of comparable platforms. Why make the PlayStation version look better than the Xbox one if it involves rewriting engine-layer stuff to take advantage of the hardware for one platform only?
Basically all of the most interesting utilization of the hardware comes at the very end of the console's lifecycle. It's been like that for decades.
three_burgers•2h ago
For the PS2, game consoles didn't become the centre of home computing; for the PS3, programming against the GPU, not some exotic processor, became the standard way of doing real-time graphics, plus home entertainment moved on to take other forms (like watching YouTube on an iPad instead of having a media centre set up around the TV); for the PS4, people didn't care whether the console did social networking; the PS5 has been practical, it's just that the technology/approach ended up being adopted by everyone, so it lost its novelty later on.
ffsm8•14m ago
The PS3's edge was generally seen as the DVD player.
That's why Sony went with Blu-ray in the PS4, hoping to capitalize on the next medium too. While that bet didn't pay off, Xbox kind of self-destructed, consequently making Sony the dominant player anyway.
Finally:
> PS5 has been practical, it's just the technology/approach ended up adopted by everyone, so it lost its novelty later on.
The PS5 did not have any novel approach that was subsequently adopted by others. The only thing "novel" in the current generation is frame generation, and that was already being pushed for years by the time Sony jumped on that bandwagon.
MindSpunk•5m ago
The PS2 was the DVD console. The PS3 was the Blu-ray console.
The PS4 and PS5 are also Blu-ray consoles, but Blu-rays are too slow now, so they're just a medium for movies or something to download the game from.
numpad0•37m ago
Consoles also partly had to be quirky dragsters because of Moore's Law: they had to be ahead of PCs by years, because they had to be at least comparable to PC games at the end of their lifecycle, not utterly obsolete.
But we've all moved on. IMO that is a good thing.