I wonder what modulation order / RF bandwidth they'll be using on the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously high.
Baud seems out of fashion; sym/s is pretty clear & unambiguous.
(And if you're talking channel bandwidth, that needs clarification)
> 16GHz square wave
Is that for PCIe 5.0? PCIe 6.0 should operate at the same frequency, doubling the bandwidth by using PAM4. If PCIe 7.0 doubled the bandwidth again and is still PAM4, what is the underlying frequency?
for gen6, halve all numbers
(I'm accepting it because "Transfers"/"T" as unit is quite rare outside of PCIe)
Huh? Baud is sym/s.
That's an interesting comparison to look at. PCIe 3 was a while ago, but SATA was nearly a decade before that.
> I wonder what modulation order / RF bandwidth they'll be using on the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously high.
Wikipedia says it's planned to be PAM4 just like 6 and 7.
Gen 5 and 6 were 32 gigabaud. If 8 is PAM4 it'll be 128 gigabaud...
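To make the arithmetic explicit, here's a quick Python sketch; the Gen 8 figures (256 GT/s, PAM4) are the assumption from the comments above, not anything announced:

    # Line rate in gigabaud = per-lane transfer rate (GT/s) / bits per symbol.
    # NRZ carries 1 bit per symbol; PAM4 carries 2.
    rates = {
        "Gen 5": (32, 1),   # 32 GT/s, NRZ
        "Gen 6": (64, 2),   # 64 GT/s, PAM4
        "Gen 7": (128, 2),  # 128 GT/s, PAM4
        "Gen 8": (256, 2),  # 256 GT/s, PAM4 -- assumed, not announced
    }
    for gen, (gt_s, bits) in rates.items():
        print(f"{gen}: {gt_s // bits} gigabaud")
    # Gen 5: 32, Gen 6: 32, Gen 7: 64, Gen 8: 128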
Obviously PCIe is not just about gaming, but...
If you're using a new video card with only 8GB of onboard RAM and turning on all the heavily-advertised bells and whistles in new games, you're going to run out of VRAM very, very frequently. The faster bus isn't really about higher frame rates; it makes the worst-case situations less bad.
I get the impression that many reviewers aren't equipped to do the sort of review that asks questions like "What's the intensity and frequency of the stuttering in the game?" because that's harder than just reporting average, peak, and 90th-percentile frame rates. The question "How often do textures load at reduced resolution, or not at all?" probably requires a human in the loop looking at the rendered output to notice those sorts of errors... which is time-consuming, attention-demanding work.
I don't know how many games are even capable of using lower resolutions to avoid stutter. I'd be interested in an analysis.
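The frame-time half of that is easy to script, for what it's worth; it's the visual-artifact half that needs a human. A rough sketch of what "intensity and frequency of stuttering" might mean as a metric (the 2x-median threshold is an arbitrary choice of mine, not an industry standard):

    # Summarize stutter from a list of per-frame render times (ms):
    # frequency = how many frames blew past the "typical" frame time,
    # intensity = how bad the worst ones were.
    from statistics import median, quantiles

    def stutter_stats(frame_times_ms, threshold=2.0):
        typical = median(frame_times_ms)
        spikes = [t for t in frame_times_ms if t > threshold * typical]
        return {
            "stutter_frames": len(spikes),                   # frequency
            "worst_frame_ms": max(frame_times_ms),           # intensity
            "p99_ms": quantiles(frame_times_ms, n=100)[98],  # 99th percentile
        }

    # Mostly-smooth 100-frame sample with a few big spikes at the end:
    print(stutter_stats([16.7] * 95 + [16.7, 40, 55, 70, 120]))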
SlightlyLeftPad•3d ago
vincheezel•3d ago
ksec•3d ago
At 50-100W for IO, that only leaves about 11W per core on a 64-core CPU.
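(Presumably that's starting from the ~800W parts under discussion: (800 - 100) / 64 ≈ 11W per core.)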
linotype•3h ago
jchw•2h ago
Apparently we still have room, as long as you don't run anything else on the same circuit. :)
cosmic_cheese•2h ago
kube-system•2h ago
jchw•49m ago
And even then, running something 24/7 at max wattage isn't guaranteed to start a fire even if the wiring is bad. As long as it's not egregiously bad, I'd expect there's enough margin to cover less severe issues in most cases. I'm guessing the most danger comes when it's particularly hot outside (especially since you'll probably also have a lot of heat exchangers running then).
jchw•45m ago
I've definitely seen my share of scary things. I have a lighting circuit that is incomprehensibly wired and seems to kill LED bulbs randomly during a power outage; I have zero clue what is going on with that one. Also, oftentimes when opening up wall boxes I'll find backstabs that were not properly inserted, or wire nuts that are just covering hand-twisted wires and not actually threaded on at all (and not even the right size in some cases...). Needless to say, I should really get an electrician in here, but at least with a thermal camera you can look for signs of serious problems.
atonse•2h ago
davrosthedalek•1h ago
atonse•1h ago
I only have a PhD from YouTube (Electroboom)
jchw•1h ago
If you actually had an electrician do it, I doubt they would've installed the breaker if they thought the wiring wasn't sufficient. Truth is, you can indeed get away with a 20A circuit on 14 AWG wire if the run is short enough, though 12 AWG is recommended. The reason is resistance: thinner-gauge wire has more resistance, which means more voltage drop and more heat dissipated along the length of the run, and if it gets sufficiently hot it can start a fire. I'm not sure how much risk a slightly out-of-spec run actually carries, but I wouldn't chance it personally.
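If you want to put numbers on it, the back-of-envelope is simple. A sketch using published copper resistances (about 2.53 Ω and 1.59 Ω per 1000 ft for 14 and 12 AWG); the 50 ft run length is a made-up example:

    # Voltage drop and I^2*R heating for a 20 A load.
    # Resistance is per 1000 ft of copper; current flows out and back,
    # so the conductor length is twice the one-way run.
    OHMS_PER_1000_FT = {"14 AWG": 2.525, "12 AWG": 1.588}

    def wire_losses(gauge, one_way_ft, amps=20.0, volts=120.0):
        r = OHMS_PER_1000_FT[gauge] * (2 * one_way_ft) / 1000
        v_drop = amps * r
        heat_w = amps**2 * r  # dissipated along the whole run, not one spot
        return v_drop, 100 * v_drop / volts, heat_w

    for gauge in OHMS_PER_1000_FT:
        v, pct, w = wire_losses(gauge, one_way_ft=50)
        print(f"{gauge}: {v:.1f} V drop ({pct:.1f}%), {w:.0f} W of heat in the wire")
    # 14 AWG: 5.1 V (4.2%), 101 W; 12 AWG: 3.2 V (2.6%), 64 W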
chronogram•2h ago
dv_dt•2h ago
buckle8017•2h ago
In power cost? No.
In literally any other way? Also no.
kube-system•2h ago
atonse•2h ago
It's often used for things like ACs, clothes dryers, stoves, and EV chargers.
So it's pretty simple for a certified electrician to add a 240V outlet if needed. It's just not the default that comes out of the wall.
kube-system•1h ago
https://appliantology.org/uploads/monthly_2016_06/large.5758...
ender341341•1h ago
Two-phase power is not the same as split-phase (there are basically only a few weird older two-phase installations still in use).
kube-system•1h ago
voxadam•1h ago
"The US electrical system is not 120V" https://youtu.be/jMmUoZh3Hq4
atonse•1h ago
dv_dt•1h ago
atonse•1h ago
ender341341•1h ago
It'd be an all-new wire run (the 120V legs are split at the panel; we aren't running 240V all over the house), and electricians are currently at a premium, so it'd likely end up costing a thousand+ to run it with an electrician, more if there's no clear access from an attic/basement/crawlspace.
Though I think it's unlikely we'll see an actual need for it at home; I imagine an 800W CPU is going to be a server-class part and rare-ish to see in home environments.
vel0city•10m ago
carlhjerpe•1m ago
We rarely use 16A but it exists. All buildings are connected to three phases so we can get the real juice when needed (apartments are often single phase).
I'm confident personal computers won't reach 2300W (230V × 10A) anytime soon, though.
orra•2h ago
tracker1•2h ago
t0mas88•2h ago
kube-system•2h ago
linotype•1h ago
kube-system•1h ago
CyberDildonics•28m ago
triknomeister•1h ago
Especially a special PDU: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
And cooling: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
avgeek23•3d ago
Would push performance further, although companies like Intel would bleed the consumer dry: a certain i5-whatever CPU with 16GB of onboard memory could be insanely priced compared to what you'd pay for add-on memory.
0x457•26m ago
sitkack•2h ago
derefr•1h ago
How do you do that, if each GPU expects to be its own backplane? One CPU daughterboard per GPU, and then the CPU daughterboards get SLIed together into one big CPU using NVLink? :P
verall•3h ago
PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at 8.0 speeds I'm not sure how many vendors will really be supplying parts; there are already only a few doing SerDes this fast...
MurkyLabs•2h ago
Razengan•2h ago
MBCook•30m ago
A total computer all-in-one. Just no interface to the world without the motherboard.
Dylan16807•2h ago
Also CPUs are able to make use of more space for memory, both horizontally and vertically.
I don't really see the power delivery advantages, either way you're running a bunch of EPS12V or similar cables around.
burnt-resistor•2h ago
kvemkon•1h ago
Actually, the Raspberry Pi (which appeared in 2012) was based on an SoC with a big, powerful GPU and a small, weak supporting CPU. The board booted the GPU first.
LeoPanthera•1h ago
MBCook•32m ago
Then that became unnecessary when L2 cache went on-die.
leoapagano•22m ago