I wonder what modulation order / RF bandwidth they'll be using on the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously high.
baud seems out of fashion, sym/s is pretty clear & unambiguous.
(And if you're talking channel bandwidth, that needs clarification)
> 16GHz square wave
Is that for PCIe 5.0? PCIe 6.0 should operate at the same frequency and double the bandwidth by using PAM4. If PCIe 7.0 doubled the bandwidth and is still PAM4, what is the underlying frequency?
for gen6, halve all numbers
(I'm accepting it because "Transfers"/"T" as unit is quite rare outside of PCIe)
Looking at some documents from Micron I don't see them using GT/s anywhere. And in particular if I go look at their GDDR6X resources because those chips use PAM4, it's all about gigabits per second [per pin]. So for example 6GHz data clock, 12Gbaud, 24Gb/s/pin.
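For anyone who wants the arithmetic behind those numbers, here's a rough sketch (the 6 GHz / 12 Gbaud / 24 Gb/s figures are the ones quoted above, not pulled from a Micron datasheet):

```python
# How the quoted GDDR6X per-pin figures relate: a 6 GHz data clock with
# double-data-rate signalling gives 12 Gbaud, and PAM4 carries 2 bits per symbol.
data_clock_ghz = 6.0
symbol_rate_gbaud = data_clock_ghz * 2               # one symbol per clock edge -> 12 Gbaud
bits_per_symbol = 2                                  # PAM4
per_pin_gbps = symbol_rate_gbaud * bits_per_symbol   # 24 Gb/s per pin

print(f"{symbol_rate_gbaud:.0f} Gbaud, {per_pin_gbps:.0f} Gb/s/pin")
```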
PAM encoding is already analog, and also correspondingly more expensive (power, silicon size, etc) for the increase in speed.
It really wouldn't surprise me if even on workstation platforms only a subset of core lanes were Gen6+ and the common slots were redriven Gen5 or less off of a router / switch chip.
We don't have to go back, baud is still in use. I would expect transfers per second to be a synonym for baud though, and for bits per second per pin to use a different word.
Huh? Baud is sym/s.
That's an interesting thought to look at. PCIe 3 was a while ago, but SATA was nearly a decade before that.
> I wonder what modulation order / RF bandwidth they'll be using on the PHY for Gen8. I think Gen7 used 32GHz, which is ridiculously high.
Wikipedia says it's planned to be PAM4 just like 6 and 7.
Gen 5 and 6 were 32 gigabaud. If 8 is PAM4 it'll be 128 gigabaud...
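A quick sketch of that relationship (the Gen5-7 rates are the published per-lane transfer rates; Gen8 at 256 GT/s with PAM4 is just the extrapolation above, not a ratified spec):

```python
# Symbol rate = per-lane transfer rate / bits per symbol (1 for NRZ, 2 for PAM4).
generations = {
    5: (32, 1),    # 32 GT/s, NRZ
    6: (64, 2),    # 64 GT/s, PAM4
    7: (128, 2),   # 128 GT/s, PAM4
    8: (256, 2),   # assumed: 256 GT/s, PAM4 (extrapolation, not final)
}
for gen, (gts, bits) in generations.items():
    print(f"Gen{gen}: {gts} GT/s -> {gts // bits} Gbaud")
# Gen5 and Gen6 both land on 32 Gbaud; a PAM4 Gen8 would be 128 Gbaud.
```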
GB300 is indeed Gen6.
how you use those today is twofold:
- Gen5 x32 via two x16 slots (this is how most people use them)
- use actually the CX8 as your PCIe switch directly to your GPUs: https://www.servethehome.com/this-is-the-nvidia-mgx-pcie-swi...
I believe only next Gen Intel and AMD Zen6 will get PCIe 6.0.
I am hoping Nvidia officially moves into the server CPU market, not only for their own systems but for wider web hosting as well. More competition for server hardware.
It'll be interesting if consumer devices bother trying to stay with the latest at all anymore. It's already extremely difficult to justify the cost of implementing PCIe 5.0 when it makes almost no difference for consumer use cases. The best consumer use case so far is enthusiasts who want really fast NVMe SSDs in x4 lanes, but 5.0 already gives >10 GB/s for a single drive, even with the limited lane count. It makes very little difference for x16 GPUs, even with the 5090. Things always creep up over time, but the rate at which the consumer space creeps is just so vastly different from what the DC space has been seeing that it seems unreasonable to expect the two to be lockstep anymore.
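Back-of-envelope for the ">10 GB/s from a Gen5 x4 drive" point (raw link numbers only; real drives land lower due to protocol overhead and flash limits):

```python
# PCIe 5.0 per-lane rate and the 128b/130b line coding used since PCIe 3.0.
gts_per_lane = 32
lanes = 4
encoding = 128 / 130
raw_gb_per_s = gts_per_lane * lanes * encoding / 8    # bits -> bytes
print(f"~{raw_gb_per_s:.1f} GB/s raw per direction")  # ~15.8 GB/s ceiling for x4
```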
so indeed the parent commenter would be correct that everything is Gen5 right now.
at least that's my best educated guess, looking at supermicro's public spec sheet and that it's shipping with CX7 which is Gen5 and not with CX8.
B200 supports Gen6 there is just nothing that would let it run at Gen6.
Edit: found this SemiAnalysis post saying as much https://x.com/SemiAnalysis_/status/1947768988467138645/photo...
to the best of my knowledge this does not exist, but I'd be happy to stand corrected.
(the official NVIDIA DGX B200 is gen5).
Faster lanes = more cost
More faster lanes = lots more cost
The chipset also strikes some of the balance for consumers though. It has a narrow high speed connection to the CPU but enables many lower speed devices to share that bandwidth. That way you can have your spare NVMe drive, SATA controller, wired and wireless NICs, sound hardware, most of your USB ports, your capture card, and some other random things connected over a single x4 to x8 sized channel. This leaves the high cost lanes for just the devices that actually use them (GPU and primary, possibly secondary, storage drive). I've got one consumer type Motherboard with 14 NVMe drives connected, for example, just not at full native speed directly to the CPU.
You're just SoL if you want to connect a bunch of really high bandwidth devices simultaneously (100 Gbps+ NICs, multiple GPUs at full connection speed, a dozen NVMe drives at native speed, or similar) because then you'll be paying for a workstation/server class platform which did make the "more faster lanes" tradeoff (plus some market segment gouging, of course).
Many mobos will operate the available slots such that the total number of active lanes is split between them. But if you use older-generation cards, you'll only get a fraction of the available bandwidth because you're only using a fraction of their lanes, even though the physical lanes are present.
What I'm thinking about is something like, say, a pair of Gen3 NVMe drives that are good enough for mass storage (running in RAID-1 for good measure) and some cheap used 10 Gb NIC, which will probably be 8x Gen2, all running on a Gen4+ capable mobo.
And, while for a general-purpose setup I can live with splitting available BW between the NIC and the GPU (I most likely don't care about my download going super fast while I game), the downloads will generally go to the storage, so they must be fast at the same time.
You can also buy external PCIe switches (just make sure you're not accidentally buying a PCIe bifurcation device). Most of the time it's cheaper to just buy the higher-end motherboard though; e.g. I don't want to know what the "Request a quote" price is for this PCIe switch, which can do x8 4.0 upstream and then quad x4 3.0 downstream: https://www.amfeltec.com/pci-express-gen-4-carrier-board-for... I do have a few 3.0-era cards which were more reasonably priced (https://www.aliexpress.us/item/3256801702762036.html?gateway...) and they've worked well for me.
I haven't seen such features on boards under 200 EUR, from Asus, Asrock and Gigabyte.
The thing is, if I have to splurge for some 400 EUR "gaming" model, I might as well move to a "workstation" CPU supporting more lanes out of the box, and the mobo will be priced roughly the same.
True. And yet, if you buy an RTX 5090 today, costing $2400 and released in January this year, it's PCIe 5.0 x16
Contrast this with the wild west that is "Ethernet" where it's extremely common for speeds to track well ahead of specs and where interop is, at best, "exciting."
Obviously PCI is not just about gaming but...
If you're using a new video card with only 8GB of onboard RAM and are turning on all the heavily-advertised bells and whistles on new games, you're going to be running out of VRAM very, very frequently. The faster bus isn't really important for higher frame rate, it makes the worst-case situations less bad.
I get the impression that many reviewers aren't equipped to do the sort of review that asks questions like "What's the intensity and frequency of the stuttering in the game?" because that's a bit harder than just looking at average, peak, and 90% frame rates. The question "How often do textures load at reduced resolution, or not at all?" probably requires a human in the loop to look at the rendered output to notice those sorts of errors... which is time consuming, attention-demanding work.
I don't know how many games are even capable of using lower resolutions to avoid stutter. I'd be interested in an analysis.
Some games may be doing that. I expect that in others, the lower-resolution or missing textures are a result of the texture streaming system catastrophically failing to meet its deadline to load in the relevant textures and giving up. It's my understanding that "texture pop-in" is the too-late replacement of a low-resolution "placeholder" texture with a high-resolution texture. If the high-resolution texture doesn't load in time, then you're stuck with the low-res placeholder.
Commentary on textures that fail to load are in the "Monster Hunter: Wilds" section, starting at ~11:35 in [0], and the "Space Marine 2" section starting at ~00:14:40 in [1], which also mentions "Halo: Infinite" and "Forspoken" as other games that have the same sort of behavior. Missing textures are mentioned in the "Star Wars Jedi: Survivor" section starting at 21:21 at [1]. And -while not mentioned- if you look not-that-closely at the first ~5 seconds of that section, you can see the textures (most obviously the ground texture) go from "the same as the 16GB model" to "something you'd expect to see in a bad PS3 game".
Also in that first video, you can see some head-to-head demonstrations of the performance problems having a slower PCI-E link gives you when running out of VRAM starting at ~04:52 in the "The Last of Us Part II" section of [0] and also at ~17:00 in the "F1 25" section of that same video.
I expect there are a few other videos out there that do this sort of analysis, but I can't be arsed to find them.
802.3dj is maybe finishing up soon and has 200 Gbps lanes, which is more or less what PCIe 8.x is supposed to be. The table in the article sums both directions of a lane, which leads to confusion. People want faster Ethernet in fewer lanes, so no doubt a 400 Gbps per lane standard will be starting up soon for PCIe to leverage as PCIe 9.
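To make the "sums both directions" point concrete, here's a sketch (the 256 GT/s figure for PCIe 8.0 is the expected doubling, not a ratified number):

```python
# A PCIe "transfer" is one bit per lane, so GT/s maps directly to Gb/s per direction.
pcie8_gts = 256                       # expected per-lane rate for Gen8
per_direction_gbps = pcie8_gts        # ~256 Gb/s each way, before coding/FEC overhead
summed_gbps = per_direction_gbps * 2  # what a "both directions" table reports
eth_dj_lane_gbps = 200                # 802.3dj per-lane rate (one direction)

print(per_direction_gbps, summed_gbps, eth_dj_lane_gbps)
# Compared per direction, a Gen8 lane (~256 Gb/s) and an 802.3dj lane (200 Gb/s)
# are in the same ballpark; summing directions makes PCIe look twice as fast.
```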
> not in a format I would normally call Ethernet
Why, because it doesn't use ether, or vampire taps?
Ethernet works over many media, and that's been pretty consistent throughout its life. It started from, or at least was inspired by, AlohaNet, a wireless system; then you had thick and thin coax, then twisted pair and fiber, and now twinax and board-level interfaces are common, too.
Being less sarcastic, I would ask if 6.0 mobos are on the horizon.
I'm fairly sure they are cooking Gen7 already into CX9.
this would solve the biggest issue with non-server motherboards: not enough PCIe lanes.
What you're saying is possible though, you just need something a little heavier like a PCIe switch to do the lane count + mixed-speed conversion magic. That's exactly what the chipset is, but for various reasons it's still only PCIe 4.0, even on the latest generation chips. I wouldn't be surprised if that changed again next generation. The downsides of this approach are that switches add cost, latency, and can consume a lot of power. When they first upped the chipset to a PCIe 4.0 connection in the 500 era, most motherboards capable of using the bandwidth of the chipset actually had to add chipset cooling.
Ideally they'd just add alternative options with more lanes directly from the CPU, but that'd add competition to the bottom of the Threadripper line.
Is that a problem?
These days, you don't need slots for your sound card and network card, that stuff's all integrated on the motherboard.
Plenty of enthusiast motherboards support 1 GPU, 3-4 nvme drives, 4 SATA drives, and have a PCIe 1x slot or two to spare.
Is anyone struggling, except folks working with LLMs? Seems to me folks looking to put several $2400 RTX 4090s in one machine ain't exactly a big market. And they'll probably want a giant server board, so they have space for all their giant four-slot-wide cards.
For example, motherboards with multiple nvme drives often have 1 drive with dedicated lanes and the remainder multiplexed through a PCIe switch embedded in the chipset.
One would think they get sizeable traffic as-is.
SlightlyLeftPad•6mo ago
vincheezel•6mo ago
ksec•6mo ago
At 50-100W for IO, this only leaves 11W per Core on a 64 Core CPU.
linotype•5mo ago
jchw•5mo ago
Apparently we still have room, as long as you don't run anything else on the same circuit. :)
cosmic_cheese•5mo ago
kube-system•5mo ago
jchw•5mo ago
And even then, even if you do run something 24/7 at max wattage, it's definitely not guaranteed to start a fire even if the wiring is bad. Like, as long as it's not egregiously bad, I'd expect that there's enough margin to cover up less severe issues in most cases. I'm guessing the most danger would come when it's particularly hot outside (especially since then you'll probably have a lot of heat exchangers running.)
jchw•5mo ago
I've definitely seen my share of scary things. I have a lighting circuit that is incomprehensibly wired and seems to kill LED bulbs randomly during a power outage; I have zero clue what is going on with that one. Also, often times opening up wall boxes I will see backstabs that were not properly inserted or wire nuts that are just covering hand-twisted wires and not actually threaded at all (and not even the right size in some cases...) Needless to say, I should really get an electrician in here, but at least with a thermal camera you can look for signs of serious problems.
atonse•5mo ago
davrosthedalek•5mo ago
atonse•5mo ago
I only have a PhD from YouTube (Electroboom)
jchw•5mo ago
If you actually had an electrician do it, I doubt they would've installed a breaker if they thought the wiring wasn't sufficient. Truth is that you can indeed get away with a 20A circuit on 14 AWG wire if the run is short enough, though 12 AWG is recommended. The reason for this is voltage drop; the thinner gauge wire has more resistance, which causes more heat and voltage drop across the wire over the length of it, which can cause a fire if it gets sufficiently hot. I'm not sure how much risk you would put yourself in if you were out-of-spec a bit, but I wouldn't chance it personally.
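Rough illustration of the voltage-drop/heating point (nominal copper resistance values; the 50 ft run length is an assumed example, not anything from the thread):

```python
# Ohms per 1000 ft of copper conductor (nominal values).
R_PER_1000FT = {"14 AWG": 2.525, "12 AWG": 1.588}
current_a = 20      # a full 20 A load
one_way_ft = 50     # assumed run length; the loop is out and back

for gauge, r in R_PER_1000FT.items():
    r_loop = r * (2 * one_way_ft) / 1000    # total resistance of both conductors
    v_drop = current_a * r_loop             # volts lost in the wire
    heat_w = current_a**2 * r_loop          # watts dissipated inside the walls
    print(f"{gauge}: {v_drop:.1f} V drop, {heat_w:.0f} W of heat")
# 14 AWG: ~5.1 V drop and ~100 W of heat; 12 AWG: ~3.2 V and ~64 W for the same load.
```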
bangaladore•5mo ago
bri3d•5mo ago
However! This strategy only works if the outlet was the only one on the circuit, and _that_ isn't particularly common.
jchw•5mo ago
(Another outlet type I've seen: I saw a NEMA 7 277V receptacle before. I think you get this from one phase of a 480V three-phase system, which I understand is run to many businesses.)
bryanlarsen•5mo ago
bri3d•5mo ago
esseph•5mo ago
wat10000•5mo ago
Oddly, 14-50 has become the most common receptacle for non-hardwired EV charging, which is rather wasteful since EV charging doesn’t need the neutral at all. 6-50 would make more sense there.
bryanlarsen•5mo ago
1: when an uncle stops by for a visit with his RV he can plug in.
2: the other outlets in your garage are likely on a shared circuit. The 14-50 is dedicated, so with a 14-50 to 5-15 adapter you can more safely plug in a high wattage appliance, like a space heater.
wat10000•5mo ago
2 is something I never thought of, I’ll have to keep that in mind.
viraptor•5mo ago
If you own the house, sure. Many people don't.
glitchc•5mo ago
atonse•5mo ago
I don't remember whether he ran another wire though. It was 5 years ago. Maybe I should not be spreading this anecdote without complete info.
He was a legit electrician that I've worked with for years, specifically because he doesn't cut corners. So I'm sure he did The Right Thing™.
jchw•5mo ago
glitchc•5mo ago
Unless you performed the upgrade yourself or know for a fact that the wiring was upgraded to 12 gauge, it's very risky to just upgrade the breaker. That's how house fires start. It's worth it to check. If you know which breaker it is, you can see the gauge coming out. It's usually written on the wire.
jchw•5mo ago
> * Unless otherwise specifically permitted elsewhere in this Code, the overcurrent protection for conductor types marked with an asterisk shall not exceed 15 amperes for No. 14 copper, 20 amperes for No. 12 copper, and 30 amperes for No. 10 copper, after any correction factors for ambient temperature and number of conductors have been applied.
I could've sworn there were actually some cases where it was allowed, but apparently not, or if there is, I'm not finding it. Seems like for 14 AWG cable the breaker can only be up to 15 amperes.
mrweasel•5mo ago
New homes are probably worse than old homes though. The wires are just chucked in the space between the outer and inner walls, so there's basically no chance of replacing them or pulling new ones. Old houses at least frequently have piping in which the wires run.
jacquesm•5mo ago
chronogram•5mo ago
dv_dt•5mo ago
buckle8017•5mo ago
In power cost? No.
In literally any other way? Also no.
kube-system•5mo ago
atonse•5mo ago
It's often used for things like ACs, Clothes Dryers, Stoves, EV Chargers.
So it's pretty simple for a certified electrician to just make a 240v outlet if needed. It's just not the default that comes out of a wall.
kube-system•5mo ago
https://appliantology.org/uploads/monthly_2016_06/large.5758...
ender341341•5mo ago
Two-phase power is not the same as split phase (there are basically only weird older installations of two-phase still in use anymore).
kube-system•5mo ago
voxadam•5mo ago
"The US electrical system is not 120V" https://youtu.be/jMmUoZh3Hq4
atonse•5mo ago
dv_dt•5mo ago
atonse•5mo ago
ender341341•5mo ago
It'd be all new wire run (120 is split at the panel, we aren't running 240v all over the house) and currently electricians are at a premium so it'd likely end up costing a thousand+ to run that if you're using an electrician, more if there's not clear access from an attic/basement/crawlspace.
Though I think it's unlikely we'll see an actual need for it at home; I imagine an 800W CPU is going to be for server-class CPUs and rare-ish to see in home environments.
vel0city•5mo ago
bonzini•5mo ago
Tor3•5mo ago
com2kid•5mo ago
I got a quote for over $2,000 to run a 240V line literally 9 feet from my electrical panel across my garage to put an EV charger in.
Opening up an actual wall and running it to another room? I can only imagine the insane quotes that'd get.
Marsymars•5mo ago
I’m getting some wiring run about the same distance (to my attic, fished up a wall, with moderately poor access) for non-EV purposes next week and the quote was a few hundred dollars.
tguvot•5mo ago
Running to another room will usually be done (at least in the USA) through the attic or crawlspace. I got it done a few months ago to have a dedicated 20A circuit (for my rack) in my work room. Cost was around 300-400 as well.
com2kid•5mo ago
Honestly I wouldn't expect to pay less than $1000 for the job w/o any markups.
tguvot•5mo ago
com2kid•5mo ago
I've gotten multiple quotes on running the 240v line, the labor breakdown was always over $400 alone. Just having someone show up to do a job is going to be almost $200 before any work is done.
When I got quotes from unlicensed people, those came in around $1000 even.
tguvot•5mo ago
Another thing, which is good long term, is to find a local electrician (plumber, etc.) who doesn't charge service calls and has reasonable pricing.
no idea about handyman pricing. never used any. for electrical/water/roofing i prefer somebody who is licensed/insured/bonded/etc
the8472•5mo ago
bonzini•5mo ago
lillecarl•5mo ago
We rarely use 16A but it exists. All buildings are connected to three phases so we can get the real juice when needed (apartments are often single phase).
I'm confident personal computers won't reach 2300W anytime soon though
bonzini•5mo ago
Tor3•5mo ago
16A is fine, for most things. 10A used to be kind of ok, with the old IT net and old-style fuses. Nowadays anything under 16A is useless for actual appliances. For the rest it's either 25A and a different plug, or 400V.
lillecarl•5mo ago
On new installations you can choose 10A or 16A so if you're forward thinking you'd go 16 since it gives you another 1300 watts to play with.
rbanffy•5mo ago
Speak for yourself. I’d love to have that much computer at my disposal. Not sure what I’d do with it. Probably open Slack and Teams at the same time.
ThunderSizzle•5mo ago
Too bad it feels like both might as well be single threaded applications somehow
rbanffy•5mo ago
orra•5mo ago
AnthonBerg•5mo ago
tracker1•5mo ago
t0mas88•5mo ago
kube-system•5mo ago
linotype•5mo ago
kube-system•5mo ago
CyberDildonics•5mo ago
wtallis•5mo ago
Tor3•5mo ago
The newer circuits in the house are all 16A, but the old ones (very old) are 10A. A real pain, with new TN nets and modern breakers.
t0mas88•5mo ago
Narishma•5mo ago
triknomeister•5mo ago
Especially a special PDU: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
And cooling: https://www.fz-juelich.de/en/newsroom-jupiter/images-isc-202...
0manrho•5mo ago
On the consumer side of things where the CPUs are branded Ryzen or Core instead of Epyc or Xeon, a significant chunk of that power consumption is from the boosting behavior they implement to pseudo-artificially[0] inflate their performance numbers. You can save a huge amount of energy (easily 10%, often closer to 30%, but it really depends on the exact build/generation) by doing a very mild undervolt and limiting boosting behavior on these CPUs while keeping the same base clocks. Intel 11th through 14th gen CPUs are especially guilty of this, as are most Threadripper CPUs. You can often trade single-digit or even negligible performance losses (depends on what you're using it for and how much you undervolt/underclock/restrict boosting) for double-digit reductions in power usage. This phenomenon also shows up in GPUs when compared across the enterprise/consumer divide, but not quite to the same extent in most cases.
Point being, yeah, it's a problem in data centers, but honestly there's a lot of headroom still even if you only have your common American 15A@120VAC outlets available before you need to call your electrician and upgrade your panel and/or install 240VAC outlets or what have you.
0: I say pseudo-artificial because the performance advantages are real, but unless you're doing some intensive/extreme cooling, they aren't sustainable or indicative of nominal performance, just a brief bit of extra headroom before your cooling solution heat-soaks and the CPUs/GPUs throttle themselves back down. But it lets them put the "bigger number means better" on the box for marketing.
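As a toy model of why capping boost pays off so well: dynamic power scales roughly with f·V², so the last few hundred MHz (which also need the highest voltage) cost disproportionately. The clock/voltage pairs below are made-up illustrative numbers, not measurements from any specific chip:

```python
# Relative dynamic power vs. a 4.0 GHz / 1.10 V baseline, using P ~ f * V^2.
def rel_power(freq_ghz: float, volts: float,
              base_freq: float = 4.0, base_volts: float = 1.10) -> float:
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

print(f"stock boost  (5.5 GHz @ 1.35 V): {rel_power(5.5, 1.35):.2f}x base power")
print(f"capped boost (5.0 GHz @ 1.20 V): {rel_power(5.0, 1.20):.2f}x base power")
# ~2.07x vs ~1.49x: giving up ~9% of peak clock cuts peak power by roughly a quarter.
```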
Panzer04•5mo ago
Boosting from 4 to 5-5.5 GHz for that brief period shaves a fraction of a second - repeat that for any similar operation and it adds up.
0manrho•5mo ago
The point isn't that there isn't a benefit, it's that you start to pay exponentially more energy per 0.1GHz at a certain point. Furthermore, AMD and Intel were exceptionally aggressive about it in the generations I outlined (AMD would be 7000 series ryzens specifically), leading to instability issues on both platforms due to their spec itself being too aggressive, or AIB partners improperly implementing that spec as the headroom that typically exists from factory stock to push clocks/voltages further was no longer there in some silicon (some of it comes down to silicon lottery and manufacturing defects/mistakes (Intel's oxidation issues for example) but we're really getting into the weeds on this already)
And to clarify: I'm talking specifically of Intel Turbo Boost and AMD's PBO boosting technologies, where they boost well over base clocks, separate from the general dynamic clocking behavior where clocks will drop well below base when not in (heavy) use.
spacedcowboy•5mo ago
deafpolygon•5mo ago
0manrho•5mo ago
They're small and efficient, that means they can pack large numbers of those into small spaces, resulting in a similar large power draw per volume occupied by equipment in the DC. This is especially true with Apple's "Ultrafusion" tech which they're developing as quasi-analog to Nvidia Grace (Hopper) superchips.
spacedcowboy•5mo ago
0manrho•5mo ago
spacedcowboy•5mo ago
And yes, they’re packed densely.
ciupicri•5mo ago
0manrho•5mo ago
Changing settings can lead to stability issues no matter which way you push it, frankly. If you don't know what you're doing/aren't comfortable with it, it's probably not worth it.
latchkey•5mo ago
Switch is designing for 2MW racks now.
esseph•5mo ago
A computer is becoming a home appliance in that it will need 20A wiring and plugs soon, but it should move to 220/240V soon anyway (and change the jumper on your standard power supply).
nehalem501•5mo ago
jacquesm•5mo ago
avgeek23•6mo ago
It would push performance further, although companies like Intel would bleed the consumer dry: a certain i5-whatever CPU with 16 gigs of onboard memory could be insanely priced compared to what you'd pay for add-on memory.
0x457•5mo ago
sitkack•5mo ago
derefr•5mo ago
How do you do that, if each GPU expects to be its own backplane? One CPU daughterboard per GPU, and then the CPU daughterboards get SLIed together into one big CPU using NVLink? :P
wmf•5mo ago
mensetmanusman•5mo ago
Maybe the GPU becomes the motherboard and the CPU plugs into it.
db48x•5mo ago
For other use cases like GPU servers it is better to have many GPUs for every CPU, so plugging a CPU card into the GPU doesn’t make much sense there either.
verall•5mo ago
PCIe is a standard/commodity so that multiple vendors can compete and customers can save money. But at 8.0 speeds I'm not sure how many vendors will really be supplying, there's already only a few doing serdes this fast...
Melatonic•5mo ago
snerbles•5mo ago
https://www.servethehome.com/micron-socamm-memory-powers-nex...
eggsome•5mo ago
verall•5mo ago
y1n0•5mo ago
The IP companies are the first to support new standards; they make their money selling to Intel etc., allowing Intel or whomever to take their time building higher-performance IP.
bgnn•5mo ago
MurkyLabs•5mo ago
Razengan•5mo ago
MBCook•5mo ago
A total computer all-in-one. Just no interface to the world without the motherboard.
trenchpilgrim•5mo ago
Dylan16807•5mo ago
Also CPUs are able to make use of more space for memory, both horizontally and vertically.
I don't really see the power delivery advantages, either way you're running a bunch of EPS12V or similar cables around.
burnt-resistor•5mo ago
kvemkon•5mo ago
Actually the Raspberry Pi (appeared 2012) was based on an SoC with a big and powerful GPU and a small, weak supporting CPU. The board booted the GPU first.
LeoPanthera•5mo ago
MBCook•5mo ago
Then that became unnecessary when L2 cache went on-die.
leoapagano•5mo ago
MBCook•5mo ago
What’s keeping Intel/AMD from putting memory on package like Apple does other than cost and possibly consumer demand?
iszomer•5mo ago
themafia•5mo ago
GPU RAM is high speed and power hungry, so there tends to not be very much of it on the GPU card. This is part of the reason we keep increasing the bandwidth: so the CPU can touch that GPU RAM at the highest speeds.
It makes me wonder though if a NUMA model for the GPU is a better idea. Add more lower-power and lower-speed RAM onto the GPU card. Then let the CPU preload as much data as possible onto the card. Then instead of transferring textures through the CPU onto the PCI bus and into the GPU, why not just send a DMA request to the GPU and ask it to move it from its low-speed memory to its high-speed memory?
It's a whole new architecture but it seems to get at the actual problems we have in the space.
kokada•5mo ago
themafia•5mo ago
KeplerBoy•5mo ago
I believe that's kind of what bolt graphics is doing with the dimm slots next to the soldered on lpddr5. https://bolt.graphics/how-it-works/
pshirshov•5mo ago
colejohnson66•5mo ago
pshirshov•5mo ago
namibj•5mo ago
vFunct•5mo ago
pezezin•5mo ago
crimony•5mo ago
So if you incorrectly insert a card and bend a pin you're in trouble.
VPX has the sockets on the backplane so avoids this issue, if you bend pins you just grab another card from spares.
This may have changed since I last looked at it.
The telecoms industry definitely seems to favour TCA though.
pezezin•5mo ago
theandrewbailey•5mo ago
namibj•5mo ago
KeplerBoy•5mo ago
iszomer•5mo ago
guerrilla•5mo ago
dylan604•5mo ago
p1esk•5mo ago
BobbyTables2•5mo ago
Kinda like RAM - almost useless in terms of “upgrade” if one waits a few years. (Seems like DDR4 didn’t last long!)
chrismorgan•5mo ago
I feel like I’ve been hearing about people selling five-to-ten-year-old GPUs for sometimes as many dollars as they bought them for, for the last five years; and people choosing to stay on 10-series NVIDIA cards (2016) because the similar-cost RTX 30-, 40- or 50-series was actually worse, because they’d been putting the effort and expense into parts of the chips no one actually used. Dunno, I don’t dGPU.
0manrho•5mo ago
An example, This is storage instead of GPU's, but as the SSD's were PCIe NVMe, it's pretty nearly the same concept: https://www.servethehome.com/zfs-without-a-server-using-the-...
undersuit•5mo ago
PCI-e Networks and CXL are the future of many platforms... like ISA backplanes.
0manrho•5mo ago
That said my experience in this field is more with storage than GPU compute, but I have done some limited hacking about in the GPGPU space with that tech as well. Really fascinating stuff (and often hard to keep up with and making sure every part in the chain supports the features you want to leverage, not to mention going down the PCIe root topology rabbit hole and dealing with latency/trace-length/SnR issues with retimers vs muxers vs etc etc etc).
It's still a nascent field that's very expensive to play in, but I agree it's the future of at least part of the data infrastructure field.
Really looking forward to finally getting my hands on CXL3.x stuff (outside of a demo environment.)
coherentpony•5mo ago
bgnn•5mo ago
But you are right, there's no hierarchy in the systems anymore. Why do we even call something a motherboard? There's a bunch of chips interconnected.
j16sdiz•5mo ago
iszomer•5mo ago
mcdeltat•5mo ago