:-/
But, because I'm a good sport, I actually chased a couple of those links, figuring I could convert Egyptian pounds into USD. But <https://www.sigma-computer.com/en/search?q=CXL%20R5X4> returns "No results", and similar for the other ones I could even get to load.
I think the main bridge chipsets come from Microchip (this one) and Montage.
This Gigabyte product is interesting since it's a little lower end than most CXL solutions - so far, CXL memory expansion has mostly appeared in esoteric racked designs like the particularly wild https://www.servethehome.com/cxl-paradigm-shift-asus-rs520qa... .
On the positive side, you can scale memory out quite a lot, fill up PCIe slots, even have memory external to your chassis. Memory tiering has a lot of potential.
On the negative side, you've got latency costs to swallow. You don't get distance from the CPU for free (there's a reason the memory on your motherboard is as close to the CPU as practical) https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-.... The CXL 2.0 spec adds roughly 200ns of latency to every access to memory behind it, so you've got to think carefully about how you use it, or you'll cripple yourself.
There's been work on the OS side around data locality, but CXL hardware hasn't been widely available, so there's an element of "well, we'll have to see".
Azure has some interesting whitepapers out as they've been investigating ways to use CXL with VMs, https://www.microsoft.com/en-us/research/wp-content/uploads/....
But CXL-backed memory still goes through your CPU caches as usual, and PCIe 5.0 lane throughput is still good, assuming the CXL controller/DRAM side doesn't become a bottleneck. So you can design your engines and data structures to account for these tradeoffs: fetching/scanning columnar data structures, prefetching to hide latency, etc. You probably don't want global shared locks or frequent atomic operations on CXL-backed shared memory (once that becomes possible with CXL 3.0).
Edit: I'll plug my own article here - if you've wondered whether there were actual large-scale commercial products that used Intel's Optane as intended, the Oracle database took good advantage of it (both the Exadata and plain database engines). One use was low-latency durable (local) commits on Optane:
https://tanelpoder.com/posts/testing-oracles-use-of-optane-p...
VMware supports it as well, but uses it as a simpler layer for tiered memory.
For example:
https://en.wikipedia.org/wiki/I-RAM
(Not a unique thing, merely the first one I found).
And then there are the more exotic options, like the stuff these folks used to make: https://en.wikipedia.org/wiki/Texas_Memory_Systems - IIRC, Eve Online used the RamSan product line (apparently starting in 2005: https://www.eveonline.com/news/view/a-history-of-eve-databas... )
The latency was always a pain, though. At some point you realize an SSD with lots of cache gives just as good results, since the only downside of an SSD is latency (bandwidth can always be increased through parallelism).
roscas•1h ago
cjensen•1h ago
tanelpoder•1h ago
I guess there are some use cases for this for local users, but I think the biggest wins could come from CXL shared memory arrays in smaller clusters. You could, for example, cache the entire build side of a big hash join in the shared CXL memory and let all the other nodes performing the join see the single shared dataset. Or build a "coherent global buffer cache" using CPU+PCIe+CXL hardware, like Oracle Real Application Clusters has been doing with software+NICs for the last 30 years.
Edit: One example of a CXL shared memory pool device is Samsung's CMM-B. It's still just an announcement - I haven't seen it in the wild. So CXL arrays might become something like the SAN arrays of the future, but byte-addressable and with direct loading into CPU caches (with cache coherence).
https://semiconductor.samsung.com/news-events/tech-blog/cxl-...
justincormack•47m ago
A lot of the initial use cases of CXL seem to be about using up lots of older DDR4 RDIMMs in newer systems to expand memory - e.g. cloud providers have a lot of them.
kvemkon•30m ago