Also, if you remember where you saw that logo, please let me know!
I have a 10 Gbit dual-port card in a Lenovo mini PC. There is no normal way to get any heat out of there, so I put a small 12 V radial fan in there as support. It works great at 5 V: silent and cool. It is a fan though, so it might not suit your purpose.
https://www.reddit.com/r/UsbCHardware/comments/y5uokj/commen...
There's no driver support on macOS, and for Linux you'd need a bleeding-edge kernel. Just physically connecting it (along with a connected SFP28 transceiver) to my Mac's Thunderbolt port using an external PCIe-to-TB adapter, macmon reports a power draw of around 4.3 W. So it's not significantly less for half the bandwidth, but the card doesn't get hot at all.
I measure around +11W idle. While running a speed test, I read ca. +15W.
I’m looking forward to your writeup on the RTL8127AF as well. Your blog is awesome!
https://support.apple.com/guide/mac-help/ip-thunderbolt-conn... etc
You'd also mostly be limited to short cables (1-2m) and a ring topology.
This is my firsthand experience trying to get some tablet motherboards to link up and work as a Proxmox cluster, with TB3 as the link between nodes.
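For reference, a minimal sketch of what IP over Thunderbolt looks like on Linux, assuming the thunderbolt-net module is available; the addresses are just example placeholders:

    # Load the Thunderbolt networking driver (built into most recent kernels)
    sudo modprobe thunderbolt-net

    # A thunderbolt0 interface should appear once the peer node is cabled up
    ip link show thunderbolt0

    # Node A (10.99.0.0/30 is an arbitrary point-to-point subnet)
    sudo ip addr add 10.99.0.1/30 dev thunderbolt0
    sudo ip link set thunderbolt0 up

    # Node B
    sudo ip addr add 10.99.0.2/30 dev thunderbolt0
    sudo ip link set thunderbolt0 up

    # Sanity check from node A
    ping -c 3 10.99.0.2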
The placement is mostly determined by the design of the OCP 2.0 connector. OCP 3.0 has a connector at the short edge of the card, which allows exposing/extending the heat sink directly to the outer case.
If somebody has the talent, designing a Thunderbolt 5 adapter for OCP 3.0 cards could be a worthwhile project.
As a stop-gap, I'd see if there was any way to get airflow into the case - I'd expect even a tiny fan would do much more than those two large heatsinks stuck onto the case (since the case itself has no thermal connection to the chip heatsink).
If that's not a requirement just get the Raiden Digit Light One, which does have a fan (and otherwise the same network card).
If I could design an adapter PCB myself, I would go straight to OCP 3.0, which allows for a much simpler construction, and TB5 speeds.
Alternatively, there are DELL CX422A rNDC cards (R887V) that appear to have an OCP 2.0 connector but a better heatsink design.
If truly concerned, one could use SFP28 to SFP28 cage adapters to have the heat outside the case, and slap on some extra heatsinks there.
Edit: forgot it isn't "true" PCIe but tunneled.
I had to do a double-take when it mentioned Kelvin, since that is physically impossible.
It 'reduces it by' ... not reduces it TO
It’s a little bit funny/coy to use it mixed with Celsius.
But this is a cool solution
If you're using an adapter card to add Thunderbolt functionality, then your mainboard needs to support that, and the card must be connected to a PCIe bus that's wired to the Intel PCH, not to the CPU.
Also check the BIOS settings (try setting TB security to "No Security" or "User Authorization")
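On Linux, once the BIOS is set to "User Authorization", the device still has to be authorized from the OS; a quick sketch with boltctl, where the UUID placeholder comes from the list output:

    # Show connected Thunderbolt devices and whether they are authorized
    boltctl list

    # Authorize for this session only (replace <uuid> with the value from the listing)
    sudo boltctl authorize <uuid>

    # Or enroll permanently so it is re-authorized automatically on every boot
    sudo boltctl enroll <uuid>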
Some OEM Mellanox cards can be cross-flashed to NVIDIA's stock firmware, maybe that's also relevant.
Pic of a previous cx3 (10 gig on tb3) setup: https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...
10 gig can saturate the link at full speed; 25G in my experience rarely even reaches the same 20G the author observed.
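For reference, the kind of quick check I mean, assuming iperf3 on both ends (the host name is a placeholder); single TCP streams often fall well short of 25G, so parallel streams help:

    # Server side
    iperf3 -s

    # Client side: 30-second run, 4 parallel TCP streams
    iperf3 -c nas.local -P 4 -t 30

    # Same test in the reverse direction without swapping roles
    iperf3 -c nas.local -P 4 -t 30 -R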
Really: because I can, and it is fun. I upgraded my home LAN to 10G because used 10G hardware is cheap (and now 25G enters the same price range).
Which cards do you prefer for 100G, and what is the situation with DACs/optics?
Frankly, 10 Gbit is fully 25 years old, with 10GBASE-T being 20 years old this year.
That's ridiculously ancient technology. There is/was a 25/40GBASE-T spec too (now 10 years old), which basically no one implemented because, like with ECC RAM (and tape drives, and they seem to be trying to do it with hard drives and GPUs), the MBAs have taken over parts of the computer industry and decided that they can milk huge profit margins from technologies which are incrementally more difficult, since smaller users just don't matter to their bottom lines. The only reason those MBAs are allowing us to have it now is that a pretty decent percentage of us can now get 5 Gbit+ internet access and our wifi routers can do 1 Gbit+ wireless, and the weak link is being able to attach the two.
I did a bit of back-of-the-napkin math/simulation about a possible variable-rate Ethernet (e.g. like NBASE-T, where it has multiple speeds and selects the faster one based on line conditions), and concluded that 80+ Gbit over fairly short cable distances (e.g. 30-50 m) on Cat 8 is entirely possible using modern PHYs/DSPs and the high-symbol-rate, multi-band technology which is dirt cheap thanks to wifi/BT/etc. And this isn't even fantasy: short Cat 7 runs are an entirely different ballpark from a phone pair, and these days mg.fast etc. have shown 10 Gbit+ over that junk.
I'm using mostly fiber just because the servers are connected to a Cisco 9305 with 72 100G ports.
And thanks for pointing at CWDM4; these are quite cheap on eBay now.
Small thing: I just checked Amazon.com: https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z9...
I cannot find anything for less than 285 USD. The blog post gave a price of 174 USD. I have no reason to disbelieve the author, but a bummer to see the current price is 110 USD more!
I think, tragically, the blog post has caused this price increase.
The offers on Amazon are most likely all drop shippers trying to gauge a price that works for them.
You might have better luck ordering directly from China for a fraction of the price: https://detail.1688.com/offer/836680468489.html
I'm going to try a couple other fan assisted cooling options, as I'd like to keep the setup reasonably compact.
I just ran fiber to my desk and I have a more expensive QNAP unit that does 10G SFP+, but this will let me max out the connection to my NAS.
Although I managed to panic the kernel a couple of times without the extra heatsinks on...
What also may not work are Dell rNDC cards. They look like they have OCP 2.0 type 1 connectors, but may not quite fit (please correct me if I'm wrong). They do however have a nice cooling solution, which could be retrofitted to one of the OCP 2.0 cards.
I've also ordered a Chelsio T6225-OCP card out of curiosity. These should fit in the PX adapter but require a third-party driver on macOS (which then supports jumbo frames, etc.)
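With that driver loaded, enabling jumbo frames on macOS should just be the usual MTU bump; en7 below is a placeholder, check `networksetup -listallhardwareports` for the actual interface name:

    # Set a 9000-byte MTU on the card's interface (name is a placeholder)
    sudo ifconfig en7 mtu 9000

    # Verify it stuck
    ifconfig en7 | grep mtu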
What also fits physically is a Broadcom BCM957304M3040C, but there are no drivers on macOS, and I couldn't get the firmware updated on Linux either.
Spec for reference, I’m not 100% sure. https://docs.nvidia.com/nvidia-connectx-5-ethernet-adapter-c...
In my experience, the cheap eBay MLX cards are DellEMC/HPE/etc OEM cards. However I also encountered zero problems cross-flashing those cards back to generic Mellanox firmware. I'm running several of those cross-flashed CX-4 Lx cards going on six or seven years now and they've been totally bulletproof.
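For anyone curious, the cross-flash itself is pretty short with mstflint; the PCI address and firmware image name below are placeholders, and -allow_psid_change is what lets the OEM PSID be overwritten:

    # Identify the card and its current firmware/PSID
    lspci | grep -i mellanox
    sudo mstflint -d 0000:01:00.0 query

    # Burn the stock Mellanox/NVIDIA image for this exact board
    # (OEM cards carry a vendor PSID, hence -allow_psid_change)
    sudo mstflint -d 0000:01:00.0 -i fw-ConnectX4Lx.bin -allow_psid_change burn

    # Reboot or reset the device afterwards so the new firmware becomes active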
`sudo su - <user>` also seems easier for me to type than `sudo -i -u <user>`
Until motherboards include SFP ports it's probably not worth the effort at all in a home setting; external adaptors like the one presented here are unreliable and add several ms of latency.
Where did you get the "several ms of latency" figure from? I have not measured an external card, but maybe I should... because the cards themselves have latency in the range of microseconds, not milliseconds.
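A simple way to put a number on it (the peer address is a placeholder): compare RTTs through the Thunderbolt-attached card against the same test over an onboard NIC.

    # 500 round-trip samples through the Thunderbolt-attached card
    # (10.0.0.2 is a placeholder for the peer machine)
    ping -c 500 -i 0.2 10.0.0.2

    # For sub-millisecond resolution, a request/response benchmark is cleaner,
    # e.g. netperf's TCP_RR against a netserver on the peer;
    # the reported transaction rate is roughly 1 / mean round-trip time
    netperf -H 10.0.0.2 -t TCP_RR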
There are a lot of USB options that matter, and TP-Link ships lots of Realtek chipsets that require very special driver incantations that a lot of the Linux drivers simply don't replicate.
Two or more layers of bad options will surely add 4 ms quick.
USB itself can have a lot of issues anywhere in the chain. I have a Thunderbolt dock where half of the USB ports add latency and reduce throughput just because the USB chipset that powers them is terrible (it has two separate USB chipsets from different brands). Switch to a different port on the exact same dock and it's fine.
A micro-ATX motherboard with on-board 2xSFP28 (Intel E810):
* https://download-2.msi.com/archive/mnu_exe/server/D3052-data...
* https://www.techradar.com/pro/this-amd-motherboard-has-a-uni...
Sure one can buy nice Ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from Best Buy and a cable, you are looking at 2.5 Gb/s at best.
Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.
> Sure one can buy nice Ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from Best Buy and a cable, you are looking at 2.5 Gb/s at best.
For both laptops and desktops, PCIe lanes. Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.
For laptops in particular, power draw. The faster you push copper, the more power you need. And laptops have even fewer PCIe lanes available to waste.
For desktops, it's a question of market demand. Again - most applications don't need ultra high transfer rate, most household connectivity is DSL and (G)PON so 1 GBit/s is enough to max out the uplink. And those few users that do need higher transfer rates can always install a PCIe card, especially as there is a multitude of different options to provide high bandwidth connectivity.
Yes but a hogwash of several gigabits sometimes does give you real-world performance of more than a gigabit.
> Intel doesn't provide many lanes, so manufacturers don't want to waste valuable lanes permanently for capabilities most people don't ever need.
It's been a bunch of years since a single lane could do 10 Gbps, and a bunch more years since a single lane could do 5 Gbps.
Also, don't Ethernet ports tend to be fed by the chipset? So they don't really take CPU lanes.
It all comes down to performance per Watt, the availability of cheap switching gear, and the actual utility in an office / home environment.
For 10 Gbps, cabling can be an issue. Existing "RJ45"-style Cat 6 cables could still work, but maybe not all of them.
Higher speeds will most likely demand a switch to fiber (for anything longer than a few meters) or Twinax DAC (for inter-device connects). Since Wifi already provides higher speeds, one may be inclined to upgrade just for that (because at some point, Wireless becomes Wired, too).
That change comes with the complexity of running new cabling, fiber splicing, worrying about different connectors (SFP+, SFP28, SFP56, QSFP28, ...), incompatible transceiver certifications, vendor lock-in, etc. Not a problem in the datacenter, but try to explain this to a layman.
Lastly, without a faster pipe to the Internet, what can you do other than NAS and AI? The computers will still get faster chips but most folks won't be able to make use of the bandwidth because they're still stuck on 1Gbps Internet or less.
But that will change. Swiss Init7 has shown that 25 Gbps Internet at home is not only feasible but also affordable, and China seems to be adding lots of 10G, and fiber in general.
Fun times ahead.
And while not every Cat 6 run will do 10, it would still be worth a shot; and devices aren't using 5 instead, they're using even less.
Not to mention that Cat 8 will happily do 40 Gbps as long as you can get from your switch to your end devices within 30 meters.
On consumer devices, I think part of the issue is that we’re still wedded to four-pair twisted copper as the physical medium. That worked well for Gigabit Ethernet, but once you push to 5 or 10 Gb/s it becomes inherently expensive. Twisted pair is simply a poor medium at those data rates, so you end up needing a large amount of complex silicon to compensate for attenuation, crosstalk, and noise.
That's doable, but the double whammy is that most people use the network for 'internet' and 1G is simply more than enough; 10G therefore becomes quite niche, so there's no enormous volume to overcome the inherent issues at low cost.
For 10 Gbps I find it simpler and cheaper to use fiber or DACs, but motherboards don't provide SFP+, only RJ45 ports. Above 10 Gbps, copper is a no-go. SFP28 and above would be nice to have on motherboards, but that's a dream with almost zero chance of happening. For most people RJ45 + WiFi 7 is good enough; computer manufacturers will not put SFP+ or SFP28 on boards for a small minority of people.
Servers had a reason to spend for the 10G, 25G and 40G cards which used 4 lanes.
There are 10 Gigabit chips that can run off of one PCIe 4.0 lane now, and the 2.5G and 5G speeds are supported (802.3bz).
wifi is not faster.
However, Ethernet is not as critical as it used to be, even at the office. People like the convenience of having laptops they can move around. Unless you are working from home, having a dedicated office space is now seen as a waste of space. If the speed of the wifi is good enough when you are in a meeting room or in your kitchen, there is no reason to plug in your laptop when you move back to another place, especially if most connections are to the internet and not the local network. In the workplace, most NAS have been replaced by OneDrive / Gdrive; at home, NAS use has always been limited to a niche population: nerds/techies, photographers, music or video producers...
After it happened 3-4 times, I started debugging. It turned out that we usually get at least a bit of sunlight around noon, as it burns away the morning clouds. And my Thunderbolt box was in direct sunlight, and eventually started overheating.
And a Zoom restart made it fall back onto the Wifi connection instead of wired.
I fixed that by adding a small USB-powered fan to the Thunderbolt box as a temporary workaround. I just realized that it's been like this for the last 3 years: https://pics.ealex.net/s/overheat
Made me chuckle.
DisplayPort 2.1 UHBR20 is 80 Gbps.
USB4 maxes out at 80 Gbps.
As you can see, 1 Gbps Ethernet is starting to look like stone-age technology. 2.5 Gbps becoming the next step seems a bit strange when we were jumping orders of magnitude every few years before. But also, Ethernet tends to be used on longer cables than DP or USB, and trying to push it much faster results in exponentially increasing losses to resistance and radiation; the cable starts acting like an antenna even with the twisted pairs. Fiber optics are much better suited to high speed over long distance, but too expensive and fragile for consumer use.
> All other 25 GbE adapter solutions I’ve found so far ... have a spinning fan. ... the biggest downside of the PX adapter is that it gets really hot, like not touchable hot. Sometimes, either the network connection silently disappeared or (sadly) my Mac crashed with a kernel panic in the network driver. ... Other than that, the PX seems to do the job
mcny•6d ago
All I want to do is copy over all the photos and videos from my phone to my computer, but I have to babysit the process and decide whether I want to skip or retry a failed copy. And it is so slow. USB 2.0 slow. I guess everybody has given up on the idea of saving their photos and videos over USB?
diogocp•6d ago
Many phones indeed only support USB 2.0. For example the base iPhone 17. The Pro does support USB 3.2, however.
> I guess everybody has given up on the idea of saving their photos and videos over USB?
Correct.
ranguna•6d ago
My last two phones in the last 4 years had at least USB 3.1
rbanffy•5d ago
Even worse, the control plane is exposed, but for something that runs 3 Hercules mainframe emulators and two Altairs with MP/M, it’s fine.
cirrusfan•6d ago
Do you import originals or do you have the "most compatible" setting turned on?
I always assumed Apple simply hated people who use Windows/Linux desktops, so the occasional broken file was caused by the driver being only sort-of working, and if people complain, well, they can fuck off and pay for iCloud or a Mac. After upgrading to a 15 Pro, which has 10 Gbps USB-C, it still took forever to import photos and the occasional broken photos kept happening; after some research it turns out that the speed was limited by the phone converting the .heic originals into .jpg when transferring to a desktop. Not only does this limit the speed, it also degrades the quality of the photos and deletes a bunch of metadata.
After changing the setting to export original files, the transfer is much faster and I haven’t had a single broken file / video. The files are also higher quality and smaller, although .heic is fairly computationally demanding.
Idk about Android but I suspect it might have a similar behavior
walterbell•6d ago
Until USB has a monthly service business to compete with cloud storage revenue.
drawfloat•5d ago
As wireless charging never quite reached the level hoped for (see AirPower), and Google/Apple seemingly bought and never did anything with a bunch of haptic audio startups, I figure that idea died... but they never cared enough to make sure the USB port remained top end.
rbanffy•5d ago
I’m using Dropbox for syncing photos from phone to Linux laptop, and mounting the SD card locally for cameras, so this is a guess.
kohlschuetter•6d ago
With TB5, and deep pockets, you might probably also benchmark it against a setup with dedicated TB5 enclosures (e.g., Mercury Helios 5S).
TB5 has PCIe 4.0 x4 instead of PCIe 3.0 x4 -- that should give you 50 GbE half-duplex instead of 25 GbE. You would need a different network card though (ConnectX-5, for example).
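Rough napkin math behind that (raw line rate times 128b/130b encoding, ignoring PCIe protocol overhead, so real-world numbers land a bit lower):

    # Approximate usable PCIe bandwidth per direction for a x4 link
    awk 'BEGIN {
      gen3 = 8  * 128/130 * 4;   # PCIe 3.0 x4: ~31.5 Gbps
      gen4 = 16 * 128/130 * 4;   # PCIe 4.0 x4: ~63.0 Gbps
      printf "PCIe 3.0 x4 ~ %.1f Gbps, PCIe 4.0 x4 ~ %.1f Gbps\n", gen3, gen4
    }'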
Pragmatically though, you could also aggregate (bond) multiple 25 GbE network card ports (with Mac Studio, you have up to 6 Thunderbolt buses, so more than enough to saturate a 100GbE connection).
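On macOS the bonding itself can use the built-in link aggregation support; a minimal sketch, assuming two member interfaces en5/en6 (placeholders) and a switch configured for LACP:

    # Create an LACP bond and add the two 25 GbE ports as members
    sudo ifconfig bond0 create
    sudo ifconfig bond0 bonddev en5
    sudo ifconfig bond0 bonddev en6
    sudo ifconfig bond0 up

    # Inspect member state and the negotiated aggregation
    ifconfig bond0

Note that a single TCP flow still hashes onto one member link, so the aggregate bandwidth only shows up with multiple parallel flows.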
geerlingguy•5d ago
I think where it would show more significant speed up is on the AMD Strix Halo cluster.
Except I haven't been able to get RDMA over Thunderbolt on there to work, so it'd be apples to oranges comparing ConnectX to Thunderbolt on Linux.
pdrayton•5d ago
Posted a little bit re: the TB side of things on the Framework and Level1Techs forums but haven’t pulled everything together yet because the higher-speed Ethernet and InfiniBand data is still being collected.
So far my observations re: TB are that, on Strix Halo specifically, while latency can be excellent there seem to be some limits on throughput. My tests cap out at ~11 Gbps unidirectional (Tx|Rx), ~22 Gbps bidirectional (Tx+Rx). Which is weird, because the USB4 ports are advertised at 40 Gbps bidirectional, the links report as 2x20 Gbps, and they are stable with no errors/flapping, so it's not a cabling problem.
The issue seems rather specific to TB networking on Strix Halo using the USB4 links between machines.
Emphasis to exclude common exceptions: other platforms (e.g. Intel users getting well over 20 Gbps); other mini PCs (e.g. MS-1 Max USB4v2); local network (e.g. I’ve measured loopback >100 Gbps); or external storage, where folk are seeing 18 Gbps+ / numbers that align with their devices.
End goal is to get hard data on all reasonably achievable link types. Already have data on TB & lower-speed Ethernet (switched & P2P); currently doing setup & tuning on some Mellanox cards to collect data for higher-speed Ethernet and IB. P2P-only for now; 100GbE switching is becoming mainstream but IB switches are still rather nutty.
Happy to collaborate with any other folk interested in this topic. Reach out to (username at pm dot me).