Those must've been the server Atoms or the later models that aren't actually all that low-power, as the ones I'm familiar with are well under 10W.
The way they're trying to solve it is very similar to this article, by doing the USB-PD negotiation during the U-Boot bootloader stage:
- https://gitlab.collabora.com/hardware-enablement/rockchip-35...
- https://lore.kernel.org/u-boot/20241015152719.88678-1-sebast...
I don't know yet how I feel about the fact that a driver in the OS is supposed to take this role and tell the power supply how much power to deliver. Not necessarily a novel security concern, but a potential nightmare from a plain B2C customer service perspective (e.g. a faulty driver causing the system to shut down during boot, frying the motherboard, ...).
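For anyone curious what that negotiation actually involves: the sink reads the source's advertised PDOs and requests one by position. A minimal sketch of the selection step in Python; the bit layout is my reading of the fixed-supply PDO format from the USB-PD spec and the example PDO values are made up, so treat it as illustrative rather than a reference implementation.

    def parse_fixed_pdo(pdo: int):
        """Decode a fixed-supply source PDO (bits 31:30 == 0b00).
        Voltage is in 50 mV units (bits 19:10), max current in 10 mA units (bits 9:0)."""
        if ((pdo >> 30) & 0x3) != 0b00:
            return None  # battery/variable/augmented PDO, ignored here
        voltage_mv = ((pdo >> 10) & 0x3FF) * 50
        max_current_ma = (pdo & 0x3FF) * 10
        return voltage_mv, max_current_ma

    def pick_pdo(pdos, needed_w):
        """Pick the lowest-voltage fixed PDO that covers the power budget."""
        candidates = []
        for pos, pdo in enumerate(pdos, start=1):  # object positions are 1-based
            decoded = parse_fixed_pdo(pdo)
            if decoded is None:
                continue
            mv, ma = decoded
            if mv * ma / 1e6 >= needed_w:
                candidates.append((mv, ma, pos))
        return min(candidates) if candidates else None

    # Example advertisement (made-up values): 5 V / 3 A and 12 V / 2.25 A
    pdos = [0x0001912C, 0x0003C0E1]
    print(pick_pdo(pdos, needed_w=20))  # -> (12000, 2250, 2): request the 12 V object

Presumably the appeal of doing this in U-Boot is that the negotiation has already happened before the kernel (or a buggy driver) gets involved.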
I would have imagined that USB controller chipsets would offer some nonvolatile means to set the PD configuration (like jumpers or an EEPROM) precisely because of this issue. It's surprising to me that such a feature isn't common.
Other boards don't do USB-PD at all and just rely on you using a PSU with a USB-C connector that defaults to 5V, e.g. the RPi and Orange Pi 5 (RK3588).
Everything beyond 5V requires a handshake between device and PSU, which ensures that the connected device can actually handle the higher power output.
It's pretty common for really cheap electronics to skip these resistors, and then they can only be powered with a USB-A to USB-C cable, not C-to-C. (Because USB-A ports are always a source and never a sink.) Adafruit even makes a $4.50 adapter to fix the issue.
But you're right that everything higher than 5V & 3A gets significantly more complex.
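Concretely, the sink advertises itself with roughly 5.1 kΩ pull-downs (Rd) on the CC pins, and the source's pull-up (Rp) value is what tells the sink whether it may draw default power, 1.5 A, or 3 A at 5 V. A rough sketch of the sink-side classification; the voltage thresholds are approximations of the Type-C detection windows, not exact spec values.

    def classify_cc_advertisement(cc_volts: float) -> str:
        """Sink side: map the voltage seen on a CC pin (with the sink's
        ~5.1 kOhm Rd fitted) to the source's current advertisement.
        Thresholds approximate the Type-C spec's vRd detection windows."""
        if cc_volts < 0.2:
            return "no source attached"
        if cc_volts < 0.66:
            return "default USB power (500/900 mA at 5 V)"
        if cc_volts < 1.23:
            return "1.5 A at 5 V"
        return "3.0 A at 5 V"

    # The cheap-device failure mode: with no Rd fitted at all, a C-to-C source
    # never detects an attach and keeps VBUS off, while an A-to-C cable just
    # hard-wires 5 V -- presumably what the Adafruit adapter works around by
    # adding the resistors externally.
    for v in (0.4, 0.9, 1.7):
        print(f"{v:.1f} V on CC -> {classify_cc_advertisement(v)}")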
In my experience, if the device doesn't have enough power to actually boot it will simply slow charge at the default USB rate.
This can be problematic with devices that immediately try to boot when powered on.
I had an iPad 3 stuck in a low-battery reboot loop like this for hours once upon a time. I eventually got the idea to force it into DFU mode and was finally able to let it charge long enough to complete its boot process.
TIL.
This really feels like a switch configuration problem. A compliant PoE PD circuit indicates its power class and shouldn't need to bootstrap power delivery. If the PD is compliant and its components are selected correctly, then the PSE is either non-compliant or configured incorrectly.
> Our device required about 23W when fully operational, which pushed us into 802.3at (PoE+) territory
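For reference, the PD-side power budgets behind those classes (802.3af classes 0-3, 802.3at class 4, 802.3bt classes 5-8) look roughly like this; the numbers are the commonly quoted per-class maximums at the PD, so verify against the standard for a real design.

    # Approximate max power available at the PD per IEEE class (watts).
    PD_CLASS_MAX_W = {
        0: 12.95, 1: 3.84, 2: 6.49, 3: 12.95,   # 802.3af
        4: 25.5,                                # 802.3at (PoE+)
        5: 40.0, 6: 51.0, 7: 62.0, 8: 71.3,     # 802.3bt
    }

    def minimum_class_for(load_w: float) -> int:
        """Smallest class whose PD-side budget covers the load."""
        for cls in (1, 2, 3, 4, 5, 6, 7, 8):
            if PD_CLASS_MAX_W[cls] >= load_w:
                return cls
        raise ValueError("load exceeds 802.3bt class 8")

    print(minimum_class_for(23))  # ~23 W device -> class 4, i.e. 802.3at/PoE+ territory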
The problem the author solved is quite interesting. But I can’t help but think how wasteful it is to load up a full copy of Windows just to serve up dumb advertisements.
The attack surface of a full copy of Windows 10 is an attacker's wet dream.
Hope most of these installations are put out to pasture and replaced with very low power solutions.
Windows was running because Linux was too hard for vendors.
throw0101d•1d ago
For the record, 802.3bt was released in 2018:
* https://en.wikipedia.org/wiki/Power_over_Ethernet
It allows for up to 71W at the far end of the connection.
londons_explore•1d ago
Why can't PoE standards do the same?
Simply don't set voltage or current limits in the standard, and instead let endpoint devices advertise what they're capable of.
userbinator•1d ago
esseph•1d ago
Unless you like home and warehouse fires
Or if you want to add per-port fuses. That sounds incredibly expensive.
wmf•23h ago
esseph•23h ago
It can be substantial. But yes, there are cable spec requirements for PoE depending on the demands of the device!
As of 2017, the NEC has new standards and a whole section for PoE devices above 60W, specifically a section on safety and best practices. It DOES have cable requirements that impact the cable standard chosen.
More info on that here: https://www.panduit.com/content/dam/panduit/en/landing-pages...
From: https://reolink.com/blog/poe-distance-limit/?srsltid=AfmBOop... ---
PoE Distance Limit (802.3af)
The original 802.3af PoE standard ratified in 2003 provides up to 15.4W of power to devices. It has a maximum distance limit of 100 meters, like all PoE standards. However, because of voltage drop along Ethernet cables, the usable PoE distance for 15.4W devices is often only 50-60 meters in practice using common Cat5e cabling.
In addition, this note from Wikipedia: https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa... ---
The ISO/IEC TR 29125 and Cenelec EN 50174-99-1 draft standards outline the cable bundle temperature rise that can be expected from the use of 4PPoE. A distinction is made between two scenarios:
bundles heating up from the inside to the outside, and bundles heating up from the outside to match the ambient temperature
The second scenario largely depends on the environment and installation, whereas the first is solely influenced by the cable construction. In a standard unshielded cable, the PoE-related temperature rise increases by a factor of 5. In a shielded cable, this value drops to between 2.5 and 3, depending on the design.
PoE+ Distance Limit (802.3at)
An update to PoE in 2009 called PoE+ increased the available power to 30W per port. The formal 100-meter distance limit remains unchanged from previous standards. However, the higher power budget of 30W devices leads to increased voltage drops during transmission over long distances.
PoE++ Distance Limit (802.3bt)
The latest 2018 PoE++ standard increased available power further to as much as 60W. As you can expect, with higher power outputs, usable distances for PoE++ are even lower than previous PoE versions. Real-world PoE++ distances are often only 15-25 meters for equipment needing the full 60W.
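The voltage-drop arithmetic behind those numbers is straightforward to sketch, assuming the usual 802.3af worst-case figures (44 V minimum at the PSE, 350 mA maximum current, about 20 Ω DC loop resistance for a 100 m Cat5e channel over two pairs); real installs are usually better than worst case.

    def pd_power(pse_volts=44.0, current_a=0.35, loop_ohms_per_100m=20.0, length_m=100.0):
        """Power left at the PD after I^2*R loss in the cable run.
        Defaults are the 802.3af worst-case assumptions, not measured values."""
        loop_r = loop_ohms_per_100m * length_m / 100.0
        v_drop = current_a * loop_r
        cable_loss_w = current_a ** 2 * loop_r
        return {
            "volts_at_pd": pse_volts - v_drop,
            "watts_lost_in_cable": cable_loss_w,
            "watts_at_pd": pse_volts * current_a - cable_loss_w,
        }

    print(pd_power())             # ~12.95 W at the PD: the familiar 802.3af guarantee
    print(pd_power(length_m=50))  # shorter run: less drop, more margin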
wmf•23h ago
esseph•23h ago
The device needs a certain amount of power to keep itself alive. Depending on how the device is designed and whether it actually adheres to the standards, the device should simply not have enough power to start at, say, 80m. Or let's say they pushed the install from the get-go (happens all the time) and it's actually 110m on poor / underspec'd cable.
And let's say the device has enough power to start, but you're using indoor Cat5 that's been outdoors for 7 years, and you don't know this but it's CCA. If it's in a bundle with other similar devices drawing high power and there is enough heat concentrated at a bend, then yes, the cable could catch fire without the device ever having a problem. As long as the device has enough power it's going to keep doing its thing until the cable has degraded enough to cause signal drop and, assuming it's using one of the more modern 4-pair PoE standards, it would just shut off. But that could be after the drapes or that Amazon box in the corner of the room caught fire.
We're just lucky in the residential space that PoE hasn't been as "mass market" as an iphone, and we've been slowly working into higher power delivery as demands have increased.
IMO? It's all silly though. We should just go optical and direct-DC whenever possible ;)
leoedin•22h ago
Looping the cable or putting it in a confined space could cause issues. The cable could then catch fire even though it appeared to be operating normally to the PoE controller.
throwaway67743•19h ago
Aurornis•15h ago
Longer wires don’t increase the overheating risk because the additional heat is divided over the additional length.
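That's easy to see from the I²R arithmetic, at least for a fixed current draw: total heat grows with length, but watts per metre of cable doesn't. A quick sketch, using the ~20 Ω per 100 m worst-case loop resistance as a stand-in figure.

    def cable_heat(current_a: float, length_m: float, loop_ohms_per_m: float = 0.2):
        """Return (total watts, watts per metre) dissipated in the run.
        0.2 ohm/m is the 802.3af-style worst-case loop figure, assumed here."""
        total_w = current_a ** 2 * loop_ohms_per_m * length_m
        return total_w, total_w / length_m

    for length in (10, 50, 100):
        total, per_m = cable_heat(current_a=0.6, length_m=length)
        print(f"{length:>3} m: {total:.2f} W total, {per_m:.3f} W/m")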
jorvi•16h ago
What horrible naming. PoE 2, 3, 4, etc. would have been much better.
esseph•13h ago
https://www.electricallicenserenewal.com/Electrical-Continui...
brirec•23h ago
This is why you don’t want “fake” Cat6 etc. cable. I’ve seen copper-clad aluminum sold as Cat6 cable before, but that shit will break 100% of the time, and a broken cable will absolutely catch fire from a standard 802.3at switch.
esseph•23h ago
https://en.wikipedia.org/wiki/Power_over_Ethernet#Power_capa...
For my dayjob I power a lot of very expensive, not-even-on-the-market-yet radios and other equipment via multiple PoE standards, mixed vendors, 2-pair, 4-pair, etc., and we have run into all kinds of PoE problems over the years.
PoE fires do happen. Sometimes it's the cable, the connector, sometimes something happened to the cable run. Sometimes the gear melts.
https://www.powerelectronictips.com/halt-and-catch-fire-the-...
throw0101d•17h ago
It should be noted that there are two standards (of course) for Ethernet cabling, and one (TIA) officially hardcodes distances (e.g., 100m) but the other (ISO) simply specifies that the signal-to-noise ratio has to be within certain limits, which could allow for longer distances (>100m):
* https://www.youtube.com/watch?v=kNa_IdfivKs
A specific product that lets you go longer than 100m:
* https://www.youtube.com/watch?v=ZY48KUAZKhM
esseph•23h ago
---
Non-standard implementations: There are more than ten proprietary implementations. The more common ones are discussed below.
https://en.wikipedia.org/wiki/Power_over_Ethernet#Non-standa...
mrheosuper•1d ago
Even so, the PD protocol limits how much power can be transferred.
Aurornis•22h ago
There are thermal and safety limits to how much current and voltage you can send down standard cabling. The top PoE standards are basically at those limits.
> and instead let endpoint devices advertise what they're capable of.
There are LLDP provisions to negotiate power in 0.1W increments.
The standards are still very useful for having a known target to hit. It’s much easier to say a device is compatible with one of the standards than to have to check the voltage and current limits for everything.
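To illustrate the granularity: the 802.3 Power via MDI LLDP extension carries the PD-requested and PSE-allocated power as 16-bit fields counted in 0.1 W units. The sketch below only packs those two fields; a real TLV has several more (OUI, subtype, power class, priority), so this is a simplification, not a wire-accurate encoder.

    import struct

    DECIWATT = 0.1  # the LLDP power fields count in tenths of a watt

    def encode_power_fields(pd_requested_w: float, pse_allocated_w: float) -> bytes:
        """Pack PD-requested and PSE-allocated power as two big-endian
        16-bit deci-watt values (the rest of the TLV is omitted)."""
        def to_dw(watts: float) -> int:
            dw = round(watts / DECIWATT)
            return max(0, min(dw, 0xFFFF))  # the spec caps valid values well below this
        return struct.pack("!HH", to_dw(pd_requested_w), to_dw(pse_allocated_w))

    # A PD asking for 23.0 W with the PSE granting 25.5 W:
    print(encode_power_fields(23.0, 25.5).hex())  # '00e600ff' -> 230 and 255 deci-watts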