Shouldn't this be 5µs?
That said, all those numbers feel off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s, which is well outside the range of what HDDs can achieve.
https://brenocon.com/dean_perf.html indicates the original set of numbers were more like 10us, 250us, and 30ms.
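As a sanity check on the units (my arithmetic, not from either linked page), converting a "time to read 1 MB sequentially" figure into throughput shows why ~30 ms/MB is HDD-plausible and 3 GB/s is not:

```python
# Convert a "seconds to read 1 MB sequentially" figure into throughput,
# and go the other way for the suspicious 3 GB/s number.

def implied_throughput_gbps(seconds_per_mb: float) -> float:
    """GB/s implied by a seconds-per-megabyte figure (1 MB = 10^6 bytes)."""
    return (1e6 / seconds_per_mb) / 1e9

# Dean-era figure of ~30 ms per MB for disk:
print(implied_throughput_gbps(30e-3))  # ~0.033 GB/s, i.e. ~33 MB/s: plausible HDD

# At 3 GB/s, reading 1 MB would take:
print(1e6 / 3e9)  # ~0.00033 s, i.e. ~333 microseconds per MB
```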
And it links to https://github.com/colin-scott/interactive_latencies which seems like it extrapolates progress from 14 years ago:
// NIC bandwidth doubles every 2 years
// [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
// TODO: should really be a step function
// 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us. So I guess it's a typo, but it makes me doubt the other numbers.
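A minimal sketch of that extrapolation (my Python rendering of the doubling rule quoted above, not the repo's actual JS, which notes it "should really be a step function"):

```python
# Extrapolate NIC bandwidth the way interactive_latencies appears to:
# start from 1 Gb/s (125 MB/s) in 2003 and double every 2 years.

def nic_bandwidth_gbps(year: int) -> float:
    doublings = (year - 2003) / 2  # fractional doublings, matching the smooth curve
    return 1.0 * 2 ** doublings    # Gb/s

print(nic_bandwidth_gbps(2026))  # ~2896 Gb/s, i.e. roughly 2.9 Tb/s
```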
That’s PCIe 3.0 x4 or PCIe 4.0 x2 territory, which a decent commodity M.2 NVMe SSD can plausibly saturate, at least for reads.
> which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
We’re not that far off. 100GbE hardware is not especially expensive these days. Little “AI” boxes with 400-800 Gbps of connectivity are a thing.
That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.
http://ithare.com/infographics-operation-costs-in-cpu-clock-...
whynotmaybe•45m ago
> Productivity soars when a computer and its users interact at a pace (<400ms) that ensures that neither has to wait on the other.
https://lawsofux.com/doherty-threshold/