This isn't true anymore. Starlink changed the whole game. It's fast and low-latency now, and people on the older satellite services have switched to Starlink en masse because those services were so bad.
Most of the time no one would notice. For some applications it's definitely something that needs to get designed in.
I still occasionally get blips on Comcast, mostly late at night when I'm one of the few who notices.
Seems sensible to take a small convenience hit now to mitigate those risks.
- Depending on your product or use case, somewhere between a majority and a vast majority of your users will be using your product from a mobile device. Throughput and latency can be fine one moment and terrible the next, and both are highly variable over time. You might be able to squeeze 30Mbps and 200ms pings out of one request and then face 2Mbps and 4000ms pings seconds later.
- WiFi generally sucks for most people. The fact that they have a 100Mbps/20Mbps terrestrial link doesn't mean squat if they're eking out 3Mbps with eye-watering packet loss because they're in their attic office. The vast majority of your users are using wireless links (WiFi or cell) and are not in any way hardlined to the internet.
It's a nice feature, but it would be even nicer if you could pin some apps to prevent their offloading even if you haven't used them in ages.
That change would make it _viable_ for me at all; right now it's next to useless.
Currently iOS will offload apps that provide widgets (like Widgetsmith) even when I have multiple Widgetsmith widgets on my 1st and 2nd home screens; I just never open the app (I don't need to, the widgets are all I use). One day the widgets will just be black, and tapping on them does nothing. I have to search for Widgetsmith and then make the phone re-download it. So annoying.
Also annoying: you can still get push notifications from offloaded apps. Tapping on the notification does _nothing_: no alert, no re-download, just nothing. Again, you have to connect the dots and re-download it yourself.
This "feature" is very badly implemented. If they just allowed me to pin things and added some better UX (and logic for the widget issue) it would be much better.
0: https://support.apple.com/guide/iphone/manage-storage-on-iph...
All of a sudden one day, I was cut off from all my music, by the creators of the iPod!
I switched away from Apple Music and will never return. 15 years of extensive usage of iTunes, and now I will never trust Apple with my music needs again. I'm sure they don't care, or consider the move a good tradeoff for their user base, but it's the most user hostile thing I've ever experienced in two decades on Apple platforms.
Oh and all my lossless got shit on.
Fuck me I guess??
Apple didn’t communicate that well and many folks lost stuff, particularly if they are picky about recordings.
All of the CD collection stuff has degraded everywhere as the databases of tracks have been passed around to various overlords.
I couldn't be bothered to spend time manually selecting stuff to download back then. It was offensive to even ask that I spend 30 minutes manually correcting a completely unnecessary mistake on their part. And this was during a really, really bad time in interface design, with the flat-UI idiocy all the rage and people abandoning any UI standards that gave affordances at all.
If I'm going to go and correct Apple's mistake, I may as well switch to another vendor while I'm at it. Which is what I did. I'm on Spotify to this day, even though it has many of the same problems as Apple Music. At least Spotify had fewer bugs at the time, and they hadn't deleted music off my device.
Good riddance and I'll never go back to Apple Music.
Add music on macOS, and on your phone. Then sync.
RESULT: one overwrites the other, regardless of any settings.
You no longer have the audio you formerly owned.
I can (and do) find things around the house that don't depend on a screen, but it's annoying to know that I don't really have much of a backup way to access the internet if the power is out for an extended period of time. (Short of plunking down for an inverter generator or UPS I suppose.)
Or you could use a Raspberry Pi or similar and a USB WiFi adapter (make sure it supports AP mode) and a battery bank, for an "emergency" battery-operated WiFi router that you'd only use during power outages.
EDIT: Unless your ISP's CPE (modem/whatever) runs on 5 volts, you'd need more than just a USB power bank to keep things going. Maybe a cheap amazon boost converter could get you the extra voltage option.
I run my router + my RPi server off-grid with ~1kWh of usable (lead-acid) battery capacity.
So with those and my laptop's battery, I sailed into our last couple of minor daytime power cuts without even noticing. Sounds of commotion from neighbours alerted me that something was up!
It's grounds for endless debate because it's inherently a fuzzy answer, and everyone has their own limits. However the outcome naturally becomes an amalgamation of everyone's response. So perhaps a post like this leads to a few more slim websites.
Part of the problem is the acceptance of the term "long tail" as normal. It is not. It is a method of marginalizing people.
These are not numbers, these are people. Just because someone is on an older phone or a slower connection does not make them any less of a human being than someone on a new phone with the latest, fastest connection.
You either serve, or you don't. If your business model requires you to ignore 20% of potential customers because they're not on the latest tech, then your business model is broken and you shouldn't be in business.
The whole reason companies are allowed to incorporate is to give them certain legal and financial benefits in exchange for providing benefits (economic and other) to society. If your company can't hold up its end of the bargain, then please do go out of business.
Or, at least the business needs to recognize that their ending support for Y is literally cutting off potential customers, and affirmatively decide that's good for their business. Ask your company's sales team if they'd be willing to answer 10% of their inbound sales calls with "fuck off, customer" and hang up. I don't think any of them would! But these very same companies think nothing of ending support for 'old' phones or supporting only Chrome browser, or programming for a single OS platform, which is effectively doing the same thing: Telling potential customers to fuck off.
Grateful for the blog w/ nice data tho TY
Loads of people are on "a slow link" or iffy internet who would otherwise have fast internet. Like... plane WiFi! Or driving through less populated areas (or the UK outside of London) with spotty phone reception.
Turns out, it's really tough to do accurately. The main reason is that the public datasets are a mess. For example, the internet availability data is in neat hexagons, while the census demographic data is in weird, irregular shapes that don't line up. Trying to merge them is a nightmare and you lose a ton of detail.
So our main takeaway, rather than just being a pretty map, was that our public data is too broken to even see the problem clearly.
I wrote up our experience here if anyone's curious: https://zeinh.ca/projects/mapping-digital-divide/
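Just to make the shape-mismatch problem concrete, here's a rough sketch of the kind of area-weighted join you end up doing, using geopandas. The file names and columns ("hex_coverage.geojson", "tracts.shp", "max_down_mbps", "GEOID") are hypothetical placeholders, not the actual datasets from the write-up:

```python
# Area-weighted interpolation sketch: intersect coverage hexagons with census
# tracts and weight each hexagon's value by how much of the tract it covers.
import geopandas as gpd

hexes = gpd.read_file("hex_coverage.geojson")   # broadband availability per hexagon
tracts = gpd.read_file("tracts.shp")            # census demographics per tract

# Use an equal-area projection so area ratios are meaningful.
hexes = hexes.to_crs("EPSG:5070")
tracts = tracts.to_crs("EPSG:5070")
tracts["tract_area"] = tracts.geometry.area

# Intersect the two layers; this is exactly where detail gets smeared out.
pieces = gpd.overlay(hexes, tracts, how="intersection")
pieces["weight"] = pieces.geometry.area / pieces["tract_area"]
pieces["weighted_speed"] = pieces["max_down_mbps"] * pieces["weight"]

per_tract = pieces.groupby("GEOID", as_index=False)["weighted_speed"].sum()
print(per_tract.head())
```

The averaging step is the lossy part: a tract that is mostly well-served on paper can still contain pockets with no service at all.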
I think in so many fields the datasets are by far the highest-impact thing someone can work on, even if it seems a bit mundane and boring. Basically every field I've worked in struggles for lack of reliable, well-maintained, open-access data, and when a field does get it, it usually sets off a massive amount of related work. (I've seen this happen in genetics, and in ML of course once we got ImageNet and started getting social media text instead of just old newspaper corpora.)
That would definitely be advice I'd give to many people searching for a project in a field -- high quality data is the bedrock infrastructure for basically all projects in academic and corporate research, so if you provide the data, you will have a major impact, pretty much guaranteed.
So anyways, I bring this up with my local government in Chicago and they recommend that I switch to AT&T Fiber because it's listed as available at my address in the FCC's database. Well, I would love to do that except that
1. The FCC's database was wrong and rejected my corrections multiple times before AT&T finally ran fiber to my building this year (only 7 years after they claimed that it was available in the database despite refusing to connect to the building whenever we tried).
2. Now that it is in the building, their fiber ISP service can't figure out that my address exists and already has copper telephone lines run to it by AT&T themselves, so their system cannot sell me the service. I've been arguing with them for 3 months about this and have even sent them pictures of their own demarc and the existing copper lines to my unit.
3. Even if they fixed the 1st issue, they coded my address as being on a different street than its mailing address and can't figure out how to sell me a consumer internet plan with this mismatch. They could sell me a business internet plan at 5x the price though.
And that's just my personal issues. And I haven't even touched on how not every cell phone is equally reliable, how the switch to 5G has made many cell phones less reliable compared to 3G and 4G networks, how some people live next to live event venues where they can have great mobile connections 70% of the time but the other 30% of the time it becomes borderline unusable, etc.
The rule I've come up with is one user action, one request, one response. By 'one response', I mean one HTTP response containing DOM data; if that response triggers further requests for CSS, images, fonts, or whatever, that's fine, but all the modifications to the DOM need to be in that first request.
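As a minimal sketch of that rule, here's roughly what it looks like with Flask; the endpoint, the in-memory cart, and the inline template are made up purely for illustration:

```python
# "One user action, one request, one response": the POST comes in, and the
# single HTML response carries every DOM change the page needs.
from flask import Flask, request, render_template_string

app = Flask(__name__)
CART = []  # toy in-memory cart, one per process, just for illustration

@app.post("/add-to-cart")
def add_to_cart():
    CART.append(request.form["item_id"])
    # New cart contents and the count come back in this one response;
    # no follow-up XHR round trips over a flaky link.
    return render_template_string(
        "<ul>{% for it in items %}<li>{{ it }}</li>{% endfor %}</ul>"
        "<p>{{ items|length }} item(s) in cart</p>",
        items=CART,
    )
```

CSS, images, and fonts referenced by that HTML can still load lazily; the point is that the DOM itself doesn't depend on a second round trip.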
An amazing thing.
It's really eye-opening to set up something like toxiproxy, configure bandwidth limitations, latency variability, and packet loss in it, and run your app, or your site, or your API endpoints over it. You notice all kinds of UI freezing, lack of placeholders, gratuitously large images, lack of / inadequate configuration of retries, etc.
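For anyone who hasn't tried it, toxiproxy is driven through a small admin HTTP API (it listens on localhost:8474 by default). A rough sketch of wiring it up from Python is below; the ports, toxic names, and numbers are arbitrary examples, so check the toxiproxy docs for the full list of toxics and their attributes:

```python
# Point the app at localhost:21212 instead of the real backend, then pile
# latency and a bandwidth cap onto that proxy via toxiproxy's admin API.
import requests

ADMIN = "http://localhost:8474"  # default toxiproxy-server admin address

# Proxy a local port to the real upstream.
requests.post(f"{ADMIN}/proxies", json={
    "name": "flaky_api",
    "listen": "127.0.0.1:21212",
    "upstream": "api.internal.example:443",
    "enabled": True,
}).raise_for_status()

# ~1s of latency with heavy jitter on responses...
requests.post(f"{ADMIN}/proxies/flaky_api/toxics", json={
    "name": "slow_down",
    "type": "latency",
    "stream": "downstream",
    "toxicity": 1.0,
    "attributes": {"latency": 1000, "jitter": 800},
}).raise_for_status()

# ...and a ~50 KB/s bandwidth cap, roughly "bad hotel WiFi".
requests.post(f"{ADMIN}/proxies/flaky_api/toxics", json={
    "name": "thin_pipe",
    "type": "bandwidth",
    "stream": "downstream",
    "attributes": {"rate": 50},
}).raise_for_status()
```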
So I was tasked with fixing the issue. Instead of loading the whole list, I established a paginated endpoint and a search endpoint. The page now loaded in less than a second, and searches of customer data loaded in a couple seconds. The users hated it.
Their previous way of handling the work was to just keep the index of all customers open in a browser tab all day, Ctrl+F the page for an instant result and open the link to the customer details in a new tab as needed. My upgrades made the index page load faster, but effectively made the users wait seconds every single time for a response that used to be instant at the cost of a one time per day long wait.
There's a few different lessons to take from this about intent and design, user feedback, etc. but the one that really applies here is that sometimes it's just more friendly to let the user have all the data they need and allow them to interact with it "offline".
You can easily see this when using WiFi aboard a flight, where latency is around 600 msec at minimum (most airlines use geostationary satellites; NGSO for airline use isn't quite there yet). There is so much stuff that happens serially in back-and-forth client-server communication in modern web apps. The developer sitting in SF with a sub-10 ms latency to their development instance on AWS doesn't notice this, but it's sure as heck noticeable when the round trip is 60x that. Obviously, some exchanges have to be serial, but there is a lot of room for optimization and batching that just gets left on the floor.
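To make the arithmetic concrete: three independent calls done serially on a 600 ms round trip cost ~1.8 s of pure latency; issued concurrently they cost ~0.6 s. A toy sketch with asyncio/aiohttp (the URLs are invented):

```python
# Serial awaits pay one full RTT per request; gather() overlaps them so the
# wall-clock cost is roughly one RTT for the whole batch.
import asyncio
import aiohttp

URLS = [
    "https://api.example.com/user",
    "https://api.example.com/preferences",
    "https://api.example.com/notifications",
]

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.json()

async def serial():
    async with aiohttp.ClientSession() as s:
        return [await fetch(s, u) for u in URLS]                    # ~3 x RTT

async def batched():
    async with aiohttp.ClientSession() as s:
        return await asyncio.gather(*(fetch(s, u) for u in URLS))   # ~1 x RTT

if __name__ == "__main__":
    asyncio.run(batched())
```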
It's really useful to use some sort of network emulation tool like tc-netem as part of basic usability testing. Establish a few baseline cases (slow link, high packet loss, high latency, etc) and see how usable your service is. Fixing it so it's better in these cases will make it better for everyone else too.
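One way to bake that into a test script is a tiny wrapper that shells out to tc-netem (Linux only, needs root; "eth0" below is a placeholder interface, and the delay/loss/rate numbers are just one plausible "rural DSL" baseline):

```python
# Apply and remove a degraded-network profile via tc-netem from a test script.
import subprocess

IFACE = "eth0"

def degrade():
    # ~300 ms +/- 100 ms delay, 3% loss, ~1 Mbit/s.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", "300ms", "100ms", "loss", "3%", "rate", "1mbit"],
        check=True)

def restore():
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    degrade()
    input("Profile applied - exercise the app, then press Enter to restore...")
    restore()
```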
The article looks at broadband penetration in the US, which is useful, but you need to plan for worst-case scenarios, not just statistically likely cases.
I have blazing fast internet at home, and that isn't helpful for the AAA app when I need to get roadside assistance.
I want the nytimes app to sync data for offline reading; locally caching literally all of this week's text should be happening.
Your point reminded me of the NASA Mars rover deployed in 2021 with the little Ingenuity helicopter on board.
The helicopter had a bug that required a software update, which NASA had to upload over three network legs: the Deep Space Network to Mars, a UHF leg from Mars-orbiting robotic vehicles to the rover, and a ZigBee connection from the rover to the Ingenuity helicopter. A single message could take between 5 and 20 minutes to arrive...
Edit: I described this in an article back then:
Ping statistics for <an IP in our DC>:
Packets: Sent = 98585, Received = 96686, Lost = 1899 (1% loss),
Approximate round trip times in milli-seconds:
Minimum = 43ms, Maximum = 3197ms, Average = 58ms
It's almost exactly 5s of loss per 60s^, and it has been since I got it. For "important" live stuff I have to switch to my cellphone, in a specific part of my house. Otherwise, the fact that most things are usable on "mobile" means my experience isn't "the worst", but it does suck. I haven't played a multiplayer game with my friends in a year and a half, since AT&T shut off fixed wireless to our area. Oh well, 250mbit is almost worth it.
^: When I say this, I wasn't averaging; it drops from 0:54 to 0:59, in essence "5 seconds of every minute".
This is spot on for me. I live in a low-density community that got telcom early and the infrastructure has yet to be upgraded. So, despite being a relatively wealthy area, we suffer from poor service and have to choose between flaky high latency high bandwidth (Starlink) and flaky low latency low bandwidth (DSL). I’ve chosen the latter to this point. Point to point wireless isn’t an option because of the geography.
The ol reliable plain HTML stuff usually works great though, even when you have to wait a bit for it to load.
It's hard to make a website that doesn't work reasonably well with that though. Even with all the messed up Javascript dependencies you might have.
I feel for those on older non-Starlink satellite links, e.g. islands in the Pacific that still rely on Inmarsat geostationary links: 492 kbit/s maximum (lucky if you get that!), 3-second latency, and pricing by the kilobyte. Their lifestyle just doesn't use the internet much at all by necessity, but at those speeds, even when they're willing to pay the exorbitant cost, sites will just time out.
Starlink has been a revolution for these communities but it's still not everywhere yet.
The other issue that's under-considered is lower spec devices. Way more people use cheap Android phones than fancy last-five-years iPhones. Are you testing on those more common devices?
There is no need or moral obligation for all of the internet to be accessible to everyone. If you're not a millionaire, you're not going to be able to join a rich country club. If you don't have a background in physics, the latest research won't be accessible to you. If you don't have a decent video card, you won't be able to play many of the latest games. The idea that everything should be equally accessible to everyone is simply the wrong assumption. Inequality is not a bad thing per se.
However, good design principles involve an element of parsimony. Not minimalism, mind you, but a purposeful use of technology. So if the content you wish to show is best served by something resource intensive that excludes some or even most people from using it, but those that can access it are the intended audience, then that's fine. But if you're just jamming a crapton of worthless gimmickry into your website that doesn't serve the purpose of the website, and on top of that, it prevents your target audience from using it, then that's just bad design.
Begin with purpose and a clear view of your intended audience and most of this problem will go away. We already do that by making websites that work with both mobile and desktop browsers. You don't necessarily need to make resource heaviness a first-order concern. It's already entailed by audience and informed by the needs of the presentation.
I lived happily on dialup when I was a teenager, with just one major use case for more bandwidth.
Huh, worked fine for me: https://i.imgur.com/Y7lTOac.png
(if you don't have a favorite, try react.dev)
We're using this benchmark all the time on https://www.firefly-lang.org/ to try to keep it a perfect 100%.
Just a warning about the screenshot he's referencing here: the slice of map that he shows is of the western half of the US, which includes a lot of BLM land and other federal property where literally no one lives [0], which makes the map look a lot sparser in rural areas than it is in practice for humans on the ground. If you look instead at the Midwest on this map you'll see pretty decent coverage even in most rural areas.
The weakest coverage for actually-inhabited rural areas seems to be the South and Appalachia.
[0] https://upload.wikimedia.org/wikipedia/commons/0/0f/US_feder...
Programmers: Let's design for crappy internet
Internet providers: Maybe it's not necessary
* blind
* deaf
* reading impaired
* other languages/cultures
* slow/bad hardware/iffy internet
To me, at some point we need to get to an LCARS-like system, where we don't program bespoke UIs at all. Instead the APIs are available and the UI consumes them, knows what to show (with LLMs), and a React interface is JITted on the spot. And the LLM will remember all the rules for blind/deaf/etc...
Also I think until LLMs become reliable (which may be never), using them in the way you describe is a terrible idea. You don't want your UI to all of a sudden hallucinate something that screws it up.
As for emitting internationalized interfaces this way - yes, it absolutely makes sense. If you're asking for an address and the customer is in the US, the LLM can easily whip up a form for that kind of address. If you're somewhere else, it can do that too. There's no reason for bespoke interfaces that never get the upgrade because someone made it overly complicated for some reason.
Back in the day, AOP was almost a big thing (for a small subset of programmers). Perhaps what was missing was having a generalized LLM that allowed for the concern to be injected. Forgot your ALT tag? LLM. Not internationalized? LLM. Non-complicated Lynx-compatible view? LLM.
The NTIA or FCC just released an updated map a few days ago (part of the BEAD overhaul) that shows the locations currently covered by existing unlicensed fixed wireless.
Quick Google search didn't find a link but I have it buried in one of my work slack channels. I'll come back with the map data if somebody else doesn't.
The state of broadband is way, way worse than people think in the US.
Indirect Link: https://medium.com/spin-vt/impact-of-unlicensed-fixed-wirele...
This often fails in all sorts of ways:
* The client treats timeout as end-of-file, and thinks the resource is complete even though it isn't. This can be very difficult for the user to fix, except as a side-effect of other breakages.
* The client correctly detects the truncation, but either it or the server are incapable of range-based downloads and try to download the whole thing from scratch, which is likely to eventually fail again unless you're really lucky.
* Various problems with automatic refreshing.
* The client's only (working) option is "full page refresh", and that re-fetches all resources including those that should have been cached.
* There's some kind of evil proxy returning completely bogus content. Thankfully less common on the client end in a modern HTTPS world, but there are several ways this can still happen in various contexts.
wget -c https://zigzag.com/file1.zip
Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.
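The same resume logic is easy enough to do in code too. Here's a hedged sketch with the `requests` library, assuming the server answers 206 Partial Content when it honors the Range header (the URL is the example one from above):

```python
# Resume a partial download: if a partial file exists, request the remaining
# bytes with a Range header and append; fall back to a full download if the
# server ignores the range and returns 200.
import os
import requests

def resume_download(url, path, chunk=64 * 1024):
    have = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={have}-"} if have else {}
    with requests.get(url, headers=headers, stream=True, timeout=30) as r:
        if r.status_code == 206:
            mode = "ab"              # server honored the range: append
        elif r.status_code == 200:
            mode = "wb"              # no Range support: start over
        else:
            r.raise_for_status()
            return
        with open(path, mode) as f:
            for part in r.iter_content(chunk_size=chunk):
                f.write(part)

resume_download("https://zigzag.com/file1.zip", "file1.zip")
```

This also sidesteps the first failure mode above (treating a timeout as end-of-file), because the client decides completeness by comparing what it has against what it asked for rather than trusting the truncated stream.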
So if your market is a global one, there's a chance even a fortune 500 company could struggle to load your product in their HQ because of their terrible internet connection. And I suspect it's probably even worse in some South American/African/Asian countries in the developing world...
Use the software that you make, in the same conditions that your users will use it in.
Most mobile apps are developed by people in offices with massive connections, or home offices with symmetric gigabit fiber or similar. The developers make sure the stuff works and then they're on to the next thing. The first time someone tries to use it on a spotty cellular connection is probably when the first user installs the update.
You don't have to work on a connection like that all the time, but you need to experience your app on that sort of connection, on a regular basis, if you care about your users' experience.
Of course, it's that last part that's the critical missing piece from most app development.
- Mobile first design
- Near unlimited high speed bandwidth
There's never been a case where both are blanket true.
New headline: Betteridge's rule finally defeated. Or is it?
It is telling that tech giants make tools to test their software in poor networking conditions. It may not look like they care, until you try software by those who really don't care.
Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries, and otherwise jank up the UI.
From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path be a websocketed React experience that falls back to an ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
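One way that fallback could look server-side, sketched with Flask: the page reports its own load time (e.g. from the browser's Navigation Timing API) to a small beacon endpoint, and after a few slow loads the session is switched to the lightweight path. The endpoint name, thresholds, and session keys are all invented for illustration:

```python
# Flag sessions that keep reporting slow page loads and serve them the
# lightweight server-rendered variant instead of the full client-side app.
from flask import Flask, request, session, jsonify

app = Flask(__name__)
app.secret_key = "dev-only"          # needed for session cookies

SLOW_MS = 4000
STRIKES_BEFORE_FALLBACK = 3

@app.post("/timing-beacon")
def timing_beacon():
    load_ms = request.get_json(force=True).get("load_ms", 0)
    if load_ms > SLOW_MS:
        session["slow_strikes"] = session.get("slow_strikes", 0) + 1
    return jsonify(light_mode=session.get("slow_strikes", 0) >= STRIKES_BEFORE_FALLBACK)

@app.get("/")
def index():
    if session.get("slow_strikes", 0) >= STRIKES_BEFORE_FALLBACK:
        return "<html><body>lightweight server-rendered page</body></html>"
    return "<html><body>full client-side app shell + script tags</body></html>"
```

Pure server-side timing alone tends to measure your own backend more than the client's link, which is why the sketch leans on a client-reported number.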
Even if I SSR and inline all the packages/content, that overall response could be broken up into multiple TCP packets that could also be dropped (missing parts in the middle of your overall response).
How does using SSR account for this?
I have to deal with this problem when designing TCP/UDP game networking during the streaming of world data. Streaming a bunch of data (~300 KB) is similar to one big SSR render and send: a single IP packet tops out at ~64 KB, and in practice each TCP segment is limited by the path MTU (typically ~1500 bytes), so the payload is split across many packets, any of which can be dropped.
Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
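A generic illustration of that gotcha (not any particular engine's code): TCP is a byte stream, so one big send() arrives as many segments and many partial recv()s, and the usual fix is to frame each message with a length prefix and read until you have exactly that many bytes:

```python
# Length-prefixed framing over a TCP socket: 4-byte big-endian length header,
# then the payload; the reader loops until the full message has arrived.
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf.extend(chunk)
    return bytes(buf)

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```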
FSVO required. Images beyond a few bytes shouldn't be inlined for example since loading them would block the meat of the content after them.
I've been trying to convince them to try Starlink, but they're unwilling to pay for the $500+ equipment costs.
Many people have already said designing for iffy internet helps everyone: this is true for slimming your payload, but not necessarily designing around dropped connections. On a plane or train, you might alternate between no internet and good internet, so you can just retry anything that failed when the connection is back, but a rural connection can be always spotty. And I think the calculus for devs isn't clearly positive when you have to design qualitatively new error handling pathways that many people will never use.
For example, cloning a git repo is non-resumable. Cloning a larger repo can be almost impossible since the probability the connection doesn't drop in the middle falls to zero. The sparse checkout feature has helped a lot here. Cargo also used to be very hard to use on rural internet until sparse registries.
One of my neighbors is apparently using Starlink since I see a Starlink router show up in my Wi-Fi scan.