You're also opening up to more potential customers in rural areas, or areas with poor reception, where internet may exist but may not be consistent or low-latency.
Next que...<loading>
Made me think more about poor & unstable connections when building out new features or updating existing things in the product. Easily parsable loading states, better user-facing alerts about requests timing out, moving calculations/compute client-side where it made sense, etc.
Funny that American civil service is used by militaries on both sides. At least it was used in some role during the war; not sure about now.
A different perspective on this shows up in a recent HN submission, "Start your own Internet Resiliency Club" (https://news.ycombinator.com/item?id=44287395). The author of the article talks about what it would take to have working internet in a warzone where internet communications are targeted.
While we can frame this as whether we should design our digital products to accommodate people with iffy internet, I think seeing this as a resiliency problem that affects our whole civilization is a better perspective. It is no longer about accommodating people who are underserved, but rather: should we really be building for a future where the network is assumed to be always connected? Do we really want our technologies to be concentrated in the hands of the few?
https://www.reddit.com/r/MapPorn/comments/vfwjsc/approximate...
I know a lot of the West has terrible broadband, but a significant majority of that land area is uninhabited federal land: wilderness such as high mountains, desert, etc. Focusing on the West, and on maps that don't treat habitation as an important factor, confuses the issue.
I'd argue it's more of a travesty that actual fiber-optic internet is available at maybe 15% of addresses nationwide than that there are white holes in Eastern Oregon or Northern Nevada. One major reason I believe this is that even at my house, where I have "gigabit" available via DOCSIS, my upload is barely 20 Mbps and I have a bandwidth cap of 1.25 TB a month, which means that if I saturate the connection I can only use it for 2 hours and 46 minutes per month.
If you compare "things that would be possible if everyone had a 500Mbps upload without a bandwidth cap" vs "things I can do on this connection" it's a huge difference.
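As a sanity check on the cap arithmetic above, a quick back-of-the-envelope sketch (assuming the advertised 1 Gbps downlink and the 1.25 TB cap):

```typescript
// Back-of-the-envelope check of the data-cap math above.
const linkBitsPerSec = 1_000_000_000; // advertised "gigabit" downlink
const capBytes = 1.25e12;             // 1.25 TB monthly cap
const capBits = capBytes * 8;

const secondsAtSaturation = capBits / linkBitsPerSec; // 10,000 s
const hours = Math.floor(secondsAtSaturation / 3600);
const minutes = Math.floor((secondsAtSaturation % 3600) / 60);
console.log(`${hours}h ${minutes}m of saturated use per month`); // "2h 46m"
```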
I can think of at least two supermarkets where I have crap internet inside, despite the whole city having decent 5G coverage outside.
One thing that never loads is the shopping app for our local equivalent of Amazon. I'm sure they lost some orders because I was in said supermarkets and couldn't check out the competition's offers. It was minor, cheap-ish stuff or I would have gone looking for better signal, but still: lost orders.
Should you assume all your customers have smartphones? Smartphones with internet connections? Working cameras? Zelle? Venmo? Facebook? WhatsApp? Uncracked screens (for displaying QR codes to be scanned)? The ability to install an app?
I recently bought a snack from a pop-up food vendor who could only accept Venmo, which luckily I have, or exact cash, since he didn't have change. I'm pretty sure he only told me this after he handed me my food. I know lots of people who don't have Venmo—some don't want it because they see it as a security risk, some have had their accounts banned, some just never used it and probably don't want to set it up in a hurry while buying lunch.
I also recently stayed at a rural motel that mentioned in the confirmation email that the front desk isn't staffed 24/7, so if you need to check in after hours, you have to call the on-call attendant. Since cell service is apparently spotty there (though mine worked fine), they included the Wi-Fi password so you could call via Wi-Fi. There were also no room phones, so if the Wi-Fi goes out after hours, guests are potentially incommunicado, which sounds like the basis of a mystery novel.
Seems like a bad idea to me. Even Square with Cash App support would be better.
Translation: shitty servers.
That means that the connection might be fine, but the backend is not.
I need to have a lot of error management in my apps, and try to keep the server interactions to a minimum. This also means that I deal with bad connections fairly well.
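Not the commenter's actual code, but a minimal sketch of the kind of error management that helps here: a fetch wrapper with a per-attempt timeout and exponential backoff (the retry counts and timeouts are arbitrary assumptions):

```typescript
// Hypothetical fetch wrapper: per-attempt timeout plus exponential backoff.
async function fetchWithRetry(
  url: string,
  init: RequestInit = {},
  retries = 3,
  timeoutMs = 5000,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { ...init, signal: controller.signal });
      if (res.ok || attempt >= retries) return res; // surface final failure to caller
      // Non-OK response (e.g. 5xx) with retries left: fall through to backoff.
    } catch (err) {
      if (attempt >= retries) throw err; // timed out / network error, out of retries
    } finally {
      clearTimeout(timer);
    }
    await new Promise((r) => setTimeout(r, 2 ** attempt * 500)); // 0.5s, 1s, 2s...
  }
}
```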
And no, geography is not an excuse. Neighboring Austria has same geography as Bavaria, and yet it is immediately noticeable when exactly you have passed the border by the cellphone signal indicator going up to full five bars. And neither is money an excuse, Romania - one of the piss poorest countries of Europe - has 5G built out enough to watch youtube in 4k on a train moving 15 km/h with open doors to Sannicolao Mare.
The issue is braindead politics ("we don't need 5G at any remote milk jug") and too much tolerance for plain and simple retarded people who think that all radiation is evil.
Congested and/or weak wifi and cell service are what "iffy" is about. Will a page _eventually_ load if I wait long enough? Or are there 10 sequential requests, 100 KB each, that all have to succeed just to show me 2 sentences of text?
Dropped packets? Throttling? Jitter?
I am trying to figure out if there are good testing suites for this, or if it is something I need to set up manually.
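One option that exists today: DevTools-style throttling driven from a test runner, e.g. with Puppeteer. Note this only shapes bandwidth and latency; for packet loss and jitter you need OS-level tools such as Linux's tc netem or macOS's Network Link Conditioner. A sketch, where the numbers are arbitrary "bad 3G" assumptions:

```typescript
// Throttle bandwidth/latency in automated tests with Puppeteer.
// DevTools-style throttling covers bandwidth and latency only; for
// packet loss and jitter, use OS-level tools (e.g. Linux's `tc netem`).
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Values are assumptions, roughly "bad 3G": bytes/sec and ms.
await page.emulateNetworkConditions({
  download: (400 * 1024) / 8, // ~400 kbit/s down
  upload: (200 * 1024) / 8,   // ~200 kbit/s up
  latency: 1500,              // 1.5 s round trip
});

await page.goto('https://example.com'); // watch what your page does now
await browser.close();
```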
HOWEVER, the main problem (apart from just not having service) is congestion. There is a complete shortage of low-band spectrum, at ~600-900 MHz, which penetrates walls well in (sub)urban areas. Most networks have managed to add enough capacity in the mid/upper bands to avoid horrendous congestion, but (e.g.) 3.5 GHz does not penetrate buildings well at all.
This means it is very common to walk into a building and drop from 100+ Mbps speeds on 5G down to 5G on 700 MHz, which is so congested that you are looking at 500 kbit/s on a good day.
Annoyingly, phone OSes haven't got with the times yet: they just display signal quality for the bars, which will usually be excellent. They really need a congestion indicator as well (it could be based, for example, on how long your device waits for a transmission slot).
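Until the OSes expose something like that, a crude client-side approximation is to probe round-trip time yourself rather than trusting the bars. A hypothetical sketch (the endpoint, sample count, and thresholds are all assumptions; the probe URL should be a small endpoint you control):

```typescript
// Hypothetical "congestion meter": bars can look great while the cell is
// saturated, so probe actual round-trip time to a small known endpoint.
async function probeRtt(url: string, samples = 5): Promise<number> {
  const rtts: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    try {
      // cache: 'no-store' so we measure the network, not the HTTP cache
      await fetch(url, { method: 'HEAD', cache: 'no-store' });
      rtts.push(performance.now() - start);
    } catch {
      rtts.push(Infinity); // a lost probe is the strongest congestion signal
    }
  }
  rtts.sort((a, b) => a - b);
  return rtts[Math.floor(rtts.length / 2)]; // median RTT in ms
}

// e.g. a median over ~1000 ms with full bars suggests congestion, not weak signal
```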
Systems hardened against authoritarianism are a great thing. Even the Taliban have mobile coverage in Kabul, and thus every woman forced under the chador, holding on to a phone, has a connection to the world in her hand. Harden humanity against the worst of its sides in economic decline. I dream of a math proof coming out of some Kabul favela.
One of our biggest sticking points when new forms of multifactor came around was that it could sometimes take longer than a minute to deliver a push notification or text message, even in areas that are solid red on Verizon's coverage map.
> This is likely worse for B2C software than B2B.
These are regional retail banks that all use the same LOB software. Despite the product being sold mainly to banks, which famously have branches, the developer never realized that there could be more than a millisecond between a client and a server. The reason they have VDI is so their desktop environment is in the same datacenter as their app server. It's a fucking CRUD app and the server has to deal with maybe a couple hundred transactions per hour.
I think this is pretty typical for B2B. You don't buy software because it is good. You buy software because the managers holding the purse strings like the pretty reports it makes, and they are willing to put up with A LOT of bullshit to keep getting them.
There are lots of cases for sending MORE data on "iffy" internet connections.
One of our websites is a real-estate for-sale browsing site (think Zillow). It works great from home or the office, but if you are actively out seeing properties it can be really frustrating when internet comes and goes, and any interaction with the page can take 10-60 seconds to refresh because of latency and packet loss.
A few months ago I vibe-coded a prototype that would cache everything locally, use the cached versions primarily, and update the cache in the background. Using developer tools to simulate bad networking, it was a night-and-day experience, largely because I would prefetch the first photo of every property, as well as details about the first few hundred properties that matched your search criteria.
"Bloat" when used intelligently, isn't so bad. ;-)
I hope people making apps to unlock cars or other critical things that you might need at 1am on a road trip in the middle of nowhere don’t have this attitude of “everyone has reliable internet these days!”
Concrete example: I made an app for Prospect Park in Brooklyn that had various features that were meant to be accessed while in the park which had (has?) very spotty cell service, so it was designed to sync and store locally using an eventually consistent DB, even with things that needed to be uploaded.
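Not the actual Prospect Park implementation, but the general shape of "store locally, upload eventually" is an outbox: queue writes on-device and flush them whenever the network cooperates. A minimal sketch using localStorage (a real app would use IndexedDB; all names here are made up):

```typescript
// Hypothetical "outbox" pattern for eventual consistency: writes are queued
// locally and flushed whenever the network cooperates.
type Pending = { url: string; body: string };

const OUTBOX_KEY = 'outbox'; // assumed localStorage key

function queueWrite(url: string, data: unknown): void {
  const outbox: Pending[] = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? '[]');
  outbox.push({ url, body: JSON.stringify(data) });
  localStorage.setItem(OUTBOX_KEY, JSON.stringify(outbox));
  void flushOutbox(); // try immediately; harmless if offline
}

async function flushOutbox(): Promise<void> {
  const outbox: Pending[] = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? '[]');
  while (outbox.length > 0) {
    try {
      await fetch(outbox[0].url, { method: 'POST', body: outbox[0].body });
      outbox.shift(); // delivered: drop it and persist progress
      localStorage.setItem(OUTBOX_KEY, JSON.stringify(outbox));
    } catch {
      return; // still offline; retry on the next 'online' event
    }
  }
}

window.addEventListener('online', () => void flushOutbox());
```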
A 10 MB download over 3G is fine if you can actually start it. But when the page needs 15 round trips before first render, you're already losing the user.
We started simulating 1500ms+ RTT and packet loss by default on staging. That changed everything. Suddenly we saw how spinners made things worse, how caching saved entire flows, and how doing SSR with stale-while-revalidate wasn’t just optimization anymore. It was the only way things worked.
If your app can work on a moving train in Bangladesh, then it's gonna feel instant in SF.
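For the SSR plus stale-while-revalidate combination mentioned above, the HTTP-level version can be as small as one header, letting a CDN serve stale server-rendered HTML while it refetches in the background. A hedged sketch with Express (the route, directive values, and renderPage helper are all hypothetical):

```typescript
// HTTP-level stale-while-revalidate: a CDN may serve the stale SSR'd HTML
// while it revalidates in the background, so slow links still get HTML fast.
import express from 'express';

const app = express();

app.get('/listing/:id', (req, res) => {
  // Fresh for 60 s; after that, stale copies may be served for up to
  // 10 more minutes while the CDN refetches behind the scenes.
  res.set('Cache-Control', 'public, max-age=60, stale-while-revalidate=600');
  res.send(renderPage(req.params.id)); // renderPage: hypothetical SSR function
});

function renderPage(id: string): string {
  return `<html><body>Listing ${id}</body></html>`; // placeholder markup
}

app.listen(3000);
```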