> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!
0-RTT works after the first handshake, but enabling it opens the door to some forms of replay attack, so it may not be something you want for anything hosting an API unless you've designed your API around it.
There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.
But yes, ensure that you're serving the entire chain, but keep the chain as short as possible.
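If you want to check what your server actually sends, openssl can show both the served chain and the handshake size. A rough sketch (example.com is a placeholder; the exact output wording varies by OpenSSL version):

    # Count the certificates the server actually serves in its chain
    openssl s_client -connect example.com:443 -showcerts </dev/null 2>/dev/null | grep -c 'BEGIN CERTIFICATE'

    # Total bytes exchanged during the TLS handshake, chain included
    openssl s_client -connect example.com:443 </dev/null 2>/dev/null | grep 'handshake has read'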
If I’m selling to cash cows in America or Europe it’s not an issue at all.
As long as 90% of your users have >10 Mbps download, I think it's better to think about making money. Besides, if you don't know that lazy loading exists in 2025, fire yourself lol.
https://www.mcmaster.com/ was found last year to be doing some real magic to make it load as fast as possible, even on the crappiest computers.
Basically, it looks like someone deliberately did many things right, without being lazy or cheap, to create a performant website.
- rural location
- roommate or sibling torrenting the shared connection into the ground
- driving around on a road with spotty coverage
- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)
Ahh, if only. Have you seen applications developed by large corporations lately? :)
:)
I googled the names of the people giving the talk and they're both employed by Microsoft as software engineers, so I don't see any reason to doubt what they're presenting. The whole Start menu isn't React Native, but parts of it are.
https://news.ycombinator.com/item?id=44124688#:~:text=Just%2...
The notion was popularized as an explanation for a CPU core spiking whenever the Start menu opens on Win11.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter, and all other things being equal, a fast product is more pleasurable to use than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug impacts everything we do, and ignoring both for decades is how you get the rivers of garbage we're swimming in.
VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
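If you want to see where the weight actually comes from, docker can break an image down layer by layer. A quick sketch (myapp:latest and the tags are placeholders):

    # Layer-by-layer size breakdown; shows which build steps add the bulk
    docker history myapp:latest

    # Compare your base image against the slim options mentioned above
    docker image ls debian:bookworm-slim
    docker image ls alpine:3.20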
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
This is why almost all applications and websites are slow and terrible these days.
SPAs are really bad for mostly static websites. News sites, documentation, blogs.
Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.
People like responsive applications - you can’t tell me you’ve never seen a non-tech person frustratingly tapping their screen repeatedly because something is slow.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than lack of overoptimization.
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reduced energy consumption to be mentioned.
I'm honestly just really annoyed about this "society and environment" spin on advice that has an otherwise niche, but perfectly valid, reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
Now, it is true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring, and his decision was instrumental in my deciding to never upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.
For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....
And no, a million small sites won't "become a trend in society".
If we really want to fix the places with bigger impact, we need to change this approach in the first place.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour, so you'd have a hard time even measuring the impact, whereas simply NOT ordering pizza and having a home-made salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
A million determined voters can easily force laws to be made which force YouTube to be more efficient.
I often think about how orthodox we humans all are. We never think about different paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s-1970s era of assassinations has truly come and gone.
Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
Sorry this got long. Cheers
[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd
[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...
[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per unit of time than Netflix or YouTube. Of course, there's a lot of embedded video in ads, and that could maybe count as streaming video.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact, video streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
2. Everything in our world is dwarfs standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to see this pattern.
HTML/CSS/JS is the only fully open stack for building cross-platform application interfaces: free as in beer, not owned by a single entity, standardized by multinational standards bodies, and excellent at the job. Especially with Electron, you can build native apps with HTML/CSS/JS.
There are actual web apps being built, not "websites". Web apps are not HTML with some jQuery sprinkled around; they are genuinely heavy applications.
I'll have to speculate about what you mean:
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch, you also have to re-implement stuff like selecting and copying text, which is possible but not practical.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
Like how industrial manufacturers are the biggest carbon emitters, and compared to them, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort or an irrelevant optimisation.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their life. Not too long ago my neighbor was bragging about how effective all the money he spent on energy-efficient windows, insulation, etc. was: he saved loads of money that winter. Yet his heating bill was still nearly three times what mine was, despite him using a wood stove to offset it, my house being almost the same size, barely insulated, and having 70-year-old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1,000,000 to 3,000,000 times to create the same energy demand as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000 servings.
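A quick sanity check of that arithmetic, using the figures above (2-6 kWh per burger, ~0.000002 kWh per 14 kB page, and roughly 1000x that for a 14 MB page):

    awk 'BEGIN { printf "%.0f to %.0f\n", 2/0.000002, 6/0.000002 }'   # 14 kB page: 1000000 to 3000000 servings per burger
    awk 'BEGIN { printf "%.0f to %.0f\n", 2/0.002, 6/0.002 }'         # 14 MB page: 1000 to 3000 servings per burger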
I think those are pretty good ways to use the energy.
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die; reducing the size of useless content on websites doesn't really hurt anyone.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
Now, if McDonald's padded the 5 kB of calories in a cheeseburger with 10,000 kilobytes of calories in wasted food, like news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for 100 grams of usable beef.
It's just an inconvenient truth for people who only care about the environmental impact of things that don't require a behavior change on their part. And that reveals an insincere, performative, scoldy aspect of their position.
What benefit does an individual get from downloading tens of megabytes of useless data to get ~5 kB of useful data in an article? It wastes download time, bandwidth, users' time (having to close the autoplaying ad), power/battery, etc.
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks involved in the connection. Did you take that into account?
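For a rough feel of the path length to any given site, a plain traceroute shows the hops (example.com is a placeholder; some routers won't respond and show up as "* * *"):

    traceroute example.com    # one line per router/network hop on the way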
(In addition to what justmarc said about accounting for the whole network. Plus, between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if the motivation is purely environmental impact.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.
Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism
> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.
Strawmanning. https://en.m.wikipedia.org/wiki/Straw_man
Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.
(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).
But
I'm afraid you've missed both my small point and my wider point.
My small point was to argue against the parent's comment that
>> reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future
I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.
The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.
..."required".
That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.
Point is that energy consumption and broad environmental impact has to be a constraint in how we design our systems (and businesses).
I stand by my accusations of whataboutism and strawmanning, though.
That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also, the 14 kB size is less than 1% of the current average mobile website payload.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
I know it's not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1 s. Nothing more, nothing less.
But this sort of goes against my no / minimal JS front end rendering philosophy.
ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
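For what it's worth, a quick way to confirm the change took (same <gw>/<if> placeholders as above):

    ip route show default    # should now list initcwnd 20 initrwnd 20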
A web search suggests CDNs are now at 30 packets for the initial window, so you get ~45 kB there, which would be kinda annoying on a slow connection.
Either you'd have buffer issues or dropped packets.
But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.
Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.
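Enabling BBR on Linux is typically just a couple of sysctls. A sketch, assuming a reasonably recent kernel with the tcp_bbr module available (fq pacing is commonly paired with it):

    # as root
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    sysctl net.ipv4.tcp_available_congestion_control    # check that bbr is listed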
Any reference for this?
Using my own server software I was able to produce a complex single page app that resembled an operating system graphical user interface and achieve full state restoration as fast as 80ms from localhost page request according to the Chrome performance tab.
You are correct in that TCP packets are processed within the kernel of modern operating systems.
Edit for clarity:
This is a web server only algorithm. It is not associated with any other kind of TCP traffic. It seems from the down votes that some people found this challenging.
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
All the Tailwind building and so on is done using common JS tools, which are mostly standard out of the box Rails 8 supplied scripts!
Sprockets used to do the SASS compilation and asset bundling, but the Rails standard now is to facilitate your own preferences around compilation of CSS/JS.
It was more a quick promote-Rails comment, as Rails can get dismissed as not something to build fast websites in :-)
Would love it if someone kept a list.
Hopefully you'll find some of them aesthetically pleasing
/              2.7 kB
main.css       2.5 kB
favicon.png    1.8 kB
---------------------
Total          7.0 kB
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB:

katex.min.css                 23.6 kB
katex.min.js                 277.0 kB
auto-render.min.js             3.7 kB
KaTeX_Main-Regular.woff2      26.5 kB
KaTeX_Main-Italic.woff2       16.7 kB
-------------------------------------
Total Additional             347.5 kB
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.

You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
I would love to use MathML, not directly, but automatically generated from LaTeX, since I find LaTeX much easier to work with than MathML. I mean, while I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me), than write MathML (which often tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, both in terms of aesthetics as well as in terms of accuracy.
For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals actually appear in print.
Even if I decide to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server side and send that to the client, and that's what I meant by server-side rendering in my earlier comment.
“MathML for {very rough textual form of the equation}” seems to give a 100% hit rate for me. Even when I want some formatting change, I can ask the LLM, and that pretty much always has a solution (MathML can render symbols and subscripts in numerous ways, but the syntax is deep). It'll even add the CSS needed to change it up in some way if asked.
Why can't this be precomputed into html and css?
It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.
While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't have hesitated to add this extra tooling on any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
Interesting to hear that QUIC does away with the 3WHS - it always catches people by surprise that it takes at least 4 x latency to get some data on a new TCP connection. :)
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
1. There is math for how long it takes to send even one packet over a satellite connection (~1600 ms). It's a weak argument for the 14 kB rule, since there is no comparison with a larger website; 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on the webpage are included in this 14 kB rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
I get this all the time at my job, when I recommend a team do something differently in their schema or queries: “do we have any examples of teams currently doing this?” No, because no one has ever cared to try. I understand not wanting to be guinea pigs, but you have a domain expert asking you to do something, and telling you that they’ll back you up on the decision, and help you implement it. What more do you want?!
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
The HTTPS negotiation is going to consume the initial round trips, which should already start increasing the size of the window.
Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congesting
There’s also a question as to how relevant the 14kb rule has ever been… HTML renders progressively so as long as there’s some meaningful content in the early packets then overall size is less important
The quality of the connection is so much better, and as you can get a Starlink Mini with a 50 GB plan for very little money, it's already in the zone where just one worker could grab his own and bring it onto the rig to use in his free time and to share.
Starlink terminals aren't "infrastructure". Campers often toss one on their roof without even leaving the vehicle. Easier than moving a chair. So, as I said, the geostationary legacy system immediately becomes entirely obsolete other than for redundancy, and is kinda irrelevant for uses like browsing the web.
I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!
The modern web crossed the Rubicon on 14 kB websites a long time ago.
What are you doing with the extra 500kB for me, the user?
> 90% of the time I'm interested in text. For most of the remainder, vector graphics would suffice.
14 kB is a lot of text and graphics for a page. What is the other 500 for?
It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.
Doesn't this sort of undo the entire point of the article?
If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.
So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.
The advice might be useful again if you use QUIC/HTTP3, because QUIC folds TLS into its own transport handshake instead of layering it on top of TCP. But then, you'd have to look up first how congestion control and bandwidth estimation work in HTTP/3 and whether 14 kB is still the right threshold.
A 14kb page can load much faster than a 15kb page - https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343 comments)
How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.
There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.
https://electrek.co/2023/07/11/europe-passes-two-big-laws-to...