Nobody knows how to build with AI yet

https://worksonmymachine.substack.com/p/nobody-knows-how-to-build-with-ai
52•Stwerner•48m ago•20 comments

Known Bad Email Clients

https://www.emailprivacytester.com/badClients
20•mike-cardwell•34m ago•12 comments

Linux and Secure Boot certificate expiration

https://lwn.net/SubscriberLink/1029767/43b62a7a7408c2a9/
91•todsacerdoti•8h ago•40 comments

My Self-Hosting Setup

https://codecaptured.com/blog/my-ultimate-self-hosting-setup/
409•mirdaki•13h ago•150 comments

Fstrings.wtf

https://fstrings.wtf/
250•darkamaul•5h ago•69 comments

Hyatt Hotels are using algorithmic Rest “smoking detectors”

https://twitter.com/_ZachGriff/status/1945959030851035223
388•RebeccaTheDev•12h ago•216 comments

Babies made using three people's DNA are born free of mitochondrial disease

https://www.bbc.com/news/articles/cn8179z199vo
95•1659447091•2d ago•48 comments

Valve confirms credit card companies pressured it to delist certain adult games

https://www.pcgamer.com/software/platforms/valve-confirms-credit-card-companies-pressured-it-to-delist-certain-adult-games-from-steam/
728•freedomben•1d ago•706 comments

OpenAI claims Gold-medal performance at IMO 2025

https://twitter.com/alexwei_/status/1946477742855532918
162•Davidzheng•7h ago•244 comments

Pimping My Casio: Part Deux

https://blog.jgc.org/2025/07/pimping-my-casio-part-deux.html
111•r4um•8h ago•30 comments

A 14kb page can load much faster than a 15kb page (2022)

https://endtimes.dev/why-your-website-should-be-under-14kb-in-size/
336•truxs•8h ago•229 comments

Piramidal (YC W24) Is Hiring a Full Stack Engineer

https://www.ycombinator.com/companies/piramidal/jobs/JfeI3uE-full-stack-engineer
1•dsacellarius•4h ago

I avoid using LLMs as a publisher and writer

https://lifehacky.net/prompt-0b953c089b44
132•tombarys•5h ago•80 comments

What is the richest country in 2025?

https://www.economist.com/graphic-detail/2025/07/18/what-is-the-richest-country-in-the-world-in-2025
14•RestlessMind•47m ago•5 comments

YouTube No Translation

https://addons.mozilla.org/en-US/firefox/addon/youtube-no-translation/
118•thefox•8h ago•56 comments

Advertising without signal: The rise of the grifter equilibrium

https://www.gojiberries.io/advertising-without-signal-whe-amazon-ads-confuse-more-than-they-clarify/
133•neehao•14h ago•57 comments

How to write Rust in the Linux kernel: part 3

https://lwn.net/SubscriberLink/1026694/3413f4b43c862629/
230•chmaynard•18h ago•17 comments

Asynchrony is not concurrency

https://kristoff.it/blog/asynchrony-is-not-concurrency/
276•kristoff_it•21h ago•196 comments

Meta says it won’t sign Europe AI agreement, calling it an overreach

https://www.cnbc.com/2025/07/18/meta-europe-ai-code.html
289•rntn•22h ago•388 comments

Astronomers use colors of trans-Neptunian objects to track ancient stellar flyby

https://phys.org/news/2025-07-astronomers-trans-neptunian-track-ancient.html
12•bikenaga•3d ago•4 comments

N78 band 5G NR recordings

https://destevez.net/2025/07/n78-band-5g-nr-recordings/
13•Nokinside•2d ago•0 comments

A CarFax for Used PCs: Hewlett Packard wants to give old laptops new life

https://spectrum.ieee.org/carfax-used-pcs
22•miles•3d ago•20 comments

Debcraft – Easiest way to modify and build Debian packages

https://optimizedbyotto.com/post/debcraft-easy-debian-packaging/
70•pabs3•16h ago•22 comments

An exponential improvement for Ramsey lower bounds

https://arxiv.org/abs/2507.12926
18•IdealeZahlen•6h ago•1 comments

Zig Interface Revisited

https://williamw520.github.io/2025/07/13/zig-interface-revisited.html
10•ww520•2d ago•1 comments

Mr Browser – Macintosh Repository file downloader that runs directly on 68k Macs

https://www.macintoshrepository.org/44146-mr-browser
80•zdw•16h ago•18 comments

Bun adds pnpm-style isolated installation mode

https://github.com/oven-sh/bun/pull/20440
97•nateb2022•15h ago•15 comments

Broadcom to discontinue free Bitnami Helm charts

https://github.com/bitnami/charts/issues/35164
202•mmoogle•21h ago•108 comments

Silence Is a Commons by Ivan Illich (1983)

http://www.davidtinapple.com/illich/1983_silence_commons.html
178•entaloneralie•19h ago•45 comments

Zig's New Writer

https://www.openmymind.net/Zigs-New-Writer/
90•Bogdanp•2d ago•13 comments

A 14kb page can load much faster than a 15kb page (2022)

https://endtimes.dev/why-your-website-should-be-under-14kb-in-size/
335•truxs•8h ago

Comments

palata•7h ago
Fortunately, most websites include megabytes of bullshit, so it's not remotely a concern for them :D.
Hamuko•7h ago
I recently used an electric car charger where the charger is controlled by a mobile app that's basically a thin wrapper over a website. Unfortunately I only had a 0.25 Mb/s Internet plan at the time and it took me several minutes just staring at the splash screen as it was downloading JavaScript and other assets. Even when I got it to load, it hadn't managed to download all fonts. Truly an eye-opening experience.
fouronnes3•7h ago
Why can't we just pay with a payment card at electric chargers? Drives me insane.
Hamuko•7h ago
These chargers have an RFID tag too, but I'd forgotten it in my jacket, so it was mobile app for me.

There are some chargers that take card payments though. My local IKEA has some. There's also EU legislation to mandate payment card support.

https://electrek.co/2023/07/11/europe-passes-two-big-laws-to...

DuncanCoffee•6h ago
It wasn't required by law, and the OCPP charging protocol, used to manage charge sessions at a high level between the charger and the service provider (not the vehicle), did not include payment management. Everybody just found it easier to manage payments using apps and credits. But I think Europe is going to make it mandatory soon(ish)
zevv•7h ago
And now try to load the same website over HTTPS
xrisk•7h ago
Yeah I think this computation doesn’t work anymore once you factor in the tls handshake.
aziaziazi•7h ago
From TFA:

> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!

supermatt•7h ago
This hasn’t been the case since TLS1.3 (over 5 years ago) which reduced it to 1-RTT - or 0-RTT when keys are known (cached or preshared). Same with QUIC.
aziaziazi•6h ago
Good to know, however "when the keys are known" refers to a second visit (or request) of the site, right? That isn’t helpful for the first data packets - at least that’s what I understand from the site.
jeroenhd•4h ago
Without cached data from a previous visit, 1-RTT mode works even if you've never visited the site before (https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/#1-rtt-mode). It can fall back to 2-RTT if something funky happens, but that shouldn't happen in most cases.

0-RTT works after the first handshake, but enabling it allows for some forms of replay attacks so that may not be something you want to use for anything hosting an API unless you've designed your API around it.
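For illustration, one way to design an API around 0-RTT (a sketch, not from the article; it assumes a Node/Express origin behind a TLS-terminating proxy that forwards the RFC 8470 "Early-Data: 1" header, for example via nginx's $ssl_early_data variable) is to answer non-idempotent early-data requests with 425 Too Early so the client retries them after the handshake completes:

    // Hypothetical Express middleware; "Early-Data: 1" is assumed to be set by the proxy.
    import express from "express";
    const app = express();

    app.use((req, res, next) => {
      const sentAsEarlyData = req.headers["early-data"] === "1";
      const idempotent = ["GET", "HEAD", "OPTIONS"].includes(req.method);
      if (sentAsEarlyData && !idempotent) {
        // RFC 8470: the client should retry the request once the full handshake is done.
        return res.status(425).send("Too Early");
      }
      next();
    });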

mrweasel•6h ago
I know some people who are experimenting with using shorter certificates, i.e. shorter certificate chains, to reduce traffic. If you're a large enough site, then you can save a ton of traffic every day.
tech2•6h ago
Please though, for the love of dog, have your site serve a complete chain and don't have the browser or software stack do AIA chasing.
jeroenhd•4h ago
With half of the web using Let's Encrypt certificates, I think it's pretty safe to assume the intermediates are in most users' caches. If you get charged out the ass for network bandwidth (i.e. you use Amazon/GCP/Azure) then you may be able to get away with shortened chains as long as you use a common CA setup. It's a hell of a footgun and will be a massive pain to debug, but it's possible as a traffic shaving measure if you don't care about serving clients that have just installed a new copy of their OS.

There are other ways you can try to optimise the certificate chain, though. For instance, you can pick a CA that uses ECC rather than RSA to make use of the much shorter key sizes. Entrust has one, I believe. Even if the root CA has an RSA key, they may still have ECC intermediates you can use.

mrweasel•2h ago
That is a really good point. Google's certificate service can issue a certificate signed directly by Google, but not even Google themselves are using it. They use the one that's cross-signed by GlobalSign (I think).

But yes, ensure that you're serving the entire chain, but keep the chain as short as possible.

moomoo11•7h ago
I’d care about this if I was selling in India or Africa.

If I’m selling to cash cows in America or Europe it’s not an issue at all.

As long as you have >10mbps download across 90% of users I think it’s better to think about making money. Besides if you don’t know that lazy loading exists in 2025 fire yourself lol.

jofzar•7h ago
It really depends on who your clients are and where they are.

https://www.mcmaster.com/ was found last year to be doing some real magic to make it load literally as fast as possible for the crappiest computers possible.

kosolam•7h ago
The site is very fast indeed
actionfromafar•7h ago
I want to buy fasteners now.
kosolam•7h ago
Fasterners, as fast as possible
A_D_E_P_T•7h ago
Do you have any idea what they actually did? It would be interesting to study. That site really is blazing fast.
gbuk2013•6h ago
Quick look: GSLB (via Akamai) for low latency, tricks like using CSS sprites to serve a single image in place of 20 or so for fewer round-trips, heavy use of caching, possibly some service worker magic but I didn't dig that far. :)

Basically, looks like someone deliberately did many right things without being lazy or cheap to create a performant web site.

_nivlac_•4h ago
I am SO glad jofzar posted this - I remember this website but couldn't recall the company name. Here's a good video on how the site is so fast, from a frontend perspective:

https://youtu.be/-Ln-8QM8KhQ

theandrewbailey•2h ago
I was intrigued that they request pages in the background on mouse-over, then swap on click. I decided to do likewise on my blog, since my pages are about a dozen kb of HTML, and I aggressively cache things.
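For anyone curious what that looks like in practice, here is a rough sketch (my own illustration, not McMaster-Carr's or the commenter's actual code) of hover-prefetch plus click-swap for same-origin links:

    // Start fetching a page when a same-origin link is hovered; swap it in on click.
    const pageCache = new Map<string, Promise<string>>();

    function prefetch(url: string): Promise<string> {
      if (!pageCache.has(url)) {
        pageCache.set(url, fetch(url).then((r) => r.text()));
      }
      return pageCache.get(url)!;
    }

    function linkFrom(target: EventTarget | null): HTMLAnchorElement | null {
      return target instanceof Element
        ? target.closest<HTMLAnchorElement>("a[href^='/']")
        : null;
    }

    document.addEventListener("mouseover", (e) => {
      const link = linkFrom(e.target);
      if (link) prefetch(link.href);
    });

    document.addEventListener("click", async (e) => {
      const link = linkFrom(e.target);
      if (!link) return;
      e.preventDefault();
      const html = await prefetch(link.href);
      const doc = new DOMParser().parseFromString(html, "text/html");
      // A real implementation would also handle <head>, history/back, and scroll position.
      document.body.replaceWith(doc.body);
      history.pushState({}, "", link.href);
    });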
xrisk•7h ago
I think you’d be surprised to learn that Indian mobile speeds are pretty fast, at 133 mbps median. Ranked 26th in the world (https://www.speedtest.net/global-index#mobile).
hyperbrainer•7h ago
And in the last few years, access has grown tremendously, a big part of which has been Jio's aggressive push with ultra-cheap plans.
flohofwoe•6h ago
I wouldn't be surprised if many '3rd world' countries have better average internet speeds than some developed countries by leapfrogging older 'good enough' tech that's still dominating in the developed countries, e.g. I've been on a 16 MBit connection in Germany for a long time simply because it was mostly good enough for my internet consumption. One day my internet provider 'forcefully' upgraded me to 50 MBit because they didn't support 16 MBit anymore ;)
mrweasel•6h ago
For the longest time I tried arguing with my ISP that I only needed around 20Mbit. They did have a 50Mbit plan at the time, but the price difference between 50, 100 and 250 meant that you basically got ripped off for anything but the 100Mbit. It's the same now: I can get 300Mbit, but the price difference between 300 and 500 is too small to be viewed as an actual saving; similarly, you can get 1000Mbit, but I don't need it and the price difference is too high.
mrweasel•6h ago
Hope you're not selling to the rural US then.
masklinn•6h ago
There's plenty of opportunities to have slow internet (and especially long roundtrips) in developed countries e.g.

- rural location

- roommate or sibling torrent-ing the shared connection into the ground

- driving around on a road with spotty coverage

- places with poor cellular coverage (some building styles are absolutely hell on cellular as well)

paales2•7h ago
Or maybe we shouldn’t. A good experience doesn’t have to load in under 50ms; it is fine for it to take a second. 5G is common and people with slower connections accept longer waiting times. Optimizing is good but fixating isn’t.
9dev•7h ago
The overlap of people that don’t know what TCP Slow Start is and those that should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed on that level will have a team of experienced SREs that know over which detail to obsess.
andrepd•7h ago
> a corporation large enough will have a team of experienced SREs that know over which detail to obsess.

Ahh, if only. Have you seen applications developed by large corporations lately? :)

achenet•6h ago
a corporation large enough to have a team of experienced SREs that know which details to obsess over will also have enough promotion-hungry POs and middle managers to tell them devs to add 50MB of ads and trackers in the web page. Maybe another 100MB for an LLM wrapper too.

:)

elmigranto•7h ago
Right. That’s why all the software from, say, Microsoft works flawlessly and at peak efficiency.
9dev•7h ago
That’s not what I said. Only that the responsible engineers know which tradeoffs they make, and are competent enough to do so.
samrus•6h ago
The decision to use React for the start menu wasn't out of competency. The guy said on Twitter that that's what he knew, so he used it [1]. Didn't think twice. Head empty, no thoughts

1 https://x.com/philtrem22/status/1927161666732523596

ldjb•5h ago
Please do share any evidence to the contrary, but it seems that the Tweet is not serious and is not from someone who worked on the Start Menu.
bool3max•5h ago
No way people on HN are falling for bait Tweets. We're cooked
mort96•1h ago
I found this: https://www.youtube.com/watch?v=kMJNEFHj8b8&t=287s

I googled the names of the people holding the talk and they're both employed by Microsoft as software engineers, so I don't see any reason to doubt what they're presenting. The whole start menu isn't React Native, but parts of it are.

fsh•5h ago
It is indeed an impressive feat of engineering to make the start menu take several seconds to launch in the age of 5 GHz many-core CPUs, unlimited RAM, and multi-GByte/s SSDs. As an added bonus, I now have to re-boot every couple of days or the search function stops working completely.
the_real_cher•5h ago
Fair warning, X has more trolls than 4chan.
Henchman21•1h ago
Please, it has more trolls than Middle Earth
9dev•5h ago
That tweet is fake, and as repeatedly stated by Microsoft engineers, the start menu is of course written in C#; the only part using React Native is a promotion widget within the start menu. While even that is a strange move, all the rest is just FUD spread via social media.
hombre_fatal•3h ago
"Hi it's the guy who did <thing everyone hates>" is a Twitter meme.
mnw21cam•6h ago
Hahaha. Keep digging.
SXX•6h ago
This. It's exactly why Microsoft use modern frameworks such as React Native for their Start Menu used by billions of people every day.
Nab443•4h ago
And probably the reason why I have to restart it at least twice a week.
chamomeal•4h ago
Wait… please please tell me this is a weirdly specific joke
kevindamm•3h ago
Only certain live portions of it, and calling it React is a stretch but not entirely wrong:

https://news.ycombinator.com/item?id=44124688#:~:text=Just%2...

the notion was popularized as an explanation for a CPU core spiking whenever the start menu opens on Win11

nasso_dev•6h ago
I agree, it feels like it should be how you describe it.

But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.

austin-cheney•5h ago
I don’t see what size of corporation has to do with performance or optimization. Almost never do I see larger businesses doing anything to execute more quickly online.
zelphirkalt•4h ago
Too many cooks spoil the broth. If you've got multiple people pushing an agenda to use their favorite new JS framework, disregarding simplicity in order to chase some imaginary goal or hip thing to bolster their CV, it's not gonna end well.
anymouse123456•4h ago
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.

Performance matters.

We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.

Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.

Thankfully some people like the folks at Figma took the risk and proved the point.

Even if we're innovating on hard technical problems (which most of us are not), performance still matters.

zelphirkalt•4h ago
Performance matters, but at least initially only as far as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper modern latest framework optimization journey websites. You gotta maintain that shit. And you are making sacrifices elsewhere, in the areas of accessibility and possibly privacy and possibly ethics.

So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.

sgarland•1h ago
> way too complicated for what they do

Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.

anymouse123456•46m ago
This kind of thinking is exactly the problem.

Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.

IME, making things fast almost always also makes them simpler and easier to understand.

Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.

It's not a trade-off, it's valuable all the way down.

Treating high performance as a feature and low performance as a bug impacts everything we do and ignoring them for decades is how you get the rivers of garbage we're swimming in.

mr_toad•4h ago
Containers were invented because VMs were too slow to cold start and used too much memory. Their whole raison d'être is performance.
anonymars•3h ago
Yeah, I think Electron would be the poster child
bobmcnamara•1h ago
Can you live fork containers like you can VMs?

VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.

mort96•1h ago
I can't say I've ever cared about live forking a container (or VM, for that matter)
marcosdumay•32m ago
You mean creating a different container that is exactly equal to the previous one?

It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.

anymouse123456•53m ago
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.

If you're running infra at Google, of course containers and orchestration make sense.

If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.

The contexts in which they are appropriate and actually improve anything at all are vanishingly small.

01HNNWZ0MV43FF•2h ago
Docker good actually
anymouse123456•41m ago
nah - we'll look back on Docker the same way many of are glaring at our own sins with OO these days.
sgarland•1h ago
Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).

Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.

Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.

jeroenhd•4h ago
When your approach is "I don't care because I have more important things to focus on", you never care. There's always something you can do that's more important to a company than optimising the page load to align with the TCP window size used to access your server.

This is why almost all applications and websites are slow and terrible these days.
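For reference, the roughly 14 kB threshold in the title falls out of common defaults (assuming RFC 6928's initial congestion window of 10 segments and a ~1460-byte MSS on a 1500-byte MTU path):

    // Rough budget for what a server can send before waiting on the first ACK.
    const initialCongestionWindowSegments = 10; // RFC 6928 default
    const mssBytes = 1500 - 40;                 // MTU minus IPv4 + TCP headers
    const firstFlightBytes = initialCongestionWindowSegments * mssBytes;
    console.log(firstFlightBytes);              // 14600 bytes, i.e. roughly 14 kB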

keysdev•3h ago
That and SPA
andix•1h ago
SPAs are great for highly interactive pages. Something like a mail client. It's fine if it takes 2-3 seconds extra when opening the SPA, it's much more important to have instant feedback when navigating.

SPAs are really bad for mostly static websites. News sites, documentation, blogs.

sgarland•1h ago
This. A million times this.

Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.

People like responsive applications - you can’t tell me you’ve never seen a non-tech person frustratingly tapping their screen repeatedly because something is slow.

marcosdumay•39m ago
Well, half of a second is a small difference. So yeah, there will probably be better things to work on up to the point when you have people working exclusively on your site.

> This is why almost all applications and websites are slow and terrible these days.

But no, there are way more things broken on the web than lack of overoptimization.

exiguus•3h ago
I think this is just an Art project.
andersmurphy•2h ago
Doesn’t have to be a choice; it could just be the default. My billion cells/checkboxes[1] demos both use datastar and so are just over 10kb. It can make a big difference on mobile networks and 3G. I did my own tests and being over 14kb often meant an extra 3s load time on bad connections. The nice thing is I got this for free because the datastar maintainer cares about TCP slow start even though I might not.

- [1] https://checkboxes.andersmurphy.com

CyberDildonics•2h ago
If you make something that, well, wastes my time because you feel it is, well, not important, then, well, I don't want to use it.
sgarland•1h ago
Depending on the physical distance, it can be much more than a few msec, as TFA discusses.
simgt•7h ago
Aside from latency, reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future. The environmental impact of our network is not negligible. Given the snarky comments here, we clearly have a long way to go.

EDIT: some replies missed my point; I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reducing energy consumption to be mentioned

qayxc•7h ago
It's not low-hanging fruit, though. While you try to optimise to save a couple of mWh in power use, a single search engine query uses 100x more and an LLM chat is another 100x of that. In other words: there's bigger fish to fry. Plus caching, lazy loading etc. mitigates most of this anyway.
vouaobrasil•7h ago
Engineering-wise, it sometimes isn't. But it does send a signal that can also become a trend in society to be more respectful of our energy usage. Sometimes, it does make sense to focus on the most visible aspect of energy usage, rather than the most intensive. Just by making your website smaller and being vocal about it, you could reach 100,000 people if you get a lot of visitors, whereas Google isn't going to give a darn about even trying to send a signal.
qayxc•6h ago
I'd be 100% on board with you if you were able to show me a single - just a single - regular website user who'd care about energy usage of a first(!) site load.

I'm honestly just really annoyed about this "society and environment"-spin on advice that would have an otherwise niche, but perfectly valid reason behind it (TFA: slow satellite network on the high seas).

This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.

You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.

vouaobrasil•6h ago
Perhaps you are right. But I do remember one guy who had a YouTube channel and he uploaded fairly low-quality videos at a reduced framerate to achieve a high level of compression, and he explicitly put in his video that he did it to save energy.

Now, it is true that it didn't save much, because many people were probably uploading 8K videos at the time, so it was a drop in the bucket. But personally, I found it quite inspiring and his decision was instrumental in my deciding to never upload 4K. And in general, I will say that people like that do inspire me and keep me going to be as minimal as possible when I use energy in all domains.

For me at least, trying to optimize for using as little energy as possible isn't an engineering problem. It's a challenge to do it uniformly as much as possible, so it can't be subdivided. And I do think every little bit counts, and if I can spend time making my website smaller, I'll do that in case one person gets inspired by that. It's not like I'm a machine and my only goal is time efficiency....

marcosdumay•25m ago
So, literally virtue signaling?

And no, a million small sites won't "become a trend in society".

timeon•7h ago
Sure, there are more resource-heavy places, but I think the problem is the general approach. Neglect of performance and of resources overall brought us to these resource-heavy tools. It seems dismissive when people point to places where more cuts could be made and call it a day.

If we want to really fix the places with bigger impact we need to change this approach in the first place.

qayxc•6h ago
Sure thing, but it's not low-hanging fruit. The impact is so minuscule that the effort required is too high when compared to the benefit.

This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.

So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour so you'd have a hard time even measuring the impact whereas simply NOT ordering pizza and having a home made salad instead would have a measurable impact orders of magnitude greater.

Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.

quaintdev•6h ago
LLM companies should report how much energy got consumed processing a user's request. Maybe people will think twice before generating AI slop
simgt•5h ago
Of course, but my point is that it's still a constraint we should have in mind at every level. Dupont poisoning public water with pfas does not make you less of an arsehole if you toss your old iPhone in a pond for the sake of convenience.
victorbjorklund•1h ago
On the other hand - it's kind of like saying we don't need to drive env-friendly cars because it is a drop in the bucket compared to container ships etc
vouaobrasil•7h ago
Absolutely agree with that. I recently visited the BBC website the other day and it loaded about 120MB of stuff into the cache - for a small text article. Not only does it use a lot of extra energy to transmit so much data, but it promotes a general atmosphere of wastefulness.

I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.

A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.

I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.

FlyingAvatar•7h ago
The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.

I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.

vouaobrasil•6h ago
The problem is that a lot of people DO have their own websites, which they have some control over. So it's not like a million people optimizing their own websites will have any control over what Google does with YouTube, for instance...
jychang•6h ago
A million people is a very strong political force.

A million determined voters can easily force laws to be made which forces youtube to be more efficient.

I often think about how orthodoxical all humans are. We never think about different paths outside of social norms.

- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.

- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.

- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s-1970s era of assassinations has truly gone and passed.

vouaobrasil•6h ago
I sort of agree...but not really, because you'll never get a situation where a million people can vote on a specific law about making YT more efficient. One needs to muster some sort of general political will to even get that to be an issue, and that takes a lot more than a million people.

Personally, if a referendum were held tomorrow to disband Google, I would vote yes for that...but good luck getting that referendum to be held.

atoav•6h ago
Yes but drops in the bucket count. If I take anything away from your statement, it is that people should be selective where to use videos for communications and where not.
OtherShrezzing•6h ago
> but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.

Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.

josephg•6h ago
It might do the opposite. We need to teach engineers of all stripes how to analyse and fix performance problems if we’re going to do anything about them.
molszanski•6h ago
If you turn this into an open problem, without hypothetical limits on what a frontend engineer can do, it would become more interesting and more impactful in real life. That said engineer is a human being who could use that time in myriad other ways that would be more productive in helping the environment
simgt•6h ago
That's exactly it, but I fully expected whataboutism under my comment. If I had mentioned video streaming as a disclaimer, I'd probably have gotten crypto or Shein as counter "arguments".

Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.

schiffern•6h ago
In that spirit I have a userscript, ironically called Youtube HD[0], that with one edit sets the resolution to 'medium' ie 360p. On a laptop it's plenty for talking head content (the softening is nice actually), and I only find myself switching to 480p if there's small text on screen.

It's a small thing, but as you say internet video is relatively heavy.

To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.

For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]

To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.

Sorry this got long. Cheers

[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd

[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...

[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...

[3] https://github.com/gorhill/ublock/wiki/Blocking-mode

[4] https://www.localcdn.org/

[5] https://github.com/ClearURLs/Addon

andrepd•6h ago
I've been using uBlock in advanced mode with 3rd party frames and scripts blocked. I recommend it, but it is indeed a pain to find the minimum set of things you need to unblock to make a website work, involving lots of refreshing.

Once you find it for a website you can just save it though so you don't need to go through it again.

LocalCDN is indeed a nobrainer for privacy! Set and forget.

oriolid•6h ago
> The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.

Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per time than Netflix or Youtube. Of course there's a lot of embedded video in ads and it could maybe count as streaming video.

danielbln•6h ago
Care to share that article? I find that hard to believe.
oriolid•4h ago
No article sorry, it's just what the bandwidth display on my home router shows. I could post some screenshots but I don't care for answering to everyone who tries to debunk them. Mobile version of Facebook is by the way much better optimized than the full webpage. I guess desktop browser users are a small minority.
pyman•6h ago
Talking about video streaming, I have a question for big tech companies: Why? Why are we still talking about optimising HTML, CSS and JS in 2025? This is tech from 35 years ago. Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site? The server could publish a link to the uncompressed source so anyone can inspect it, keeping the spirit of the open web alive. Do you realise how many years web developers have spent obsessing over this document-based legacy system and how to improve its performance? Not just years, their whole careers! How many cool technologies were created in the last 35 years? I lost count. Honestly, why are big tech companies still building on top of a legacy system, forcing web developers to waste their time on things like performance tweaks instead of focusing on what actually matters: the product.
hnlmorg•6h ago
That’s already how it works.

The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.

In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.

ahofmann•6h ago
1. How does that help avoid wasting resources? It needs more energy and traffic

2. Everything in our world is dwarves standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to have seen this pattern.

ozim•5h ago
I see you mistake html/css for what they were 30 years ago „documents to be viewed”.

HTML/CSS/JS is the only fully open stack, free as in beer, not owned by a single entity, and standardized by multinational standardization bodies, for building application interfaces that are cross-platform, and it does that excellently. Especially with Electron you can build native apps with HTML/CSS/JS.

There are actual web apps, not „websites”, being built. Web apps are not HTML with some jQuery sprinkled around; they are actually heavy apps.

01HNNWZ0MV43FF•4h ago
Practically it is owned by Google, or maybe Google + Apple
01HNNWZ0MV43FF•4h ago
> Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site?

I'll have to speculate what you mean

1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)

2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.

3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.

4. If you are drawing pixels from scratch you also have to re-implement stuff like selecting and copying text, which is possible but not feasible.

5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.

You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.

I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.

Naru41•45m ago
The ideal HTML I have in mind is a DOM tree represented entirely in TLV binary -- and a compiled .so file instead of .js. And unpacked data to be used directly as a C data structure. Zero copy, no parsing, (data validation is unavoidable but) that's certainly fast.
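Purely as an illustration of the TLV idea (my own sketch, not an existing format), each node could be a type/length/payload record, with an element's payload being the concatenation of its children's records:

    // Hypothetical TLV encoding of a DOM-like tree: 1 type byte, 4 length bytes, then payload.
    enum NodeKind { Element = 1, Text = 2 }

    function record(kind: NodeKind, payload: Uint8Array): Uint8Array {
      const out = new Uint8Array(5 + payload.length);
      out[0] = kind;
      new DataView(out.buffer).setUint32(1, payload.length); // length prefix
      out.set(payload, 5);
      return out;
    }

    function text(s: string): Uint8Array {
      return record(NodeKind.Text, new TextEncoder().encode(s));
    }

    function element(children: Uint8Array[]): Uint8Array {
      const total = children.reduce((n, c) => n + c.length, 0);
      const payload = new Uint8Array(total);
      let offset = 0;
      for (const c of children) { payload.set(c, offset); offset += c.length; }
      return record(NodeKind.Element, payload);
    }

    // A reader walks this with fixed-size header reads and offsets - no text parsing.
    const page = element([text("Hello"), element([text("world")])]);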
hnlmorg•5h ago
It matters at web scale though.

Like how industrial manufacturers are the biggest carbon consumers, and compared to them I’m just a drop in the ocean. But that doesn’t mean I don’t also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.

Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.

And you’re right, it shouldn’t be the end of the story. However that doesn’t mean it’s a wasted effort / irrelevant optimisation

jbreckmckye•5h ago
I feel this way sometimes about recycling. I am very diligent about it, washing out my cans and jars, separating my plastics. And then I watch my neighbour fill our bin with plastic bottles, last-season clothes and uneaten food.
extra88•1h ago
At least you and your neighbor are operating on the same scale. Don't stop those individual choices but more members of the populace making those choices is not how the problem gets fixed, businesses and whole industries are the real culprits.
ofalkaed•5h ago
I feel better about limiting the size of my drop in the bucket than I would feel about just saying my drop doesn't matter even if it doesn't matter. I get my internet through my phone's hotspot with its 15gig a month plan, I generally don't use the entire 15gigs. My phone and laptop are pretty much the only high tech I have, audio interface is probably third in line and my oven is probably fourth (self cleaning). Furnace stays at 50 all winter long even when it is -40 out and if it is above freezing the furnace is turned off. Never had a car, walk and bike everywhere including groceries and laundry, have only used motorized transport maybe a dozen times in the past decade.

A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.

I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it, seems like they just needlessly complicate their life. Not too long ago my neighbor was bragging about how effective all the money he spent on energy efficient windows, insulation, etc, was, he saved loads of money that winter; his heating bill was still nearly three times what mine was despite using a wood stove to offset his heating bill, my house being almost the same size, barely insulated and having 70 year old windows. I just put on a sweater instead of turning up the heat.

Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.

sylware•6h ago
Country where 10 million people play their fav greedy-3D game in the evening, with state-of-the-art 400W GPUs, all at the same time...
presentation•6h ago
Or we can just commit to building out solar infrastructure and not worry about this rounding error anymore
hiAndrewQuinn•6h ago
Do we? Let's compare some numbers.

Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.

The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1 to 3 million times to create the same energy demands as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000.

I think those are pretty good ways to use the energy.
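Plugging the comment's own figures in (their estimates, not measurements), the per-burger numbers work out as follows:

    // Hamburger: 2-6 kWh to produce; 14 kB page: ~0.000002 kWh per load (figures from the comment above).
    const burgerKwh = [2, 6];
    const smallPageKwh = 0.000002;            // ~14 kB page
    const heavyPageKwh = smallPageKwh * 1000; // ~14 MB page, 1000x the bytes

    console.log(burgerKwh.map((kwh) => Math.round(kwh / smallPageKwh))); // [1000000, 3000000]
    console.log(burgerKwh.map((kwh) => Math.round(kwh / heavyPageKwh))); // [1000, 3000]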

ajsnigrutin•6h ago
Now open an average news site, with 100s of requests, tens of ads, autoplaying video ads, tracking pixels, etc., using gigabytes of RAM and a lot of CPU.

Then multiply that by the number of daily visitors.

Without "hamburgers" (food in general), we die, reducing the size of usesless content on websites doesn't really hurt anyone.

hiAndrewQuinn•6h ago
Now go to an average McDonalds, with hundreds of orders, automatically added value meals, customer rewards, etc. consuming thousands of cows and a lot of pastureland.

Then multiply that by the number of daily customers.

Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.

ajsnigrutin•6h ago
Sure, but you've got to eat something.

Now, if McDonald's padded 5kB of calories of a cheeseburger with 10,000 kilobytes of calories in wasted food like news sites do, it would be a different story. The ratio would be 200 kilos of wasted food for 100 grams of usable beef.

hombre_fatal•3h ago
You don't need to eat burgers though. You can eat food that consumes a small fraction of energy, calorie, land, and animal input of a burger. And we go to McDonalds because it's a dopamine luxury.

It's just an inconvenient truth for people who only care about the environmental impact of things that don't require a behavior change on their part. And that reveals an insincere, performative, scoldy aspect of their position.

https://ourworldindata.org/land-use-diets

ajsnigrutin•2h ago
Sure, but beef tastes good. I mean.. there are better ways to eat beef than mixed with soy at mcdonalds, but still...

What benefit does an individual get from downloading tens of megabytes of useless data to get ~5kB of useful data in an article? It wastes download time, bandwidth, users time (having to close the autoplaying ad), power/battery, etc.

justmarc•6h ago
Just wondering, how did you reach the energy calculation for serving that 14k page?

For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks involved along the way in the connection. Did you take that into account?

swores•4h ago
If Reddit serves 20 billion page views per month, at an average of 5MB per page (these numbers are at least in the vicinity of being right), then reducing the page size by 10% would by your maths be worth 238,000 burgers, or a 50% reduction worth almost 1.2million burgers per month. That's hardly insignificant for a single (admittedly, very popular) website!

(In addition to what justmarc said about accounting for the whole network. Plus, between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)

justmarc•4h ago
Slightly veering off topic but I honestly wonder how many burgers will I fry if I ask ChatGPT to make a fart app?
hombre_fatal•3h ago
A tiny fraction of a burger.
spacephysics•6h ago
This is one of those things that is high effort, low impact. Similar to recycling in some cities/towns where it just gets dumped in a landfill.

Instead we should be looking to nuclear power solutions for our energy needs, and not waste time with reducing website size if its purely a function of environmental impact.

zigzag312•6h ago
So, anyone serious about sustainable future should stop using Python and stop recommending it as introduction to programming language? I remember one test that showed Python using 75x more energy than C to perform the same task.
mnw21cam•6h ago
I'm just investigating why the nightly backup of the work server is taking so long. Turns out Python (as conda, anaconda, miniconda, etc.) has dumped 22 million files across the home directories, and this takes a while to just list, let alone work out which files have changed and need archiving. Most of these are duplicates of each other, and files that should really belong to the OS, like bin/curl.

I myself have installed one single package, and it installed 196,171 files in my home directory.

If that isn't gratuitous bloat, then I don't know what is.

sgarland•1h ago
Conda is its own beast tbf. Not saying that Python packaging is perfect, but I struggle to imagine a package pulling in 200K files. What package is it?
noduerme•6h ago
Yeah, the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant. Are you seriously telling me now that if my website is 256k or 1024k I'm responsible for destroying the planet? Take it out on your masters.

And no, reducing resource use to the minimum in the name of sustainability does not scale down the same way it scales up. You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.

It's never clear to me whether people who push this line are doing so because they're bitter and want to punish other humans, or because they hate themselves. Either way, it evinces a system of thought that has already relegated humankind to the dustbin of history. If, in the long run, that's what happens, you're right and everyone else is wrong. Congratulations. It will make little difference in that case to you if the rest of us move on for a few hundred years to colonize the planets and revive the biosphere. Comfort yourself with the knowledge that this will all end in 10 or 20 thousand years, and the world will go back to being a hot hive of insects and reptiles. But what glory we wrought in our time.

simgt•6h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant

Whataboutism. https://en.m.wikipedia.org/wiki/Whataboutism

> You're just pushing the idea that all human activity is some sort of disease that's best disposed of. That's essentially just wishing the worst on your own species for being successful.

Strawmaning. https://en.m.wikipedia.org/wiki/Straw_man

Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.

noduerme•6h ago
> the environmental impact of jackasses mining jackass coin, or jackasses training LLMs is not insignificant

(this was actually stated in agreement with the original poster, who you clearly misunderstood, so there's no "what-about" involved here. They were condemning all kinds of consumption, including the frivolous ones I mentioned).

But

I'm afraid you've missed both my small point and my wider point.

My small point was to argue against the parent's comment that

>>reducing ressources consumption to the minimum required should always be a concern if we intend to have a sustainable future

I disagree with this concept on the basis that nothing can be accomplished on a large scale if the primary concern is simply to reduce resource consumption to a minimum. If you care to disagree with that, then please address it.

The larger point was that this theory leads inexorably to the idea that humans should just kill themselves or disappear; and it almost always comes from people who themselves want to kill themselves or disappear.

simgt•5h ago
> if the primary concern is simply to reduce resource consumption to a minimum

..."required".

That allows you to fit pretty much everything in that requirement. Which actually makes my initial point a bit weak, as some would put "delivering 4K quality tiktok videos" as a requirement.

Point is that energy consumption and broad environmental impact has to be a constraint in how we design our systems (and businesses).

I stand by my accusations of whataboutism and strawmaning, though.

noduerme•4h ago
carelessly thrown about accusations of whataboutism and strawmaning are an excellent example of whataboutism and strawmaning. I was making a specific point, directly to the topic, without either putting words in their mouth or addressing an unrelated issue. I'll stand by my retort.
noduerme•5h ago
>> Every bloody mention of the environmental impact of our activities gets at least a reply like yours that ticks one of these boxes.

That's a sweeping misunderstanding of what I wrote, so I'd ask that you re-read what I said in response to the specific quote.

iinnPP•5h ago
You'll find that people "stop caring" about just about anything when it starts to impact them. Personally, I agree with your statement.

Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.

Also that the 14kb size is less than 1% of the current average mobile website payload.

lpapez•5h ago
Being concerned about page sizes is 100% wasted effort.

Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.

Then compare that number to how much energy it takes to produce a single hamburger.

Do the calculation yourself if you do not believe me.

On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.

ksec•7h ago
Missing 2021 in the title.

I know it is not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1s. Nothing more, nothing less.

sangeeth96•5h ago
I think the advice is still very relevant though. Plus, the varying network conditions mentioned in the article would ensure it’s difficult if not impossible to guarantee a consistent response time. As someone with spotty cellular coverage, I can understand the pains of browsing when you’re stuck with that.
ksec•4h ago
Yes. I don't know how it could be achieved other than having JS render the whole thing and wait until the designated time before showing it all. And that time could be dependent on the network connection.

But this sort of goes against my no / minimal JS front end rendering philosophy.

the_precipitate•7h ago
And you do know that .exe file is wasteful, .com file actually saves quite a few bytes if you can limit your executable's size to be smaller than 0xFF00h (man, I am old).
cout•6h ago
And a.out format often saves disk space over elf, despite duplicating code across executables.
crawshaw•7h ago
If you want to have fun with this: the initial window (IW) is determined by the sender. So you can configure your server to the right number of packets for your website. It would look something like:

    ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
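    # initcwnd: packets the kernel may send before the first ACK (the initial congestion window)
    # initrwnd: the initial receive window advertised to the peer for this route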
A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
londons_explore•5h ago
be a bad citizen and just set it to 1000 packets... There isn't really any downside apart from potentially clogging up someone who has a dialup connection and bufferbloat.
notpushkin•4h ago
This sounds like a terrible idea, but can anybody pinpoint why exactly?
buckle8017•4h ago
Doing that would basically disable the congestion control at the start of the connection.

Which would be kinda annoying on a slow connection.

Either you'd have buffer issues or dropped packets.

jeroenhd•4h ago
Anything non-standard will kill shitty middleboxes so I assume spamming packets faster than anticipated will have corporate networks block you off as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may also break some mobile connections. I don't have any proof but shitty middleboxes have broken connections with much less obvious protocol features.

But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.

r1ch•23m ago
Loss-based TCP congestion control and especially slow start are a relic from the 80s when the internet was a few dialup links and collapsed due to retransmissions. If an ISP's links can't handle a 50 KB burst of traffic then they need to upgrade them. Expecting congestion should be an exception, not the default.

Disabling slow start and using BBR congestion control (which doesn't rely on packet loss as a congestion signal) makes a world of difference for TCP throughput.

sangeeth96•5h ago
> A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.

Any reference for this?

ryan-c•3h ago
I'm not going to dig it up for you, but this is in line with what I've read and observed. I set this to 20 packets on my personal site.
darthShadow•3h ago
* https://sirupsen.com/napkin/problem-15

* https://www.cdnplanet.com/blog/initcwnd-settings-major-cdn-p...

austin-cheney•7h ago
It seems the better solution is to not use HTTP server software that employs this slow start concept.

Using my own server software I was able to produce a complex single page app that resembled an operating system graphical user interface and achieve full state restoration as fast as 80ms from localhost page request according to the Chrome performance tab.

mzhaase•7h ago
TCP settings are OS level. The web server does not touch them.
austin-cheney•6h ago
The article says this is not a TCP layer technology, but something employed by servers as a bandwidth estimating algorithm.

You are correct in that TCP packets are processed within the kernel of modern operating systems.

Edit for clarity:

This is a web server only algorithm. It is not associated with any other kind of TCP traffic. It seems from the down votes that some people found this challenging.

jeffbee•35m ago
Yet another reason that QUIC is better.
firecall•6h ago
Damn... I'm at 17.2KB for my home page! (not including dependencies)

FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL

Built in Rails too!

It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!

ghoshbishakh•6h ago
Rails has nothing to do with the rendered page size though. Congrats on the perfect Lighthouse score.
Alifatisk•5h ago
Doesn't the Rails asset pipeline have an effect on the page size, e.g. if Propshaft is being used instead of Sprockets? From what I remember, Propshaft intentionally does not include minification or compression.
firecall•1h ago
It’s all Rails 8 + Turbo + Stimulus JS with Propshaft handling the asset bundling / pipeline.

All the Tailwind building and so on is done using common JS tools, which are mostly standard out of the box Rails 8 supplied scripts!

Sprockets used to do the SASS compilation and asset bundling, but the Rails standard now is to bring your own preferences for CSS/JS compilation.

firecall•1h ago
Indeed it does not :-)

It was more a quick promote-Rails comment, since Rails can get dismissed as not something to build a fast website in :-)

apt-apt-apt-apt•5h ago
Yeah, the fact that news.ycombinator.com loads instantly pleases my brain so much I flick it open during downtime automonkey-ly
Alifatisk•5h ago
Lobsters, Dlang's forum and HN are among the few places I know that load instantly, and I love it. This is how it should be!
gammalost•6h ago
If you care about reducing the amount of back and forth then just use QUIC.
eviks•6h ago
Has this theory been tested?
justmarc•6h ago
Does anyone have examples of tiny yet aesthetically pleasing websites or pages?

Would love it if someone kept a list.

hackerman_fi•5h ago
There is an example link in the article. Listing more examples would serve no purpose apart from a web design perspective.
justmarc•5h ago
Well, exactly that, I'm looking for inspiration.
FlyingSnake•5h ago
There’s https://512kb.club/ which I follow to keep my website lightweight
wonger_•5h ago
10kbclub.com, archived: https://archive.li/olM9k

https://250kb.club/

Hopefully you'll find some of them aesthetically pleasing

adastra22•6h ago
The linked page is 35kB.
fantyoon•5h ago
35kB after it's uncompressed. On my end it sends 13.48kB.
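
For anyone who wants to check their own pages, a quick sketch with curl (the URL is a placeholder); without --compressed curl leaves the body encoded, so %{size_download} is the on-the-wire byte count:

    curl -s -o /dev/null -H 'Accept-Encoding: gzip, br' \
      -w 'wire size: %{size_download} bytes\n' https://example.com/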
adastra22•3h ago
Makes sense, thanks!
susam•6h ago
I just checked my home page [1] and it has a compressed transfer size of 7.0 kB.

  /            2.7 kB
  main.css     2.5 kB
  favicon.png  1.8 kB
  -------------------
  Total        7.0 kB
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB!

  katex.min.css              23.6 kB
  katex.min.js              277.0 kB
  auto-render.min.js          3.7 kB
  KaTeX_Main-Regular.woff2   26.5 kB
  KaTeX_Main-Italic.woff2    16.7 kB
  ----------------------------------
  Total Additional          347.5 kB
Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.

[1] https://susam.net/

[2] https://github.com/susam/susam.net/blob/main/site.lisp

[3] https://susam.net/tag/mathematics.html

welpo•6h ago
> That said, I do use KaTeX with client-side rendering on a limited number of pages that have mathematical content

You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/

BlackFly•5h ago
KaTeX renders to MathML (either server side or client side). Generally people want a slightly more fluent way of describing an equation than is permitted by a soup of HTML tags. The various TeX dialects (generally just referred to as LaTeX) are the preferred method of doing that.
mr_toad•4h ago
Server side rendering would cut out the 277kb library. The additional MathML being sent to the client is probably going to be a fraction of that.
mk12•4h ago
If you want to test out some examples from your website to see how they'd look in KaTeX vs. browser MathML rendering, I made a tool for that here: https://mk12.github.io/web-math-demo/
em3rgent0rdr•52m ago
Nice tool! Seems the "New Computer Modern" font is the native MathML rendering that looks closest to standard LaTeX rendering, I guess because LaTeX uses Computer Modern by default. But I notice extra space around the parentheses, which annoys me because LaTeX math allows you to be so precise about how wide your spaces are (e.g. \, \: \; \!). Is there a way to get the spaces around the parentheses to be just as wide as in standard LaTeX math? And the ^ hat above f(x) isn't nicely above just the top part of the f.
susam•4h ago
> You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/

I would love to use MathML, not directly, but automatically generated from LaTeX, since I find LaTeX much easier to work with than MathML. I mean, while I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me), than write MathML (which often tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, both in terms of aesthetics as well as in terms of accuracy.

For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals actually appear in print.

Even if I decide to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJaX generate the HTML and CSS on server-side and send that to the client and that's what I meant by server-side rendering in my earlier comment.

AnotherGoodName•1h ago
Math expressions are like regex to me nowadays. I ask the LLM coding assistant to write them and it's very, very good at it. I'll probably forget the syntax soon, but no big deal.

“MathML for {very rough textual form of the equation}” seems to give a 100% hit rate for me. Even when I want some formatting change I can ask the LLM, and it pretty much always has a solution (MathML can render symbols and subscripts in numerous ways, but the syntax is deep). It'll even add the CSS needed to change it up in some way if asked.

VanTodi•5h ago
Another idea would be to load the heavy library after the initial page is done, though it's loaded and heavy nonetheless. Or you could create SVGs for the formulas and load them when they enter the viewport. Just my 2 cents.
djoldman•4h ago
I never understood math / latex display via client side js.

Why can't this be precomputed into html and css?

mr_toad•4h ago
It’s a bit more work: usually you're going to have to install Node, Babel and some other tooling, and spend some time learning to use them if you're not already familiar with them.
susam•3h ago
> I never understood math / latex display via client side js. Why can't this be precomputed into html and css?

It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.

While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't have hesitated to add this extra tooling on any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.

dfc•2h ago
Is it safe to say the website is your passion project?
marcthe12•1h ago
Well, there is MathML, but it had poor support in Chrome until recently. That is the web's native equation formatting.
smartmic•6h ago
If I understood correctly, the rule is dependent on web server features and/or configuration. In that case, an overview of web servers which have or have not implemented the slow start algorithm would be interesting.
mikl•6h ago
How relevant is this now, if you have a modern server that supports HTTP/3?

HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.

hulitu•6h ago
> How relevant is this now

Very relevant. A lot of websites need 5 to 30 seconds or more to load.

throwaway019254•6h ago
I have a suspicion that the 30 second loading time is not caused by TCP slow start.
ajross•4h ago
Slow start is about saving the small integer number of RTTs that the algorithm takes to ramp up to line speed. A 5-30 second load time is an order of magnitude off, and almost certainly due to simple asset size.
gbuk2013•6h ago
As per the article, QUIC (transport protocol underneath HTTP/3) uses slow start as well. https://datatracker.ietf.org/doc/id/draft-ietf-quic-recovery...
gsliepen•5h ago
A lot of people don't realize that all these so-called issues with TCP, like slow start, Nagle, window sizes and congestion algorithms, are not there because TCP was badly designed, but rather because these are inherent problems you get when you want to create any reliable stream protocol on top of an unreliable datagram one. The advantage of QUIC is that it can multiplex multiple reliable streams while using only a single congestion window, which is a bit more efficient than having multiple TCP sockets.

One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
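
For reference, the standardized way to actually use data in the SYN is TCP Fast Open (RFC 7413), which guards against those edge cases with a cookie obtained on an earlier connection. A minimal sketch of enabling it on Linux (runtime only; applications still have to opt in via the TCP_FASTOPEN socket option):

    # bitmask: 1 = client, 2 = server, 3 = both
    sysctl -w net.ipv4.tcp_fastopen=3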

gbuk2013•2h ago
They also tend to focus on bandwidth and underestimate the impact of latency :)

Interesting to hear that QUIC does away with the 3WHS - it always catches people by surprise that it takes at least 4 x latency to get some data on a new TCP connection. :)
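
curl's write-out timers make this easy to see on a real connection (a sketch; the URL is a placeholder, and each value is cumulative from the start of the request, so the gaps between them are the round trips):

    curl -s -o /dev/null https://example.com/ \
      -w 'dns %{time_namelookup}  tcp %{time_connect}  tls %{time_appconnect}  ttfb %{time_starttransfer}  total %{time_total}\n'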

ilaksh•5h ago
https://github.com/runvnc/tersenet
tgv•5h ago
This could be another reason: https://blog.cloudflare.com/russian-internet-users-are-unabl...

> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.

Alifatisk•5h ago
I agree with the sentiment here. The thing is, I've noticed that the newer generations are using frameworks like Next.js as the default for building simple static websites. That's their bare-bones starting point. The era of plain HTML + CSS (and maybe a sprinkle of JS) feels like it's fading away, sadly.
jbreckmckye•5h ago
I think that makes sense.

I have done the hyper-optimised, inline-resource, no-blocking-script, hand-minimised-JS, 14kb website thing before, and the problem with doing it the "hard" way is that it traps you in a design and architecture.

When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.

Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.

fleebee•5h ago
I think you're late enough to that realization that the trend has already shifted back a bit. Most frameworks I've dealt with can emit statically generated sites, Next.js included. Astro feels like it's designed for that purpose from the ground up.
austin-cheney•4h ago
You have noticed that only just recently? This has been the case since jQuery became popular before 2010.
chneu•3h ago
Arguably it's been this way since web 2.0 became a thing in like 2008?
zos_kia•3h ago
Next.js bundles the code and aggressively minifies it, because their base use case is to deploy on lambdas or very small servers. A static website using Next would be quite optimal in terms of bundle size.
hackerman_fi•5h ago
The article has IMO two flawed arguments:

1. There is math for how long it takes to send even one packet over a satellite connection (~1600ms). It's a weak argument for the 14kb rule since there is no comparison with a larger website. 10 packets won't necessarily take 16 seconds.

2. There is a mention that images on a webpage are included in this 14kb rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.

hsbauauvhabzb•5h ago
Also the assumption that my userbase uses high-latency satellite connections, and is somehow unable to put up with my website, when every other website in current existence is multiple megabytes.
ricardobeat•4h ago
There was no such assumption, that was just the first example after which he mentions normal roundtrip latencies are usually in the 100-300ms range.

Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.

sgarland•1h ago
> Just because everything else is bad, doesn't invalidate the idea that you should do better.

I get this all the time at my job, when I recommend a team do something differently in their schema or queries: “do we have any examples of teams currently doing this?” No, because no one has ever cared to try. I understand not wanting to be guinea pigs, but you have a domain expert asking you to do something, and telling you that they’ll back you up on the decision, and help you implement it. What more do you want?!

throwup238•5h ago
> In what case are images inlined to a page’s initial load?

Low-resolution thumbnails that are blurred via CSS filters, over which the real images fade in once downloaded. Done properly, it usually only adds a few hundred bytes per image for above-the-fold images.

I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.

youngtaff•5h ago
It’s not really relevant in 2025…

The HTTPS negotiation is going to consume the initial roundtrips which should start increasing the size of the window

Modern CDNs start with larger initial windows and also pace the packets onto the network to reduce the chances of congestion

There’s also a question as to how relevant the 14kb rule has ever been… HTML renders progressively so as long as there’s some meaningful content in the early packets then overall size is less important

maxlin•5h ago
The geostationary satellite example, while interesting, is kinda obsolete in the age of Starlink
theandrewbailey•3h ago
Starlink is only one option in the satellite internet market. There are too many embedded systems and too much legacy infrastructure for it to be reasonable to assume that 'satellite internet' means Starlink. Maybe in 20 years, but not today.
maxlin•2h ago
That's like saying vacuum tubes are only one option in the radio market.

The quality of the connection is so much better, and since you can get a Starlink Mini with a 50GB plan for very little money, it's already in the zone where just one worker could grab his own and bring it onto the rig to use in his free time and to share.

Starlink terminals aren't "infrastructure". Campers often toss one on their roof without even leaving the vehicle. Easier than moving a chair. So, as I said, the geostationary legacy system immediately becomes entirely obsolete other than for redundancy, and is kinda irrelevant for uses like browsing the web.

3cats-in-a-coat•1h ago
"Obsolete" suggests Starlink is clearly better and sustainable, and that's a very bold statement to make at this point. I suspect in few decades the stationary satellites will still be around, while Starlink would've either evolved drastically or gone away.
LAC-Tech•5h ago
This looks like such an interesting article, but it's completely ruined by the fact that every sentence is its own paragraph.

I swear I am not just trying to be a dick here. If I didn't think it had great content I wouldn't have commented. But I feel like I'm reading a LinkedIn post. Please join some of those sentences up into paragraphs!

GavinAnderegg•4h ago
14kB is a stretch goal, though trying to stick to the first 10 packets is a cool idea. A project I like that focuses on page size is 512kb.club [1] which is like a golf score for your site’s page size. My site [2] came in just over 71k when I measured before getting added (for all assets). This project also introduced me to Cloudflare Radar [3] which includes a great tool for site analysis/page sizing, but is mainly a general dashboard for the internet.

[1] https://512kb.club/

[2] https://anderegg.ca/

[3] https://radar.cloudflare.com/

FlyingSnake•2h ago
Second this. I also find 512kb a more realistic benchmark and use it for my website.

The modern web crossed the Rubicon on 14kb websites a long time ago.

mousethatroared•2h ago
A question as a non user:

What are you doing with the extra 500kB for me, the user?

90% of the time I'm interested in text. For most of the remainder, vector graphics would suffice.

14 kB is a lot of text and graphics for a page. What is the other 500 for?

nicce•1h ago
If you want a fancy syntax highlighter for code blocks with multiple languages on your website, that alone is about that size, e.g. the regex rules and the regex engine.
masfuerte•1h ago
As an end user I want a website that does the highlighting once on the back end.
filleduchaos•57m ago
Text, yes. Graphics? SVGs are not as small as people think, especially if they're any more complex than basic shapes, and there are plenty of things that simply cannot be represented as vector graphics anyway.

It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.

Brajeshwar•1h ago
512kb is pretty achievable for personal websites. My next target is to stay within 99kb (100kb as the ceiling). Should be pretty trivial on a few weekends. My website is in the Orange on 512kb.
zelphirkalt•4h ago
My plain HTML alone is 10kB and it is mostly text. I don't think this is achievable for most sites, even the ones limiting themselves to only CSS and HTML, like mine.
3cats-in-a-coat•4h ago
This is about your "plain HTML". If the rest is in cache, then TCP concerns are irrelevant.
silon42•4h ago
You must also be careful not to trigger "If-Modified-Since" or similar revalidation checks.
MrJohz•4h ago
Depending on who's visiting your site and how often, the rest probably isn't in cache though. If your site is a product landing page or a small blog or something else that people are rarely going to repeatedly visit, then it's probably best to assume that all your assets will need to be downloaded most of the time.
3cats-in-a-coat•1h ago
While it'd be fun to try, I doubt you can produce any page at all that totals 14kb with assets, even back at the dawn of the web in the 90s, aside from the spartan minimal academic pages some people have, where loading faster is completely irrelevant anyway.
nottorp•4h ago
So how bad is it when you add HTTPS?
xg15•4h ago
> Also HTTPS requires two additional round trips before it can do the first one — which gets us up to 1836ms!

Doesn't this sort of undo the entire point of the article?

If the idea was to serve the entire web page in the first roundtrip, wouldn't you have lost the moment TLS is used? Not only does the TLS handshake send lots of stuff (including the certificate) that will likely get you over the 14kb boundary before you even get the chance to send a byte of your actual content - but the handshake also includes multiple request/response exchanges between client and server, so it would require additional roundtrips even if it stayed below the 14kb boundary.

So the article's advice only holds for unencrypted plain-TCP connections, which no one would want to use today anymore.

The advice might be useful again if you use QUIC/HTTP3, because that one replaces TCP and folds the TLS handshake into its own transport. But then, you'd first have to look up how congestion control and bandwidth estimation work in HTTP/3 and whether 14kb is still the right threshold.
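
If your curl build has HTTP/3 support (and the server offers it), you can compare the two directly; a sketch with a placeholder URL:

    # TCP handshake, then TLS, then the request
    curl -s -o /dev/null --http1.1 -w 'h1 first byte: %{time_starttransfer}s\n' https://example.com/
    # QUIC: TLS 1.3 is folded into the transport handshake
    curl -s -o /dev/null --http3 -w 'h3 first byte: %{time_starttransfer}s\n' https://example.com/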

tomhow•3h ago
Discussed at the time:

A 14kb page can load much faster than a 15kb page - https://news.ycombinator.com/item?id=32587740 - Aug 2022 (343 comments)

mikae1•3h ago
> Once you lose the autoplaying videos, the popups, the cookies, the cookie consent banners, the social network buttons, the tracking scripts, javascript and css frameworks, and all the other junk nobody likes — you're probably there.

How about a single image? I suppose a lot of people (visitors and webmasters) like to have an image or two on the page.