The tool I use is Jampack and I'd highly recommend it: https://jampack.divriots.com
For my product website, it reduced the overall size by 590MB, approximately 59%, and also rewrote the HTML/CSS to apply the optimisations this article describes.
Mobile, tablet and desktop as breakpoints; AVIF with a WebP or JPEG fallback; Retina and normal density.
And this will increase the overall size to 12 variants x 500 images x 50kB = 300MB (3 breakpoints x 2 formats x 2 densities = 12 variants per image).
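To make the multiplication concrete, this is roughly the markup such a variant set implies; the file names here are hypothetical and would come from whatever the optimiser generates:

<picture>
  <!-- AVIF: 3 breakpoints x 2 densities = 6 files -->
  <source type="image/avif" media="(max-width: 600px)" srcset="header-mobile.avif 1x, header-mobile@2x.avif 2x">
  <source type="image/avif" media="(max-width: 1024px)" srcset="header-tablet.avif 1x, header-tablet@2x.avif 2x">
  <source type="image/avif" srcset="header-desktop.avif 1x, header-desktop@2x.avif 2x">
  <!-- WebP/JPEG fallback: another 6 files -->
  <source type="image/webp" media="(max-width: 600px)" srcset="header-mobile.webp 1x, header-mobile@2x.webp 2x">
  <source type="image/webp" media="(max-width: 1024px)" srcset="header-tablet.webp 1x, header-tablet@2x.webp 2x">
  <img src="header-desktop.jpg" srcset="header-desktop.jpg 1x, header-desktop@2x.jpg 2x" alt="Post header">
</picture>

Twelve files on disk for every source image, at around 50kB each, is where the total comes from.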
The large majority of this comes from large header images for the Insights post content: https://www.magiclasso.co/insights/
These are PNG images that are fairly large in size until the optimisation step generates the smaller, multi-resolution image sets.
I wouldn't say that this is an unusually large site. Any site with a medium amount of content would likely grow to such a size.
I'll definitely give this a try - only wish I knew about it before I wrote my thing!
<link rel="stylesheet" href="/_digested/assets/css/main-e03066e7ad7653435204aa3170643558.css" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/_digested/assets/css/main-e03066e7ad7653435204aa3170643558.css"></noscript>
This is deliberate FOUC. Why, I have no notion whatsoever. It should read:
<link rel="stylesheet" href="/_digested/assets/css/main-e03066e7ad7653435204aa3170643558.css">
Can't think of any other reason why you would do this.
Appreciate the feedback!
I've even watched YouTube videos of people going through their Search Console dashboards to make sure I'm not missing anything (and while those people do see a reason listed for some of their pages, mine shows nothing).
Q: Why did you decide to rewrite the whole image handling instead of just relying on the jekyll_picture_tag gem (https://github.com/rbuchberger/jekyll_picture_tag)? I've been using it for years and it just works.
Maybe it would make sense to decouple the image processing code from my library so that `jekyll_picture_tag` could be used, since it's somewhat orthogonal to the Propshaft-esque asset loading.
Just sharing another approach: keep the YouTube embed iframe, but replace the domain "youtube.com" with "embedlite.com". It loads only the video's thumbnail, and when someone clicks it, the full YouTube player loads.
More info: https://www.embedlite.com
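If the description above holds, the only change is the iframe's src domain; VIDEO_ID and the surrounding attributes here are placeholders:

<!-- Standard embed: the full YouTube player loads immediately -->
<iframe width="560" height="315" src="https://www.youtube.com/embed/VIDEO_ID" title="Video" allowfullscreen></iframe>
<!-- Domain swapped: only the thumbnail loads until the visitor clicks -->
<iframe width="560" height="315" src="https://www.embedlite.com/embed/VIDEO_ID" title="Video" allowfullscreen></iframe>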
Their example doesn't even seem to work on mobile at least (just iframes the homepage itself), which doesn't really inspire confidence.
So I let Claude write it. I told it I wanted a simple static website without any js frameworks. It made the whole thing. Any time I add a blog post, it updates the blog index page.
The site is, of course, very fast. But the main gain, for me, was not having to figure out how to get the underlying tech working. Yes, I'm probably dumber for it, but the site was up in a few hours and I got to go on with my life.
While that may be great for Google PageSpeed, it leads to issues that wouldn't exist with a static page, and to a degraded experience for the end user. I'm not sure if the issue is related to the plugin discussed in the article.
With this being said, I can see many use-cases for such a plugin. Having compile-time image transformation/compression is really nice.
Why is the automatic assumption for something not being updated in a few years that it's abandoned instead of done? Are libraries not allowed to be stable/done?
You really need to look at the issues/updates ratio. Are there 57 open issues that haven't been triaged or addressed? Are there multiple open PRs or requests that should be easily added and are just sitting there rotting?
That's why I push back against the notion that no updates = abandoned. Personal, painful experience.
As in, "I've personally witnessed people passing over mature libraries that just don't need any more updates in favor of ones that aren't really production ready but get frequent updates, which causes quite a bad dev experience down the line".
I am not really good at articulating my thoughts properly, so thanks for making me write this longer comment.
The idea that this kind of project is "done" without even occasional chore updates just has no shot. It's obviously off of its maintainers' "rotation".
I'd be hella miffed if I loaded a page on my laptop, then opened it up somewhere without internet access, and realized that half the page fully didn't exist.
I commented a couple days ago about how I taught my team about image formats [0], and just published a blog post this morning [1].
Another aspect of image optimization that I'm aware of, but haven't even bothered with yet, is handling the art direction problem of responsive images.
Like "pan and scan" conversions of widescreen movies to the old 4:3 TV size, if you're serving a narrower image to mobile devices than to, say, a desktop browser, the ideal image is probably not a generic resize or a center-cropped version of the original. Mozilla has a nice page on responsive images that explains it better than I could: https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Res...
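For reference, the standard mechanism for art direction is a <picture> element whose sources carry media conditions, with each source pointing at a differently cropped file rather than a plain resize (file names here are hypothetical):

<picture>
  <!-- Tight crop on the subject for narrow viewports -->
  <source media="(max-width: 600px)" srcset="scene-portrait-crop.jpg">
  <!-- Full widescreen framing for everything wider -->
  <img src="scene-widescreen.jpg" alt="Description of the scene">
</picture>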
Almost all audio, images, and videos are rather ornamental, and the content will be OK without them. I try to keep each piece of content as standalone as possible. For instance, the posts follow the pattern "_posts/YYYY/YYYY-MM-DD-url-goes-here.md," so I know where each year's content is, even though each post carries its own published date. I also have a folder "_posts/todo" where published (but work-in-progress) and future-dated posts live.
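A rough sketch of that layout, with hypothetical file names:

_posts/
  2021/
    2021-03-14-url-goes-here.md      <- yearly folders; each post still carries its own date
  todo/
    2026-01-01-work-in-progress.md   <- published-but-WIP and future-dated posts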
For images, I stopped worrying about serving multiple sources. I optimized everything to a single size that's good enough for both mobile and larger screens (I now treat tablet and desktop as the same).
https://brajeshwar.com/2021/brajeshwar.com-2021/