HEIC: Have fun licensing this.
WebP: Slightly better than JPEG maybe, but only supports 1/4 chroma resolution in lossy mode so some JPEGs will always look better than the equivalent WebP.
AVIF: Better than JPEG probably, but encoders for AV1 are currently heavily biased towards blurring, even at very high bitrates. Non-Chrome browser support took a while.
You can even "losslessly" compress existing JPEGs to JPEGXL.
JPEG XL is the natural replacement for JPEG and it is perverse that Google backtracked on supporting it.
And decoder complexity. A software JPEG decoder is a weekend project. A hardware JPEG decoder not much more. Doing the same for arbitrary JPEG XL files is much, much more complicated. In any world where any of development cost, implementation complexity, expected code quality (especially when using first-order assumptions like constant number of defects per line of code), or decoder resources (especially for hardware implementations) are important, JPEG has serious advantages.
Every device in the world with iOS 17 or macOS 14 or later has JPEG XL support across the system.
This is a complete and utter non-issue. Google had even added JPEG XL support to Chromium, and then bizarrely removed it (not long before Apple fully supported JXL across all their platforms which invariably would have pushed it over the top), presumably to try to anoint WebP as the successor. Only WebP has so many disadvantages that all they did was entrench classic JPEG.
JPEG XL is unquestionably the best current next gen format for images.
This depends on the system requirements, doesn't it? Suppose you're compositing a low-safety-impact video stream with (well, under) safety-impacting information in an avionics application, and you're currently using a direct GMSL link. There's an obvious opportunity to cost-down and weight-down the system by shifting to a lightly compressed stream over an existing shared Ethernet bus, and MJPEG is a reasonable option for this application (as is H.264, and other options -- trade study needed). When considering replacing JPEG with JPEG XL in this implementation, what's your plan for providing partitioning between the "extremely high quality" but QM software implementation and the DAL components? Are you going to dedicate a core to avoid timing interference? At that point you're spending more silicon than a dedicated JPEG decoder would take. You likely already have an FPGA in the system for doing the compositing itself, but what's the area trade-off between an existing "extremely high quality" JPEG XL hardware decoder and the JPEG one that you've been using for decades?
I don't doubt that in a world where everything is an iPhone (with a token nod to Android), "someone already wrote the code once and it's good enough" is sufficient. But there's a huge field of software engineering where complexity and quality drive decision making, and JPEG XL really is much more complex than JPEG Classic Flavor.
How many weekend project decoders are used in real apps?
While killing MP3 might be difficult, the vast majority of people aren't handling audio files themselves these days, so probably not hard to phase out fairly rapidly.
Even within the Western World, there are many people who like to own their digital music.
It's nice to have that consistent ubiquity, something very hard to find these days. Especially if your entire audio library (audio books, podcasts, songs) comes from some streaming service that requires an app!
Incidentally, breakage on VBR bitstreams is buggy behaviour, because some lazy developers assumed frame sizes would never change. VBR is completely within spec, and decoders don't need to do anything special to support it: every MP3 frame header declares its own bitrate.
Lastly, a note on bitrate: 320 kbps CBR (the max allowed by the spec) is often wasteful and pointless. In many cases, an encoder will pad out frames to conform to the requested bitrate; indeed, tools exist that will losslessly re-encode a CBR file to VBR by removing the padding, producing a smaller file. MP3 (as good as it is) has certain problem samples that aren't fixed by throwing more bits at them. A competent encoder with proper settings, like LAME (whose VBR mode defaults to -V4), is transparent on most samples to most people. If you disagree, you should double-blind test yourself.
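For reference, a minimal sketch of that LAME invocation (assuming the lame CLI is installed; -V ranges from 0, biggest files, to 9.999, smallest):

    # VBR quality level 4; a common transparency/size sweet spot
    lame -V 4 input.wav output.mp3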
Okay, my SanDisk player (an m300) is only 17 years old; nevertheless it plays Vorbis files just fine.
Where outside the West are you seeing many people that are specifically storing audio files in mp3 (vs just streaming/ storing in a better digital format/storing physical media)?
I live in SEA and most people here are not storing their own mp3s. Most people don't have computers at all – they have budget Android phones that don't have much built-in storage. What they do have is cheap internet, so they are either using Spotify (Free)/YouTube/etc. Many people still use CDs (mostly in cars) but those aren't mp3 either.
Sure they do. I know them personally. They don't even know what FLAC is. Buy music on bandcamp, download as mp3. A format known to them since the beginning of the 21st century.
> Where outside the West are you seeing many people that are specifically storing audio files in mp3 (vs just streaming/ storing in a better digital format/storing physical media)?
I feel like the whole former Eastern Bloc is doing just that. I was in Egypt last year; they share that stuff all over the place.
Most websites break with WebP. Desktop tools choke on WebP.
It sucks, because it's a good format.
That part at least isn't true.
A funny case in point: Sora.com produces WebP outputs, but you can't turn around and use them as inputs. (Maybe they've fixed that?)
Smaller websites almost always reject them. Even within big websites, support is fractured within the product surface area. You can't use them as Reddit profile icons, for instance.
One of the most apparent issues is that a lot of thumbnailing and CDN systems don't work natively with WebP, so you have to reject WebP outright until broader support is added.
Once the WebPs are in your systems, you have to make sure everything downstream can support them... it's infectious.
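If you do have to reject WebP at the upload boundary, the container is at least trivial to sniff; a minimal sketch in shell (WebP is a RIFF container with the fourcc "WEBP" at byte offset 8):

    #!/bin/bash
    # reject-webp.sh: exit 1 if the file looks like a WebP
    f="$1"
    if [ "$(head -c 4 "$f")" = "RIFF" ] &&
       [ "$(dd if="$f" bs=1 skip=8 count=4 2>/dev/null)" = "WEBP" ]; then
      echo "WebP detected: $f" >&2
      exit 1
    fi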
Really unfortunate that we haven't been able to move past this.
WebP support was added to WordPress in 2021 with version 5.8: https://wordpress.org/documentation/wordpress-version/versio...
AVIF support in 2024, with v6.5: https://make.wordpress.org/core/2024/02/23/wordpress-6-5-add...
I struggle to understand the justification for other lossy image formats as our networks continue to get faster. From a computation standpoint, it is really hard to beat JPEG. I don't know if extra steps like intra-block spatial prediction are really worth it when we are now getting 100 Mbps to our smartphones on a typical day.
You might be getting 100 Mbps to your smartphone; many people – yes, even within the United States – struggle to attain a quarter of that.
If jpeg is loading like ass, webp probably isn't going to arrive much faster.
You have to find a balance, and unless (still) pictures are at the center of what you are doing, it is typically only a fraction of the bandwidth (and a fraction of the processing power too).
We are not talking about 100 Mbps, we downloaded JPEGs from dialup connections you know. You don't even need to go into the Mbps unless you are streaming MJPEG (and why would you do that?).
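Back-of-the-envelope, assuming a typical ~100 KB photo JPEG: that's 800 kbit, about 15 seconds on a 56 kbps modem but under 10 ms at 100 Mbps, so the format's bandwidth cost stopped mattering for most users long ago.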
If humans are still around in a thousand years they'll be using JPEGs, and they'll still be using them a thousand years after that. When things work, they have a pernicious tendency to stick around.
When/if simple screens get usurped then we'll likely move on from JPEG.
I'm sure you were being a little flippant but your last sentence shows good insight. Someone said "we just need it to work" to me the other day and the "if it works there will be little impetus to improve it"-flag went off in my brain.
Idk about 3D, but I'll assume someone probably will tape something out of necessity if they haven't already.
…and yes, very flippant! But not without good reason. If we are to extrapolate: the popularity of JPEG, love it or hate it, will invariably necessitate its continued compatibility, which feeds my previous statement. That compatibility will in turn lead to plausible hypothetical circumstances where future developers, out of laziness, ignorance, or just plain conformity to norms, choose and use it, perpetuating the cycle. Short of a radical, mass-extinction-level event brought about by the kind of widespread technological adoption you describe, I don't see it going away anytime soon. Not to say it couldn't happen; I just feel it's highly improbable because of the contributing human factors.
That JPEG gets so many complaints is, I feel, for two reasons: one, its ubiquity, and two, that we actually see it! Some similar situations that don't get nearly as much attention but are far more pervasive are TCP/IP, bash, ntpd, ad nauseam. All old, pervasive protocols so embedded as to be taken for granted, and also never actually seen.
I’ll leave with this engineering truism that I feel should be more widely adhered to in software development, especially by UI designers: if it ain’t broke don’t fix it!
One place I think JXL will really shine is PBR and game textures. For cases like that, it's very common to have color + transparency + bump map + normal map, and potentially even more. Bundling all of those into a single file allows for way better compression.
Wheels are vastly superior to hover technologies in the crucial areas of steering and controlled braking. (For uncontrolled braking, you just cut the power to your hover fans and lift the skirts...)
It turns out to be remarkably difficult to get a hovercraft to go up an incline...
Wheels are both suspension and traction in one system.
There's no particular physical advantage to JPEG over the others mentioned; it's just currently ubiquitous.
To answer your (false?) question: there's a long list of benefits, but I'd say HDR and storage efficiency are the two big ones I can think of. The storage efficiency in particular is massive, especially with large images.
Because Google's PageSpeed and Lighthouse both tell people to use WebP, and a large percentage of devs will do anything Google say in the hopes of winning better SERP placement:
- https://web.dev/articles/serve-images-webp
- https://developer.chrome.com/docs/lighthouse/performance/use...
That doesn't even make much sense because you lose inter-frame compressibility
* It's not actually better.
* It's patented/requires license/is owned by someone who wants a lot of royalties.
JPEG2000 is of the second variety.
Regardless, since the picture tag[0] was introduced I've used that for most image media, with WebP as the default and relevant fallbacks. It also allows loading appropriately sized images based on media queries, which is a nice bonus.
[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
JPEGS are great for photographs, where lossy compression is acceptable.
PNGs can have transparency and lossless compression, so they're great for things like rasterized vector graphics, UI-element parts of a web page, etc.
Also, XYB is an option; I use the adaptive quantisation option from jpegli (with progressive level 2) to get smaller files. I never bothered with XYB, as it looked complicated.
> When I came up in the IE era
In the IE era I recall, the battle was between GIF and JPEG because IE supported alphatransparent PNGs very poorly :)
> if I recall correctly JPEG as a format can encode an image with a higher fidelity than PNG
The other way around: JPEGs are “lossy” – they throw away visual information to save file size. PNGs, on the other hand, are “lossless”, and decode back to exactly the same pixels that were fed into the encoder.
For archival purposes, where not losing detail matters more than image size, PNG is better (though TIFF is often used for that use case). For images with large blocks of solid colors and sharp edges (text, line drawings), PNG is arguably better (though JPEG can be acceptable if you're careful with quality settings). If you need alpha support, go for PNG, since JPEG doesn't support that.
For photograph-like images, where image size is important, JPEG is preferred over PNG.
24-bit color PNGs are lossless, to the extent that the input image is encodable in 24 bits of RGB (which is pretty much everything that's not HDR). There's no higher fidelity available for normal input images. If file size limits would force palettized PNGs, it's quite possible for a JPEG at the same file size to have higher fidelity (since it makes a different set of trade-offs, keeping color resolution but giving up spatial resolution instead); but this isn't really a common or particularly valid comparison in the PNG era, was more of an issue when comparing to GIFs.
tl;dr: Nope, PNG is perfect. JPEG can approach perfect, but never get there. Comparison is only interesting with external (e.g. file size) constraints.
JPEG should be used for everything else.
JPEG is lossy, in ways that were initially optimized for photographs. The details it loses are often not details photographs are good at providing in the first place. As the upside for losing some data, it gets to pick the data it gets to compress, and it chooses it in such a way as to minimize the size of the compressed data.
PNG is a lossless format. It's practically mandatory when you need 100% fidelity, as with icons or other graphics that are intended to have high contrast in small areas. It's able to optimize large areas of the same color very well, but suffers when colors change rapidly. It's especially unsuitable for photographs, as sensor noise (or film grain, if the source was film) create subtle color variations that are very difficult for it to encode efficiently.
You basically never have a situation where both are equally appropriate. They are for different things, and should be used as such.
PNG is a lossless format, so I don't think that's possible, unless there's some specific feature that is not available in PNG.
1. Lossy: JPEG fills this role;
2. Lossless: this was GIF but is now PNG; and
3. Animated: GIF.
So for a format to replace JPEG, it must bring something to the table for lossy compression. Now that JPEG is patent-free, any new format must be similarly unencumbered. And it's a real chicken-and-egg problem in getting support on platforms such that people will start using it.
I remember a similar thing happening with YouTube adding VP9 (IIRC) support as an alternative to H.264, which required an MPEG LA patent license. MPEG LA also tried to cloud VP9 by saying it infringed on their patents anyway. No idea if that's true or not, but nobody wants that uncertainty.
Anyway, without total support for VP9 (which Apple devices didn't have, for example) Youtube would need to double their storage space required for videos by having both codecs. That's really hard to justify.
Same goes for images. You then need to detect and use a supported image format... or just use JPEG.
JPEG-XL is the superior format. The only reason WebP exists is not because of natural selection, but because of nepotism (Google Chrome).
https://www.reddit.com/r/AV1/comments/ju18pz/generation_loss...
At some point you have to be pragmatic and meet users where they are, but that doesn't mean you have to like that Google threw their weight around in a way that only they really can.
“Technical merits” are rarely, for anything, the sole measurement of fitness for purposes.
Even for purely internal uses, internal social, cultural, and non-technical business constraints often have a real impact on what is the best choice, and when you get out into wider uses with external uses, the non-technical factors proliferate. That's just reality.
I understand the aesthetic preference to have decisions only require considering a narrow set of technical criteria which you think should be important, but you will make suboptimal decisions in the vast majority of real-world circumstances if you pretend that the actual decision before you conforms to that aesthetic ideal.
Doesn't a polyfill imply more Javascript running on the device?
In HTML, use `<picture>`: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
In CSS, use `image-set()`: https://developer.mozilla.org/en-US/docs/Web/CSS/image/image....
They can render JPEG-XL; everything else will render the fallback format like JPEG or WebP.
It's actually not more work. The user's browser automatically handles the content negotiation and only downloads the image format it understands:
<picture>
  <source srcset="photo.jxl" type="image/jxl">
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="Product photo" loading="lazy">
</picture>
macOS, iPadOS and iOS get the JPEG-XL image, devices that can handle WebP get that, and everything else gets JPEG. There are several image delivery services that will provide the best image format depending on the device that's connecting.
next person downloads the now JPEG'd WebP'd JPEG'd WebP'd image
fast forward a decade and the original quality versions of the images no longer exist, just these degraded copies
Most decent image editing software (Photoshop, Pixelmator, etc) will let you choose what you want.
https://www.adobe.com/creativecloud/file-types/image/raster/...
But if you're not a professional it would be easy to mix up the two and slowly end up with VHS level degradation.
I'd love to use JPEG-XL, but I'm guessing the only way to do that is also bringing along a WASM decoder everywhere I want to use it.
standards require some politicking and money I suppose
"Build it and they will come" doesn't work for products, and it doesn't work for standards either.
If you get code merged into something like Chrome, and it's big and goes unused for a few years, at some point some security-minded person will come along and argue that your code is an unused attack surface and should be removed again.
Sure, and while this is true:
> If you get code merged into something like Chrome, and it's big and goes unused for a few years [it's likely to get removed.]
it's also true that Google could have pushed JPEG XL instead of pushing WebP, which would have massively increased the usage stats of the JPEG XL code and saved it from removal. But (for whatever reason) Google decided to set things up to push folks to use WebP at every turn, and here we are.
Given how much duct tape it took at times to get various browsers to behave, I would say JS is proof of the opposite. It succeeded in an environment where standards were a mere suggestion.
https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...
XHTML 2 is waiting on that one... Oh well...
Everyone else... is fine with JPEG. And occasionally shares HEIC pictures from their iPhones.
Only for a very brief period. (Beta 1)
While there is a clear winner for video - AV1 - there seems to be nothing in the way of "this is clearly the future, at least for the next few years" when it comes to encoding images.
JPEG is... old, and it shows. The filesizes are a bit bloated, which isn't really a huge problem with modern storage, but the quality isn't great.
JPEG-XL seemed like the next logical step until Google took their toys and killed it despite already having the support in Chrome, which pretty much makes it dead in the water (don't you just love monopolies making decisions for you?)
HEIC is good, as long as you pinky promise to never ever leave Apple's ecosystem, ie HEIC sucks.
AVIF seems computationally expensive and the support is pretty spotty - 8bit yuv420 might work, but 10b or yuv444 often doesn't. Windows 10 also chokes pretty hard on it.
Alternatives like WebP might be good for browsers but are nigh-unworkable on desktops, support is very spotty.
PNG is cheap and support is ubiquitous but filesizes become sky-high very quick.
So what's left? I have a whole bunch of .HEIC photos and I'd really like if Windows Explorer didn't freeze for literal minutes when I open a folder with them. Is jpeg still the only good option? Or is encoding everything in jpeg-xl or avif + praying things get better in the future a reasonable bet?
> Key features of the JPEG XL codec are:
> lossless JPEG transcoding,
> Moreover, JPEG XL includes several features that help transition from the legacy JPEG coding format. Existing JPEG files can be losslessly transcoded to JPEG XL files, significantly reducing their size (Fig. 1). These can be reconstructed to the exact same JPEG file, ensuring backward compatibility with legacy applications. Both transcoding and reconstruction are computationally efficient. Migrating to JPEG XL reduces storage costs because servers can store a single JPEG XL file to serve both JPEG and JPEG XL clients. This provides a smooth transition path from legacy JPEG platforms to the modern JPEG XL.
https://ds.jpeg.org/whitepapers/jpeg-xl-whitepaper.pdf
If you need more proof, you could transcode a JPEG to JPEG XL and convert it back to JPEG. The resulting image would be BINARY IDENTICAL to the original image.
However, perhaps you are talking about an image in JPEG XL, using features only in JPEG XL (24-bit, HDR, etc...) that obviously couldn't be converted in a lossless way to a JPEG.
So he was not wrong about this. You have perfect JPEG -> JPEG XL conversion, but not the other way around.
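This is easy to check with the reference libjxl tools (a sketch, assuming cjxl/djxl are installed; lossless transcoding is the default for JPEG input):

    # cjxl stores JPEG reconstruction data by default for .jpg input
    cjxl original.jpg photo.jxl
    # djxl uses that data to emit the original JPEG bitstream
    djxl photo.jxl roundtrip.jpg
    cmp original.jpg roundtrip.jpg && echo bit-identical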
Trying around with some jpg and jxl files I cannot convert jxl losslessly to jpg files even if they are only 8bit. The jxl files transcoded from jpg files show "JPEG bitstream reconstruction data available" with jxlinfo, so I think some extra metadata is stored when going from jpg to jxl to make the lossless transcoding possible. I can imagine not supporting the reverse (which is pretty useless anyway) allowed for more optimizations.
JXL to JPG is lossless as in a bit-for-bit identical file can be generated
only if you got the JXL from JPG.
A lot of those features (non-8×8 DCTs, Gaborish and EPF filters, XYB) are enabled by default when you compress a non-JPEG image to a lossy JXL. At the moment, you really do need to compress to JPEG first and then transcode that to JXL if you want the JXL→JPEG direction to be lossless.
Well, as JPEGs? Why not? Quality is just fine if you don't touch the quality slider in Photoshop or other software.
For "more" there's still lossless camera RAW formats and large image formats like PSD and whatnot.
JPEG is just fine.
What I mean is that JPEG's squarish artifacts look OK, while AV1's angular artifacts look distorted.
Why would I ever care about Chrome? I can't use adblockers on Chrome, which makes the internet even less usable than it currently is. I only start up chrome to bypass cross-origin restrictions when I need to monkey-patch javascript to download things websites try to keep me from downloading (or, like when I need to scrape from a Google website... javascript scraper bots seem to evade their bot detection perfectly, just finished downloading a few hundred gigabytes of magazines off of Google Books).
Seriously, fuck Chrome. We're less than 2 years away from things being as bad as they were in the IE6 years.
I have software that won't work quite right in Safari or Firefox through a VPN every single day. Maybe it's the VPN and maybe it's the browser but it doesn't matter. We're at IE levels it's just ever so slightly more subtle this time. I'm still using alternatives but it's a battle.
Some of the warez sites I download torrents from have captchas and other javascripticles that only work on Chrome, but I've yet to see it with mainstream sites.
Fight the good fight.
Why does this myth persist?
uBlock Origin Lite works perfectly fine on Chrome, with the new Manifest v3. Blocks basically all the ads uBlock Origin did previously, including YouTube. But it uses less resources so pages load even faster.
There's an argument that adblocking could theoretically become less effective in the future but we haven't seen any evidence of that yet.
So you can very much use adblockers on Chrome.
webRequest is slower because it has to evaluate JavaScript for each request (as well as the overhead of interprocess communication), instead of the blocking being done by compiled C++ code in the same process like declarativeNetRequest does.
uBO also has a bunch of extra features like zapping that the creator explicitly chose not to include in uBO Lite, in the interests of making the Lite version as fast and resource-light as possible. For zapping, there are other extensions you can install instead if you need that.
They're two different products with two different philosophies based on two different underlying architectures. The older architecture has now gone away in Chrome, but the new one supports uBlock Origin Lite great.
https://github.com/uBlockOrigin/uBOL-home/wiki/Frequently-as...
replace= can't modify the response body (full support is only possible with Firefox MV2). Part of it is being merged for now.
It is unfortunate this narrative hasn't caught on. Actual quality over VMAF and PSNR. And we haven't had further quality improvement since x265.
I do get frustrated every time the topic of codecs comes up on HN. But then the other day I came to realise I've spent ~20 years on Doom9 and Hydrogenaudio, so I guess I've accumulated more knowledge than most.
https://cloudinary.com/blog/what_to_focus_on_in_image_compre...
There are whole discussions about how modern codecs, especially AV1, simply don't care about psychovisual (PSY) image quality. Hence most torrents are still using x265, because AV1 simply doesn't match the quality offered by x265 and other encoders. Nor does the AOM camp care about it, since their primary usage is YouTube.
>in the same way that MP3 at 320 kbps is competitive with AAC at 320 kbps.
It is not, and never will be. MP3 has an inherent disadvantage: it needs a substantially higher bitrate for quite a lot of samples, even at 320 kbps. We went through this war for 10 years at Hydrogenaudio, with data to back it up; I don't know why the topic has popped up again in the past 2-3 years.
MP3 is not better than AAC-LC in any shape or form, even at 25% higher bitrate. Just use AAC-LC, or specifically Apple's QuickTime AAC-LC encoder.
In early AV1 encoders, psychovisual tuning was minimal and so AV1 encodes often looked soft or "plastic-y". Today's AV1 encoders are really good at this when told to prioritize psy quality (SVT-AV1 with `--tune 3`, libaom with `--tune=psy`). I'd guess that there's still lots of headroom for improvements to AV1 encoding.
> And hence how most torrents are still using x265 because…
Today most torrents still use H.264, I assume because of its ubiquitous support and modest decode requirements. Over time, I'd expect H.265 (and then AV1) to become the dominant compressed format for video sharing. It seems like that community is pretty slow to adopt advancements — most lossy-compressed music <finger quotes>sharing</finger quotes> is still MP3, even though AAC is a far better (as you note!) and ubiquitous choice.
My point about MP3 vs. AAC was simply: As you reduce the amount of compression, the perceived quality advantages of better compressed media formats is reduced. My personal music library is AAC (not MP3), encoded from CD rips using afconvert.
That's not what I'm seeing for anything recent. x265 seems to be the dominant codec now. There's still a lot of support for h.264, but it's fading.
It still (maddeningly!) defaults to PSNR, but you can change that. There are some sources where I find it now can significantly improve over H.265 at higher data rates, and, while my testing was limited, I couldn't find any sources where H.265 clearly won based on my mark-1 eyeball. This is in contrast to when I tried multiple AV1 encoders 2-ish years ago and they, at best, matched H.265 at higher bitrates.
Photography nerds will religiously store raw images that they then never touch. They're like compliance records.
No photog nerd wants EVEN MORE POSTPROCESSING.
JPEG is the ur-example of lossy compression. JPEG Lossless can't have any connection with that.
JPEG, or fancier jpeg: https://developer.android.com/media/platform/hdr-image-forma...
Not really true in my experience, I have no problems using it in Windows 11, Linux, or with my non-Apple non-Google cloud photos app.
The iPhone using it in an incredibly widespread way has made it a defacto standard.
If you're having trouble in Windows, I wonder if you're running Windows 11 or 10? Because 11 seems a lot better at supporting "modern things" considering that Microsoft has been neglecting Windows 10 for 3 years and is deprecating it this year.
Say what? A random scan across the internet will reveal more videos in MP4 and H.264 format than av1. Perhaps streaming services have switched, but that is not what regular consumers usually use to make and store movies.
AV1 was created and is backed by many companies via a non-profit industry consortium, solves real problems, and its momentum continues to grow. https://bitmovin.com/blog/av1-playback-support/
[edit: clarify that it's decoding only]
I don't like AVIF, at least not for photos I want to share. I think AVIF is great for "a huge splash image for a web page that nobody is going to look at closely" but if you want something that looks like a pro photo I don't think it's better than WebP. People point out this example as "AVIF is great"
https://jakearchibald.com/2020/avif-has-landed/demos/compare...
but I think it badly mangles the reflection on the left wing of the car and... it's those reflections that make sports cars look sexy. (I'll grant that the 'acceptable' JPEG has obvious artifacts whereas the 'acceptable' AVIF replaced a sexy reflection with a plausible but slightly dull replacement)
The last major browser to add support was Safari 16 and that was released on September 12, 2022. I see pretty much no one on browsers older than Safari 16.4 in metrics on websites I run.
With one format you get decent filesize, transparency, and animation which makes things much simpler than doing things like conditionally producing gifs vs jpegs.
I can't think of any on my Fedora desktop for instance.
Another one I recently interacted with are video backgrounds for zoom. Those apparently can only be jpeg, not even png
See various charts. For example, this table:
https://res.cloudinary.com/cloudinary-marketing/images/f_aut...
For decoding (Mpx/s):
- WebP: ~70
- JPEG: 100 to 300
- JPEG XL: 115 to 163
- AVIF (single thread): 32 to 37
- AVIF (multithreaded): 90 to 110
Cmd+shift+4 is now the only way to grab an image out of a browser. Which is annoying.
It has made my life needlessly more complicated. I wish it would go away.
Maybe if browsers auto-converted when you dragged an image out of the browser window I wouldn't care, but when I see webp… I hate.
Just drop the offending image onto the icon in the dock.
Also, I just checked and Powerpoint has no problem dropping in a webp image. Gimp opens it just fine. You are right that web forums are often well behind the times on this.
If he's using ⌘⇧4 to take a screenshot, he probably isn't going to open it in Microsoft Paint.
I once edited Firefox config to make it pretend to not support WebP, and the only site that broke was YouTube.
bin/webp2png:

#!/bin/bash
dwebp "$1" -o "${1%%.webp}.png"

Windows didn't use to show .jpgs in Windows Explorer. I know because I wrote a tool to generate thumbnail HTML pages to include on archive CDs of photos.
To solve this problem, some format has to "win" and get adopted everywhere. That format could be WebP, but it will take 3-10 years before everything supports it. It's not just the OS showing it in its file viewer. It's its preview app supporting it. It's every web site that lets you upload an image (gmail/gmaps/gchat/facebook/discord/messenger/slack/your bank/apartment-rental-companies, etc..etc..etc..) It just takes forever to get everyone to upgrade.
WebP gets pushed into your series of tubes without your consent, and the browser that you're most likely to use to view them just happens to be made by the same company that invented the codec. It's DivX and Real Media all over again.
That's such a weak argument. If I was an indie game developer, I would use whatever obscure format would offer me the most benefit, since I control the pipeline from the beginning (raw TIFF/TGA/PNG/... files) to the end (the game that needs to have a decoder and will uncompress it into GPU memory). 20 minutes extra build-time on the dev machine is irrelevant when I can save hundreds of MBs.
However, that is not the benchmark for a format widely used on the internet. Encoding times multiply, as does the need to search for specialized software, and literally everyone else needs to support the format to be able to view those files.
You can take any random device and it will be able to decode h264 at 4k. h265 not so much.
As for AV1 - my Ryzen 5500GT released in 2024 does not support it.
Addendum: AMD since RDNA2 (2020-2021-ish) [1], NVIDIA since 30 series (2020) [2], Apple since M3? (2023).
Note: GP's processor released in 2024 but is based on an architecture from 2020.
[0] https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video#Hardwar...
[1] https://en.wikipedia.org/wiki/Video_Core_Next#Feature_set
[2] https://developer.nvidia.com/video-encode-and-decode-gpu-sup...
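On Linux, one quick way to see what the GPU's fixed-function block actually decodes is vainfo from libva-utils (a sketch; assumes a working VA-API driver):

    # look for VAProfileAV1*, VAProfileHEVC*, VAProfileH264* entries
    vainfo | grep -Ei 'av1|hevc|h264'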
What?? Maybe I'm too much in aarrrgh circles but it's all H.264 / 265...
> Alternatives like WebP might be good for browsers but are nigh-unworkable on desktops, support is very spotty.
Right again, and WebP is the enhancement that goes with the backend when dealing with the web. I wouldn't knock it for not being compatible locally; it was designed for the web first and foremost. I think it's in the name.
I had some high resolution graphic works in TIFF (BMP + LZW). To save space, I archived them using JPEG-2000 (lossless mode), using the J2k Photoshop plug-in ( https://www.fnord.com/ ). Saved tons of GBs. It has wide multi-platform support and is a recognized archival format, so its longevity is guaranteed for some time on our digital platforms. Recently explored using HEIF or even JPEG-XL for these but these formats still don't handle CMYK colour modes well.
It's worth noting that Firefox is willing to adopt JPEG-XL[1] as soon as the Rust implementation[2] is mature. And that Rust impl is a direct port of the reference C++ implementation[3]. macOS and Safari already support JPEG-XL[4]. And recently Windows picked up JPEG-XL support[5]. The only blockers at this point are Firefox, Chromium, and Android. If/when Firefox adopts JPEG-XL, we'll probably see Google follow suit, if only out of pressure from downstream Chromium platforms wanting to adopt it to maintain parity.
So really if you want to see JPEG-XL get adopted, go throw some engineering hours at the rust implementation [2] to help get it up to feature parity with the reference impl.
-----
1. https://github.com/mozilla/standards-positions/pull/1064
2. https://github.com/libjxl/jxl-rs
3. https://github.com/libjxl/libjxl
4. https://www.theregister.com/2023/06/07/apple_safari_jpeg_xl/
5. https://www.windowslatest.com/2025/03/05/turn-on-jpeg-xl-jxl...
What this [removal of support for JPEG-XL in Chromium] really translates to is, “We’ve created WebP, a competing standard, and want to kill anything that might genuinely compete with it”. This would also partly explain why they adopted AVIF but not JPEG XL. AVIF wasn’t superior in every way and, as such, didn’t threaten to dethrone WebP.
[0] https://vale.rocks/posts/jpeg-xl-and-googles-war-against-it
This is less just blind pressure but rather the risk that google becomes seen as an untrustworthy custodian of chromium and that downstreams start supporting an alternate upstream outside of google's control.
Jxl is certainly a hill that google seems intent to stand on but I doubt it's one they'd choose to die on. Doubly so given the ammo it'd give in the ongoing chrome anti-trust lawsuits.
Think of the $20B a year that Google pays to Apple, but as an even bigger traffic stream. That is Chrome, so Chrome is worth MORE than $20B a year to Google.
https://www.theverge.com/2024/5/2/24147007/google-paid-apple...
With codecs built for that purpose, I hope. These misbegotten intra-frame "formats" should stay what they are: a curiosity.
I use .webp often and I don't understand this. At least on Windows 10 I can go to a .webp and see a preview and double-click and it opens in my image editor. Is it not like this elsewhere?
When looking for a format to display HQ photos on my website I settled on a combination of AVIF + JPG. Most photos are AVIF, but if AVIF is too magical compared to JPG (like 3x-10x smaller) I use a larger JPG instead. "Magic" means that fine details are discarded.
WebP discards gradients (like sunset, night sky or water) even at the highest quality, so I consider it useless for photography.
I've done several tests where I lowered the quality settings (and thus, the resulting file size) of JPEG-XL and AVIF encoders over a variety of images. In almost every image, JPEG-XL subjective quality fell faster than AVIF, which seemed mostly OK for web use at similar file sizes. Due to that last fact, I concede that Chrome's choice to drop JPEG-XL support is correct. If things change (JPEG-XL becomes more efficient at low file sizes, gains Chrome support), I have lossless PNG originals to re-encode from.
One of JPEG XL's best ideas was incorporating Brunsli, lossless recompression for existing JPEGs (like Dropbox's Lepton which I think might've been talked about earlier). It's not as much of a space win as a whole new format, but it's computationally cheap and much easier to just roll out today. There was even an idea of supporting it as a Content-Encoding, so a right-click and save would get you an OG .jpg avoiding the whole "what the heck is a WebP?" problem. (You might still be able to do something like this in a ServiceWorker, but capped at wasm speeds of course.) Combine it with improved JPEG encoders like mozjpeg and you're not in a terrible place. There's also work that could potentially be done with deblocking/debanding/deringing in decoders to stretch the old approach even further.
And JXL's other modes also had their advantages. VarDCT was still faster than libaom AVIF, and was reasonable in its own way (AVIFs look smoother, JXL tended more to preserve traces of low-contrast detail). There was a progressive mode, which made less sense in AVIF because it was a format for video keyframes first. The lossless mode was the evolution of FUIF and put up good numbers.
At this point I have no particular predictions. JPEG never stopped being usable despite a series of more technically sophisticated successors. (MP3 too, though its successors seemed to get better adoption.) Perhaps it means things continue not to change for a while, or at least that I needn't rush to move to $other_format or get left behind. Doesn't mean I don't complain about the situation in comments on the Internet, though.
HEIC was developed by the MPEG folks and is an ISO standard, ISO/IEC 23008-12:2022:
* https://www.iso.org/standard/83650.html
* https://en.wikipedia.org/wiki/High_Efficiency_Image_File_For...
An HEIC image is generally a still frame from ITU-T H.265† (HEVC):
* https://www.geeky-gadgets.com/heif-avc-h-264-h-265-heic-and-...
OS support includes Windows 10 v1803, Android 10+, Ubuntu 20.04, Debian 10, Fedora 36. Lots of cameras and smartphones support it as well.
There's nothing Apple-specific about it. Apple went through the process of licensing H.265, so they got HEIC 'for free' and use it as the default image format because over JPEG it supports: HDR, >8-bit colour, etc.
†Like WebP was similar to an image/frame from a VP8 video.
And the MPEG folks were so cool with video, all that licensing BS. Sounds great. No thanks!
MPEG the standards group is organized by ISO and IEC, along with JPEG.
The one you’re thinking of - MPEG LA, the licensing company - is a patent pool (which has since been subsumed by a different one[1]) that’s unaffiliated with MPEG the standards group.
Not wrong, but this is a different topic/objection than the GP's 'being locked into Apple's ecosystem'.
And as the Wikipedia article for HEIC shows, there's plenty of support for the format, even in open source OSes.
* https://en.wikipedia.org/wiki/High_Efficiency_Image_File_For...
Debian doesn't seem to have a problem with it:
Ba-ha-ha... ha-ha... no.
Support is virtually non-existent. Every year or so, I try to use my Windows PC to convert a RAW photo taken with a high-end Nikon mirrorless camera to a proper HDR photo (in any format) and send it to my friends and family that use iDevices.
This has been literally impossible for the last decade, and will remain impossible until the heat death of the universe.
Read-only support is totally broken in a wide range of apps, including Microsoft's own. There are many Windows imaging APIs, and I would be very surprised if more than one gained HEIC support. Which is probably broken.
Microsoft will never support an Apple format, and vice versa.
Every single new photo or video format in the last 25 years has been pushed by one megacorp, and adoption outside of their own ecosystem is close to zero.
JPEG-XL is the only non-megacorp format that is any good and got multi-vendor traction, which then turned into "sliding backwards on oiled ice". (Google removed support from Chromium, which is the end of that sad story.)
> Ba-ha-ha... ha-ha... no. […]
Feel free to hit "Edit" on the Wikipedia page and correct it then:
* https://en.wikipedia.org/wiki/High_Efficiency_Image_File_For...
> Microsoft will never support an Apple format, and vice versa.
Once again, it's not an Apple format: it was developed by MPEG and is published by ISO/IEC, just like H.264 and H.265.
Or do you think H.264 and H.265 are an "Apple format" as well?
Create a HDR HEIC file on anything other than an Apple Device.
Upload it to an Apple Device.
Now use it any way: Forward it, attach it to a message, etc...
This won't work.
It won't ever work because the "standard" is not what Apple implements. They implement a few very specific subsets that their specific apps produce, and nothing else.
Nobody else implements these specific Apple versions of HEIC. Nobody.
For example, Adobe Lightroom can only produce a HEIC file on an Apple device.
My Nikon camera can produce a HDR HEIC file in-body, but it is useless on an Apple device because it's too dark and if forwarded in an iMessage... too bright!
It's a shit-show, comparable to "IPv6 support" which isn't.
> This has been literally impossible for the last decade, and will remain impossible until the heat death of the universe.
It's possible right now with gainmap jpegs. Adobe can create them, Android captures in them now, and Apple devices can view them even. Or if they can't yet they can very soon, Apple announced support at the recent WWDC (Apple brands it "adaptive HDR")
There's something kinda hilariously ironic that out of all these new fancy image codecs be it HEIC, AVIF, or JPEG-XL, it's humble ol' JPEG that's the first to deliver not just portable HDR, but the best quality HDR of any format of any kind
https://github.com/mozilla/standards-positions/pull/1064#iss...
One day, people will wonder why it took so long, and I'll smile =)
That's the decoder: https://github.com/libjxl/jxl-rs
(why they couldn't use jxl-oxide, I don't know)
Also good to know: the Firefox Nightly JPEG XL feature currently does NOT use the aforementioned decoder; it uses something else, hence it's worthless for testing at the moment...
And JPG for photos taken on a “real” camera (including scanned negatives). Sometimes RAW, but they’re pretty large so not often.
It may be obsolete, but it is ubiquitous. I care less about cutting edge tech than I do about the probability of being able to open it in 20+ years. Storage is cheap.
Presentation is a different matter and often should be a different format than whatever your store the original files as.
I took a TIFF and saved it as a high-quality JPG. Loaded both into Photoshop and "diffed" them (basically subtracted both layers). After some level adjustment you could see some difference, but it was quite small.
I've found that sometimes WebP with lossless compression (-lossless) results in smaller file sizes for graphics than JPEG-XL and sometimes it's the other way around.
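Easy to test per image with the stock tools (a sketch; cwebp from libwebp, cjxl from libjxl):

    # both invocations are lossless; just compare the output sizes
    cwebp -lossless art.png -o art.webp
    cjxl art.png art.jxl -d 0   # distance 0 = mathematically lossless
    ls -l art.webp art.jxl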
Images are sorted in folders, per year, with some group description based on why they were taken: vacations, events, whatever.
Enable indexing on the folders, and usually there are no freezes to complain about.
PNG where quality matters, JPG where size matters.
It's not. Support is still surprisingly patchy, and it takes a second or so to decode and show the image even on a modern M* Mac. Compared to instant PNG.
> I have a whole bunch of .HEIC photos and I'd really like if Windows Explorer didn't freeze for literal minutes when I open a folder with them.
Indeed.
Yea, how is this still the case in 2025?
My family has a mix of Apple and Samsung devices and they move/backup their pictures to their Windows machines whenever they run out of space but once they move them, they can't easily view or browse them.
I had to download a 3rd party app and teach them to view them from there.
The variable compression of JPEG was very important. In Photoshop you could just grab an image and choose the file size you needed and the JPEG quality would degrade to match your design constraints.
Perfect logic: let's not switch to webp because it's bad. Why is it bad? Not everyone has switched to it yet.
Maybe it's vaguely more flexible and compresses well. I don't care. If someone uses it, I despise them.
I hear this all the time but I have yet to encounter it. Is it just social media and lolcats? Maybe the software I’m using is too uncool to not support it.
- https://nvd.nist.gov/vuln/detail/CVE-2023-41064
- https://nvd.nist.gov/vuln/detail/CVE-2023-41061
- https://nvd.nist.gov/vuln/detail/CVE-2023-4863
- https://citizenlab.ca/2023/09/blastpass-nso-group-iphone-zer...
This submission was originally shown as [dead]. I have no idea why; I read some of the content and it seems decent enough, especially in the current state of things, with JPEG-XL blocked because of AOM / Google Chrome. I vouched for it and upvoted, and somehow it is now on the front page.
I wonder if dead means someone flagged it. If so, then why? If not, why was it dead?
Maybe there are formats that compress better or losslessly, but thanks to advancements in disk space and transfer rates (I know, not everywhere, but penetration and improvement will happen...) the disadvantages of JPEG can be handled and we can just enjoy a very simple file format.
In an era where enshittification lingers around every corner I'm just happy that I don't need to think about whether I have to convert _every digital picture I've ever taken_ into some next-gen format because some license runs out or whatnot. It just works. Let's enjoy that and hope it sticks around for 30 more years.
[1] https://opensource.googleblog.com/2024/04/introducing-jpegli...
Edit: yes, this has been done https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123...
Jinyoung Choi and Bohyung Han, Task-Aware Quantization Network for JPEG Image Compression. ECCV 2020
Even Github! Though the latter doesn't support IPv6 either.
I'm currently sticking to JPEG because, last time I tried, JPEG came out as the best format. Referencing my memory at https://chaos.social/@luc/113615076328300784
- JPEG has two advantages on slow connections: the dimensions are either stored up front so the layout doesn't jump, or maybe the renderer is better; and it loads a less-sharp version first and progressively gets sharper
- JPEG was way faster when compressing and decompressing
- on the particular photo I wanted to optimise in this instance, JPEG was also simply the best quality for a given filesize which really surprised me after 32 years of potential innovation
Regarding AVIF, my n=1 experience was that it "makes smooth gradients where jpeg degrades to blotchy pixels, but at decent quality levels, jpeg preserves the grain that makes the photo look real". Gradients instead of ugliness at really small sizes can be perfect for your use-case, but note that it's also ~80 times slower at compression (80s vs. <1s)
JPEG XL isn't widely supported in browsers yet so I couldn't use it
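For anyone reproducing the ~80x compression-time gap mentioned above: AVIF encoders expose a large speed/efficiency dial, so results vary wildly with settings. A sketch with libavif's avifenc (assuming it's installed):

    # -s/--speed ranges 0 (slowest, densest) to 10 (fastest)
    avifenc -s 8 photo.png photo.avif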
> These days, the [JPEG] format is similar to MP3
The difference with mp3 is that Opus is either a bit better or much better, but it's always noticeably better.
You can save ~half the storage space. For speech (audio books) I use 40kbps, and for music maybe 128kbps which is probably overkill. And I delete the originals without even checking anymore if it really sounds the same, I noticed that I simply can't tell the original apart in a blind test, no matter what expensive headset setup I try
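For reference, a sketch of those encodes with opus-tools (opusenc takes WAV or FLAC input; --bitrate is in kbit/s):

    # speech (audiobooks)
    opusenc --bitrate 40 audiobook.wav audiobook.opus
    # music; Opus is VBR by default around the target
    opusenc --bitrate 128 album.flac album.opus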
TFA attributes it to a simple "they were first" advantage, but I think this is why "Why JPEGs still rule the web": no file format is better than JPEG in the same way as Opus is better than MP3; in that you don't have to think about it anymore and it's always a win in either filesize or quality
That said, Opus is also annoyingly hard to get into people's minds, but I've done it and you also see major platforms from compress-once-serve-continuously video (e.g. Youtube) to VoIP (e.g. Whatsapp) switching over for all their audio applications
Lossy compressed images are usually not the most significant consumers of bandwidth and disk space. Videos are. That's why there have been a lot more focus on video formats than anything else, there is a lot to gain here, not so much with still images.
JPEG-XL is super-complicated because it supports plenty of things most people don't really need.
WebP is somewhat better supported because it is backed by Google; it is also essentially a single-frame video, so if you did the hard work on video (where it matters), you get images almost for free, and it saves Google a tiny bit of bandwidth, and "a tiny bit" is huge at Google scale.
We are seeing the same thing with audio. MP3 (1991) is still extremely popular, the rest is mostly M4A/AAC (2001). We pretty much have had the perfect audio format now, which is Opus (2012) and yet, we don't even use it that much, because the others are good enough for what we make of them.
JPEG2000: insanely complex and nonintuitive, especially the edge-cases and overly flexible encoding decisions
WebP: also complex, and effectively Google-proprietary
People want to be able to open the images anywhere (what about watching photos in a smartTV? or an old tablet? what about digital picture frames?). They want to edit the images in any program (what about this FOSS editor? what about the many people stuck in pre-subscription Photoshop versions?).
They also want to ensure far future access to their precious photos, when formats like JPEG2000 or even WebP might be long gone. I mean, webp was made by Google and we know how many of their heavily promoted creations are dead already...
I don't understand this argument. WebP is an algorithm, not a service. You cannot kill it once it's published.
What Google pushes is in their self interest and has nothing to do with the good of the unwashed masses.
WebP is to WebM what HEIC is to HEVC.
You can argue that using free codecs is a collateral benefit here, even though Google did it for selfish reasons. It is not detrimental to the public or the internet.
Wow! I have never written a compression codec implementation, but that's kind of staggering.
Although I need an engineering explanation as to why COBOL is still alive after all these years, because no tech can live forever.
It was popular in the '60s in fintech, so banks, ATMs, and related systems went digital using it.
Those systems are still running.
Latin is still going strong, as are water pipes (the oldest being several millennia old).
Hard to predict which innovations remain resilient. The longer they stick around the more ”Lindy-proof” they are.
There is also no guarantee that whatever new language you port the COBOL code to won't also be seen as obsolete in a few years. Software developers are a fickle bunch.
As long as these two major sources of pictures stay on JPEG, I will too, simply because everything beyond that comes down to subjective and completely debatable reasons.
Not everyone is you.
Support for webp is still so rough that I have to wonder what one's ecosystem must look like for it to be seamless. Maybe if you are a Googler and your phone/computer/browser run entirely Google software, and ditto for your friends, your friends' friends, and your spouse? Maybe?
I blame Google for pushing it, but I also blame every third-party product for not supporting it, when it is mostly free to do so (I'm sure all of them internally use libraries to decode images instead of rolling their own code).
I use WEBP extensively but WEBP has a major flaw: it can do both lossy and lossless.
That's the most fucktarded thing to ever do for an image compression format. I don't understand the level of confusion and cluelessness that had to happen for such a dumb choice to have been made.
I've got an entire process around determining and classifying WEBP depending on whether they're lossless or lossy. In the past we had JPG or PNG: life was good. Simple. Easy.
Then dumbfucks decided that it made sense to cram both lossy and lossless under the same umbrella.
> They also want to ensure far future access to their precious photos, when formats like JPEG2000 or even WebP might be long gone.
That however shall never be an issue. You can still open, today, old obscure formats from the DOS days. Even custom ones used by only a very select few programs back then.
It's not as if we didn't have emulators, converters, etc. and it's all open source.
Opening old WEBP files in the future shall never ever be a problem.
Determining if it's a lossy or lossless WEBP for non-technical users, however... ; )
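It's scriptable at least (a sketch; relies on webpinfo, which ships with libwebp, listing chunk types: lossy pixels live in VP8 chunks, lossless ones in VP8L):

    #!/bin/bash
    # classify-webp.sh: crude lossy/lossless classifier
    for f in "$@"; do
      if webpinfo "$f" | grep -q 'VP8L'; then
        echo "$f: lossless"
      else
        echo "$f: lossy"
      fi
    done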
Certainly not true.
One example: I have many thousands of photos from my Sony digital camera that cannot be opened by any current operating system without installing third-party software.
I'm lucky that the camera also output JPEG versions as it saved, so I'm able to view the JPEG thumbnails, then drag the Sony version into my photo editor of choice.
Except for if you wanted a compressed image with transparency for the web, in which case you had to sacrifice one of those two things or use a different format besides those two.
> Then dumbfucks decided that it made sense to cram both lossy and lossless under the same umbrella.
> I don't understand the level of confusion and cluelessness that had to happen for such a dumb choice to have been made.
Besides many end users not caring which one it is as long as they recognize the file type and can open it, I found a few interesting reasons for having both in the same format from a simple web search. One was the possibility of having a lossy background layer with a lossless foreground layer (particularly in an animation).
JPEG XL also decided to support either lossless or lossy compression, so it wasn't just WebP that decided it made sense.
and webp is not one of them.
* https://siipo.la/blog/is-webp-really-better-than-jpeg
edit: https://opensource.googleblog.com/2024/04/introducing-jpegli... is likely the real GOAT when it comes to modern jpeg encoders in that it effectively breaks the 8bit color space "ceiling" within the format!
Same with JPEG: most people just want to encode images; reducing file size by 10% is a negligible win for most of them.
I know this is IEEE but seriously? JPEG won because it was a formal standard?
Didn't JPEG win because it supported more than 256 colours and, with the lossy compression, greatly reduced file size and bandwidth needs for our cat photo and porn collections?
While proposed replacements ... solve what problems?
1) snipping/clipping/screenshot tool outputs webp
2) convert to webp on ctrl-c ctrl-v from browsers
3) whatsapp/messenger/discord support. People will say these work fine; in my experience it's a gamble, which it shouldn't be. It should be seamless, literally no edge cases
"The stronger the cosine transformation, the more compressed the final result" is simply wrong.
DCT (and the inverse DCT) are transforms between the "sample" and "frequency" domains; the transform is well defined and perfectly reversible without compression (IIRC the one in JPEG should be lossless given as many bits as the samples themselves).
The trick of DCT-based compression is that humans don't notice when information disappears from higher frequencies (also, in _natural images_ there is often little data in the high frequencies, often lots of zeros that can be immediately cut).
So harder compression means removing more high frequency data from storage without it being too noticeable when reconstructing samples from the frequency domain at decompression.
Conversely, however, if you have "sharp edges" in the sample data you need more high frequencies to reproduce them without "ringing" artefacts (this is why you will see noisy blocks around text in highly compressed JPEGs, since the encoder runs out of bandwidth to adjust).
The frequency-domain values, and how compression removes various frequencies (black and white in the filter images), can be illustrated with the Wikipedia filter comparison example image below. (Low frequencies are in the upper-left corner of the filter and spectrum images, while higher horizontal frequencies are to the right and higher vertical frequencies are towards the bottom.)
https://en.wikipedia.org/wiki/File:DCT_filter_comparison.png
https://en.wikipedia.org/wiki/Discrete_cosine_transform (Mainly "Example of IDCT" section towards the bottom but also the preceding ones).
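For concreteness, the 8x8 DCT-II that JPEG applies per block, and the quantization step where the loss actually happens (standard textbook form):

    F(u,v) = \frac{C(u)C(v)}{4} \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y)
             \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16},
    \quad C(0) = \tfrac{1}{\sqrt{2}},\ C(k) = 1 \text{ for } k > 0

    \hat{F}(u,v) = \operatorname{round}\!\left(\frac{F(u,v)}{Q(u,v)}\right)

All of the information loss is in that round(): the quantization table Q has larger entries at high (u,v), which is exactly the "removing high-frequency data" described above.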
Browser support for AVIF is nearly good enough that you might not need the fallback in reality. The only real problem I have encountered is that animated AVIFs are super stuttery in Safari for some reason.
Are MP3's not dying out gradually, like, say, WAVs have? I mean, do people/organizations actually encode anything to MP3 rather than AAC, these days?
Is missing WebP support a meme?
So, uh, don't get your hopes up.
https://graphicdesign.stackexchange.com/questions/115814/how...
Looks like a mixture of runtime and compiler flags are needed except for Safari.
This becomes an issue if you're creating content about trending topics, since lots of marketing sites love using webp for every image.