zlib is 30 years old, according to Wikipedia. And that's technically wrong since 'zlib' was factored out of gzip (nearly 33 years old) for use in libpng, which is also 30 years old.
| Compressed format | Compressed size (bytes) | Compress time | Decompress time |
|---|---:|---:|---:|
| WEBP (lossless m5) | 1,475,908,700 | 1,112 | 49 |
| WEBP (lossless m1) | 1,496,478,650 | 720 | 37 |
| ZPNG (-19) | 1,703,197,687 | 1,529 | 20 |
| ZPNG | 1,755,786,378 | 26 | 24 |
| PNG (optipng -o5) | 1,899,273,578 | 27,680 | 26 |
| PNG (optipng -o2) | 1,905,215,734 | 4,395 | 27 |
| PNG (optimize=True) | 1,935,713,540 | 1,120 | 29 |
| PNG (optimize=False) | 2,003,016,524 | 335 | 34 |
Doesn't really seem worth it? It doesn't compress better, and it's only slightly faster to decompress. But let's be real here: this is basically just a new image format, with more code to maintain, fresh new exciting zero-days, and all of that. You need a strong use case to justify that, and "already fast encode is now faster" is probably not it.
I know it needs to be battle tested as a single entity but it’s not the same as writing a new image format from scratch.
ZPNG -19 is nearly 2.5x faster to decompress than WebP m5; given that most data is decompressed many, many more times than it is compressed (often thousands or millions of times more, often by devices running on small batteries), that's an enormous win, not "only slightly faster".
The way in which it might not be worth it is the larger size, which is a real drawback.
More efficiency will inevitably just lead to increased CPU usage and, in turn, batteries draining faster.
Not related to images, but I remember compressing packages of executables and zstd was a clear winner over other compression standards.
Some compression algorithms can run in parallel, and on a system with lots of CPUs that can be a big factor.
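For instance, a rough sketch of multi-threaded zstd with the python-zstandard bindings (assuming the `zstandard` package is installed; the input path is just a placeholder for "a big package of executables"):

```python
# Sketch: parallel zstd compression with python-zstandard.
# threads=-1 asks for one compression worker per logical CPU.
import zstandard as zstd

def compress_parallel(data: bytes, level: int = 19) -> bytes:
    cctx = zstd.ZstdCompressor(level=level, threads=-1)
    return cctx.compress(data)

if __name__ == "__main__":
    data = open("package.tar", "rb").read()  # placeholder input
    print(f"{len(data)} -> {len(compress_parallel(data))} bytes")
```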
https://dennisforbes.ca/articles/jpegxl_just_won_the_image_w...
Nothing really supports it. Latest Safari at least has support for it not feature-flagged or anything, but it doesn't support JPEG XL animations.
To be fair, nothing supports a theoretical PNG with Zstandard compression either. While that would be an obstacle to using PNG with Zstandard for a while, I kinda suspect it wouldn't be that long of a wait because many things that support PNG today also support Zstandard anyways, so it's not a huge leap for them to add Zstandard support to their PNG codecs. Adding JPEG-XL support is a relatively bigger ticket that has struggled to cross the finish line.
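For a sense of what "PNG with Zstandard" would even mean in practice, here's a rough sketch of the core idea only (not a real chunk layout): keep PNG-style per-row filtering and feed the filtered bytes to zstd instead of zlib. It assumes Pillow, numpy, and python-zstandard are available, and "input.png" is a placeholder.

```python
# Apply a PNG-style "Sub" row filter, then compare zlib vs zstd on the result.
import zlib
import numpy as np
import zstandard as zstd
from PIL import Image

def sub_filter(img: np.ndarray) -> bytes:
    # PNG 'Sub' filter: each sample minus the sample of the pixel to its left, mod 256.
    left = np.zeros_like(img)
    left[:, 1:, :] = img[:, :-1, :]
    return ((img.astype(np.int16) - left) % 256).astype(np.uint8).tobytes()

pixels = np.asarray(Image.open("input.png").convert("RGB"))
filtered = sub_filter(pixels)

print("zlib -9 :", len(zlib.compress(filtered, 9)))
print("zstd -19:", len(zstd.ZstdCompressor(level=19).compress(filtered)))
```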
The thing I'm really surprised about is that you still can't use arithmetic coding with JPEG. I think the original reason is due to patents, but I don't think there have been active patents around that in years now.
I was under the impression libjpeg added support in 2009 (in v7). I'd assume most things support it by now.
Everything supports it, except web browsers.
If Firefox is anything to go off of, the most rational explanation here seems to just be that adding a >100,000 line multi-threaded C++ codebase as a dependency for something that parses untrusted user inputs in a critical context like a web browser is undesirable at this point in the game (other codecs remain a liability but at least have seen extensive battle-testing and fuzzing over the years.) I reckon this is probably the main reason why there has been limited adoption so far. Apple seems not to mind too much, but I am guessing they've just put so much into sandboxing Webkit and image codecs already that they are relatively less concerned with whether or not there are memory safety issues in the codec... but that's just a guess.
W. T. F. Yeah, if this is the state of the reference implementation, then I'm against JPEG-XL just on moral grounds.
They aren't going to give you two problems to solve/consider: clever code and novel design.
The justification for WebP in Chrome over JPEG-XL was pure hand waving nonsense not technical merit. The reality is they would not dare cede any control or influence to the JPEG-XL working group.
Hell the EU is CONSIDERING mandatory attestation driven by whitelisted signed phone firmwares for certain essential activities. Freedom of choice is an illusion.
Yeah... guess again. It took Chrome 13 years to support animated PNG - the last major change to PNG.
It can be surmounted with WebAssembly: https://github.com/niutech/jxl.js/
Single thread demo: https://niutech.github.io/jxl.js/
Multithread demo: https://niutech.github.io/jxl.js/multithread/
The recently released PNG 3 also supports HDR and animations: https://www.w3.org/TR/png-3/
APNG isn't recent so much as the specs were merged together. APNG will be 21 years old in a few weeks.
The biggest benefit is that it's actually designed as an image format. All the video offshoots have massive compromises made so they can be decoded in 15 milliseconds in hardware.
The ability to shrink old jpegs with zero generation loss is pretty good too.
Better to make the back compat breaks be entirely new formats.
In my opinion PNG doesn't need fixing. Being ancient is a feature. Everything supports it. As much as I appreciate the nerdy exercise, PNG is fine as it is. My only gripe is that some software writes needlessly bloated files (like adding a useless alpha channel, when it's not needed). I wish we didn't need tools like OptiPNG etc.
I don't think I have ever noticed the decode time of a png.
QOI is often equivalent or better compression than PNG, _before_ you even compress it with something like LZ4 etc.
Compressing QOI with something like LZ4 would generally outperform PNG.
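A quick back-of-the-envelope way to check that claim, assuming the `qoi` and `lz4` PyPI packages plus Pillow are installed ("input.png" is a placeholder):

```python
# Compare PNG vs plain QOI vs QOI with LZ4 layered on top.
import io
import lz4.frame
import numpy as np
import qoi
from PIL import Image

img = Image.open("input.png").convert("RGBA")
pixels = np.asarray(img)

qoi_bytes = qoi.encode(pixels)           # plain QOI
qoi_lz4 = lz4.frame.compress(qoi_bytes)  # QOI then LZ4 on top

png_buf = io.BytesIO()
img.save(png_buf, format="PNG", optimize=True)

print("PNG     :", png_buf.getbuffer().nbytes)
print("QOI     :", len(qoi_bytes))
print("QOI+LZ4 :", len(qoi_lz4))
```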
QOI is really cool, but I think the author cut the final version of the spec too early, and intentionally closed it off to a future version with more improvements. With another year or two of development, I think it probably would have become ~10% more efficient and suitable for more use cases.
https://github.com/nigeltao/qoir has some numbers comparing QOIR (which is QOI-inspired-with-LZ4) vs PNG.
QOIR has better decode speed and comparable compression ratio (depending on which PNG encoder you use).
QOIR's numbers are also roughly similar to ZPNG.
I doubt it would apply to PNG, because the length and content of PNG data don't seem dictionary-friendly, but it would be interesting to try on some giant collection of scraped PNGs. This approach was important enough for Brotli to include a "built-in" dictionary covering HTML.
https://github.com/UltraVanilla/paper-zstd/blob/main/patches...
From the author of this patch on Discord: level 9 compression isn't practical and is too slow for a real production server, but it does show the effectiveness of zstd with a shared dictionary.
So you start off with a 755.2 MiB world (in this test, a section of an existing DEFLATE-compressed world that has been lived in for a while). If you recreate its regions, it compacts down to 695.1 MiB.
You set region-file-compression=lz4 and run --recreateRegionFiles and it turns into a 998.9 MiB world. Makes sense: worse compression ratio but less CPU, which is what Mojang documented in the changelog. Neat, but I'm confused about what the benefit is as I/O increasingly becomes the more constrained resource nowadays. This is just a brief detour from what I'm really trying to test.
You set region-file-compression=none and it turns into a 3583.0 MiB world. The largest region file in this sample was 57 MiB.
Now, you take this world and compress each of the region files individually using zstd -9, so that the region files are now .mca.zst files. And you get a world that is 390.2 MiB.
I don't remember the exact compression ratios for the dictionary solution in that repo, but it wasn't quite as impressive (IIRC around a 5% reduction compared to non-dictionary zstd at the same level). And the padding inherent to the region format takes away a lot of the ratio benefit right off the bat, though it may have worked better in conjunction with the PaperMC SectorFile proposal, which has less padding, or by rewriting the storage using some sort of LSM tree library that knows how to compactly store blobs of varying size. I've dropped the dictionary idea for now, but it definitely could be useful. More research is needed.
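For reference, the shared-dictionary approach itself is only a few lines with python-zstandard. This is just a sketch; the paths, sample count, and dictionary size are placeholders, not the values from the linked patch.

```python
# Train a zstd dictionary on sample region files, then compress with and without it.
import glob
import zstandard as zstd

paths = sorted(glob.glob("world/region/*.mca"))[:200]
samples = [open(p, "rb").read() for p in paths]

# Train a ~110 KiB dictionary on the sampled region files.
dict_data = zstd.train_dictionary(110 * 1024, samples)

with_dict = zstd.ZstdCompressor(level=9, dict_data=dict_data)
without = zstd.ZstdCompressor(level=9)

for path, raw in zip(paths[:5], samples[:5]):
    print(path, len(raw), "->",
          len(without.compress(raw)), "plain vs",
          len(with_dict.compress(raw)), "with dict")
```

The catch is that decompression needs the exact same dictionary (passed to ZstdDecompressor), so the dictionary effectively becomes part of the world's on-disk format.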
Correct - I wouldn't expect this to be useful for PNG. Compression dictionaries are applicable in situations where a group of documents contain shared patterns of literal content, like snippets of HTML. This is very uncommon in PNG image data, especially since any difference in compression settings, like the use of a different color palette, or different row filtering algorithms, will make the pattern unrecognizable.
I'm not even sure there are good pure Java (no JNI) or Go (no cgo) implementations of zstd. And it would definitely require more powerful hardware: some microcontrollers that can handle PNG are too small for zstd.
https://github.com/richgel999/fpng
It turns out that DEFLATE can be much faster when implemented specifically for PNG data instead of as general-purpose compression (while still remaining 100% standard-compatible).
I've recently experimented with methods of serving bitmaps out of the database in my project [1]. One option was to generate PNG on the fly, but simply outputting an array of pixel color values over HTTP with Content-Encoding: zstd won out over PNG.
Combined with the 2D-delta-encoding as in PNG, it will be even better.
[1] https://adsb.exposed/
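A minimal sketch of that approach: raw pixels, an optional PNG-style "Up" row filter, and Content-Encoding: zstd. It uses only the standard library plus numpy and python-zstandard; render_tile() is a stand-in for the real database query, and a production server should of course check Accept-Encoding before responding this way.

```python
# Serve a zstd-compressed raw pixel buffer over HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer
import numpy as np
import zstandard as zstd

def render_tile() -> np.ndarray:
    # Placeholder 256x256 RGBA tile; in practice this comes from the database.
    return np.zeros((256, 256, 4), dtype=np.uint8)

class TileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        pixels = render_tile()
        # PNG-style "Up" filter: delta each row against the row above (mod 256),
        # which tends to shrink the zstd stream a lot on smooth images.
        delta = (np.diff(pixels.astype(np.int16), axis=0, prepend=0) % 256).astype(np.uint8)
        body = zstd.ZstdCompressor(level=3).compress(delta.tobytes())
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Encoding", "zstd")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TileHandler).serve_forever()
```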