Vote for this feature to be natively supported in browsers.
JPEG XL was written in C++ in a completely different part of Google, without any of the memory-safe Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (WebP).
My $0.02, since the gap here on perception of the situation fascinates me:
JPEG XL as a technical project was a real nightmare; I am not at all surprised to find Mozilla waiting for a real decoder.
If you get _any_ FAANG engineer involved in this mess a beer || truth serum, they'll have 0 idea why this has so much mindshare, modulo that it sounds like something familiar (JPEG) and that people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers working to get it into shipping shape.
(surprisingly, Firefox doesn't get blamed for this - they also do not support it yet, and they are doing nothing _other_ than awaiting Chrome's work on it!)
Considering the amount of storage all of these companies are likely allocating to storing JPEGs, plus the bandwidth of serving it all - maybe the instant file-size wins would pay off?
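A rough back-of-envelope, since the scale is the whole argument here (every number below is a made-up assumption, purely to illustrate):

    # Back-of-envelope: what a ~20% file-size win could mean at fleet scale.
    # Every figure below is an illustrative assumption, not real data.
    stored_jpeg_pb = 100                # assumed corpus of stored JPEGs, in PB
    monthly_egress_pb = 50              # assumed image bandwidth per month, in PB
    size_reduction = 0.20               # assumed JXL-over-JPEG size win
    storage_cost_per_pb_month = 20_000  # assumed $/PB-month
    egress_cost_per_pb = 80_000         # assumed $/PB transferred

    storage_savings = stored_jpeg_pb * size_reduction * storage_cost_per_pb_month
    egress_savings = monthly_egress_pb * size_reduction * egress_cost_per_pb
    print(f"~${storage_savings:,.0f}/month on storage")
    print(f"~${egress_savings:,.0f}/month on egress")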
We barely even have movement to WebP & AVIF; if this were a critical issue, I would expect a lot more movement on that front, since those formats already exist. From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
If you look at CDNs, WebP and AVIF are very popular.
> From what I understand, AVIF gives better compression (except for lossless) and has better decoding speed than JXL anyway.
AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.
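If anyone wants to sanity-check the decode-speed claim, here's a minimal timing sketch in Python; it assumes the third-party pillow-avif-plugin and pillow-jxl-plugin packages are installed and that you've already encoded the same photo at a few quality levels (the filenames are hypothetical):

    # Minimal decode-timing harness. Assumes:
    #   pip install pillow pillow-avif-plugin pillow-jxl-plugin
    # and pre-encoded test files at different quality levels.
    import io
    import time
    from PIL import Image
    import pillow_avif  # registers AVIF decoding with Pillow (third-party)
    import pillow_jxl   # registers JXL decoding with Pillow (third-party)

    def decode_ms(path: str, repeats: int = 10) -> float:
        data = open(path, "rb").read()  # read once, so we time decode, not I/O
        start = time.perf_counter()
        for _ in range(repeats):
            Image.open(io.BytesIO(data)).load()  # .load() forces a full decode
        return (time.perf_counter() - start) / repeats * 1000

    for path in ["photo_q30.avif", "photo_q90.avif",
                 "photo_q30.jxl", "photo_q90.jxl"]:
        print(f"{path}: {decode_ms(path):.1f} ms")

If the claim holds, the AVIF timings should spread apart between q30 and q90 while the JXL timings stay roughly flat.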
The Chrome team really dislikes the concept of high quality images on the web for some reason, though; that's why they only push formats that are optimized for low quality. WebP beats JPEG at low quality, but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution in combination with high quality, AVIF even manages to be worse than JPEG.
[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.
> The Chrome team really dislikes the concept of high quality images on the web for some reason, though; that's why they only push formats that are optimized for low quality.
It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium-BPP range or above, where JPEG XL excels.
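(BPP is just compressed size in bits over pixel count; a quick sketch, with a hypothetical file:)

    # Bits per pixel: compressed size in bits divided by pixel count.
    import os

    def bpp(path: str, width: int, height: int) -> float:
        return os.path.getsize(path) * 8 / (width * height)

    # e.g. a 350 KB 1920x1080 JPEG comes out around 1.35 BPP
    print(f"{bpp('photo.jpg', 1920, 1080):.2f} bpp")  # hypothetical file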
BTW, this is no longer true. With the introduction of tune IQ (Image Quality) to libaom and SVT-AV1, AVIF can be competitive with (and often beat) JXL in the medium-to-high quality range (up to SSIMULACRA2 85). AVIF is also better than JPEG regardless of the quality setting.
JXL is still better for lossless and very-high quality lossy though (SSIMULACRA2 >90).
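For the curious, this is roughly what using the IQ tune looks like in practice, shelling out to avifenc and passing the option through to libaom (a sketch; it assumes avifenc is on PATH and was built against a libaom new enough to know tune=iq):

    # Sketch: encode an AVIF with libaom's IQ tune via avifenc.
    # Assumes a recent libavif/libaom; older builds only know tune=psnr/ssim.
    import subprocess

    subprocess.run(
        [
            "avifenc",
            "-q", "75",       # quality 0-100, higher is better
            "-a", "tune=iq",  # codec-specific option passed through to libaom
            "input.png",
            "output.avif",
        ],
        check=True,
    )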
Why?
> (surprisingly, Firefox doesn't get blamed for this - they also do not support it yet, and they are doing nothing _other_ than awaiting Chrome's work on it!)
There is no waiting on Chrome involved in: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393
The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed mostly by the exact same people who made libjxl, which you say sucks so much.
In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.
There is a huge difference between deciding not to do something because the benefit vs complexity trade off doesn't make sense, and actively trying to kill something.
FWIW I agree with Google: AVIF is a much better format for the web. Pathology imaging is a bit of a different use case, where JPEG XL is a better fit than AVIF would be.
Any decade now, any decade...
The Google Chrome folks are the ones who decided to disallow it. You could argue that they are trying to kill it, but certainly not Google at large.
The truth is that every image format added to a web browser has to be supported forever, so the Chrome team is wary of adding new file formats unless they're an above-and-beyond improvement. JPEG XL isn't (relative to AVIF), so Google decided not to implement it. It's not some malicious conspiracy; it just didn't make sense from a product perspective.
From what I understand, https://storage.googleapis.com/avif-comparison/index.html is what was used to justify Google choosing AVIF over JPEG XL. JPEG XL was better at lossless images, but AVIF was better at lossy, and lossy is the use case that matters more to the web.
IF Philips is going to stick to the DICOM format, and not add lots of proprietary stuff, _and_ it's the format that it uses internally, then this will be good.
For example, folks can check out OpenSlide (https://openslide.org) and have a look at all the different slide formats that exist. If you dig into Philips' entry, you'll see that OpenSlide does not support Philips' non-TIFF format (iSyntax), and that the TIFF format is an "export format".
If you have a Philips microscope that uses iSyntax, you are very limited in what non-Philips software you can use. If you want files in TIFF format, you (the lab tech) have to take an action to export a slide in TIFF. That can take up a fair amount of lab-tech time.
Ideally, the microscope should immediately store the images in an open format, with metadata that workflow software can use to check if a scanning run is complete. I _hope_ that will be able to happen here!
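To make the OpenSlide pointer above concrete, reading a supported slide from Python looks roughly like this (a sketch assuming openslide-python plus the native libopenslide are installed; per the above it won't open iSyntax, but an exported TIFF works, and the filename here is hypothetical):

    # Sketch: reading a whole-slide image with OpenSlide's Python binding.
    import openslide

    slide = openslide.OpenSlide("exported_slide.tiff")  # hypothetical path
    print(slide.dimensions)        # full-resolution (width, height)
    print(slide.level_count)       # number of pyramid levels
    print(dict(slide.properties))  # vendor metadata as key/value pairs

    # Grab a 512x512 patch from the top-left of the full-resolution level.
    region = slide.read_region(location=(0, 0), level=0, size=(512, 512))
    region.convert("RGB").save("patch.png")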
Worse, you have to do it manually, one by one, in their interface; it takes like 30 minutes per slide, and you only have about 20 minutes after it's done to pick it up and save it somewhere useful, otherwise the temporary file gets lost.
DICOM is of course the way to go, but it does have its rough edges - the stupid multiple files, sparse shit, concatenated levels - and right now Philips is the only vendor doing JPEG XL (next to JPEG, JPEG 2000, and JPEG XR).
We learnt to live with iSyntax (and iSyntax2) - if you can get access to the files at all, that is. In most deployments the whole system is a closed appliance and you have no access to the filesystem to get the damn files out.
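When you do manage to get at the files, checking which codec a DICOM object actually carries is nearly a one-liner with pydicom (a sketch; the JPEG XL transfer syntaxes are a recent addition to the standard, so an up-to-date pydicom is assumed, and the filename is hypothetical):

    # Sketch: inspect the transfer syntax (i.e. the codec) of a DICOM file.
    from pydicom import dcmread

    ds = dcmread("slide_level0.dcm", stop_before_pixels=True)
    ts = ds.file_meta.TransferSyntaxUID
    # Prints the UID plus its registered name, e.g. a JPEG 2000 syntax or,
    # with a new enough pydicom, one of the JPEG XL syntaxes.
    print(ts, "-", ts.name)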
- medicine chooses lossless formats
- there are security concerns with decoders and operating systems
- once you build a medical device, the future of your company depends on being able to expensively patch it
https://www.abyssmedia.com/heic-converter/avif-heic-jpegxl.s...
They will also prefer to gaslight their clients rather than fix issues, and good luck if you’re already committed to an (un)managed service from them.
It felt like closing a major circle in my career. I spent my first 16 years in the medical industry, working on neurosurgical robots (the Oulu Neuronavigator System), and one of the first tools I built was an ACR-NEMA 1.0 parser - ACR-NEMA being the direct predecessor to DICOM. I then continued on radiation treatment planning systems with plenty of DICOM work in them. To now contribute back to that very standard is incredibly rewarding.