
Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•4m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•5m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•5m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
4•bookofjoe•6m ago•1 comment

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•7m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•7m ago•1 comment

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•8m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•8m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•9m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•9m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•9m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•10m ago•1 comment

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•11m ago•1 comment

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•11m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•15m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•15m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•16m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•16m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•18m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•19m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•19m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•19m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•20m ago•0 comments

StrongDM's AI team builds serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•20m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•21m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•22m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•23m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•23m ago•1 comment

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•30m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•30m ago•0 comments

Why JPEG XL ignoring bit depth is genius (and why AVIF can't pull it off)

https://www.fractionalxperience.com/ux-ui-graphic-design-blog/why-jpeg-xl-ignoring-bit-depth-is-genius
102•Bogdanp•3mo ago

Comments

kiicia•3mo ago
JPEG XL is fantastic, yet autocratic Google wants to force an inferior format.
homebrewer•3mo ago
Mozilla also isn't interested in supporting it; it's not just Google. I also often see these articles touting JPEG XL's technical advantages, but in my subjective testing at image sizes you would typically see on the web, AVIF wins every single time. It not only produces fewer artifacts on medium-to-heavily compressed images, but they're also less annoying: minor detail loss and smoothing, compared to JPEG XL's blocking and ringing (in addition to detail loss; basically the same types of artifacts as the old JPEG).

Maybe there's a reason they're not bothering to support XL besides misplaced priorities or laziness.

Retric•3mo ago
JPEG XL is optimized for low-to-zero levels of compression, which isn't as commonly used on the web but definitely fills a need.

Google cited insufficient improvements, which is a rather ambiguous statement. Mozilla seems more concerned with the attack surface.

formerly_proven•3mo ago
JPEG XL seems optimally suited for media and archival purposes, and of course this is something you'd want to mostly do through webapps nowadays. Even relatively basic use cases like Wikimedia Commons are basically stuck on JPEG for these purposes.

For the same reason it would be good if a future revision of PDF/A included JPEG XL, since PDF doesn't really have any decent codecs for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.

OneDeuxTriSeiGo•3mo ago
> Mozilla also isn't interested in supporting it

Mozilla is more than willing to adopt it. They just won't adopt the C++ implementation. They've already put in writing that they're considering adopting it once the Rust implementation is production ready.

https://github.com/mozilla/standards-positions/pull/1064

masklinn•3mo ago
You have a really strange interpretation of the word “consider”.
mistercow•3mo ago
Seems like the normal usage to me. The post above lists other criteria that have to be satisfied, beyond just being a Rust implementation. That would be the consideration.
masklinn•3mo ago
Mozilla indicates that they are willing to consider it given various prerequisites. GP translates that to being "more than willing to adopt it". That is very much not a normal interpretation.
OneDeuxTriSeiGo•3mo ago
From the link

> To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.

That is a perfectly clear position.

deskamess•3mo ago
How far away is the JPEG-XL Rust version from Google, if Chrome is not interested in it?
ac29•3mo ago
You can review it here: https://github.com/libjxl/jxl-rs

Seems to be under very active development.

m-schuetz•3mo ago
Now I'm feeling a bit less bad about not using Firefox anymore. Rejecting it just because it's C++ is <insert terms that may not be welcome on HN>
mistercow•3mo ago
Multiple severe attacks on browsers over the years have targeted image decoders. Requiring an implementation in a memory safe language seems very reasonable to me, and makes me feel better about using FF.
kouteiheika•3mo ago
So you think it's silly to not want to introduce new potentially remotely-exploitable CVEs in one of the most important pieces of software (the web browser) on one's computer? Or are you implying those 100k lines of multithreaded C++ code are bug-free and won't introduce any new CVEs?
OneDeuxTriSeiGo•3mo ago
It's not just "C++ bad". It's "we don't want to deal with memory errors in directly user facing code that parses untrusted contents".

That's a perfectly reasonable stance.

out_of_protocol•3mo ago
There's way more than one Rust implementation around:

- https://github.com/libjxl/jxl-rs

- https://github.com/tirr-c/jxl-oxide

- https://github.com/etemesi254/zune-image

Etc. You can wait 20 or so years "just to be sure" or start doing something. Mozilla is sticking with option A here, not doing anything.

OneDeuxTriSeiGo•3mo ago
The jxl-oxide dev is a jxl-rs dev. jxl-oxide is decode only while jxl-rs is a full encode/decode library.

zune also uses jxl-oxide for decode. zune has an encoder, and they are doing great work, but their encoder is not thread safe, so it's not viable for Mozilla's needs.

And there's work already being done to properly integrate JXL implementations into Firefox, but frankly, these things take time.

If you are seriously passionate about seeing JPEG XL in Firefox, there's a really easy solution: contribute. More engineering hours put towards a FOSS project tend to bring it to fruition faster.

demetris•3mo ago
I did some reading recently, for a benchmark I was setting up, to try and understand what the situation is. It seems things have started changing in the last year or so.

Some links from my notes:

https://www.phoronix.com/news/Mozilla-Interest-JPEG-XL-Rust

https://news.ycombinator.com/item?id=41443336 (discussion of the same GitHub comment as in the Phoronix site)

https://github.com/tirr-c/jxl-oxide

https://bugzilla.mozilla.org/show_bug.cgi?id=1986393 (land initial jpegxl rust code pref disabled)

In case anyone is curious, here is the benchmark I did my reading for:

https://op111.net/posts/2025/10/png-and-modern-formats-lossl...

idoubtit•3mo ago
No, the situation around image compression has not changed. The grandparent poster you were replying to was writing about typical web usage, that is, "medium-to-heavily compressed images", while your benchmark is about lossless compression.

BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393

demetris•3mo ago
No. demetris’ benchmark of lossless image compression is not a sign that the situation may be changing. :-D

That was just the context for some reading I did to understand where we are now.

> BTW, I don't see how Mozilla's interest in a jpegxl _decoder_ (your first link) has anything to do with the performance of jpegxl encoders compared to avif's encoders. In case you're really interested in the former, Firefox now has more than intentions, but it's still not at production level: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393

That is one of the links I shared in my comment (along with the bug title in parenthesis). :-)

gcr•3mo ago
I've had exactly the opposite outcome with AVIF vs JPEG-XL. I've found that jxl outperforms AVIF quite dramatically at low bitrates.
miladyincontrol•3mo ago
Same in my experience testing and deploying a few sites that support both. In general the only time AVIF outperformed in file size for me was with laughably low quality settings beyond what any typical user or platform would choose.

And for larger files especially, the benefits of actually having progressive decoding pushed me even more in favour of JPEG XL. Doubly so when you can provide variations in image size just by halting the bitstream arbitrarily.

ksec•3mo ago
>but in my subjective testing with image sizes you would typically see on the web, avif wins every single time.

What is that in terms of bpp? Because according to Google Chrome data, 80-85% of the images we deliver have a bpp of 1.0 or above. I don't think most people realise that.

And in most if not all circumstances, JPEG XL performs better than AVIF at bpp 1.0 and above, as tested by professionals.
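(For anyone unfamiliar: bpp is just compressed size in bits divided by pixel count. A quick back-of-the-envelope in Python, with made-up numbers:)

    # bits per pixel = compressed file size in bits / number of pixels
    def bpp(file_size_bytes: int, width: int, height: int) -> float:
        return file_size_bytes * 8 / (width * height)

    # e.g. a 350 KB image at 1920x1080 (illustrative numbers only):
    print(bpp(350_000, 1920, 1080))  # ~1.35 bpp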

AlienRobot•3mo ago
I wish they had separated the lossless codec out as "WebPNG". WebP is better than PNG, but it's too risky to use (and tell people to use) a lossless format that becomes lossy if you forget a setting.
est•3mo ago
> JPEG XL’s Radical Solution: Float32 + Perceptual Intent

So 2^32 bit depth? 4 bytes per sample seems like overkill.

fps-hero•3mo ago
Did you miss the point of the article? JPEG-XL encoding doesn't rely on quantisation to achieve its performance goals. It's a bit like how GPU shaders use floating-point arithmetic internally but output quantised values for the bit depth of the screen.
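Roughly, in Python (a toy sketch of the analogy, not actual codec code):

    import numpy as np

    # Toy pipeline: all intermediate math stays in float32,
    # quantisation happens only once, at the output.
    rng = np.random.default_rng(0)
    img = rng.random((4, 4), dtype=np.float32)  # intensities in [0.0, 1.0]

    # Arbitrary internal processing, still in float (no rounding yet)
    processed = np.clip(img * 1.2 - 0.05, 0.0, 1.0)

    # Quantise at the end, to whatever the display needs
    def to_display(x: np.ndarray, bits: int) -> np.ndarray:
        levels = (1 << bits) - 1
        return np.round(x * levels).astype(np.uint16)

    print(to_display(processed, 8))   # 0..255 for an 8-bit panel
    print(to_display(processed, 10))  # 0..1023 for a 10-bit panel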
est•3mo ago
> Did you miss the point of the article?

Sorry, I missed it. How is the "floating point" stored in .jxl files?

Float32 has to be serialized one way or another per pixel, no?

jstanley•3mo ago
No, JPEG is not a bitmap format.
wongarsu•3mo ago
The Cliffs Notes version is that JPEG and JPEG XL don't encode pixel values; they encode the discrete cosine transform (like a Fourier transform) of the 2D pixel grid. So what's really stored is more like the frequency and amplitude of change across pixels than individual pixel values, and the compression comes from the insight that some combinations of frequency and amplitude of color change are much more perceptible than others.
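You can see the gist of it with scipy in a few lines (a crude sketch; real codecs use tuned quantization tables rather than a flat threshold):

    import numpy as np
    from scipy.fft import dctn, idctn

    # An 8x8 block containing a smooth horizontal gradient
    block = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))

    # Forward 2D DCT: pixel values -> frequency/amplitude coefficients
    coeffs = dctn(block, norm='ortho')

    # Crude "quantisation": drop coefficients too small to matter
    coeffs[np.abs(coeffs) < 0.05] = 0.0
    print(np.count_nonzero(coeffs))  # only a handful of the 64 survive

    # The inverse DCT still reconstructs the block almost exactly
    recon = idctn(coeffs, norm='ortho')
    print(np.max(np.abs(recon - block)))  # tiny error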
tetris11•3mo ago
The gradient is stored, not the points on the gradient
jlouis•3mo ago
In addition to the other comments: you can have the internal memory representation of data be Float32, but on disk this is encoded through some form of entropy coding. Typically, some of the earlier steps are preparation for the entropy coder: you make the data more amenable to entropy coding through a rearrangement that's either fully reversible (lossless) or near-reversible (lossy).
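A minimal sketch of that rearrangement idea, with zlib standing in for the codec's real entropy coder:

    import zlib
    import numpy as np

    # A smooth ramp: the raw bytes are awkward for a byte-level entropy coder
    data = np.arange(0, 2560, 10, dtype=np.uint16)

    # Reversible rearrangement: store differences between neighbours instead
    deltas = np.diff(data, prepend=data[:1])

    print(len(zlib.compress(data.tobytes())))    # noticeably larger
    print(len(zlib.compress(deltas.tobytes())))  # tiny: deltas are nearly constant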
TD-Linux•3mo ago
Which is completely wrong, by the way: JPEG XL quantizes its coefficients after the DCT transform like every other lossy codec. Most codecs have at least some amount of range expansion in their DCT as well, so the values being quantized may have greater bit depth than the input data.
Findecanor•3mo ago
The article mentions that the bit depth can be 16. You may need more bits for HDR and some additional bits of precision. For example, screen pixels have an exponential intensity curve, but image processing is best done in linear light.

However, I wonder if floating point is necessary, or even the best choice compared to 32-bit fixed point. The floating-point format includes subnormal numbers that are very close to zero, and I'd think that is far more precision than needed. Processing of subnormal numbers is extra slow on some processors and can't always be turned off.
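To make the transfer-curve point concrete, this is the standard sRGB-to-linear mapping (constants from the sRGB spec):

    def srgb_to_linear(c: float) -> float:
        """Map an sRGB value in [0, 1] to linear light."""
        if c <= 0.04045:
            return c / 12.92
        return ((c + 0.055) / 1.055) ** 2.4

    # Two adjacent 8-bit dark codes land extremely close together in linear
    # light, so linear-light processing needs far more than 8 bits:
    print(srgb_to_linear(1 / 255))  # ~0.0003
    print(srgb_to_linear(2 / 255))  # ~0.0006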

diffuse_l•3mo ago
I think the article could get its point across at half the length, without the second half being a full AI-generated list of advantages, or it could use that space to give some more technical information.
evertedsphere•3mo ago
the article could be better if it weren't entirely "AI generated"
diffuse_l•3mo ago
Well, at least the first half was somewhat useful...
Gigachad•3mo ago
Maybe we can AI summarise it back to the original prompt to save time.
cainxinth•3mo ago
> Write a short article explaining that JPEG XL's genius is its bit-depth-agnostic design, which converts all image data into a perceptually-based floating-point format (XYB) to optimize compression for what the human eye actually sees. In contrast, AVIF is locked into its video-codec legacy, forcing it to use rigid, integer-based bit-depth modes that optimize for numerical precision rather than true perceptual quality.
Lanedo•3mo ago
> Find the (minimal) AI prompt that led to the generation of this article: [...]

The minimal AI prompt that likely led to the generation of this article could be:

"Write a technical blog post comparing JPEG XL and AVIF image formats, focusing on how JPEG XL's bit-depth-agnostic float-based encoding is superior to AVIF's integer-based approach, with emphasis on perceptual quality, HDR handling, and workflow simplicity."

This prompt captures the core elements:

- Technical comparison between two image formats

- Focus on JPEG XL's unique "ignoring bit depth" design

- Emphasis on perceptual vs. numerical quality

- Discussion of HDR and workflow benefits

- Tone and structure matching the published article

The prompt would have guided the AI to produce content that:

1. Explains the technical difference in encoding approaches

2. Demonstrates why JPEG XL's method is better

3. Provides real-world implications for users

4. Maintains the author's voice and technical depth

5. Follows the article's structure and emphasis on "perceptual intent" over bit precision

Cthulhu_•3mo ago
This is so meta: we're using AI to generate feedback loops between a prompt, the AI-generated content, using AI to recreate the prompt used to generate the content, etc. Spiraling into unreadable slop - unreadable to real humans, anyway.

Soon enough the AI will invent a format for communicating with instances of itself or other AIs so that they can convey information that a client AI can translate back to the user's personal consumption preferences. Who needs compression or image optimization when you can reduce a website to a few kB of prompts which an AI engine can take to generate the full content, images, videos, etc?

nicoburns•3mo ago
What makes you think it is AI generated? Perhaps it's just the Dunning-Kruger effect in an area I'm not especially knowledgeable in, but this article strikes me as having more technical depth and narrative cohesion than AI is generally capable of.
diffuse_l•3mo ago
It mostly rehashes the point of using float instead of integer representation, and uses different headers (radical solution, why this matters, secret weapon, philosophy, the bottom line) to stretch what could be said in a few sentences into a few pages.
vintermann•3mo ago
The reason AI loves this format is that it was a popular format before generative AI came along. It's the format of clickbaity "smart" articles, think Slate magazine etc.
Cthulhu_•3mo ago
But because it's about a subject the HN audience is interested in, it actually gets upvotes, unlike those sites, so a lot of people reading these are getting reintroduced to the format.

Ten-ish years ago we already had slop/listicles, and thankfully our curated internet filters helped us avoid them (though the older generation still came across them through Facebook and the like). But now they're back, and thanks to AI they don't need people who actually know what they're talking about to write articles aimed at e.g. the HN audience (because the people who know what they're talking about refuse to write slop... I hope).

evertedsphere•3mo ago
Formatting and headers aside, there are lots of local rhetorical flourishes and patterns that are fairly distinctive and appear at a far higher rate in AI writing than in most writing that isn't low-quality listicle copy artificially trying to hold your attention long enough that you'll accidentally click on one of the three auto-playing videos when you move your pointer to dismiss the newsletter pop-up.

Here's something you know. It's actually neither adjective 1 nor adjective 2—in fact, completely mundane realization! Let that sink in—restatement of realization. Restatement. Of. Realization. The Key Advantages: five-element bulleted list with pithy bolded headings followed by exactly zero new information. Newline. As a surprise, mild, ultimately pointless counterpoint designed to artificially strengthen the argument! But here's the paradox—okay, I can't do this anymore. You get the picture.

    Inside JPEG XL’s lossy encoder, all image data becomes floating-point numbers between 0.0 and 1.0. Not integers. Not 8-bit values from 0-255. Just fractions of full intensity.
Everything after the first "Not" is superfluous and fairly distinctively so.

    No switching between 8-bit mode and 10-bit mode.
    No worrying whether quantization tables are optimized for the right bit precision.
    No cascading encoding decisions based on integer sample depth.
    The codec doesn’t care about your display’s technical specs. It just needs to know: "what brightness level does white represent?" Everything scales from there.
Same general pattern.

    JPEG XL not worrying about bit depth isn’t an oversight *or* simplification. It’s liberation from decades of accumulated cruft where we confused digital precision with perceptual quality.
It's hard to describe the pattern here in words, but the whole thing is sort of a single stimulus for me. At the very least, notice again the repetition of the thing being argued against, giving it different names and attributes for no good semantic reason, followed by another pithy restatement of the thesis.

    By ignoring bit depth, JPEG XL’s float-based encoding embraces a profound truth: pixels aren’t just numbers; they’re perceptions.
This kind of upbeat, pithy, quotable punchline really is something frontier LLMs love to generate, as is the particular form of the statement. You can also see the latter in forms like "The conflict is no longer political—it's existential."

    Why This Matters
I know I said I wouldn't comment on little tics and formatting and other such smoking guns, but if I never have to see this godforsaken sequence of characters again…
zokier•3mo ago
Working with a single fixed bit depth is IMHO different from being bit-depth agnostic. The same argument could be made about color spaces too.
WithinReason•3mo ago
Yes, this is great, but why don't we make the same argument for resolution too? I think we should!
shiandow•3mo ago
I completely agree. Based on my limited experience with image upscaling, downscaling, and superresolution, saving video at a lower resolution is the second crudest way of reducing the file size.

The crudest is downsampling the chroma channel, which makes no sense whatsoever for digital formats.
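(For anyone unfamiliar, 4:2:0 chroma subsampling keeps one chroma sample per 2x2 block of pixels. A minimal numpy sketch of the idea:)

    import numpy as np

    # Toy 4x4 chroma plane
    chroma = np.arange(16, dtype=np.float32).reshape(4, 4)

    # 4:2:0-style downsampling: average each 2x2 block, quartering the samples
    sub = chroma.reshape(2, 2, 2, 2).mean(axis=(1, 3))

    # A decoder upsamples by spreading each sample back over its 2x2 block
    up = sub.repeat(2, axis=0).repeat(2, axis=1)
    print(np.abs(up - chroma).max())  # the detail lost to the crude resampling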

colonwqbang•3mo ago
So they "ignore" bit depth by using 32 bits for each sample. This may be a good solution but it's not really magic. They just allocated many more bits than other codecs were willing to.

It also seems like a very CPU-centric design choice. If you implement a hardware en/decoder, you will see a stark difference in cost between one which works on 8/10 vs 32 bits. Maybe this is motivated by the intended use cases for JPEG XL? Or maybe I've missed the point of what JPEG XL is?

adgjlsfhk1•3mo ago
Image decoding is fast enough that no one uses hardware decoders. The extra bits are very cheap on both CPU and GPU, and by using them internally you prevent internal calculations from accumulating error and end up with a much cleaner size/quality trade-off. (Note that 10-bit output is still valuable on an 8-bit display, because it lets the display manager dither the image.)
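A contrived but concrete illustration of the error-accumulation point:

    # Four arbitrary gain adjustments applied to a dark pixel value
    steps = [0.9, 1.2, 0.95, 1.1]
    x_int = x_flt = 3.0
    for g in steps:
        x_int = float(round(x_int * g))  # integer pipeline: round every step
        x_flt = x_flt * g                # float pipeline: keep full precision
    print(x_int)         # 4.0 -- rounding error accumulated along the way
    print(round(x_flt))  # 3   -- quantised once, at the end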
colonwqbang•3mo ago
That is true! But AVIF is based on AV1. As a video codec, AV1 often does need to be implemented in dedicated hardware for cost and power efficiency reasons. I think the article is misleading in this regard: "This limitation comes from early digital video systems". No, it is very much a limitation for video systems in the current age too.
fleabitdev•3mo ago
Interesting approach. It doesn't even introduce an extra rounding error, because converting from 32-bit XYB to RGB should be similar to converting from 8-bit YUV to RGB.

However, when decoding an 8-bit-quality image as 10-bit or 12-bit, won't this strategy just fill the two least significant bits with noise?

shiandow•3mo ago
Could be noise, but finding a smooth image that rounds to a good enough approximation of the original is quite useful. If you see a video player talk about debanding, it is exactly that.

I don't know if JPEG XL constrains solutions to be smooth.

adgjlsfhk1•3mo ago
I believe they constrain to piecewise smooth (i.e. don't smooth out edges, but do smooth out noise).
TechRemarker•3mo ago
When exporting images from Lightroom Classic as JPEG XL, you can choose the compression percentage, or choose lossless, which disables that, of course. It also defaults to 8-bit, with an option for 16-bit, which of course results in a much larger file, plus a color profile setting. So I'm curious what they mean by it ignoring bit depth.

I did some sample exports comparing 8-bit lossless JXL vs. JPG, and the JXL was quite a bit bigger. Same when comparing both lossy at 100 or at 99. When setting JXL to 80% or 70% I see noticeable savings, but I had thought the idea was essentially full quality at much smaller sizes.

To be fair, the 70% does look very similar to 100%, but then again JPEG at 70% vs. 100% also looks very similar on an Apple XDR monitor. At 70% or 80% on both JPEG and JPEG XL I do see visual differences in areas like the mesh on shoes.

JXL also comes with lots of compatibility challenges: while things were picking up with Apple's adoption, progress seems to have halted since, and apps like Evoto and Topaz, among many others, haven't added support. Apple's support is still incomplete, with no progress there. So unless Chrome does a 180 again, I think AVIF and JXL will both end up stagnating and most people will stick with JPG. For TIFF, though, I noticed significant savings with lossless JXL compared to TIFF, so that would be a good use case, except TIFFs are the files most likely to be edited by third-party apps that probably won't support the format.

jonsneyers•3mo ago
For lossless, bit depth of course does matter. Lossless image compression is storing a 2D array of integers exactly, and with higher bit depth the range of those numbers grows (and the number of hard-to-compress least-significant bits grows).

The OP article is talking about lossy compression.

When comparing lossy compression, note that lossy settings are not a "percent" of anything; they're just an arbitrary scale that depends on the encoder implementation, mapped to encoder parameters (e.g. quantization tables) in some arbitrary way. So lossy "80%" is certainly not the same thing between JPEG and JXL, or between Photoshop and ImageMagick, etc.

The best way to compare lossy compression performance is to encode an image at a quality that is acceptable for your use case (according to your eyes), and then, for each codec/encoder, look for the lowest file size you can get while still keeping acceptable quality.
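Sketched as a loop (Python; encode() and acceptable() are hypothetical stand-ins for your encoder wrapper and your own eyes):

    import os

    def smallest_acceptable(src, encode, acceptable,
                            qualities=range(95, 20, -5)):
        # Walk the quality setting down; keep the smallest file that still
        # passes a human "does this look fine?" check.
        best = None
        for q in qualities:
            out = encode(src, q)     # hypothetical: writes a file, returns its path
            if not acceptable(out):  # hypothetical: human judgment, not a metric
                break
            best = (q, os.path.getsize(out))
        return best

Run that once per codec and compare the winning file sizes; the quality numbers themselves mean nothing across encoders.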

JyrkiAlakuijala•3mo ago
The original JPEG XL requirements were for relative colors, where color is an issue external to the codec. I was able to sufficiently convince the rest of the JPEG committee that we can achieve similar interoperability with absolute color, particularly with my XYB color space, the absolute color storage giving us more opportunity for psychovisual optimization. I was also behind not having 8-, 10-, and 12-bit modes, but just a single mode, and always YUV444, to simplify operation, create less confusion, and avoid additional hard quality boundaries. Some of this "beauty", such as no YUV420, we needed to backtrack on in order to add the lossless JPEG1 recompression support.