
Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
1•AlexeyBrin•25s ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
1•machielrey•1m ago•0 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
2•tablets•6m ago•0 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•8m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•11m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•11m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•11m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•17m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•23m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•24m ago•1 comment

Slop News - HN front page right now hallucinated as 100% AI SLOP

https://slop-news.pages.dev/slop-news
1•keepamovin•28m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•31m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
3•tosh•36m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•40m ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•41m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•44m ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

3•throwaw12•45m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•47m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•50m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•52m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•53m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•55m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•57m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•59m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1h ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•1h ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•1h ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Philips announces digital pathology scanner with native DICOM JPEG XL output

https://www.philips.com/a-w/about/news/archive/standard/news/articles/2025/philips-announces-digital-pathology-scanner-with-native-configurable-dicom-jpeg-and-jpeg-xl-output-in-world-first.html
100•ksec•4mo ago

Comments

formerly_proven•4mo ago
WebP artifacts not pathological enough?
sandGorgon•4mo ago
https://connect.mozilla.org/t5/ideas/support-jpeg-xl/idi-p/1...

vote for this feature to be natively supported in browsers

Vinnl•4mo ago
It's already under consideration but needs some work first: https://github.com/mozilla/standards-positions/pull/1064
righthand•4mo ago
Strange that Mozilla is going to rely on an internal team at Google to build a decoder for them in Rust, when Google is the one trying to kill JPEG XL.
Mindless2112•4mo ago
It's two different teams inside Google. Some part of the Chrome team is trying to quash JPEG XL.
righthand•4mo ago
Sure, but if it becomes political I expect the Chrome team to fully quash the JPEG XL team to hurt Firefox and JPEG XL in one go.
breppp•4mo ago
It's more likely related to security, image formats are a huge attack surface for browsers and they are hard to remove once added.

JPEG XL was written in C++ in a completely different part of Google, without any of the memory-safe Wuffs-style code, and the Chrome team has probably had its share of trouble with half-baked compression formats (WebP).

lonjil•4mo ago
Other than Jon at Cloudinary, everyone involved with JXL development, from creation of the standard to the libjxl library, works at Google Research in Zurich. The Chrome team in California has zero authority over them. They've also made a lot of stuff that's in Chrome, like Lossless WebP, Brotli, WOFF, the Highway SIMD library (actually created for libjxl and later spun off).
refulgentis•4mo ago
I'd argue the thread up through the comment you're replying to is fact-free gossiping. I wondered whether your comment was an invitation to repeat that gossip, but it doesn't read that way to me; it reads as exasperated, so exasperated they're willing to speak publicly and establish facts.

My $0.02, since the gap here on perception of the situation fascinates me:

JPEG XL as a technical project was a real nightmare, I am not surprised at all to find Mozilla is waiting for a real decoder.

If you get _any_ FAANG engineer involved in this mess over a beer || truth serum, they'll have 0 idea why this has so much mindshare, modulo that it sounds like something familiar (JPEG) and that people invented nonsense like "Chrome want[s] to kill it" while it has the attention of an absurd number of engineers trying to get it into shipping shape.

(surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)

Implicated•4mo ago
> they'll have 0 idea why this has so much mindshare

Considering the amount of storage all of these companies are likely allocating to storing jpegs + the bandwidth of it all - maybe the instant file size wins?

bawolff•4mo ago
Hard disk and bandwidth costs of JPEGs are almost certainly negligible in the era of streaming video. The biggest selling point is probably client-side latency from downloading the file.

We barely even have movement to webp &avif, if this was a critical issue i would expect a lot more movement on that front since it already exists. From what i understand avif gives better compression (except for lossless) and has better decoding speed than jxl anyways.

danielheath•4mo ago
jxl lets you further compress existing JPEG files without additional artifacting, which is significant given how many JPEG files already exist.
lonjil•4mo ago
> We barely even have movement to webp &avif

If you look at CDNs, WebP and AVIF are very popular.

> From what i understand avif gives better compression (except for lossless) and has better decoding speed than jxl anyways.

AVIF is better at low to medium quality, and JXL is better at medium to high quality. JXL decoding speed is pretty much constant regardless of how you vary the quality parameter, but AVIF gets faster and faster to decode as you reduce the quality; it's only faster to decode than JXL for low quality images. And about half of all JPEG images on the web are high quality.

The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality. WebP beats JPEG at low quality, but is literally incapable of very high quality[1] and is worse than JPEG at high quality. AVIF is really good at low quality but fails to be much of an improvement at high quality. For high resolution in combination with high quality, AVIF even manages to be worse than JPEG.

[1] Except for the lossless mode which was developed by Jyrki at Google Zurich in response to Mozilla's demand that any new web image format should have good lossless support.

ksec•4mo ago
>AVIF is better at low to medium quality,

>The Chrome team really dislikes the concept of high quality images on the web for some reason though, that's why they only push formats that are optimized for low quality.

It would be more accurate to say bits per pixel (BPP) rather than quality. And that is despite the Chrome team themselves showing that 80%+ of images served online are in the medium BPP range or above, where JPEG XL excels.

bawolff•4mo ago
Isn't medium quality the thing to optimize for? If you are doing high quality, you've already made the tradeoff that you care about quality more than latency, so the perceived benefit of a mild latency improvement is going to be lower.
juliobbv•4mo ago
> AVIF is better at low to medium quality, and JXL is better at medium to high quality.

BTW, this is no longer true. With the introduction of tune IQ (Image Quality) to libaom and SVT-AV1, AVIF can be competitive with (and oftentimes beat) JXL at the medium to high quality range (up to SSIMULACRA2 85). AVIF is also better than JPEG independently of the quality parameter.

JXL is still better for lossless and very-high quality lossy though (SSIMULACRA2 >90).

spider-mario•4mo ago
> JPEG XL as a technical project was a real nightmare

Why?

> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)

There is no waiting on Chrome involved in: https://bugzilla.mozilla.org/show_bug.cgi?id=1986393

lonjil•4mo ago
> (surprisingly, Firefox is not attributed this - they also do not support it yet, and they are not doing anything _other_ than awaiting Chrome's work for it!)

The fuck are you talking about? The jxl-rs library Firefox is waiting on is developed by mostly the exact same people who made libjxl which you say sucks so much.

In any case, JXL obviously has mindshare due to the features it has as a format, not the merits of the reference decoder.

spider-mario•4mo ago
Those writing the new Rust decoder are largely people who worked on the standard and on the original C++ implementation, + contributions from the author of jxl-oxide (who is not at Google).
bawolff•4mo ago
Sheesh. Google isn't trying to kill JXL, they just think it's a bad fit for their product.

There is a huge difference between deciding not to do something because the benefit vs complexity trade off doesn't make sense, and actively trying to kill something.

FWIW I agree with Google: AVIF is a much better format for the web. Pathology imaging is a bit of a different use case, where JPEG XL is a better fit than AVIF would be.

CharlesW•4mo ago
It's nice to see Safari lead the pack: https://caniuse.com/jpegxl
jiggawatts•4mo ago
And then to actually support the HDR images that can be encoded with JPEG XL, they'd have to implement HDR in the browser graphics pipeline.

Any decade now, any decade...

bawolff•4mo ago
Voting for tickets does nothing when there are real reasons not to implement.
avalys•4mo ago
Can someone comment on what is newsworthy about this?
kangalioo•4mo ago
Someone using JPEGXL in a real world product
ndriscoll•4mo ago
jpegxl is supported by pretty much every relevant program that deals with images. The web situation is purely because of Google's monopoly.
daemonologist•4mo ago
But exceedingly few cameras support it (this is the only one I'm aware of). If I had to guess, it's probably encoding in software, but still, it's a start.
Hamuko•4mo ago
There has to be someone else, since my dad just emailed me a JPEG XL image less than 15 minutes ago. No idea how he produced or procured it.
ThrowawayTestr•4mo ago
A medical device that outputs a standard image format instead of proprietary garbage
lostlogin•4mo ago
The cluster fuck that is DICOM and HL7 once vendors go to town is far from the ‘open’ utopia we dream of.
kiicia•4mo ago
JPEG XL is alive despite Google trying their best to kill it, and is used to treat cancer.
Caspy7•4mo ago
Google Research was central in developing and continuing to push JPEG XL.

The Google Chrome folks are the ones who decided to disallow it. You could argue that they are trying to kill it, but certainly not Google at large.

UltraSane•4mo ago
Nerds like JPEG XL but Google is trying to kill it.
makapuf•4mo ago
Why does Google try to kill it?
greenavocado•4mo ago
Because they can't control it
arccy•4mo ago
Hardly; it's a Google team that made the thing.
UltraSane•4mo ago
Then why does Google not want JPEG XL support in Chrome?
lonjil•4mo ago
Google is not a monolith. The Chrome team doesn't want it in Chrome, but many other parts of Google like it.
arccy•4mo ago
nerds desperately clinging to any hope that jpeg xl will be revived
bawolff•4mo ago
Basically there is a conspiracy theory that Google is trying to kill JPEG XL, so the anti-Google crowd is excited someone is using it.

The truth is that every image format added to a web browser has to be supported forever, so the Chrome team is wary of adding new file formats unless they are an above-and-beyond improvement. JPEG XL isn't (relative to AVIF), so Google decided not to implement it. It's not some malicious conspiracy; it just didn't make sense from a product perspective.

From what I understand, https://storage.googleapis.com/avif-comparison/index.html is what was used to justify Google choosing AVIF over JPEG XL. JPEG XL was better at lossless images but AVIF was better at lossy, and lossy is the use case that matters more to the web.

dom96•4mo ago
My first ever job in software was working for PathXL (a Belfast startup implementing digital pathology software). Lots of fond memories working there, including how cool it was working on what was effectively Google Maps but for massive tissue sample images. PathXL actually ended up getting acquired by Philips, seems like a great match if they're building the hardware for this.
yread•4mo ago
They sold them off to Cirdan, they are not doing much with the software...
dom96•4mo ago
Oh interesting, I missed that. Pity that Philips sold them off.
CaliforniaKarl•4mo ago
Ugh, Pathology image processing is really annoying.

IF Philips is going to stick to the DICOM format, and not add lots of proprietary stuff, _and_ it's the format that it uses internally, then this will be good.

For example, folks can check out OpenSlide (https://openslide.org) and have a look at all the different slide formats that exist. If you dig into Philips' entry, you'll see that OpenSlide does not support Philips' non-TIFF format (iSyntax), and that the TIFF format is an "export format".

If you have a Philips microscope that uses iSyntax, you are very limited in what non-Philips software you can use. If you want files in TIFF format, you (the lab tech) have to take an action to export a slide in TIFF. It can take up a fair amount of lab tech time.

Ideally, the microscope should immediately store the images in an open format, with metadata that workflow software can use to check if a scanning run is complete. I _hope_ that will be able to happen here!

yread•4mo ago
> If you want files in TIFF format, you (the lab tech) have to take an action to export a slide in TIFF. It can take up a fair amount of lab tech time.

Worse, you have to do it manually, one by one, in their interface; it takes like 30 minutes per slide, and you only have about 20 minutes after it's done to pick the file up and save it somewhere useful, otherwise the temporary file gets lost.

DICOM is of course the way to go, but it does have its rough edges - the stupid multiple files, the sparse shit, concatenated levels - and now Philips is the only vendor who produces JPEG XL (next to JPEG, JPEG 2000, and JPEG XR).

We learnt to live with iSyntax (and iSyntax2), if you can get access to them that is. In most deployments the whole system is a closed appliance and you have no access to the filesystem to get the damn files out.

zokier•4mo ago
In case others are not aware what a "pathology scanner" is, apparently it is a device to scan/image microscope slides. Found some specs: these Philips units do 0.25um/px over a 15mm x 15mm imaging area, making the output images presumably 60000 x 60000 pixels in size. Philips apparently previously used their own "iSyntax" format, and also JPEG 2000 DICOM files, for these devices.
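Those specs can be sanity-checked with a few lines of arithmetic. The scan-area and resolution figures are from the comment above; the uncompressed-size estimate is my own back-of-the-envelope addition and assumes plain 8-bit RGB, ignoring pyramid levels and multiple focal planes:

```python
# Figures from the comment above: 0.25 um/px over a 15 mm x 15 mm scan area.
AREA_MM = 15.0        # scan area side length, mm
RES_UM_PER_PX = 0.25  # sampling resolution, um per pixel

# Convert mm -> um, then divide by um-per-pixel to get pixels per side.
side_px = int(AREA_MM * 1000 / RES_UM_PER_PX)
print(side_px)  # 60000

# Uncompressed size at 8-bit RGB (3 bytes/px), single focal plane:
raw_bytes = side_px * side_px * 3
print(f"{raw_bytes / 1e9:.1f} GB")  # 10.8 GB
```

Roughly 10 GB of raw pixels per plane is why these scanners lean so heavily on compressed, tiled, pyramidal formats in the first place.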
TheChaplain•4mo ago
Always impressed when someone does anything with DICOM, it's a bit of a complex format IMHO.
Dayshine•4mo ago
Image data is just encapsulated: you take the JPEG file's bytes and wrap them a little.
sigwinch•4mo ago
That’s misleading.

- medicine chooses lossless formats

- there are security concerns with decoders and operating systems

- once you build a medical device, the future of your company depends on being able to expensively patch it

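The "wrap it a little" description above is roughly literal: compressed frames go into DICOM encapsulated Pixel Data as an (empty or populated) Basic Offset Table item plus one item per fragment, each tagged (FFFE,E000) with an explicit little-endian length, terminated by an (FFFE,E0DD) sequence delimiter. A minimal stdlib-only sketch of just that wrapping - glossing over the surrounding data set, transfer syntax negotiation, and everything sigwinch rightly flags (a real implementation would use a library such as pydicom):

```python
import struct

def encapsulate(frames: list[bytes]) -> bytes:
    """Wrap compressed frames (e.g. JPEG/JPEG XL codestreams) as DICOM
    encapsulated Pixel Data fragments, one fragment per frame."""
    def item(group: int, elem: int, payload: bytes) -> bytes:
        if len(payload) % 2:          # DICOM item payloads must be even-length
            payload += b"\x00"
        # Tag (group, element) and 32-bit length, all little-endian.
        return struct.pack("<HHI", group, elem, len(payload)) + payload

    out = item(0xFFFE, 0xE000, b"")   # empty Basic Offset Table
    for frame in frames:
        out += item(0xFFFE, 0xE000, frame)
    out += struct.pack("<HHI", 0xFFFE, 0xE0DD, 0)  # sequence delimiter
    return out

# Hypothetical 20-byte stand-in for a JPEG codestream (SOI ... EOI markers).
blob = encapsulate([b"\xff\xd8...jpeg bytes...\xff\xd9"])
```

The codec does the hard work; the DICOM layer really is just framing, which is also why swapping JPEG 2000 for JPEG XL is mostly a matter of registering a new transfer syntax rather than redesigning the container.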
adolph•4mo ago
After reading a bit about JPEG XL [0], the bit depth, channel count, and pixel count seem promising. Devil's in the details. How will multiple focal planes be implemented?

https://www.abyssmedia.com/heic-converter/avif-heic-jpegxl.s...

throwaway9415•4mo ago
I find Philips quite an interesting player in the imaging, PACS, and related space. On the innovation side they might be coming up with new technologies and solutions, but on the service and delivery side they provide quite an awful service, at least in my personal experience in Singapore. I have friends at various healthcare institutions, and we were just surprised at how bad the service could be from a vendor considered such a valuable one in the industry.
tokyovigilante•4mo ago
Their RIS and PACS software is also objectively poor, and they actively promote vendor lock-in with solutions like iSyntax in the old IntelliSpace, and a horrifically bad and non-conforming IHE SWF implementation in Vue (which is partly Carestream's fault, to be fair).

They will also prefer to gaslight their clients rather than fix issues, and good luck if you’re already committed to an (un)managed service from them.

JyrkiAlakuijala•4mo ago
Seeing JPEG XL integrated into the DICOM standard was a particularly proud moment for me as the manager of the effort at Google.

It felt like closing a major circle in my career: I spent the first 16 years of my career in the medical industry working on neurosurgical robots (the Oulu Neuronavigator System), and one of the first tools I built was an ACR-NEMA 1.0 parser, ACR-NEMA being the direct predecessor to DICOM. I then continued with radiation treatment planning systems, with plenty of DICOM work within them. To now contribute back to that very standard is incredibly rewarding.