Curious if animated SVGs are also a thing. I remember seeing some JavaScript-based SVG animations (it was an animated chatbot avatar), but I'm not sure if there is any standard framework.
Yes, SVG has built-in animation elements (see the sketch after this list):
• <set>
• <animate>
• <animateTransform>
• <animateMotion>
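A minimal self-contained sketch using <animate>; save it as a .svg file and any modern browser will play it, no JavaScript required:

  <svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">
    <circle cx="20" cy="60" r="12" fill="teal">
      <!-- slide the circle back and forth, forever -->
      <animate attributeName="cx" values="20;100;20" dur="2s" repeatCount="indefinite"/>
    </circle>
  </svg>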
This could possibly be used to build full-fledged games like Pong and Breakout :)
https://shkspr.mobi/blog/2025/06/an-annoying-svg-animation-b...
Can animated PNG beat AV1 or whatever?
[0] like for example these old Windows animations: https://www.randomnoun.com/wp/2013/10/27/windows-shell32-ani...
The AV1 spec [1] does not allow RGB color spaces, therefore AV1 cannot preserve RGB animations in a bit-identical fashion.
Animated PNGs can't beat GIF, never mind video compression algorithms.
Not entirely true; it depends on what's being displayed. See a few simple tests specifically constructed to show how much better APNG can be vs GIF and WebP (lossless and lossy): http://littlesvr.ca/apng/gif_apng_webp.html
Of course I don't think it generalizes all that well…
That's not really true. Some websites lie to you by putting .gif in the address bar but then serving a file of a different type. File extensions are merely a convention, and an address isn't a file name to begin with, so the browser doesn't care about this attempt at end-user deception one way or the other.
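Easy to verify yourself: the first few bytes of the file (the magic numbers) identify the real format, whatever the address says. A quick sketch in Python (the file name is made up):

  with open("definitely-a.gif", "rb") as f:  # the extension can lie; the bytes don't
      magic = f.read(8)
  print(magic == b"\x89PNG\r\n\x1a\n")        # True if it's actually a PNG
  print(magic[:6] in (b"GIF87a", b"GIF89a"))  # True if it's really a GIF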
SVG is basically HTML5: it has full support for CSS, JavaScript with buttons, web workers, arbitrary fetch requests, and so on (though obviously none of that is supported by image viewers, and browsers don't allow it when an SVG is loaded as an image).
Nowadays, AVIF serves that purpose best I think.
Probably the best news here. While you already can write custom data into a text chunk, having Exif is good.
BTW: Does Exif have magnetometer (rotation) and acceleration (gravity) fields? I often wonder why Google isn't saving this information in the images the camera app saves. It could help so much with post-processing, like leveling the horizon or creating panoramas.
Old decoders and new decoders could now render an image with Exif rotation differently, since it's an optional chunk that can be ignored. And even for new decoders, the spec gives no recommendations for how to use the Exif rotation.
It does say "It is recommended that unless a decoder has independent knowledge of the validity of the Exif data, the data should be considered to be of historical value only.", so hopefully the rotation will not be used by renderers. But it's only a vague recommendation; there's no strict "don't rotate the image", which would be the only backwards-compatible way.
With JPEG's Exif, there have also been bugs with the rotation being applied twice, e.g. the desktop environment and the underlying library both doing it independently.
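For context, Exif stores this in the Orientation tag (0x0112), with values 1 (upright), 3 (upside down), 6 (rotate 90° clockwise to display), 8 (rotate 90° counter-clockwise), and mirrored variants in between. The applied-twice bug happens when both a library and the application on top of it see the same tag and each rotates.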
The camera knows which way it's oriented, so it should just write the pixels out in the correct order. Write the upper-left pixel first. Then the next one. And so on. WTF.
If a smartphone camera is doing it, then bad camera app!
This is particularly important on smartphones and battery-operated devices. However, most smartphones simply save the photo the same way regardless of orientation and add a display-rotation flag to the metadata.
It can be super annoying sometimes, as one can't really disable the feature on many devices. =3
It's basically a shame that the Exif metadata contains things that affect the rendering.
Exif fields: https://exiv2.org/tags.html
"photo scanned in 2025, is about something in easter, before 1940 and after 1920"
For ambiguous dates there is the EDTF spec[1], which would be nice to see more widely adopted.
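For instance, the case quoted above maps fairly naturally onto EDTF notation: 1920/1940 expresses an interval, [1921..1939] means "one of these years", and qualifiers like ? (uncertain) and ~ (approximate) can be attached, e.g. 1930~.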
[0] https://www.media.mit.edu/pia/Research/deepview/exif.html
Different software reacts in different ways to partial specifications of yyyy/mm/dd, such that you can try some of the cute tricks, but probably only one software package honours it.
And the majors ignore almost all fields other than a core set of one or two, disagree about their semantics, and also do weird stuff with the file name and atime/mtime.
Pleasantly surprised.
PNG is popular with some commercial application developers, but the exposure and color problems still look 1980s-awful in some use-cases.
Even after spending a few grand on seats for a project, one still gets arrogant 3D clown-ware vendors telling people how they should run their pipeline with PNG hot garbage as input.
People should choose EXR more often, and pick a consistent color standard. PNG does not need yet another awful encoding option. =3
A very basic rec.709 workflow tutorial:
https://www.youtube.com/watch?v=lf8COHAgHJs
The Andreas Dürr LUT pack:
https://www.youtube.com/watch?v=dDKK54CeXgM
https://cinematiccookie.gumroad.com/l/bseftb?layout=profile
The calibration workflows also depend heavily on what is being rendered, source application(s), and the desired content look. There were some common free packs on github for popular programs at one time. Should still be around someplace... good luck. =3
What are you talking about? It's a bitmap. It has nothing to do with "exposure and color problems."
If you've never encountered the use-case, then don't worry about the aesthetics. Seriously, many vendors also just don't care... especially after they've already been paid. Best of luck =3
After 20 years of success, we can't resist the temptation to mess with what works.
It has, but the WWW is still de facto sRGB, and will be for a long time still. But again, I'm not strictly opposed to evolving PNG; I just hope they don't ruin it in the process, because that's usually what happens when something gets updated for a modern audience. I'll be watching with mixed optimism and concern.
The continued popularity of non-HDR 1080p screens on laptops is a bleak reminder that most people would rather save a couple hundred bucks than buy HDR capable hardware.
HDR is great for TVs and a nice-to-have on phones (which mostly get it for free because OLEDs are the norm these days), but display technology only advances as much as its availability in low-cost devices.
Not sure how HDR encoding works, but my impression is that you can set a nominal white point other than (1, 1, 1) in your specified colorspace. This is an extension, but orthogonal to specifying the colorspace itself and the gamut.
For example, 16-bit (integer) TIFF files "with headroom", i.e. where some bits were used to represent data over 1.0 (HDR), were a common approach for VFX work in the '90s.
16-bit float TIFF has also been a thing for 33 years. Adobe DNG is modeled after TIFF. High-end offline renderers have traditionally used TIFF (with mip-maps) to store textures.
TIFF supports tags so primaries and white point or a known color space name can be stored in the file.
The format is so versatile, it is used everywhere.
And of course it also supports indexed color, i.e. a non-negotiable feature at the time PNG was introduced.
PNG was meant to replace GIF. Instead of looking at what was already there, some group of "experts" and "enthusiasts" (quoting Wikipedia) succumbed to their NIH complexes. If licensing/patent woes over compression algorithms had been a motivator, why not just add a new one to TIFF?
The fact that PNG stores straight/unpremultiplied alpha says everything if you know anything about imaging in computer graphics.
And the fact that the just-released update to the spec didn't address this tells you everything you need to know about the group in charge of it today.
PNG is the VHS of image formats. It should never have seen the light of day in the first place, nor gotten the adoption it did.
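To make the alpha point above concrete: when pixels are filtered or resized, straight alpha lets the RGB of fully transparent pixels bleed into the result, which premultiplied alpha avoids by construction. A small Python sketch with made-up pixel values:

  # Two neighboring RGBA pixels: opaque red next to a fully
  # transparent pixel whose stored RGB happens to be green.
  straight = [(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 0.0)]
  print(tuple(sum(c) / 2 for c in zip(*straight)))
  # (0.5, 0.5, 0.0, 0.5) -> the invisible green bleeds in as a fringe

  # Premultiplied: RGB is scaled by alpha first, so transparent pixels vanish.
  premul = [(r * a, g * a, b * a, a) for (r, g, b, a) in straight]
  print(tuple(sum(c) / 2 for c in zip(*premul)))
  # (0.5, 0.0, 0.0, 0.5) -> correct half-covered red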
Yeah, I love the fact that you can embed a PDF file inside a TIFF.
How can you call this basic fail a success?
Back then, there were no C# libraries for it, but it's actually quite easy to make an APNG from PNGs directly by writing chunks with the correct headers; no encoders needed (assuming the input PNGs are already encoded).
https://github.com/NightElfik/Malsys/blob/master/src/Malsys....
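The chunk plumbing really is the whole trick: wrap each extra frame's compressed data in fdAT chunks and add acTL/fcTL control chunks, each chunk being length + type + payload + CRC. A rough Python sketch of those helpers (helper names are mine; frame extraction and file assembly omitted):

  import struct, zlib

  def write_chunk(out, ctype, payload):
      # A PNG chunk is: 4-byte big-endian length, 4-byte type,
      # payload, then CRC32 over type + payload.
      out.write(struct.pack(">I", len(payload)))
      out.write(ctype)
      out.write(payload)
      out.write(struct.pack(">I", zlib.crc32(ctype + payload) & 0xFFFFFFFF))

  def write_acTL(out, num_frames, num_plays=0):
      # acTL = animation control; must precede the first IDAT.
      # num_plays=0 means loop forever.
      write_chunk(out, b"acTL", struct.pack(">II", num_frames, num_plays))

  def write_fcTL(out, seq, width, height, delay_num=1, delay_den=10):
      # fcTL = frame control; x/y offsets 0, default dispose/blend ops.
      write_chunk(out, b"fcTL", struct.pack(
          ">IIIIIHHBB", seq, width, height, 0, 0, delay_num, delay_den, 0, 0))

  def write_fdAT(out, seq, idat_payload):
      # fdAT = frame data; an IDAT payload prefixed with a sequence number.
      write_chunk(out, b"fdAT", struct.pack(">I", seq) + idat_payload)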
While I welcome that there is now PNG with animations, I am less impressed by how Mozilla chose to push for it.
Using PNG's magic numbers and pretending to existing software that it is just a normal PNG? That is the same mindset that led to HTML becoming tag soup. After all, HTML with a <blink> tag is still HTML, no?
I think they could have achieved animated PNG standardization much faster with a more humble and careful approach.
Lossless AVIF is not competitive.
However, lossless WebP does not support indexed color images. If you need palettes, you're stuck with PNG for now.
I did skim through the spec; it seems most of it is related to cleanup and optional chunks, so it seems PNG is still safe. Am I wrong? (Asking those who did dive into the new spec deeply.)
Society doesn't need a new image format. I'd wager to say it doesn't need any new multimedia format. Big corporate entities do, and they have been churning them out at a steady pace.
Look at poor webp - a format pushed by the largest industry players - and the abysmal everyday use it gets, and the hate it generates.
Estimates are that 95% of Internet users have a browser that supports WebP and that ~25% of the top million websites serve WebP images. I wouldn't call that abysmal.
LeoPanthera•5h ago
This worries me. Because presumably, changing the compression algorithm will break backwards compatibility, which means we'll start to see "png" files that aren't actually png files.
It'll be like USB-C but for images.
skywal_l•5h ago
That being said, they can also do dumb things. However, right at the end of the sentence you quote, they say:
> we want to make sure we do it right.
So there's hope.
masklinn•4h ago
That's just changing an implementation detail of the encoder, and you don't need spec changes for that; e.g. there are PNG compressors which support zopfli for extra gains on the DEFLATE (at a not-insignificant cost). This is transparent to the client as the output is still just a DEFLATE stream.
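For instance, zopflipng (from Google's zopfli project) typically shaves off a few percent at a large CPU cost, and the output is still a perfectly ordinary PNG:

  zopflipng in.png out.png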
snvzz•2h ago
Now, PNG datatype for AmigaOS will need upgrading.
Arnt•1h ago
https://svgees.us/blog/img/revoy-cICP-bt.2020.png uses the new colour space. If your software and monitor can handle it, you see better colour than I do; otherwise, you see what I see.
Lerc•4h ago
The PNG format is specifically designed to allow software to read the parts they can understand and to leave the parts they cannot. Having an extensible format and electing never to extend it seems pointless.
mort96•4h ago
Yeah, we know. That's terrible.
koito17•4h ago
This proves OP's analogy regarding USB-C. Having PNG as some generic container for lossless bitmap compression means fragmentation in libraries, hardware support, etc. The reason being that if the container starts to support too many formats, implementations will start restricting themselves to only the subsets the implementers care about.
For instance, almost nobody fully implements MPEG-4 Part 3; the standard includes dozens of distinct codecs. Most software only targets a few profiles of AAC (specifically, the LC and HE profiles), and MPEG-1 Layer 3 audio. Next to no software bothers with e.g. ALS, TwinVQ, or anything else in the specification. Even libavcodec, if I recall correctly, does not implement encoders for MPEG-4 Part 3 formats like TwinVQ. GP's fear is exactly this -- that PNG ends up as a standard too large to fully implement and people have to manually check which subsets are implemented (or used at all).
bayindirh•3h ago
The same is also true of the most advanced codecs; the MPEG-* family and MP3 come to mind.
Nothing stops PNG from defining a "set of decoders" and letting implementers loose on that spec to develop encoders which generate valid files. Then developers can go to town with their creativity.
fc417fc802•2h ago
Regarding the potential for fragmentation of the PNG ecosystem, the alternative is a new file format which has all the same support issues. Every time you author something, you make a choice between legacy support and using new features.
From a developer perspective, adding support for a new compression type is likely to be much easier than implementing logic for an entirely new format. It's also less surface area for bugs. In terms of libraries, support added to a dependency propagates to all consumers with zero additional effort. Meanwhile adding a new library for a new format is linear effort with respect to the number of programs.
7bit•2h ago
Not sure what you're talking about.
Arnt•2h ago
If you want to check yours: mediainfo **/*.mp4 | grep -A 2 '^Audio' | grep Format | sort | uniq -c
https://en.wikipedia.org/wiki/TwinVQ#TwinVQ_in_MPEG-4 tells the story of TwinVQ in MPEG-4.
cm2187•1h ago
And now think of the younger generation that has grown up with smartphones and has been trained to not even know what a file is. I remember the story about senior high school students failing their school tests during COVID because the school software didn't support HEIF files, and they were changing the file extension to .jpg in an attempt to convert them.
I have no trust that the software ecosystem will adapt. For instance, the standard libraries of the .NET Framework are fossilised in the multimedia world of 2008 or so. I don't believe HEIF is even supported to this day. So that's a whole bunch of code which, unless the developers create workarounds, will never support a newer PNG format.
pvorb•3h ago
If you've created an extensible file format, but you never need to extend it, you've done everything right, I'd say.
jajko•3h ago
That's what I would call really extensible, but then there may be no limits, and hacking/viruses could easily have a field day.
lelanthran•2h ago
Will sooner or later be used to implement RCEs. Even if you could do a restriction as is done for eBPF, that code still has to execute.
Best would be not to extend it.
dooglius•3h ago
So then it was pointless for PNG to be extensible? Not sure what your argument is.
chithanh•3h ago
In an ideal world, yes. In practice however, if some field doesn't change often, then software will start to assume that it never changes, and break when it does.
TLS learned this the hard way when it was discovered that huge numbers of existing web servers have TLS version intolerance. So now TLS 1.2 is forever enshrined in the ClientHello.
shiomiru•2h ago
And considering we already have plenty of more advanced competing lossless formats, I really don't see why "feed a BMP to deflate" needs a new, incompatible spin in 2025.
fc417fc802•2h ago
Other than JXL which still has somewhat spotty support in older software? TIFF comes to mind but AFAIK its size tends to be worse than PNG. Edit: Oh right OpenEXR as well. How widespread is support for that in common end user image viewer software though?
Arnt•1h ago
More generally, PNG has a simple feature to specify what's needed. A file consists of a number of chunks, and one bit in the chunk specifies whether that chunk is required for display. All of the extensions I've seen in the past decades set that bit to "optional".
For example, this update includes a chunk containing EXIF data. As you'd expect, the exif chunk sets that bit to "optional".
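Concretely, that bit is bit 5 of the first byte of the four-character chunk type, which is why critical chunks have an uppercase first letter (IHDR, IDAT) and ancillary ones a lowercase one (tEXt, eXIf). A quick Python check:

  def is_ancillary(chunk_type: bytes) -> bool:
      # Bit 5 set = lowercase first letter = decoders may safely skip it.
      return bool(chunk_type[0] & 0x20)

  print(is_ancillary(b"IHDR"))  # False: critical, must be understood
  print(is_ancillary(b"eXIf"))  # True: optional, may be ignored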
danielheath•3h ago
E.g. your GPU and monitor both have a USB-C port. Plug them together with the right USB cable and you'll get images displayed. Plug them together with the wrong USB cable and you won't.
USB 3 didn't have this issue - every cable worked with every port.
yoz-y•3h ago
I believe the problem here is that you will have PNG images that “look” like you can open them but can’t.
danielheath•3h ago
If PNG gets extended, it's entirely plausible that someone will view a PNG in their browser, save it, and then not be able to open the file they just saved.
There are those who claim "backwards compatibility" doesn't cover "how you use it" - but roughly none of the people who now have to deal with broken software care about such semantic arguments. It used to work, and now it doesn't.
johnisgood•3h ago
Do they mention which C libraries use this spec?
mrheosuper•2h ago
The USB-C spec is anything but breaking backwards compatibility.
fc417fc802•2h ago
It's a dichotomy. Either the provider accommodates users with older software or not. The file extension or internal headers don't change that reality.
Another example: new versions of PDF can adopt all the bells and whistles in the world, but I will still be saving anything intended to be long-lived as PDF/A-1, which means I don't get to use any of those features.
ay•2h ago
This is just pretending that if you have a cat and a dog in two bags and you call it “a bag”, it’s one and the same thing…
lelanthran•2h ago
Labelling is a poor band-aid on the root problem - consumer cables which look identical and fit identically should work wherever they fit.
There should never have been a power-only spec for USB-C socket dimensions.
If a cable supports both power and data, it must fit in all sockets. If a cable supports only power, it must not fit into a power-and-data socket. If a cable supports only data, it should not fit into a power-and-data socket.
It is possible to have designed the sockets under these constraints, with the caveat that they only go in one way. I feel that that would have been a better trade-off. Making them reversible means that you cannot have a design which enforces cable type.
lelanthran•1h ago
Well, yes.
Why can't you use a power+data cable for the vape (or whichever appliance takes both)? What's the deal-breaker here?
The alternative is labeling, or plugging cables in to see if they do what you want them to do.
Both are a poor user interface.
mystifyingpoi•1h ago
That's even more confusing than the current state of affairs. If my phone has a power-and-data socket, then I cannot use a power-only cable to only charge it? Presumably with a charger that has a power-only socket. So I need a cable with two different ends anyway. Just go micro-USB at this point :)
Funnily enough, there is a 100% overkill way to solve such issues: just use super expensive certified Thunderbolt cables. Well... plus an A-to-C adapter for noncompliant devices, I guess.
mystifyingpoi•3h ago
What was broken was the promise of a "single cable to rule them all", partly due to manufacturers ignoring the requirements of USB-C (missing resistors or PD chips to negotiate voltages, requiring workarounds with A-to-C adapters), and partly due to a myriad of optional stuff that might be supported or not, without a clear way to indicate it.
josephg•3h ago
> Many of the programs you use already support the new PNG spec: Chrome, Safari, Firefox, iOS/macOS, Photoshop, DaVinci Resolve, Avid Media Composer...
It might be too late to rename png to .png4 or something. It sounds like we're using the new png standard already in a lot of our software.
jillesvangurp•2h ago
The main use case for PNG is web browsers, and all of them seem to be on board. Using old web browsers is a bad idea. You do get these relics showing up using some old version of Internet Explorer, but some images not rendering is the least of their problems. The main challenge is actually going to be updating graphics tools to export the new files, and teaching people that sRGB maybe isn't good enough any more. That's going to be hard, since most people have no clue about color spaces.
Anyway, that gives everybody plenty of time to upgrade. By the time this stuff is widely used, it will be widely supported. So, you kind of get forward compatibility that way. Your browser already supports the new format. Your image editor probably doesn't.
ajnin•1h ago
Also, if you forbid evolving existing formats, the only way to improve is to introduce a new format, and I'd argue that would cause even more fragmentation and be more difficult to adopt. Look at all the drama surrounding JPEG XL.
altairprime•52m ago
I'm not saying this is what will happen, but if I were able to construct a plausible approach to compression in ten minutes, then perhaps it's a bit early to predict the doom of compatibility.