frontpage.

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•52s ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•3m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
1•tosh•3m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•3m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•6m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•9m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•12m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•12m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•14m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•14m ago•5 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•18m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•20m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•23m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•24m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•29m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•31m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•34m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•34m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•35m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•40m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•46m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•47m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•52m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•54m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•1h ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•1h ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•1h ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Is the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

Shipping textures as PNGs is suboptimal

https://gamesbymason.com/blog/2025/stop-shipping-pngs/
149•ibobev•5mo ago

Comments

gmueckl•5mo ago
This advice is only practical if you have proper import tooling that can transparently do this conversion for your engine and preserves import settings per asset. Otherwise, this just adds a ton of fragile manual steps to asset creation.
LexiMax•5mo ago
This sort of batch asset conversion can be pretty easily automated with some combination of scripting and/or a makefile (a sketch follows below).

Taking the time to learn about these compression methods is well worth it: once you start loading assets at any appreciable scale or number, the savings in asset loading times add up, turning what might have been a noticeable, jarring pause or stutter into something much less noticeable.

Think about it: with hardware texture formats, not only do you avoid decoding the PNG on the CPU, you're also literally uploading less data to the GPU overall, since it can be transferred in compressed form.
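
A minimal sketch of that kind of automation, assuming a source tree of PNGs and some external compressor CLI ("texture-compressor" here is a placeholder, not a real tool; swap in whatever your pipeline actually uses):

    import subprocess
    from pathlib import Path

    SRC, OUT = Path("assets/src"), Path("assets/baked")

    def convert_all():
        # Make-like behaviour: only re-encode when the output is missing or stale.
        for src in SRC.rglob("*.png"):
            dst = OUT / src.relative_to(SRC).with_suffix(".ktx2")
            if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
                continue
            dst.parent.mkdir(parents=True, exist_ok=True)
            # Placeholder command name; real pipelines call toktx, compressonator, etc.
            subprocess.run(["texture-compressor", str(src), "-o", str(dst)], check=True)

    if __name__ == "__main__":
        convert_all()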

gmueckl•5mo ago
It's not so much the upload time/bandwidth, but some of the lossy compression algos are really slow.
flohofwoe•5mo ago
Why does that matter though when the compression happens 'offline' in the asset pipeline?
reactordev•5mo ago
BS, Blender exports KTX2, your engine supports KTX2, Gimp exports DDS/KTX2, Substance Painter exports DDS/KTX2, three.js has a KTX2 converter.

Using image formats as texture formats is amateur game development. PNGs are fine for icons or UI, or while you're still trying to figure out what you're doing. Once you know, you switch to texture formats that need no translation: they load faster, support physical layers and compression, and are already in BGRA format.

socalgal2•5mo ago
You should always start from source IMO.

> Blender exports KTX2, your engine supports KTX2, Gimp exports DDS/KTX2, Substance Painter exports DDS/KTX2, three.js has a KTX2 converter.

If you're manually doing this conversion from source images to shipping formats, you're wasting your artists' time, AND you'll likely lose the source images, since people will generally only hand over what's needed to build. "Hey, could you tweak building-wall-12.ktx?" "It was made in Photoshop and I can't find the file with the 60 layers, so no, I can't tweak it. Sorry."

reactordev•5mo ago
In this case, the sources are TIFFs, pulled from the company texture CDN... I agree you start with high-resolution sources, but you don't ship lower-resolution PNGs unless you're on an indie game or making a browser game.
exDM69•5mo ago
Note that KTX2 is a container format. It can contain uncompressed or compressed texture data. Some of the tools you mention might support "KTX2" but they won't compress textures for you, they just put the uncompressed data in a KTX2 container.

And my out-of-the-box GIMP 3.0.4 doesn't have KTX export at all. Didn't check the other tools you mention.

reactordev•5mo ago
Then you’re missing some plugins that were installed by default in mine.
flohofwoe•5mo ago
This can be solved with a few lines of python and a naming convention for texture file names (or alternatively a small json file per texture with metadata).
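
For illustration, one possible shape of that script; the suffixes, formats, and sidecar layout below are invented for the example rather than taken from any particular engine:

    import json
    from pathlib import Path

    # Map a file-name suffix to compression settings (values are illustrative).
    SUFFIX_RULES = {
        "_albedo": {"format": "BC7",  "srgb": True,  "mipmaps": True},
        "_normal": {"format": "BC5",  "srgb": False, "mipmaps": True},
        "_mask":   {"format": "BC4",  "srgb": False, "mipmaps": True},
        "_hdr":    {"format": "BC6H", "srgb": False, "mipmaps": True},
    }
    DEFAULT = {"format": "BC7", "srgb": True, "mipmaps": True}

    def settings_for(texture: Path) -> dict:
        # Optional JSON sidecar (e.g. wall_albedo.json) overrides the naming convention.
        sidecar = texture.with_suffix(".json")
        if sidecar.exists():
            return {**DEFAULT, **json.loads(sidecar.read_text())}
        for suffix, rules in SUFFIX_RULES.items():
            if texture.stem.endswith(suffix):
                return rules
        return DEFAULT
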
LexiMax•5mo ago
> Unfortunately, AFAICT most people end up rolling their own exporters.

Aside from the closed-source NVIDIA texture tool, I'm also aware of AMD's Compressonator, and Microsoft's DirectXTex implementation of texconv. Intel also used to maintain the ISPC texture compressor, but afaik it's also abandoned.

drak0n1c•5mo ago
Similarly, in mobile apps where install size is key, designed image assets are better exported as SVG or platform draw code. There are tools to do this conversion easily, and the benefit of draw code is that it is crisp at all sizes and can be dynamically colored and animated. Whether or not an app does this is often the reason why some relatively basic apps are 1 MB while others are 100+ MB.
Stevvo•5mo ago
Any modern engine does this automatically on PNG import, or as part of material/shader setup. You want different formats for different things, e.g. AO and normals, because different formats have different compression artifacts.
pimlottc•5mo ago
How much extra time does it add to startup? For a single texture it's probably negligible, but for a modern game we're talking about thousands of textures, so maybe it adds up?
charlie90•5mo ago
I did some casual testing a while back of a PNG vs. a compressed texture, and it's like 20x faster, so yes, a big difference. Most of the speedup is from not needing to calculate mipmaps, since they're already precalculated.
midnightclubbed•5mo ago
If you're compressing at load time it can be a lot. You can do a quick-and-dirty compress to BC1-3 (assuming you are on PC) in a few tens of milliseconds, but quality will be suboptimal. Higher quality will likely take a few seconds per texture, and going to BC6 takes even longer but will look much better.
Dylan16807•5mo ago
> higher quality will likely take a few seconds per texture

It takes that long to do a good compression on the format that interpolates 2 colors for each 4x4 block? Huh.

midnightclubbed•5mo ago
It's not entirely trivial (if you care about texture quality): you're choosing two endpoints in 3D space whose linear interpolation best fits 16 points in 3D space (see the sketch below).

I may have been a little off with saying seconds per texture, but it's definitely a non-trivial amount of time. And ETC (mobile) and BC7 are certainly still not something you want to compress to at game load time.

Some comparisons of texture compression speeds for various formats (from 2020, but on a 12-core i9): https://www.aras-p.info/blog/2020/12/08/Texture-Compression-...
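
To make that concrete, here's a deliberately naive BC1 encoder for a single 4x4 block, just to show the shape of the problem: it takes the darkest and brightest pixels as endpoints, whereas real encoders search the endpoint space much harder, which is where the time goes.

    def quantize_565(rgb):
        r, g, b = rgb
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

    def dequantize_565(c):
        r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
        return (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)

    def encode_bc1_block(pixels):  # pixels: 16 (r, g, b) tuples, row-major
        # Naive endpoint choice: darkest and brightest pixel by luminance.
        lum = lambda p: 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]
        lo, hi = min(pixels, key=lum), max(pixels, key=lum)
        c0, c1 = quantize_565(hi), quantize_565(lo)
        if c0 < c1:                  # c0 > c1 selects the opaque four-colour mode
            c0, c1 = c1, c0
        e0, e1 = dequantize_565(c0), dequantize_565(c1)
        # Palette: the two endpoints plus two interpolated colours.
        palette = [e0, e1,
                   tuple((2 * a + b) // 3 for a, b in zip(e0, e1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(e0, e1))]
        dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
        indices = [min(range(4), key=lambda i: dist(p, palette[i])) for p in pixels]
        bits = 0
        for i, idx in enumerate(indices):
            bits |= idx << (2 * i)
        # 64 bits per block: two RGB565 endpoints + a 2-bit index per pixel (4 bpp).
        return c0.to_bytes(2, "little") + c1.to_bytes(2, "little") + bits.to_bytes(4, "little")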

hrydgard•5mo ago
BC1-3 are like that, but BC7 is far, far more complex. The search space is huge.
exDM69•5mo ago
It's an open ended optimization problem. You can get bad results very quickly but on high quality settings it can take minutes or hours to compress large textures.

And the newer compression formats have larger block sizes from 8x8 to even 12x12 pixels. ASTC and BC7 are a different beast to the early texture compression formats. You can get pretty awesome image quality at 2 bits per pixel (and slightly less awesome at less than a bit per pixel).
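
The arithmetic behind those bit-rate figures, given the fixed block sizes involved (every ASTC block is 128 bits regardless of footprint; BC1 is 64 bits and BC7 128 bits per 4x4 block):

    # bits per pixel = block bits / texels per block
    for name, bits, (w, h) in [("BC1", 64, (4, 4)), ("BC7", 128, (4, 4)),
                               ("ASTC 8x8", 128, (8, 8)), ("ASTC 12x12", 128, (12, 12))]:
        print(f"{name:11s} {bits / (w * h):.2f} bpp")
    # -> BC1 4.00, BC7 8.00, ASTC 8x8 2.00, ASTC 12x12 0.89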

flohofwoe•5mo ago
I assume parent means "import" as in "asset pipeline import", not importing at game start. Using hardware-compressed texture formats will actually speed up loading time since you can dump them directly into a 3D API texture without decoding.
cout•5mo ago
I have no idea how things are done today, but in the OpenGL 1.x era multiple textures would sometimes be stored in a single image, so to get a different texture you would pick different texture coords.
declan_roberts•5mo ago
It sucks to go looking for the "better" option and find that there actually is no better option other than bespoke export tools.
jms55•5mo ago
I've been evaluating texture compression options for inclusion in Bevy (https://bevy.org), and there just aren't really any good options?

Requirements:

* Generate mipmaps

* Convert to BC and ASTC

* Convert to ktx2 with zstd super-compression

* Handle color, normal maps, alpha masks, HDR textures, etc

* Open source

* (Nice to have) runs on the GPU to be fast

I unfortunately haven't found any option that covers all of these points. Some tools only write DDS, or don't handle ASTC, or want to use basis universal, or don't generate mipmaps, etc.

raincole•5mo ago
What did bevy end up using?
jms55•5mo ago
Nothing, I haven't found a good option yet.

We do have the existing bindings to a 2-year old version of basis universal, but I've been looking to replace it.

pornel•5mo ago
I don't think you'll find anything much better than basis universal, assuming you want textures compressed on the GPU and the simplicity of shipping one file that decodes quickly enough. I've followed development of the encoder, and its authors know what they're doing.

You might beat basisu if you encode for one texture format at a time, and use perceptual RDO for albedo textures.

Another alternative would be to use JPEG XL for distribution and transcode to GPU texture formats on install, but you'd have to ship a decent GPU texture compressor (fast ones leave quality on the table, and it's really hard to make a good one that isn't exponentially slower).

raincole•5mo ago
Is basis universal that bad? I thought it was more or less invented for this purpose.
xyzsparetimexyz•5mo ago
Why not just use BC7 (BC6H for HDR) textures on desktop and ASTC on mobile? There's no need to ship a format like basisu that converts to both. There are Rust bindings for both the Intel ISPC BC(n) compressor and ASTCenc.

Oh but if you care about GPU support then I'm pretty sure https://github.com/GPUOpen-Tools/Compressonator has GPU support for BC(n) compression at least. Could rip those shaders out.

flohofwoe•5mo ago
Basis Universal also gives you much smaller size over the wire than hardware compressed formats (closer to jpg), which is important at least for web games (or any game that streams asset data over the net).
BriggyDwiggs42•5mo ago
Oh cool I used bevy for something. Really good shit
cubefox•5mo ago
Soon this will all be outdated again because neural texture compression is on its way to replace block compression, with substantially higher compression ratios both on disk and in VRAM.
flohofwoe•5mo ago
It will be interesting to see whether this 'prefers' some types of textures over others. Also how well it works for non-image textures like normal maps or other material system parameters.
cubefox•5mo ago
The more efficient versions compress the entire material stack in one, to exploit inherent redundancies, including albedo, normals, roughness etc. The main problem, as far as I can tell, is the increased GPU performance cost compared to block compression. Though recent research shows some versions are already pretty manageable on modern GPUs with ML acceleration: https://arxiv.org/abs/2506.06040

They cite 0.55ms per frame at 1080p on a recent Intel GPU. The budget for one frame at 60 FPS is 16.7ms, so the performance cost would be 3.3% of that. Though higher resolutions would be more expensive. But they mention there is room for performance improvements of their method.

The other problem is probably that it isn't very practical to ship multiple versions of the game (the old block compressed textures for lower end hardware without ML acceleration), so adoption may take a while.

rudedogg•5mo ago
Do you still want to do this for SDF font textures? Or is the lossiness an issue?
masonremaley•5mo ago
Good question. I don’t have any authored SDF content right now so take this with a grain of salt, but my thoughts are:

1. Fonts are a very small percent of most games’ storage and frame time, so there’s less motivation to compress them than other textures

2. Every pixel in a font is pretty intentional (unlike in, say, a brick texture) so I’d be hesitant to do anything lossy to it

I suspect that a single channel SDF for something like a UI shape would compress decently, but you could also just store it at a lower resolution instead since it’s a SDF. For SDF fonts I’d probably put them through the same asset pipeline but turn off the compression.

(Of course, if you try it out and find that in practice they look better compressed than downscaled, then you may as well go for it!)

[EDIT] a slightly higher level answer—you probably wouldn’t compress them, but you’d probably still use this workflow to go from my_font.ttf -> my_font_sdf.ktx2 or such.

rudedogg•5mo ago
Makes sense, thanks for the insight.
xeonmc•5mo ago
Tangential: where do pixel art textures fall in this consideration for asset compression?
masonremaley•5mo ago
That’s also a good question.

I personally wouldn’t compress pixel art—the artist presumably placed each pixel pretty intentionally so I wouldn’t wanna do anything to mess with that. By pixel art’s nature it’s gonna be low resolution anyway, so storage and sample time are unlikely to be a concern.

Pixel art is also a special case in that it’s very unlikely you need to do a bake step where you downsize or generate mipmaps or such. As a result, using an interchange format here could actually be reasonable.

If I was shipping a pixel art title I’d probably decide based on load times. If the game loads instantly with whichever approach you implement first then it doesn’t matter. If it’s taking time to load the textures, then I’d check which approach loads faster. It’s not obvious a priori which that would be without measuring—it depends on whether the bottleneck is decoding or reading from the filesystem.

TinkersW•5mo ago
A single channel SDF can be encoded to BC4 with fairly good quality, and it can actually represent a wider range of values than a u8 texture... but with the downside of only having 8 values per 4x4 block.

So if the texture is small I'd use u8, for a very large texture BC4 isn't a bad idea.

exDM69•5mo ago
Not for multi-channel SDFs at least. Texture compression works terribly with "uncorrelated" RGB values, since the formats work in chroma/luminance terms rather than RGB. For uncorrelated values like normal maps, there are texture compression formats specifically for that (RGTC).

However, your typical MSDF font texture has three uncorrelated color channels and afaik there isn't a texture compression format with three uncorrelated channels.

jam•5mo ago
Are there good texture generators that export in KTX2? I use iliad.ai mainly, but for PNGs.
shmerl•5mo ago
Surprisingly I don't see ktx tools in Debian.

https://github.com/KhronosGroup/KTX-Software

cshores•5mo ago
Developers should revisit using indexed color formats so they only map the colors they are actually using within the texture rather than an entire 32-bit color space. This, coupled with compression, would greatly reduce the amount of RAM and disk space that each texture consumes.
flohofwoe•5mo ago
BC1 uses 4 bits per pixel without being limited to 16 colors though.

In a way the hardware-compressed formats are paletted; they just use a different color palette for each 4x4 pixel block.

Having said that, for retro-style pixel art games traditional paletted textures might make sense, but when doing the color palette lookup in the pixel shader anyway you could also invent your own more flexible paletted encoding (like assigning a local color palette to each texture atlas tile).
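
A rough sketch of what that per-tile palette encoding could look like on the asset side, assuming pixel-art tiles that already use at most 16 colours and an even number of pixels each (the packing layout here is made up for the example; the shader would sample the indices and look up the palette):

    def encode_tile(pixels):
        """pixels: list of (r, g, b) tuples for one atlas tile, <= 16 unique colours."""
        palette = sorted(set(pixels))
        if len(palette) > 16:
            raise ValueError("tile has more than 16 colours; quantize it first")
        index_of = {c: i for i, c in enumerate(palette)}
        # 4 bits per pixel: pack two palette indices per byte.
        packed = bytearray()
        for a, b in zip(pixels[0::2], pixels[1::2]):
            packed.append(index_of[a] | (index_of[b] << 4))
        return palette, bytes(packed)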

ack_complete•5mo ago
Indexed color formats stopped being supported on GPUs past roughly the GeForce 3, partly due to the CLUTs being a bottleneck. This discourages their use because indexed textures have to be expanded on load to 16bpp or 32bpp vs. much more compact ~4 or 8bpp block compressed formats that can be directly consumed by the GPU. Shader-based palette expansion is unfavorable because it is incompatible with hardware bilinear/anisotropic filtering and sRGB to linear conversion.
flohofwoe•5mo ago
Tbf, you wouldn't want any linear filtering for pixel art textures anyway, and you can always implement some sort of custom filter in the pixel shader (at a cost of course, but still much cheaper than a photorealistic rendering pipeline).

Definitely might make sense IMHO since block compression artefacts usually prohibit using BCx for pixel art textures.

smallstepforman•5mo ago
I've been using BC7 since 2012, and even have a video codec using BC7 keyframes (we needed alpha videos, and even have a patent for it). Our emphasis is not file size, but playback speed. BC7 renders 20-40% faster than raw RGBA32, and we overlay 5-7 videos on screen simultaneously, so every boost helps.

We also have custom encoders for the custom video format.

_def•5mo ago
Is there some open equivalent of alpha in videos?
the8472•5mo ago
VP9 and AVIF have alpha; whether it's patented I don't know.

https://pomf2.lain.la/f/kg9r139e.webm needs non-black background, change the <html> element's background-color in developer tools if necessary

kookamamie•5mo ago
You have a patent for alpha in videos? Curious to hear more about this - the application you describe sounds eerily familiar to me.
chmod775•5mo ago
HAP is a family of codecs for GPU accelerated decoding. HAP R uses BC7 frames.

There's support for various HAP codecs in ffmpeg, VLC, etc, but I think support for HAP R is lacking. However HAP, HAP Alpha, HAP Q and HAP Q Alpha have wide support. They use DXT1/DXT3/DXT5/RGTC afaik.

Compared to your implementation, their addition of BC7 is quite recent, but they have had support for alpha channels for probably a decade.

pmarreck•5mo ago
JPEG-XL, your time has come? ;)
exDM69•5mo ago
GPU texture compression needs a fixed bitrate for random access, usually 2 bits per pixel (128 bits per 8x8 block) with modern formats.

JPEG or other variable-bitrate compression schemes are not applicable.
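
That fixed rate is what makes random access cheap: the block holding any texel sits at an offset you can compute directly. A hypothetical helper, assuming a plain row-major block layout (real GPUs typically add tiling/swizzling on top of this):

    def block_byte_offset(x, y, tex_width, block_w=8, block_h=8, block_bytes=16):
        # 128-bit (16-byte) blocks of 8x8 texels = 2 bits per pixel.
        blocks_per_row = (tex_width + block_w - 1) // block_w
        bx, by = x // block_w, y // block_h
        return (by * blocks_per_row + bx) * block_bytes  # constant time, no neighbouring blocks touched

    # A variable-bitrate stream like JPEG has no such formula: you can't know where
    # a given texel's data starts without decoding (or indexing) everything before it.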

pmarreck•5mo ago
Oh. Yeah, that makes sense. Certainly a technical requirement that is tricky to satisfy.