
Image Dithering: Eleven Algorithms and Source Code (2012)

https://tannerhelland.com/2012/12/28/dithering-eleven-algorithms-source-code.html
120•Bogdanp•3mo ago

Comments

Panzerschrek•3mo ago
It's also worth mentioning noise-based dithering, where some noise pattern is added atop the image and then rounding is performed. Usually some sort of blue noise is used for this approach.
pasteldream•3mo ago
Agreed - blue noise dithering is very commonly used in computer graphics because it’s cheap and great, but it might be worth mentioning that it’s a kind of ordered dithering, which is mentioned in the article.

Christoph Peters’s free blue noise textures are the most commonly used, for people who can’t be bothered running void and cluster themselves: https://momentsingraphics.de/BlueNoise.html
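The noise-then-threshold approach described above can be sketched in a few lines. This is a pure-Python illustration: white noise stands in for a real blue-noise texture, and the `amplitude`, `threshold`, and `seed` parameters are illustrative assumptions, not values from the article.

```python
import random

def noise_dither(gray, threshold=128, amplitude=64, seed=0):
    """Dither a grayscale image (list of rows, values 0-255) to 1-bit by
    adding random noise before thresholding. Swapping the white noise for
    a tiled blue-noise texture gives the nicer results described above."""
    rng = random.Random(seed)
    out = []
    for row in gray:
        out.append([255 if px + rng.uniform(-amplitude, amplitude) >= threshold else 0
                    for px in row])
    return out
```

A flat mid-gray input comes out roughly half white and half black, with the spatial character of whatever noise you added.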

bad_username•3mo ago
Dithering has similar importance in digital audio. Dithered 8-bit audio sounds way better than non-dithered (harsh artifacts are replaced with tolerable white noise, and quiet details are preserved). Higher end digital equipment even applies dithering to high-bit samples, as do plug-ins in digital audio workstations.
simondotau•3mo ago
Critically, the benefits of audio dithering come with a single side-effect (i.e. audible artefact): an increase in the noise floor. In most cases, however, this elevated noise floor remains below the threshold of audibility, or more practically, quieter than the ambient noise of any reasonable listener’s playback environment.

What's important to appreciate is that dithering digital audio should only ever be performed when preparing a final export for distribution, and even then, only for bit-perfect copies. You shouldn't dither when the next step is a lossy codec. Encoders for AAC and Opus accept high bit depth originals, because their encoded files don't have a native "bit depth". They generate and quantise (compress) MDCT coefficients. When these encoded files are decoded to 16-bit PCM during playback, the codec injects "masking noise" which serves a similar function to dither.

dreamcompiler•3mo ago
Audio dithering typically involves adding a small amount of noise before downconverting to lower resolution samples.

But there's another form of audio dithering that uses error diffusion (like TFA describes) rather than adding noise. If you use a single-bit ADC but sample much faster than Nyquist and keep track of your errors with error diffusion, you preserve all the audio information in the original with a similar number of bits as a (e.g.) 16-bit ADC sampled at Nyquist, but with the additional benefit that your sampling noise has moved above the audible range, where it can be filtered out with an analog lowpass filter.

This is one-dimensional dithering but in the audio world it's called Sigma-Delta modulation or 1-bit ADC.
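A toy first-order sigma-delta loop illustrates the one-dimensional error diffusion described above. This is a sketch under simplifying assumptions (samples normalized to [-1, 1], no oversampling or filtering, which a real converter would add):

```python
def one_bit_error_diffusion(samples):
    """Quantize samples in [-1, 1] to +/-1, carrying the quantization
    error forward into the next sample -- first-order sigma-delta in
    its simplest form."""
    err = 0.0
    out = []
    for s in samples:
        v = s + err          # add the error we still owe
        q = 1.0 if v >= 0 else -1.0
        err = v - q          # remember the new quantization error
        out.append(q)
    return out
```

Averaging the 1-bit output recovers the input level: a constant 0.5 input yields a pulse stream whose mean is 0.5, with the error pushed to high frequencies.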

alejohausner•3mo ago
Ulichney (who wrote the book on halftoning) came up with ordered dithering matrices that give much nicer results than Bayer's: as good as error diffusion, and parallelizable. Look up "void and cluster".
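For contrast, here is ordered dithering with the classic 4x4 Bayer matrix; a void-and-cluster matrix would drop into the same per-pixel threshold lookup. Pure-Python sketch, 0-255 grayscale assumed:

```python
# Classic 4x4 Bayer matrix (values 0-15).
BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray):
    """1-bit ordered dithering: each pixel is compared against a
    position-dependent threshold taken from the tiled matrix."""
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, px in enumerate(row):
            threshold = (BAYER_4[y % 4][x % 4] + 0.5) * 256 / 16
            out_row.append(255 if px >= threshold else 0)
        out.append(out_row)
    return out
```

Because every pixel is independent, this parallelizes trivially, which is exactly why it (and its void-and-cluster refinement) is popular on GPUs.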
gnabgib•3mo ago
(2012) Popular in

2016 (199 points, 61 comments) https://news.ycombinator.com/item?id=11886318

2017 (125 points, 53 comments) https://news.ycombinator.com/item?id=15413377

AndrewStephens•3mo ago
It is surprisingly difficult to get really crisp dithering on modern displays; you have to do it on the client to match the user's display 1:1. Notice that the pre-rendered examples on this page actually look a little blurry if you magnify them. This is not really a problem unless you really want the crispness of the original Mac screen.

A few years ago I got annoyed with this and made a little web-component that attempts to make really sharp 1-bit dithered images by rendering the image on the client to match whatever display device the user has.

https://sheep.horse/2023/1/improved_web_component_for_pixel-...

rikroots•3mo ago
I rewrote my canvas library's "reduce-palette" filter last month to make it a lot more code-efficient (because: the library doesn't use web workers, WebGL, etc). But the main reason to go hunting for efficiencies was my (slightly unhinged) decision to do all the color distance calculations in the OKLAB color space.

Demo here: https://scrawl-v8.rikweb.org.uk/demo/filters-027.html

AndrewStephens•3mo ago
That’s cool.
Affric•3mo ago
Nostalgic.

Important for lo-fi displays and printing etc

I do think that well-dithered images looked better in some texts than colour images, which had more wow but were more distracting.

kevinsync•3mo ago
I use a Photoshop plugin for complex dithering (DITHERTONE Pro [0] -- this is NOT AN AD lol, I'm not the creator, just a happy customer and visual nerd)

I'm only dropping it in here because the marketing site for the plugin demonstrates a wide spectrum of really interesting, full-color use-cases for different types of dithering, beyond what we normally assume dithering to be.

[0] https://www.doronsupply.com/product/dithertone-pro

user____name•3mo ago
On iPhone Safari this page opens a modal popup that I cannot close, rendering it useless...
hatthew•3mo ago
Something I think about sometimes is how it's usually more important to maintain shape rather than color. For example, in the first 2 images on the page, quantizing the pink hearts results in some pink, white, and grey. An error diffusion alg will result in pink speckled with white and grey, whereas it might be preferable to have a solid pink that's slightly off color but has no speckles.

Are there existing techniques that do this sort of thing? I'm imagining something like doing a median filter on the image, run clustering on the pixels in the colorspace, and then shift/smudge clusters towards "convenient" points in the colorspace, e.g. the N points of the quantized palette and the N^2 points halfway between each pair. Then a partial-error-diffusion alg like atkinson smooths out the final result.
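For reference, a minimal 1-bit Floyd-Steinberg pass, the kind of error-diffusion algorithm whose speckling is described above (grayscale 0-255 rows assumed; this is a sketch, not the article's code):

```python
def floyd_steinberg(gray):
    """1-bit Floyd-Steinberg error diffusion over 0-255 grayscale rows.
    On a flat mid-tone region the diffused error speckles the output
    between the two nearest palette entries, as discussed above."""
    h, w = len(gray), len(gray[0])
    img = [[float(px) for px in row] for row in gray]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            # Push the quantization error onto unvisited neighbors
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return [[int(px) for px in row] for row in img]
```

Running it on a flat 128 region shows the speckle directly: both black and white appear even though the input is one solid tone, which is the behavior a cluster-then-snap scheme would avoid.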

lynnharry•3mo ago
There's no way to do it traditionally. Your request would need the algorithm to understand the content of the image. Deep learning based image vectorization probably has a similar objective.
momojo•3mo ago
One of my favorite modern dithering methods is blue-noise dithering (see Figure 3b): https://developer.nvidia.com/blog/rendering-in-real-time-wit...

The only catch is that generating blue noise is roughly an O(n^2) algorithm. It's not feasible to generate on the fly, so in practice you just pregenerate a bunch of blue-noise textures and tile them.

If you google 'pregenerated blue noise' you find plenty of them: https://momentsingraphics.de/BlueNoise.html

abetusk•3mo ago
O(n^2) where n is what? If it's the number of pixels, you have to touch those anyway, no?

Why can't you create blue noise from walking a Hilbert curve and placing points randomly with a minimum and maximum threshold?

inhumantsar•3mo ago
I was curious about this too so I dug into it a bit. it seems that the point placement has to be optimized to ensure they have roughly even spacing while still being randomly placed.

the naive algorithm is O(n^2) where n is the number of pixels in an image. tiling and sampling pregenerated noise is O(n), so that's what most people use. the noise can be generated on the fly using a FFT-based algorithm, though it still needs to be applied iteratively so you'd typically end up with O(k n log n) s.t. 10 <= k <= 100.

this has been neat stuff to read up on. my favorite nugget of learning: blue noise is white noise that's been run through a high pass filter a few times. the result of a high pass filter is the same as subtracting the result of a low pass filter from the original signal. blurring is a low pass filter for images. since blue noise is high frequency information, blurring a noised up image effectively removes the blue noise. so the result looks like a blurred version of the original even though it contains a fraction of the original's information.
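The subtract-a-blur trick reads directly as code. This is a pure-Python sketch using a wrapping box blur as the low-pass; the grid size `n`, the blur `radius`, and the single filter pass are illustrative choices (a real generator like void-and-cluster produces much better spectra):

```python
import random

def highpass_white_noise(n=32, radius=1, seed=0):
    """Approximate a blue-noise-like pattern by subtracting a box blur
    (low-pass) of white noise from the noise itself -- the high-pass
    trick described above."""
    rng = random.Random(seed)
    white = [[rng.random() for _ in range(n)] for _ in range(n)]

    def blur(img):
        # Wrapping box blur so the result tiles like the input.
        out = [[0.0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                acc = cnt = 0
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        acc += img[(y + dy) % n][(x + dx) % n]
                        cnt += 1
                out[y][x] = acc / cnt
        return out

    low = blur(white)
    return [[white[y][x] - low[y][x] for x in range(n)] for y in range(n)]
```

Because the wrapping blur preserves the mean, the high-pass result averages to zero: only the high-frequency structure survives, which is why blurring a blue-noised image effectively erases the noise.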

atoav•3mo ago
As a former VFX freelancer I think many people underestimate how effective cheap tricks like using image textures like that can be.

You don't need real noise, it is enough to have a single texture that is a bit bigger than the input image and then randomly offset and rotate it. If that random offset is random enough (so not pseudorandom with a low periodicity), nobody will ever notice.

Memory has gotten cheaper while latency deadlines are still deciding over how much you can do realtime. That means cheap tricks like this are not an embarrassing workaround, but sometimes the smart choice.

bobmcnamara•3mo ago
And with blue noise, you can tile the noise images and somehow your brain doesn't pick up on it like it does for most other tiled noise images.
user____name•3mo ago
You can also run a high pass filter on a white noise image and get something roughly blue-noise-like. By varying the width of the filter you can control the frequency of the noise. You can use this property to get a constant physical size for the noise, like DPI awareness.

A lot of blue noise references: https://gist.github.com/pixelmager/5d25fa32987273b9608a2d2c6...

There also exist pseudo blue noise generators, e.g.: https://observablehq.com/@fil/pseudoblue https://www.shadertoy.com/view/3tB3z3

bazzargh•3mo ago
You can also get reasonable results from using quasirandom sequences https://extremelearning.com.au/unreasonable-effectiveness-of... - which are trivial to generate.

That's the kind of dithering I use on the BBC Micro because it's such a cheap technique, here in a thread directly comparing to Bayer-like dithering https://hachyderm.io/@bbcmicrobot@mastodon.me.uk/11200546490... or here faking the Windows XP desktop https://hachyderm.io/@bbcmicrobot@mastodon.me.uk/11288651013...

kragen•3mo ago
The major dithering algorithm that's missing from this list is blue-noise dithering. This is very similar to "ordered dithering"; you can think of ordered dithering as either thresholding the pixel values with a different threshold value on each pixel, following a regular pattern, or as adding a different offset value to each pixel, following a regular pattern, and thresholding the result with a constant threshold. Blue-noise dithering replaces the regular pattern with a random pattern that's been high-pass filtered. This has all the advantages of ordered dithering, in particular avoiding "crawling" patterns during animation, but avoids the repetitive patterns and line artifacts it introduces.

https://nelari.us/post/quick_and_dirty_dithering/ is the best quick introduction to the technique that I've seen. There's a more comprehensive introduction at https://momentsingraphics.de/BlueNoise.html. https://bartwronski.com/2016/10/30/dithering-part-three-real... also demonstrates it, comparing it to other dithering algorithms.

Ulichney introduced blue noise to dithering in 01988 as a refinement of "white-noise dithering", also known as "random dithering", where you just add white noise before thresholding: https://cv.ulichney.com/papers/1988-blue-noise.pdf. Ulichney's paper is also a pretty comprehensive overview of dithering algorithms at the time, and he also makes some interesting observations about high-pass prefiltering ("sharpening", for example with Laplacians). Error-diffusion dithering necessarily introduces some low-pass filtering into your image, because the error that was diffused is no longer in the same place, and high-pass prefiltering can help. He also talks about the continuum between error-diffusion and non-error-diffusion dithering, for example adding a little bit of noise to your error-diffusion algorithm.

But Ulichney is really considering blue noise as an output of conventional error-diffusion algorithms; as far as I can tell from a quick skim, nowhere in his paper does he propose using a precomputed blue-noise pattern in place of the white-noise pattern for "random dithering". That approach has really only come into its own in recent years with real-time raytracing on the GPU.

An interesting side quest is Georgiev and Fajardo's abstract "Blue-Noise Dithered Sampling" from SIGGRAPH '16 http://web.archive.org/web/20170606222238/https://www.solida..., sadly now memory-holed by Autodesk. Georgiev and Fajardo attribute the technique to the 02008 second edition of Lau and Arce's book "Modern Digital Halftoning", and what they were interested in was actually improving the sampling locations for antialiased raytracing, which they found improved significantly when they used a blue-noise pattern to perturb the ray locations instead of the traditional white noise. This has a visual effect similar to the switch from white to blue noise for random dithering. They also reference a Ulichney paper from 01993, "The void-and-cluster method for dither array generation," which I haven't read yet, but which certainly sounds like it's generating a blue-noise pattern for thresholding images.

Lau, Arce, and Bacca Rodriguez also wrote a paper I haven't read about blue-noise dithering in 02008, saying, "The introduction of the blue-noise spectra—high-frequency white noise with minimal energy at low frequencies—has had a profound impact on digital halftoning for binary display devices, such as inkjet printers, because it represents an optimal distribution of black and white pixels producing the illusion of a given shade of gray," suggesting that blue-noise dithering was already well established in inkjet-land long before it became a thing on GPUs.

Maxime Heckel has a nice interactive WebGL demo of different dithering algorithms at https://blog.maximeheckel.com/posts/the-art-of-dithering-and..., with mouse-drag orbit controls, including white-noise dithering, ordered dithering, and blue-noise dithering. Some of her examples are broken for me.

It's probably worth mentioning the redoubtable https://surma.dev/things/ditherpunk/ and the previous discussion here: https://news.ycombinator.com/item?id=25633483.

zeroq•3mo ago
slightly related: it would be great to have a list of TOP 100 recurring links on HN. :)

on topic: https://surma.dev/things/ditherpunk/ is a great companion read to the subject

naet•3mo ago
Does anyone have a primer on multi-color dithering? I made a fun dithering program for monotone-style dithering, but I'm not really sure how to adapt it to color palettes with more than two tones.
agarv•3mo ago
Color dithering is very similar to black and white dithering; the difference is that instead of 2 colors (black and white), you have n colors, and you want to find the one that has the shortest distance to the current pixel. There are various formulas[1] for determining which color is closest, and the formula you choose will have an effect on the results. I built a dithering app[2] that lets you choose the distance formula, so you can see for yourself.

[1] https://en.wikipedia.org/wiki/Color_difference [2] https://dithermark.com
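The nearest-color search is the only new piece relative to black and white. A sketch using squared Euclidean RGB distance; a perceptual formula (e.g. one of the color-difference formulas from [1], computed on Lab values) would replace the `key` function and shift which color wins near the boundaries:

```python
def nearest_color(px, palette):
    """Return the palette entry with the smallest squared Euclidean
    distance to px; both are (r, g, b) tuples. The error to diffuse
    is then the per-channel difference px - chosen."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(px, c)))
```

In an error-diffusion loop you'd call this per pixel and diffuse the per-channel residual exactly as in the grayscale case.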

dreamcompiler•3mo ago
Bresenham's line drawing algorithm is another error diffusion algorithm, except its goal is not approximating colors that don't exist but rather approximating lines at angles that are not multiples of 45 degrees on a pixel grid.
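A minimal Bresenham makes the analogy concrete: the `err` accumulator plays the same role as the diffused quantization error, tracking how far the drawn pixel has drifted from the ideal line and correcting when it crosses half a pixel (standard integer formulation, shown here as a sketch):

```python
def bresenham(x0, y0, x1, y1):
    """Integer line rasterization; returns the list of plotted pixels.
    err accumulates the drift from the ideal line -- the 1D analogue
    of diffusing quantization error."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return points
```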
tomsonj•3mo ago
Still topical for constrained palette displays like color einks
alberth•3mo ago
Here’s a great online tool that lets you upload any image and apply different dithering styles to it.

https://doodad.dev/dither-me-this/

larodi•3mo ago
Dithermark.com is all I ever need … amazing stuff
glimshe•3mo ago
Does anyone know of any application/tool that can perform palette dithering? The idea is "here is the n-color palette specified in their RGB values, here is the full-color RGB image, give me the best possible dithered image using the provided palette". The tools that I've used were underwhelming and produced results full of banding and artifacts.

Basically, great dithering in color instead of B/W.