frontpage.

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•3m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•3m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•6m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•8m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•18m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•19m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•24m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•28m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•29m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•31m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•32m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•35m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•46m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•52m ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
2•cwwc•56m ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Seedance 2.0 Is Coming

https://seedance-2.app/
1•Jenny249•1h ago•0 comments

Mapping to the PICO-8 palette, perceptually

https://30fps.net/pages/perceptual-pico8-pixel-mapping/
64•ibobev•5mo ago

Comments

Marazan•4mo ago
The main issue with any pixel-to-pixel colour mapping approach is that we don't perceive individual pixels, so you will not get a good overall effect from pixel-to-pixel mapping (the article touches on this by talking about structure, but you don't have to go that far to see massively improved results).

Any serious attempt would involve higher level dithering to better reproduce the colours of the original image and dithering is one of those topics that goes unexpectedly crazy deep if you are not familiar with the literature.
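For reference, the pixel-to-pixel baseline being criticized here fits in a few lines: map each pixel independently to its nearest palette entry, measuring distance in a perceptual space. This is my own illustrative sketch, not code from the article; NumPy, the function names, and the use of OKLab (with Björn Ottosson's published coefficients) are all my choices.

```python
import numpy as np

def srgb_to_oklab(rgb):
    """rgb: float array in [0, 1], shape (..., 3), sRGB-encoded."""
    # Undo the sRGB transfer curve to get linear light.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    r, g, b = lin[..., 0], lin[..., 1], lin[..., 2]
    # Linear sRGB -> LMS-like cone responses, then cube root, then OKLab.
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = np.cbrt(l), np.cbrt(m), np.cbrt(s)
    return np.stack([
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    ], axis=-1)

def map_to_palette(image, palette):
    """image: (H, W, 3) floats in [0, 1]; palette: (N, 3) floats in [0, 1]."""
    img_lab = srgb_to_oklab(image)    # (H, W, 3)
    pal_lab = srgb_to_oklab(palette)  # (N, 3)
    # Squared OKLab distance from every pixel to every palette entry.
    d = ((img_lab[:, :, None, :] - pal_lab[None, None, :, :]) ** 2).sum(-1)
    return palette[d.argmin(axis=-1)]
```

Because each pixel is quantized in isolation, no error propagates to its neighbours; that is exactly what the dithering approaches discussed below address.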

aquova•4mo ago
An interesting article, but it seems like quite an oversight to not even mention dithering techniques, which in my opinion give much better results.

I've done some development work in Pico-8, and some time ago I wrote a plugin for the Aseprite pixel art editor to convert an arbitrary image into the Pico-8 palette using Floyd-Steinberg dithering[0]

I ran their example image through it, and personally I think the results it gives were the best of the bunch https://imgur.com/a/O6YN8S2

[0] https://github.com/aquova/aseprite-scripts/blob/master/pico-...
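For anyone who wants to try this outside Aseprite, here is a minimal Python sketch of the same Floyd-Steinberg idea against the PICO-8 default palette. The hex values are the commonly published defaults, the nearest-colour search is plain squared RGB distance, and none of this is aquova's actual (Lua) plugin code:

```python
import numpy as np

# Default PICO-8 palette (commonly published hex values).
PICO8 = np.array([
    (0x00, 0x00, 0x00), (0x1D, 0x2B, 0x53), (0x7E, 0x25, 0x53), (0x00, 0x87, 0x51),
    (0xAB, 0x52, 0x36), (0x5F, 0x57, 0x4F), (0xC2, 0xC3, 0xC7), (0xFF, 0xF1, 0xE8),
    (0xFF, 0x00, 0x4D), (0xFF, 0xA3, 0x00), (0xFF, 0xEC, 0x27), (0x00, 0xE4, 0x36),
    (0x29, 0xAD, 0xFF), (0x83, 0x76, 0x9C), (0xFF, 0x77, 0xA8), (0xFF, 0xCC, 0xAA),
], dtype=float)

def floyd_steinberg(img):
    """img: (H, W, 3) array, values 0..255. Returns a palettized copy."""
    out = img.astype(float).copy()
    h, w, _ = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = PICO8[((PICO8 - old) ** 2).sum(axis=1).argmin()]
            out[y, x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbours.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out.clip(0, 255).astype(np.uint8)
```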

lugarlugarlugar•4mo ago
I too thought about dithering while reading the article, but couldn't have imagined the result would be this much better. Thanks for sharing!
kibwen•4mo ago
Dithering is sort of like having the ability to "blend" any two colors of your palette (possibly even more than any two, if you use it well), so instead of being a 16-color palette, it's like working with a 16+15+14+...+1 = 136-color palette. It's a drastic difference (at the cost of graininess, of course).
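The count checks out: 16 solid colours plus one 50/50 blend for every unordered pair.

```python
from math import comb

print(16 + comb(16, 2))  # 136, i.e. 16 + 15 + 14 + ... + 1
```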
smusamashah•4mo ago
Tried this online tool https://onlinetools.com/image/apply-dithering-to-image and Floyd and Atkinson both look great, Atkinson a bit better.
sudobash1•4mo ago
Dithering is still more important than is commonly known, even with 24-bit "true color". For example, imagine that you had a gradient that goes from white to black across a 1920x1080 monitor. 24-bit color means you only have 256 levels per channel, so only 256 shades of gray, and a naive gradient implementation will result in 256 discrete bands of different grays, each about 8 pixels wide (about as wide as this "w" character).

You might not think that you'd notice that, but it looks surprisingly bad. Your eyes would immediately notice that there are "stripes" of solid gray instead of a smooth continuum. But if you apply dithering, your eyes won't be able to notice (at least not easily). It will all look smooth again.

In a situation like this, I like to use "blue noise" dithering, but there are scores of dithering methods to choose from.
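A quick way to see this banding (and the fix) for yourself; the uniform white noise below is my crude stand-in for the blue noise the comment recommends:

```python
import numpy as np

W, H = 1920, 1080
row = np.broadcast_to(np.linspace(0.0, 1.0, W), (H, W))  # ideal gradient

# Naive quantization: 256 visible bands, each 1920/256 = 7.5 px wide.
banded = np.round(row * 255).astype(np.uint8)

# Dithered: jitter by up to half a quantization step before rounding,
# so band edges dissolve into a mix of adjacent grey levels.
noise = np.random.uniform(-0.5, 0.5, (H, W))
dithered = np.clip(np.round(row * 255 + noise), 0, 255).astype(np.uint8)
```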

chrismorgan•4mo ago
Direct link to the image, though you may have to fetch it through a non-browser to avoid it redirecting back to the stupid HTML: https://i.imgur.com/y93naNw.png

(I don’t know how it works for others, but it has always been atrocious for me. Their server is over 200ms away, and even with uBlock Origin blocking ten different trackers it takes fully 35 seconds before it even begins to load the actual image, and the experience once it’s finished is significantly worse than just navigating directly to the image anyway. Tried it in Chromium a couple of times: 55 and 45 seconds. Seriously, imgur is so bad. Maybe it ain’t so bad in the USA, I don’t know, but in Australia and in India it’s appallingly bad. You used to be able to open the image URLs directly, but some years ago they started redirecting to the HTML unless the image is loaded as a subresource, or maybe it depends on Accept headers; curl will still get it directly.)

wizzwizz4•4mo ago
https://addons.mozilla.org/firefox/addon/fucking-jpeg/
greysonp•4mo ago
They don't explicitly state it in the article that I can see, but the PICO-8 is 128x128, and it appears that their output images were constrained to that. Your dithered images appear to be much higher resolution. I'd be curious what dithering would look like at 128x128!
mezentius•4mo ago
Dithering is used quite frequently in PICO-8 projects at the “native” (128x128) resolution. Here’s an example from a few years ago: https://www.lexaloffle.com/bbs/?pid=110273#p
p0w3n3d•4mo ago
Yeah I was counting on dithering too
WithinReason•4mo ago
I was looking forward to seeing a dithered [0] version but it was missing. In addition, shouldn't OKLAB already be perceptually uniform and not require luma weighting?

[0]: https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dither...

Fraterkes•4mo ago
Kinda off-topic but for a while I’ve had an idea for a photography app where you’d take a picture, and then you could select a color in the picture and adjust it till it matched the color you see in reality. You could do that for a few colors and then eventually just map all the colors in the picture to be much closer to the perceived colors without having to do coarser post-processing.

Even if you got something very posterized like in the article I think it could at least be a great reference for a more traditional processing step afterwards. Always wonder why that doesn’t seem to exist yet.

latexr•4mo ago
Sounds like a lot of work for something which wouldn’t produce that good of a result. If you ever tried to take a colour from a picture with the eyedropper tool, you quickly realise that what you see as one colour is in fact a disparate set of pixels, and it can be quite hard to get the exact thing you want. So right there you find the initial hurdle of finding and mapping the colour to change. Finding the edges would also be a problem.

Not to mention every screen is different, so whatever changes you’re doing, even if they looked right to you in the moment, would be useless when you sent your image to your computer for further processing.

Oh, and our eyes can perceive it differently too. So now you’re doing a ton of work to badly change the colours of an image so they look maybe a bit closer to reality for a single person on a single device.

CharlesW•4mo ago
So this would be a subjective alternative to matching to color cards? What would the benefit be over a precise/objective match?
bux93•4mo ago
This is essentially what you do as step 1 when color correcting in Davinci Resolve, but only for white (or, anything that's grayscale). Select a spot that's white/gray, click on the white balance picker, and the white balance is set.

It's not perfect of course, but gets a surprisingly good result for close to zero effort.
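The core of that step is just per-channel gain. A minimal sketch of the idea, assuming linear-light RGB in [0, 1]; this is my own reconstruction, not Resolve's actual math:

```python
import numpy as np

def white_balance_from_pick(img, x, y):
    """img: (H, W, 3) linear-light floats in [0, 1]; (x, y): a spot the
    user says should be neutral grey. Scale each channel so it becomes so."""
    picked = img[y, x].astype(float)
    target = picked.mean()                     # preserve the spot's brightness
    gains = target / np.maximum(picked, 1e-8)  # avoid division by zero
    return np.clip(img * gains, 0.0, 1.0)
```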

JKCalhoun•4mo ago
I wonder if this is not unlike including a Macbeth Chart [1] in the photo and then trying to color-match your image so that the swatches on the Macbeth Chart look the same digitally as they do in real life.

One bottleneck of course is that the display you are on, where you are viewing the image, is likely not to have a gamut rich enough to even display all the colors of the Macbeth chart. No amount of fiddling with knobs will get you a green as rich as reality if there is an intense green outside the display's capabilities.

But of course you can try to get close.

[1] https://en.wikipedia.org/wiki/Color_chart

(I seem to recall, BTW, that these GretagMacbeth color charts are so consistent because they represent each color chemically. I mean, I suppose all dyes are chemical, but I understood that there was little to no mixing of pigments to get the Macbeth colors. I could be wrong about that though. My first thought when I heard it was of sulfur: for example, how pure sulfur, in one of its states, must be the same color every time. Make a sulfur swatch and you should be able to consistently reproduce it.)

a_shovel•4mo ago
Something I've noticed from automatic palette mappings is that they tend to produce large blocks of gray that a human artist would never consider. You can see it in the water for most mappings in this sample, and even some grayish-brown grass for sRGB. It makes sense mathematically, since gray is the "average" color, and pixel art palettes are typically much more saturated than the average colors in a 24-bit RGB image. It looks ugly regardless.

CAM16-UCS looks the best because it avoids this. It gives us peach-and-pink water that matches the "feel" of the original image better. I wonder if it's designed to saturate the image to match the palette?

growingkittens•4mo ago
I notice that many palettes tend to follow the "traditional" color wheel strictly, without defining pink as a separate color on the main wheel.
badc0ffee•4mo ago
I don't know anything about the PICO-8, but that is an interesting palette. It reminds me of a more saturated version of the C64.

Other systems of the time either used a simple RGBI formula with modifications (IBM, with its "CGA brown"), or a palette evenly spaced around the NTSC hue wheel (Apple II, or again the CGA in composite output mode).
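That RGBI formula is small enough to write out. This is my own reconstruction, using the usual IRGB bit order, the commonly cited 0xAA/0x55 levels, and the famous special case that turns dark yellow into CGA brown:

```python
def rgbi_palette():
    """Build the 16-entry IBM-style RGBI palette as (r, g, b) tuples."""
    colors = []
    for i in range(16):
        intensity = (i >> 3) & 1
        r, g, b = (i >> 2) & 1, (i >> 1) & 1, i & 1
        rgb = [0xAA * c + 0x55 * intensity for c in (r, g, b)]
        if i == 6:                 # dark yellow -> brown: green level halved
            rgb = [0xAA, 0x55, 0x00]
        colors.append(tuple(rgb))
    return colors
```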

ChristopherDrum•4mo ago
The PICO-8 actually has 32 colors from which we can choose any 16. I understand that making use of the default palette is the article's intent; I'm just thinking aloud.

If one were wanting to render an image on the PICO-8 itself, the ideal algorithm would select the best 16 colors from the full 32-color palette which, when dithered, produce the most perceptually accurate version of the original image in 128x128 pixels. Were I a smarter man I would create this, but alas.
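A greedy version of that selection is at least easy to sketch. This is a toy heuristic of my own, scoring only solid colours; the truly ideal algorithm would, as noted, also account for dithered blends:

```python
import numpy as np

def pick_best_16(pixels, full_palette):
    """pixels: (N, 3) floats; full_palette: (32, 3) floats.
    Repeatedly add whichever colour most reduces total nearest-colour error."""
    best = np.full(len(pixels), np.inf)   # per-pixel error so far
    chosen, remaining = [], list(range(len(full_palette)))
    for _ in range(16):
        # Total error if each remaining candidate were added next.
        totals = [np.minimum(best, ((pixels - full_palette[c]) ** 2).sum(1)).sum()
                  for c in remaining]
        pick = remaining[int(np.argmin(totals))]
        best = np.minimum(best, ((pixels - full_palette[pick]) ** 2).sum(1))
        chosen.append(pick)
        remaining.remove(pick)
    return chosen
```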

Waterluvian•4mo ago
I think this is how the original Myst image compression worked. Every image used an 8-bit palette, but each palette was custom for each image.
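Per-image adaptive palettes like that are a one-liner today with Pillow's median-cut quantizer (the filename here is hypothetical):

```python
from PIL import Image

img = Image.open("myst_frame.png").convert("RGB")
# Median cut picks the 256 colours that best fit this particular image.
palettized = img.convert("P", palette=Image.ADAPTIVE, colors=256)
```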
bmn__•4mo ago
Result of granddaddy https://web.archive.org/web/2000/http://fordy.planetunreal.g...

→ https://files.catbox.moe/2uuqka.png

It's bad. :-o