
Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•55s ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•1m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•2m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•3m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•3m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•4m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•4m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•8m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•11m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•11m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•17m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•17m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•18m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•21m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•24m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•24m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•24m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•24m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•26m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•28m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•30m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•32m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•33m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•33m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•36m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•39m ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•41m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•42m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•43m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•44m ago•6 comments

A visualization of the RGB space covered by named colors

https://codepen.io/meodai/full/zdgXJj/
298•BlankCanvas•3mo ago

Comments

billyp-rva•3mo ago
It's always struck me as odd how there are so many off-white colors in HTML/CSS compared to the rest of the space.
WillAdams•3mo ago
Vagaries of monitor technology and a lack of calibration/the difficulty of calibrating for lighter colours.
PaulHoule•3mo ago
You mean all the low-saturation colors you see around the diagonal?
tocs3•3mo ago
I think that is what was meant, and I wonder about that also.

Adding:

Looking some more I think it would be nice if the rotation could be stopped.

Labeling the axis would be nice also.

NooneAtAll3•3mo ago
> I think it would be nice if the rotation could be stopped.

author said he fixed that, interacting will stop it now

billyp-rva•3mo ago
When you switch the list to show just HTML/CSS colors, it's all the colors in the corner.
kazinator•3mo ago
Because there are so many off white colors in wall paint.
layer8•3mo ago
That’s because standard RGB is linear while human perception is closer to logarithmic.
Eric_WVGG•3mo ago
I use a similar app called Name That Color — https://chir.ag/projects/name-that-color/#6195ED

I like sharing descriptive names with designers instead of naming everything "light blue" "dark blue" "not quite as light but still not dark blue" etc.

This new thing is tons of fun but seems a bit less practically useful.

chime•3mo ago
You just reminded me that my app turned 18 a few months ago.

Another dev, Daniel Flück, extended the app to help color blind users: https://www.color-blindness.com/color-name-hue/

IgorPartola•3mo ago
I am curious why in your example you compare indigo to violet and purple since purple has a major red component while indigo and violet are on the complete opposite end of the visible color spectrum and are single wavelength colors.
meodai•3mo ago
congrats!
extraduder_ire•3mo ago
Neat seeing the different shapes the RGB space gets compressed into if you select a different colourspace on the bottom right.
phdelightful•3mo ago
What coordinate in the space is furthest from any named color? It looks like there are some relatively large voids in the blue/purple boundary area but it’s hard to say.
madcaptenor•3mo ago
Here's the list of colors it works off of: https://github.com/meodai/color-names/blob/main/src/colornam...

I'm trying to figure it out.

madcaptenor•3mo ago
For Euclidean distance it seems to be in the neighborhood of (59, 250, 60) which is a bright green, although of course Euclidean distance is not perceptual distance. The blue at (57, 42, 214) also is up there.
meodai•3mo ago
oh I'd love to add this to the tooling of the color names list. How did you figure out what the largest gap was?
madcaptenor•3mo ago
Pick points at random, then use a general-purpose optimization method (the optim function in R) to find local maxima. I don’t claim this is a good way to do it.
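
A rough sketch of that random-restart idea, in JavaScript rather than R (the `namedColors` array of [r, g, b] triplets is a hypothetical stand-in for the color list linked above): hill-climb the minimum Euclidean distance to any named color from random starting points. As the parent says, this only finds local maxima and Euclidean distance is not perceptual distance.

  // Hypothetical input: namedColors = [[r, g, b], ...] with channels 0..255.
  function minDistTo(point, namedColors) {
    let best = Infinity
    for (const [r, g, b] of namedColors) {
      const d = (point[0] - r) ** 2 + (point[1] - g) ** 2 + (point[2] - b) ** 2
      if (d < best) best = d
    }
    return Math.sqrt(best)
  }

  // Random restarts + greedy hill climbing on "distance to the nearest named color".
  function largestGap(namedColors, restarts = 50, step = 4) {
    let bestPoint = null, bestScore = -Infinity
    for (let i = 0; i < restarts; i++) {
      let p = [0, 1, 2].map(() => Math.floor(Math.random() * 256))
      let score = minDistTo(p, namedColors)
      let improved = true
      while (improved) {
        improved = false
        for (let c = 0; c < 3; c++) {
          for (const delta of [-step, step]) {
            const q = p.slice()
            q[c] = Math.min(255, Math.max(0, q[c] + delta))
            const s = minDistTo(q, namedColors)
            if (s > score) { p = q; score = s; improved = true }
          }
        }
      }
      if (score > bestScore) { bestScore = score; bestPoint = p }
    }
    return { point: bestPoint, distance: bestScore }
  }
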
turtletontine•3mo ago
Perceptual distance is quite different from Euclidean distance in this RGB space. If you put two swaths of color side by side and asked samples of people "how similar are these?", the groupings would not much resemble this cube.

They’ve done this! It’s shown on a “chromaticity diagram”, and is useful for comparing what colors different screens/printers/etc can reproduce. (It’s 2D not 3D cause it’s normalized for luminance or brightness.) Color science is weirdly fascinating:

https://en.wikipedia.org/wiki/Color_space?wprov=sfti1#

adzm•3mo ago
You can choose other color spaces here which is neat and helps visualize this a bit more accurately.
Etheryte•3mo ago
Wish there was a way to make it stop spinning, it's practically impossible to figure out adjacent colors because everything keeps moving no matter what you do. Perhaps there is a way, but I didn't find it?
whatsupdog•3mo ago
Same. So annoying.
graypegg•3mo ago
https://codepen.io/graypegg/full/XJXoxYB

Only change is lines 421 + 422 that sloooowly rotated the cube are commented out in the javascript, otherwise should act the same!

internetter•3mo ago
holy shit it's so much better
meodai•3mo ago
changed the original: interacting with the cube will stop the spin
meodai•3mo ago
I changed it: as soon as you interact with it, it stops spinning
kazinator•3mo ago
I like the view into the black corner toward white. From that aspect, the black-white axis looks like an atmospheric effect, and the blacks appear as if they were opaque balls suspended in front of an illuminated fog.
ajsnigrutin•3mo ago
Oh yes, i also use the "Graphical 80's sky" when describing my car color. (#0000fc)
layer8•3mo ago
Very nice! But there is no option to show color labels?
arichard123•3mo ago
Xkcd Colour names based on a survey: https://blog.xkcd.com/2010/05/03/color-survey-results/
madcaptenor•3mo ago
My favorite bit of this survey (scroll down to "Miscellaneous") is that one of the color names in the raw data set is "unsure-whether-boy-or-girl baby room color". My daughter's room is this color - we painted before she was born. They told us we were going to have a boy but they misread the ultrasound.
markburns•3mo ago
Can anyone explain the kind of dense cloud in the middle? Is that down to human perception? We don't give names to things we can't perceive uniquely?
allenu•3mo ago
It's probably just aesthetics. Those colors are more commonly used in illustration and design, so they tend to get labeled. There might be some perception involved in there as well as it's easier for our eyes to pick apart the more pastel colors from each other than the darker colors from each other.
csmoak•3mo ago
i would expect the more dense part to be the smaller gamut that can be made with paint since we've been naming those colors for a lot longer than the larger gamut that can be made with a screen. The paint/print gamut looks kinda like the more dense parts of these scatter plots within the larger sRGB cube (though the paint gamut isn't entirely contained within sRGB).
vardump•3mo ago
Is there a tool that can dither to named colors?
dougb5•3mo ago
Great project! It's visually dazzling and it really drives home the sheer size of the universe(s) of named colors.

I've long been interested in the names of colors and their associations. If I may plug my own site a bit, check out the "color thesaurus" feature on OneLook that organizes color names more linearly. Start with mauve, as an example: https://onelook.com/?w=mauve&colors=1 (It also lets you see the words evoked by the color and vice versa, which was a fun LLM-driven analysis.)

Tempest1981•3mo ago
And how far things have come since the X11 color names
mceachen•3mo ago
X11 color names are atrociously bad. Inconsistent prefixes and suffixes, flatly wrong names for many handfuls of RGB triplets, and it’s what got hard wired into CSS and HTML.
meodai•3mo ago
I am the creator of the 3d thing that was shared. I am very interested in collaborating on something. Is the data you used for it accessible somewhere?
dougb5•3mo ago
Yeah I can make it available! Contact me at the email in my profile and I can explain what I have.
Peteragain•3mo ago
What is interesting to me is the blank spaces for various naming systems. Ornithologist's view (Ridgway) versus Japanese traditional. Reminds me of the discussion of the blue/green distinction by Kay et al.
CobrastanJorji•3mo ago
Neat!

Feature request: I want the name of the color I'm hovering over to pop up next to the color. I don't want to have to look in the top left to see the name, especially with the board spinning. Also, I want the specific circle I'm hovering over to get a bit bigger so that I can see its exact color better and know that I've selected it.

rezmason•3mo ago
Bravo! I love color and color spaces.

I've been researching the way classic Macs quantize colors to limited palettes:

https://rezmason.net/retrospectrum/color-cube

This cube is the "inverse table" used to map colors to a palette. The animated regions are tints and shades of pure red, green, and blue. Ideally, this cube would be a voronoi diagram, but that would be prohibitively expensive for Macs of the late eighties. Instead, they mapped the palette colors to indices into the table, and expanded the regions assigned to those colors via a simultaneous flood fill, like if you clicked the Paint Bucket tool with multiple colors in multiple places at the same time. Except in 3D.
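
Not the original Mac code, just a minimal sketch of the simultaneous flood fill described above: a 32×32×32 table (5 bits per channel, as on those Macs) is seeded with the palette entries, and every region grows one ring per pass via multi-source BFS, which approximates the Voronoi diagram rather than computing it exactly.

  const SIDE = 32                                   // 5 bits per channel
  const idx = (r, g, b) => (r * SIDE + g) * SIDE + b

  // palette: array of [r, g, b] triplets (0..255). Returns a 32^3 table of palette indices.
  function buildInverseTable(palette) {
    const table = new Int16Array(SIDE * SIDE * SIDE).fill(-1)
    let frontier = []
    palette.forEach(([r, g, b], i) => {
      const cell = [r >> 3, g >> 3, b >> 3]
      table[idx(...cell)] = i                       // seed each palette color's cell
      frontier.push(cell)
    })
    // Simultaneous flood fill: all regions expand by one cell per round,
    // so each empty cell is claimed by (roughly) its nearest seed.
    while (frontier.length) {
      const next = []
      for (const [r, g, b] of frontier) {
        const owner = table[idx(r, g, b)]
        for (const [dr, dg, db] of [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]]) {
          const nr = r + dr, ng = g + dg, nb = b + db
          if (nr < 0 || ng < 0 || nb < 0 || nr >= SIDE || ng >= SIDE || nb >= SIDE) continue
          const j = idx(nr, ng, nb)
          if (table[j] === -1) { table[j] = owner; next.push([nr, ng, nb]) }
        }
      }
      frontier = next
    }
    return table
  }

  // Lookup: truncate each channel to 5 bits and read the owning palette index.
  const nearestIndex = (table, r, g, b) => table[idx(r >> 3, g >> 3, b >> 3)]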

wormius•3mo ago
Is the initial setting (Color Name List) a list of ALL the colors in each "sub" category listed in the drop menu?

If so, would it be possible to put a "namespace" in front (like html.violet, or html::violet), so you can see which source each name comes from? (Though I realize this may cause multiple "hits" on the same value/name, or the same name with different values.)

Either way, pretty cool. I agree, it would be nice to have a button or mode to stop spinning without having to hack it manually.

meodai•3mo ago
No they are separate lists. "All Color Names" comes from: https://github.com/meodai/color-names
jl6•3mo ago
Wait, does this not use the colornames.org dataset?
meodai•3mo ago
No it does not. colornames.org emerged after my color name list.
mrgaro•3mo ago
I'm curious to understand the need to have names for so many different colors, and I'd love to hear your take! A naive reasoning would say that names are useful if at least two different people know the meaning of a name, and thus it helps communication.

Now I'm not sure how many colors there are in that list, but it feels like there are too many to be practically useful. How do you see this?

meodai•3mo ago
I build a lot of tools that generate color palettes, and I wanted a wide range of nice-sounding names that feel evocative of the colors they represent. I see it as an API between a program and a human.

I started with about 1,600 names scraped from Wikipedia, but with only that many, there were a lot of redundancies and when you disallow duplicates, you end up with colors being labeled as “orange” even though they don’t actually look orange. On top of that many of the names were racist or at least questionable (so are many names on colornames.org)

Other large lists like the Pantone one, don't have a permissive license.

So for the past ten years or so, I’ve been collecting color names in a very unscientific way. It slowly turned into a hobby—something I often do on vacation, especially when I’m surrounded by unfamiliar places, dishes, or objects where color is used in unexpected ways.

Tools I made that benefit from using the names:

- https://meodai.github.io/poline/
- https://words.github.io/color-description/
- https://farbvelo.elastiq.ch/
- https://codepen.io/meodai/pen/PoaRgLm
- https://parrot.color.pizza/
- https://meodai.github.io/rampensau/

And probably some that I forgot about...

efilife•3mo ago
incredible tools, I love rampensau

also, beautiful site! https://elastiq.ch/

meodai•3mo ago
thanks!
kouru225•3mo ago
Very clearly shows how much more sensitive our eyes are to luminance than to hue or saturation, which was the main observation that allowed for the high compression rate of JPEG
dinkelberg•3mo ago
Are you speaking of chroma subsampling, or is there a property of the discrete cosine transform that makes it more effective on luma rather than chroma?
ricardobeat•3mo ago
Probably chroma subsampling - storing color at lower resolution than luminance to take advantage of the aforementioned sensitivity difference. Since it’s stored at 1/4 resolution it can alone almost halve the file size.

Saying it’s the insight that led to JPEG seems wrong though, as DCT + quantization was (don’t quote me on this) the main technical breakthrough?

dinkelberg•3mo ago
Chroma subsampling was developed for TV, long before JPEG.
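
For the curious, a small sketch of the 4:2:0 subsampling being discussed (not JPEG's actual code path, which also involves the DCT and quantization): luma is kept per pixel using the BT.601/JFIF weights, and the two chroma channels are averaged over 2×2 blocks, bringing 3 samples per pixel down to 1.5.

  // pixels: row-major array of {r, g, b} (0..255); width and height assumed even here.
  function subsample420(pixels, width, height) {
    const Y  = new Float32Array(width * height)
    const Cb = new Float32Array((width / 2) * (height / 2))
    const Cr = new Float32Array((width / 2) * (height / 2))

    // Full-resolution luma (BT.601 weights, as in JFIF).
    for (let i = 0; i < pixels.length; i++) {
      const { r, g, b } = pixels[i]
      Y[i] = 0.299 * r + 0.587 * g + 0.114 * b
    }

    // One chroma sample per 2x2 block: average the block's Cb and Cr.
    for (let by = 0; by < height; by += 2) {
      for (let bx = 0; bx < width; bx += 2) {
        let cb = 0, cr = 0
        for (const [dx, dy] of [[0, 0], [1, 0], [0, 1], [1, 1]]) {
          const i = (by + dy) * width + (bx + dx)
          const { r, b } = pixels[i]
          cb += 128 + 0.5 * (b - Y[i]) / (1 - 0.114)
          cr += 128 + 0.5 * (r - Y[i]) / (1 - 0.299)
        }
        const j = (by / 2) * (width / 2) + bx / 2
        Cb[j] = cb / 4
        Cr[j] = cr / 4
      }
    }
    // width*height luma samples + 2 * (width/2)*(height/2) chroma = 1.5 per pixel vs 3 for RGB.
    return { Y, Cb, Cr }
  }
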
jjcm•3mo ago
One thing I'd love to see is a comparison between named colors and colors in use. What areas are under represented by named colors?
AceJohnny2•3mo ago
Why is #00FFFF called "Aqua" and not "Cyan"?

I guess there exist multiple names for the same colors, per https://www.w3schools.com/cssref/css_colors.php, and for some reason "Aqua" takes precedence in this display.

badmonkey0001•3mo ago
Could be just plain alphabetical. There's a selector for which color name list to use/examine on the bottom of the visualization. There's also a selector for which color space model to use.
hwc•3mo ago
Now make it do a perceptually uniform color space.
astolarz•3mo ago
Randomly mousing over it I noticed "Trunks hair" (#9b5fc0) and had to look it up to be sure I wasn't crazy...
anomie31•3mo ago
Are there really no named colors outside the sRGB gamut?
meodai•3mo ago
I'd love to start one. Not sure what format to store them in to be future-proof, though.
virtualritz•3mo ago
RGB is a color model[1], not a color space[2].

Note that the headline gets this wrong but the page linked to gets this right.

sRGB or Rec2020 or ACEScg etc. are color spaces with known primaries and a known whitepoint. This is not nit-picky. Almost everyone doing CGI for the first time w/o reading up on some theory beforehand gets this wrong (because of gamma and then premultiplication, usually in that order).

Then there are color models which are also color spaces. CIE XYZ is an example.

[1] https://en.wikipedia.org/wiki/Color_model

[2] https://en.wikipedia.org/wiki/Color_space

zeroq•3mo ago
Not an expert but I'll drop my 2c

Most of my career was somehow related to graphics programming, and I always thought it's a bit weird that most quantization algorithms operate in the RGB model despite the fact that it was designed for hardware, not for faithful color manipulation.

The easiest way to see that is to imagine a gradient between two colors and try to make it in RGB. It doesn't look right most of the time.

If so, then why would we consider distance in 3D space between two colors as faithful representation of their likeness?

Well, lo and behold, it's 2025 and everyone is finally accepting LAB as the new standard. :)

subb•3mo ago
Except color is a construction of your eye-brain derived from stimuli, surround, memories, etc.

It's definitely not something you can plug into a three-value model. Those are good stimulus-encoding spaces, however.

The distinction between brain-color and physical-color is what screws everyone up.

zeroq•3mo ago
fun fact: there's a guy with a similar background to mine, with a similar dedication to color, yet way more productive, and he came out with this incredible piece of art: the Rebelle app

As with most recent technological breakthroughs, it uses math from a 1931 paper to magically blend colors in ways that seem so realistic it's almost uncanny.

dahart•3mo ago
> It's definitely not something you can plug into a three-value model.

What do you mean? And what is screwed up? We use 3 dimensions because most of us are trichromats, and because (un-coincidentally) most digital display devices have 3 primaries. The three-value models definitely are sufficient for many color tasks & goals. Three-value models work so well that outside of science and graphics research it’s hard to find good reasons to need more, especially for art & design work. It’d be more interesting to identify cases where a 3d color model or color space doesn’t work… what cases are you thinking of? 3D cone response is neither physical (spectral) color nor perceptual (“brain”) color, and it lands much closer to the physically-based side of things, but completely physically justifies using 3D models without needing to understand the brain or perception, does it not?

IgorPartola•3mo ago
I had experimented with some photo printing services and came across one professional level service that offered pigment inkjet printing (vs much more common dye inkjet printing). Their printers had 12 colors of ink vs the traditional 4. I did some test photos and visually they looked stunning.
wongarsu•3mo ago
"You only need three colors" is a bit of a cheat, because it doesn't really work out in reality. You can use three colors to get a good color gamut (as your screen is doing right now), but to represent close to every color we can see you would need to choose a red and blue close to the edge of what we can perceive, which would make it very dim. And because human vision is weird you would need some negative red as well, which doesn't really exist.

Printing instead uses colors that are in the range we can perceive well, and whenever you want a color that is beyond what a combination of the chosen CMYK tones can represent you just add more colors to widen your gamut. Also printed media arguably prints more information than just color (e.g. "metal" colors with different reflectivity, or "neon" colors that convert UV to visible light to appear unnaturally bright)

IgorPartola•3mo ago
Which is interesting because I am printing digital photos which I edit on an RGB screen.
mceachen•3mo ago
I paid for college in part by doing digital prepress. We had CMYK and 8 and 12 color separations.

CMYK always has a dramatic color shift from any on-screen colorspace. Vivid green is really hard to get. Neons are (kinda obviously) impossible. And, hilariously/ironically (given how prevalent they are), all manner of skin tones are tough too.

Photoshop and Illustrator let you work in CMYK, and that's directionally correct. Ask your printer if they accept those natively.

dahart•3mo ago
Have you looked at the actual ink colors? Printing is a very different story. They’re not using 12 primaries, they’re using multiple gradations of the same primary. I don’t know which ink set you used, but 5 different grayscale values is common in a 12-ink set. Here’s an example of a 12 ink set:

https://www.amazon.com/Xeepton-Cartridge-Replacement-PFI4100...

There’s only 1 extra color there: red. There are multiple blacks, multiple cyans, multiple yellows, and multiple magentas. The reason printers use more than 3 inks is for better tone in gradations, better gloss and consistency. It’s not because there’s anything wrong with 3D color models. It’s because they’re a different medium than TVs. Note that most color printers take 3D color models as input, even when they use more than 3 inks.

IgorPartola•3mo ago
I believe they had the standard CMYK, four shades of black, as well as red, orange, green, and either violet or blue. But it has been a bunch of years, so this is from memory. I honestly don't remember the name of it. What I do remember is that they didn't have a web-based ordering system. Instead they had a piece of desktop software you had to install. And you had to prove that you are a professional photographer before they would let you create an account. I am not a professional photographer, but I did enough amateur photography that I managed to fake my way into it and placed a few orders. Quality was definitely better for all options compared to Nations Photo Lab, but so was the price, and the ordering setup was much more complex, so I didn't continue using them. They did have a lot more specialty options than any other printer I have seen.
gsck•3mo ago
You see this all the time with professional lighting fixtures as well!

For example, the ETC Source4 LED Lustr X8 has: Deep Red, Red, Amber, Lime, Green, Cyan, Blue, Indigo[0]

RGB LEDs are pretty crappy at rendering colours as they miss quite a lot of the colour spectrum, so the solution is just add more to fill in the gaps!

[0] https://www.etcconnect.com/WorkArea/DownloadAsset.aspx?id=10...

zeroq•3mo ago
Printing is a whole other beast.

My fav part: if you're preparing an ad for a newspaper, you need to keep the sum of all of your CMYK components under a value of 120 or so, otherwise the ink will either dissolve the paper or soak right through it.

subb•3mo ago
They are very useful for encoding stimuli, but stimuli are "not yet" color. When you have an image that is not just a patch of RGB values, a lot of things will influence what color you compute from the exact same RGB.

Akiyoshi's color constancy demonstrations are good examples of this. The RGB model (and any three-values "perceptual" model) fails to predict the perceived color here. You are seeing different colors but the RGB values are the same.

https://www.psy.ritsumei.ac.jp/akitaoka/marie-eyecolorconsta...

dahart•3mo ago
Here you’re talking about only perception, and not physical color. You could use 100 dimensional spectral colors, or even 1D grayscale values, and still have the same result. So this example doesn’t have any bearing on whether a 3D color space works well for humans or not. Do you have any other examples that suggest a 3D color space isn’t good enough? I still don’t understand what you meant.
subb•3mo ago
Yes exactly. I'm intentionally using "color" as a perceptual thing, not as a physical thing. If we are talking about a color model, then it needs to model perception. As such, RGB, as a predictor of perception, often fails because it doesn't account for much more than what hits the retina, not for what happens after. For one, it lacks spatial context - placing the same RGB value with a different surround will feel different, like in the example above. But if you had a real color (as in, perceptual) picker in Photoshop, you would get a different value.

It's excellent at compressing the visible part of the EM spectrum, however. This is what I meant by stimuli encoding.

dahart•3mo ago
Still not seeing why you claimed color is definitely not something you can plug into a 3D model. We can, and do, use 3D color models, of course. And some of them are designed to try to be closer to perceptual in nature, such as the LAB space like @zeroq mentioned at the top of this sub-thread. No well known perceptual color space I know of, and no color space in Photoshop, accounts for context/surround/background, so I don’t understand your claim about Photoshop immediately after talking about the surround problem, but FWIW everyone here knows that RGB is not a perceptual color space and doesn’t have a specification or standard, and everyone here knows that color spaces don’t solve all perceptual problems.

I find it confusing to claim that cone response isn’t color yet, that’s going to get you in trouble in serious color discussions. Maybe better to just be careful and qualify that you’re talking about perception than say something that is highly contestable?

The claim that a color model must model perception is also inaccurate. Whether to incorporate human perception is a choice that is a goal in some models. Having perceptual goals is absolutely not a requirement to designing a practical color model, that depends entirely on the design goals. It’s perfectly valid to have physical color models with no perceptual elements.

subb•3mo ago
The problem is that we mix up physical and perception, including in our language. If you look at the physical stuff, there's nothing in this specific range of EM radiation that is different from UV or IR light (or further). The physical stuff is not unique, our reading is. Therefore, color is not a physical thing.

And so when I say "color" I only mean it to be the construction that we make out of the physical thing.

We project these constructions back outside of us (e.g. the apple is red), but we must not fool ourselves that the projection is the thing, especially when we try to be more precise about what is happening.

This is why I'm saying a 3D "color" model is very far from modelling color (the brain thing) at all. But! It's not purely physical either, otherwise it would just be a spectral band or something. So this is pseudo-perceptual. It's the physical stuff, tailored for the very first bits of anatomy that we have to read this physical stuff. It's stimulus encoding.

If you build a color model, it's therefore always perceptual, and needs to be evaluated against what you are trying to model - perception. You create a model to predict things. RGB and all the other models based on three values in a vacuum will always fail at predicting color (brain!) when the stimulus's surround is more complex.

dahart•3mo ago
There’s a valid point in there somewhere, but you’re also saying some stuff that seems hyperbolic and getting harder to agree with. You’re right that perception is complicated, and I agree with you when you say 3D models don’t capture all of perception. That is true. That does not imply that people can’t use 3D models for lots of color tasks. Again, it always depends on your goals. You’re making abstract and general claims without stating your goals.

It’s fine for you to think of perception when you say color, but that’s not what everyone means, and therefore, you’re headed for miscommunication when you make assumptions and/or insist on non-standard definitions of these words.

Physical color is of course a thing. (BTW, it seems funny to say it’s not a thing after you introduced the term physical-color to this thread.) Physical color can mean, among other things, the wavelength distribution of light power. A physical color model is also a thing, it can include the quantized numerical representation of a spectral power distribution. Red can mean 700nm light. Some people, especially researchers and scientists, use physical color models all the time. You’re talking about meanings that are more specific than the general terms you’re using, so maybe re-familiarizing yourself with the accepted definitions of color and color model would help? https://en.wikipedia.org/wiki/Color_model

Again, it’s fine to talk about perception and human vision, but FWIW the way you’re talking about this makes it seem like you’re not understanding the specific goals behind 3D color spaces like LAB. Nobody is claiming or fooling themselves to think they solve all perception problems or meet all possible goals, so it seems like a straw man to keep insisting on something that was never an issue in this thread. If you want to talk about 3D models not being good enough for perception, then please be more precise about your goals. That’s why I asked what use cases you’re thinking of, and we haven’t discussed a goal that justifies needing something other than a 3D color model - color constancy illusions do not make that point.

subb•3mo ago
Unfortunately, it seems like we will not reach any agreement here.
zeroq•3mo ago
Honestly I haven't read the whole thread, but I think you're mixing in stuff like green and blue being called by the same word in some languages, or Ancient Greek completely missing a word for blue.

What I was thinking is along the lines of showing a real life scene to ten random people - like a view of a city park outside of an office window - and then showing them a picture of said scene on a computer screen using only 256 colors (quantization) and asking them if it looks the same.

Or modeling a photorealistic 3D scene of a room in a video game, then switching off the light and asking the player if the scene still looks realistic after we changed the colors, or whether we stumbled into the uncanny valley.

The simplest, hands-on experiment I can think of is putting yourself in the shoes of an oil painter and thinking about creating a gradient between two colors, let's say blue and green (or any other pair, it doesn't really matter). Now try to imagine said gradient in your mind and then try to recreate it with a graphics program like Photoshop. If you go down this route, the gradient will seem odd. Unnatural.

All the standards we have commonly used for the last 30 years, like RGB, HSL, HSV, etc., fall flat. They are not so far off as to call them "uncanny" (as in "uncanny valley"), but they seem wrong if you look close enough.

To actually simulate mixing two blobs of oil paint you need arcane algorithms like Kubelka-Munk (yet another groundbreaking discovery in IT made by reading 100-year-old research).

All in all - take a look at this video, I know it's 40 minutes long, but this topic has been a peeve for me for almost 20 years and it's the best and most comprehensible take on the subject: https://www.youtube.com/watch?v=gnUYoQ1pwes

dahart•3mo ago
That video is excellent, thanks for sharing. BTW it does back up the point @subb was making, that the experience of color is a perceptual thing; “light isn’t what makes something a color. As we’ve seen, colors are ultimately a psychological phenomenon.” Which is true.

FWIW I suspect the issue in this thread is that color models and color spaces are not necessarily modeling perception. The word color is overloaded and has multiple meanings. Just because color experience is perception, that doesn’t mean “color” is always referring to perception nor that phrases like “spectral color” or “color model” are referring to perceived experience, and they’re often not.

A color model is any numeric representation that captures the information needed to recreate a color, and it can be a physical or spectral color model, a stimulus model (cone response), or a perception model. Being able to recreate a color does not imply that the information is perceptual. Spectral “color” measurements are just pure physics, and spectral color models are just modeling pure physics.

By and large, the color matching experiments that led to our CIE standards mostly measured average cone response for an average observer, and were never intended nor designed to capture effects like adaptation and surround. This is why many of the 3D color spaces we have that trace lineage to those experiments, especially the "perceptual" ones, are primarily modeling cone response and not perception. CIE color spaces do involve some kind of very averaged-out perception of color, in a static, unchanging, well-adapted, no-surround kind of way, which is for example why the "red" color matching function goes negative. [1]

There are people doing stuff like adaptation and spatial tone mapping in video games and research, and they’re using more tools than just 3D color spaces for that. That’s the kind of discussion I was hoping @subb would get into, i.e., what specific cases require going beyond the CIE models.

[1] https://yuhaozhu.com/blog/cmf.html

meodai•3mo ago
thanks for bringing that up. It's a fight I stopped fighting a long time ago...
globular-toast•3mo ago
Is this why there appears to be a quite distinct plane inside the cube? If we were looking at them in the colour space would it look more uniformly spread?
adrian_b•3mo ago
While you are right, sometimes "RGB" is used as an abbreviation for some color space that is understood from the context, e.g. the CIE 1931 RGB color space (from which the CIE XYZ color space has been derived) or the RGB decoded correspondent of some TV color space, e.g. NTSC, PAL or SECAM.
virtualritz•3mo ago
I would really like to understand where that "sometimes" is, nowadays.

RGB just means that color is expressed as a triplet of specific wavelengths. But what is red? And what does red = 1.0 mean w/o context (aka primaries & whitepoint)? What about HDR? What does green = 2.0 convey? Etc.

For context, I worked in VFX production from the 90's to the early 2010's. About 25 years.

And in commercially available VFX-related software, until the early 2000's, mostly, RGB meant non-linear sRGB, unfortunately (or actually: "whatever" would be more true).

And it shows. We have VFX composited in a non-linear color space with blown-out, oversaturated colors in highlights, fringes from the resulting alpha blending errors, etc. A good compositor can compensate for some of these issues, but only so far. If the maths are wrong, stuff will look shitty to some extent. Or as people in VFX say: "I have comments."

After that, SIGGRAPH courses etc. ensured people developed an understanding of how much this matters.

And after that we had color spaces and learned to do everything in linear. And never looked back.

Games, as always, caught up a decade after. But they, too, did, eventually.
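
A tiny illustration of the "do everything in linear" point (just the standard sRGB transfer function applied around a 50/50 mix, not code from any particular package): decode to linear light, blend there, and re-encode; the midpoint comes out noticeably brighter and less muddy than averaging the encoded values directly.

  // Standard sRGB transfer function, channels in 0..1.
  const srgbToLinear = (c) => c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  const linearToSrgb = (l) => l <= 0.0031308 ? 12.92 * l : 1.055 * Math.pow(l, 1 / 2.4) - 0.055

  // Mix two sRGB triplets with weight t, doing the arithmetic in linear light.
  function mixLinear(a, b, t = 0.5) {
    return a.map((ca, i) =>
      linearToSrgb(srgbToLinear(ca) * (1 - t) + srgbToLinear(b[i]) * t))
  }

  // Averaging the encoded values of red and green gives [0.5, 0.5, 0], a muddy olive;
  // blending in linear light gives a brighter midpoint, closer to what light actually does.
  console.log(mixLinear([1, 0, 0], [0, 1, 0]))   // ≈ [0.735, 0.735, 0]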

gyrovagueGeist•3mo ago
Thief of Time by Terry Pratchett has a great minor bit about characters who name themselves after colors running out of human-made labels, as they have to get increasingly esoteric with the names. It's fun to see that visualized.
calrain•3mo ago
It would be great to see this for each culture around the world, identifying the named colours from their language / culture.

I saw a BBC? documentary about this years ago and it showed how some cultures had the ability to clearly identify different colours where I couldn't see any difference.

It turns out that knowing subtle differences in colours can have a strong impact on your daily life, so cultures pick unique parts of the colour spectrum to assign names to.

shagie•3mo ago
https://news.mit.edu/2017/analyzing-language-color-0918 and also https://theconversation.com/languages-dont-all-have-the-same...

VOX : The surprising pattern behind color names around the world https://youtu.be/gMqZR3pqMjg

If you're interested in this is as a board game - https://boardgamegeek.com/boardgame/302520/hues-and-cues

meodai•3mo ago
I originally made this about 8 years ago just for myself: to see where the color name list I maintain had gaps: https://github.com/meodai/color-names

As I learned more about color models, I kept adding different ones over time. The perceptual models helped me understand the “missing” areas much better.

Later, after building an API around the list (https://github.com/meodai/color-name-api ), I started including other lists with permissive licenses too.

Appreciate all the thoughts and feedback here. I’ve also changed it so the cube stops spinning once you interact with it.

jcattle•3mo ago
I've recently used two decades of satellite data to compute average colors for land-cover types (like average forest color or average water color).

If you want to extend your color naming game by being able to say: This looks like Afghanistan-Water, or this looks like Ecuador-Forest

Page is here: https://landshade.com

utopiah•3mo ago
Neat, this inspired me to make an immersive version (in WebXR) limited to the named colors in HTML (140), so here is an 18s video: https://video.benetou.fr/w/rugsEB2sSbqgixNm2QjumH

11 lines of JavaScript thanks to AFrame, threejs and some of my own tinkering:

  // Build one draggable <a-box> per named HTML color, positioned by its RGB value.
  fetch('colors.json').then( res => res.json() ).then( colors => {
    colors.map( c => {
      let boxEl = document.createElement("a-box")
      boxEl.id = 'color_' + c.name
      // Parse "RGB(r,g,b)" and scale the channels down for scene coordinates.
      let [r, g, b] = c.rgb.replace('RGB(', '').replace(')', '').split(',').map( n => Number(n) / 100 )
      let pos = `${1 - r} ${0.5 + g} ${-0.5 - b}`
      boxEl.setAttribute("position", pos)
      boxEl.initialPosition = pos
      boxEl.setAttribute("scale", ".1 .1 .1")
      boxEl.setAttribute("color", c.name)
      boxEl.setAttribute("target", "")
      boxEl.setAttribute("onpicked", "setFeedbackHUD('color'+selectedElements.at(-1).element.getAttribute('color'))" )
      boxEl.setAttribute("onreleased", "let el = selectedElements.at(-1).element; el.setAttribute('position',el.initialPosition)" )
      AFRAME.scenes[0].appendChild(boxEl)
    }) // closes colors.map()
  }) // closes fetch().then()