frontpage.

What an unprocessed photo looks like

https://maurycyz.com/misc/raw_photo/
398•zdw•2h ago•99 comments

Stepping down as Mockito maintainer after 10 years

https://github.com/mockito/mockito/issues/3777
198•saikatsg•5h ago•90 comments

Unity's Mono problem: Why your C# code runs slower than it should

https://marekfiser.com/blog/mono-vs-dot-net-in-unity/
89•iliketrains•3h ago•45 comments

62 years in the making: NYC's newest water tunnel nears the finish line

https://ny1.com/nyc/all-boroughs/news/2025/11/09/water--dep--tunnels-
44•eatonphil•2h ago•12 comments

Spherical Cow

https://lib.rs/crates/spherical-cow
28•Natfan•2h ago•1 comment

MongoBleed Explained Simply

https://bigdata.2minutestreaming.com/p/mongobleed-explained-simply
80•todsacerdoti•4h ago•20 comments

PySDR: A Guide to SDR and DSP Using Python

https://pysdr.org/content/intro.html
99•kklisura•5h ago•6 comments

Show HN: My app just won best iOS Japanese learning tool of 2025 award

https://skerritt.blog/best-japanese-learning-tools-2025-award-show/
35•wahnfrieden•1h ago•3 comments

Rich Hickey: Thanks AI

https://gist.github.com/richhickey/ea94e3741ff0a4e3af55b9fe6287887f
61•austinbirch•1h ago•7 comments

Researchers Discover Molecular Difference in Autistic Brains

https://medicine.yale.edu/news-article/molecular-difference-in-autistic-brains/
36•amichail•2h ago•12 comments

Growing up in “404 Not Found”: China's nuclear city in the Gobi Desert

https://substack.com/inbox/post/182743659
683•Vincent_Yan404•18h ago•296 comments

Slaughtering Competition Problems with Quantifier Elimination

https://grossack.site/2021/12/22/qe-competition.html
16•todsacerdoti•2h ago•0 comments

Why I Disappeared – My week with minimal internet in a remote island chain

https://www.kenklippenstein.com/p/why-i-disappeared
31•eh_why_not•3h ago•3 comments

Time in C++: Inter-Clock Conversions, Epochs, and Durations

https://www.sandordargo.com/blog/2025/12/24/clocks-part-5-conversions
21•ibobev•2d ago•3 comments

Remembering Lou Gerstner

https://newsroom.ibm.com/2025-12-28-Remembering-Lou-Gerstner
65•thm•6h ago•29 comments

Building a macOS app to know when my Mac is thermal throttling

https://stanislas.blog/2025/12/macos-thermal-throttling-app/
227•angristan•13h ago•99 comments

Writing non-English languages with a QWERTY keyboard

https://altgr-weur.eu/altgr-intl.html
7•tokai•4d ago•0 comments

Dolphin Progress Report: Release 2512

https://dolphin-emu.org/blog/2025/12/22/dolphin-progress-report-release-2512/
74•akyuu•3h ago•5 comments

Fast Cvvdp Implementation in C

https://github.com/halidecx/fcvvdp
4•todsacerdoti•1h ago•0 comments

Doublespeak: In-Context Representation Hijacking

https://mentaleap.ai/doublespeak/
45•surprisetalk•6d ago•5 comments

How to Complain

https://outerproduct.net/trivial/2024-03-25_complain.html
9•ysangkok•1h ago•1 comment

Learn computer graphics from scratch and for free

https://www.scratchapixel.com
172•theusus•14h ago•23 comments

As AI gobbles up chips, prices for devices may rise

https://www.npr.org/2025/12/28/nx-s1-5656190/ai-chips-memory-prices-ram
45•geox•2h ago•29 comments

Intermission: Battle Pulses

https://acoup.blog/2025/12/18/intermission-battle-pulses/
6•Khaine•2d ago•0 comments

Software engineers should be a little bit cynical

https://www.seangoedecke.com/a-little-bit-cynical/
111•zdw•3h ago•83 comments

Show HN: Pion SCTP with RACK is 70% faster with 30% less latency

https://pion.ly/blog/sctp-and-rack/
35•pch07•7h ago•5 comments

One year of keeping a tada list

https://www.ducktyped.org/p/one-year-of-keeping-a-tada-list
221•egonschiele•6d ago•64 comments

Oral History of Richard Greenblatt (2005) [pdf]

https://archive.computerhistory.org/resources/text/Oral_History/Greenblatt_Richard/greenblatt.ora...
10•0xpgm•3d ago•0 comments

John Malone and the Invention of Liquid-Based Engines

https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-93-1350-25
14•akshatjiwan•4d ago•2 comments

Show HN: Phantas – A browser-based binaural strobe engine (Web Audio API)

https://phantas.io
16•AphantaZach•4h ago•8 comments

What an unprocessed photo looks like

https://maurycyz.com/misc/raw_photo/
395•zdw•2h ago

Comments

throw310822•2h ago
Very interesting, pity the author chose such a poor example for the explanation (low, artificial and multicoloured light), making it really hard to understand what the "ground truth" and expected result should be.
delecti•1h ago
I'm not sure I understand your complaint. The "expected result" is either of the last two images (depending on your preference), and one of the main points of the post is to challenge the notion of "ground truth" in the first place.
throw310822•1h ago
Not a complaint, but both the final images have poor contrast, lighting, saturation and colour balance, making them a disappointing target for an explanation of how these elements are produced from raw sensor data.

But anyway, I enjoyed the article.

killingtime74•2h ago
Very cool and detailed
krackers•2h ago
>if the linear data is displayed directly, it will appear much darker then it should be.

This seems more like a limitation of monitors. If you had a very large bit depth, couldn't you just display images in linear light without gamma correction?

AlotOfReading•1h ago
Correction is useful for a bunch of different reasons, not all of them related to monitors. Even ISP pipelines without displays involved will still usually do it to allocate more bits to the highlights/shadows than the relatively distinguishable middle bits. Old CRTs did it because the electron gun had a non-linear response and the gamma curve actually linearized the output. Film processing and logarithmic CMOS sensors do it because the sensing medium has a nonlinear sensitivity to the light level.
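
A quick way to see the bit-allocation argument (my own numpy sketch, not from the comment above; the 1/2.2 power is just a stand-in for a real transfer curve): quantize a linear ramp to 8 bits directly and via a gamma encode, then compare the error in the shadows.

    import numpy as np

    linear = np.linspace(0.001, 1.0, 100_000)        # scene-referred linear light

    # Quantize to 8 bits directly vs. through a simple 1/2.2 gamma encode/decode
    q_linear = np.round(linear * 255) / 255
    q_gamma = (np.round(linear ** (1 / 2.2) * 255) / 255) ** 2.2

    shadows = linear < 0.05                          # darkest few percent of the range
    print(np.abs(q_linear[shadows] - linear[shadows]).mean())
    print(np.abs(q_gamma[shadows] - linear[shadows]).mean())
    # The gamma path shows a much smaller error in the shadows: the curve spends
    # more of the 256 codes where banding would otherwise be visible.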
dheera•59m ago
If we're talking about a sunset, then we're talking about your monitor shooting out blinding, eye-hurtingly bright light wherever the sun is in the image. That wouldn't be very pleasant.
myself248•3m ago
Which is why I'm looking at replacing my car's rear-view mirror with a camera and a monitor. Because I can hard-cap the monitor brightness and curve the brightness below that, eliminating the problem of billion-lumens headlights behind me.
Sharlin•45m ago
No. It's about the shape of the curve. Human light intensity perception is not linear. You have to nonlinearize at some point of the pipeline, but yes, typically you should use high-resolution (>=16 bits per channel) linear color in calculations and apply the gamma curve just before display. The fact that traditionally this was not done, and linear operations like blending were applied to nonlinear RGB values, resulted in ugly dark, muddy bands of intermediate colors even in high-end applications like Photoshop.
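
A minimal sketch of the muddy-blend problem (my own example; the transfer functions below are the published sRGB formulas): average pure red and pure green in encoded sRGB versus in linear light.

    import numpy as np

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    red = np.array([1.0, 0.0, 0.0])      # sRGB values in 0..1
    green = np.array([0.0, 1.0, 0.0])

    naive = 0.5 * (red + green)          # blending the encoded values
    proper = linear_to_srgb(0.5 * (srgb_to_linear(red) + srgb_to_linear(green)))

    print(naive)    # [0.5 0.5 0.0]: the dark, muddy midpoint
    print(proper)   # ~[0.735 0.735 0.0]: the brighter midpoint you actually expect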
krackers•24m ago
>Human light intensity perception is not linear... You have to nonlinearize at some point of the pipeline

Why exactly? My understanding is that gamma correction is effectively an optimization scheme to allocate bits in a perceptually uniform way across the dynamic range. But if you just have enough bits to work with and are not concerned with file sizes (and assuming all hardware could support these higher bit depths), then this shouldn't matter? IIRC, unlike CRTs, LCDs don't have a power-curve response in terms of the hardware anyway, and emulate the overall 2.2 TRC via a LUT. So you could certainly get monitors to accept linear input (assuming you manage to crank up the bit depth enough that you're not losing perceptual fidelity), and just do everything in linear light.

In fact if you just encoded the linear values as floats that would probably give you best of both worlds, since floating point is basically log-encoding where density of floats is lower at the higher end of the range.

https://www.scantips.com/lights/gamma2.html
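
A rough check of the float idea (my own sketch, not from the linked page): store a linear ramp as float16 versus as an 8-bit integer code and compare the relative error in the darks.

    import numpy as np

    linear = np.linspace(1e-4, 1.0, 100_000)
    as_f16 = linear.astype(np.float16).astype(np.float64)   # store, then read back
    as_u8 = np.round(linear * 255) / 255                     # fixed 8-bit code

    dark = linear < 0.01
    print(np.max(np.abs(as_f16[dark] - linear[dark]) / linear[dark]))
    print(np.max(np.abs(as_u8[dark] - linear[dark]) / linear[dark]))
    # float16 keeps the relative error small across the whole range (its step size
    # scales with the value, the log-ish behaviour described above), while the
    # fixed integer step gives huge relative error in the darks.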

Dylan16807•12m ago
The shape of the curve doesn't matter at all. What matters is having a mismatch between the capture curve and the display curve.

If you kept it linear all the way to the output pixels, it would look fine. You only have to go nonlinear because the screen expects nonlinear data. The screen expects this because it saves a few bits, which is nice but far from necessary.

To put it another way, it appears so dark because it isn't being "displayed directly". It's going directly out to the monitor, and the chip inside the monitor is distorting it.

userbinator•2h ago
I think everyone agrees that dynamic range compression and de-Bayering (for sensors which are colour-filtered) are necessary for digital photography, but at the other end of the spectrum is "use AI to recognise objects and hallucinate what they 'should' look like" --- and despite how everyone would probably say that isn't a real photo anymore, it seems manufacturers are pushing strongly in that direction, raising issues with things like admissibility of evidence.
stavros•1h ago
One thing I've learned while dabbling in photography is that there are no "fake" images, because there are no "real" images. Everything is an interpretation of the data that the camera has to do, making a thousand choices along the way, as this post beautifully demonstrates.

A better discriminator might be global edits vs local edits, with local edits being things like retouching specific parts of the image to make desired changes, and one could argue that local edits are "more fake" than global edits, but it still depends on a thousand factors, most importantly intent.

"Fake" images are images with intent to deceive. By that definition, even an image that came straight out of the camera can be "fake" if it's showing something other than what it's purported to (e.g. a real photo of police violence but with a label saying it's in a different country is a fake photo).

What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied. We should get over that specific editing process, it's no more fake than anything else.

nospice•1h ago
> A better discriminator might be global edits vs local edits,

Even that isn't all that clear-cut. Is noise removal a local edit? It only touches some pixels, but obviously, that's a silly take.

Is automated dust removal still global? The same idea, just a bit more selective. If we let it slide, what about automated skin blemish removal? Depth map + relighting, de-hazing, or fake bokeh? I think that modern image processing techniques really blur the distinction here because many edits that would previously need to be done selectively by hand are now a "global" filter that's a single keypress away.

Intent is the defining factor, as you note, but intent is... often hazy. If you dial down the exposure to make the photo more dramatic / more sinister, you're manipulating emotions too. Yet, that kind of editing is perfectly OK in photojournalism. Adding or removing elements for dramatic effect? Not so much.

card_zero•1h ago
What's this, special pleading for doctored photos?

The only process in the article that involves nearby pixels is to combine R, G and B (and the other G) into one screen pixel. (In principle these could be mapped to subpixels.) Everything fancier than that can be reasonably called some fake cosmetic bullshit.

nospice•58m ago
I honestly don't understand what you're saying here.
card_zero•51m ago
I can't see how to rephrase it. How about this:

Removing dust and blemishes entails looking at more than one pixel at a time.

Nothing in the basic processing described in the article does that.

seba_dos1•32m ago
The article doesn't even go anywhere near what you need to do in order to get an acceptable output. It only shows the absolute basics. If you apply only those to a photo from a phone camera, it will be massively distorted (the effect is smaller, but still present on big cameras).
card_zero•25m ago
"Distorted" makes me think of a fisheye effect or something similar. Unsure if that's what you meant.
seba_dos1•20m ago
That's just one kind of distortion you'll see. There will also be bad pixels, lens shading, excessive noise in low light, various electrical differences across rows and temperatures that need to be compensated... Some (most?) sensors will even correct some of these for you already before handing you "raw" data.

Raw formats usually carry "Bayer-filtered linear (well, almost linear) light in device-specific color space", not necessarily "raw unprocessed readings from the sensor array", although some vendors move it slightly more towards the latter than others.

teeray•1h ago
> "Fake" images are images with intent to deceive

The ones that make the annual rounds up here in New England are those foliage photos with saturation jacked. “Look at how amazing it was!” They’re easy to spot since doing that usually wildly blows out the blues in the photo unless you know enough to selectively pull those back.

dheera•1h ago
Photography is also an art. When painters jack up saturations in their choices of paint colors people don't bat an eyelid. There's no good reason photographers cannot take that liberty as well, and tone mapping choices is in fact a big part of photographers' expressive medium.

If you want reality, go there in person and stop looking at photos. Viewing imagery is a fundamentally different type of experience.

zmgsabst•51m ago
Sure — but people reasonably distinguish between photos and digital art, with “photo” used to denote the intent to accurately convey rather than artistic expression.

We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.

dheera•45m ago
Journalistic/event photography is about accuracy to reality, almost all other types of photography are not.

Portrait photography -- no, people don't look like that in real life with skin flaws edited out

Landscape photography -- no, the landscapes don't look like that 99% of the time, the photographer picks the 1% of the time when it looks surreal

Staged photography -- no, it didn't really happen

Street photography -- a lot of it is staged spontaneously

Product photography -- no, they don't look like that in normal lighting

xgulfie•1h ago
There's an obvious difference between debayering and white balance vs using Photoshop's generative fill
sho_hn•56m ago
Pretending that "these two things are the same, actually" when in fact no, you can separately name and describe them quite clearly, is a favorite pastime of vacuous content on the internet.

Artists, who use these tools with clear vision and intent to achieve specific goals, strangely never have this problem.

imiric•1h ago
I understand what you and the article are saying, but what GP is getting at, and what I agree with, is that there is a difference between a photo that attempts to reproduce what the "average" human sees, and digital processing that augments the image in ways that no human could possibly visualize. Sometimes we create "fake" images to improve clarity, detail, etc., but that's still less "fake" than smoothing skin to remove blemishes, or removing background objects. One is clearly a closer approximation of how we perceive reality than the other.

So there are levels of image processing, and it would be wrong to dump them all in the same category.

userbinator•1h ago
Everything is an interpretation of the data that the camera has to do

What about this? https://news.ycombinator.com/item?id=35107601

mrandish•1h ago
News agencies like AP have already come up with technical standards and guidelines to technically define 'acceptable' types and degrees of image processing applied to professional photo-journalism.

You can look it up because it's published on the web, but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping, scaling, etc. filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, the per-object AI enhancement that many mobile phones and social media apps apply by default doesn't meet such standards.
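
As a concrete illustration of the "whole-image" category (my own toy sketch, not AP's actual rules or code): every pixel goes through the identical curve, and nothing is selectively retouched.

    import numpy as np

    def global_levels(img, black=0.05, white=0.95, gamma=1.1):
        # One function applied uniformly to every pixel: levels, contrast, gamma
        out = (np.asarray(img, dtype=float) - black) / (white - black)
        return np.clip(out, 0.0, 1.0) ** (1.0 / gamma)

    img = np.random.default_rng(1).random((4, 4, 3))   # toy RGB image in 0..1
    edited = global_levels(img)                        # same treatment everywhere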

mgraczyk•46m ago
I think Samsung was doing what was alleged, but as somebody who was working on state of the art algorithms for camera processing at a competitor while this was happening, this experiment does not prove what is alleged. Gaussian blurring does not remove the information, you can deconvolve and it's possible that Samsung's pre-ML super resolution was essentially the same as inverting a gaussian convolution
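
The "blur does not remove the information" point can be shown on a toy 1D signal (my own sketch; this is the noiseless case, where a lightly regularised inverse filter recovers the input almost exactly, whereas real photos add noise and quantization, which is what limits recovery in practice):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    signal = rng.random(n)                          # stand-in for one image row

    x = np.arange(n)
    sigma = 1.0
    psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
    psf /= psf.sum()
    psf = np.roll(psf, -n // 2)                     # put the kernel peak at index 0

    H = np.fft.rfft(psf)
    blurred = np.fft.irfft(np.fft.rfft(signal) * H, n)

    eps = 1e-9                                      # tiny regularisation term
    restored = np.fft.irfft(np.fft.rfft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps), n)
    print(np.max(np.abs(restored - signal)))        # tiny: the blur was invertible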
userbinator•18m ago
If you read the original source article, you'll find this important line:

I downsized it to 170x170 pixels

mgraczyk•13m ago
And? What algorithm was used for downsampling? What was the high-frequency content of the downsampled image after doing a pseudo-inverse with upsampling? How closely does it match the Samsung output?

My point is that there IS an experiment which would show that Samsung is doing some nonstandard processing, likely involving replacement. The evidence provided is insufficient to show that.

mmooss•1h ago
> What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied.

Filters themselves don't make it fake, just like words themselves don't make something a lie. How the filters and words are used, whether they bring us closer or further from some truth, is what makes the difference.

Photos implicitly convey, usually, 'this is what you would see if you were there'. Obviously filters can help with that, as in the OP, or hurt.

to11mtm•1h ago
Well, that's why back in the day (and even still) 'photographer listing their whole kit for every shot' is a thing you sometimes see.

i.e. Camera+Lens+ISO+SS+FStop+FL+TC (If present)+Filter (If present). Add focus distance if being super duper proper.

And some of that is to help at least provide the right requirements to try to recreate.

melagonster•32m ago
Today, I suspect the other meaning of "fake images" is that the image was generated by AI.
kortilla•26m ago
But when you shift the goal posts that far, a real image has never been produced. But people very clearly want to describe when an image has been modified to represent something that didn’t happen.
barishnamazov•1h ago
I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.

A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.

Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
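
A small sketch of the "full-resolution luma, low-resolution colour" idea (my own example, using the BT.601 full-range YCbCr conversion that JPEG uses): keep luma at full size and average the chroma planes over 2x2 blocks, 4:2:0 style.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b       # luma: mostly green
        cb = (b - y) * 0.564 + 0.5                  # blue-difference chroma
        cr = (r - y) * 0.713 + 0.5                  # red-difference chroma
        return y, cb, cr

    def subsample_420(c):
        # Average each 2x2 block, then stretch it back up: quarter-res chroma
        small = c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))
        return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

    rgb = np.random.default_rng(2).random((8, 8, 3))
    y, cb, cr = rgb_to_ycbcr(rgb)
    cb_lo, cr_lo = subsample_420(cb), subsample_420(cr)
    # Luma (detail) is untouched; colour is stored at a quarter of the pixels,
    # and the eye is largely forgiving of the difference.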

delecti•1h ago
I have a related anecdote.

When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.

barishnamazov•1h ago
I don't think Kindle ads were available in my region in 2015 because I don't remember seeing these back then, but you're a lucky one to fix this classic mistake :-)

I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.
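
A tiny sketch of the difference (my own example; the weights below are the Rec. 709 flavour of the coefficients mentioned above):

    import numpy as np

    rgb = np.random.default_rng(3).random((4, 4, 3))     # toy RGB image in 0..1

    naive = rgb.mean(axis=-1)                            # the "just average" bug
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])      # Rec. 709 luma weights

    # Quantize to the 4-bit grayscale an e-ink panel might use
    naive_4bit = np.round(naive * 15) / 15
    luma_4bit = np.round(luma * 15) / 15
    # Pure green maps to 0.33 with the average but ~0.72 with luma weights,
    # which is why averaged conversions look flat and too dark in the greens.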

reactordev•1h ago
If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B

These are the coefficients I use regularly.

kccqzy•43m ago
I remember using some photo editing software (Aperture I think) that would allow you to customize the different coefficients and there were even presets that give different names to different coefficients. Ultimately you can pick any coefficients you want, and only your eyes can judge how nice they are.
dheera•1h ago
This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake.

The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.

to11mtm•56m ago
JPEG with OOC processing is different from JPEG OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.
seba_dos1•42m ago
I wrote the raw Bayer to JPEG pipeline used by the phone I write this comment on. The choices on how to interpret the data are mine. Can I tweak these afterwards? :)
to11mtm•14m ago
I mean it depends, does your Bayer-to-JPEG pipeline try to detect things like 'this is a zoomed in picture of the moon' and then do auto-fixup to put a perfect moon image there? That's why there's some need to differentiate between SOOC's now, because Samsung did that.

I know my Sony gear can't call out to AI because the WIFI sucks like every other Sony product and barely works inside my house, but also I know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to leave part of the photography market.

That said I'm a purist to the point where I always offer RAWs for my work [0] and don't do any photoshop/etc. D/A, horizon, bright adjust/crop to taste.

Where phones can possibly do better is the smaller size and true MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.

But I have yet to see anything that gets closer to an ILC for true quality than the decade-plus-old PureView cameras on Nokia phones, probably partially because they often had large enough sensors.

There's only so much computation can do to simulate true physics.

[0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter type jobs in that scene, however it winds up being something where I've gotten repeat work because they found me and a 'photoshop person' was cheaper than getting an AIO pro.

make3•3m ago
it's not a crime, but applying post-processing in an overly generous way that goes a lot further than replicating what a human sees does take away from what makes pictures interesting vs other mediums, imho: that they're a genuine representation of something that actually happened.

if you take that away, a picture is not very interesting: it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), and it doesn't even require the mastery of other mediums to get hyperrealism

bstsb•1h ago
hey, not accusing you of anything (bad assumptions don't lead to a conducive conversation) but did you use AI to write or assist with this comment?

this is totally out of my own self-interest, no problems with its content

ajkjk•1h ago
found the guy who didn't know about em dashes before this year

also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them

reactordev•1h ago
The hatred mostly comes from TTS models not properly pausing for them.

“NO EM DASHES” is common system prompt behavior.

sho_hn•1h ago
Upon inspection, the author's personal website used em dashes in 2023. I hope this helped with your witch hunt.

I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

brookst•11m ago
Phew. I have published work with em dashes, bulleted lists, “not just X, but Y” phrasing, and the use of “certainly”, all from the 90’s. Feel sorry for the kids, but I got mine.
ekidd•47m ago
I have been overusing em dashes and bulleted lists since the actual 80s, I'm sad to say. I spent much of the 90s manually typing "smart" quotes.

I have actually been deliberately modifying my long-time writing style and use of punctuation to look less like an LLM. I'm not sure how I feel about this.

disillusioned•38m ago
Alt + 0151, baby! Or... however you do it on MacOS.

But now, likewise, having to bail on em dashes. My last differentiator is that I always close-set the em dash—no spaces on either side, whereas ChatGPT typically opens them (AP style).

piskov•32m ago
Just use a typography layout with a separate layer, e.g. “right alt” plus “-” for an em dash.

Russians have been using this for at least 15 years:

https://ilyabirman.ru/typography-layout/

ksherlock•11m ago
On the mac you just type — for an em dash or – for an en dash.
thousand_nights•53m ago
the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject

what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)

brookst•13m ago
Even old school chemical films were the same thing, just different domain.

There is no such thing as “unprocessed” data, at least that we can perceive.

shagie•4m ago
> A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data.

From the classic file format "ppm" (portable pixel map), there's the ppm-to-pgm (portable grayscale map) converter:

https://linux.die.net/man/1/ppmtopgm

    The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
You'll note the relatively high value of green there, making up nearly 60% of the luminosity of the resulting grayscale image.

I also love the quote in there...

   Quote

   Cold-hearted orb that rules the night
   Removes the colors from our sight
   Red is gray, and yellow white
   But we decide which is right
   And which is a quantization error.
(context for the original - https://www.youtube.com/watch?v=VNC54BKv3mc )
emodendroket•1h ago
This is actually really useful. A lot of people demand an "unprocessed" photo but don't understand what they're actually asking for.
XCSme•1h ago
I am confused by the color filter step.

Is the output produced by the sensor RGB or a single value per pixel?

ranger207•1h ago
It's a single value per pixel, but each pixel has a different color filter in front of it, so effectively each pixel records only one of R, G, or B.
XCSme•1h ago
So, for a 3x3 image, the input data would be 9 values like:

   R G B
   B R G
   G B R

?
card_zero•1h ago
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
nomel•51m ago
And this is important because our perception is more sensitive to luminance changes than to color, and since our eyes are most sensitive to green, luminance mostly comes from green. So: higher perceived spatial resolution by using more green [1]. This is also why JPEG stores its chroma (red/blue difference) channels at lower resolution, and why modern OLEDs usually use a PenTile layout, with only green at full resolution [2].

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family

card_zero•41m ago
Funny that subpixels and camera sensors aren't using the same layouts.
userbinator•37m ago
Pentile displays are acceptable for photos and videos, but look really horrible displaying text and fine detail --- which looks almost like what you'd see on an old triad-shadow-mask colour CRT.
jeeyoungk•1h ago
If you want "3x3 colored image", you would need 6x6 of the bayer filter pixels.

Each RGB pixel would be 2x2 grid of

``` G R B G ```

So G appears twice as many as other colors (this is mostly the same for both the screen and sensor technology).

There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans have different layout, for example).

Lanzaa•52m ago
This depends on the camera and the sensor's Bayer filter [0]. For example, Quad Bayer uses a 4x4 pattern like:

    G G R R
    G G R R
    B B G G
    B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter
steveBK123•1h ago
In its most raw form, a camera sensor only sees illumination, not color.

In front of the sensor is a bayer filter which results in each physical pixel seeing illumination filtered R G or B.

From there the software onboard the camera or in your RAW converter does interpolation to create RGB values at each pixel. For example if the local pixel is R filtered, it then interpolates its G & B values from nearby pixels of that filter.

https://en.wikipedia.org/wiki/Bayer_filter

There are alternatives such as what Fuji does with its X-trans sensor filter.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

Another alternative is Foveon (owned by Sigma now) which makes full color pixel sensors but they have not kept up with state of the art.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This is also why Leica B&W sensor cameras have higher apparent sharpness & ISO sensitivity than the related color sensor models, because there is no filter in front and no software interpolation happening.
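
A very naive sketch of that interpolation step (my own code, assuming an RGGB layout and plain bilinear weights; real pipelines use edge-aware algorithms and per-camera colour matrices on top of this):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_rggb(mosaic):
        # Split the single-value-per-pixel mosaic into sparse R, G, B planes
        h, w = mosaic.shape
        r, g, b = (np.zeros((h, w)) for _ in range(3))
        r[0::2, 0::2] = mosaic[0::2, 0::2]       # R sites
        g[0::2, 1::2] = mosaic[0::2, 1::2]       # G sites (two per 2x2 block)
        g[1::2, 0::2] = mosaic[1::2, 0::2]
        b[1::2, 1::2] = mosaic[1::2, 1::2]       # B sites

        # Bilinear interpolation kernels fill each gap from its neighbours
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])

    mosaic = np.random.default_rng(4).random((8, 8))     # pretend sensor readout
    rgb = demosaic_rggb(mosaic)                          # (8, 8, 3) full-colour image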

XCSme•1h ago
What about taking 3 photos while quickly changing the filter (e.g. filters are something like quantum dots that can be turned on/off)?
itishappy•1h ago
> What about taking 3 photos while quickly changing the filter

Works great. Most astro shots are taken using a monochrome sensor and filter wheel.

> filters are something like quantum dots that can be turned on/off

If anyone has this tech, plz let me know! Maybe an etalon?

https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...

XCSme•1h ago
> If anyone has this tech, plz let me know!

I have no idea, it was my first thought when I thought of modern color filters.

card_zero•55m ago
That's how the earliest color photography worked. "Making color separations by reloading the camera and changing the filter between exposures was inconvenient", notes Wikipedia.
to11mtm•39m ago
I think they are both more asking about 'per pixel color filters'; that is, something like a sensor filter/glass where the color separators could change (at least 'per-line') fast enough to get a proper readout of the color information.

AKA imagine a camera with R/G/B filters being quickly rotated out for 3 exposures, then imagine it again but the technology is integrated right into the sensor (and, ideally, the sensor and switching mechanism is fast enough to read out with rolling shutter competitive with modern ILCs)

MarkusWandel•1h ago
Works for static images, but if there's motion the "changing the filters" part is never fast enough; there will always be colour fringing somewhere.

Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos have much higher demands here.

lidavidm•41m ago
Olympus and other cameras can do this with "pixel shift": it uses the stabilization mechanism to quickly move the sensor by 1 pixel.

https://en.wikipedia.org/wiki/Pixel_shift

EDIT: Sigma also has "Foveon" sensors that do not have the filter and instead stack multiple sensor layers (for different wavelengths) at each pixel.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

stefan_•1h ago
B&W sensors are generally more sensitive than their color versions, as all filters (going back to signal processing..) attenuate the signal.
wtallis•1h ago
The sensor outputs a single value per pixel. A later processing step is needed to interpret that data given knowledge about the color filter (usually Bayer pattern) in front of the sensor.
i-am-gizm0•1h ago
The raw sensor output is a single value per sensor pixel, each of which is behind a red, green, or blue color filter. So to get a usable image (where each pixel has a value for all three colors), we have to somehow condense the values from some number of these sensor pixels. This is the "Debayering" process.
uolmir•1h ago
This is a great write up. It's also weirdly similar to a video I happened upon yesterday playing around with raw Hubble imagery: https://www.youtube.com/watch?v=1gBXSQCWdSI

He takes a few minutes to get to the punch line. Feel free to skip ahead to around 5:30.

shepherdjerred•1h ago
Wow this is amazing. What a good and simple explanation!
Forgeties79•1h ago
For those who are curious, this is basically what we do when we color grade in video production but taken to its most extreme. Or rather, stripped down to the most fundamental level. Lots of ways to describe it.

Generally we shoot “flat” (there are so many caveats to this, but I don’t feel like getting bogged down in all of it; if you plan on getting down and dirty with colors and really grading, you generally shoot flat). The image that we hand over to DIT/editing can be borderline grayscale in its appearance: the colors are so muted and the dynamic range is so wide that you basically have a washed-out image. The reason for this is that you then have the freedom to “push” the color and look in almost any direction, versus if you have a very saturated, high-contrast image, you are more “locked” into that look. This matters more and more when you are using a compressed codec rather than something with an incredibly high bitrate or a raw codec, which is a whole other world and one I am also doing a bit of a disservice to by oversimplifying.

Though this being HN it is incredibly likely I am telling few to no people anything new here lol

nospice•1h ago
"Flat" is a bit of a misnomer in this context. It's not flat, it's actually a logarithmic ("log profile") representation of data computed by the camera to allow a wider dynamic range to be squeezed into traditional video formats.

It's sort of the opposite of what's going on with photography, where you have a dedicated "raw" format with linear readings from the sensor. Without these formats, someone would probably have invented "log JPEG" or something like that to preserve more data in highlights and in the shadows.
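
A toy version of a log encode (my own sketch, not any vendor's actual log profile): squeeze linear scene values well above 1.0 into a 0..1 signal that can be decoded again for grading.

    import numpy as np

    X_MAX = 8.0                                    # assume scene light up to 8x "white"

    def log_encode(x):
        return np.log1p(x) / np.log1p(X_MAX)       # toy curve, not a real camera LUT

    def log_decode(y):
        return np.expm1(y * np.log1p(X_MAX))

    scene = np.array([0.01, 0.1, 1.0, 4.0, 8.0])   # linear scene light
    encoded = log_encode(scene)                    # everything now fits in 0..1
    print(encoded)
    print(log_decode(encoded))                     # round-trips back to the input
    # Displayed as-is the encoded footage looks low-contrast ("flat"), but the
    # highlight and shadow detail is still there for the colourist to pull out.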

mvdtnz•1h ago
The article keeps using the acronym "ADC" without defining it.
benatkin•1h ago
There are also no citations, and it has this phrase: "This website is not licensed for ML/LLM training or content creation." Yeah right; that's like the privacy notice posts people make to Facebook from time to time that contradict the terms of service: https://knowyourmeme.com/memes/facebook-privacy-notices
packetslave•42m ago
Right-click, "Search Google for 'ADC'", takes much less time than making this useless comment.

https://en.wikipedia.org/wiki/Analog-to-digital_converter

to11mtm•1h ago
OK now do Fuji Super CCD (where for reasons unknown the RAW is diagonal [0])

[0] - https://en.wikipedia.org/wiki/Super_CCD#/media/File:Fuji_CCD...

bri3d•54m ago
The reasons aren’t exactly unknown, considering that the sensor is diagonally oriented also?

Processing these does seem like more fun though.

gruez•1h ago
Honestly, I don't think the gamma normalization step really counts as "processing", any more than gzip decompression counts as "processing" for the purposes of a "this is what an unprocessed HTML file looks like" demo. At the end of the day, it's the same information, but encoded differently. Similar arguments can be made for the de-Bayer step. If you ignore these two steps, the "processing" that happens looks far less dramatic.
Waterluvian•1h ago
I studied remote sensing in undergrad and it really helped me grok sensors and signal processing. My favourite mental model revelation to come from it was that what I see isn’t the “ground truth.” It’s a view of a subset of the data. My eyes, my cat’s eyes, my cameras all collect and render different subsets of the data, providing different views of the subject matter.

It gets even wilder when perceiving space and time as additional signal dimensions.

I imagine a sort of absolute reality that is the universe. And we’re all just sensor systems observing tiny bits of it in different and often overlapping ways.

reactordev•1h ago
Maybe it’s just me but I took one look at the unprocessed photo (the first one) and immediately knew it was a skinny Christmas tree.

I’ve been staring at 16-bit HDR greyscale space for so long…

exabrial•1h ago
I love the look of the final product after the manual work (not the one for comparison). Just something very realistic and wholesome about it, not pumped to 10 via AI or Instagram filters.
strogonoff•54m ago
An unprocessed photo does not "look" like anything. It is RGGB pixel values that far exceed any display medium in dynamic range. Fitting them into the tiny dynamic range of screens by strategically throwing away data (inventing a perceptual neutral grey point, etc.) is what actually makes sense of them, and that is the creative task.
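
One minimal example of that kind of strategic throwing-away (my own sketch: a Reinhard-style global curve with an assumed 18% grey target, just one of many possible renderings):

    import numpy as np

    def tone_map(linear, grey=0.18):
        # Decide what counts as middle grey, then compress the rest into < 1.0
        scaled = linear / np.mean(linear) * grey
        return scaled / (1.0 + scaled)

    raw_luminance = np.random.default_rng(5).random((4, 4)) * 100.0   # huge range
    display = tone_map(raw_luminance)              # now fits an SDR display's 0..1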
ChrisMarshallNY•43m ago
That's a cool walkthrough.

I spent a good part of my career, working in image processing.

That first image is pretty much exactly what a raw Bayer format looks like.

jonplackett•42m ago
The matrix step has 90s video game pixel art vibes.
jiggawatts•39m ago
I've been studying machine learning during the xmas break, and as an exercise I started tinkering around with the raw Bayer data from my Nikon camera, throwing it at various architectures to see what I can squeeze out of the sensor.

Something that surprised me is that very little of the computation photography magic that has been developed for mobile phones has been applied to larger DSLRs. Perhaps it's because it's not as desperately needed, or because prior to the current AI madness nobody had sufficient GPU power lying around for such a purpose.

For example, it's a relatively straightforward exercise to feed in "dark" and "flat" frames as extra per-pixel embeddings, which lets the model learn about the specifics of each individual sensor and its associated amplifier. In principle, this could allow not only better denoising, but also stretch the dynamic range a tiny bit by leveraging the less sensitive photosites in highlights and the more sensitive ones in the dark areas.

Similarly, few if any photo editing products do simultaneous debayering and denoising; most do the latter as a step in normal RGB space.

Not to mention multi-frame stacking that compensates for camera motion, etc...

The whole area is "untapped" for full-frame cameras, someone just needs to throw a few server grade GPUs at the problem for a while!
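
For reference, the classic non-ML baseline for the dark/flat-frame idea above looks something like this (my own toy sketch with a simulated sensor):

    import numpy as np

    def calibrate(raw, dark, flat):
        # Subtract the sensor's own dark signal, then divide out per-photosite gain
        gain = np.clip(flat - dark, 1e-6, None)
        return (raw - dark) / gain

    rng = np.random.default_rng(6)
    dark = rng.normal(0.02, 0.005, (8, 8))         # per-pixel dark/bias signal
    gain = 1.0 + rng.normal(0.0, 0.05, (8, 8))     # per-photosite sensitivity (PRNU, vignetting)
    flat = gain + dark                             # what a flat-field exposure records
    scene = rng.random((8, 8))
    raw = scene * gain + dark                      # what the sensor reports

    print(np.max(np.abs(calibrate(raw, dark, flat) - scene)))   # ~0: scene recovered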

yoonwoosik12•30m ago
This is really interesting. I'll be back after reading it.
0xWTF•25m ago
This reminds me of a couple things:

== Tim's Vermeer ==

Specifically Tim's quote "There's also this modern idea that art and technology must never meet - you know, you go to school for technology or you go to school for art, but never for both... And in the Golden Age, they were one and the same person."

https://en.wikipedia.org/wiki/Tim%27s_Vermeer

https://www.imdb.com/title/tt3089388/quotes/?item=qt2312040

== John Lind's The Science of Photography ==

Best explanation I ever read on the science of photography https://johnlind.tripod.com/science/scienceframe.html

== Bob Atkins ==

Bob used to have some incredible articles on the science of photography that were linked from photo.net back when Philip Greenspun owned and operated it. A detailed explanation of digital sensor fundamentals (e.g. why bigger wells are inherently better) particularly sticks in my mind. They're still online (bookmarked now!)

https://www.bobatkins.com/photography/digital/size_matters.h...

MarkusWandel•17m ago
But does applying the same transfer function to each pixel (of a given colour anyway) count as "processing"?

What bothers me as an old-school photographer is this. When you really pushed it with film (e.g. push-processing 400 ISO B&W film to 1600 ISO, and even then maybe underexposing at the enlargement step) you got nasty grain. But that was uniform "noise" all over the picture. Nowadays, noise reduction is impressive, but at the cost of sometimes changing the picture. For example, with the IP cameras I have, sometimes when I come home on the bike part of the wheel is missing, deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.

Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.