frontpage.

How We Optimize RocksDB in TiKV – Write Batch Optimization

https://medium.com/@siddontang/how-we-optimize-rocksdb-in-tikv-write-batch-optimization-28751a4bdd8b
1•eatonphil•5m ago•0 comments

Prompts.chat/Builder: Prompt Building Suite

https://prompts.chat/builder
1•fka•5m ago•0 comments

Show HN: Deep Code Research – AI surveys 10 similar repos to review yours

1•WindChimeRan•6m ago•0 comments

Silver Thursday

https://en.wikipedia.org/wiki/Silver_Thursday
1•gjvc•7m ago•0 comments

Familial confounding in the associations between maternal health and autism

https://doi.org/10.1038/s41591-024-03479-5
1•rendx•9m ago•0 comments

CLI for internet speed test via Cloudflare

https://github.com/kavehtehrani/cloudflare-speed-cli
2•sashk•9m ago•0 comments

BuyMeACoffee with Crypto Rails

https://cryptocoffee.dev/
1•shrimalmadhur•14m ago•0 comments

Going Paperless

https://www.julianfalk.dev/blog/going-paperless
2•jan0r•16m ago•0 comments

A New Navigation Paradigm

https://www.doc.cc/articles/ai-navigation
1•ggauravr•16m ago•0 comments

Rich Hickey: Thanks AI

https://gist.github.com/richhickey/ea94e3741ff0a4e3af55b9fe6287887f
3•austinbirch•18m ago•0 comments

Skynet Starter Kit: From AI Jailbreak to Remote Takeover of Humanoid Robots [video]

https://www.youtube.com/watch?v=qjA__5-Bybs
1•tiborsaas•18m ago•0 comments

Silver Swan (Automaton)

https://en.wikipedia.org/wiki/Silver_Swan_(automaton)
1•croes•19m ago•0 comments

The Day the LLM Stood Still: A Diary from a World Without AI

https://blog.pytoshka.me/post/the-day-the-llm-stood-still/
1•alexclear•20m ago•0 comments

Is this AI? How can you tell?

https://open.spotify.com/track/5epbOCBHAVNWkvOQDmki3V
1•vinlock•20m ago•0 comments

Lecture on spectral unfolding: Closely related to local noise removal

https://github.com/msuzen/leymosun/blob/main/lectures/spectral_unfolding.ipynb
1•northlondoner•28m ago•1 comments

Video Analysis Site

https://visionanalyze.com
1•greentimer•33m ago•0 comments

Show HN: My app just won best iOS Japanese learning tool of 2025 award

https://skerritt.blog/best-japanese-learning-tools-2025-award-show/
8•wahnfrieden•36m ago•1 comments

Show HN: Kiss – code-complexity feedback for LLM coding agents

https://github.com/dsweet99/kiss
2•dspub99•39m ago•0 comments

Playing with Turmites: better than crypto/rand?

https://blog.vrypan.net/2025/12/28/playing-with-turmites-better-than-crypto-rand/
2•vrypan•43m ago•1 comments

Apple retires 25 products, ends iconic iPhone SE era

https://news.az/news/apple-retires-25-products-ends-iconic-iphone-se-era
4•doener•43m ago•0 comments

Curator 'shocked' by Melbourne pitch performance in Ashes

https://www.rnz.co.nz/news/sport/582828/curator-shocked-by-melbourne-pitch-performance-in-ashes
2•tigerlily•51m ago•0 comments

ThinkTank, an "idea processor" that launched a religion (of outliners)

https://stonetools.ghost.io/thinktank-dos/
2•ChristopherDrum•57m ago•0 comments

Shields.io Uses the GitHub API

https://shields.io/blog/token-pool
2•angristan•58m ago•0 comments

Show HN: DeviceGPT – AI-powered Android device monitor with real data

1•teamzlab•58m ago•0 comments

The brain decides what to remember with sequential molecular timers

https://medicalxpress.com/news/2025-11-brain-reveals-sequentially-molecular-timers.html
1•PaulHoule•1h ago•0 comments

The Question Nobody Asks

https://aliveness.kunnas.com/articles/the-question-nobody-asks
1•ekns•1h ago•0 comments

Multi-Tenant SaaS's Wildcard TLS: An Overview of DNS-01 Challenges

https://www.skeptrune.com/posts/wildcard-tls-for-multi-tenant-systems/
1•skeptrune•1h ago•0 comments

Fast Cvvdp Implementation in C

https://github.com/halidecx/fcvvdp
2•todsacerdoti•1h ago•0 comments

The Sociology of the Crease

https://www.sebs.website/blog/the-sociology-of-the-crease
2•Incerto•1h ago•0 comments

Show HN: SecureNow – Security Fixes You Can Apply Today

https://www.securenow.dev
1•pelmenibenni•1h ago•0 comments

What an unprocessed photo looks like

https://maurycyz.com/misc/raw_photo/
258•zdw•2h ago

Comments

throw310822•1h ago
Very interesting, pity the author chose such a poor example for the explanation (low, artificial and multicoloured light), making it really hard to understand what the "ground truth" and expected result should be.
delecti•1h ago
I'm not sure I understand your complaint. The "expected result" is either of the last two images (depending on your preference), and one of the main points of the post is to challenge the notion of "ground truth" in the first place.
throw310822•44m ago
Not a complaint, but both the final images have poor contrast, lighting, saturation and colour balance, making them a disappointing target for an explanation of how these elements are produced from raw sensor data.

But anyway, I enjoyed the article.

killingtime74•1h ago
Very cool and detailed
krackers•1h ago
>if the linear data is displayed directly, it will appear much darker then it should be.

This seems more like a limitation of monitors. If you had a very large bit depth, couldn't you just display images in linear light without gamma correction?

AlotOfReading•58m ago
Correction is useful for a bunch of different reasons, not all of them related to monitors. Even ISP pipelines without displays involved will still usually do it to allocate more bits to the highlights/shadows than the relatively distinguishable middle bits. Old CRTs did it because the electron gun had a non-linear response and the gamma curve actually linearized the output. Film processing and logarithmic CMOS sensors do it because the sensing medium has a nonlinear sensitivity to the light level.
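To make the bit-allocation point concrete, here is a minimal Python sketch of the standard sRGB transfer curve (the function names are illustrative, not from the article or any particular ISP):

    import numpy as np

    def linear_to_srgb(x):
        # Encode linear light (0..1) with the sRGB transfer curve (roughly gamma 2.2).
        x = np.clip(x, 0.0, 1.0)
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

    def srgb_to_linear(y):
        # Invert the encoding back to linear light.
        y = np.clip(y, 0.0, 1.0)
        return np.where(y <= 0.04045, y / 12.92, np.power((y + 0.055) / 1.055, 2.4))

    # A dark linear value of 0.05 encodes to roughly 0.25, so an 8-bit file spends
    # far more of its 256 code values on shadows and midtones than a linear
    # encoding would.
    print(linear_to_srgb(np.array([0.05, 0.5, 1.0])))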
dheera•14m ago
If we're talking about a sunset, then we're talking about your monitor shooting out blinding, eye-hurting light wherever the sun is in the image. That wouldn't be very pleasant.
userbinator•1h ago
I think everyone agrees that dynamic range compression and de-Bayering (for sensors which are colour-filtered) are necessary for digital photography, but at the other end of the spectrum is "use AI to recognise objects and hallucinate what they 'should' look like" --- and even though most people would probably say that isn't a real photo anymore, manufacturers seem to be pushing strongly in that direction, raising issues with things like admissibility of evidence.
stavros•1h ago
One thing I've learned while dabbling in photography is that there are no "fake" images, because there are no "real" images. Everything is an interpretation of the data that the camera has to do, making a thousand choices along the way, as this post beautifully demonstrates.

A better discriminator might be global edits vs local edits, with local edits being things like retouching specific parts of the image to make desired changes, and one could argue that local edits are "more fake" than global edits, but it still depends on a thousand factors, most importantly intent.

"Fake" images are images with intent to deceive. By that definition, even an image that came straight out of the camera can be "fake" if it's showing something other than what it's purported to (e.g. a real photo of police violence but with a label saying it's in a different country is a fake photo).

What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied. We should get over that specific editing process, it's no more fake than anything else.

nospice•1h ago
> A better discriminator might be global edits vs local edits,

Even that isn't all that clear-cut. Is noise removal a local edit? It only touches some pixels, but obviously, that's a silly take.

Is automated dust removal still global? The same idea, just a bit more selective. If we let it slide, what about automated skin blemish removal? Depth map + relighting, de-hazing, or fake bokeh? I think that modern image processing techniques really blur the distinction here because many edits that would previously need to be done selectively by hand are now a "global" filter that's a single keypress away.

Intent is the defining factor, as you note, but intent is... often hazy. If you dial down the exposure to make the photo more dramatic / more sinister, you're manipulating emotions too. Yet, that kind of editing is perfectly OK in photojournalism. Adding or removing elements for dramatic effect? Not so much.

card_zero•23m ago
What's this, special pleading for doctored photos?

The only process in the article that involves nearby pixels is combining R, G, and B (and the other G) into one screen pixel. (In principle these could be mapped to subpixels.) Everything fancier than that can reasonably be called some fake cosmetic bullshit.

nospice•13m ago
I honestly don't understand what you're saying here.
card_zero•6m ago
I can't see how to rephrase it. How about this:

Removing dust and blemishes entails looking at more than one pixel at a time.

Nothing in the basic processing described in the article does that.

teeray•59m ago
> "Fake" images are images with intent to deceive

The ones that make the annual rounds up here in New England are those foliage photos with saturation jacked. “Look at how amazing it was!” They’re easy to spot since doing that usually wildly blows out the blues in the photo unless you know enough to selectively pull those back.

dheera•22m ago
Photography is also an art. When painters jack up saturations in their choices of paint colors, people don't bat an eyelid. There's no good reason photographers cannot take that liberty as well, and tone mapping choices are in fact a big part of photographers' expressive medium.

If you want reality, go there in person and stop looking at photos. Viewing imagery is a fundamentally different type of experience.

zmgsabst•6m ago
Sure — but people reasonably distinguish between photos and digital art, with “photo” used to denote the intent to accurately convey rather than artistic expression.

We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.

xgulfie•55m ago
There's an obvious difference between debayering and white balance vs using Photoshop's generative fill
sho_hn•11m ago
Pretending that "these two things are the same, actually" when in fact no, you can separately name and describe them quite clearly, is a favorite pastime of vacuous content on the internet.

Artists, who use these tools with clear vision and intent to achieve specific goals, strangely never have this problem.

imiric•53m ago
I understand what you and the article are saying, but what GP is getting at, and what I agree with, is that there is a difference between a photo that attempts to reproduce what the "average" human sees, and digital processing that augments the image in ways that no human could possibly visualize. Sometimes we create "fake" images to improve clarity, detail, etc., but that's still less "fake" than smoothing skin to remove blemishes, or removing background objects. One is clearly a closer approximation of how we perceive reality than the other.

So there are levels of image processing, and it would be wrong to dump them all in the same category.

userbinator•52m ago
> Everything is an interpretation of the data that the camera has to do

What about this? https://news.ycombinator.com/item?id=35107601

mrandish•21m ago
News agencies like AP have already come up with technical standards and guidelines that define 'acceptable' types and degrees of image processing for professional photojournalism.

You can look it up because it's published on the web, but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping, scaling, etc. filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, per-object AI enhancement of the kind many mobile phones and social media apps apply by default doesn't meet such standards.

mmooss•41m ago
> What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied.

Filters themselves don't make it fake, just like words themselves don't make something a lie. How the filters and words are used, whether they bring us closer to or further from some truth, is what makes the difference.

Photos implicitly convey, usually, 'this is what you would see if you were there'. Obviously filters can help with that, as in the OP, or hurt.

to11mtm•34m ago
Well that's why back in the day (and even still) 'photographer listing their whole kit for every shot' is a thing you sometimes see.

i.e. Camera+Lens+ISO+SS+FStop+FL+TC (If present)+Filter (If present). Add focus distance if being super duper proper.

And some of that is to help provide at least the right information to try to recreate the shot.

barishnamazov•1h ago
I love posts that peel back the abstraction layer of "images." It really highlights that modern photography is just signal processing with better marketing.

A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.

Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
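To make the 4:2:0 analogy concrete, here is a minimal Python sketch of the split JPEG-style pipelines perform: one full-resolution luma channel plus two chroma channels kept at half resolution in each direction (assuming full-range BT.601 coefficients; the names are illustrative):

    import numpy as np

    def rgb_to_ycbcr(rgb):
        # Full-range BT.601 RGB -> YCbCr, the split JPEG uses before subsampling.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
        cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
        return y, cb, cr

    def subsample_420(chan):
        # 4:2:0: halve the chroma resolution in both directions by averaging 2x2 blocks.
        h, w = chan.shape
        return chan[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    rgb = np.random.rand(4, 4, 3)
    y, cb, cr = rgb_to_ycbcr(rgb)
    print(y.shape, subsample_420(cb).shape)  # (4, 4) (2, 2): full-res luma, quarter-res chroma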

delecti•45m ago
I have a related anecdote.

When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.

barishnamazov•32m ago
I don't think Kindle ads were available in my region in 2015 because I don't remember seeing these back then, but you're a lucky one to fix this classic mistake :-)

I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.

reactordev•16m ago
If you really want that old school NTSC look: 0.3R + 0.59G + 0.11B

These are the coefficients I use regularly.
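A small sketch contrasting the naive channel average with the weighted conversions mentioned above (Rec. 709 by default, the NTSC/Rec. 601 weights as an option); the function names are illustrative:

    import numpy as np

    def to_grayscale(rgb, coeffs=(0.2126, 0.7152, 0.0722)):
        # Weighted luminance; defaults to Rec. 709 coefficients.
        # Pass (0.299, 0.587, 0.114) for the old NTSC / Rec. 601 look.
        return rgb @ np.array(coeffs)

    def naive_grayscale(rgb):
        # The "just average the channels" mistake: green is underweighted,
        # blue is overweighted, and the result looks flat.
        return rgb.mean(axis=-1)

    pixel = np.array([[0.1, 0.9, 0.1]])  # a saturated green pixel
    print(naive_grayscale(pixel))        # ~0.37: far too dark
    print(to_grayscale(pixel))           # ~0.67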

dheera•27m ago
This is also why I absolutely hate, hate, hate it when people ask me whether I "edited" a photo or whether a photo is "original", as if trying to explain away nice-looking images as fake.

The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.

to11mtm•11m ago
JPEG with OOC processing is different from JPEG OOPC (out-of-phone-camera) processing. Thank Samsung for forcing the need to differentiate.
bstsb•24m ago
hey, not accusing you of anything (bad assumptions don't lead to a conducive conversation) but did you use AI to write or assist with this comment?

this is totally out of my own self-interest, no problems with its content

ajkjk•17m ago
found the guy who didn't know about em dashes before this year

also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them

reactordev•15m ago
The hatred mostly comes from TTS models not properly pausing for them.

“NO EM DASHES” is common system prompt behavior.

sho_hn•15m ago
Upon inspection, the author's personal website used em dashes in 2023. I hope this helped with your witch hunt.

I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.

thousand_nights•9m ago
the bayer pattern is one of those things that makes me irrationally angry, in the true sense, based on my ignorance of the subject

what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)

emodendroket•1h ago
This is actually really useful. A lot of people demand an "unprocessed" photo but don't understand what they're actually asking for.
XCSme•57m ago
I am confused by the color filter step.

Is the output produced by the sensor RGB or a single value per pixel?

ranger207•48m ago
It's a single value per pixel, but each pixel has a different color filter in front of it, so it's effectively that each pixel is one of R, G, or B
XCSme•46m ago
So, for a 3x3 image, the input data would be 9 values like:

   R G B
   B R G
   G B R

?
card_zero•40m ago
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
nomel•6m ago
And this is important because our perception is more sensitive to luminance changes than to color changes, and since our eyes are most sensitive to green, green dominates luminance. So: higher perceived spatial resolution by using more green [1]. This is also why JPEG stores its color (chroma) channels at lower resolution than luma, and why modern OLEDs usually use a PenTile layout, with only green at full resolution [2].

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family

jeeyoungk•37m ago
If you want a "3x3 colored image", you would need a 6x6 grid of Bayer filter pixels.

Each RGB pixel would be a 2x2 grid of

    G R
    B G

So G appears twice as often as the other colors (this is mostly the same for both screen and sensor technology).

There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans have different layout, for example).

Lanzaa•7m ago
This depends on the camera and the sensor's bayer filter [0]. For example the quad bayer uses a 4x4 like:

    G G R R
    G G R R
    B B G G
    B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter
steveBK123•46m ago
In their most raw form, camera sensors only see illumination, not color.

In front of the sensor is a Bayer filter, which results in each physical pixel seeing illumination filtered through R, G, or B.

From there, the software onboard the camera or in your RAW converter interpolates to create RGB values at each pixel. For example, if the local pixel is R-filtered, its G and B values are interpolated from nearby pixels with those filters.

https://en.wikipedia.org/wiki/Bayer_filter

There are alternatives such as what Fuji does with its X-trans sensor filter.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

Another alternative is Foveon (owned by Sigma now) which makes full color pixel sensors but they have not kept up with state of the art.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This is also why Leica B&W sensor cameras have higher apparent sharpness and ISO sensitivity than the related color sensor models: there is no filter in front and no software interpolation happening.
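For the curious, here is a toy bilinear demosaic in Python, assuming an RGGB layout; it illustrates the "interpolate from nearby pixels" step but is not what any particular camera or raw converter actually does:

    import numpy as np

    def box3(a):
        # Sum of each pixel's 3x3 neighborhood (zero-padded at the borders).
        h, w = a.shape
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    def demosaic_bilinear(raw):
        # One value per sensor pixel in, an RGB triple per pixel out.
        h, w = raw.shape
        r_mask = np.zeros((h, w), bool)
        r_mask[0::2, 0::2] = True              # R on even rows, even columns
        b_mask = np.zeros((h, w), bool)
        b_mask[1::2, 1::2] = True              # B on odd rows, odd columns
        g_mask = ~(r_mask | b_mask)            # G everywhere else
        rgb = np.zeros((h, w, 3))
        for c, mask in enumerate((r_mask, g_mask, b_mask)):
            # Estimate missing values as the average of same-colored neighbors...
            est = box3(np.where(mask, raw, 0.0)) / np.maximum(box3(mask.astype(float)), 1e-9)
            est[mask] = raw[mask]              # ...and keep directly measured values untouched.
            rgb[..., c] = est
        return rgb

    mosaic = np.random.rand(4, 4)              # stand-in for a raw sensor readout
    print(demosaic_bilinear(mosaic).shape)     # (4, 4, 3)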

XCSme•42m ago
What about taking 3 photos while quickly changing the filter (e.g. filters are something like quantum dots that can be turned on/off)?
itishappy•32m ago
> What about taking 3 photos while quickly changing the filter

Works great. Most astro shots are taken using a monochrome sensor and filter wheel.

> filters are something like quantum dots that can be turned on/off

If anyone has this tech, plz let me know! Maybe an etalon?

https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...

XCSme•25m ago
> If anyone has this tech, plz let me know!

I have no idea, it was my first thought when I thought of modern color filters.

card_zero•10m ago
That's how the earliest color photography worked. "Making color separations by reloading the camera and changing the filter between exposures was inconvenient", notes Wikipedia.
MarkusWandel•19m ago
Works for static images, but if there's motion the "changing the filters" part is never fast enough; there will always be colour fringing somewhere.

Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos have much higher demands here.

stefan_•34m ago
B&W sensors are generally more sensitive than their color versions, as all filters (going back to signal processing..) attenuate the signal.
wtallis•45m ago
The sensor outputs a single value per pixel. A later processing step is needed to interpret that data given knowledge about the color filter (usually Bayer pattern) in front of the sensor.
i-am-gizm0•36m ago
The raw sensor output is a single value per sensor pixel, each of which is behind a red, green, or blue color filter. So to get a usable image (where each pixel has a value for all three colors), we have to somehow condense the values from some number of these sensor pixels. This is the "Debayering" process.
uolmir•54m ago
This is a great write up. It's also weirdly similar to a video I happened upon yesterday playing around with raw Hubble imagery: https://www.youtube.com/watch?v=1gBXSQCWdSI

He takes a few minutes to get to the punch line. Feel free to skip ahead to around 5:30.

shepherdjerred•41m ago
Wow this is amazing. What a good and simple explanation!
Forgeties79•41m ago
For those who are curious, this is basically what we do when we color grade in video production but taken to its most extreme. Or rather, stripped down to the most fundamental level. Lots of ways to describe it.

Generally we shoot “flat” (there are so many caveats to this but I don’t feel like getting bogged down in all of it; if you plan on getting down and dirty with colors and really grading, you generally shoot flat). The image that we hand over to DIT/editing can be borderline grayscale in its appearance. The colors are so muted and the dynamic range so wide that you basically have a highly desaturated image. The reason for this is you then have the freedom to “push” the color and look in almost any direction, whereas if you have a very saturated, high-contrast image, you are more “locked” into that look. This matters more and more when you are using a compressed codec and not something with an incredibly high bitrate or a raw codec, which is a whole other world that I am also doing a bit of a disservice to by oversimplifying.

Though this being HN it is incredibly likely I am telling few to no people anything new here lol

nospice•16m ago
"Flat" is a bit of a misnomer in this context. It's not flat, it's actually a logarithmic ("log profile") representation of data computed by the camera to allow a wider dynamic range to be squeezed into traditional video formats.

It's sort of the opposite of what's going on with photography, where you have a dedicated "raw" format with linear readings from the sensor. Without these formats, someone would probably have invented "log JPEG" or something like that to preserve more data in highlights and in the shadows.
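A toy Python illustration of the log-profile idea; this is a made-up curve for the sketch, not any vendor's actual S-Log/V-Log math:

    import numpy as np

    def log_encode(linear, stops=8):
        # Squeeze `stops` stops of linear scene light into the 0..1 range.
        max_linear = 2.0 ** stops
        return np.log2(1.0 + linear * (max_linear - 1.0)) / stops

    def log_decode(encoded, stops=8):
        # Invert the toy curve back to linear light, e.g. before grading.
        max_linear = 2.0 ** stops
        return (2.0 ** (encoded * stops) - 1.0) / (max_linear - 1.0)

    # Mid-grey (0.18 linear) lands around 0.7 rather than near the top of the range,
    # which is why ungraded log footage looks so washed out.
    print(log_encode(np.array([0.001, 0.18, 1.0])))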

mvdtnz•33m ago
The article keeps using the acronym "ADC" without defining it.
benatkin•21m ago
There are also no citations, and it has this phrase "This website is not licensed for ML/LLM training or content creation." Yeah right, that's like the privacy notice posts people make to facebook from time to time that contradict the terms of service https://knowyourmeme.com/memes/facebook-privacy-notices
to11mtm•32m ago
OK now do Fuji Super CCD (where for reasons unknown the RAW is diagonal [0])

[0] - https://en.wikipedia.org/wiki/Super_CCD#/media/File:Fuji_CCD...

bri3d•10m ago
The reasons aren’t exactly unknown, considering that the sensor is diagonally oriented also?

Processing these does seem like more fun though.

gruez•24m ago
Honestly, I don't think the gamma normalization step really counts as "processing", any more than gzip decompression counts as "processing" for the purposes of a "this is what an unprocessed HTML file looks like" demo. At the end of the day, it's the same information, but encoded differently. Similar arguments can be made for the de-Bayer step. If you ignore these two steps, the "processing" that happens looks far less dramatic.
Waterluvian•21m ago
I studied remote sensing in undergrad and it really helped me grok sensors and signal processing. My favourite mental model revelation to come from it was that what I see isn’t the “ground truth.” It’s a view of a subset of the data. My eyes, my cat’s eyes, my cameras all collect and render different subsets of the data, providing different views of the subject matter.

It gets even wilder when perceiving space and time as additional signal dimensions.

I imagine a sort of absolute reality that is the universe. And we’re all just sensor systems observing tiny bits of it in different and often overlapping ways.

reactordev•20m ago
Maybe it’s just me but I took one look at the unprocessed photo (the first one) and immediately knew it was a skinny Christmas tree.

I’ve been staring at 16-bit HDR greyscale space for so long…

exabrial•16m ago
I love the look of the final product after the manual work (not the one for comparison). Just something very realistic and wholesome about it, not pumped to 10 via AI or Instagram filters.
strogonoff•9m ago
An unprocessed photo does not “look” like anything. It is RGGB pixel values that far exceed any display medium in dynamic range. Fitting them into the tiny dynamic range of screens by strategically throwing away data (inventing a perceptual neutral grey point, etc.) is what actually makes sense of them, and that is the creative task.
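As one concrete example of such a strategic reduction, here is a simple global tone-mapping operator in Python (a Reinhard-style curve; the parameter is an arbitrary choice for the sketch, not anyone's canonical pipeline):

    import numpy as np

    def reinhard_tonemap(linear_hdr, white=16.0):
        # Compress scene-referred values (possibly far above 1.0) into 0..1 for display.
        # `white` is the scene luminance that maps to display white.
        x = np.asarray(linear_hdr, dtype=float)
        return x * (1.0 + x / white**2) / (1.0 + x)

    # Shadows are left mostly alone while bright highlights are rolled off smoothly.
    print(reinhard_tonemap(np.array([0.1, 1.0, 4.0, 16.0])))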