
Clang: -Wexperimental-lifetime-safety: Experimental C++ Lifetime Safety Analysis

https://github.com/llvm/llvm-project/commit/3076794e924f
1•matt_d•22s ago•1 comments

Apple's MLX adding CUDA support

https://github.com/ml-explore/mlx/pull/1983
1•nsagent•1m ago•0 comments

Qlass: VQE on glass and other photonic quantum devices

https://github.com/unitaryfoundation/qlass
1•westurner•4m ago•1 comments

Ask HN: What are your favorite coding tools?

2•codingclaws•4m ago•1 comments

PHP License Update

https://wiki.php.net/rfc/php_license_update
2•josephwegner•4m ago•0 comments

Dog Walk: Blender Studio's official game project

https://blenderstudio.itch.io/dogwalk
3•doener•8m ago•0 comments

Coming to ISO C++ 26 Standard: An AI Acceleration Edge

https://thenewstack.io/coming-to-iso-c-26-standard-an-ai-acceleration-edge/
1•olemindgv•9m ago•1 comments

Chronic heat stress facilitates triglyceride biosynthesis in broiler chickens

https://www.nature.com/articles/s41598-025-03439-0
1•PaulHoule•9m ago•0 comments

Prominent EU politician stands up for Stop Killing Games

https://www.pcgamer.com/gaming-industry/a-game-once-sold-belongs-to-the-customer-prominent-eu-politician-stands-up-for-stop-killing-games/
1•doener•12m ago•0 comments

Browser Wars 2.0

https://twitter.com/TenZorroAI/status/1944871327195656609
1•paulo20223•12m ago•0 comments

A universal interface connecting you to today's AI models

https://tenzorro.com/en/models
1•paulo20223•13m ago•0 comments

What Is Cavitation and How Does It Affect Ship Cruising Speed?

https://www.slashgear.com/1910963/ship-cavitation-what-is-it-how-it-affects-speed/
1•Bluestein•13m ago•0 comments

US Government announces $200M Grok contract a week after 'MechaHitler'

https://www.theverge.com/news/706855/grok-mechahitler-xai-defense-department-contract
9•doener•15m ago•3 comments

When blood hits clothes, physics takes over: Researchers fired blood at fabrics

https://www.popsci.com/science/bloodstains-crime-scene-forensic/
1•Bluestein•15m ago•0 comments

Security behind decision to end DoD's satellite data sharing

https://www.theregister.com/2025/07/07/cyber_security_behind_dod_satellite_data_cutoff/
2•rbanffy•17m ago•0 comments

Intros to VC – Help

1•ronakronak•17m ago•0 comments

Masterclass on user experience for garbage collection

https://www.youtube.com/watch?v=dSLe6G3_JmE
1•seinecle•17m ago•0 comments

Seeking AI chat-driven fiction community

1•bgilroy26•18m ago•1 comments

Llmnop, a tiny Rust rewrite of LLMPerf

https://github.com/jpreagan/llmnop
1•jpreagan•21m ago•1 comments

Alternate Reality – Ubuntu with Plasma

https://www.dedoimedo.com/computers/linux-alternate-reality-ubuntu-with-plasma.html
1•fuck_AI•23m ago•0 comments

Review: Of Mice, Mechanisms, and Dementia

https://www.astralcodexten.com/p/your-review-of-mice-mechanisms-and
1•andromaton•24m ago•0 comments

Dietary Mycotoxins: An Overview with Emphasis on Aflatoxicosis in Humans

https://pmc.ncbi.nlm.nih.gov/articles/PMC11598113/
1•pera•25m ago•0 comments

Anthropic, Google, OpenAI and XAI Granted Up to $200M from Defense Department

https://www.cnbc.com/2025/07/14/anthropic-google-openai-xai-granted-up-to-200-million-from-dod.html
20•ChrisArchitect•25m ago•0 comments

A bionic knee integrated into tissue can restore natural movement

https://news.mit.edu/2025/bionic-knee-integrated-into-tissue-can-restore-natural-movement-0710
3•gmays•33m ago•0 comments

Go2klo: Public Toilet Map Worldwide

https://go2klo.com
4•czett•36m ago•3 comments

Goldman Sachs doesn't have to hire a $180k software engineer–meet Devin

https://fortune.com/2025/07/14/goldman-sachs-ai-powered-software-engineer-devin-new-employee-increase-productivity-fears-of-job-replacement/
5•leptoniscool•36m ago•1 comments

Vite vs. Webpack: A Guide to Choosing the Right Bundler

https://jsdevspace.substack.com/p/vite-vs-webpack-a-guide-to-choosing
1•javatuts•36m ago•0 comments

Introduction to the Par Language

https://faiface.github.io/par-lang/
1•4ad•41m ago•0 comments

The Best C++ Library

https://mcyoung.xyz/2025/07/14/best/
2•ingve•43m ago•0 comments

The Sacrifices We Choose to Make

https://michaelnotebook.com/sacrifice/index.html
3•exolymph•43m ago•0 comments

A Pixel Is Not a Little Square (1995) [pdf]

http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
30•justin_•2mo ago

Comments

turtleyacht•2mo ago
See also: Pixel is a unit of length and area - https://news.ycombinator.com/item?id=43769478 - 1 hour ago (11 points, 20 comments)
mkl•2mo ago
Never really convinced me; I've done lots of graphics stuff, and I find thinking about pixels as squares works fine. Under magnification, LCD pixels are usually square blocks of rectangular RGB segments (OLED and phone screens can be stranger geometry), and camera sensors are usually made of square (ish) pixel sensor blocks in a Bayer colour array pattern. They're not point sources or point samples, they emit or sense light over an area. Maybe I'm missing something.

Lots of past discussions:

https://news.ycombinator.com/item?id=35076487 74 points, 2 years ago, 69 comments

https://news.ycombinator.com/item?id=26950455 81 points, 4 years ago, 70 comments

https://news.ycombinator.com/item?id=20535984 143 points, 6 years ago, 79 comments

https://news.ycombinator.com/item?id=8614159 118 points, 10 years ago, 64 comments

https://news.ycombinator.com/item?id=1472175 46 points, 15 years ago, 20 comments

codeflo•2mo ago
This classic article is wrong, BTW; there's no nicer way to put it. It applies the wrong theory. It was already wrong in 1995 when monitors were CRTs, and it's even more wrong in 2025, in the LCD/OLED era, where pixels are truly discrete.

Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information. But more importantly, this theory works because it matches how your playback hardware functions (for both analog and digital reasons that I won't go into).

Pixels, however, are actually displayed by the hardware as little physical rectangles. Take a magnifying glass and check. Treating them as points is a bad approximation that can only result in unnecessarily blurry images.

I have no idea why this article is quoted so often. Maybe "everybody is doing it wrong" is just a popular article genre. Maybe not everyone is familiar enough with sampling theory to know exactly why it works in audio (to see why those reasons don't apply to graphics).

p0w3n3d•2mo ago
I think you're wrong. As far as I know, the pixels on a CRT were rectangular and bled their color onto neighbouring pixels. Graphics created for CRTs looked nice on those screens, had much better visuals than when displayed on LCD/LED, and were antialiased by default (i.e. by the display technology)
badmintonbaseba•2mo ago
Pixels displayed on CRT displays are not squares, but they are not infinitely small dots either. They are much less well-defined blobs, that even overlap with each other.

There is also the complication of composite video signals, where you can't treat pixels as linearly independent components.

codeflo•2mo ago
Good PC monitors (especially in 1995) displayed pixels as almost perfect discrete squares. It was home consoles on televisions where people got the idea that all CRTs were blurry.
p0w3n3d•2mo ago
Let me guess: when game creators were thinking about the final visuals, were they assuming that everyone would have a good PC monitor with exact square pixels, or were they taking into account the distortions that would occur on an average CRT?

Also: people playing retro nowadays use shaders to emulate CRT https://youtube.com/shorts/W_ZI3w9CYnI

codeflo•2mo ago
As I said,

>> It was home consoles on televisions where people get the idea from that all CRTs were blurry.

The video you linked shows a PS1 game, which proves my point. It's possible you're too young to remember the big difference between a CRT TV and a CRT monitor. Monitors really did show discrete pixels (which was important for tiny text in applications to be readable), while TVs were blurry messes.

justin_•2mo ago
> Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information.

This signal processing applies to images as well. Resampling is used very often for upscaling, for example. Here's an example: https://en.wikipedia.org/wiki/Lanczos_resampling

> It was already wrong in 1995 when monitors where CRTs, and it's way wrong in 2025 in the LCD/OLED era where pixels are truly discrete.

I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar. In that case, there were render farms doing all sorts of graphics processing before the final image was displayed anywhere.
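The contrast described above can be sketched in a few lines. This is a hypothetical, minimal 1-D illustration (plain linear interpolation standing in for Lanczos): treating pixels as little squares gives nearest-neighbour copying, while treating them as point samples means reconstructing a continuous signal and resampling it.

```python
def upscale_nearest(row, factor):
    # "Little square" model: each output sample copies the square it lands in.
    return [row[i // factor] for i in range(len(row) * factor)]

def upscale_linear(row, factor):
    # "Point sample" model: reconstruct a continuous signal by linearly
    # interpolating between sample points, then resample it more densely.
    n = len(row)
    out = []
    for j in range(n * factor):
        x = j / factor          # position in original sample coordinates
        i = min(int(x), n - 2)  # left neighbour index, clamped at the edge
        t = min(x - i, 1.0)     # interpolation weight, clamped at the edge
        out.append(row[i] * (1 - t) + row[i + 1] * t)
    return out

row = [0.0, 1.0, 0.0]
print(upscale_nearest(row, 2))  # [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
print(upscale_linear(row, 2))   # [0.0, 0.5, 1.0, 0.5, 0.0, 0.0]
```

A real pipeline would use a windowed sinc such as Lanczos rather than the tent filter above, but the modelling difference between the two views is the same.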

jansan•2mo ago
> I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

How about a counterexample: as part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have than to think of pixels as squares with four sides?

jvanderbot•2mo ago
I really think it would be defined by the set of pixels not of that color that border that color, but maybe I'm thinking about this wrong.
thaumasiotes•2mo ago
If your model of the pixels is that they're point samples, they have no edges and there's no way to know what they do or don't border. They're near other pixels, but there could be anything in between.
Someone•2mo ago
> As part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have than to think of pixels as squares with four sides?

I think that’s a bad example. For vector tracing, you want the ability to trace using lines and curves at any angle, not alongside the pixel boundaries, so you want to see the image as a function from ℝ² to RGB space for which you have samples at grid positions. Target then is to find a shape that covers the part of ℝ² that satisfies a discriminator function (e.g. “red component at least 0.8, green and blue components at most 0.1) decently well, balancing the simplicity of the shape (in terms of number of control points or something like that) with the quality of the cover.

codeflo•2mo ago
> I don't think it has anything to do with display technologies though.

I think your two examples nicely illustrate that it's all about the display technology.

> The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

That entirely depends on how the resizing is done. Usually people choose nearest neighbor in scenarios like that to be faithful to the original 100x100 display, and to keep the images sharp. This treats the pixels as squares, which means the programmer should do so as well.

> Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar.

That's meaningful context. I'm sure that in 1995, Pixar movies were exposed onto analog film before being shown in theatres. I'm almost certain this process didn't preserve sharp pixels, so "pixels aren't squares" was perhaps literally true for this technology.

justin_•2mo ago
> Usually people choose nearest neighbor in scenarios like that to be faithful to the original

Perhaps I should have chosen a higher resolution. AIUI, in many modern systems, such as your OS, it’s usually bilinear or Lanczos resampling.

You say that the resize should be faithful to the “100x100 display”, but we don’t know whether the image came from such a display, came from a camera, or was generated by software.

> I'm almost certain this process didn't preserve sharp pixels

Sure, but modern image processing pipelines work the same way. They are trying to capture the original signal, as a hopefully faithful representation of the continuous signal, not just a grid of squares.

I suppose this is different for a “pixel art” situation, where resampling has to be explicitly set to nearest neighbor. Even so, images like that have problems in modern video codecs, which model samples of a continuous signal.

And yes, I am aware that the “pixel” in “pixel art” means a little square :). The terminology being overloaded is what makes these discussions so confusing.

jvanderbot•2mo ago
A pixel is a box that changes color to match the point sample it represents. What's the issue?
codeflo•2mo ago
To quote my post that you reply to, it's

> a bad approximation that can only result in unnecessarily blurry images

captainmuon•2mo ago
Except pixels are little squares. Sure, if you look under a microscope, they have funny shapes, but they are always laid out in a rectangular grid. I've never seen any system where the logical pixels are staggered like a hex grid, for example. No matter how the actual light emitters are arranged, the abstraction offered to the programmer is a rectangular grid.

If you light up pixels in a row, you get a line - a long thin rectangle - and not a chain of blobs. If you light them up diagonally, you get a jagged line. For me that is proof that they are squares - at least close enough to squares. Heck, even on old displays that don't have a square pixel aspect ratio, they are squished squares ;-). And you have to treat them like little squares if you want to understand antialiasing, or why you sometimes have to add (0.5, 0.5) to get sharp lines.

(And a counterpoint: The signal-theoretical view that they are point samples is useful if you want to understand the role of gamma in anti-aliasing, or if you want to do things like superresolution with RGB-sub-pixels.)
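The jagged-diagonal observation above is easy to reproduce. A toy sketch (the helper name is mine, not from the thread): lighting up the pixels along the diagonal of an integer grid yields a staircase, not a smooth line.

```python
# Hypothetical sketch: light up the pixels along the diagonal of an n-by-n
# integer grid; the result is a jagged staircase, not a smooth line.
def rasterize_diagonal(n):
    grid = [["." for _ in range(n)] for _ in range(n)]
    for i in range(n):
        grid[i][i] = "#"  # the diagonal passes through pixel (i, i)
    return ["".join(row) for row in grid]

for line in rasterize_diagonal(4):
    print(line)
# #...
# .#..
# ..#.
# ...#
```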

mkl•2mo ago
There are some screen types with variations on the geometry, like some sub-pixels shared between logical pixels. E.g. Samsung's diamond pixel https://global.samsungdisplay.com/29043/, Apple watch https://imgur.com/GkKjjwy. They are still programmed as squares, but the light isn't emitted exactly like that (still coming from discrete areas, not points).

See also https://www.reddit.com/r/apple/comments/9fp1ty/did_you_ever_....

ChrisMarshallNY•2mo ago
I don't remember the manufacturer (may have been Fuji[0]), but someone made a camera sensor that was laid out around a 45-degree angle.

[0] https://en.wikipedia.org/wiki/Super_CCD

IAmBroom•2mo ago
There's also a hex-pattern camera sensor out there. It claimed to have better resolution without increased chip-printing cost (N pixels/area produced better effective visual resolution), but never took off.
GuB-42•2mo ago
Things become less clear when you take supersampling into account. Samples may be taken in a quincunx pattern, for instance.

But these samples are usually called fragments, not pixels. They turn into little square pixels later in the pipeline, so yeah, I guess that pixels really are little squares, or maybe little rectangles.

roflmaostc•2mo ago
Mathematically speaking the paper is correct.

I think it actually depends what you define as "pixel". Sure, the pixel on your screen emits light on a tiny square into space. And sure, a sensor pixel measures the intensity on a tiny square.

But let's say I calculate something like:

  # samples from 0, 0.1, ..., 1 
  x = range(0, 1, 11)
  # evaluate the sin function at each point
  y = sin.(x)
Then each pixel (or entry in the array) is not a tiny square. It represents the value of sin at that specific location. A real pixelated detector would instead have integrated sin over each cell, `y[u] = int_{u}^{u + 0.1} sin(x) dx`, which is entirely different from the pointwise evaluation above.

So for me that's the main difference to understand.
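The distinction drawn above can be checked numerically. A small sketch (in Python rather than the Julia-style snippet above), using the closed-form antiderivative of sin to compute the exact cell averages:

```python
import math

# Point-sample model: evaluate sin at the grid positions 0, 0.1, ..., 1.
xs = [i / 10 for i in range(11)]
point_samples = [math.sin(x) for x in xs]

# "Little square" detector model: each cell integrates sin over a 0.1-wide
# interval [u, u + 0.1]; the cell *average* is (cos(u) - cos(u + 0.1)) / 0.1.
cell_averages = [(math.cos(u) - math.cos(u + 0.1)) / 0.1 for u in xs[:-1]]

# The two models disagree: the point sample reads the value at the cell's
# left edge, the integrating detector reads the mean over the whole cell.
print(point_samples[5])   # sin(0.5) ≈ 0.479
print(cell_averages[5])   # mean of sin over [0.5, 0.6] ≈ 0.522
```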

gitroom•2mo ago
i think i've argued with friends over this exact thing - like, once you zoom in, does it even matter what shape the pixel is, or is it just about how we use it? do you think treating pixels as points or little squares actually changes decisions when making art or code?
mordae•2mo ago
There's pixels and pixels.

Screen pixels are (nowadays) usually three vertical rectangles that occupy a square spot on the grid that forms the screen. This is sometimes exploited for sub-pixel font smoothing purposes.

Digital photography pixels are reconstructed from sensors that perceive cone of incoming light of certain frequency band, arranged in a Bayer grid.

Rendered 3D scene pixels are point samples unless they approximate cones via sampling neighborhood of the pixel center.

In any case, Nyquist will tear your head off and spit into your neck hole as soon as you come close to any kind of pixel. Square or point.
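The Nyquist warning above is easy to demonstrate with a toy example (the frequencies here are chosen for illustration): a sine above the Nyquist limit is indistinguishable, at the sample points, from a much lower-frequency one.

```python
import math

# A 9 Hz sine sampled at 10 samples/s (Nyquist limit: 5 Hz) produces the
# exact same samples as a -1 Hz sine, because 9 ≡ -1 (mod 10).
rate = 10
ts = [n / rate for n in range(rate)]
f_high = [math.sin(2 * math.pi * 9 * t) for t in ts]
f_alias = [math.sin(2 * math.pi * -1 * t) for t in ts]

# Maximum pointwise difference is ~0: the samples are indistinguishable.
print(max(abs(a - b) for a, b in zip(f_high, f_alias)))
```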

GrantMoyer•2mo ago
People get caught up on display technology, but how pixels are displayed on a screen is irrelevant. From a typical viewing distance and with imperfect human lenses, a point impulse and a little square are barely distinguishable. The important part is that thinking about pixels as little squares instead of points makes all the math you do with them harder for no benefit.

Consider the Direct3D rasterization rules[1], which offset each sample point by 0.5 on each axis to sample "at the pixel center". Why are the "pixel centers" even at half-integer coordinates in the first place? Because if you think of pixels as little squares, it's tempting to align the "corners" with integer coordinates like graph paper. If instead the specifiers had thought of pixels as a lattice of sample points, it would have been natural to align the sample points with integer coordinates. "Little square" pixels resulted in an unneeded complication to sampling, an extra translation by a fractional distance, so now every use of the API for pixel-perfect rendering must apply the inverse transform.

[1]: https://learn.microsoft.com/en-us/windows/win32/direct3d11/d...
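The two addressing conventions contrasted above can be written out directly (a toy sketch for a 4-pixel row; the variable names are mine):

```python
width = 4

# Direct3D-style "little square" convention: pixel i covers [i, i + 1),
# so its sample point sits at the half-integer center i + 0.5.
square_centers = [i + 0.5 for i in range(width)]

# "Lattice of point samples" convention: sample i sits at integer coordinate i.
point_centers = [float(i) for i in range(width)]

print(square_centers)  # [0.5, 1.5, 2.5, 3.5]
print(point_centers)   # [0.0, 1.0, 2.0, 3.0]
```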

leguminous•2mo ago
The half pixel offset makes sense, though. If you have two textures, you want the edges to align, not the centers of the pixels.

See, for example: https://bartwronski.com/2021/02/15/bilinear-down-upsampling-...

Implementations of resizing based on aligning pixel centers resulted in slight shifts, which caused a lot of trouble.

IAmBroom•2mo ago
> People get caught up on display technology, but how pixels are displayed on a screen is irrelevant.

To a user, usually.

To a home entertainment customer, never (even if they wouldn't really notice!).

To an optical engineer like myself, never true.

gomijacogeo•2mo ago
The paper would be a lot less infamous if the title had more accurately been "A Texel is Not a Little Square".