frontpage.

In a Skyscraper City, They Fix Cobblestone Streets by Hand

https://www.nytimes.com/2025/11/08/nyregion/nyc-cobblestone-streets.html
1•bookofjoe•4m ago•1 comments

The 'Toy Story' You Remember

https://animationobsessive.substack.com/p/the-toy-story-you-remember
1•ani_obsessive•7m ago•0 comments

Paramount Cuts 1,600 More Jobs as Part of Plan to Save $3B

https://www.bloomberg.com/news/videos/2025-11-10/paramount-cuts-1-600-more-jobs-in-cost-cutting-m...
2•mgh2•9m ago•0 comments

Recessions have become ultra-rare. That is storing up trouble

https://www.economist.com/finance-and-economics/2025/11/10/recessions-have-become-ultra-rare-that...
3•andsoitis•10m ago•0 comments

Happy 30th Birthday Task Manager

https://www.youtube.com/watch?v=yQykvrAR_po
2•quizme2000•12m ago•1 comments

Universal Basic Income in an AGI Future

https://substack.com/home/post/p-178560893
1•DalasNoin•13m ago•0 comments

Space Dj

https://magenta.withgoogle.com/spacedj-announce
1•frmssmd•15m ago•0 comments

The Definitive Classic Mac Pro (2006-2012) Upgrade Guide

https://blog.greggant.com/posts/2018/05/07/definitive-mac-pro-upgrade-guide.html
1•ibobev•18m ago•0 comments

Natural Language, Semantic Analysis, and Interactive Fiction (2006) [pdf]

https://worrydream.com/refs/Nelson_G_2006_-_Natural_Language,_Semantic_Analysis_and_Interactive_F...
1•vinhnx•22m ago•0 comments

Show HN: Data Modeling Ancient Chinese Logic (Bazi/Ziwei Doushu) with AI

https://suanmingzhun.com
1•Ethancurly5246•24m ago•0 comments

Precision Spindle Metrology Pt.1: Fundamental Concepts [video]

https://www.youtube.com/watch?v=gt2gK-oxy5s
1•pillars•30m ago•1 comments

State of Crypto

https://stateofcrypto.a16zcrypto.com/
1•gmays•30m ago•0 comments

Branches influence the performance of your code and what can you do about it

https://johnnysswlab.com/how-branches-influence-the-performance-of-your-code-and-what-can-you-do-...
2•vinhnx•33m ago•0 comments

Lloyd's Open Form

https://en.wikipedia.org/wiki/Lloyd%27s_Open_Form
2•thunderbong•35m ago•0 comments

Is Fast Charging Killing the Battery? A 2-Year Test on 40 Phones [video]

https://www.youtube.com/watch?v=kLS5Cg_yNdM
1•zdw•39m ago•0 comments

A Couple of Cool Neurotech Companies

https://thelightcone.substack.com/p/a-couple-of-cool-neurotech-companies
1•bci12333•41m ago•0 comments

We built a black box X-Ray for AI Agents

https://devhunt.org/tool/agent-compass-by-future-agi
1•nikhilpareek13•42m ago•0 comments

Virginia Teen Narrowly Defeats His Former Civics Teacher in County Election

https://www.nytimes.com/2025/11/07/us/politics/surry-county-virginia-supervisor-election.html
10•zdw•43m ago•1 comments

Dioxus 0.7: User interfaces in Rust that run anywhere

https://github.com/DioxusLabs/dioxus/releases/tag/v0.7.0
1•petralithic•44m ago•0 comments

Aussie Engineers, Get to the States

https://thundergolfer.com/blog/get-to-the-states
1•steveharrison•44m ago•2 comments

Dundee and US surgeons achieve world-first remote stroke surgery on a human body

https://www.bbc.com/news/articles/cjw983pvz6lo
2•1659447091•47m ago•0 comments

Ask HN: How should the UK Post Office problem be solved?

https://www.bbc.co.uk/news/articles/cz6n2v7ywgeo
2•IndySun•50m ago•1 comments

My Reporting on the Columbia Protests Led to My Deportation

1•computersuck•54m ago•0 comments

iPhone Air Sales Are So Bad That Apple's Delaying the Next-Generation Version

https://www.macrumors.com/2025/11/10/next-generation-iphone-air-delayed/
4•mgh2•57m ago•1 comments

Show HN: Typesafe async friendly unopinionated enhancements to SQLAlchemy Core

https://github.com/sayanarijit/sqla-fancy-core
1•sayanarijit•59m ago•0 comments

Grammars Written for Antlr v4

https://github.com/antlr/grammars-v4
1•peter_d_sherman•1h ago•0 comments

AI is all about inference now

https://www.infoworld.com/article/4087007/ai-is-all-about-inference-now.html
3•tanelpoder•1h ago•0 comments

AI's bubble just entered a new phase. This one's debt-fuelled

https://www.afr.com/chanticleer/184b-in-seven-weeks-the-other-ai-surge-investors-must-watch-20251...
3•zerosizedweasle•1h ago•1 comments

Too Good to Be Bad: On the Failure of LLMs to Role-Play Villains [pdf]

https://arxiv.org/abs/2511.04962
1•SerCe•1h ago•0 comments

Rising Prevalence of Sleep Apnoea During Nighttime Heatwaves Across Europe

https://publications.ersnet.org/content/erj/early/2025/09/28/1399300301631-2025
1•PaulHoule•1h ago•0 comments

A Pixel Is Not a Little Square (1995) [pdf]

http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
30•justin_•6mo ago

Comments

turtleyacht•6mo ago
See also: Pixel is a unit of length and area - https://news.ycombinator.com/item?id=43769478 - 1 hour ago (11 points, 20 comments)
mkl•6mo ago
Never really convinced me; I've done lots of graphics stuff, and I find thinking about pixels as squares works fine. Under magnification, LCD pixels are usually square blocks of rectangular RGB segments (OLED and phone screens can have stranger geometries), and camera sensors are usually made of square(ish) pixel sensor blocks in a Bayer colour array pattern. They're not point sources or point samples; they emit or sense light over an area. Maybe I'm missing something.

Lots of past discussions:

https://news.ycombinator.com/item?id=35076487 74 points, 2 years ago, 69 comments

https://news.ycombinator.com/item?id=26950455 81 points, 4 years ago, 70 comments

https://news.ycombinator.com/item?id=20535984 143 points, 6 years ago, 79 comments

https://news.ycombinator.com/item?id=8614159 118 points, 10 years ago, 64 comments

https://news.ycombinator.com/item?id=1472175 46 points, 15 years ago, 20 comments

codeflo•6mo ago
This classic article is wrong, BTW; there's no nicer way to put it. It applies the wrong theory. It was already wrong in 1995 when monitors were CRTs, and it's way wrong in 2025 in the LCD/OLED era, where pixels are truly discrete.

Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information. But more importantly, this theory works because it matches how your playback hardware functions (for both analog and digital reasons that I won't go into).

Pixels, however, are actually displayed by the hardware as little physical rectangles. Take a magnifying glass and check. Treating them as points is a bad approximation that can only result in unnecessarily blurry images.

I have no idea why this article is quoted so often. Maybe "everybody is doing it wrong" is just a popular article genre. Maybe not everyone is familiar enough with sampling theory to know exactly why it works in audio (to see why those reasons don't apply to graphics).
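
To make the audio case concrete, here's a rough sketch of that lossless upsampling (hypothetical Python, assuming a bandlimited signal sampled above the Nyquist rate):

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 8.0          # sample rate (Hz), well above Nyquist for a 1 Hz tone
f = 1.0           # signal frequency (Hz)
n_samples = 400
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(n_samples)]

def reconstruct(t):
    # Whittaker-Shannon interpolation: treat samples as points, not boxes
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

# At a sample instant, the reconstruction returns the sample itself
print(abs(reconstruct(100 / fs) - samples[100]))   # ~0, up to float error
# Between samples, the (truncated) sinc sum still tracks the true signal
print(abs(reconstruct(25.3) - math.sin(2 * math.pi * f * 25.3)))  # small
```

The truncation of the sinc sum is why real resamplers use windowed kernels, but the point stands: point samples of a bandlimited signal determine the signal everywhere.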

p0w3n3d•6mo ago
I think you're wrong. As far as I know, the pixels on CRTs were rectangular and bled their color onto neighbouring pixels. Graphics created for CRTs looked much better on those screens than they do on LCD/LED displays, and were antialiased by default (i.e. by the display technology).
badmintonbaseba•6mo ago
Pixels displayed on CRT displays are not squares, but they are not infinitely small dots either. They are much less well-defined blobs, that even overlap with each other.

There is also the complication of composite video signals, where you can't treat pixels as linearly independent components.

codeflo•6mo ago
Good PC monitors (especially in 1995) displayed pixels as almost perfect discrete squares. The idea that all CRTs were blurry comes from home consoles on televisions.
p0w3n3d•6mo ago
Let me guess: when game creators were thinking about the resulting visuals, were they assuming that everyone would have this good PC monitor with exact square pixels, or were they taking into account the distortions that would occur on an average CRT?

Also: people playing retro nowadays use shaders to emulate CRT https://youtube.com/shorts/W_ZI3w9CYnI

codeflo•6mo ago
As I said,

>> The idea that all CRTs were blurry comes from home consoles on televisions.

The video you linked shows a PS1 game, which proves my point. It's possible you're too young to remember the big difference between a CRT TV and a CRT monitor. Monitors really did show discrete pixels (which was important for tiny text in applications to be readable), while TVs were blurry messes.

justin_•6mo ago
> Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information.

This signal processing applies to images as well. Resampling is used very often for upscaling, for example. Here's an example: https://en.wikipedia.org/wiki/Lanczos_resampling

> It was already wrong in 1995 when monitors where CRTs, and it's way wrong in 2025 in the LCD/OLED era where pixels are truly discrete.

I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar. In that case, there were render farms doing all sorts of graphics processing before the final image was displayed anywhere.
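
A rough sketch of what that service's inner loop might look like if it treats pixels as point samples (hypothetical Python, 1D for brevity):

```python
def resize_1d(src, out_len):
    # Linear resampling: each output pixel is a point sample of a
    # piecewise-linear reconstruction of the input samples.
    scale = len(src) / out_len
    out = []
    for i in range(out_len):
        # map the output sample position back into source coordinates
        # (the +0.5/-0.5 keeps the image edges, not the centers, aligned)
        x = (i + 0.5) * scale - 0.5
        x = min(max(x, 0.0), len(src) - 1.0)   # clamp at the borders
        lo = int(x)
        hi = min(lo + 1, len(src) - 1)
        frac = x - lo
        out.append(src[lo] * (1 - frac) + src[hi] * frac)
    return out

print(resize_1d([0.0, 1.0], 4))  # [0.0, 0.25, 0.75, 1.0]: a ramp, not two flat blocks
```

Lanczos works the same way, just with a windowed-sinc kernel instead of the triangle kernel.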

jansan•6mo ago
> I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

How about a counterexample: As part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have but to think of pixels as squares with four sides?

jvanderbot•6mo ago
I really think it would be defined by the set of pixels not of that color that border that color, but maybe I'm thinking about this wrong.
thaumasiotes•6mo ago
If your model of the pixels is that they're point samples, they have no edges and there's no way to know what they do or don't border. They're near other pixels, but there could be anything in between.
Someone•6mo ago
> As part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have but to think of pixels as squares with four sides?

I think that’s a bad example. For vector tracing, you want the ability to trace using lines and curves at any angle, not alongside the pixel boundaries, so you want to see the image as a function from ℝ² to RGB space for which you have samples at grid positions. The target then is to find a shape that covers the part of ℝ² satisfying a discriminator function (e.g. “red component at least 0.8, green and blue components at most 0.1”) decently well, balancing the simplicity of the shape (in terms of number of control points or something like that) against the quality of the cover.
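
A rough 1D sketch of that idea (hypothetical Python): reconstruct between the grid samples and solve for where the discriminator's threshold is crossed, which puts the boundary at sub-pixel precision rather than on a pixel edge:

```python
def crossing(samples, threshold):
    # Treat samples as point values of a continuous function and find
    # where linear interpolation between them crosses the threshold.
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        if (a < threshold) != (b < threshold):
            # solve a + t*(b - a) = threshold for t in [0, 1]
            return i + (threshold - a) / (b - a)
    return None

red = [0.95, 0.9, 0.3, 0.1]   # red channel along one scanline
print(crossing(red, 0.8))     # boundary lands between samples 1 and 2
```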

codeflo•6mo ago
> I don't think it has anything to do with display technologies though.

I think your two examples nicely illustrate that it's all about the display technology.

> The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

That entirely depends on how the resizing is done. Usually people choose nearest neighbor in scenarios like that to be faithful to the original 100x100 display, and to keep the images sharp. This treats the pixels as squares, which means the programmer should do so as well.
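
Nearest neighbor is the "little square" model made literal; a rough sketch (hypothetical Python):

```python
def nearest_neighbor_2x(src):
    # Integer 2x upscale: every input pixel becomes a 2-wide block,
    # i.e. each pixel is treated as a solid little square.
    return [src[i // 2] for i in range(2 * len(src))]

print(nearest_neighbor_2x([10, 20, 30]))  # [10, 10, 20, 20, 30, 30]
```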

> Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar.

That's meaningful context. I'm sure that in 1995, Pixar movies were exposed onto analog film before being shown in theatres. I'm almost certain this process didn't preserve sharp pixels, so "pixels aren't squares" was perhaps literally true for this technology.

justin_•6mo ago
> Usually people choose nearest neighbor in scenarios like that to be faithful to the original

Perhaps I should have chosen a higher resolution. AIUI, in many modern systems, such as your OS, it’s usually bilinear or Lanczos resampling.

You say that the resize should be faithful to the “100x100 display”, but we don’t know whether the image came from such a display, from a camera, or was generated by software.

> I'm almost certain this process didn't preserve sharp pixels

Sure, but modern image processing pipelines work the same way. They try to capture the original signal, ideally as a representation of the continuous signal, not just a grid of squares.

I suppose this is different for a “pixel art” situation, where resampling has to be explicitly set to nearest neighbor. Even so, images like that have problems in modern video codecs, which model samples of a continuous signal.

And yes, I am aware that the “pixel” in “pixel art” means a little square :). The terminology being overloaded is what makes these discussions so confusing.

jvanderbot•6mo ago
A pixel is a box that changes color to match the point sample it represents. What's the issue?
codeflo•6mo ago
To quote my post that you reply to, it's

> a bad approximation that can only result in unnecessarily blurry images

captainmuon•6mo ago
Except pixels are little squares. Sure, if you look under a microscope, they have funny shapes, but they are always laid out in a rectangular grid. I've never seen any system where the logical pixels are staggered like a hex grid, for example. No matter how the actual light emitters are arranged, the abstraction offered to the programmer is a rectangular grid.

If you light up pixels in a row, you get a line - a long thin rectangle - and not a chain of blobs. If you light them up diagonally, you get a jagged line. For me that is proof that they are squares - or at least close enough to squares. Heck, even on old displays that don't have a square pixel ratio, they are squished squares ;-). And you have to treat them like little squares if you want to understand antialiasing, or why you sometimes have to add (0.5, 0.5) to get sharp lines.

(And a counterpoint: The signal-theoretical view that they are point samples is useful if you want to understand the role of gamma in anti-aliasing, or if you want to do things like superresolution with RGB-sub-pixels.)

mkl•6mo ago
There are some screen types with variations on the geometry, like some sub-pixels shared between logical pixels. E.g. Samsung's diamond pixel https://global.samsungdisplay.com/29043/, Apple watch https://imgur.com/GkKjjwy. They are still programmed as squares, but the light isn't emitted exactly like that (still coming from discrete areas, not points).

See also https://www.reddit.com/r/apple/comments/9fp1ty/did_you_ever_....

ChrisMarshallNY•6mo ago
I don't remember the manufacturer (may have been Fuji[0]), but someone made a camera sensor that was laid out around a 45-degree angle.

[0] https://en.wikipedia.org/wiki/Super_CCD

IAmBroom•6mo ago
There's also a hex-pattern camera sensor out there. It claimed to have better resolution without increased chip-printing cost (N pixels/area produced better effective visual resolution), but never took off.
GuB-42•6mo ago
Things becomes less clear when you take supersampling into account. Samples may be taken in a quincunx pattern for instance.

But these samples are usually called fragments, not pixels. They turn into little square pixels later in the pipeline, so yeah, I guess that pixels really are little squares, or maybe little rectangles.

roflmaostc•6mo ago
Mathematically speaking the paper is correct.

I think it actually depends what you define as "pixel". Sure, the pixel on your screen emits light on a tiny square into space. And sure, a sensor pixel measures the intensity on a tiny square.

But let's say I calculate something like:

  # 11 samples at 0, 0.1, ..., 1
  x = range(0, 1, 11)
  # evaluate the sin function at each point
  y = sin.(x)

Then each pixel (or entry in the array) is not a tiny square. It represents the value of sin at that specific location. A real pixelated detector would instead have integrated sin over each bin, `y[i] = ∫_{x_i}^{x_i + 0.1} sin(x) dx`, which is entirely different from the pointwise evaluation above.

So for me that's the main difference to understand.
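
The difference is easy to check numerically; a rough sketch in Python (rather than the Julia above):

```python
import math

u, h = 0.5, 0.1
point = math.sin(u)                      # point sample at x = u
# exact bin average: (1/h) * integral of sin from u to u+h
binned = (math.cos(u) - math.cos(u + h)) / h
print(point, binned)                     # noticeably different values
# to first order, the bin average matches the *midpoint* sample instead
print(abs(binned - math.sin(u + h / 2)))
```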

gitroom•6mo ago
i think i've argued with friends over this exact thing - like, once you zoom in, does it even matter what shape the pixel is, or is it just about how we use it? do you think treating pixels as points or little squares actually changes decisions when making art or code?
mordae•6mo ago
There's pixels and pixels.

Screen pixels are (nowadays) usually three vertical rectangles that occupy a square spot on the grid that forms the screen. This is sometimes exploited for sub-pixel font smoothing purposes.

Digital photography pixels are reconstructed from sensors that each perceive a cone of incoming light in a certain frequency band, arranged in a Bayer grid.

Rendered 3D scene pixels are point samples unless they approximate cones via sampling neighborhood of the pixel center.

In any case, Nyquist will tear your head off and spit into your neck hole as soon as you come close to any kind of pixel. Square or point.

GrantMoyer•6mo ago
People get caught up on display technology, but how pixels are displayed on a screen is irrelevant. From a typical viewing distance and with imperfect human lenses, a point impulse and a little square are barely distinguishable. The important part is that thinking about pixels as little squares instead of points makes all the math you do with them harder, for no benefit.

Consider the Direct3D rasterization rules[1], which offset each sample point by 0.5 on each axis to sample "at the pixel center". Why are the "pixel centers" even at half-integer coordinates in the first place? Because if you think of pixels as little squares, it's tempting to align the "corners" with integer coordinates like graph paper. If instead the specifiers had thought of pixels as a lattice of sample points, it would have been natural to align the sample points with integer coordinates. "Little square" pixels resulted in an unneeded complication to sampling, an extra translation by a fractional distance, so now every use of the API for pixel-perfect rendering must apply the inverse transform.

[1]: https://learn.microsoft.com/en-us/windows/win32/direct3d11/d...
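
The two conventions, as a quick sketch (hypothetical Python, not any real API):

```python
def centers_square_convention(width):
    # "little squares": corners on integers, so centers land on halves
    return [i + 0.5 for i in range(width)]

def centers_point_convention(width):
    # "point samples": the samples themselves sit on integers
    return [float(i) for i in range(width)]

# pixel-perfect drawing under the square convention needs the inverse
# half-pixel translation described above:
offset = centers_square_convention(4)[0] - centers_point_convention(4)[0]
print(offset)  # 0.5
```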

leguminous•6mo ago
The half pixel offset makes sense, though. If you have two textures, you want the edges to align, not the centers of the pixels.

See, for example: https://bartwronski.com/2021/02/15/bilinear-down-upsampling-...

Implementations of resizing based on aligning pixel centers resulted in slight shifts, which caused a lot of trouble.
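
That edge-aligned mapping can be written down directly; a rough sketch (hypothetical Python) showing that the half-pixel convention maps the image edges onto each other exactly, for any pair of sizes:

```python
def src_coord(out_i, out_len, src_len):
    # half-pixel convention: aligns image *edges* across resolutions
    return (out_i + 0.5) * src_len / out_len - 0.5

# for a 100 -> 200 upscale: the left edge (half a pixel before sample 0)
# maps to the left edge, and the right edge to the right edge
print(src_coord(-0.5, 200, 100))    # -0.5
print(src_coord(199.5, 200, 100))   # 99.5
```

With center alignment (out_i * (src_len - 1) / (out_len - 1)) the centers match but the edges don't, which is exactly the slight shift that caused trouble.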

IAmBroom•6mo ago
> People get caught up on display technology, but how pixels are displayed on a screen is irrelevant.

To a user, usually.

To a home entertainment customer, never (even if they wouldn't really notice!).

To an optical engineer like myself, never true.

gomijacogeo•6mo ago
The paper would be a lot less infamous if the title had more accurately been "A Texel is Not a Little Square".