
A Pixel Is Not a Little Square (1995) [pdf]

http://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
30•justin_•6mo ago

Comments

turtleyacht•6mo ago
See also: Pixel is a unit of length and area - https://news.ycombinator.com/item?id=43769478 - 1 hour ago (11 points, 20 comments)
mkl•6mo ago
Never really convinced me; I've done lots of graphics stuff, and I find thinking about pixels as squares works fine. Under magnification, LCD pixels are usually square blocks of rectangular RGB segments (OLED and phone screens can be stranger geometry), and camera sensors are usually made of square (ish) pixel sensor blocks in a Bayer colour array pattern. They're not point sources or point samples, they emit or sense light over an area. Maybe I'm missing something.

Lots of past discussions:

https://news.ycombinator.com/item?id=35076487 74 points, 2 years ago, 69 comments

https://news.ycombinator.com/item?id=26950455 81 points, 4 years ago, 70 comments

https://news.ycombinator.com/item?id=20535984 143 points, 6 years ago, 79 comments

https://news.ycombinator.com/item?id=8614159 118 points, 10 years ago, 64 comments

https://news.ycombinator.com/item?id=1472175 46 points, 15 years ago, 20 comments

codeflo•6mo ago
This classic article is wrong, BTW; there's no nicer way to put it. It applies the wrong theory. It was already wrong in 1995 when monitors were CRTs, and it's way wrong in 2025 in the LCD/OLED era, where pixels are truly discrete.

Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information. But more importantly, this theory works because it matches how your playback hardware functions (for both analog and digital reasons that I won't go into).

Pixels, however, are actually displayed by the hardware as little physical rectangles. Take a magnifying glass and check. Treating them as points is a bad approximation that can only result in unnecessarily blurry images.

I have no idea why this article is quoted so often. Maybe "everybody is doing it wrong" is just a popular article genre. Maybe not everyone is familiar enough with sampling theory to know exactly why it works in audio, and to see why those reasons don't apply to graphics.

p0w3n3d•6mo ago
I think you're wrong. To my knowledge, the pixels on a CRT were rectangular and bled their color onto neighbouring pixels. Graphics created for CRTs looked great on those screens, much better than when displayed on LCD/LED, and were antialiased by default (i.e. by the display technology)
badmintonbaseba•6mo ago
Pixels displayed on CRT displays are not squares, but they are not infinitely small dots either. They are much less well-defined blobs that even overlap with each other.

There is also the complication of composite video signals, where you can't treat pixels as linearly independent components.

codeflo•6mo ago
Good PC monitors (especially in 1995) displayed pixels as almost perfect discrete squares. It was home consoles on televisions that gave people the idea that all CRTs were blurry.
p0w3n3d•6mo ago
Let me guess: when game creators were thinking about the final visuals, were they assuming everyone would have one of these good PC monitors with exact square pixels, or were they taking into account the distortions that would occur on an average CRT?

Also: people playing retro nowadays use shaders to emulate CRT https://youtube.com/shorts/W_ZI3w9CYnI

codeflo•6mo ago
As I said,

>> It was home consoles on televisions that gave people the idea that all CRTs were blurry.

The video you linked shows a PS1 game, which proves my point. It's possible you're too young to remember the big difference between a CRT TV and a CRT monitor. Monitors really did show discrete pixels (which was important for tiny text in applications to be readable), while TVs were blurry messes.

justin_•6mo ago
> Audio samples are point samples (usually). This is nice, because there's a whole theory on how to upsample point samples without loss of information.

This signal processing applies to images as well. Resampling is used very often for upscaling, for example. Here's an example: https://en.wikipedia.org/wiki/Lanczos_resampling

> It was already wrong in 1995 when monitors where CRTs, and it's way wrong in 2025 in the LCD/OLED era where pixels are truly discrete.

I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar. In that case, there were render farms doing all sorts of graphics processing before the final image was displayed anywhere.
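The two interpretations being debated here can be made concrete with a small sketch (invented arrays and function names, not anyone's actual pipeline): nearest-neighbor replication treats each pixel as a little square and copies it, while bilinear interpolation treats pixels as point samples and reconstructs values between them.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Upscale by replicating pixels -- treats each pixel as a little square."""
    in_h, in_w = img.shape
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    return img[np.ix_(ys, xs)]

def resize_bilinear(img, out_h, out_w):
    """Upscale by interpolating between samples -- treats pixels as points."""
    in_h, in_w = img.shape
    # map output pixel centers back into source sample coordinates
    ys = (np.arange(out_h) + 0.5) * in_h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * in_w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = (ys - y0).clip(0, 1)[:, None]
    wx = (xs - x0).clip(0, 1)[None, :]
    tl = img[np.ix_(y0, x0)]
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

img = np.array([[0.0, 1.0], [1.0, 0.0]])
print(resize_nearest(img, 4, 4))   # hard-edged 2x2 blocks
print(resize_bilinear(img, 4, 4))  # smooth gradients between the samples
```

Same input, two defensible outputs; which one is "correct" depends entirely on whether you model the pixel as a square or a sample.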

jansan•6mo ago
> I don't think it has anything to do with display technologies though. Imagine this: there is a computer that is dedicated to image processing. It has no display, no CRT, no LCD, nothing. The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

How about a counterexample: As part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have than to think of pixels as squares with four sides?

jvanderbot•6mo ago
I really think it would be defined by the set of pixels not of that color that border that color, but maybe I'm thinking about this wrong.
thaumasiotes•6mo ago
If your model of the pixels is that they're point samples, they have no edges and there's no way to know what they do or don't border. They're near other pixels, but there could be anything in between.
Someone•6mo ago
> As part of a vectorization engine you need to trace the outline of all pixels of the same color in a bitmap. What other choice do you have than to think of pixels as squares with four sides?

I think that’s a bad example. For vector tracing, you want the ability to trace using lines and curves at any angle, not alongside the pixel boundaries, so you want to see the image as a function from ℝ² to RGB space for which you have samples at grid positions. Target then is to find a shape that covers the part of ℝ² that satisfies a discriminator function (e.g. “red component at least 0.8, green and blue components at most 0.1) decently well, balancing the simplicity of the shape (in terms of number of control points or something like that) with the quality of the cover.
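The discriminator idea above can be sketched in a few lines: treat the bitmap as point samples of a function from ℝ² to RGB, reconstruct between samples by bilinear interpolation, and test the reconstructed color at any real coordinate, not just at pixel corners. The names `bilerp` and `is_red` and the tiny two-column image are invented for illustration.

```python
def bilerp(img, x, y):
    """Bilinearly interpolate an RGB image (list of rows of (r, g, b) tuples)
    at real-valued coordinates; samples sit on the integer lattice."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0

    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    top = lerp(img[y0][x0], img[y0][x1], fx)
    bot = lerp(img[y1][x0], img[y1][x1], fx)
    return lerp(top, bot, fy)

def is_red(rgb):
    # the discriminator from the comment: red >= 0.8, green and blue <= 0.1
    r, g, b = rgb
    return r >= 0.8 and g <= 0.1 and b <= 0.1

# two columns: red on the left, blue on the right
img = [[(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)],
       [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]]
print(is_red(bilerp(img, 0.1, 0.5)))  # True: still inside the red region
print(is_red(bilerp(img, 0.9, 0.5)))  # False: mostly blue here
```

A tracer built this way can place the red/blue boundary at a sub-pixel coordinate instead of snapping it to pixel edges.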

codeflo•6mo ago
> I don't think it has anything to do with display technologies though.

I think your two examples nicely illustrate that it's all about the display technology.

> The computer is running a service that is resizing images from 100x100 pixels to 200x200 pixels. Would the programmer of this server be better off thinking in terms of samples or rectangular subdivisions of a display?

That entirely depends on how the resizing is done. Usually people choose nearest neighbor in scenarios like that to be faithful to the original 100x100 display, and to keep the images sharp. This treats the pixels as squares, which means the programmer should do so as well.

> Alvy Ray Smith, the author of this paper, was coming from the background of developing Renderman for Pixar.

That's meaningful context. I'm sure that in 1995, Pixar movies were exposed onto analog film before being shown in theatres. I'm almost certain this process didn't preserve sharp pixels, so "pixels aren't squares" was perhaps literally true for this technology.

justin_•6mo ago
> Usually people choose nearest neighbor in scenarios like that to be faithful to the original

Perhaps I should have chosen a higher resolution. AIUI, in many modern systems, such as your OS, it’s usually bilinear or Lanczos resampling.

You say that the resize should be faithful to the “100x100 display”, but we don’t know whether the image came from such a display, or from a camera, or was generated by software.

> I'm almost certain this process didn't preserve sharp pixels

Sure, but modern image processing pipelines work the same way. They are working to capture the original signal, with a hopeful representation of the continuous signal, not just a grid of squares.

I suppose this is different for a “pixel art” situation, where resampling has to be explicitly set to nearest neighbor. Even so, images like that have problems in modern video codecs, which model samples of a continuous signal.

And yes, I am aware that the “pixel” in “pixel art” means a little square :). The terminology being overloaded is what makes these discussions so confusing.

jvanderbot•6mo ago
A pixel is a box that changes color to match the point sample it represents. What's the issue?
codeflo•6mo ago
To quote my post that you reply to, it's

> a bad approximation that can only result in unnecessarily blurry images

captainmuon•6mo ago
Except pixels are little squares. Sure, if you look under a microscope, they have funny shapes, but they are always laid out in a rectangular grid. I've never seen any system where the logical pixels are staggered like a hex grid, for example. No matter how the actual light emitters are arranged, the abstraction offered to the programmer is a rectangular grid.

If you light up pixels in a row, you get a line - a long thin rectangle - and not a chain of blobs. If you light them up diagonally, you get a jagged line. For me that is proof that they are squares - at least close enough to squares. Heck, even on old displays that don't have a square pixel ratio, they are squished squares ;-). And you have to treat them like little squares if you want to understand antialiasing, or why you sometimes have to add (0.5, 0.5) to get sharp lines.

(And a counterpoint: The signal-theoretical view that they are point samples is useful if you want to understand the role of gamma in anti-aliasing, or if you want to do things like superresolution with RGB-sub-pixels.)

mkl•6mo ago
There are some screen types with variations on the geometry, like some sub-pixels shared between logical pixels. E.g. Samsung's diamond pixel https://global.samsungdisplay.com/29043/, Apple watch https://imgur.com/GkKjjwy. They are still programmed as squares, but the light isn't emitted exactly like that (still coming from discrete areas, not points).

See also https://www.reddit.com/r/apple/comments/9fp1ty/did_you_ever_....

ChrisMarshallNY•6mo ago
I don't remember the manufacturer (may have been Fuji[0]), but someone made a camera sensor that was laid out around a 45-degree angle.

[0] https://en.wikipedia.org/wiki/Super_CCD

IAmBroom•6mo ago
There's also a hex-pattern camera sensor out there. It claimed to have better resolution without increased chip-printing cost (N pixels/area produced better effective visual resolution), but never took off.
GuB-42•6mo ago
Things become less clear when you take supersampling into account. Samples may be taken in a quincunx pattern, for instance.

But these samples are usually called fragments, not pixels. They turn into little square pixels later in the pipeline, so yeah, I guess that pixels really are little squares, or maybe little rectangles.

roflmaostc•6mo ago
Mathematically speaking, the paper is correct.

I think it actually depends on what you define as a "pixel". Sure, the pixel on your screen emits light over a tiny square into space. And sure, a sensor pixel measures the intensity over a tiny square.

But let's say I calculate something like:

  # samples from 0, 0.1, ..., 1 
  x = range(0, 1, 11)
  # evaluate the sin function at each point
  y = sin.(x)
Then each pixel (or entry in the array) is not a tiny square. It represents the value of sin at that specific location. A real pixelated detector would instead have integrated sin over each bin, `y[u] = int_{u}^{u + 0.1} sin(x) dx`, which is entirely different from the pointwise evaluation above.

So for me that's the main difference to understand.
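The distinction described above is easy to check numerically (a small sketch; the bin width of 0.1 follows the comment's example, and the box averages use the exact antiderivative of sin):

```python
import math

# Point samples: evaluate sin at x = 0.0, 0.1, ..., 1.0
xs = [i / 10 for i in range(11)]
point_samples = [math.sin(x) for x in xs]

# "Little square" detector: average sin over each bin [x, x + 0.1],
# using the exact integral (cos(x) - cos(x + 0.1)) / 0.1
width = 0.1
area_samples = [(math.cos(x) - math.cos(x + width)) / width for x in xs[:-1]]

# The two disagree at every bin: a point sample is not a box average
for p, a in zip(point_samples, area_samples):
    print(f"point={p:.5f}  box={a:.5f}  diff={a - p:+.5f}")
```

The box average lags the point sample by roughly half a bin, which is exactly the kind of systematic difference that matters when you resample.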

gitroom•6mo ago
i think i've argued with friends over this exact thing - like, once you zoom in, does it even matter what shape the pixel is, or is it just about how we use it? do you think treating pixels as points or little squares actually changes decisions when making art or code?
mordae•6mo ago
There's pixels and pixels.

Screen pixels are (nowadays) usually three vertical rectangles that occupy a square spot on the grid that forms the screen. This is sometimes exploited for sub-pixel font smoothing purposes.

Digital photography pixels are reconstructed from sensors, arranged in a Bayer grid, that each perceive a cone of incoming light in a certain frequency band.

Rendered 3D scene pixels are point samples unless they approximate cones via sampling neighborhood of the pixel center.

In any case, Nyquist will tear your head off and spit into your neck hole as soon as you come close to any kind of pixel. Square or point.

GrantMoyer•6mo ago
People get caught up on display technology, but how pixels are displayed on a screen is irrelevant. From a typical viewing distance and with imperfect human lenses, a point impulse and a little square are barely distinguishable. The important part is that thinking about pixels as little squares instead of points makes all the math you do with them harder, for no benefit.

Consider the Direct3D rasterization rules[1], which offset each sample point by 0.5 on each axis to sample "at the pixel center". Why are the "pixel centers" even at half-integer coordinates in the first place? Because when you think of pixels as little squares, it's tempting to align the "corners" with integer coordinates like graph paper. If instead the specifiers had thought of pixels as a lattice of sample points, it would have been natural to align the sample points with integer coordinates. "Little square" pixels resulted in an unneeded complication to sampling, an extra translation by a fractional distance, so now every use of the API for pixel-perfect rendering must apply the inverse transform.

[1]: https://learn.microsoft.com/en-us/windows/win32/direct3d11/d...
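The half-pixel bookkeeping described above can be sketched as follows (hypothetical helper names; a simplification of the actual D3D rules):

```python
import math

# "Little square" convention (as in the D3D rules cited above): pixel i is
# the square [i, i+1), and its sample sits at the half-integer center
# i + 0.5. Pixel-perfect callers must undo that offset themselves.
def snap_to_pixel_center(x):
    return math.floor(x) + 0.5

# "Lattice of points" convention: samples sit on the integers, so no
# fractional translation is ever needed.
def snap_to_lattice(x):
    return float(round(x))

print(snap_to_pixel_center(3.0))  # 3.5: the extra half-pixel translation
print(snap_to_lattice(3.0))       # 3.0
```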

leguminous•6mo ago
The half pixel offset makes sense, though. If you have two textures, you want the edges to align, not the centers of the pixels.

See, for example: https://bartwronski.com/2021/02/15/bilinear-down-upsampling-...

Implementations of resizing based on aligning pixel centers resulted in slight shifts, which caused a lot of trouble.
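The two alignment conventions can be written out explicitly (a 1-D sketch with invented function names):

```python
# Two ways to map a destination pixel index back to source coordinates
# when resizing a row of w_src pixels to w_dst pixels:

def src_coord_edge_aligned(i_dst, w_src, w_dst):
    # align the image EDGES: treat pixel i as the square [i, i+1) and map
    # the center of the destination square into source space
    return (i_dst + 0.5) * w_src / w_dst - 0.5

def src_coord_center_aligned(i_dst, w_src, w_dst):
    # align the first and last pixel CENTERS: a pure point-sample view
    return i_dst * (w_src - 1) / (w_dst - 1)

# Doubling 4 -> 8: edge alignment keeps the content centered, while
# center alignment stretches it, so the sample grids drift apart
for i in range(8):
    print(i, src_coord_edge_aligned(i, 4, 8), src_coord_center_aligned(i, 4, 8))
```

The drift between the two grids is the "slight shift" mentioned above, and it compounds when images are repeatedly down- and upsampled.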

IAmBroom•6mo ago
> People get caught up on display technology, but how pixels are displayed on a screen is irrelevant.

To a user, usually.

To a home entertainment customer, never (even if they wouldn't really notice!).

To an optical engineer like myself, never true.

gomijacogeo•6mo ago
The paper would be a lot less infamous if the title had more accurately been "A Texel is Not a Little Square".