
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
424•klaussilveira•5h ago•97 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
19•mfiguiere•40m ago•7 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
774•xnx•11h ago•472 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
141•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
134•dmpetrov•6h ago•57 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
41•quibono•4d ago•3 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
68•jnord•3d ago•4 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
246•vecti•8h ago•117 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
313•aktau•12h ago•153 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
177•eljojo•8h ago•124 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
11•matheusalmeida•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
311•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
396•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
322•lstoll•12h ago•233 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
11•kmm•4d ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
109•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
186•i5heu•8h ago•129 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
236•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
975•cdrnsf•15h ago•415 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
144•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
17•gfortaine•3h ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
41•rescrv•13h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
48•ray__•2h ago•11 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
35•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
50•SerCe•2h ago•41 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
108•coloneltcb•2d ago•70 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
39•nwparker•1d ago•10 comments

Perfecting anti-aliasing on signed distance functions

https://blog.pkh.me/p/44-perfecting-anti-aliasing-on-signed-distance-functions.html
111•ibobev•6mo ago

Comments

mxfh•6mo ago
The minute black area on the inner part of the sector gets perceptually boosted with the same ramp width as the outer area, which is effectively how an outline on a shape would behave, not how two shapes with no stroke width would. I would expect the output brightness to scale with the volume/depth under a pixel in the 3D visualization.

Is this intentional? To me this is an opinionated (i.e. artistic-preference) feature-preserving method, not a perfect one.

Btw the common visualization has a source and an author:

https://iquilezles.org/articles/distfunctions2d/
https://www.shadertoy.com/playlist/MXdSRf

Retr0id•6mo ago
> The minute black area on the inner part of the sector

I'm not grasping what you're referring to here.

mxfh•6mo ago
That Pac-Man “mouth” collapses to a constant-width line about halfway in, for the last four examples, for me.

There's some weird mid-length discontinuity in the edge direction for me, not just perceptually.

Maybe I'm misunderstanding something here, or have a different idea of what the goal of the exercise is, but I would expect some pixels to turn near-white near the center, towards that gap sector.

NohatCoder•6mo ago
Reminds me that I found an alternative way of sampling an SDF:

First take a sample at each corner of the pixel to be rendered (s1, s2, s3, s4), then compute:

    coverage = 0.5 + (s1 + s2 + s3 + s4) / (abs(s1) + abs(s2) + abs(s3) + abs(s4)) / 2

It is a good approximation, and it keeps working no matter how you scale and stretch the field.

Relative to the standard method it is expensive to calculate. But for a modern GPU it is still a very light workload to do this once per screen pixel.
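
A minimal Shadertoy-style sketch of the idea (my framing, not NohatCoder's actual code; the sdCircle helper and the positive-inside sign convention are assumptions, since the formula reads naturally when positive values mean inside):

    float sdCircle(vec2 p, float r) { return r - length(p); } // positive inside

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 c = 0.5 * iResolution.xy;
        float r = 0.25 * iResolution.y;
        // One field sample at each corner of this pixel's square.
        float s1 = sdCircle(fragCoord + vec2(-0.5, -0.5) - c, r);
        float s2 = sdCircle(fragCoord + vec2( 0.5, -0.5) - c, r);
        float s3 = sdCircle(fragCoord + vec2(-0.5,  0.5) - c, r);
        float s4 = sdCircle(fragCoord + vec2( 0.5,  0.5) - c, r);
        // Coverage estimate: 1 fully inside, 0 fully outside, ramped on edges.
        float coverage = 0.5 + (s1 + s2 + s3 + s4)
                       / (abs(s1) + abs(s2) + abs(s3) + abs(s4)) / 2.0;
        fragColor = vec4(vec3(coverage), 1.0);
    }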

brookman64k•6mo ago
Would that be done in two passes?

1. Render the image shifted by 0.5 pixels in both directions (plus one additional row & column).
2. Apply the above formula to each pixel (4 reads, 1 write).
NohatCoder•6mo ago
You certainly could imagine doing that, but as long as the initial evaluation is fairly cheap (say a texture lookup), I don't see the extra pass being worth it.
ralferoo•6mo ago
That'd be one way of doing it.

You don't technically need 4 reads per pixel either: for instance, you can process a 7x7 pixel block with a 64-thread group. Each thread does 1 read, then fetches the other 3 values from its neighbours and calculates the average. Then the 7x7 subset of the 8x8 writes its values.

You could integrate this into the first pass too, but then there would be duplication on the overlapped areas of each block. Depending on the complexity of the first pass, it still might be more efficient to do that than an extra pass.

Knowing that it's only the edges that are shared between threads, you could expand the work of each thread to do multiple pixels, so that each thread group covers more pixels, to reduce the number of pixels sampled multiple times. How much you do this by depends on register pressure; it's probably not worth doing more than 4 pixels per thread, but YMMV.
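
A hedged GLSL compute sketch of the 8x8/7x7 layout described above, as I read it (the sdf helper, the image binding, and the positive-inside sign convention are my assumptions): each thread evaluates one corner, the values are shared, and the 7x7 subset writes coverage. Corners on tile edges get re-evaluated by neighbouring groups, which is the duplication mentioned above.

    #version 430
    layout(local_size_x = 8, local_size_y = 8) in;            // 64 threads
    layout(rgba8, binding = 0) uniform writeonly image2D outImage;

    shared float s[8][8];                                     // one corner sample per thread

    float sdf(vec2 p) { return 100.0 - length(p - vec2(128.0)); } // example circle, positive inside

    void main() {
        ivec2 l = ivec2(gl_LocalInvocationID.xy);
        ivec2 tile = ivec2(gl_WorkGroupID.xy) * 7;            // each group outputs a 7x7 pixel tile
        s[l.y][l.x] = sdf(vec2(tile + l));                    // 1 field evaluation per thread
        memoryBarrierShared();
        barrier();
        if (l.x < 7 && l.y < 7) {                             // 7x7 subset of the 8x8 group writes
            float s1 = s[l.y][l.x],     s2 = s[l.y][l.x + 1];
            float s3 = s[l.y + 1][l.x], s4 = s[l.y + 1][l.x + 1];
            float coverage = 0.5 + (s1 + s2 + s3 + s4)
                           / (abs(s1) + abs(s2) + abs(s3) + abs(s4)) / 2.0;
            imageStore(outImage, tile + l, vec4(vec3(coverage), 1.0));
        }
    }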

shiandow•6mo ago
Technically that only requires calculating one extra row and column of pixels.

It is indeed scale-invariant, but I think you can do better: you should have enough to make it invariant to any linear transformation. The calculation will be more complex, but that is nothing compared to evaluating the SDF.

NohatCoder•6mo ago
I do believe that it is already invariant to linear transformations the way you want, i.e. we can evaluate the corners of an arbitrary parallelogram instead of a square and get a similar coverage estimate.
shiandow•6mo ago
Similar maybe, but surely it can't be the same? Just pick some function like f(x,y) = x-1 and start rotating it around your centre pixel: the average (s1+s2+s3+s4) will stay the same (since it's a linear function), but there's no way those absolute values will remain constant.

You should be pretty close though. For a linear function you can just calculate the distance to the zero line, which is invariant to any linear transformation that leaves that line where it is (which is what you want). This is just the function value divided by the norm of the gradient, both of which you can estimate from those 4 points. This gives something like:

    dx = (s2 - s1 + s4 - s3) / 2
    dy = (s3 - s1 + s4 - s2) / 2
    f  = (s1 + s2 + s3 + s4) / 4
    dist = f / sqrt(dx*dx + dy*dy)
NohatCoder•6mo ago
My function approximates coverage of a square pixel, so indeed, if you rotate a line around it at a certain distance, that line will clip the corners at some angles and be clear of the pixel at other angles.
talkingtab•6mo ago
A very good example of SDF thinking, using signed distance fields in shaders. Both shaders and SDFs are new to me and very interesting. Another example of what is being done is MSDF, here: https://github.com/Chlumsky/msdfgen.
mxfh•6mo ago
That's what I was wondering: for sharp, narrow corners like that Pac-Man mouth center, and in font rendering, a composite/multichannel approach is probably the better one for any situation where there is potential for self-intersection of the distance field in concave regions. https://lambdacube3d.wordpress.com/2014/11/12/playing-around...
WithinReason•6mo ago
Instead of OKLAB, isn't it simpler to just use a linear color space and only do gamma correction at the very end?
badlibrarian•6mo ago
Simpler and worse in this application.
yorwba•6mo ago
If the application is simulating a crisp higher-resolution image that was slightly blurred while downscaling to the output resolution, a linear color space is exactly the right choice. Yes, it means bright objects on a dark background will look larger than the reverse, but that's just a fact of human light sensitivity. If the blurring is purely optical, with no pixels in between, a small light in the dark can still create a large halo, whereas there's no corresponding anti-halo for dark spots in a well-lit room.

On the other hand, if you want something that looks roughly the same no matter which color you use, counteracting such oddities of perception is certainly unavoidable.

Const-me•6mo ago
Good article, but I believe it lacks an explanation of what specifically these magical dFdx, dFdy, and fwidth = abs(dFdx) + abs(dFdy) functions are computing.

The following Stack Exchange answer addresses that question rather well: https://gamedev.stackexchange.com/a/130933/3355 As you can see, dFdx and dFdy are not exactly derivatives; they are discrete screen-space approximations of those derivatives, very cheap to compute due to the weird execution model of pixel shaders running on GPU hardware.
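
As a concrete illustration of how such screen-space derivatives typically drive SDF anti-aliasing (a hedged Shadertoy-style sketch; the sdCircle helper, the screen mapping, and the negative-inside convention are my assumptions, not the article's code):

    float sdCircle(vec2 p, float r) { return length(p) - r; } // negative inside

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 p = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
        float d = sdCircle(p, 0.25);
        // fwidth(d) = abs(dFdx(d)) + abs(dFdy(d)): a per-pixel estimate of how
        // fast d changes across the screen, taken from the 2x2 quad neighbours.
        float w = fwidth(d);
        float alpha = clamp(0.5 - d / w, 0.0, 1.0);           // ~1-pixel ramp at the edge
        fragColor = vec4(vec3(alpha), 1.0);
    }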

mananaysiempre•6mo ago
If you’ve ever sampled a texture in a shader, then you know what those are, so it’s probably fair to include them in the prerequisites for the article. But yes, those are intended to be approximate screen-space derivatives of whatever quantity you plug into them, and (I believe) on basically any hardware the approximation in question is a single-sided first difference, because the particular fragment (single-pixel contribution) you’re computing always exists in a 2×2 group of screen-space neighbours executing in lockstep.
david-gpu•6mo ago
This looks a lot like some line anti-aliasing I had to hack together many years ago when a customer started complaining loudly about the lack of hardware support for it. I think I had something like a week to put together three different alternatives for them to pick from, and this was the winner. It looked the best by far.

Years later my boss was telling me how satisfied he was that he could throw any problem in my general direction and it would be gone in no time. There is nothing like the risk of losing his work permit to motivate a young guy to work himself down to a crisp, all for peanuts.

jeremyscanvic•6mo ago
Really interesting write-up! I'm not very familiar with signed distance functions but aliasing is a major part of my PhD and this is really insightful to me!
pcwalton•6mo ago
Mathematically, what you want to do here is to calculate the area of the pixel square (or circle; however you want to approximate it) that the shape covers. In this case a linear ramp actually approximates the true value better than smoothstep does. (I had the derivation worked out at some point; I don't have it handy, unfortunately.) Of course, beauty is in the eye of the beholder, and aesthetically one might prefer smoothstep.
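
To make that concrete under one simplifying assumption of mine (an axis-aligned straight edge, with d the signed distance from the pixel center in pixel units, so the gradient has length 1): the exact covered area of the 1x1 pixel square is then precisely the linear ramp, while smoothstep merely shares its endpoints:

    // Exact area of a unit pixel square cut by an axis-aligned edge at signed
    // distance d from the pixel center (positive d = center outside the shape).
    float coverageLinear(float d) { return clamp(0.5 - d, 0.0, 1.0); }

    // Same endpoints, but S-shaped in between: softer-looking, less exact here.
    float coverageSmooth(float d) { return 1.0 - smoothstep(-0.5, 0.5, d); }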

By the way, since the article mentions ellipse distance approximations, the fastest way to approximate distance to an ellipse is to use a trick I came up with based on a paper from 1994 [1]: https://github.com/servo/webrender/blob/c4bd5b47d8f5cd684334... Unless it's changed recently, this is what Firefox uses for border radius.

[1]: http://mesh.brown.edu/taubin/pdfs/Taubin-tog94.pdf
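
For reference, the first-order trick in that paper amounts to dividing the implicit function's value by the norm of its gradient; a hedged sketch under my own naming (not webrender's actual code):

    // Taubin-style first-order estimate of signed distance to the ellipse
    // (p.x/a)^2 + (p.y/b)^2 = 1, i.e. dist ~= F(p) / |grad F(p)|.
    float ellipseDistApprox(vec2 p, vec2 radii) {
        vec2 q = p / radii;
        float F = dot(q, q) - 1.0;  // implicit function, zero on the ellipse
        vec2 g = 2.0 * q / radii;   // gradient of F with respect to p
        return F / length(g);       // negative inside, positive outside
    }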