
Ray Marching Soft Shadows in 2D (2020)

https://www.rykap.com/2020/09/23/distance-fields/
199•memalign•2mo ago

Comments

ravetcofx•2mo ago
It's always impressive to see a live demo in a technical blog post like this, especially one that runs so fast and slick on mobile. Kudos.
keyle•2mo ago
In relative terms, your phone is a superb computer compared to 20 years ago, and it's rendering at a small resolution.
forrestthewoods•2mo ago
> small resolution

My iPhone is 1320 × 2868. That’s more than 1080p. So I would not consider it a “small resolution”!

speedgoose•2mo ago
The iPhone 17 Pro is faster in quite a few benchmarks than the standard HP Intel notebook my company provides if you prefer Windows over macOS.
cubefox•2mo ago
This sounds similar to radiance cascades:

https://mini.gmshaders.com/p/radiance-cascades

https://youtube.com/watch?v=3so7xdZHKxw

s-macke•2mo ago
While the methods are similar in that they both ray-march through the scene to compute per-pixel fluence, the algorithm presented in the blog post scales linearly with the number of light sources, whereas Radiance Cascades can handle an arbitrary distribution of light sources in constant time by exploiting geometric properties of lighting. Radiance Cascades are also not limited to SDFs for smooth shadows.
cubefox•2mo ago
Yeah, and I believe Radiance Cascades accurately calculate the size of the penumbra from the size and distance of the area light, which also means that point light sources, as in reality, always produce hard shadows.

The technique here seems to rely more on eyeballing a plausible penumbra without explicitly considering the size of the light source, though I don't quite understand the core intuition.

flobosg•2mo ago
Probably not that related, but the article reminded me of a shadow casting implementation on the PICO-8: https://medium.com/hackernoon/lighting-by-hand-4-into-the-sh...
opminion•2mo ago
Note that the first image is an interactive demo. Click or touch it. (It's not obvious from the text at the time of writing)
QuantumNomad_•2mo ago
Same goes for a few of the other images too, but not all of them.

The article would probably benefit from having figure captions below each image stating whether the image is interactive or not.

Alternatively, instead of captions, some kind of symbol could be shown in one of the corners of each interactive image. In that case, the intro should also mention that symbol and what it means before the first image that carries it.

esperent•2mo ago
The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).

The demo at the end has bad banding issues (which the article does acknowledge).

It seems like a cheat-ish improvement to both of these would be a blur applied at the end.

kg•2mo ago
AFAIK (I have a similar SDF-based soft shadow system), the reason the noise issues occur in small gaps is that the distance values become small there, so the steps become small and you end up in artifact land. The workaround is to enforce a minimum step size of perhaps 0.5–2.0 pixels (depending on the quality of your SDF) so you don't get trapped like that. The author probably knows this, but their sample code doesn't do it.

Small step sizes are doubly bad because low-spec shader models like WebGL and D3D9 limit the number of loop iterations, so no matter how powerful your GPU is, the step loop will terminate somewhat early and produce results that don't resemble the ground truth.
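A minimal sketch of that minimum-step clamp, assuming a toy 2D scene — the scene, function names, and thresholds below are made up for illustration and are not the article's code:

```python
import math

def circle_sdf(px, py, cx=0.0, cy=0.0, r=1.0):
    # Signed distance from (px, py) to a circle of radius r at (cx, cy).
    return math.hypot(px - cx, py - cy) - r

def march(ox, oy, dx, dy, min_step=0.01, max_steps=64, max_dist=10.0):
    # Sphere-trace from (ox, oy) along unit direction (dx, dy).
    # Clamping each step to min_step keeps the march from stalling in
    # narrow gaps where the SDF value collapses toward zero.
    t = 0.0
    for _ in range(max_steps):
        d = circle_sdf(ox + dx * t, oy + dy * t)
        if d < 1e-4:
            return t  # hit
        t += max(d, min_step)  # the key workaround: never step less than min_step
        if t > max_dist:
            break
    return None  # miss, or loop budget exhausted
```

Without the `max(d, min_step)` clamp, a grazing ray near a surface can burn the whole iteration budget on microscopic steps, which is exactly the artifact-land behaviour described above.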

magicalist•2mo ago
> The demo at the top has some bad noise issues when the light is in small gaps, at least on my phone (which I don't think the article acknowledges).

Right at the end:

> The random jitter ensures that pixels next to each other don’t end up in the same band. This makes the result a little grainy which isn’t great. But I think looks better than banding… This is an aspect of the demo that I’m still not satisfied with, so if you have ideas for how to improve it please tell me!

esperent•2mo ago
Ah I missed that, thanks. More than a little grainy for me but that might be a resolution/pixel ratio thing on my phone that could be tweaked out.
black_knight•2mo ago
It's not only that. There is an inherent aliasing effect with this method that is very apparent when the light is close to a wall.

I implemented a similar algorithm myself, and had the same issue. I did find a solution without that particular aliasing, but with its own tradeoffs. So, I guess I should write it up some time as a blog post.

wongarsu•2mo ago
However, I don't have any issues with the demo in the middle (the hard shadows), so the artifacting has to come from the soft shadow rules or from the "few extra tweaks".

The primary force behind real soft shadows is, of course, that real lights are not point sources. I wonder how much worse the performance would be if, instead of the first two (kinda hacky) soft shadow rules, we replaced the light with maybe five lights representing random points in a small circular light source. You might get too much banding unless you used a much higher number of light samples, but at the very least it would be an interesting comparison to justify the approximation.
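That multi-sample idea can be sketched quickly. Everything below — the scene, the sample count, the thresholds — is a hypothetical illustration of the suggestion, not code from the article:

```python
import math
import random

def circle_sdf(px, py, cx, cy, r):
    return math.hypot(px - cx, py - cy) - r

def visible(px, py, lx, ly, occluder):
    # Hard-shadow test: march from the shaded point toward one light sample.
    dx, dy = lx - px, ly - py
    dist = math.hypot(dx, dy)
    dx, dy = dx / dist, dy / dist
    t = 1e-3
    while t < dist:
        d = occluder(px + dx * t, py + dy * t)
        if d < 1e-4:
            return 0.0  # blocked
        t += max(d, 1e-3)  # minimum step, as discussed upthread
    return 1.0

def area_light_shadow(px, py, lx, ly, light_radius, occluder, samples=5, rng=None):
    # Average hard-shadow tests against random points on a disc-shaped
    # light; the penumbra emerges from partial visibility.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        rad = light_radius * math.sqrt(rng.random())  # uniform over the disc
        total += visible(px, py,
                         lx + rad * math.cos(ang),
                         ly + rad * math.sin(ang),
                         occluder)
    return total / samples
```

With only five samples the result is heavily banded (six possible shadow levels per pixel), which is why jitter or many more samples would be needed for a fair comparison against the article's approximation.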

noduerme•2mo ago
This is truly a very clever series of calculations, a really cool effect, and a great explanation of what went into it. I'll admit that I skimmed over some of the technical details because I want to try it myself from scratch... but the distance map is a great clue.
IsTom•2mo ago
I wonder if it would help if you looked at gradient of the SDF as well – maybe you could walk further safely if you're not moving in the same direction as the gradient?
dahart•2mo ago
I’ve seen a paper about this, I’ll see if I can dig up a link. I believe you’re right and the answer is yes it can help, but it can be complicated to prove what’s safe or not. The gradient tells you about the orientation of the nearest surface, but doesn’t tell you how fast the orientation is changing, so for nonlinear shapes you need to look at higher order derivatives too. Super interesting stuff, but somewhat gets in the way of the pure elegant simplicity of basic ray marching.

edit: here’s one. I’m not sure this is the one I was thinking of, but I think it does validate your hypothesis that you can reduce the number of steps needed by looking at gradients. https://hal.science/hal-02507361/file/lipschitz-author-versi...
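For anyone wanting to experiment with the gradient idea, a central-difference gradient estimate plus a graze-aware step heuristic might look like the sketch below. This is purely a hypothetical toy — the `floor` parameter and the heuristic itself are my own illustration, and as noted above it is only safe for near-planar geometry:

```python
import math

def circle_sdf(x, y):
    # Unit circle at the origin.
    return math.hypot(x, y) - 1.0

def sdf_gradient(sdf, x, y, eps=1e-4):
    # Central-difference estimate of the SDF gradient; for a true
    # distance field this is a unit vector pointing away from the
    # nearest surface.
    gx = (sdf(x + eps, y) - sdf(x - eps, y)) / (2 * eps)
    gy = (sdf(x, y + eps) - sdf(x, y - eps)) / (2 * eps)
    return gx, gy

def heuristic_step(sdf, x, y, dx, dy, floor=0.25):
    # If the march direction grazes the surface (direction nearly
    # perpendicular to the gradient), the true free distance along the
    # ray exceeds the SDF value, so the step can be stretched. Curved
    # surfaces need the higher-order bounds mentioned above, so the
    # floor keeps the stretch conservative.
    d = sdf(x, y)
    gx, gy = sdf_gradient(sdf, x, y)
    facing = abs(dx * gx + dy * gy)
    return d / max(facing, floor)
```

Marching head-on toward the surface leaves the step unchanged, while a grazing ray gets up to a 1/floor stretch.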

IsTom•2mo ago
That's a pretty cool paper, though it does get more elaborate, as you say. In 20/20 hindsight, Lipschitz bounds do make sense.
ionwake•2mo ago
This looks great, but is there no demo link? Maybe I'm blind and missed it?
sigmoid10•2mo ago
They are embedded in the blog. Just click around on the images.
ionwake•2mo ago
oops - thanks
rncode•2mo ago
The fact that this runs butter-smooth in WebGL while my company's "enterprise dashboard" struggles to render 50 divs says everything about how much performance we leave on the table with bad abstractions.
aktuel•2mo ago
This is really cool! If I were to work on it, I would make the light source a bouncing ball or something similar (maybe even a fish or a bird) via some 2D physics next.
IvanK_net•2mo ago
It reminded me of this demo that I made in 2012 (computed in real time by JavaScript on the CPU): https://polyk.ivank.net/?p=demos&d=raycast
jasonjmcghee•2mo ago
None of the demos worked for me on mobile but he has a pinned tweet that demonstrates it

https://x.com/ryanjkaplan/status/1308818844048330757?s=46

dahart•2mo ago
Great looking demo. Someone could use this for a show’s title sequence. There’s something about the combination of soft shadows and r-squared light falloff that always tickles me.

Fun fact - you can use very similar logic to do a single-sample depth of field and/or antialiasing. The core idea, that maybe this blog post doesn’t quite explain, is that you’re tracing a thin cone, and not just a ray. You can track the distance to anything the ray grazes, assume it’s an edge that partially covers your cone (think of dividing a circle into two parts with an arbitrary straight line and keeping whichever part contains the center), and that gives you a way to compute both soft shadows and partial pixel or circle-of-confusion coverage. You can do a lot of really cool effects with such a simple trick!

I searched briefly and found another nice blog post and demo about this: https://blog.42yeah.is/rendering/2023/02/25/dof.html
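The thin-cone trick is commonly implemented by tracking the minimum ratio of the SDF value to the distance travelled during the march. Here is a toy sketch under those assumptions — the scene and constants are made up, and `k` is an assumed penumbra-sharpness parameter, not something from the article:

```python
import math

def circle_sdf(x, y):
    # Unit circle occluder at the origin.
    return math.hypot(x, y) - 1.0

def soft_shadow(ox, oy, dx, dy, light_dist, k=8.0):
    # March toward the light, tracking the smallest ratio of SDF value
    # to distance travelled. A ray that grazes geometry closely (small
    # d at large t) is treated as partially occluding the cone; k
    # controls how sharp the penumbra is.
    res = 1.0
    t = 1e-2
    while t < light_dist:
        d = circle_sdf(ox + dx * t, oy + dy * t)
        if d < 1e-4:
            return 0.0  # fully blocked
        res = min(res, k * d / t)
        t += d
    return max(res, 0.0)
```

A ray straight through the occluder returns 0, a far-away ray returns 1, and a grazing ray lands somewhere in between, giving the soft edge from a single sample per pixel.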

doawoo•2mo ago
Awesome demo page! SDFs are super fun, and usually pretty useful (in addition to being pretty)

I recall a paper published by Valve that showed their approach to using SDFs to pack glyphs into low res textures while still rendering them at high resolution:

https://steamcdn-a.akamaihd.net/apps/valve/2007/SIGGRAPH2007...

i-e-b•2mo ago
A lot of the noise and banding is reduced if you calculate the brightness by area instead of distance

https://jsfiddle.net/i_e_b/9kLro7ns/

user____name•2mo ago
Here is a 2023 presentation on the implementation of screen-space contact shadows for Days Gone by Bend Studio; it uses a clever variation on this basic technique. I'm not sure it scales as well to many light sources, though.

https://youtube.com/watch?v=btWy-BAERoY&t=1929s&pp=2AGJD5ACA...