frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
460•klaussilveira•6h ago•112 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
800•xnx•12h ago•484 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
154•isitcontent•7h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
149•dmpetrov•7h ago•65 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
24•matheusalmeida•1d ago•0 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
48•quibono•4d ago•5 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
88•jnord•3d ago•10 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
259•vecti•9h ago•122 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
326•aktau•13h ago•157 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
199•eljojo•9h ago•128 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
322•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
405•todsacerdoti•14h ago•218 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
331•lstoll•13h ago•240 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
20•kmm•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
51•phreda4•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
113•vmatsiiako•11h ago•36 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
192•i5heu•9h ago•141 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
150•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
240•surprisetalk•3d ago•31 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
3•romes•4d ago•0 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
990•cdrnsf•16h ago•417 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
23•gfortaine•4h ago•2 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
45•rescrv•14h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
61•ray__•3h ago•18 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
36•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•57 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comment

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
40•nwparker•1d ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
21•MarlonPro•3d ago•4 comments

Implementing a tiny CPU rasterizer (2024)

https://lisyarus.github.io/blog/posts/implementing-a-tiny-cpu-rasterizer-part-1.html
118•PaulHoule•1w ago

Comments

delta_p_delta_x•1w ago
This is a great resource. Some others along the same lines:

TinyRenderer: https://haqr.eu/tinyrenderer/

ScratchAPixel: https://www.scratchapixel.com/index.html

3D Computer Graphics Programming by Pikuma (paid): https://pikuma.com/courses/learn-3d-computer-graphics-progra...

Ray-tracing:

Ray Tracing in One Weekend: https://raytracing.github.io/

Ray Tracing Gems: https://www.realtimerendering.com/raytracinggems/

Physically Based Rendering, 4th Edition: https://pbr-book.org/

Both:

Computer Graphics from Scratch: https://gabrielgambetta.com/computer-graphics-from-scratch/

I'll also link a comment[1] I made a while back about learning 3D graphics. There's no better teacher than manually implementing the rasterisation and ray-tracing pipelines.

[1]: https://news.ycombinator.com/item?id=46410210#46416135

ggambetta•1w ago
May I add Computer Graphics From Scratch, which covers both rasterization and raytracing? https://gabrielgambetta.com/computer-graphics-from-scratch/i...

I have to admit I'm quite surprised by how eerily similar this website feels to my book: the chapter structure, the sequencing of the concepts, the examples and diagrams, even the "why" section (mine: https://gabrielgambetta.com/computer-graphics-from-scratch/0... - theirs: https://lisyarus.github.io/blog/posts/implementing-a-tiny-cp...).

I don't know what to make of this. Maybe there's nothing to it. But I feel uneasy :(

delta_p_delta_x•1w ago
Ah yes, great book; thanks for pointing it out. Added to the list.

As for similarity, I think the sections you've highlighted are broadly similar, but I can't detect any phrase-for-phrase copy-pasting that is typical of LLM or thesaurus find-replace. I feel that the topic layout and the motivations for any tutorial or course covering the same subject matter will eventually converge to the same broad ideas.

The website's sequence of steps is also a bit different compared to your book's. And most telling, the code, diagrams, and maths in the website are all different (such assets are usually an instant giveaway of plagiarism). You've got pseudocode; the website uses the C++ standard library to a great extent.

If it were me, I might rest a little easier :)

lisyarus•1w ago
Hi! Blog post author here. I have heard of the "Computer Graphics from Scratch" book before, but I haven't read it myself, so it would be quite hard for me to plagiarize it. I guess some similarities are expected when talking about a well-established topic.
globalnode•1w ago
It's a standard pipeline; everything from everyone will look roughly similar, and your book likely looks something like previous work too. I wouldn't worry about it. PS: I really loved your web tutorials back in the day.
gopla•1w ago
An additional resource on rasterisation, using the scan conversion technique:

https://kristoffer-dyrkorn.github.io/scanline-rasterizer/

Levitating•1w ago
I can vouch for ScratchAPixel; it taught me the basics of 3D projection.
gustavopezzi•6d ago
Thanks for mentioning pikuma. :-)

The 3D software rendering is still the most popular lecture at our school even after all these years. And it really surprises me, because we "spend a lot of time" talking about some old techniques (MS-DOS, Amiga, ST, Archimedes, etc.). But it's fun to see how much doing things manually helps students understand the math and the data movement that the GPU helps automate and vectorize in modern systems.

feelamee•1w ago
> Triangles are easy to rasterize

Sure, rasterizing a triangle is not so hard, but... you know, rasterizing a rectangle is far, far easier.

maximilianburke•1w ago
…as long as all points are co-planar.
qingcharles•1w ago
Rasterizing triangles is a nightmare, especially if performance is a goal. One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.

I did this stuff for a living 30 years ago. Just this week I had Deep Think create a 3D engine with a triangle rasterizer in 16-bit x86 for the original IBM XT.

nottorp•1w ago
> One of the biggest issues is getting abutting triangles to render so you don't have overlapping pixels or gaps.
> I did this stuff for a living 30 years ago.

So you did CAD or something like that? Since that matters far less in games.

qingcharles•1w ago
It still looks janky in games. Look at any old 8/16-bit pre-GPU game. Getting this stuff right is hard.
nottorp•6d ago
It does. But gamers accepted it and played the hell out of it. That's why I assumed you did CAD.
ralferoo•1w ago
It's fairly easy to get triangle rasterisation performant if you think about the problem hard enough.

Here's an implementation I wrote for the PS3 SPU many moons ago: https://github.com/ralferoo/spugl/blob/master/pixelshaders/t...

That does perspective-correct texture mapping, and from a quick count of the instructions in the main loop it's approximately 44 cycles per 8 pixels.

The half-line equation approach also doesn't suffer from any overlapping pixels or gaps, as long as the shared edge's endpoints are exactly the same in both triangles and you use fixed-point arithmetic.

The key trick is to rework each line equation so that it's effectively x·dx + y·dy + C = 0. You can then evaluate A = x·dx + y·dy + C at the top left of the square that encloses the triangle. For every pixel to the right you just add dx, and for every pixel down you just add dy. The sign bit indicates whether the pixel is or isn't inside that side of the triangle, and you can AND/OR the 3 sides' sign bits together to determine whether a pixel is inside or outside the triangle. (Whether to use AND or OR depends on how you've decided to interpret the sign bit.)
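
For a concrete picture of the incremental trick described above, here's a minimal Python sketch (integer pixel coordinates, no fill-rule tie-breaking or sub-pixel fixed point; the names edge_coeffs and put_pixel are illustrative and not from the spugl code):

    def edge_coeffs(x0, y0, x1, y1):
        # Line through (x0, y0)-(x1, y1) written as A*x + B*y + C = 0; the edge
        # function E(x, y) = A*x + B*y + C is zero on the line and has opposite
        # signs on its two sides.
        return y1 - y0, x0 - x1, x1 * y0 - x0 * y1

    def rasterize_triangle(v0, v1, v2, put_pixel):
        # Incremental half-space rasterizer over the triangle's bounding box:
        # evaluate each edge function once at the top-left corner, then step by
        # its x coefficient across a row and by its y coefficient down a column.
        edges = [edge_coeffs(*v0, *v1), edge_coeffs(*v1, *v2), edge_coeffs(*v2, *v0)]

        # The first edge function evaluated at the opposite vertex is (twice) the
        # signed area; flip all coefficients if needed so that "inside" always
        # means E >= 0 on all three edges, regardless of winding.
        area = edges[0][0] * v2[0] + edges[0][1] * v2[1] + edges[0][2]
        if area == 0:
            return  # degenerate triangle
        if area < 0:
            edges = [(-a, -b, -c) for (a, b, c) in edges]

        xmin, xmax = min(v0[0], v1[0], v2[0]), max(v0[0], v1[0], v2[0])
        ymin, ymax = min(v0[1], v1[1], v2[1]), max(v0[1], v1[1], v2[1])

        row = [a * xmin + b * ymin + c for (a, b, c) in edges]
        for y in range(ymin, ymax + 1):
            e0, e1, e2 = row
            for x in range(xmin, xmax + 1):
                if e0 >= 0 and e1 >= 0 and e2 >= 0:
                    put_pixel(x, y)
                e0 += edges[0][0]; e1 += edges[1][0]; e2 += edges[2][0]
            row = [r + b for r, (_, b, _) in zip(row, edges)]

Real code would also add the tie-break rule discussed further down the thread and keep the coordinates in fixed point so that shared edges land on exactly the same line.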

The calculation of all the values consumed by the rasteriser (C, dx, dy) for all 3 sides of a triangle, given the 3 coordinates, is here: https://github.com/ralferoo/spugl/blob/db6e22e18fdf3b4338390...

Some of the explanations I wrote down while trying to understand barycentric coordinates (from which this stuff kind of just falls out) ended up here: https://github.com/ralferoo/spugl/blob/master/doc/ideas.txt

(Apologies if my memory/terminology is a bit hazy on this - it was a very long time ago now!)

IIRC, in terms of performance, this software implementation filling a 720p screen with perspective-correct texture-mapped triangles could hit 60Hz using only 1 of the 7 SPUs, although the triangles weren't overlapping so there was no overdraw. The biggest problem was actually saturating the memory bandwidth, because I wasn't caching the texture data: an unconditional DMA fetch from main memory always completed before the values were needed later in the loop.

ralferoo•1w ago
Forgot to add that when you're calculating these fixed values for each triangle, you also get hidden surface removal for free. If you have a constant CW or CCW orientation, the sign of the base value for C tells you whether the triangle is facing towards you or away.
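
For concreteness, here's a hedged sketch of that free orientation test, using the signed area from the same setup rather than any particular coefficient (the exact sign convention depends on your winding order and on whether y points down):

    def is_backfacing(v0, v1, v2):
        # Twice the signed area of the screen-space triangle. With a consistent
        # winding convention (assumed here: counter-clockwise front faces, y up),
        # one sign means the triangle faces the camera and the other means it can
        # be culled before any per-pixel work.
        (x0, y0), (x1, y1), (x2, y2) = v0, v1, v2
        return (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0) <= 0
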
qingcharles•1w ago
It does, but all hell breaks loose if you start having translucent stuff going on :(
ralferoo•1w ago
I'm not sure I understand the problem you're having.

Obviously, if you have translucency then you need to draw those objects last, but if you're using the half-line method, two triangles that share an edge will follow the same edge exactly, provided you're using fixed-point math (and doing it properly, I guess!). A pixel will either be in one or the other, not both.

The only issue would be if you wanted to do MSAA; then yes, it gets more complicated, but I'd say it's conceptually simpler to render at 2x resolution and then downsample later. I didn't attempt to tackle MSAA, but one optimisation would be to write a 2x2 block from a single calculated pixel while doing the half-line equation at the finer resolution to determine which of the 2x2 pixels receive the contribution. Then, after you render everything, do a 2x2 downsample on the final image.
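
A rough sketch of that final downsample step, assuming a greyscale framebuffer stored as a flat row-major list (real code would do this per colour channel):

    def downsample_2x2(src, width, height):
        # Box-filter a framebuffer rendered at 2x resolution, i.e. a list of
        # (2*width) * (2*height) values, down to width x height by averaging
        # each 2x2 block.
        stride = 2 * width
        dst = [0] * (width * height)
        for y in range(height):
            for x in range(width):
                sx, sy = 2 * x, 2 * y
                total = (src[sy * stride + sx] + src[sy * stride + sx + 1] +
                         src[(sy + 1) * stride + sx] + src[(sy + 1) * stride + sx + 1])
                dst[y * width + x] = total // 4
        return dst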

qingcharles•1w ago
It's definitely not "fairly easy" once you get into perspective-correct texture mapping on the triangles, and making sure the pixels along the diagonal of a quad aren't all janky, leaving the texture with an obvious line across it. Then you add on whatever methods you're using to light/shade it. It gets horrible really quickly. To me, at least!
ralferoo•1w ago
The first link I posted, specifically lines 200-204 ( https://github.com/ralferoo/spugl/blob/master/pixelshaders/t... ), isn't quite what I remembered, as this seems to be doing a texture-correct visualisation of s, t, k used for calculating mipmap levels and not actually doing the texture fetch - you'll have to forgive me, it's been 17 years since I looked at the code, so I forgot where everything is.

It looks like the full texture mapper including mipmap levels is only in the OLD version of the code here: https://github.com/ralferoo/spugl/blob/master/old/shader.c

This is doing full perspective-correct texture mapping, including mipmapping, and then effectively doing trilinear filtering (GL_LINEAR_MIPMAP_LINEAR): sampling the 4 nearest pixels from each of 2 mipmap layers, blending both sets of 4 pixels, and then interpolating between the mipmaps.

But anyway, to do any interpolation perspective correctly, you need to interpolate w, exactly as you would interpolate r,g,b for flat colours or u,v for texture coords. You then have 1 reciprocal per pixel to get 1/w, and then multiply all the interpolated parameters by that.
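
As a reference point, here's one common formulation as a small Python sketch; it may not match the spugl code exactly, and the barycentric weights b0, b1, b2 just stand in for however the rasterizer actually interpolates:

    def perspective_correct_uv(b0, b1, b2, verts):
        # Perspective-correct texture coordinates at one pixel. Each vertex is
        # (u, v, w) with w its clip-space w; b0, b1, b2 are the pixel's
        # screen-space barycentric weights (b0 + b1 + b2 == 1).
        # Interpolate u/w, v/w and 1/w linearly in screen space, then do a single
        # reciprocal per pixel and multiply the parameters back up.
        inv_w = [1.0 / w for (_u, _v, w) in verts]
        u_w = [u * iw for (u, _v, _w), iw in zip(verts, inv_w)]
        v_w = [v * iw for (_u, v, _w), iw in zip(verts, inv_w)]

        w = 1.0 / (b0 * inv_w[0] + b1 * inv_w[1] + b2 * inv_w[2])  # the one reciprocal
        u = (b0 * u_w[0] + b1 * u_w[1] + b2 * u_w[2]) * w
        v = (b0 * v_w[0] + b1 * v_w[1] + b2 * v_w[2]) * w
        return u, v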

In terms of the "obvious line across it", it could be that you're just not clamping u and v between 0 and 1 (or whatever texture coordinates you're using), or clamping them instead of wrapping them for a wrapped texture. And if you're not doing mipmapping and just doing nearest-pixel on a high-res texture, then you will get sparklies.

I've got a very old, poor-quality video here (it's kind of hard to see anything because it was filmed using a phone pointed at the screen): https://www.youtube.com/watch?v=U5o-01s5KQw I don't have anything newer, as I haven't turned on my Linux PS3 for probably at least 15 years now, but even though it's low quality there's no obvious problem at the edges.

tubs•6d ago
As the other posters have shown, it's not that hard.

Most graphics specs will explicitly say how the tie-break rules work.

The key is to work in fixed point (16.8, or even 16.4 if you're feeling spicy). It's not "trivial", but in general you write it once and it's done. It's not something you have to go back to over and over for weird bugs.

Wide lines are a more fun case…
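
For anyone curious what those tie-break rules look like in code, here's a sketch of the usual "top-left" rule under one stated convention (y pointing down, triangle interior on the E >= 0 side of each directed edge); the classification flips under other conventions:

    def is_top_left(ax, ay, bx, by):
        # Directed edge a -> b. With y pointing down and the interior on the
        # E >= 0 side of every directed edge: a "top" edge is exactly horizontal
        # with the interior below it (so it points in +x), and a "left" edge
        # points up the screen (decreasing y).
        return (ay == by and bx > ax) or (by < ay)

    def edge_bias(ax, ay, bx, by):
        # With integer or fixed-point edge functions the tie-break is a per-edge
        # bias: pixels exactly on a top or left edge (E == 0) count as inside,
        # pixels exactly on any other edge do not, so a shared edge is claimed by
        # exactly one of the two adjacent triangles.
        return 0 if is_top_left(ax, ay, bx, by) else -1

    # Per-pixel inside test: E0 + bias0 >= 0 and E1 + bias1 >= 0 and E2 + bias2 >= 0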

nottorp•1w ago
With discrete GPUs pricing themselves out of the consumer space, we may actually need to switch back to software rendering :)
cyber_kinetist•6d ago
That's too much of a stretch, but I believe games of the next era will be optimized more towards integrated GPUs (such as AMD's iGPU in the Steam Deck and Steam Machine).

When hardware is priced out of reach for most consumers (along with a global supply-chain collapse due to tariffs and a potential Taiwan invasion), a new era awaits where performance optimization is going to be critical again for games. I expect existing game engines like Unity and Unreal Engine to fall out of favor because of all the performance issues they have, and maybe we can return to a temporary "wild west" era where everyone has their own hacky solution to cram stuff into limited hardware.

nottorp•6d ago
> everyone has their own hacky solution to cram stuff into limited hardware

Limited hardware gave us a lot of classic titles and fundamental game mechanics.

Off the top of my head:

Metal Gear's stealth was born because they couldn't draw enough enemy sprites to make a shooting game. Instead they drew just a few and made you avoid them.

Ico's and Silent Hill's foggy atmosphere is partially determined by their polygon budget. They didn't have the hardware to draw distant scenery so they hid it in fog.

bool3max•6d ago
That's wishful thinking. The reality is that your iGPU will be used to decode the video stream of an Unreal game running on a dedicated GPU on some cloud server which you pay a monthly subscription fee for.
Sohcahtoa82•1w ago
I tried doing this in Python a while ago. It did not go well, and it really showed how SLOW Python is.

Even with just a 1280x720 window, setting every pixel to a single color by writing a value into a byte array and then using a PyGame function to hand it a full frame to draw, I maxed out at around 10 fps. I tried so many things and simply could not get it any faster.
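
For what it's worth, the usual fix is to keep the per-pixel work out of the Python interpreter: build the whole frame in a NumPy array and hand it to pygame in one call. A rough sketch (NumPy is an assumption on top of the setup described above, and I haven't benchmarked it against that exact code; pygame.surfarray.blit_array is a real pygame API):

    import numpy as np
    import pygame

    WIDTH, HEIGHT = 1280, 720

    pygame.init()
    screen = pygame.display.set_mode((WIDTH, HEIGHT))
    clock = pygame.time.Clock()

    # pygame.surfarray uses (width, height, 3) ordering, so allocate it that way.
    frame = np.zeros((WIDTH, HEIGHT, 3), dtype=np.uint8)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        frame[:] = (30, 30, 90)  # clear the whole frame in one vectorized operation

        pygame.surfarray.blit_array(screen, frame)  # one bulk copy, no Python loop
        pygame.display.flip()
        clock.tick()
        pygame.display.set_caption(f"{clock.get_fps():.1f} fps")

    pygame.quit()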

phendrenad2•6d ago
I'm surprised more indie games don't use software rendering, just to get a more unique style. 640x400 ought to be enough for anybody!