Sure, rasterizing a triangle is not so hard, but.. you know, rasterizing a rectangle is far, far easier.
I did this stuff for a living 30 years ago. Just this week I had Deep Think create a 3D engine with triangle rasterizer in 16-bit x86 for the original IBM XT.
So you did CAD or something like that? Since that matters far less in games.
Here's an implementation I wrote for the PS3 SPU many moons ago: https://github.com/ralferoo/spugl/blob/master/pixelshaders/t...
That does perspective correct texture mapping, and from a quick count of the instructions in the main loop is approximately 44 cycles per 8 pixels.
The half-line equation approach used here also doesn't suffer from any overlapping pixels or gaps, as long as both triangles use the same two endpoints for a shared edge and you use fixed-point arithmetic.
The key trick is to rework each line equation such that it's effectively x.dx+y.dy+C=0. You can then evaluate A=x.dx+y.dy+C at the top left of the square that encloses the triangle. Every pixel to the right, you can just add dx, and every pixel down, you can just add dy. The sign bit indicates whether the pixel is or isn't inside that side of the triangle, and you can and/or the 3 side's sign bits together to determine whether a pixel is inside or outside the triangle. (Whether to use and or or depends on how you've decided to interpret the sign bit)
The calculation of all the values consumed by the rasteriser (C, dx, dy) for all 3 sides of a triangle, given the 3 coordinates, is here: https://github.com/ralferoo/spugl/blob/db6e22e18fdf3b4338390...
Some of the explanations I wrote down while trying to understand barycentric coordinates (from which this stuff kind of just falls out) ended up here: https://github.com/ralferoo/spugl/blob/master/doc/ideas.txt
(Apologies if my memory/terminology is a bit hazy on this - it was a very long time ago now!)
IIRC, in terms of performance, this software implementation could fill a 720p screen with perspective-correct texture-mapped triangles at 60Hz using only 1 of the 7 SPUs, although the triangles weren't overlapping so there was no overdraw. The biggest problem was actually saturating the memory bandwidth, because I wasn't caching the texture data: an unconditional DMA fetch from main memory always completed before the values were needed later in the loop.
Obviously, if you have translucency, then you need to draw those objects last, but if you're using the half-line method, two triangles that share an edge will follow that edge exactly, provided you're using fixed-point maths (and doing it properly, I guess!). A pixel will end up in one triangle or the other, never both.
The only issue would be if you wanted to do MSAA; then yes, it gets more complicated, but I'd say it's conceptually simpler to render at 2x resolution and downsample later. I didn't attempt to tackle MSAA, but one optimisation would be to write a 2x2 block from a single calculated pixel, while evaluating the half-line equations at the finer resolution to determine which of the 2x2 pixels receive the contribution. Then, after you render everything, do a 2x2 downsample on the final image.
It looks like the full texture mapper including mipmap levels is only in the OLD version of the code here: https://github.com/ralferoo/spugl/blob/master/old/shader.c
This is doing full perspective-correct texture mapping, including mipmapping, and then effectively doing GL_LINEAR_MIPMAP_LINEAR (trilinear filtering): sampling the 4 nearest pixels from each of 2 mipmap levels, blending each set of 4 pixels, and then interpolating between the mipmap levels.
But anyway, to do any interpolation perspective-correctly, you need to interpolate 1/w linearly in screen space, exactly as you would interpolate r,g,b for flat colours or u,v for texture coords (with each of those parameters divided by w first). You then have 1 reciprocal per pixel to recover w, and you multiply all the interpolated parameters by that.
In terms of the "obvious line across it", it could be that you're just not clamping u and v between 0 and 1 (or whatever texture coordinates you're using), or that you're clamping them instead of wrapping for a wrapped texture. And if you're not doing mipmapping and just doing nearest-pixel on a high-res texture, then you will get sparklies.
I've got a very old and poor quality video here, and it's kind of hard to see anything because it was filmed using a phone pointing at the screen: https://www.youtube.com/watch?v=U5o-01s5KQw I don't have anything newer as I haven't turned on my linux PS3 for probably at least 15 years now, but even though it's low quality there's no obvious problem at the edges.
Most graphics specs will explicitly say how tie break rules work.
The key is to work in fixed point (16.8 or even 16.4 if you’re feeling spicy). It’s not “trivial” but in general you write it and it’s done. It’s not something you have to go back to over and over for weird bugs.
Wide lines are a more fun case…
When hardware is priced out of reach for most consumers (along with a global supply-chain collapse due to tariffs and a potential Taiwan invasion), a new era awaits where performance optimization is going to be critical again for games. I expect existing game engines like Unity and Unreal Engine to fall out of favour because of all the performance issues they have, and maybe we can return to a temporary "wild west" era where everyone has their own hacky solution to cram stuff into limited hardware.
Limited hardware gave us a lot of classic titles and fundamental game mechanics.
Off the top of my head:
Metal Gear's stealth was born because they couldn't draw enough enemy sprites to make a shooting game. Instead they drew just a few and made you avoid them.
Ico's and Silent Hill's foggy atmospheres were partly determined by their polygon budgets: they didn't have the hardware to draw distant scenery, so they hid it in fog.
Even with just a 1280x720 window, setting every pixel to a single colour by writing values into a byte array and then using a PyGame function to draw the full frame, I maxed out at around 10 fps. I tried so many things and simply could not get any faster.
delta_p_delta_x•1w ago
Rasterisation:
TinyRenderer: https://haqr.eu/tinyrenderer/
ScratchAPixel: https://www.scratchapixel.com/index.html
3D Computer Graphics Programming by Pikuma (paid): https://pikuma.com/courses/learn-3d-computer-graphics-progra...
Ray-tracing:
Ray Tracing in One Weekend: https://raytracing.github.io/
Ray Tracing Gems: https://www.realtimerendering.com/raytracinggems/
Physically Based Rendering, 4th Edition: https://pbr-book.org/
Both:
Computer Graphics from Scratch: https://gabrielgambetta.com/computer-graphics-from-scratch/
I'll also link a comment[1] I made a while back about learning 3D graphics. There's no better teacher than manually implementing the rasterisation and ray-tracing pipelines.
[1]: https://news.ycombinator.com/item?id=46410210#46416135
ggambetta•1w ago
I have to admit I'm quite surprised by how eerily similar this website feels to my book. The chapter structure, the sequencing of the concepts, the examples and diagrams, even the "why" section (mine https://gabrielgambetta.com/computer-graphics-from-scratch/0... - theirs https://lisyarus.github.io/blog/posts/implementing-a-tiny-cp...)
I don't know what to make of this. Maybe there's nothing to it. But I feel uneasy :(
delta_p_delta_x•1w ago
As for similarity, I think the sections you've highlighted are broadly similar, but I can't detect any phrase-for-phrase copy-pasting that is typical of LLM or thesaurus find-replace. I feel that the topic layout and the motivations for any tutorial or course covering the same subject matter will eventually converge to the same broad ideas.
The website's sequence of steps is also a bit different compared to your book's. And most telling, the code, diagrams, and maths in the website are all different (such assets are usually an instant giveaway of plagiarism). You've got pseudocode; the website uses the C++ standard library to a great extent.
If it were me, I might rest a little easier :)
gopla•1w ago
https://kristoffer-dyrkorn.github.io/scanline-rasterizer/
gustavopezzi•6d ago
The 3D software rendering lecture is still the most popular one from our school, even after all these years. And it really surprises me, because we "spend a lot of time" talking about some old techniques (MS-DOS, Amiga, ST, Archimedes, etc.). But it's fun to see how much doing things manually helps students understand the maths and the data movement that the GPU automates and vectorises in modern systems.