
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
56•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
637•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
935•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•30 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
478•todsacerdoti•21h ago•237 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
278•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Show HN: Curved Space Shader in Three.js (via 4D sphere projection)

https://github.com/bntre/CurvedSpaceShader
68•bntr•8mo ago
I made a GLSL shader that bends 3D space using a 4D hypersphere projection.

The idea:

  1. Project a model onto a 4D sphere
  2. Rotate the sphere
  3. Project the model back to 3D
Code and details: https://github.com/bntre/CurvedSpaceShader

Curious what you think.
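For readers who want to see the three steps concretely, here is a minimal GLSL sketch written as a Three.js-style vertex shader. It is an illustration under assumptions, not the repository's actual shader: the uAngle uniform is hypothetical, and position, modelViewMatrix, and projectionMatrix are the built-ins Three.js injects into a ShaderMaterial.

  // Sketch only: bend each vertex by a round trip through the unit 3-sphere in R^4.
  uniform float uAngle;  // hypothetical: how far the 4D sphere is rotated

  vec3 curve(vec3 p) {
      // 1. Inverse stereographic projection: R^3 -> unit sphere in R^4.
      float d = dot(p, p);
      vec4 q = vec4(2.0 * p, d - 1.0) / (d + 1.0);

      // 2. Rotate the 4D sphere in the x-w plane.
      float c = cos(uAngle);
      float s = sin(uAngle);
      q = vec4(c * q.x - s * q.w, q.y, q.z, s * q.x + c * q.w);

      // 3. Stereographic projection back: unit sphere in R^4 -> R^3.
      return q.xyz / (1.0 - q.w);
  }

  void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(curve(position), 1.0);
  }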

Comments

Sourabhsss1•8mo ago
This is interesting...
tetris11•8mo ago
I like it as a curiosity, but it only makes sense to me if I think of it as a 2D scene projected onto a 3D sphere.

Is a 4D sphere the upper limit for this method, or can you project, say, a 3D scene onto a 5D sphere? (e.g. a 1D line onto a 3D sphere, as the lower-dimensional analog)

bntr•8mo ago
The 4D sphere makes sense here because its surface is 3-dimensional. That means I can project the model from the 4D sphere back to 3D in a bijective (one-to-one) way.

You could project from 5D down to 3D, but the dimensional mismatch breaks the bijection - you'd lose information or overlap points. However, a 4D → 5D → 4D projection would preserve structure, though it gets harder to visualize.

I chose 3D ↔ 4D specifically because curved 3D space is much more intuitive and has direct physical meaning - it corresponds to positively curved space (see e.g. https://en.wikipedia.org/wiki/Shape_of_the_universe#Universe... )

saltwatercowboy•8mo ago
Very cool. Have you tried applying it to a cube sphere, and are the results contiguous? I'd be interested in incorporating it into a hybrid planetary science/storymapping project I'm working on.
bntr•8mo ago
I'm not entirely sure I understand the question. I doubt that any kind of sphere other than the abstract mathematical one (X²+Y²+...= 1) would be suitable for transformations like stereographic projection.
qwertox•8mo ago
These projections, how do they make sense?

I can project a 3D item onto a 2D plane, but I can only observe it because I'm outside of that 2D plane. This is like expecting the 2D plane to see itself and deduce three-dimensionality from what it sees. Like a stickman: it would only be able to raycast from its eye in a circle. It could do so from multiple points on the plane, but still, how would it know that it is looking at the projection of a sphere?

bntr•8mo ago
The surface of a 4D sphere (a 3-sphere) is itself 3-dimensional (just like the surface of an ordinary 3D ball is 2D). So when I use the hypersphere in intermediate computations, I’m not actually adding an extra dimension to the world.

What this transformation does give me is a way to imagine a closed, finite 3D space, where any path you follow eventually loops back to where you started (like a stickman walking on the surface of a globe). Whether or not that space “really” needs a 4th spatial dimension is less important than the intuition it gives: this curved embedding helps us visualize what a positively curved 3D universe might feel like from the inside.

fallinditch•8mo ago
Good job, a lovely idea! It reminds me of AI morphing animation, I wonder if these techniques can be combined...
tasoeur•8mo ago
I wonder if there’s something interesting visually if this shader could be explored immersively (VR). Could be worth prototyping it on my little app :-) (https://shader.vision).
bntr•8mo ago
VR has come up a couple of times in response to my experiments - maybe it’s time I give it a try.

I once tried a cross-eye 4D view: https://github.com/bntre/40-js

ivanjermakov•8mo ago
Because the transformation happens in the vertex shader, the curvature would not work on low-poly objects. For this reason, camera distortion is usually implemented in clip space (only after the undistorted frame is ready).
bntr•8mo ago
Do you mean applying geometric distortion in the fragment shader? I'm not quite sure how that would work (I'm not so familiar with shaders at that level).

I've heard of true 3D bump mapping being done in fragment shaders (not just lighting), but I can't really imagine how more radical geometric distortion could be implemented there.

ivanjermakov•8mo ago
Fragment shader distortion suffers from another issue: heavy distortions require higher resolutions and (depending on the distortion type) a higher field of view. Even more radical distortions would require cubemaps of undistorted frames to handle fragments from behind the camera.

This answer suggests some other ideas on implementing lens distortion: https://stackoverflow.com/a/44492971
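To make the post-process idea concrete, here is a minimal sketch of a full-screen fragment shader that warps an already-rendered frame, so the result does not depend on mesh tessellation. It is an assumption-laden illustration, not code from the linked answer; tFrame, uStrength, and vUv are hypothetical names for the rendered frame texture, a distortion parameter, and the usual full-screen-quad UV varying.

  // Sketch only: simple radial (barrel/pincushion) distortion as a post-process pass.
  uniform sampler2D tFrame;   // hypothetical: the undistorted rendered frame
  uniform float uStrength;    // hypothetical: distortion amount (sign flips the direction)
  varying vec2 vUv;           // full-screen quad UVs in [0, 1]

  void main() {
      vec2 centered = vUv * 2.0 - 1.0;        // move to [-1, 1] screen coordinates
      float r2 = dot(centered, centered);     // squared distance from the screen center
      vec2 warped = centered * (1.0 + uStrength * r2);
      gl_FragColor = texture2D(tFrame, warped * 0.5 + 0.5);
  }

As the comment above notes, strong distortions quickly sample outside the rendered frame or need data from behind the camera, which a single undistorted texture cannot provide.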

bntr•8mo ago
Thanks! The cube mapping idea is really interesting — I didn’t know about that approach. However, I doubt it would help in my case, where the distortion is strong enough to flip the depth order of objects.

Maybe these methods could be combined somehow, but it seems simpler to use subdivision (as also mentioned in that thread) — perhaps selectively, for objects near the periphery where distortion is strongest.

Duanemclemore•8mo ago
This is rad. The game is especially cool. Congrats, OP!

This is the same math as an old program called Jenn3d [0], which I played around with almost twenty years ago. (Amazingly, the site is still online!) The crazies who built it also built it to play Go in 3d projective space. I was never able to play Go with it, but I've been into projective geometries ever since.

OP - if you want to try something else cool with 4d to 3d projective geometries, here's an idea I ran across working with 3d to 2d.

I make a tool for generating continuous groupings of repetitive objects in architectural computation. [1] When faced with trying to view the inside of lattices containing sets of solids which tile space continuously, I tried a few different methods (one unsuccessful but cool-looking one here [2]).

So when I created the sphere upon which to project the objects in the lattice, rather than just projecting the edges, I made concentric spherical section planes and projected the intersection of those with the objects. [3] By using objects parallel to the projection plane to cut sections, I was able to generate spacings between the final generated section lines that mapped how oblique the surface being cut was to the ray projecting from the centerpoint of the sphere to its surface.

Sorry OP, that's a long description. TL;DR - instead of projecting 3d mesh edges to a 4d sphere then back down to 3d space, what if you tried describing the meshes as the intersection of their 3d geometry with 4d hyperspheres parallel to the projection hypersphere? It would look more abstract, but I bet it would look cool as heck, especially navigating in 3d projective space!

[0] https://jenn3d.org/ [1] https://www.food4rhino.com/en/app/horta [2] https://vimeo.com/698774461 [3] https://vimeo.com/698774461

p.s. Also, if any actual geometers are reading this - I'd love to co-author a math paper that more rigorously considers what I explored / demonstrated with the drawings above. I have a whole set of them methodically stepping through the process, and could generate more at will. I also have a paper about it I can send on request (or if you can hunt down the Design Communication Association Conference Proceedings 2022).

bntr•8mo ago
Thanks for the kind words and for sharing your thoughts! I actually remember Jenn3d as well — the animations always reminded me of some kind of shimmering foam.

Unfortunately, I couldn’t quite grasp the method you’re describing — perhaps I’m missing some illustrations. (By the way, links [2] and [3] seem to point to the same video, and I’m not sure they match your description.)

It sounds like you’re suggesting a way to slice objects into almost-repetitive sections, so the brain can reconstruct a fuller picture — a bit like how compound eyes work in insects.

Duanemclemore•8mo ago
That's so strange. For some reason it gave me the link for a completely different video...

Anyway - here's

[2] https://vimeo.com/757057720

and [3] https://vimeo.com/757062988

Yeah, jenn was really rad. It's red meat to me when anyone's working on these kinds of projections.

Since without the proper explanation the whole "concentric spherical section planes" thing is unclear (and actually, they wouldn't be section "planes" in the first place), here's the paper I was referencing:

https://www.academia.edu/129490488/Visualizing_Space_Group_H...

(see pg. 3 for a visual explanation that I hope helps.)

I intersected the objects in the lattice with spheres to create lines, then projected those to the outer sphere and down to the 2d plane. In the same way, you could use concentric hyperspheres to intersect a 3d object serially, then project those intersections back to 3d space...

bntr•8mo ago
Thanks — your method makes more sense now. I’m not very familiar with architectural design problems, so I didn’t fully grasp how this technique helps build a more complete understanding of the internal structure of composed objects. The final image reminds me of a kind of holographic source.

When I think in that direction, it seems more appropriate not to add spatial dimensions (like 4D), but to add animation to your method (shifting or rotating the original composed object). That might help an untrained viewer better understand the usefulness of the final projection.

meta-meta•8mo ago
Nice! My partner predicted this in some album art she did for a friend. https://badbraids.bandcamp.com/album/supreme-parallel
jimmySixDOF•8mo ago
For anyone interested in exploring: the godot-dimensions project is working on a JSON spec for 4D shapes for rendering and physics -- it's called G4MF (Good 4D Model Format), loosely based on Khronos glTF -- still a work in progress, but there is playground editor support for x/y/z/w

https://github.com/godot-dimensions/g4mf

bntr•8mo ago
Thanks! I didn’t know about G4MF — looks cool. What I’ve missed more often, though, is 5×5 matrices for real 4D transformations.
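As a side note on the 5×5 point: GLSL has no mat5 type, so a full 4D affine transform (the 5×5 homogeneous matrix) has to be split. A minimal sketch, with hypothetical uniform names:

  // Sketch only: pass a 4D affine transform as a 4x4 linear part plus a 4D translation,
  // since GLSL stops at mat4 / vec4.
  uniform mat4 uLinear4D;   // hypothetical: rotation/scale block of the 5x5 matrix
  uniform vec4 uOffset4D;   // hypothetical: translation column of the 5x5 matrix

  vec4 transform4D(vec4 p) {
      return uLinear4D * p + uOffset4D;
  }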
bobsmooth•8mo ago
God I wish I could understand 4D geometry.
talkingtab•8mo ago
While the demo is great, and the 4D stuff is very cool, for me the amazing thing is the code to do this. Three.js opens a door to using WebGL and WebGPU, and shaders open yet more doors.