
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
612•klaussilveira•12h ago•180 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
915•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
29•helloplanets•4d ago•22 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
102•matheusalmeida•1d ago•24 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
36•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
212•isitcontent•12h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
5•kaonwarb•3d ago•1 comment

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
206•dmpetrov•12h ago•101 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
316•vecti•14h ago•140 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
355•aktau•18h ago•181 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
361•ostacke•18h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
471•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
267•eljojo•15h ago•157 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
400•lstoll•18h ago•271 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
82•quibono•4d ago•20 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
54•kmm•4d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
9•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
242•i5heu•15h ago•183 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
51•gfortaine•10h ago•16 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
138•vmatsiiako•17h ago•60 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
275•surprisetalk•3d ago•37 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•11h ago•13 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1052•cdrnsf•21h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
127•SerCe•8h ago•111 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
173•limoce•3d ago•93 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
7•jesperordrup•2h ago•4 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
61•rescrv•20h ago•22 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
17•neogoose•4h ago•9 comments

Civil War in 3D: Stereographs from the New-York Historical Society (2015)

https://www.nyhistory.org/blogs/civil-war-in-3d-stereographs-from-the-new-york-historical-society
58•LorenDB•8mo ago

Comments

ramesh31•8mo ago
You can see the effect in these images directly without a device, by simply crossing your eyes and focusing on the third central image that appears, similar to those 3D optical illusion books: https://youtu.be/zBa-bCxsZDk
JKCalhoun•8mo ago
The cross-eyed method requires the images be swapped left-for-right.
kazinator•8mo ago
Not sure why you are downvoted; that is correct.
kazinator•8mo ago
This gallery presents the original stereograms in their stare-into-distance configuration (left image goes with the left eye, right with the right), not the cross-eyed configuration (left image goes with the right eye and vice versa).
JeremyHerrman•8mo ago
Is it just me or are some of these examples not actually stereo image pairs?

I'm just crossing my eyes to see the "negative" depth image, but some, like "McLean’s House" and "Lincoln visits General McClellan at Antietam", don't appear to have any depth differences between them.

JKCalhoun•8mo ago
You need to swap the left and right images to use the cross-eyed method on these. You can try downloading an image and using an app like Preview to Flip Horizontal (that will work).

Otherwise you're seeing a kind of inverse stereo image.

(EDIT: Having said that, I tried a few of the images and the stereo effect is subtle. The soldier on the horse — I was not even able to get that to "snap" for me. I am not great with cross-eyed stereo though.)

JeremyHerrman•8mo ago
Yes, understood that the cross-eyed method inverts the depth. My point was that some of the image pairs are from the exact same perspective, so there is no stereo depth whether you view them wall-eyed or cross-eyed.
JKCalhoun•8mo ago
Yeah, if there is depth, it was pretty subtle on the few I got to work.
kazinator•8mo ago
These images were prepared for insertion into a stereoscope, in which the left eye looks at the left image and the right eye at the right image through a magnifying lens. When viewing with the naked eye, you must stare past the images into the distance to get them to converge that way.
JeremyHerrman•8mo ago
Thanks, I understand how stereograms work and have quite a few of these IRL. I use the cross-eyed method to quickly view them (albeit with inverted depth) when they're shown on screen.

I've tried to show my point in these videos, which show basically no difference between the two images when overlaid and crossfaded: https://imgur.com/a/RMy3QA3

kazinator•8mo ago
I agree that particular image is a dud; I was not able to perceive any depth.

The creator mistakenly used the same image twice.

The two-men-in-a-tent image is likewise a dud. If we look at the pole at the tent entrance, there is no difference in parallax between it and objects at the back wall.

The Abe Lincoln one doesn't pop out much for me.

The dead soldiers in the field also seem to be identical images.

The clearly genuine ones are the horse-drawn carriage in the forest, and the horseman in front of the cannon.

JeremyHerrman•8mo ago
Here are some videos trying to show what I mean. I overlaid the two images and crossfaded between them. Aside from some minor distortion, I don't see any of the major differences normally found between stereo pairs.

https://imgur.com/a/RMy3QA3

saddat•8mo ago
Create two pictures from it and use https://huggingface.co/spaces/cavargas10/TRELLIS-Multiple3D
kazinator•8mo ago
For casual viewing with the unaided eye, you want to present stereograms in cross-your-eyes order, not stare-into-distance order.

Most people are not able to cause their eyes to diverge, so the scale of images in a stare-into-distance stereogram is limited by the interocular distance.

In cross-eye configuration, larger images can be used.

(Of course, the use of magnification in stereoscopes relieves the issue, as well as making it easier for the eyes to focus, since the magnified virtual images appear farther away. Viewing stare-into-distance stereograms requires the eyes to believe they are looking far away due to the parallel gaze, while simultaneously focusing near on the images; magnification brings the images farther out.)

LorenDB•8mo ago
I personally find the cross-eyed type to be nearly impossible, while the parallel type is pretty easy for me. So I think it really depends on the person. Additionally, most stereograms I've seen (e.g., in coffee-table books) have been parallel type.
kazinator•8mo ago
The parallel types are also very easy for me, but they are always small.

If the spacing between them is wider than my interocular distance, I find them impossible to converge.

I made stereograms in the past and wanted to see larger images with the naked eye, so I had no choice but to swap the images and cross my eyes.

6yyyyyy•8mo ago
I flipped them all, enjoy:

https://imgur.com/a/OOiQ5AK

(FYI: -vf stereo3d=in=sbsl:out=sbsr in ffmpeg.)
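For a single side-by-side image, roughly the same one-liner should work (a sketch; pair.jpg is a hypothetical parallel-order input):

    # swap a parallel (left-eye-left) pair into cross-eyed order
    ffmpeg -i pair.jpg -vf stereo3d=in=sbsl:out=sbsr crossed.jpg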

entropicdrifter•8mo ago
Woo! The true solution!
pimlottc•8mo ago
You can flip images horizontally via CSS:

    img {
      transform: scaleX(-1);
    } 
Here's a JavaScript bookmarklet that will do this for all images on the page:

javascript:(()%3D%3E%7B%5B...document.querySelectorAll(%22img%22)%5D.forEach((e%3D%3E%7Be.style.transform%3D%22scaleX(-1)%22%7D))%3B%7D)()%3B
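URL-decoded, for readability, that is:

    javascript:(()=>{[...document.querySelectorAll("img")].forEach((e=>{e.style.transform="scaleX(-1)"}));})();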

kazinator•8mo ago
That is very clever and useful, thank you.

But it doesn't achieve the effect we are after here.

When we reflect the stereogram left to right, the orientation of the parallax recorded in the images also flips, so the net effect is zero: if the original stereo pair is a stare-into-the-distance stereogram, the reflected stereogram is too.

pimlottc•8mo ago
Ah, good point. I wonder if it's possible to achieve the left/right swap in CSS? Alas, I am not a CSS guru.
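One approach that should work is a small canvas script rather than CSS; a rough sketch (assumes each stereogram is a single, already-loaded, same-origin side-by-side <img>):

    // Swap the left and right halves of every side-by-side image,
    // converting parallel order to cross-eyed order (and vice versa).
    // Canvas taints on cross-origin images, so same-origin is assumed.
    for (const img of document.querySelectorAll("img")) {
      const half = img.naturalWidth / 2;
      const c = document.createElement("canvas");
      c.width = img.naturalWidth;
      c.height = img.naturalHeight;
      const ctx = c.getContext("2d");
      // draw the right half into the left slot...
      ctx.drawImage(img, half, 0, half, c.height, 0, 0, half, c.height);
      // ...and the left half into the right slot
      ctx.drawImage(img, 0, 0, half, c.height, half, 0, half, c.height);
      img.src = c.toDataURL();
    }

Unlike a mirror flip, this swaps which eye sees which image, which is exactly the reordering the cross-eyed method needs.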
ge96•8mo ago
For an example that works, see this squirrel (sorry, Reddit link):

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

Crazy, but I feel sick now, ha. I had a VR headset before and I'd get super sick trying to play FO4; VRChat wasn't bad.

bredren•8mo ago
Would be cool to get these converted into spatial photos for Vision Pro.
mdswanson•8mo ago
Not too many steps away from this: https://blog.mikeswanson.com/spatial/