frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
180•ColinWright•1h ago•164 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
22•valyala•2h ago•7 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
124•AlexeyBrin•7h ago•24 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
17•valyala•2h ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
65•vinhnx•5h ago•9 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
155•alephnerd•2h ago•105 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
833•klaussilveira•22h ago•250 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
119•1vuio0pswjnm7•8h ago•148 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1060•xnx•1d ago•612 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
79•onurkanbkrc•7h ago•5 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
4•gnufx•56m ago•1 comment

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
487•theblazehen•3d ago•177 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
212•jesperordrup•12h ago•72 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
567•nar001•6h ago•259 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
226•alainrk•6h ago•354 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
40•rbanffy•4d ago•7 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
9•momciloo•2h ago•0 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
29•marklit•5d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•33 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
77•speckx•4d ago•82 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
274•isitcontent•22h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
287•dmpetrov•22h ago•155 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
557•todsacerdoti•1d ago•269 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
427•ostacke•1d ago•111 comments

Civil War in 3D: Stereographs from the New-York Historical Society (2015)

https://www.nyhistory.org/blogs/civil-war-in-3d-stereographs-from-the-new-york-historical-society
58•LorenDB•8mo ago

Comments

ramesh31•8mo ago
You can see the effect in these images directly without a device, by simply crossing your eyes and focusing on the third central image that appears, similar to those 3D optical illusion books: https://youtu.be/zBa-bCxsZDk
JKCalhoun•8mo ago
The cross-eyed method requires the images be swapped left-for-right.
kazinator•8mo ago
Not sure why you are downvoted; that is correct.
kazinator•8mo ago
This gallery presents the original stereograms in their stare-into-distance configuration (left image goes with left eye, right with right), not cross-eyes configuration (left image goes with right eye and vice versa).
JeremyHerrman•8mo ago
Is it just me or are some of these examples not actually stereo image pairs?

I'm just crossing my eyes to see the "negative" depth image but some like "McLean’s House" and "Lincoln visits General McClellan at Antietam" don't appear to have any depth changes between them.

JKCalhoun•8mo ago
You need to swap the left and right images to use the cross-eyed method on these. You can try downloading an image and using an app like Preview to Flip Horizontal (that will work).

Otherwise you're seeing a kind of inverse stereo image.

(EDIT: Having said that, I tried a few of the images and the stereo effect is subtle. The soldier on the horse — I was not even able to get that to "snap" for me. I am not great with cross-eyed stereo, though.)

JeremyHerrman•8mo ago
Yes, understood that the cross-eyed method inverts the depth. My point was that some of the image pairs are from the exact same perspective, so there is no stereo depth whether you view them cow-eyed or cross-eyed.
JKCalhoun•8mo ago
Yeah, if there is depth, it was pretty subtle on the few I got to work.
kazinator•8mo ago
These images were prepared for insertion into a stereoscope, in which the left eye looks at the left image and the right eye looks at the right image through a magnifying lens. When viewing with the naked eye, you must stare past the images into the distance to get them to converge that way.
JeremyHerrman•8mo ago
Thanks, I understand how stereograms work and have quite a few of these IRL. I use the cross-eyed method to quickly view them (albeit with inverted depth) when they're shown on screen.

I've tried to show my point in these videos, which overlap the two images and crossfade between them; there is basically no difference. https://imgur.com/a/RMy3QA3

kazinator•8mo ago
I agree that particular image is a dud; I was not able to perceive any depth.

The creator mistakenly used the same image twice.

The two-men-in-a-tent image is likewise a dud: if we look at the pole at the tent entrance, there is no difference in parallax between it and the objects at the back wall.

The Abe Lincoln one doesn't pop out much for me.

The dead soldiers in the field also seems to be two identical images.

The clearly genuine ones are the horse-drawn carriage in the forest and the horseman in front of the cannon.

JeremyHerrman•8mo ago
Here are some videos trying to show what I mean: I overlaid the two images and crossfaded between them. Aside from some minor distortion, I don't see any of the major differences normally found between stereo pairs.

https://imgur.com/a/RMy3QA3

saddat•8mo ago
Create two pictures from it and use https://huggingface.co/spaces/cavargas10/TRELLIS-Multiple3D
kazinator•8mo ago
For casual viewing with the unaided eye, you want to present stereograms in cross-your-eyes order, not stare-into-distance order.

Most people are not able to cause their eyes to diverge, so the scale of images in a stare-into-distance stereogram is limited by the interocular distance.

In cross-eye configuration, larger images can be used.

(Of course, the use of magnification in stereoscopes relieves the issue, as well as making it easier for the eyes to focus, since the magnified virtual images appear farther away. Viewing stare-into-distance stereograms with the naked eye requires the eyes to converge as if looking far away while simultaneously focusing near, on the images themselves; magnification pushes the virtual images farther out and eases that conflict.)

LorenDB•8mo ago
I personally find the cross-eyed type nearly impossible, while the parallel type is pretty easy for me, so I think it really depends on the person. Additionally, most stereograms I've seen (e.g. in coffee-table books) have been the parallel type.
kazinator•8mo ago
The parallel types are also very easy for me, but they are always small.

If the spacing between them is wider than my inter-ocular distance, I find them impossible to converge.

I made stereograms in the past and wanted to see larger images with the naked eye, so I had no choice but to swap the images and cross my eyes.

6yyyyyy•8mo ago
I flipped them all, enjoy:

https://imgur.com/a/OOiQ5AK

(FYI: -vf stereo3d=in=sbsl:out=sbsr in ffmpeg.)
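
That ffmpeg filter converts side-by-side-left-first to side-by-side-right-first, i.e. it swaps the two halves of each frame. As a minimal illustrative sketch (not how ffmpeg implements it), the same operation on an image represented as rows of pixels:

```javascript
// Swap the left and right halves of a side-by-side stereo pair.
// The "image" here is just an array of rows, each row an array of
// pixel values; a real image would come from a decoder or canvas.
// Assumes an even row width.
function swapHalves(rows) {
  return rows.map((row) => {
    const mid = row.length / 2;
    // Right half first, then left half.
    return row.slice(mid).concat(row.slice(0, mid));
  });
}

// Example: swapHalves([["L1", "L2", "R1", "R2"]])
// returns [["R1", "R2", "L1", "L2"]].
```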

entropicdrifter•8mo ago
Woo! The true solution!
pimlottc•8mo ago
You can flip images horizontally via CSS:

    img {
      transform: scaleX(-1);
    } 
Here's a JavaScript bookmarklet that will do this for all images on the page:

javascript:(()%3D%3E%7B%5B...document.querySelectorAll(%22img%22)%5D.forEach((e%3D%3E%7Be.style.transform%3D%22scaleX(-1)%22%7D))%3B%7D)()%3B

Decoded, that is:

    javascript:(()=>{[...document.querySelectorAll("img")].forEach((e=>{e.style.transform="scaleX(-1)"}));})();

kazinator•8mo ago
That is very clever, and useful, thank you.

But it doesn't achieve the effect we are after here.

When we reflect the stereogram left to right, the orientation of the parallax recorded in the images also flips, so the net effect is zero: if the original stereo pair is a stare-into-the-distance stereogram, so is the reflected one.

pimlottc•8mo ago
Ah, good point. I wonder if it's possible to achieve the left/right swap in CSS? Alas I am not a CSS guru.
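
One possibility, sketched below: CSS alone can't re-encode pixels, but if the page rendered the same side-by-side image twice, each copy could be clipped to one half and translated by half its width so the halves display in swapped order. The markup and class names here (.pair, .left, .right) are hypothetical, not anything the gallery actually uses:

```css
/* Hypothetical markup: a .pair container holding two copies of the
   same side-by-side <img>. Each copy is clipped to one half and
   shifted by half its own width, so the halves render swapped. */
.pair {
  position: relative;
  overflow: hidden;
}
.pair img {
  position: absolute;
  inset: 0;
}
.pair img.left {
  clip-path: inset(0 50% 0 0);  /* keep the left half ...        */
  transform: translateX(50%);   /* ... and draw it on the right  */
}
.pair img.right {
  clip-path: inset(0 0 0 50%);  /* keep the right half ...       */
  transform: translateX(-50%);  /* ... and draw it on the left   */
}
```

The percentage in translateX is relative to the image's own width, and the transform moves the already-clipped rendering, so each half lands on the opposite side.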
ge96•8mo ago
For an example that works, see this squirrel (sorry, Reddit link):

https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....

Crazy, but I feel sick now, ha. I had a VR headset before and I'd get super sick trying to play FO4; VRChat wasn't bad.

bredren•8mo ago
Would be cool to get these converted into spatial photos for Vision Pro.
mdswanson•8mo ago
Not too many steps away from this: https://blog.mikeswanson.com/spatial/