frontpage.

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
141•theblazehen•2d ago•41 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•32 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
222•dmpetrov•14h ago•117 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
26•jesperordrup•4h ago•16 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
43•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
287•surprisetalk•3d ago•43 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
182•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

World Emulation via Neural Network

https://madebyoll.in/posts/world_emulation_via_dnn/
250•treesciencebot•9mo ago

Comments

quantumHazer•9mo ago
Is this a solo/personal project? If it is, it's indeed very cool.

Is OP the blog's author? In the post, the author said that the purpose of the project is to show why NNs are truly special, and I wanted a more articulate view of why they think that. Good work anyway!

treesciencebot•9mo ago
author is: https://x.com/madebyollin
ollin•9mo ago
Yes! This was a solo project done in my free time :) to learn about WMs and get more practice training GANs.

The special aspect of NNs (in the context of simulating worlds) is that NNs can mimic entire worlds from videos alone, without access to the source code (in the case of pokemon) or even without the source code having existed (as is the case for the real-world forest trail mimicked in this post). They mimic the entire interactive behavior of the world, not just the geometry (note e.g. the not-programmed-in autoexposure that appears when you look at the sky).

Although the neural world in the post is a toy project, and quite far from generating photorealistic frames with "trees that bend in the wind, lilypads that bob in the rain, birds that sing to each other", I think getting better results is mostly a matter of scale. See e.g. the GAIA-2 results (https://wayve.ai/wp-content/uploads/2025/03/generalisation_0..., https://wayve.ai/wp-content/uploads/2025/03/unsafe_ego_01_le...) for an example of what WMs can do without the realtime-rendering-in-a-browser constraints :)

janalsncm•9mo ago
You mentioned it took 100 gpu hours, what gpu did you train on?
ollin•9mo ago
Mostly 1xA10 (though I switched to 1xGH200 briefly at the end, lambda has a sale going). The network used in the post is very tiny, but I had to train a really long time w/ large batch to get somewhat-stable results.
attilakun•9mo ago
Amazing project. This has the same feel as Karpathy’s classic “The Unreasonable Effectiveness of Recurrent Neural Networks” blog post. I think in 10 years’ time we will look back and say “wow, this is how it started.”
alain94040•9mo ago
Appreciate this article showing some failures on the way to a great result. Too often, people only show the polished end result: look, I trained this AI and it produces these great results. The world dissolving was very interesting to see, even if I'm not sure I understand how it got fixed.
ollin•9mo ago
Thanks! My favorite failure mode (not mentioned in the post - I think it was during the first round of upgrades?) was a "dry" form of soupification where the texture detail didn't fully disappear https://imgur.com/c7gVRG0
puchatek•9mo ago
This is great but I think I'll stick to mushrooms.
bongodongobob•9mo ago
Yeah, the similarities to psychedelics with some of this stuff is remarkable.
ilaksh•9mo ago
It makes me think that maybe our visual perception is similar to what this program is doing in some ways.

I wonder if there are any computer vision projects that take a similar world emulation approach?

Imagine you collected the depth data also.

voidspark•9mo ago
Yes, the model is a U-Net, which is a type of Convolutional Neural Network (CNN), which is inspired by the structure of the visual cortex.

https://en.wikipedia.org/wiki/Convolutional_neural_network#H...
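For readers who don't want to chase the link: the distinctive part of a U-Net is the down/up path plus skip connections. A toy single-level forward pass, with numpy stand-ins (average pooling and a tanh) where the learned convolutions would go — an illustrative sketch, not the post's actual architecture:

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: halves spatial resolution (the "down" path)
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def upsample2(x):
    # nearest-neighbor upsampling: doubles spatial resolution (the "up" path)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet(x):
    """One down/up level of a U-Net: the skip connection re-injects the
    full-resolution detail that pooling discarded."""
    skip = x                      # full-resolution features, saved for later
    low = avg_pool2(x)            # coarse, low-resolution features
    low = np.tanh(low)            # stand-in for the learned conv stack
    up = upsample2(low)           # back to full resolution
    return np.stack([skip, up])   # "concatenate" skip + upsampled features

frame = np.random.rand(64, 64)
out = toy_unet(frame)
print(out.shape)  # (2, 64, 64)
```

A real U-Net repeats this pattern at several resolutions with learned convolutions at each level, but the skip-plus-upsample structure is the same.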

LoganDark•9mo ago
For some reason, psilocybin causes me to randomly just lose consciousness, and LSD doesn't. Weird stuff.
ulrikrasmussen•9mo ago
I also thought those wooden guard rails looked pretty spot on how they would look on 2C-B. The only thing that's missing is the overlay of geometric patterns on even surfaces.
throwaway314155•9mo ago
Really cool. How much compute did you require to successfully train these models? Is it in the ballpark of something you could do with a single gaming GPU? Or did you spin up something fancier?

edit: I see now that you mention a price point of 100 GPU-hours / roughly $100. My mistake.

bitwize•9mo ago
I want to see a spiritual successor to LSD: Dream Emulator based on this.

https://en.m.wikipedia.org/wiki/LSD:_Dream_Emulator

udia•9mo ago
Very nice work. Seems very similar to the Oasis Minecraft simulator.

https://oasis.decart.ai/

ollin•9mo ago
Yup, definitely similar! There are a lot of video-game-emulation World Models floating around now, https://worldarcade.gg had a list. In the self-driving & robotics literature there have also been many WMs created for policy training and evaluation. I don't remember a prior WM built on first-person cell-phone video, but it's a simple enough concept that someone has probably done it for a student project or something :)
AndrewKemendo•9mo ago
I think this is very interesting because you seem to have reinvented NeRF, if I’m understanding it correctly. I only did one pass through but it looks at first glance like a different approach entirely.

More interesting is that you made an easy to use environment authoring tool that (I haven’t tried it yet) seems really slick.

Both of those are impressive alone but together that’s very exciting.

bjornsing•9mo ago
NeRF is a more complex and constrained approach, based on a kind of ray tracing. But results are obviously similar.
AndrewKemendo•9mo ago
Right, which is why I said it's an entirely different approach but results in almost the same kind of output.
tehsauce•9mo ago
I love this! Your results seem comparable to the Counter-Strike or Minecraft models from a bit ago, with massively less compute and data. It's particularly cool that it uses real-world data. I've been wanting to do something like this for a while, like capturing a large dataset while backpacking in the Cascades :)

I didn't see it in an obvious place on your github, do you have any plans to open source the training code?

ilaksh•9mo ago
This seems incredibly powerful.

Imagine a similar technique but with productivity software.

And a pre-trained network that adapts quickly.

gitroom•9mo ago
Gotta say, I've always wanted to try building something like this myself. That kind of grind pays off way more than shiny announcements imo.
bjornsing•9mo ago
What used to be cutting edge research not so long ago is now a fun hobby project. I love it.
Valk3_•9mo ago
This might be a vague question, but what kind of intuition or knowledge do you need to work with these kinds of things, say, if you want to make your own model? Is it just having experience with image generation and trying to incorporate relevant inputs that you would expect in a 3D world, like the control information you added, for instance?
ollin•9mo ago
I think https://diamond-wm.github.io is a reasonable place to start (they have public world-model training code, and people have successfully adapted their codebase to other games e.g. https://derewah.dev/projects/ai-mariokart). Most modern world models are essentially image generators with additional inputs (past-frames + controls) added on, so understanding how Diffusion/IADB/Flow Matching work would definitely help.
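That "image generator with additional inputs" framing can be made concrete. A hypothetical sketch (the function name and shapes are illustrative, not taken from the post or any of the linked codebases) of how past frames and a control vector might be packed into one conditioning tensor for such a generator:

```python
import numpy as np

def build_conditioning(past_frames, controls):
    """Stack N past frames (each H x W x C) together with the control
    vector, broadcast spatially, into one conditioning tensor -- the
    extra inputs that turn an image generator into a world model."""
    h, w = past_frames[0].shape[:2]
    frame_stack = np.concatenate(past_frames, axis=-1)              # (H, W, N*C)
    ctrl_planes = np.broadcast_to(controls, (h, w, len(controls)))  # (H, W, K)
    return np.concatenate([frame_stack, ctrl_planes], axis=-1)

past = [np.random.rand(64, 64, 3) for _ in range(4)]  # 4 past RGB frames
ctrl = np.array([0.0, 1.0, 0.5])                      # e.g. yaw, pitch, forward
cond = build_conditioning(past, ctrl)
print(cond.shape)  # (64, 64, 15)
```

The generator then predicts the next frame from this tensor instead of from noise alone, which is why experience with plain image generation transfers so directly.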
Valk3_•9mo ago
Thanks!
nopakos•9mo ago
Next we should try "Excel emulation via Neural Network". We get rid of a lot of intermediate steps, calculations, user interface etc!

What could go wrong?

Jokes aside, this is insanely cool!

downboots•9mo ago
Or train on a large dataset of math identities and have the user draw one side.
titouanch•9mo ago
This is very impressive for a hobby project. I was wondering if you were planning to release the source code. Being able to create client-hosted, low-requirement neural networks for world generation could be really useful for game dev or artistic projects.
thenthenthen•9mo ago
Yes please! I would love to try and use this on disappearing neighbourhoods, the results are so dreamlike, or like memories!
das_keyboard•9mo ago
> So, if traditional game worlds are paintings, neural worlds are photographs. Information flows from sensor to screen without passing through human hands.

I don't get this analogy at all. Instead of a human, information flows through a neural network, which alters the information.

> Every lifelike detail in the final world is only there because my phone recorded it.

I might be wrong here but I don't think this is true. It might also be there because the network inferred that it is there based on previous data.

Imo this just takes the human out of an artistic process (creating video game worlds), and I'm not sure that's worth achieving.

ajb•9mo ago
> I don't get this analogy at all. Instead of a human, information flows through a neural network, which alters the information.

These days most photos are also stored using lossy compression which alters the information.

You can think of this as a form of highly lossy compression of an image of this forest in time and space.

Most lossy compression is 'subtractive' in that detail is subtracted from the image in order to compress it, so the kind of alterations are limited. However there have been previous non-subtractive forms of compression (eg, fractal compression) that have been criticised on the basis of making up details, which is certainly something that a neural network will do. However if the network is only trained on this forest data, rather than being also trained on other data and then fine tuned, then in some sense it does only represent this forest rather than giving an 'informed impression' like a human artist would.
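The "subtractive" point can be illustrated with a toy example (the numbers are made up): coarse quantization throws away low-order detail, and nothing downstream can recover it, whereas a generative decoder would invent a plausible replacement instead.

```python
import numpy as np

# Quantize a signal to 4 levels (2 bits): values that differ only below
# the quantization step collapse to the same code, irreversibly.
signal = np.array([0.10, 0.50, 0.52, 0.90])
levels = 4
quantized = np.round(signal * (levels - 1)) / (levels - 1)
print(quantized)  # 0.50 and 0.52 collapse to the same value
```

That irreversibility is what makes subtractive schemes predictable; a neural codec trades it for alterations that look like detail but may never have been in the scene.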

andai•9mo ago
>These days most photos are also stored using lossy compression which alters the information.

I noticed this in some photos I see online starting maybe 5-10 years ago.

I'd click through to a high res version of the photo, and instead of sensor noise or jpeg artefacts, I'd see these bizarre snakelike formations, as though the thing had been put through style transfer.

Legend2440•9mo ago
>It might also be there because the network inferred that it is there based on previous data.

There is no previous data. This network is exclusively trained on the data he collected from the scene.

Imanari•9mo ago
Amazing work. Could you elaborate on the model architecture and the process that led you to using this architecture?
Macuyiko•9mo ago
The model seems to be viewable here:

https://netron.app/?url=https://madebyoll.in/posts/world_emu...

montebicyclelo•9mo ago
Awesome work / demo / blog

Link to the demo in case people miss it [1]

> using a customized camera app which also recorded my phone’s motion

Using phone's gyro as a proxy for "controls" is very clever

[1] https://madebyoll.in/posts/world_emulation_via_dnn/demo/

stormfather•9mo ago
It's a time capsule, among other things. I want to take many, many videos of my grandpa's farm, and be able to walk around in it in VR using something like this in the future.
foxglacier•9mo ago
You can do it using the more classic technique of photogrammetry. There are commercial products used by real estate salesmen to produce high quality "games" where you walk around inside a house, but they're more like Google Streetview where you swoosh between points where a 360 degree photo was taken. All those things will be more faithful than neurally generating next frames based on previous frames and control input.
alekseiprokopev•9mo ago
It would be quite interesting to try to mess with the neural representations to add or remove images of some objects there. I'm also curious if the topology of the actual place is similar to the topology of the embedding space.
Jotalea•9mo ago
It's a really interesting project, reminds me of the 360° videos I used to watch on my phone, back in 2015.

But there's one thing that I'm a little bit worried about: I was getting like 8 stable FPS on my 3-year-old flagship phone. My concern is that these models are not optimized to run on this type of hardware, which may or may not lead to hardware obsolescence quicker than planned. And it's not like these devices aren't powerful; they really are.

ollin•9mo ago
Curious, which device/OS/browser? I did all my testing on 4-year-old hardware (iPhone 13 Pro, M1 Pro MBP), and the model itself is extremely tiny (~1 GFLOP), so I'm optimistic that performance issues would be solvable with a better software stack (e.g. a native app).
Jotalea•9mo ago
I was on my Samsung Galaxy S21FE (Snapdragon 888), on the latest version of the Firefox browser for Android (138.0), on One UI 6.1 (Android 14). It is possibly the most powerful device I own, that's why I was concerned.
ollin•9mo ago
Got it, that makes sense! In terms of raw compute capability, a Snapdragon 888's GPU should have more than enough power to run this demo smoothly. I think I just need to optimize the inference setup better (maybe switch to WebGPU if the platform supports it?) and do targeted testing on Firefox/Android.
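The compute claim in this exchange is easy to sanity-check with back-of-envelope arithmetic. The ~1 GFLOP-per-frame figure is from the comments above; the mobile-GPU throughput number is a rough assumption, not a measured spec:

```python
# A ~1 GFLOP-per-frame model at 30 fps needs ~30 GFLOP/s. A modern
# flagship mobile GPU advertises on the order of 1 TFLOP/s (rough
# assumed figure), so low frame rates point at the software stack
# (browser inference overhead), not raw compute.
model_gflop_per_frame = 1.0   # from the comment above
target_fps = 30
required = model_gflop_per_frame * target_fps  # GFLOP/s needed
gpu_budget = 1000.0           # ~1 TFLOP/s, rough mobile-GPU assumption
utilization_needed = required / gpu_budget
print(f"{required:.0f} GFLOP/s needed = {utilization_needed:.0%} of GPU budget")
```

Even with the throughput assumption off by several times, the model would still need only a few percent of the GPU, which supports the "optimize the inference setup" conclusion.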