
Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•38s ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
1•downboots•45s ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
1•whack•1m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•1m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•2m ago•0 comments

The AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
1•geox•4m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•5m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
1•jerpint•5m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•7m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
1•breadwithjam•10m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•10m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•12m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•13m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•13m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•13m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•14m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•15m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•16m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•17m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•21m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•21m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•23m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•23m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•27m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•30m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•31m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•31m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•32m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•33m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•35m ago•0 comments

Marble by World Labs: Multimodal world model to create and edit 3D worlds

http://marble.worldlabs.ai/
48•dmarcos•2mo ago

Comments

the_real_cher•2mo ago
That's totally insane and amazing.
ganelonhb•2mo ago
wow, it’s slop!
thetoon•2mo ago
Not to belittle this or anything (it does look good and shows promise), but it feels like they somehow generate several consistent (but discrete) views of a given world, then feed all that to the good old pose estimation + gaussian splatting workflow. Whenever you leave the generated area (which isn't exactly huge on the few I tested), you get tell-tale signs of GS.
xg15•2mo ago
Yeah, if the entire point is that you can move around inside those worlds, I'd have expected a bit more "walkability" - maybe a few different viewpoints that each have their own Gaussian splatting? Right now, it dissolves pretty quickly once you change the location.
embedding-shape•2mo ago
Yeah, it's more of a 3D drawing of a frame that you can navigate inside, rather than a world that happens to fit whatever image you use as input and still makes sense as a standalone world when you walk around. For being a "world" model, it doesn't seem to grasp physical space very well.

The interior scenes look and walk great, but any scenes with or in exteriors seem kind of bad.

kkukshtel•2mo ago
This was my take as well — this is just pose estimation from generated stereo panoramic images.
pedalpete•2mo ago
It's amazing to see how this space is developing. About 7 years ago I was building "spatial media" with https://ayvri.com

Nobody believed us when we said AI would create 3D virtual worlds that were indistinguishable from the real thing, and we'd be able to transport people to different places.

I particularly like the artistic effect of the drawing that brings the person into this world. Like a point-cloud that then gets "filled in".

I have little doubt this was a design decision and I think it is very well executed.

jaccola•2mo ago
Even more amazing to me is that the tech to create these really existed 7 years ago (would have been slower to train but most methods don't need the latest GPUs). This means there are no doubt more improvements just waiting to be discovered!
pedalpete•2mo ago
The tech was not available, but it was the direction we were heading.

Digital Twins were a thing, and we had developed a high-resolution 3d world, outside of cities.

At the time, we thought that NERFs were going to allow us to increase resolution and fill in the gaps of what we didn't know about the world. Then Gaussian Splats came in and just took over.

There are definitely still improvements and new techniques to come.

However, people occasionally still reach out to me to ask how to build a replica of Ayvri, and I tell them you wouldn't build it today like we did back then.

Today, you wouldn't go through the process of setting up tile servers; I think you can get current AI to build a scene frame by frame and transition between frames, rather than tile by tile.

But others in the gaming world may have different opinions as to where the industry is heading.

alyxya•2mo ago
Something about the camera perspective creates a skew that makes things feel artificial to me. It's a minor thing that bothers me, but I'd like the geometry to feel more like what I normally see. Video generation models tend to feel more natural in perspective.
proof_by_vibes•2mo ago
Are there any experts that could help me bootstrap myself on the current literature on "world models?"
jaccola•2mo ago
In this current generation, "world models" is basically a marketing term. You can research gaussian splatting, novel view synthesis, neural radiance fields (nerf), etc... I find Mr Nerf is good to follow: https://x.com/janusch_patas

There is another thing called world models that involves predicting the state of something after some action. But this is a very very limited area of research. My understanding of this is that there just isn't much data of action->reaction.

Same issue with gaussian splatting/NeRF really: very little data (relative to text/images/videos) of text -> 3D splats. I'd guess what World Labs is doing is text -> image -> splats, but of course that's just speculation.
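As a concrete (and purely speculative) illustration of the pipeline being guessed at above, here is a minimal Python sketch: take several generated views of one scene, recover camera poses with COLMAP, then hand the posed images to a Gaussian-splat trainer. The directory layout and the splat_train.py command are assumptions for illustration, not anything World Labs has described.

# Speculative sketch: generated views -> COLMAP poses -> Gaussian-splat training.
import subprocess
from pathlib import Path

scene = Path("scene")                 # assumes scene/images/ already holds the generated views
images = scene / "images"
db = scene / "database.db"
sparse = scene / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

# 1. Structure-from-motion: detect features, match them, and solve for camera poses.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db), "--image_path", str(images)], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(db)], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(db), "--image_path", str(images),
                "--output_path", str(sparse)], check=True)

# 2. Fit Gaussian splats to the posed images. "splat_train.py" is a stand-in
#    for whichever splatting trainer you actually use.
subprocess.run(["python", "splat_train.py",
                "--source_path", str(scene),
                "--model_path", str(scene / "splats")], check=True)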

cl42•2mo ago
> There is another thing called world models that involves predicting the state of something after some action. But this is a very very limited area of research. My understanding of this is that there just isn't much data of action->reaction.

Folks interested in this can look up Yann LeCun's work on world models and JEPA, which his team at Meta created. This lecture is a nice summary of his thinking on this space and also why he isn't a fan of autoregressive LLMs: https://www.youtube.com/watch?v=yUmDRxV0krg
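To make the action->reaction framing concrete, here is a toy PyTorch sketch of an action-conditioned latent dynamics model: encode the current observation, then predict the latent of the next observation from that latent plus the action. All names and sizes are made up, and this is not JEPA itself, just the minimal shape of the idea.

# Toy world model: predict the next latent state from (current state, action).
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim=64, act_dim=4, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU(),
                                     nn.Linear(latent_dim, latent_dim))
        # Dynamics head: current latent + action -> predicted next latent.
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, latent_dim), nn.ReLU(),
                                      nn.Linear(latent_dim, latent_dim))

    def forward(self, obs, action):
        z = self.encoder(obs)
        return self.dynamics(torch.cat([z, action], dim=-1))

model = TinyWorldModel()
obs, action, next_obs = torch.randn(8, 64), torch.randn(8, 4), torch.randn(8, 64)
# Train by matching the predicted next latent to the encoding of the observed next state.
loss = nn.functional.mse_loss(model(obs, action), model.encoder(next_obs))
loss.backward()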

MarsIronPI•2mo ago
I'm looking forward to the future of games and movies if these world models keep improving. Imagine if anyone with an interesting idea could sketch it, plug it into a world model and share the result with everyone. It'd open up a huge amount of possibilities.

Not to mention being able to explore worlds from already existing works. Care to go for a ride on a broomstick? How about simply walking into Mordor? It's exciting.

ogogmad•2mo ago
Slightly off-topic: I've just watched this takedown of an AI-generated chart-topping song: https://www.youtube.com/watch?v=rGremoYVMPc&lc=UgxfDvqX1G6kp...

OK, so I've talked about this phenomenon with ChatGPT, and I think that the issue here is that to a lot of people, a song needs to be more than just a "song". There's some sort of requirement for it to be the un-faked result of having certain experiences. It has to relate to something happening in reality, and to be derived from it, and cannot exist in a vacuum separated from the rest of reality. Otherwise to them, the music isn't "real".

embedding-shape•2mo ago
Endless drone ambient music disagrees with you that there is any sort of "requirement of certain experiences". Some of it is basically someone hitting play on a modular synth patch and letting it play until it sounds done, and (some) people are still fine with listening to it.
lvl155•2mo ago
It would be nice to have these world models integrated with Blender.
lightcrafter•2mo ago
I wrote a tutorial on how to load and scale these Gaussian splats in Blender: https://docs.lightcraft.pro/tutorials/blender-workflows/gaus...

There is a better Blender Gaussian splat add-on now: https://superhivemarket.com/products/splatforge
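For the scaling step specifically, a minimal bpy sketch looks like the following; it assumes the add-on has already imported the splats as an object (the name "SplatScene" and the 0.1 factor are made up for illustration).

# Rescale an already-imported Gaussian-splat object in Blender.
import bpy

obj = bpy.data.objects["SplatScene"]   # object created by the splat import add-on (assumed name)
obj.scale = (0.1, 0.1, 0.1)            # shrink the splats to fit the rest of the set

# Apply the scale so exports and modifiers see unit scale.
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)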

ChrisArchitect•2mo ago
Blog post: https://www.worldlabs.ai/blog/marble-world-model (https://news.ycombinator.com/item?id=45907541)
lightcrafter•2mo ago
The thing to understand is how well Marble's Gaussian splat models integrate into virtual production for visual storytelling. Then it's filmmaking!

We worked with filmmaker Joshua Kerr on this project that took him back to his 8-year-old zombie epic roots: https://www.worldlabs.ai/case-studies/lightcraft

It's hard to appreciate how magical this is until you get beat up by traditional photogrammetry-based set acquisition or by trying to build a set from scratch in 3D.

In a couple of years you are going to see high school drama groups everywhere staging film productions with this tech.