frontpage.

Made with ♥ by @iamnishanth

Open Source @Github

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
258•theblazehen•2d ago•86 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
27•AlexeyBrin•1h ago•3 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
707•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
969•xnx•21h ago•558 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
70•jesperordrup•6h ago•31 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
7•onurkanbkrc•49m ago•0 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
135•matheusalmeida•2d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
45•speckx•4d ago•36 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
68•videotopia•4d ago•7 comments

Welcome to the Room – A lesson in leadership by Satya Nadella

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
39•kaonwarb•3d ago•30 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
13•matt_d•3d ago•2 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
45•helloplanets•4d ago•46 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
240•isitcontent•16h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
238•dmpetrov•16h ago•127 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
340•vecti•18h ago•150 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
506•todsacerdoti•23h ago•248 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
390•ostacke•22h ago•98 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
304•eljojo•18h ago•188 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•186 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
428•lstoll•22h ago•284 comments

Cross-Region MSK Replication: K2K vs. MirrorMaker2

https://medium.com/lensesio/cross-region-msk-replication-a-comprehensive-performance-comparison-o...
3•andmarios•4d ago•1 comment

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
71•kmm•5d ago•10 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
24•bikenaga•3d ago•11 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
26•1vuio0pswjnm7•2h ago•16 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
271•i5heu•18h ago•219 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
34•romes•4d ago•3 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1079•cdrnsf•1d ago•462 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•30 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
306•surprisetalk•3d ago•44 comments

Packing Input Frame Context in Next-Frame Prediction Models for Video Generation

https://lllyasviel.github.io/frame_pack_gitpage/
270•GaggiX•9mo ago

Comments

ZeroCool2u•9mo ago
Wow, the examples are fairly impressive and the resources used to create them are practically trivial. Seems like inference can be run on previous generation consumer hardware. I'd like to see throughput stats for inference on a 5090 too at some point.
Jaxkr•9mo ago
This guy is a genius; for those who don’t know he also brought us ControlNet.

This is the first decent video generation model that runs on consumer hardware. Big deal and I expect ControlNet pose support soon too.

msp26•9mo ago
I haven't bothered with video gen because I'm too impatient but isn't Wan pretty good too on regular hardware?
dewarrn1•9mo ago
LTX-Video isn't quite the same quality as Wan, but the new distilled 0.9.6 version is pretty good and screamingly fast.

https://github.com/Lightricks/LTX-Video

vunderba•9mo ago
Wan 2.1 is solid but you start to get pretty bad continuity / drift issues when genning more than 81 frames (approx 5 seconds of video) whereas FramePack lets you generate 1+ minute.
dragonwriter•9mo ago
Wan 2.1 (and Hunyuan and LTXV, in descending ordee of overall video quality but each has unique strengths) work well—but slow, except LTXV—for short (single digit seconds at their usual frame rates — 16 for WAN, 24 for LXTV, I forget for Hunyuan) videos on consumer hardware. But this blows them entirely out of the water on the length it can handle, so if it does so with coherence and quality across general prompts (especially if it is competitive with WAN and Hunyuan on trainability for concepts it may not handle normally) it is potentially a radical game changer.
dragonwriter•9mo ago
For completeness, I should note I'm talking about the 14B i2v and t2v WAN 2.1 models; there are others in the family, notably a set of 1.3B models that are presumably much faster, but I haven't worked with them as much.
artninja1988•9mo ago
He also brought us IC-Light! I wonder why he's still contributing to open source... Surely all the big companies have made him huge offers. He's so talented
dragonwriter•9mo ago
I think he is working on his Ph.D. at Stanford. I assume whatever offers he has haven't been attractive enough to abandon that. Whether he'll still be doing open work or get sucked into the bowels of some proprietary corporate behemoth afterwards remains to be seen, but I suspect he won't have trouble monetizing his skills either way.
IshKebab•9mo ago
Funny how it really wants people to dance. Even the guy sitting down for an interview just starts dancing sitting down.
Jaxkr•9mo ago
There's a massive open TikTok training set that lots of video researchers use.
jonas21•9mo ago
Presumably they're dancing because it's in the prompt. You could change the prompt to have them do something else (but that would be less fun!)
IshKebab•9mo ago
I'm no expert but are you sure there is a prompt?
dragonwriter•9mo ago
Yes, while the page here does not directly mention the prompts, the linked paper does, and the linked code repo shows that prompts are used as well.
vunderba•9mo ago
100%. I don't think I've ever come across an I2V model that didn't require at least a positive prompt. Some people get around it by integrating a vision LLM into their ComfyUI workflows, however.
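That workaround can be sketched roughly like this (all names below are hypothetical placeholders for illustration, not a real ComfyUI or model API): a vision LLM captions the input image, and the caption is used as the required positive prompt when the user doesn't supply one.

```python
from typing import Optional

def caption_image(image_path: str) -> str:
    """Stand-in for a vision-LLM captioning step; a real workflow
    would run an actual model on the image here."""
    return f"a cinematic shot based on {image_path}, smooth natural motion"

def build_i2v_inputs(image_path: str, user_prompt: Optional[str] = None) -> dict:
    """Assemble inputs for an image-to-video model, auto-captioning
    via the vision LLM when no explicit prompt is given."""
    prompt = user_prompt if user_prompt else caption_image(image_path)
    return {"image": image_path, "prompt": prompt}
```

So an explicit prompt passes through unchanged, and a bare image still satisfies the model's prompt requirement.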
IshKebab•9mo ago
Ah yeah you're right - they seem to just really like giving dancing prompts. I guess they work well due to the training set.
bravura•9mo ago
It's a peculiar and fascinating observation you make.

With static images, we always look for eyes.

With video, we always look for dancing.

fregocap•9mo ago
looks like the only motion it can do...is to dance
jsolson•9mo ago
It can dance if it wants to...

It can leave LLMs behind...

'Cause LLMs don't dance, and if they don't dance, well, they're no friends of mine.

rhdunn•9mo ago
That's a certified bop! ;) You should get elybeatmaker to do a remix!

Edit: I didn't realize that this was actually a reference to Men Without Hats - The Safety Dance. I was referencing a different parody/allusion to that song!

MyOutfitIsVague•9mo ago
The AI Safety dance?
dragonwriter•9mo ago
There is plenty of non-dance motion (only one or two examples have non-dance foot motion, but feet aren't the only things that move).
enlyth•9mo ago
It takes a text prompt along with the image input, dancing is presumably what they've used for the examples
WithinReason•9mo ago
Could you do this spatially as well? E.g. generate the image top-down instead of all at once
modeless•9mo ago
Could this be used for video interpolation instead of extrapolation?
yorwba•9mo ago
Their "inverted anti-drifting" basically amounts to first extrapolating a lot and then interpolating backwards.
ilaksh•9mo ago
Amazing. If you have more RAM or something, can it go faster? Can you get even more speed on an H100 or H200?