
France's homegrown open source online office suite

https://github.com/suitenumerique
376•nar001•3h ago•181 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
106•bookofjoe•1h ago•86 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
417•theblazehen•2d ago•152 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
80•AlexeyBrin•4h ago•15 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
28•vinhnx•2h ago•4 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
13•thelok•1h ago•0 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
772•klaussilveira•19h ago•240 comments

First Proof

https://arxiv.org/abs/2602.05192
33•samasblack•1h ago•19 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
49•onurkanbkrc•4h ago•3 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1021•xnx•1d ago•580 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
158•alainrk•4h ago•202 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
160•jesperordrup•9h ago•58 comments

Software Factories and the Agentic Moment

https://factory.strongdm.ai/
11•mellosouls•2h ago•11 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
9•marklit•5d ago•0 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
103•videotopia•4d ago•26 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
17•rbanffy•4d ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
8•simonw•1h ago•2 comments

Ga68, a GNU Algol 68 Compiler

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
35•matt_d•4d ago•9 comments

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
152•matheusalmeida•2d ago•42 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
261•isitcontent•19h ago•33 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
275•dmpetrov•20h ago•145 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
15•sandGorgon•2d ago•3 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
545•todsacerdoti•1d ago•263 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
417•ostacke•1d ago•108 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
361•vecti•21h ago•161 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
61•helloplanets•4d ago•64 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
333•eljojo•22h ago•206 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
456•lstoll•1d ago•298 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
371•aktau•1d ago•195 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
61•gmays•14h ago•23 comments

TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

https://github.com/thu-ml/TurboDiffusion
248•meander_water•1mo ago

Comments

jjcm•1mo ago
Looks like there is some quality reduction, but nonetheless 2s to generate a 5s video on a 5090 for WAN 2.1 is absolutely crazy. Excited to see more optimizations like this moving into 2026.
villgax•1mo ago
That’s not the actual end-to-end time if you run it; encoding and decoding are extra
Lerc•1mo ago
Nevertheless, it does seem that generation will fairly soon become fast enough to extend a video clip in realtime, autoregressively, second by second. Integrated with a multimodal input model, you would be very close to an AI avatar that would be extremely compelling.
avaer•1mo ago
Efficient realtime video diffusion will revolutionize the way people use computers even more so than LLMs.

I actually think we are already there with quality, but nobody is going to wait 10 minutes to do a task with video that takes 2 seconds with text.

If Sora/Kling/whatever ran cool locally 24/7 at 60FPS, would anyone ever build a UI? Or a (traditional) OS?

I think it's worth watching the scaling graph.

IsTom•1mo ago
> If Sora/Kling/whatever ran cool locally 24/7 at 60FPS, would anyone ever build a UI?

I like my buttons to stay where I left them.

pavlov•1mo ago
Yeah, it’s like asking “why would anyone read a book today when LLMs can generate infinite streams of text”
exe34•1mo ago
those streams of text are often conditioned on the prompts - people are using it to learn about new concepts, and as a hyperpersonalised version of search. it can not only tell you about tools you didn't know existed, but also show you how to use them.

I do like my buttons to stay where I left them - but that, too, can be conditioned. instead of gnome "designers" telling me the button needs to be wide enough to hit with my left foot, I could tell the system I want this button to be small and in that corner - and add it to my prompt.

pavlov•1mo ago
I suppose if one only reads self-help books of the “You’re the best, trust your instincts!” kind, then LLMs are an appropriate replacement.
exe34•1mo ago
Or indeed, if one has a mind of their own and wants a tool to obey them, rather than submit to their "betters'" opinions.
pylotlight•1mo ago
I feel like a lot of the above assumes the user knows what they want or what works best. I want an intelligent designer to figure out the best flow/story/narrative/game and create/present it, cause I'm a dumb user who doesn't know what is actually good.
exe34•1mo ago
that's called a default - I'm happy for a gnome designer to "design" the button to be large enough to hit with my foot with a blindfold on, but I'd like the option to change it to adjust to my workflow rather than adjust my workflow to the button.
subscribed•1mo ago
Please no, please no

That will be Windows 12, and perhaps iOS two generations from now :)

villgax•1mo ago
I mean, the baselines were deliberately weak and not how anyone (except maybe noobs) would run these to begin with, and the quoted number covers only the DiT steps, not the encoding and decoding steps, which are still quite expensive. No actual use of FA4/CUTLASS-based kernels nor TRT at any point.
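
A minimal sketch of that point, with illustrative stage costs that are assumptions rather than profiled values: the headline 100-200x applies only to the DiT denoising loop, while text encoding and VAE decoding still run at their usual speed, so end-to-end latency is higher than the quoted figure.

    import time

    # Illustrative stage costs in seconds (assumed, not measured): the
    # speedup applies only to the DiT denoising loop, so the fixed cost
    # of the other stages bounds end-to-end latency.
    STAGES = {
        "text encode (not accelerated)": 0.4,
        "DiT denoise (the 100-200x part)": 2.0,
        "VAE decode (not accelerated)": 1.5,
    }

    total = 0.0
    for name, cost in STAGES.items():
        t0 = time.perf_counter()
        time.sleep(cost)  # stand-in for the real pipeline stage
        dt = time.perf_counter() - t0
        total += dt
        print(f"{name}: {dt:.2f}s")
    print(f"end-to-end: {total:.2f}s  # higher than the DiT-only number")
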
redundantly•1mo ago
Now if someone could release an optimization like this for the M4 Max I would be so happy. Last time I tried generating a video it was something like an hour for a 480p 5-second clip.
jimmydoe•1mo ago
maybe wait for M5 Max and new MLX.
mishu2•1mo ago
Having the ability to do real-time video generation on a single workstation GPU is mind blowing.

I'm currently hosting a video generation website, also on a single GPU (with a queue), which is also something I didn't even think possible a few years ago (my show HN from earlier today, coincidentally: https://news.ycombinator.com/item?id=46388819). Interesting times.
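
For the curious, the single-GPU-with-a-queue setup can be as simple as one serial worker draining jobs, as in this minimal asyncio sketch; the function names and the fake two-second inference are placeholders, not the commenter's actual stack.

    import asyncio

    job_queue: asyncio.Queue = asyncio.Queue()

    async def gpu_worker():
        # One serial worker means only one generation job touches the
        # GPU at a time; everyone else waits in the queue.
        while True:
            prompt, done = await job_queue.get()
            await asyncio.sleep(2.0)  # stand-in for the real GPU inference
            done.set_result(f"video for: {prompt!r}")
            job_queue.task_done()

    async def handle_request(prompt: str) -> str:
        # A web handler would call this: enqueue, then await the result.
        done = asyncio.get_running_loop().create_future()
        await job_queue.put((prompt, done))
        return await done

    async def main():
        asyncio.create_task(gpu_worker())
        print(await handle_request("a cat surfing at sunset"))

    asyncio.run(main())
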

iberator•1mo ago
Computer games have been doing it for decades already.
nkmnz•1mo ago
Bob Ross did it, too.
pwython•1mo ago
1 frame of Bob Ross = 1,800s
ash_091•1mo ago
So with 108,000 (60 X 1,800) Bob Ross PPUs (parallel painting units) we should be able to achieve a stable 60FPS!
mishu2•1mo ago
Once you set up a pipeline, sure. They'd need a lot of bandwidth to ensure the combined output makes any kind of sense, not unlike the GPU I guess.

Otherwise it's similar to the way nine women can make a baby in a month. :)

justinclift•1mo ago
The food/housing/etc bill for 108k Bob Ross er... PPU's seems like it would be fairly substantial too.
arghwhat•1mo ago
A very, very different mechanism that "just" displays the scene as the author explicitly and manually drew it, and yet has to pull an ungodly number of hacks to make that viable and fast enough, resulting in a far-from-realistic rendition...

This, on the other hand, happily pretends to match any kind of realism requested, like a skilled painter would, with the tradeoff mainly being control and artistic errors.

echelon•1mo ago
> with the tradeoff mainly being control and artistic errors.

For now. We're not even a decade in with this tech, and look how far we've come in the last year alone with Veo 3, Sora 2, Kling 4x, and Kling O1. Not to mention editing models like Qwen Edit and Nano Banana!

This is going to be serious tech soon.

I think vision is easier than "intelligence". In essence, we solved it in closed form sixty years ago.

We have many formulations of algorithms and pipelines. Not just for the real physics, but also tons of different hacks to account for hardware limitations.

We understand optics in a way we don't understand intelligence.

Furthermore, evolution keeps evolving vision over and over. It's fast and highly detailed. It must be correspondingly simple.

We're going to optimize the shit out of this. In a decade we'll probably have perfectly consistent Holodecks.

arghwhat•1mo ago
I feel like this misses the point. Also, vision and image generation are entirely different things; even among humans, some people cannot create images in their head despite having perfectly good vision.

Understanding optics instead of intelligence speaks to the traditional render workflow, a pure simulation of input data with no "creative processes". Either the massive hack that is traditional game render pipelines, or proper light simulation. We'll probably eventually get to the point where we can have full-scene, real-time ray-tracing.

The AI image generation approach is the "intelligence" approach, where you throw all optics, physics and render knowledge up in the air and let the model "paint" according to how it imagines the scene, like handing a pencil to a cartoon/anime artist. Zero simulation, zero physics, zero rules - just the imagination of a black box.

No light, physics or existing render pipeline tricks are relevant. If that's what you want, you're looking for entirely new tricks: Tricks to ensure object permanence, attention to detail (no variable finger counts), and inference performance. Even if we have it running in real-time, giving up your control and definition of consistency is part of the deal when you hand off the role of artist to the box.

If you want AI in the simulation approach, you'll be taking an entirely different path: skipping any involvement in rendering/image creation and instead just letting the model puppeteer the scene within some physics constraints. Makes for cool games, but completely unrelated to the technology being discussed.

justinclift•1mo ago
Hmmm, future videos might just "compress" down to a common AI model and a bunch of prompts + metadata about scene order. ;)
echelon•1mo ago
I think video-based world models like Genie 2 will happen, and that they'll be shrunk down for consumer hardware (the only place they're practical).

They'll have player input controls, obviously, but they'll also be fed ControlNets for things like level layout, enemy placement, and game loop events. This will make them highly controllable and persistent.

When that happens, and when it gets good, it'll take over as the dominant type of game "engine".
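
A sketch of what that control surface might look like; every class and method name here is hypothetical and does not reflect a real Genie 2 or ControlNet API.

    from dataclasses import dataclass, field

    @dataclass
    class ControlState:
        # Conditioning a world model might receive each tick: player
        # actions plus designer-authored constraints.
        player_input: dict                 # e.g. {"move": "forward", "jump": True}
        layout_map: list                   # coarse level geometry to respect
        events: list = field(default_factory=list)  # e.g. ["boss_spawn"]

    class ToyWorldModel:
        # Stand-in for a learned video world model; step() would run one
        # prediction/denoising pass conditioned on the controls.
        def __init__(self):
            self.latent = [0.0] * 8  # toy latent state

        def step(self, controls: ControlState) -> list:
            # A real model conditions frame generation on the controls;
            # here we just fold them into the latent to show the data flow.
            bump = len(controls.events) + len(controls.player_input)
            self.latent = [x + bump for x in self.latent]
            return self.latent  # would be a decoded frame in practice

    model = ToyWorldModel()
    controls = ControlState(
        player_input={"move": "forward"},
        layout_map=[[1, 1], [1, 0]],
        events=["boss_spawn"],
    )
    for tick in range(3):  # the "game loop": one generated frame per tick
        frame = model.step(controls)
        print(tick, frame[0])
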

qingcharles•1mo ago
I don't know how much they can be shrunk down for consumer hardware right now (though I'm hopeful), but in the near-term it'll probably all be done in the cloud and streamed as it is now. People are playing streamed video games and eating the lag, so they'll probably do it for this too, for now.
ragequittah•1mo ago
This is also the VR killer app.
cess11•1mo ago
Are you sure it's not just polish on the porn that is already the "VR killer app"?
codingbuddy•1mo ago
We are scarily close to realtime personalization of video, which, if you agree with this NeurIPS paper [1], may lead to someone inadvertently creating “digital heroin”

[1] https://neurips.cc/virtual/2025/loc/san-diego/poster/121952

hapticmonkey•1mo ago
> We further urge the machine learning community to act proactively by establishing robust design guidelines, collaborating with public health experts, and supporting targeted policy measures to ensure responsible and ethical deployment

We’ve seen this play out before, when social media first came to prominence. I’m too old and cynical to believe anything will happen. But I really don’t know what to do about it at a personal level. Even if I refuse to engage with this content, and am able to identify it, and keep my family away from it…it feels like a critical mass of people in my community/city/country are going to be engaging with it. It feels hopeless.

rvnx•1mo ago
I tend to think this leads to censorship, and then to censorship at a broader level in the name of protecting our kids. See social networks, where you now have to hand over your ID card to protect kids.

The better way in that case is education of the kids / people, plus automatically flagging potentially harmful / disgusting content and letting the owner of the device set up the level of filtering they want.

Like with LLMs: they should be somewhat neutral in default mode, but they should never refuse a request if the user asks.

Otherwise the line between technology provider and content moderator gets too blurry, and tomorrow SV people are going to abuse that power (or be coerced by money or politics).

At a personal / parent level, time limits (like you can already set with a web-filtering device for TikTok) and a content policy would solve it, along with spending as much time with the kids as possible and talking to them so they don’t become dumber and dumber due to short videos.

But I’m totally opposed to doing this at the public-policy level: “now you have the right to watch pornography, but only after you give ID to prove you are an adult” (this is already the case in France, for example).

It can quickly become: “now to watch / generate controversial content, you have to show ID”

shikon7•1mo ago
That doesn't work when the Chinese produce uncensored open weight models, or ones that can easily be adapted to create uncensored content.

Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or one that might have been trained on illegal data.

holowoodman•1mo ago
> Censorship for generative AI simply doesn't work the way we are used to, unless we make it illegal to possess a model that might generate illegal content, or one that might have been trained on illegal data.

Censorship doesn't work for stuff that is currently illegal. See pirated movies.

casey2•1mo ago
You just fell for outrage bait, so I doubt you will be able to identify AI.
numpad0•1mo ago
It saddens me to think that the efforts so far haven't been it. Maybe I should try my hand at "closing the loop" for image generation models.

Could it destroy society? Humanity has lived through a bunch of such actual substances, and always got bored of them in a matter of decades... those risk talks feel a bit overblown to me.

lysace•1mo ago
Potentially interesting that the authors are primarily affiliated with NatWest - a British bank. I had to Google their names to find that out, though.

They highlight reduced workplace productivity as a risk, among other things.

jjmarr•1mo ago
Infinite Jest predicted this.
kristopolous•1mo ago
this is probably the best tool for this stuff now: https://github.com/deepbeepmeep/Wan2GP

It has fastwan ... and will probably have this soon; it's a request in multiple tickets: https://github.com/deepbeepmeep/Wan2GP/issues

bsenftner•1mo ago
Video AI acceleration is tricky: many of the acceleration LoRAs and cache-level accelerations currently in use have a subtle-at-first impact on the generated video, which renders these accelerations poison for video work. The AIs become dumber to the degree that they can't follow camera directions, character performances suffer, lip sync becomes lip flap, and body motions are reduced in quality and become repetitive.

Now, I've not tested TurboDiffusion yet, but I am very actively generating AI video; I probably did a half hour of finished video clips yesterday. There is no test for this issue yet, and for the majority it has yet to be recognized as an issue.

fcpk•1mo ago
Out of curiosity, what do you do with the footage? Personally I found it fun for the occasional funny situational video, or for some small background animations, but not so useful as a whole. I understand it's nice for things like making sketches from scripts and quick prototyping, but I'm genuinely curious what the use is :)
Atomic_Torrfisk•1mo ago
Also interested, since that is my same impression.
bsenftner•1mo ago
I'm creating a college / corporate-seminar-level class with a 3D animated host and instructor. Each lesson begins with a brief animated intro, after which the student reads the lesson, and then a chatbot that understands that lesson engages with the student. The course will have about an hour of 1-2 minute videos in the end, and because the host is an animated "professor," it is easier than otherwise to create versions in other ethnicities and languages. And for the curious, that hour of final in-use video will be sourced from somewhere around 8 hours of generated video; these video AIs are fine and dandy for short content, but when working on longer-form media the consistency issues grow to Godzilla size and become the most persistent problem: keeping the likenesses of the characters and the environment from drifting over time.
sroussey•1mo ago
I want to use this on a website!
benreesman•1mo ago
Fun fact: if you say the right prayers to the Myelin Gods it will fuse straight through sage3 at D/DQ like it's seen it before, which of course it has.

https://gist.github.com/b7r6/94f738f4e5d1a67d4632a8fbd18d347...

Faster than Turbo with no pre-distill.