frontpage.

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
66•yi_wang•2h ago•23 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
233•valyala•10h ago•44 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
24•RebelPotato•2h ago•4 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
144•surprisetalk•10h ago•145 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
175•mellosouls•13h ago•333 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
62•gnufx•9h ago•55 comments

IBM Beam Spring: The Ultimate Retro Keyboard

https://www.rs-online.com/designspark/ibm-beam-spring-the-ultimate-retro-keyboard
19•rbanffy•4d ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
172•AlexeyBrin•15h ago•32 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
152•vinhnx•13h ago•16 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
41•swah•4d ago•89 comments

First Proof

https://arxiv.org/abs/2602.05192
125•samasblack•12h ago•75 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
298•jesperordrup•20h ago•95 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
69•momciloo•10h ago•13 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
96•randycupertino•5h ago•212 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•12h ago•21 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
35•mbitsnbites•3d ago•3 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
566•theblazehen•3d ago•206 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
7•ujjwalreddyks•5d ago•2 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
286•1vuio0pswjnm7•16h ago•462 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
125•josephcsible•8h ago•153 comments

The silent death of good code

https://amit.prasad.me/blog/rip-good-code
81•amitprasad•4h ago•76 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
180•valyala•10h ago•165 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
28•languid-photic•4d ago•9 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
899•klaussilveira•1d ago•275 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
225•limoce•4d ago•125 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
115•onurkanbkrc•15h ago•5 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•59m ago•5 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
111•zdw•3d ago•55 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
141•speckx•4d ago•223 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
143•videotopia•4d ago•48 comments

Wan – Open-source alternative to VEO 3

https://github.com/Wan-Video/Wan2.2
220•modinfo•5mo ago

Comments

esseph•5mo ago
Ugh, I hate that they used this name
yorwba•5mo ago
You can call it Wanxiang (万相, ten thousand pictures) if you want. Similarly, Qwen is Qianwen (千问, one thousand questions).
latentsea•5mo ago
They should just pretend it's an acronym. Wide Art Network.
CapsAdmin•5mo ago
Its original name was WanX, but the gen AI community found that too funny / unfortunate, so they changed it to just Wan.
bn-l•5mo ago
It’s probably a more appropriate name to be fair.
qiine•5mo ago
ha TIL, very cool names!
ProofHouse•5mo ago
HATE
diggan•5mo ago
Why "hate" this name more than any other name? At least justify your semi-spam.
esseph•5mo ago
https://en.wikipedia.org/wiki/Wide_area_network
diggan•5mo ago
I'm familiar with that, but would people really confuse a video generation model with a type of computer network?
esseph•5mo ago
That assumes you know what VEO 3 is by reading the title.

But, I guess sometimes you use a plane to build a plane while the material is aligned to a particular plane.

ProofHouse•5mo ago
How can they manage that but not the website?
cubefox•5mo ago
Arguably the most interesting facts about the new Wan 2.2 model:

- they are now using a 27B MoE architecture (with two 14B experts, for low level and high level detail), which were usually only used for autoregressive LLMs rather than diffusion models

- the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

- if their benchmarks are reliable, the model performance is SOTA even compared to closed source models
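
A minimal sketch of what running the small checkpoint locally might look like through Hugging Face diffusers' WanPipeline; the Wan2.2-TI2V-5B-Diffusers model ID, resolution, and frame count below are assumptions for illustration, not details from the thread:

    # Text-to-video on a ~24 GB card (e.g. RTX 4090) -- a sketch, not a recipe.
    import torch
    from diffusers import WanPipeline
    from diffusers.utils import export_to_video

    # Assumed Hub ID for the 5B model; check the Wan-AI org for the real name.
    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.2-TI2V-5B-Diffusers", torch_dtype=torch.bfloat16
    )
    pipe.to("cuda")
    # If VRAM is tight, trade speed for memory instead of pipe.to("cuda"):
    # pipe.enable_model_cpu_offload()

    frames = pipe(
        prompt="a red fox trotting through fresh snow, golden hour",
        height=704, width=1280,  # roughly the 720p ceiling mentioned above
        num_frames=121,          # about 5 seconds at 24 fps
        guidance_scale=5.0,
    ).frames[0]
    export_to_video(frames, "fox.mp4", fps=24)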

mandeepj•5mo ago
> - the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card

Seems like you can run it on 2 GPUs, each having 12 GB of VRAM. At least, a breakdown on their GitHub page implied so.

cubefox•5mo ago
That would be a lot cheaper than an RTX 4090.
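
If the two-GPU reading is right, diffusers can shard a pipeline's components across cards with device_map="balanced"; whether the Wan weights actually fit in 2x12 GB this way is an open question, and the model ID is the same assumption as above:

    # Sketch: splitting pipeline components across two small GPUs.
    import torch
    from diffusers import WanPipeline

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed Hub ID, as above
        torch_dtype=torch.bfloat16,
        device_map="balanced",  # spreads components across visible GPUs
    )
    # The transformer, VAE, and text encoder land on different devices;
    # generation then proceeds exactly as with a single-GPU pipeline.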
liuliu•5mo ago
Some facts are wrong:

- The 27B "MoE" is not the MoE commonly referred to in the LLM world. It is not MoE on FFN layers. It simply means two different models used for different denoising timestep ranges (exactly the same as SDXL-Base / SDXL-Refiner). Calling it MoE is not technically wrong. But claiming "which were usually only used for autoregressive LLMs rather than diffusion models" is just wrong (not to mention HiDream I1 is a model that actually incorporates MoE layers (in the FFN layer) and is a diffusion model).

- The A14B models can run on 24GiB VRAM too, with CPU offloading and quantization.

- Yes, it is SotA, even compared with some closed-source models.
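
The timestep-range split described above is simple enough to show in a few lines; the boundary value and the Euler update here are illustrative stand-ins, not Wan's actual scheduler:

    from typing import Callable
    import torch

    BOUNDARY = 0.875  # hypothetical switch point on a 0..1 noise scale

    def denoise(
        latents: torch.Tensor,
        sigmas: list[float],  # descending noise levels, e.g. [1.0, ..., 0.0]
        high_noise_model: Callable[[torch.Tensor, float], torch.Tensor],
        low_noise_model: Callable[[torch.Tensor, float], torch.Tensor],
    ) -> torch.Tensor:
        for i in range(len(sigmas) - 1):
            sigma = sigmas[i]
            # The whole model is picked per noise level: coarse layout early,
            # fine detail late. No per-token FFN expert routing is involved.
            model = high_noise_model if sigma >= BOUNDARY else low_noise_model
            velocity = model(latents, sigma)
            latents = latents + (sigmas[i + 1] - sigma) * velocity  # Euler step
        return latents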

bsenftner•5mo ago
If you want to play with this, as in really play, with over a dozen variant models, acceleration LoRAs, and a vibrant community, ya gotta check out:

https://github.com/deepbeepmeep/Wan2GP

And the discord community: https://discord.gg/g7efUW9jGV

"Wan2GP" is AI video and images "for the GPU poor", get all this operating with as little as 6GB VRAM, Nvidia only.

diggan•5mo ago
On the other side, are there any projects focusing on performance instead? I have the VRAM available to run Wan2.1, but it still takes minutes per frame. Basically, something like what vLLM is for running local LLM weights, but for video/Wan?
bsenftner•5mo ago
This person here has accelerator LoRAs that reduce the compute from 30+ steps to 4 or 8 steps with minimal quality loss: https://huggingface.co/Kijai/WanVideo_comfy

There are a lot of people focused on performance, via various methods, just as there are a lot of people focused on non-performance issues like fine-tunes that add aspects the models lack: terminology linking professional media terms to the model, pop culture terminology the model does not know, accuracy of body posture during fight, dance, gymnastic, and sports activity, and less flashy but pragmatic actions like proper use of tableware, chopsticks, keyboards, and musical instruments - complex actions that stand out when done incorrectly or never shown. The model's knowledge is broad but has limits, which people are filling in.
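
In diffusers terms, using one of those accelerator LoRAs is roughly the following; the repo is the one linked above, but the weight filename and the 14B model ID are placeholders, not verified names:

    # Step-distillation LoRA: ~4-8 sampling steps instead of 30+.
    import torch
    from diffusers import WanPipeline
    from diffusers.utils import export_to_video

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-14B-Diffusers", torch_dtype=torch.bfloat16
    ).to("cuda")

    # Hypothetical filename -- browse Kijai/WanVideo_comfy for the real ones.
    pipe.load_lora_weights(
        "Kijai/WanVideo_comfy",
        weight_name="lightx2v_step_distill_lora.safetensors",
    )

    frames = pipe(
        prompt="a steam train crossing a stone bridge",
        num_inference_steps=4,  # the distilled target, vs. 30+ undistilled
        guidance_scale=1.0,     # distilled sampling usually drops CFG
    ).frames[0]
    export_to_video(frames, "train.mp4", fps=16)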

bsenftner•5mo ago
There is also a ton of Wan video activity in the ComfyUI community. Every day for a while, starting about two weeks ago, ComfyUI shipped updates specific to Wan 2.2 video integration in the standard installation. ComfyUI is a significantly more complex application than Wan2GP, though.
bobajeff•5mo ago
If having only 6GB VRAM is GPU poor then I must be GPU destitute.
hirako2000•5mo ago
It's hard to find an Nvidia consumer card with less than 12GB of VRAM, and not just these days.

By "GPU poor" they didn't mean GPU-less or a GPU from the previous decade. It's in the readme that only Nvidia is supported.

hypercube33•5mo ago
I wish they'd state suggested or required hardware upfront.

Also disappointing that I haven't seen anything target the new Ryzen AI chips that can do 96GB, since they seem pretty capable. I'm not sure how much of the M4 Pro's memory on the Apple side can be utilized for this, but it seems like the typical machines are 48 or 64GB these days. A lot more bang for your buck than an Nvidia card, on paper?

pkroll•5mo ago
Well, they sort of do: they keep referring to the 4090 on their GitHub and primary promotional pages (https://wan.video/).

But really, all the various video models want an 80+ GB VRAM card to run comfortably. The contortions the ComfyUI community goes through to get things running at a reasonable speed on the current, dinky-sized-VRAM consumer cards are impressive.

giancarlostoro•5mo ago
That doesn't stop Mac / iPhone from using these models. I've built videos with Wan 2.2 on my wife's M4 Mac Mini w/ 24GB of RAM. It might take a little longer to render though ;)
zakki•5mo ago
Nice. Any reference for the installation?
giancarlostoro•5mo ago
The Draw Things app in the App Store is free, open source, and holds your hand.
giancarlostoro•5mo ago
Try FramePack... never mind, even that needs at least 6GB VRAM...

https://github.com/lllyasviel/FramePack

franky47•5mo ago
Quick, someone make a UI for this and call it Obi.
tmikaeld•5mo ago
The Obi for your Wan
cuuupid•5mo ago
I’ve been using this via Replicate for a while and it’s honestly amazing while being way cheaper. China is definitely leading on open source.
danielbln•5mo ago
*open weights
ahmedhawas123•5mo ago
Are there video generation benchmarks, similar to how there are benchmarks for LLMs? The reason I ask is that with lots of these models you have to go through a long cycle to get them up and running before you see an output, and often they will break on basic tasks requiring physics, state, etc. I would love to see some comparison of models across basic things like that.
ivape•5mo ago
Censored?
CosmicShadow•5mo ago
Wan2.1 was great, but Wan2.2 is really awesome! Here's some samples I made locally with my 5090:

- https://imgur.com/a/VeTn4Ej

- https://imgur.com/a/CujxVX3

Those were both image-to-video, and then I upscaled them to 4K. I made the images using Flux Dev Krea.

Took about 3-4 minutes per video to generate and another 2-3 to upscale. Images took 20-40s to generate.
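
The image-to-video leg of that workflow maps onto diffusers' WanImageToVideoPipeline; the model ID below is a guess at the 480p I2V checkpoint, and the input filename and prompt are invented:

    # Animate a pre-made still (e.g. from an image model) into a short clip.
    import torch
    from diffusers import WanImageToVideoPipeline
    from diffusers.utils import export_to_video, load_image

    pipe = WanImageToVideoPipeline.from_pretrained(
        "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = load_image("krea_still.png")  # the starting frame
    frames = pipe(
        image=image,
        prompt="slow push-in, hair and leaves moving in a light breeze",
        height=480, width=832,
        num_frames=81,  # about 5 seconds at 16 fps
    ).frames[0]
    export_to_video(frames, "clip.mp4", fps=16)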

scroogey•5mo ago
What did you use to upscale them?
CosmicShadow•5mo ago
One was with Topaz Video, the other was with SeedVR2.
scroogey•5mo ago
Thanks!