
Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
1•juujian•1m ago•0 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•3m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•5m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•7m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
1•tosh•8m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•8m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•11m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•14m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•17m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•17m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•19m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•19m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•23m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•25m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•28m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•29m ago•1 comments

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•34m ago•1 comments

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•36m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•39m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•39m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•40m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•45m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•51m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•52m ago•1 comments

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•56m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•59m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•1h ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•1h ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•1h ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

LTXVideo 13B AI video generation

https://ltxv.video/
216•zoudong376•9mo ago

Comments

pwillia7•9mo ago
Will have to test this out; it looks like it runs on consumer hardware, which is cool. I tried making a movie[1] with LTXV several months ago and had a good time, but 30x faster generation sounds necessary.

[1]: https://www.youtube.com/watch?v=_18NBAbJSqQ

givinguflac•9mo ago
The requirements say:

NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)

I have a 3070 w 8GB of VRAM.

Is there any reason I couldn’t run it (albeit slower) on my card?

GTP•9mo ago
Just try it and see.
mycall•9mo ago
Will this work with ROCm instead of CUDA?
echelon•9mo ago
No way. AMD is lightyears behind in software support.
Zambyte•9mo ago
Specifically for video? Ollama runs great on my 7900 XTX.
roenxi•9mo ago
That isn't really what being behind implies. We've known how to multiply matrices since ... at least the 70s, and video processing isn't a wild new task for our friends at AMD. I'd expect this to run on an AMD card.

But I don't own an AMD card to check, because when I did, it randomly crashed too often doing machine learning work.

blkhawk•9mo ago
I have a 9070 XT... ROCm ATM is unoptimized for it, and the generation speed is less than it should be if AMD isn't fudging the specs. The memory management is also dire/buggy and will cause random OOMs on one run, then be fine the next. Splitting the workflow helps, so you can have one OOM crash in between; VAEs also crash from OOM. This is all just a software issue, because VRAM isn't released properly on AMD.

*OOM = out-of-memory error

snagadooker•9mo ago
The 2B model was running well on AMD, fingers crossed for the 13B too: https://www.reddit.com/user/kejos92/comments/1hjkkmx/ltxv_in...
blkhawk•9mo ago
Any idea how I could implement that for ComfyUI on the 9070? Going to try to apply what's in the Reddit post to my venv and see if it does anything.
blkhawk•8mo ago
update: didn't help :')
zorgmonkey•9mo ago
Sometimes it's a little more work to get things set up, but it works fine. I've run plenty of models on my 7900 XTX: Wan 2.1 14B, Flux.1 dev, and Whisper (Wan and Flux with ComfyUI, Whisper with whisper.cpp).
turnsout•9mo ago
Or MLX/Apple?
washadjeffmad•9mo ago
Sure, just offload to system RAM, but don't use your system driver's automatic fallback; use a specific implementation like MultiGPU.

It won't speed things up, but using a quantization that fits in VRAM will prevent the offload penalty.
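As a back-of-the-envelope illustration of why the quantization choice matters for the 8 GB cards discussed above, here is a weights-only sketch (rough arithmetic, not measured numbers; activations, latent buffers, and framework overhead come on top):

```python
# Rough weights-only VRAM estimate for a 13B-parameter model at
# common precisions. Back-of-the-envelope arithmetic only: real
# usage adds activations, latents, and framework overhead.
PARAMS = 13e9  # 13 billion parameters

def vram_gb(bits_per_param: float) -> float:
    """GiB needed to hold the weights alone at a given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{vram_gb(bits):.1f} GiB")
```

At 4-bit the weights alone come to roughly 6 GiB, which is why a quantization that fits an 8 GB card avoids the offload penalty entirely, while fp16 weights (~24 GiB) would not even fit in a 4090's 24 GB without offloading.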

Invictus0•9mo ago
Could've made a better website in Wix, lol. Did they forget to add the videos?
linsomniac•9mo ago
I got a bunch of videos on the page; it looked fine to me.
sergiotapia•9mo ago
FWIW I only see the hero video on the website and no other content except text. Is this a bug?
coldcode•9mo ago
Every browser I tried on my Mac does not show any of the videos. You only see the top animation.

Also shown: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation

There are a couple of JS errors, which I presume keep the videos from appearing.

hobs•9mo ago
https://pub.wanai.pro/ltxv_hero.mp4 same, but the video does work, just some problems in the site
esafak•9mo ago
wanai says it uses Alibaba's Wan2.1 video generation model. What's going on here? Is LTXV somehow related? https://huggingface.co/blog/LLMhacker/wanai-wan21
soared•9mo ago
On iOS the unmute button will unmute and play the video. The play button did not work for me.
jsheard•9mo ago
That's the least of the problems with how they've optimized their assets: there's about 250 MB of animated GIFs on the Hugging Face page (actual 1989-vintage GIFs, not modern videos pretending to be GIFs). AI people just can't get enough of wasting bandwidth, apparently; at least this time it's another AI company footing the bill for all the expensive AWS egress they're burning through for no reason.
dingdingdang•9mo ago
Super AI tech to the rescue!
_345•9mo ago
This is the tech equivalent of being upset that someone forgot to also recycle the aluminum cap that came with their glass bottle.
terhechte•9mo ago
It says `Coming Soon` for the `inference.py` for the quantized version. Does anyone happen to know how to modify the non-quantized version [0] to work?

[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...

october8140•9mo ago
Videos are also not loading on GitHub.
shakna•9mo ago
> Hi , i'm using default image to video workflow with default settings and i'm getting pixalated image to video output full of squares , how to fix this ?

[0] https://github.com/Lightricks/LTX-Video/issues/163

soared•9mo ago
I wish groups would stop following OpenAI/etc’s naming conventions of having things like “13B” in the product name.
strangescript•9mo ago
In open source its super useful to be able to immediately have an idea of how big the model is and what kind of hardware it could potentially run on.
smlacy•9mo ago
Why? Isn't it one of the most important aspects of this product?
ericrallen•9mo ago
Seems a bit unfair (or maybe just ill-informed?) to lump this in with the confusing mess that is model naming at OpenAI.

The parameter count is much more useful and concrete information than anything OpenAI or their competitors have put into the names of their models.

The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.

It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.

drilbo•9mo ago
>It’s likely their is a higher parameter count model in the works and this makes it easy to distinguish between the two.

In this case it looks like this is the higher-parameter-count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)

re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.

ericrallen•9mo ago
Good point! I forgot there was a smaller one out there already.

OpenAI’s naming conventions have gotten out of hand.

I believe the “o” is supposed to mean “Omni” and indicate that the model is multi-modal.

turnsout•9mo ago
I just tried out the model via LTX Studio, and it's extremely impressive for a 13B model, let alone one that allegedly performs in real-time.
moralestapia•9mo ago
It runs on a single consumer GPU.

Wow.

jl6•9mo ago
The example videos look very short, maybe 1-2 seconds each. Is that the limit?
snagadooker•9mo ago
The model supports both multi-scale rendering and autoregressive generation. With multi-scale rendering, you can generate a low-resolution preview of 200-300 frames and then upscale to higher resolutions (with or without tiling).

The autoregressive generation feature allows you to condition new segments based on previously generated content. A ComfyUI implementation example is available here:

https://github.com/Lightricks/ComfyUI-LTXVideo/blob/master/e...
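The two modes described above (multi-scale preview-then-upscale, and autoregressive extension) can be sketched structurally. Everything in this snippet — function names, resolutions, the conditioning window — is illustrative stand-in Python, not the real LTX-Video or ComfyUI API:

```python
# Illustrative sketch of the two generation modes described above.
# All names and numbers are made up for clarity; see the official
# ComfyUI-LTXVideo examples for the real node graphs.

def render_lowres(prompt: str, frames: int) -> list[dict]:
    """Stage 1: cheap low-resolution preview pass (stand-in frames)."""
    return [{"frame": i, "res": (256, 160)} for i in range(frames)]

def upscale(frames: list[dict], factor: int) -> list[dict]:
    """Stage 2: re-render at higher resolution, optionally tiled."""
    return [
        {**f, "res": (f["res"][0] * factor, f["res"][1] * factor)}
        for f in frames
    ]

def extend(frames: list[dict], new_frames: int) -> list[dict]:
    """Autoregressive extension: condition a new segment on the tail
    of what was already generated."""
    context = frames[-8:]  # condition on the last few frames
    start = frames[-1]["frame"] + 1
    segment = [{"frame": start + i, "res": context[-1]["res"]}
               for i in range(new_frames)]
    return frames + segment

preview = render_lowres("drone shot over a coastline", frames=240)
final = upscale(preview, factor=4)      # 256x160 -> 1024x640
longer = extend(final, new_frames=120)  # 240 -> 360 frames
```

The key structural point is that the expensive high-resolution pass only ever sees content already approved at preview resolution, and extension passes only need the tail of the previous segment as conditioning.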

unicornporn•9mo ago
Uncanny as always. What do people use these for? Except for flooding Meta platforms with slop, that is...
snagadooker•9mo ago
Keyframing. Great for previz stages:

https://x.com/lenscowboy/status/1920353671352623182

https://x.com/lenscowboy/status/1920513512679616600

https://discord.com/channels/1076117621407223829/13693260067...

snagadooker•9mo ago
I'm not certain what that website is or whether it's affiliated with the model developers.

For more information about the model, refer to these sources:

Model repository: https://github.com/Lightricks/LTX-Video

ComfyUI integration: https://github.com/Lightricks/ComfyUI-LTXVideo

Early community LoRAs: https://huggingface.co/Lightricks/LTXV-LoRAs

Banadoco Discord server, an excellent place for discussing LTXV and other open models (Wan/Hunyuan):

https://discord.com/channels/1076117621407223829/13693260067...

sigmoid10•9mo ago
OP seems to be making tons of these "fan" pages for AI tools according to his HN submission history. It's also the same design every time. Smells fishy.
hereonout2•9mo ago
> Disclaimer: This is a fan-made website created by AI enthusiasts. We are not affiliated with, endorsed by, or connected to manus.im. This website is an independent project and operates separately from the official Agenttars

Loads of sites like this get submitted; what's the motivation, I wonder?

AtlasBarfed•9mo ago
AI trains/farms/steals from the internet, internet releases/publishes/steals the AI back?
ronreiter•9mo ago
I work with the Lightricks team.

This is not an official page created by Lightricks, and we do not know who the owner of this page is or why he created it.

xg15•9mo ago
what's going on here?
simonw•9mo ago
This is so weird. The domain has whois information withheld and the site is hosted on Vercel.

Best hint is the submission history of https://news.ycombinator.com/submitted?id=zoudong376 which shows similar unofficial sites for other projects.

tough•9mo ago
So there's like a little network of Chinese fake AI product shipping or something?
theyinwhy•9mo ago
Someone farming fake internet points and real internet traffic.
jbkkd•9mo ago
It's in the Lightricks repo, and your CEO just posted above.
xg15•9mo ago
Yeah, but the webpage itself is unauthorized, even though it links to the official sources.

Best case, an overenthusiastic fan; worst case, some bad actor trying to establish a "sleeper page".

popalchemist•9mo ago
Is the information in the video accurate? (specifically wondering about the multi-keyframe function)
statusreport•9mo ago
Co-founder & CTO of Lightricks here. Cool to see our new model gaining traction on HN!

If you’re looking for the official LTXV model and working ComfyUI flows, make sure to visit the right sources:

- Official site: https://www.lightricks.com

- Model + Playground: https://huggingface.co/Lightricks/LTX-Video

The LTXV model runs on consumer GPUs, and all ComfyUI flows should work reliably from these official resources. Some third-party sites (like ltxvideo.net or wanai.pro) are broken, misconfigured, or heavy on unnecessary assets—so stick to the official ones to avoid issues and missing content.

liuliu•9mo ago
Hi! Draw Things should be able to add support in the next 2 weeks, after we get the video feature a little more polished with the existing video models (Wan 2.1, Hunyuan, etc.).
qwertox•9mo ago
Surprising to see that you don't know who made this page. When I saw it I wanted to know who was behind it, and found "© 2025 Lightricks. All rights reserved." at the end of the page, from which Google led me to lightricks.com. I had my bets on a Chinese company, but I was wrong.

I am surprised that it can run on consumer hardware.

yorwba•9mo ago
There's a bit of a trend of impersonating newly launched AI products. Just check out the submitter's other projects: https://news.ycombinator.com/submitted?id=zoudong376

I assume it's for SEO or supply chain attacks or overcharging for subscriptions.

tough•9mo ago
Seems they took that strategy from crypto memes: vampire attacks on anything getting minimal exposure.
alphan0n•9mo ago
Maybe related to this malware campaign:

https://www.bleepingcomputer.com/news/security/fake-ai-video...

tough•9mo ago
you might want to write to dang (email on footer) and they might change the URL to an official one.

seems unsafe to have a weird fake landing as main article

simonw•9mo ago
> Is LTXV-13B open source?

> Yes, LTXV-13B is available under the LTXV Open Weights License. The model and its tools are open source, allowing for community development and customization.

UPDATE: This is text on an unofficial website unaffiliated with the project. BUT https://www.lightricks.com/ has "LTXV open source video model" in a big header at the top of the page, so my complaint still stands, even though the FAQ copy I'm critiquing here is likely not the fault of Lightricks themselves.

So it's open weights, not open source.

Open weights is great! No need to use the wrong term for it.

From https://static.lightricks.com/legal/LTXV-2B-Distilled-04-25-... it looks like the key non-open-source terms (by the OSI definition which I consider to be canon) are:

- Section 2: entities with annual revenues of at least $10,000,000 (the “Commercial Entities”) are eligible to obtain a paid commercial use license, subject to the terms and provisions of a different license (the “Commercial Use Agreement”)

- Section 6: To the maximum extent permitted by law, Licensor reserves the right to restrict (remotely or otherwise) usage of the Model in violation of this Agreement, update the Model through electronic means, or modify the Output of the Model based on updates

This is an easy fix: change that FAQ entry to:

> Is LTXV-13B open weights?

> Yes, LTXV-13B is available under the LTXV Open Weights License. The model is open weights and the underlying code is open source (Apache 2.0), allowing for community development and customization.

Here's where the code became Apache 2.0 6 months ago: https://github.com/Lightricks/LTX-Video/commit/cfbb059629b99...

jeroenhd•9mo ago
Are weights even copyrightable? I'm not sure what these licenses do, other than placate corporate legal or pretend to have some kind of open source equivalent for AI stuff.
badsectoracula•9mo ago
Depends on how they're made. If they're fully automated and copyrights do not transfer from training data to trained weights (which is what everyone assumes at the moment) then they're the same as any machine output: not copyrightable, just like AI output isn't copyrightable.

However, if there is any active human involvement during training, one could claim that this makes it human work, so the weights are copyrightable. For example, not too long ago I wrote a simple upscaler for gamescope when I was learning how to implement neural networks, and I did it in a somewhat "manual" manner: running the training for a bit, testing output, modifying the code a bit, adding/changing training data, then picking up from where the training stopped and continuing from there, etc. So one could claim that the weights I ended up with are the result of my own creative process (though TBH I wouldn't, nor am I comfortable with the idea myself, since we're talking about a few hundred numbers).

gnarlouse•9mo ago
AI video seems like it needs to be outlawed man. I just don’t see how the marginal value (irrelevant) it creates for people like advertisers could ever outweigh the huge downside and risk it comes with for society at large.
gnarlouse•9mo ago
People downvoting me with no reply are clearly in advertising.
chmod775•9mo ago
Looks to be on roughly the same level as whatever ARK used for their AI trailer.

No good yet for anything professional, but more than enough for pornography.