NVIDIA 4090/5090 GPU 8GB+ VRAM (Full Version)
I have a 3070 w 8GB of VRAM.
Is there any reason I couldn’t run it (albeit slower) on my card?
But I don't own an AMD card to check; back when I did, it crashed randomly too often during machine learning work.
*OOM = Out Of Memory Error
Also shown: cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation
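For reference, the production setup the warning points at is roughly the following (a sketch assuming the standalone Tailwind CLI; file names like `input.css` are placeholders):

```shell
# Install Tailwind as a dev dependency and generate a config file
npm install -D tailwindcss
npx tailwindcss init

# Compile a minified stylesheet instead of loading the CDN script at runtime
npx tailwindcss -i ./input.css -o ./dist/output.css --minify
```

The CDN build is fine for prototyping, but it ships the whole framework and a JIT compiler to every visitor, which is why the console flags it.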
There are a couple of JS errors, which I presume keep the videos from appearing.
[0] https://github.com/Lightricks/LTX-Video/blob/main/configs/lt...
The parameter count is much more useful and concrete information than anything OpenAI or their competitors have put into the name of their models.
The parameter count gives you a heuristic for estimating if you can run this model on your own hardware, and how capable you might expect it to be compared to the broader spectrum of smaller models.
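As a rough illustration of that heuristic (my own back-of-the-envelope helper, not anything from the model card): weight memory is approximately parameter count times bytes per parameter, ignoring activations and other runtime overhead.

```python
def est_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough VRAM needed just to hold the weights, in GiB.

    bytes_per_param: 4 for fp32, 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit.
    Real usage is higher (activations, framework overhead, etc.).
    """
    return params_billion * 1e9 * bytes_per_param / 1024**3

# e.g. a 13B model at different precisions:
for bpp, name in [(4, "fp32"), (2, "fp16"), (1, "int8"), (0.5, "4-bit")]:
    print(f"13B in {name}: ~{est_vram_gb(13, bpp):.1f} GiB")
```

So a 13B model in fp16 wants roughly 24 GiB for weights alone, which is why quantized variants are what make 8 GB cards plausible.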
It also allows you to easily distinguish between different sizes of model trained in the same way, but with more parameters. It’s likely there is a higher parameter count model in the works and this makes it easy to distinguish between the two.
In this case it looks like this is the higher parameter count version; the 2B was released previously. (Not that it excludes them from making an even larger one in the future, although that seems atypical of video/image/audio models.)
re: GP: I sincerely wish 'Open'AI were this forthcoming with things like param count. If they have a 'b' in their naming, it's only to distinguish it from the previous 'a' version, and don't ask me what an 'o' is supposed to mean.
Wow.
[1]: https://www.youtube.com/watch?v=_18NBAbJSqQ