frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
426•klaussilveira•5h ago•97 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
21•mfiguiere•42m ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
775•xnx•11h ago•472 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
142•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
135•dmpetrov•6h ago•57 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
41•quibono•4d ago•3 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
246•vecti•8h ago•117 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
70•jnord•3d ago•4 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
180•eljojo•8h ago•124 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
314•aktau•12h ago•154 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
12•matheusalmeida•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
311•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
397•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
322•lstoll•12h ago•233 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
12•kmm•4d ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
109•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
186•i5heu•8h ago•129 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
236•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
976•cdrnsf•15h ago•415 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
144•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
17•gfortaine•3h ago•2 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
49•ray__•2h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
41•rescrv•13h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
35•lebovic•1d ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
52•SerCe•2h ago•42 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
108•coloneltcb•2d ago•71 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
39•nwparker•1d ago•10 comments

FFmpeg 8.0

https://ffmpeg.org/index.html#pr8.0
987•gyan•5mo ago

Comments

oblio•5mo ago
First of all: congratulations!!!

Secondly, just curious: any insiders here?

What changed? I see the infrastructure has been upgraded, this seems like a big release, etc. I guess there was a recent influx of contributors? A corporate donation? Something else?

exprez135•5mo ago
Not an insider, but I noticed that there is now a filter for using Whisper (C++) for audio transcription [1]. It looks like you provide the path to a model file [2].

[1]: https://github.com/ggml-org/whisper.cpp

[2]: https://git.ffmpeg.org/gitweb/ffmpeg.git/commit/13ce36fef98a...

ukuina•5mo ago
This is big news if it means realtime subtitle generation.
ranger_danger•5mo ago
In my experience, whisper (at least on my 3070 Ti) is not capable of high-quality real-time transcription. It takes a few seconds per second of audio, maybe.
perihelions•5mo ago
You missed out on the thread!

https://news.ycombinator.com/item?id=44886647 ("FFmpeg 8.0 adds Whisper support (ffmpeg.org)"—9 days ago, 331 comments)

pmarreck•5mo ago
I'm impressed anytime I have to use it (even if I have to study its man page again, use an LLM to construct the right incantation, or use a GUI that just builds the incantation based on visual options). It's becoming an indispensable transcoding multitool.

I think building some processing off of Vulkan 1.3 was the right move. (Aside, I also just noticed yesterday that Asahi Linux on Mac supports that standard as well.)

Culonavirus•5mo ago
> incantation

FFmpeg arguments, the original prompt engineering

Keyframe•5mo ago
with gemini-cli and claude-cli you can now prompt while it prompts ffmpeg, and it does work.
NSUserDefaults•5mo ago
Curious to see how quickly each LLM picks up the new codecs/options.
baq•5mo ago
the canonical (if that's the right word for a 2-year-old technique) solution is to paste the whole manual into the context before asking questions
xnx•5mo ago
Gemini can now load context from a URL in the API (https://ai.google.dev/gemini-api/docs/url-context), but I'm not sure if that has made it to the web interfaces yet.
stevejb•5mo ago
I use the Warp terminal and I can ask it to run --help and it figures it out
conradev•5mo ago
Yeah, you can give an LLM queries like “make this smaller with libx265 and add the hvc1 tag” or “concatenate these two videos” and it usually crushes it. They have a similar level of mastery over imagemagick, too!
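
For a concrete sense of the commands such prompts tend to produce, something along these lines usually does the job (a sketch; file names and quality settings are placeholders):

  # shrink with x265 and tag the stream so Apple players recognize it
  ffmpeg -i input.mp4 -c:v libx265 -crf 28 -tag:v hvc1 -c:a copy smaller.mp4

  # concatenate two files with identical codecs, no re-encode
  printf "file 'a.mp4'\nfile 'b.mp4'\n" > list.txt
  ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4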
turnsout•5mo ago
Yeah, LLMs have honestly made ffmpeg usable for me, for the first time. The difficulty in constructing commands is not really ffmpeg's fault—it's just an artifact of the power of the tool and the difficulties in shoehorning that power into flags for a single CLI tool. It's just not the ideal human interface to access ffmpeg's functionality. But keeping it CLI makes it much more useful as part of a larger and often automated workflow.
lukeschlather•5mo ago
It's funny because GPU stuff like what this article is about is where the LLMs totally fall apart. I can make any LLM produce volumes of hallucinations at the drop of a hat by asking it how to construct ffmpeg commands that use hardware acceleration.
profsummergig•5mo ago
Just seeking a clarification on how this would be done:

One would use gemini-cli (or claude-cli),

- and give a natural language prompt to gemini (or claude) on what processing needs to be done,

- with the correct paths to FFmpeg and the media file,

- and g-cli (or c-cli) would take it from there.

Is this correct?

RedShift1•5mo ago
Yes. It works amazingly well for ffmpeg.
profsummergig•5mo ago
Thank you.
logicalmind•5mo ago
Another option is to use a non-cli LLM and ask it to produce a script (bash/ps1) that uses ffmpeg to do X, Y, and Z to your video files. If using a chat LLM it will often provide suggestions or ask questions to improve your processing as well. I do this often and the results are quite good.
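
A typical script of that kind ends up looking roughly like the following (a sketch; the codec, CRF value and file pattern are placeholders the LLM would fill in for your actual X, Y and Z):

  #!/usr/bin/env bash
  # re-encode every .mp4 in the current directory to H.264, copying the audio untouched
  set -euo pipefail
  for f in *.mp4; do
    ffmpeg -i "$f" -c:v libx264 -crf 22 -preset medium -c:a copy "converted_$f"
  done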
jeanlucas•5mo ago
nope, that would be handling tar balls

ffmpeg right after

porridgeraisin•5mo ago
Personally I never understood the problem with tar balls.

The only options you ever need are tar -x, tar -c (x for extract and c for create). tar -l if you wanna list, l for list.

That's really it, -v for verbose just like every other tool if you wish.

Examples:

  tar -c project | gzip > backup.tar.gz
  cat backup.tar.gz | gunzip | tar -l
  cat backup.tar.gz | gunzip | tar -x
You never need anything else for the 99% case.
drivers99•5mo ago
Except it's tar -t to list, not -l
porridgeraisin•5mo ago
Whoops, lol. Well that's unfortunate.
sdfsdfgsdgg•5mo ago
> tar -l if you wanna list, l for list.

Surely you mean -t if you wanna list, t for lisT.

l is for check-Links.

     -l, --check-links
             (c and r modes only) Issue a warning message unless all links to each file are archived.
And you don't need to uncompress separately. tar will detect the correct compression algorithm and decompress on its own. No need for that gunzip intermediate step.
porridgeraisin•5mo ago
> -l

Whoops, lol.

> on its own

Yes.. I'm aware, but that's more options, unnecessary too, just compose tools.

sdfsdfgsdgg•5mo ago
That's the thing. It’s not more options. During extraction it picks the right algorithm automatically, without you needing to pass another option.
bigstrat2003•5mo ago
The problem is it's very non-obvious and thus is unnecessarily hard to learn. Yes, once you learn the incantations they will serve you forever. But sit a newbie down in front of a shell and ask them to extract a file, and they struggle because the interface is unnecessarily hard to learn.
encom•5mo ago
It's very similar to every other CLI program, I really don't understand what kind of usability issue you're implying is unique to tar?
mrguyorama•5mo ago
As has been clearly demonstrated in this very thread, why is "Please list what files are in this archive" the option "-t"?

Principle of least surprise and all that.

encom•5mo ago
And why is -v the short option for --invert-match in grep, when that's usually --verbose or --version in lots of other places. These idiosyncrasies are hardly unique to tar.
tombert•5mo ago
Yeah I never really understood why people complain about tar; 99% of what you need from it is just `tar -xvf blah.tar.gz`.
aidenn0•5mo ago
You forgot the -z (or -a with a recent gnutar).
adastra22•5mo ago
It’s no longer needed. You can leave it out and it auto-detects the file format.
CamperBob2•5mo ago
What value does tar add over plain old zip? That's what annoys me about .tar files full of .gzs or .zips (or vice versa) -- why do people nest container formats for no reason at all?

I don't use tape, so I don't need a tape archive format.

diggernet•5mo ago
A tar of gzip or zip files doesn't make sense. But gzipping or zipping a tar does.

Gzip only compresses a single file, so .tar.gz lets you bundle multiple files. You can do the same thing with zip, of course, but...

Zip compresses individual files separately in the container, ignoring redundancies between files. But .tar.gz (and .tar.zip, though I've rarely seen that combination) bundles the files together and then compresses them, so can get better compression than .zip alone.
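
A contrived way to see the effect (a sketch; exact sizes depend on the zip and gzip builds): duplicate one small file many times, then compare. zip compresses each copy independently, while gzip sees the whole tar stream and can back-reference earlier copies.

  mkdir demo
  head -c 8192 /dev/urandom > demo/f1                    # 8 KB of incompressible data
  for i in $(seq 2 100); do cp demo/f1 "demo/f$i"; done  # 100 identical copies
  zip -rq demo.zip demo                                  # roughly 100x the size of one file
  tar -czf demo.tar.gz demo                              # close to the size of one file
  ls -l demo.zip demo.tar.gz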

fullstop•5mo ago
zip doesn't retain file ownership or permissions.
diggernet•5mo ago
Good point. And if I remember right, tar allows longer paths than zip.
pmarreck•5mo ago
I think the Mac version may?
beagle3•5mo ago
The zip directory itself is uncompressed, and if you have lots of small files with similar names, zipping the zip makes a huge difference. IIRC in the HVSC (C64 SID music archive), the outer zip used to save another 30%.
dns_snek•5mo ago
Plain old zip is tricky to parse correctly. If you search for them, you can probably find about a dozen rants about all the problems of working with ZIP files.
jeanlucas•5mo ago
it was just a reference to xkcd#1168

I wasn't expecting the downvotes for an xkcd reference

BeepInABox•5mo ago
For anyone curious, unless you are running a 'tar' binary from the stone ages, just skip the gunzip and cat invocations. Replace .gz with .xz or other well known file ending for different compression.

  Examples:
    tar -cf archive.tar.gz foo bar  # Create archive.tar.gz from files foo and bar.
    tar -tvf archive.tar.gz         # List all files in archive.tar.gz verbosely.
    tar -xf archive.tar.gz          # Extract all files from archive.tar.gz
mkl•5mo ago
> tar -cf archive.tar.gz foo bar

This will create an uncompressed .tar with the wrong name. You need a z option to specify gzip.

Intermernet•5mo ago
Apparently this is now automatically determined by the file name, but I still habitually add the flag. 30 years of muscle memory is hard to break!
mkl•5mo ago
I tried it to check before making the comment. In Ubuntu 25.04 it does not automatically enable compression based on the filename. The automatic detection when extracting is based on file contents, not name.
BenjiWiebe•5mo ago
If you add the a flag (for auto), it will choose the right compression based on the file name. For example,

  tar -caf foo.tar.xz foo

will produce an xz-compressed tarball.

themafia•5mo ago

    gzip -dc backup.tar.gz | tar -x
You can skip a step in your pipeline.
sho_hn•5mo ago
nope, it's using `find`.
beala•5mo ago
Tough crowd.

fwiw, `tar xzf foobar.tgz` = "_x_tract _z_e _f_iles!" has been burned into my brain. It's "extract the files" spoken in a Dr. Strangelove German accent

Better still, I recently discovered `dtrx` (https://github.com/dtrx-py/dtrx) and it's great if you have the ability to install it on the host. It calls the right commands and also always extracts into a subdir, so no more tar-bombs.

If you want to create a tar, I'm sorry but you're on your own.

diggan•5mo ago
I used tar/unzip for decades I think, before moving to 7z, which handles all formats I throw at it and has the same switch for decompressing into a specific directory, instead of having to remember which one of tar and unzip uses -d and which one uses -C.

"also always extracts into a subdir" sounds like a nice feature though, thanks for sharing another alternative!

mkl•5mo ago
> tar xzf foobar.tgz

You don't need the z, as xf will detect which compression was used, if any.

Creating is no harder, just use c for create instead, and specify z for gzip compression:

  tar czf archive.tar.gz [filename(s)]
Same with listing contents, with t for tell:

  tar tf archive.tar.gz
fullstop•5mo ago
I have so much of tar memorized. cpio is super funky to me, though.
fuzztester•5mo ago
cpio is not that hard.

A common use case is:

  $ cpio -pdumv args 
See:

  $ man cpio 
and here is an example from its Wikipedia page, under the "Operation and archive format" section, under the Copy subsection:

Copy

Cpio supports a third type of operation which copies files. It is initiated with the pass-through option flag (p). This mode combines the copy-out and copy-in steps without actually creating any file archive. In this mode, cpio reads path names on standard input like the copy-out operation, but instead of creating an archive, it recreates the directories and files at a different location in the file system, as specified by the path given as a command line argument.

This example copies the directory tree starting at the current directory to another path new-path in the file system, preserving files modification times (flag m), creating directories as needed (d), replacing any existing files unconditionally (u), while producing a progress listing on standard output (v):

$ find . -depth -print | cpio -p -dumv new-path

fullstop•5mo ago
I think that it's the fact that it requires a pipe to work and that you add files by feeding stdin that throws me for a loop.

I also use it very infrequently compared to tar -- mostly in conjunction with swupdate. I've also run into file size limits, but that's not really a function of the command line interface to the tool.

mrandish•5mo ago
I'd also include Regex in the list of dark arts incantations.
RedShift1•5mo ago
I'm ok with regex, but the ffmpeg manpage, it scares me...
quectophoton•5mo ago
Ffmpeg was designed to be unusable if it falls into enemy hands.
falloon•5mo ago
I defer understanding FFMPEG arguments to the LLMs.
zvr•5mo ago
I am perfectly at home with regexp, but ffmpeg, magick, and jq are still on the list to master.
lukeschlather•5mo ago
Regex is only difficult because it's complicated; the primitives are all sensibly arranged and predictable. FFmpeg is layers of dark magic where the primitives are often inscrutable before you compose them.
agos•5mo ago
OT, but yours has to be the best username on this site. Props.
bobsmooth•5mo ago
Culón is Spanish for big-bottomed, for anyone else wondering.
jjcm•5mo ago
LLMs are a great interface for ffmpeg. There are tons of tools out there that can help you run it with natural language. Here's my personal script: https://github.com/jjcm/llmpeg
pmarreck•5mo ago
I wrote a command “please” that allows me to say “please use ffmpeg to do whatever” and it generates the command with confirmation
agys•5mo ago
LLMs and complex command line tools like FFmpeg and ImageMagick are a perfect combination and work like magic…

It’s really the dream UI/UX from sience fiction movies: “take all images from this folder and crop 100px away except on top, saturate a bit and save them as uncompressed tiffs in this new folder, also assemble them in a video loop, encode for web”.

Barrin92•5mo ago
it can work but it's far from science fiction. LLMs tend to produce extremely subpar if not buggy ffmpeg code. They'll routinely do things like put the file parameter before the start time which needlessly decodes the entire video, produce wrong bitrates, re-encode audio needlessly, and so on.

If you don't care enough about potential side effects to read the manual it's fine, but a dream UX it is not because I'd argue that includes correctness.

amenhotep•5mo ago
ffmpeg -i in -ss start -to end out is wrong and bad? You can -ss before -i? TIL!
pmarreck•5mo ago
what... wait... In what universe do people write argument processing that doesn't process the content of all the arguments upfront AND THEN do things in the right order??
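
For anyone following along, the distinction is input seeking versus output seeking: -ss placed before -i seeks in the input (fast, jumps near a keyframe before decoding starts), while -ss after -i decodes and discards everything up to that point. A sketch with placeholder times:

  # input seeking: fast; when stream-copying, the cut lands on a keyframe
  ffmpeg -ss 00:10:00 -i in.mp4 -t 30 -c copy clip.mp4

  # output seeking: decodes (and throws away) the first ten minutes
  ffmpeg -i in.mp4 -ss 00:10:00 -t 30 -c:v libx264 -c:a aac clip.mp4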
xandrius•5mo ago
Had to do exactly that with a bunch of screenshots I took that happened to include a bunch of unnecessary parts of the screen.

A prompt to ChatGPT and a command later and all were nicely cropped in a second.

The dread of doing it by hand and having it magically there a minute later is absolutely mind blowing. Even just 5 years ago, I would have just done it manually, as writing the code for this task would have definitely taken longer.

euroderf•5mo ago
Are you accusing Blade Runner of infringing FFmpeg IP ?
0xbeefcab•5mo ago
Linking a previous discussion to FFMPEG's inclusion of whisper in this release: https://news.ycombinator.com/item?id=44886647

This seemed to be interesting to users of this site. tl;dr they added support for whisper, an OpenAI model for speech-to-text, which should allow autogeneration of captions via ffmpeg

Culonavirus•5mo ago
these days most movies and series already come out with captions, but you know what does not, given the vast amount of it?... ;)

yep, finally the deaf will be able to read what people are saying in a porno!

0xbeefcab•5mo ago
True, but also it can be hard to find captions in languages besides English for some lesser known movies/shows
yieldcrv•5mo ago
And also pirated releases are super weird and all over the place with subtitles and video player compatibility

This could streamline things

bobsmooth•5mo ago
There's websites where you can download subtitles. Usually from very obviously pirated releases.
PokestarFan•5mo ago
This is because blurays ship their subtitles as a bunch of text images. So pirates have 3 options:

1. Just copy them over from the Bluray. This lacks support in most client players, so you'll either need to download a player that does, or use something like Plex/Jellyfin, which will run FFMpeg to transcode and burn the picture subtitles in before sending it to the client.

2. Run OCR on the Bluray subtitles. Not perfect.

3. Steal subtitles from a streaming service release (or multiple) if it exists.

bachittle•5mo ago
Heads up: Whisper support depends on how your FFmpeg was built. Some packages will not include it yet. Check with `ffmpeg -buildconf` or `ffmpeg -filters | grep whisper`. If you compile yourself, remember to pass `--enable-whisper` and give the filter a real model path.
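
If the filter is present, an invocation looks roughly like this (a sketch: the option names are my reading of the filter documentation and the model path is a placeholder, so confirm with `ffmpeg -h filter=whisper` before relying on it):

  ffmpeg -i talk.mp4 -vn \
    -af "whisper=model=ggml-base.en.bin:destination=talk.srt:format=srt" \
    -f null -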
JadoJodo•5mo ago
I don't know a huge amount about video encoding, but I presume this is one of those libraries outlined in xkcd 2347[0]?

[0] - https://xkcd.com/2347/

zhengyi13•5mo ago
Yes, this is a pretty fundamental building block; just not so rickety.
0xbeefcab•5mo ago
Yeah, basically anytime video or audio is being recorded, played, or streamed, it's from ffmpeg. It runs on a couple of planets [0], and on most devices (maybe?)

[0] https://link.springer.com/article/10.1007/s11214-020-00765-9

neckro23•5mo ago
Not necessarily. A lot of video software either leverages the Windows/MacOS system codecs (ex. Media Player Classic, Quicktime) or proprietary vendor codecs (Adobe/Blackmagic).

Linux doesn't really have a system codec API though so any Linux video software you see (ex. VLC, Handbrake) is almost certainly using ffmpeg under the hood (or its foundation, libavcodec).

deaddodo•5mo ago
FFMpeg is definitely fairly ubiquitous, but you are overstating its universality quite a bit. There are alternatives that utilize Windows/macOS's native media frameworks, proprietary software that utilizes bespoke frameworks, and libraries that function independently of ffmpeg that offer similar functionality.

That being said, if you put down a pie chart of media frameworks (especially for transcoding or muxing), ffmpeg would have a significant share of that pie.

aidenn0•5mo ago
Pretty much.

It also was originally authored by the same person who did lzexe, tcc, qemu, and the current leader for the large text compression benchmark.

Oh, and for most of the 2010's there was a fork due to interpersonal issues on the team.

syockit•5mo ago
Brings back memories. There was a time when the fork, libav, became the default on Ubuntu, and ffmpeg commands would say "this command is no longer maintained" or so. That was where I learned that there was a fork, and I thought ffmpeg was going to die as a result because there was heavy development activity on libav compared to ffmpeg initially. Surprise, ffmpeg outlived its fork!

This post talks about the situation back then: https://blog.pkh.me/p/13-the-ffmpeg-libav-situation.html

tombert•5mo ago
Yeah I think pretty much everything that involves video on Linux or FreeBSD in 2025 involves FFmpeg or Gstreamer, usually the former.

It’s exceedingly good software though, and to be fair I think it’s gotten a fair bit of sponsorship and corporate support.

_kb•5mo ago
It's the big flat one at the bottom.
joshuat•5mo ago
Some Netflix devs are going to have a busy sprint
elektor•5mo ago
For those out of the loop, can you please explain your comment?
henryfjordan•5mo ago
Netflix uses FFMPEG, will have to update
Am4TIfIsER0ppos•5mo ago
Have to? They don't have a kill switch in there, probably.
TeeMassive•5mo ago
And some influencers ;)
hexfish•5mo ago
Indeed: https://m.youtube.com/watch?v=YVI6SCtVu4c
eviks•5mo ago
Why would they be tied to this release number when they can build themselves at their own schedule?

> Note that these releases are intended for distributors and system integrators. Users that wish to compile from source themselves are strongly encouraged to consider using the development branch

jeanlucas•5mo ago
cheers for one more release, hope it gets attention and necessary funding
brcmthrowaway•5mo ago
How much ARM acceleration vs x86_64?
ekianjo•5mo ago
Vulkan based encoders and decoders are super exciting!
larodi•5mo ago
Is anyone else of the opinion that ffmpeg now ranks 4th as the most used lib, after ssl, zlib, and sqlite... given video is like omnipresent in 2025?
pledg•5mo ago
libcurl?
encom•5mo ago
libc :D
npteljes•5mo ago
It's up there in the hall of fame, that's for sure!
zaik•5mo ago
You can check, at least for Arch Linux: https://pkgstats.archlinux.de/packages
_kb•5mo ago
You can pull the nix logs from here: https://github.com/NixOS/infra/blob/main/metrics/fastly/READ...

Could be an interesting data source to explore that opinion.

PokestarFan•5mo ago
FFMpeg is probably not as high up, since video processing only needs to be done on the servers that receive media. I doubt most phones are running FFMpeg on video.
larodi•5mo ago
Well I would imagine portions of it are on every mobile device, and also Netflix and the like surely use it to encode video.
neRok•5mo ago
Chrome and Firefox use FFmpeg libraries to decode media, so it's in more places than you might think! (But also, ChatGPT said it's not used in Android browser apps because they would use Android's "native" media stack).
zvr•5mo ago
Curl should be up there, and "SSL" might be lower because different implementations would split the numbers.
larodi•5mo ago
Curl perhaps yes, but it employs zlib and libssl to operate, right?
zvr•5mo ago
Yes, it uses zlib and some implementation of SSL.

My earlier comment about "SSL" is that the actual library might be OpenSSL, BoringSSL, WolfSSL, GnuTLS, or any one of a number of others. So the number of uses of each one is smaller than the total number of "SSL" uses.

IshKebab•5mo ago
I think there's quite a few above it. Qt, libpng, libusb etc.
account42•5mo ago
libpng and libjpeg I can see.

But Qt and libusb above ffmpeg? No way.

GZGavinZhao•5mo ago
*sad curl noises
xnx•5mo ago
Changelog: https://github.com/FFmpeg/FFmpeg/blob/master/Changelog
y_sellami•5mo ago
about time vulkan got into the game.
qmr•5mo ago
Exciting news.

https://youtu.be/9kaIXkImCAM?si=b_vzB4o87ArcYNfq

outside1234•5mo ago
Is this satire, serious, or both. :)
KolmogorovComp•5mo ago
It’s satire done seriously
patchtopic•5mo ago
you know I cut a whole documentary in ffmpeg?
oldgregg•5mo ago
LLMs have really made using ffmpeg easy -- the command line options are so expansive and obscure that it's so nice to just tell it what you want and have it spit out a crazy ffmpeg command.
instagraham•5mo ago
I remember saving my incantation to download and convert a youtube playlist (in the form of a txt file with a list of URLs) and this being the only way to back up Chrome music bookmark folders.

Then it stopped working until I updated youtube-dl and then that stopped working once I lost the incantation :<

noman-land•5mo ago
Check out yt-dlp. It works great.
TeeMassive•5mo ago
yt-dlp works really well, and not only for YouTube ;)
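
For the playlist-in-a-text-file case mentioned above, the yt-dlp incantation is short enough to reconstruct (a sketch; file names are placeholders):

  # read URLs from a text file, keep only the audio, convert to mp3
  yt-dlp -a urls.txt -x --audio-format mp3 -o "%(title)s.%(ext)s"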
Dwedit•5mo ago
Has anyone made a good GUI frontend for accessing the various features of FFMPEG? Sometimes you just want to remux a video without doing any transcoding, or join several video and audio streams together (same codecs).
joenot443•5mo ago
Handbrake fits the bill, I think!

It's a great tool. Little long in the tooth these days, but gets the job done.

selectodude•5mo ago
Handbrake receives pretty regular updates.
kevinsync•5mo ago
Seconded, HandBrake[0] is great for routine tasks / workflows. The UI could be simplified just a tad for super duper simple stuff (ex. ripping a multi-episode tv show disc but don't care about disc extras? you kind of have to hunt and poke based on stream length to decide which parts are the actual episodes. The app itself could probably reliably guess and present you with a 1-click 'queue these up' flow for instance) but otherwise really a wonderful tool!

Past that, I'm on the command line haha

[0] https://handbrake.fr

balder1991•5mo ago
Handbrake can’t even reencode a video keeping the audio intact.
pseudosavant•5mo ago
I haven't used a GUI I like, but LLMs like ChatGPT have been so good for solving this for me. I tell it exactly what I need it to do and it produces the ffmpeg command to do it.
ricardojoaoreis•5mo ago
You can use mkvtoolnix for that and it has a GUI
patapong•5mo ago
I have found the best front-end to be ChatGPT. It is very good at figuring out the commands needed to accomplish something in FFmpeg, from my natural description of what I want to do.
AlienRobot•5mo ago
It would need to be a non-linear, node-based editor. Pretty much all open source video editors are just FFMPEG frontends, e.g. Kdenlive.
jazzyjackson•5mo ago
check out https://github.com/mifi/lossless-cut
mrguyorama•5mo ago
Shotcut is an open source Video production toolkit that is basically just a really nice interface for generating ffmpeg commands.

https://www.shotcut.org/

toxicosmos•5mo ago
Shotcut uses the MLT Multimedia Framework. It is not just a "really nice interface for generating ffmpeg commands"

https://www.mltframework.org/

mkl•5mo ago
That framework seems to be based on ffmpeg: https://www.mltframework.org/faq/
TiredOfLife•5mo ago
ChatGPT and other llms
cubefox•5mo ago
Pretty sure ChatGPT counts as a CLI, not as a GUI.
1bpp•5mo ago
CLII (command line interface interface)
onehair•5mo ago
There is handbrake, vidcoder and all sorts of frontends.
filmgirlcw•5mo ago
For Mac users, ffWorks [1] is an amazing frontend for FFmpeg that surfaces most of the features but with a decent GUI. It’s batchable and you can set up presets too. It’s one of my favorite apps and the developer is very responsive.

Handbrake and LosslessCut are great too. But in addition to donating to FFmpeg, I pay for ffWorks because it really does offer a lot of value to me. I don’t think there is anything close to its polish on other platforms, unfortunately.

[1]: https://www.ffworks.net/index.html

janandonly•5mo ago
Is it worth €22?

If it was priced €1-5 I would just buy it, I guess. But this.

neRok•5mo ago
Joining videos together sounds easy, but there's tons of ways it can go wrong! You've got time bases to consider, start offsets, frame/overscan crops, fps differences (constant vs variable), etc. And even though your videos might both be h264, one might be encoded with B frames and open GOP, and the other not, and that might cause playback issues in certain circumstances. Similarly, both could be AAC audio, but one is 48kHz sample rate, the other 44.1kHz.

Someone else mentioned the Lossless-Cut program, which is pretty good. It has a merge feature with a compatibility checker that can detect a few issues. But I find transcoding the separate videos to MPEG-TS before joining them can get around many problems. If you fire up a RAM-Disk, it's a fast task.

  ffmpeg -i video1.mp4 -c copy -start_at_zero -fflags +genpts R:\video1.ts;
  ffmpeg -i video2.mp4 -c copy -start_at_zero -fflags +genpts R:\video2.ts;
  ffmpeg -i "concat:R:\video1.ts|R:\video2.ts" -c copy -movflags +faststart R:\merged.mp4
avhon1•5mo ago
Every frontend offers only a small subset of ffmpeg's total features, making them usable only for specific tasks.
balder1991•5mo ago
Unfortunately. As an example, so many people recommend Handbrake, which doesn’t even have the option to simply copy the audio stream.
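
For that particular case the CLI is a one-liner (a sketch; the codec and quality settings are placeholders): re-encode the video and pass the audio through untouched.

  ffmpeg -i input.mkv -c:v libx264 -crf 20 -c:a copy output.mkv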
josteink•5mo ago
Nice! Anyone have any idea how and when this will affect downstream projects like yt-dlp, jellyfin, etc? Especially with regard to support for HW-acceleration?
fleabitdev•5mo ago
Happy to hear that they've introduced video encoders and decoders based on compute shaders. The only video codecs widely supported in hardware are H.264, H.265 and AV1, so cross-platform acceleration for other codecs will be very nice to have, even if it's less efficient than fixed-function hardware. The new ProRes encoder already looks useful for a project I'm working on.

> Only codecs specifically designed for parallelised decoding can be implemented in such a way, with more mainstream codecs not being planned for support.

It makes sense that most video codecs aren't amenable to compute shader decoding. You need tens of thousands of threads to keep a GPU busy, and you'll struggle to get that much parallelism when you have data dependencies between frames and between tiles in the same frame.

I wonder whether encoders might have more flexibility than decoders. Using compute shaders to encode something like VP9 (https://blogs.gnome.org/rbultje/2016/12/13/overview-of-the-v...) would be an interesting challenge.

mtillman•5mo ago
Exciting! I am consistently blown away by the talent of the ffmpeg maintainers. This is fairly hard stuff in my opinion and they do it for free.
droopyEyelids•5mo ago
Could you explain more about it? I assumed the maintainers are doing it as part of their jobs for a company (completely baseless assumption)
refulgentis•5mo ago
Reupvoted you from gray because I don't think that's fair, but I also don't know how much there is to add. As for why I'm contributing: I haven't been socially involved in the ffmpeg dev community in a decade, but it is a very reasonable floor to assume it's 80% not full-time paid contributors.
happymellon•5mo ago
> Happy to hear that they've introduced video encoders and decoders based on compute shaders.

This is great news. I remember being laughed at when I initially asked whether the Vulkan enc/dec were generic because at the time it was all just standardising interfaces for the in-silicon acceleration.

Having these sorts of improvements available for legacy hardware is brilliant, and hopefully a first route that we can use to introduce new codecs and improve everyone's QOL.

gmueckl•5mo ago
I haven't even had a cursory look at the state of the art in decoders for 10+ years. But my intuition would say that decoding for display could profit a lot from GPU acceleration for the later parts of the process, when there is already pixel data of some sort involved. Then I imagine that the initial decompression steps could stay on the CPU and the decompressed, but still (partially) encoded, data is streamed to the GPU for the final transformation steps and application to whatever I-frames and other base images there are. Steps like applying motion vectors, iDCT... look embarrassingly parallel at a pixel level to me.

When the resulting frame is already in a GPU texture, displaying it has fairly low overhead.

My question is: how wrong am I?

fleabitdev•5mo ago
I'm not an expert, but in the worst case, you might need to decode dense 4x4-pixel blocks which each depend on fully-decoded neighbouring blocks to their west, northwest, north and northeast. This would limit you to processing `frame_height * 4` pixels in parallel, which seems bad, especially for memory-intensive work. (GPUs rely on massive parallelism to hide the latency of memory accesses.)

Motion vectors can be large (for example, 256 pixels for VP8), so you wouldn't get much extra parallelism by decoding multiple frames together.

However, even if the worst-case performance is bad, you might see good performance in the average case. For example, you might be able to decode all of a frame's inter blocks in parallel, and that might unlock better parallel processing for intra blocks. It looks like deblocking might be highly parallel. VP9, H.265 and AV1 can optionally split each frame into independently-coded tiles, although I don't know how common that is in practice.

dtf•5mo ago
These release notes are very interesting! I spent a couple of weeks recently writing a ProRes decoder using WebGPU compute shaders, and it runs plenty fast enough (although I suspect Apple has some special hardware they make use of for their implementation). I can imagine this path also working well for the new Android APV codec, if it ever becomes popular.

The ProRes bitstream spec was given to SMPTE [1], but I never managed to find any information on ProRes RAW, so it's exciting to see software and compute implementations here. Has this been reverse-engineered by the FFMPEG wizards? At first glance of the code, it does look fairly similar to the regular ProRes.

[1] https://pub.smpte.org/doc/rdd36/20220909-pub/rdd36-2022.pdf

averne_•5mo ago
Do you have a link for that? I'm the guy working on the Vulkan ProRes decoder mentioned as "in review" in this changelog, as part of a GSoC project.

I'm curious wrt how a WebGPU implementation would differ from Vulkan. Here's mine if you're interested: https://github.com/averne/FFmpeg/tree/vk-proresdec

dtf•5mo ago
I don't have a link to hand right now, but I'll try to put one up for you this weekend. I'm very interested in your implementation - thanks, will take a good look!

Initially this was just a vehicle for me to get stuck in and learn some WebGPU, so no doubt I'm missing lots of opportunities for optimisation - but it's been fun as much as frustrating. I leaned heavily on the SMPTE specification document and the FFMPEG proresdec.c implementation to understand and debug.

averne_•5mo ago
No problem, just be aware there's a bunch of optimizations I haven't had time to implement yet. In particular, I'd like to remove the reset kernel, fuse the VLD/IDCT ones, and try different strategies and hw-dependent specializations for the IDCT routine (AAN algorithm, packed FP16, cooperative matrices).
emersion•5mo ago
Pretty much reverse engineered: https://mk.pars.ee/notes/a9ihgynpvdo6003w
mappu•5mo ago
NVENC/NVDEC could do part of the processing on the shader cores instead of the fixed-function hardware.
ok123456•5mo ago
Finally! RealVideo 6 support.
mappu•5mo ago
Kostya did a lot of the RV60/RMHD reverse engineering work for NihAV back in 2018! His blog also talks about the GPL violations from Real.

The old RV40 had some small advantages over H264. At low bitrates, RV40 always seemed to blur instead of block, so it got used a lot for anime content. CPU-only decoding was also more lightweight than even the most optimized H264 decoder (CoreAVC with the inloop deblocking disabled to save even more CPU).

waihtis•5mo ago
T3.gg in shambles
wordofx•5mo ago
Wouldn’t be surprised if Theo did a video about investing in ffmpeg and how he revived it and has been consulting to the developers and we should bow down and praise him for resurrecting ffmpeg.
waihtis•5mo ago
Hahah

I rarely take enjoyment from online battles but that one was a very pleasing putdown

zzzeek•5mo ago
ffmpeg is a treasure to the open source and audio technology communities. The tool cuts right through all kinds of proprietary and arcane roadblocks presented by various codecs and formats and it's clear a tremendous amount of work goes into keeping it all working. The CLI is of course quite opaque and the documentation for various features is often terse, but it's still the only tool on any platform anywhere that will always get you what you need for video and audio processing without ever running up against some kind of commercial paywall.
np1810•5mo ago
Thank you FFmpeg developers and contributors!

If there's anything that needs audio/video automation, I've always turned to FFmpeg; it's such a crucial and indispensable tool, and so many online video tools use it and are generally a UI wrapper around this wonderful tool. TIL - there's FFmpeg.Wasm also [0].

In Jan 2024, I used it to extract frames from a 1993 anime movie in 15-minute video segments, upscaled them using Real-ESRGAN-ncnn-vulkan [1], then recombined the output frames into the final 4K upscaled anime [2]. FWIW, if I had built a UI on this workflow it could've become a tool similar to Topaz AI, which is quite popular these days.

[0]: https://github.com/ffmpegwasm/ffmpeg.wasm

[1]: https://github.com/xinntao/Real-ESRGAN-ncnn-vulkan

[2]: https://files.horizon.pics/3f6a47d0-429f-4024-a5e0-e85ceb0f6...
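
For reference, the extraction and reassembly halves of such a workflow are plain ffmpeg (a sketch; the frame rate, segment offsets and file names are placeholders):

  # pull one 15-minute segment out as numbered PNG frames
  mkdir -p frames
  ffmpeg -ss 00:15:00 -t 900 -i movie.mkv frames/%06d.png

  # ...upscale frames/ into upscaled/ with Real-ESRGAN-ncnn-vulkan...

  # reassemble the upscaled frames, taking the audio from the original segment
  ffmpeg -framerate 23.976 -i upscaled/%06d.png -ss 00:15:00 -t 900 -i movie.mkv \
    -map 0:v -map 1:a -c:v libx264 -crf 18 -pix_fmt yuv420p -c:a copy segment_4k.mkv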

idoubtit•5mo ago
Even when I don't use ffmpeg directly, I often use tools that embed it. For instance, I've recently upscaled an old anime, ripped from a low quality DVD. I used k4yt3x/video2x, which was good enough for what I wanted, and was easy to install. It embedded libffmpeg, so I could use the same arguments for encoding:

    Video2X-x86_64.AppImage -i "$f" \
     -c libvpx-vp9 -e crf=34 -o "${f/480p/480p_upscale2x}" \
     -p realcugan -s 2 --noise-level 1
To find the best arguments for upscaling (last line from above), I first used ffmpeg to extract a short scene that I encoded with various parameter sets. Then I used ffmpeg to capture still images so that I could find the best set.
bena•5mo ago
About 10-ish years ago, my then employer was talking to some other company about helping them get their software to release. They had what they believed to be a proprietary compression system that would compress and playback 4k video with no loss in quality.

They wouldn't let us look into the actual codecs or compression, they just wanted us to build a front-end for it.

I got to digging and realized they were just re-encoding the video through FFMpeg with a certain set of flags and options. I was able to replicate their results by just running FFMpeg.

They stopped talking to us.

Telaneo•5mo ago
One more taking part in a time-honoured tradition of taking someone else's thing, adding your own dipping mustard (if even that), and calling it your own.

A new chatbot? Another ChatGPT wrapper. A new Linux Distro. Another Arch with a preinstalled desktop environment. A new video downloader? It's yt-dlp with a GUI.

If they were just honest from the get-go, it'd be fine, but some people aren't.

np1810•5mo ago
> If they were just honest from the get-go, it'd be fine, but some people aren't.

If it were just individuals doing it, maybe it would've been somewhat digestible. But it's a pity that sometimes even trillion-dollar companies do it.

Pre-LLM days, the doers were at least aware of their copy/clone/wrapper, but now it's happening unintentionally when LLMs give out modified versions of someone else's code without carrying over its license, because AFAIK LLMs do not automatically add licensing details of libraries used inside their outputted code, or do they?

brookst•5mo ago
Trillion dollar companies are made up of individuals. People don’t start being honest just because they sign on with a Fortune 500.
ChrisMarshallNY•5mo ago
There’s folks that make entire careers, from tuning ffmpeg.

I’d suspect that this is exactly the type of thing that could be achieved with AI tools, though, so that might be a nervous bunch of people.

pwn0•5mo ago
I tried the exact same steps you did with the exact same movie but with Topaz AI and got very bad results, which made me abandon the project. I'd be grateful if you could share the upscaled movie.
balder1991•5mo ago
I always assumed Topaz AI would do a more sophisticated upscaling while FFMpeg only has simpler algorithms. Isn’t that the case?
shmerl•5mo ago
Nice! Looking forward to trying WHIP/WebRTC based streaming to replace SRT.
Sean-Der•5mo ago
What are you using WHIP against today?

I am curious about adoption and features that would make big difference to users :)

shmerl•5mo ago
I'm not using it yet, I'm using SRT for LAN streaming, and it was hard to reduce latency. I managed to bring it down to just a bit below 1 second, but supposedly WHIP can help to make it very low which would be neat.
JimmaDaRustla•5mo ago
broadcast box
javier2•5mo ago
What is the performance like for AV1 / h264 in vulkan vs not vulkan?
cronelius•5mo ago
August 23nd
gyan•5mo ago
corrected
1zael•5mo ago
The Vulkan compute shader implementations are cool...particularly for FFv1 and ProRes RAW. Given that these bypass fixed-function hardware decoders entirely, I'm curious about the memory bandwidth implications. FFv1's context-adaptive arithmetic coding seems inherently sequential, yet they're achieving "very significant speedups."

Are they using wavefront/subgroup operations to parallelize the range decoder across multiple symbols simultaneously? Or exploiting the slice-level parallelism with each workgroup handling independent slices? The arithmetic coding dependency chain has traditionally been the bottleneck for GPU acceleration of these codecs.

I'd love to hear from anyone who's profiled the compute shader implementation - particularly interested in the occupancy vs. bandwidth tradeoff they've chosen for the entropy decoding stage.

scyzoryk_xyz•5mo ago
It must have been maybe 5 years ago a dev showed me FFMPEG and it blew my mind for dealing with video.

When I later wound up managing video post production workflows my CMD line or terminal use dropped a few jaws.

I've since been relying on LLM's to make FFMPEG commands so I don't even think about it.

cogogo•5mo ago
I had a bad experience with ChatGPT, I think maybe version 3, and stopped trying. My thought was the training examples were sparse, given how hard a time I had finding what I needed via search. You've encouraged me to revisit (and yes I know models have made big gains since then).
scyzoryk_xyz•5mo ago
Well. Obviously if you have the attention span it probably makes most sense to actually learn the flags and teach yourself to write FFMPEG commands. That's the serious way to do it if you have a serious workflow.

But I've found it easier to brute force with LLM's because, like, every time I had to do video work it'd be something different. Prompts like 'I need to remove this and this and change the resolution from this to that', 'I need it to be this fps or that', or even 'I want this file to weigh this much', or 'I need to split these two' or 'combine those three'. It'll usually get you a chunk of the way there. Another prompt or two of double-checking, copy paste into CMD line or terminal, and either brr or error, copy paste 'what does this mean'. 3 minutes later it's doing the thing you wanted, and you're more or less understanding what it's giving you.

But I keep an Obsidian file with a bunch of commands that made me happy before. Dumping that into the context window helps.

Another one has been multi camera, multi screen recordings with OBS. I discovered it was easier to do the math, make a big canvas, record all the feeds onto those so I don't have to think about syncing anything later. Then brr an FFMPEG command to output that 1920x1080 and that 3840x2160

Whisper is great with that too - raw recording, output just the audio. 'give me whisper command to get this as srt'. Then 'now render subtitles onto this video'
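
Those two steps, roughly (a sketch; whisper here is the openai-whisper CLI, the model choice is arbitrary, and burning subtitles needs an ffmpeg build with libass):

  # transcribe to an .srt next to the audio
  whisper recording.wav --model small --output_format srt

  # render the subtitles into the video
  ffmpeg -i recording.mp4 -vf "subtitles=recording.srt" -c:a copy subtitled.mp4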

There was an experiment I tried that kinda almost worked where I had this boring recording of some conversation but needed to extract scattered bits. Used whisper to get a transcript, put that into an LLM, used that to zero in on the actual bits that were important, then got it to spit out the timecodes. Then cobbled together this janky script that cut out those bits and stitched them together. That was faster than taking the time to do it with a GUI and listening to it all the way through.

Of course there are tools like opus clip that spit that out for you now so...

Although to be honest, when the stakes go high and you're doing something serious that requires quality you do it slow.

The point at which I was doing this most was when I was doing video UX/UI research on a hardware/software product. We would set up multi-cams, set and forget so we could talk to subjects and not think about what's being captured.

Dozens of hours of footage, little clips that would end up as insights on the Product Discovery Jira for the thing. So quality wasn't really important.

JSR_FDED•5mo ago
Tangentially, 50% of effort goes into assembling long complex CLI commands, and 50% fighting with escaping for the shell. Adding text to a video adds its own escaping hell for the text.

Has anyone found a bulletproof recipe for calling ffmpeg with many args (filters) from python? Use r-strings? Heredocs?

edge17•5mo ago
Agree with this, but I think LLM's have been a net positive in helping generate commands? Admittedly, getting working commands is still tough sometimes, and I'm 50/50 on whether ChatGPT saved me time vs reading docs.
ElectricalUnion•5mo ago
subprocess.run, with list args?
tush726•5mo ago
ffmpeg is one of the backbones of so many tools that people don’t even realize how much it has contributed to the media landscape. It’s my go to tool for any kind of audio/video automation.
vismit2000•5mo ago
Is there an easy way to denoise an audio file using ffmpeg, to remove the constant hum in an old audio recording that was introduced by the low quality of the recording instrument?
Ey7NFZ3P0nzAe•5mo ago
You should take a look at sox instead. What ffmpeg is to video, sox is to audio.
pbmahol•5mo ago
What is the latest sox release? Why do you ignore all the ffmpeg audio filters?
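
For a constant mains-style hum, the stock ffmpeg audio filters do go a long way (a sketch; the 50 Hz fundamental and the filter settings are assumptions to adjust by ear, and 60 Hz mains would need 60/120 instead):

  # notch out the hum fundamental and its first harmonic, then run a general FFT denoiser
  ffmpeg -i old_recording.wav \
    -af "bandreject=f=50:width_type=q:w=30,bandreject=f=100:width_type=q:w=30,afftdn=nr=12" \
    cleaned.wav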
pabs3•5mo ago
Has anyone got files/formats that can't be decoded by ffmpeg?
renewiltord•5mo ago
Pretty insane software. I use it all the time. Only thing I've wished for is animated webp support because I'm lazy.