Instead of:

  ffmpeg -i video.mp4 -vf "fps=15,scale=480:-1:flags=lanczos" -loop 0 output.gif

You write:

  ff convert video.mp4 to gif
More examples:

  ff compress video.mp4 to 10mb
  ff trim video.mp4 from 0:30 to 1:00
  ff extract audio from video.mp4
  ff resize video.mp4 to 720p
  ff speed up video.mp4 by 2x
  ff reverse video.mp4
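Each phrase maps to a plain ffmpeg invocation under the hood. The trim example, say, corresponds to something like this (output filename assumed; ezff's exact flags may differ):

  ffmpeg -i video.mp4 -ss 0:30 -to 1:00 -c copy trimmed.mp4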
There are similar tools that use LLMs (wtffmpeg, llmpeg, ai-ffmpeg-cli), but they require API keys, cost money, and have latency.
Ez FFmpeg is different:

- No AI – just regex pattern matching
- Instant – no API calls
- Free – no tokens
- Offline – works without internet
It handles ~20 common operations that cover 90% of what developers actually do with ffmpeg. For edge cases, you still need ffmpeg directly.
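The dispatch is simple enough to sketch in a few lines of shell. This is illustrative only; the pattern and filter settings below are assumptions borrowed from the gif example above, not ezff's actual rules:

  # Illustrative sketch of regex -> ffmpeg dispatch (not ezff's real patterns)
  input="convert video.mp4 to gif"
  if [[ "$input" =~ ^convert\ (.+)\ to\ gif$ ]]; then
    src="${BASH_REMATCH[1]}"
    ffmpeg -i "$src" -vf "fps=15,scale=480:-1:flags=lanczos" -loop 0 "${src%.*}.gif"
  fi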
Interactive mode (just type ff) shows media files in your current folder with typeahead search.
npm install -g ezff
geysersam•1mo ago
> Write an ffmpeg command that implements the "bounce" effect: play from 0:00 to 0:03, then backwards from 0:03 to 0:00, then repeat 5 times.
Tempest1981•1mo ago
Maybe this should be an AI reasoning test.
Here is what eventually worked, iirc (10 bounces):
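A minimal sketch of one way to get that effect, assuming an input.mp4 and a two-step approach (not necessarily the exact command used):

  # Build the 3-second forward+reverse "bounce" segment once (video only)
  ffmpeg -ss 0 -t 3 -i input.mp4 -filter_complex \
    "[0:v]split[f][r];[r]reverse[rev];[f][rev]concat=n=2:v=1[v]" \
    -map "[v]" -an bounce.mp4

  # Then play it 10 times total (-stream_loop 9 adds nine extra repeats)
  ffmpeg -stream_loop 9 -i bounce.mp4 -c copy output.mp4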
Terr_•1mo ago
... Provided that the user sees what's being made for them and can confirm it and (hopefully) learn the target "language."
Tutor, not a do-for-you assistant.
rolfus•1mo ago
* - Just a few days ago I used ImageMagick for the first time in at least three years. I went to download it, only to find that I already had it installed.
skydhash•1mo ago
You can’t verify an LLM’s output. And thus, any form of trust is faith, not rational logic.
ben_w•1mo ago
With an LLM’s output, it is short enough that I can* put in the effort to make sure it's not obviously malicious. Then I save the output as an artefact.
* and I do put in this effort, unless I'm deliberately experimenting with vibe coding to see what the SOTA is.
skydhash•1mo ago
In the case of npm and the like, I don't trust them because they use procedures that have been proven insecure, and the attack vectors are well known. But I do trust Debian and the binaries they provide, as the risks there are Debian's infrastructure being compromised, malicious code in the original source, and cryptographic failures. All three are possible, but I'm at more risk of bodily harm than of them happening.
josephg•1mo ago
Well, you can verify an LLM's output all sorts of ways.
But even if you couldn't, it's still very rational to be judicious with how you use your time and attention. If I spent a few hours going through the ffmpeg documentation I could probably learn it better than chatgpt. But it's a judgement call whether it's better to spend 5 minutes getting chatgpt to generate an ffmpeg command (with some error rate) or 2 hours doing it myself (with maybe a lower error rate).
Which is a better use of my time depends on lots of factors. How much I care. How important it is. How often that knowledge will be useful in the future. And so on. If I worked in a Hollywood production studio, I'd probably spend the 2 hours (and many more). But if I just reach for ffmpeg once a year, the small error rate in chatgpt's invocations might be fine.
Your time and attention are incredibly limited resources. It's very rational to spend them sparingly.
d-us-vb•1mo ago
It isn’t fair to say “since I don’t read the source of the libraries I install that are written by humans, I don’t need to read the output of an LLM; it’s a higher level of abstraction” for two reasons:
1. Most libraries worth using have already been proven by use in actual projects. If you can see that a project has lots of bug fixes, you know it’s better than raw code. Most bugs don’t show up until code gets put through its paces.
2. Actual humans have actual problems that they’re willing to solve to a high degree of fidelity. This is essentially saying that humans have both a massive context window and an even more massive ability to prioritize important things that are implicit. LLMs can’t prioritize like humans because they don’t have experiences.
imiric•1mo ago
And, realistically, compute and power are cheap for getting help with one-off CLI commands.
corobo•1mo ago
The problem is someone decided that, plus the contents of Wikipedia, was all something needs to be intelligent haha
Marazan•1mo ago
It is almost like there is hardwiring in our brains that makes us instinctively correlate language generation with intelligence and people cannot separate the two.
It would be as if the first calculators ever produced, instead of responding with 8 to the input 4 + 4 =, printed out "Great question! The answer to your question is 7.98", and that resulted in a slew of people proclaiming the arrival of AGI (or, more seriously, the ELIZA Effect is a thing).