FFmpeg – Basic Filter Graphs
FFmpeg is considered the Swiss Army knife of video transcoding/streaming. Let's start today with some very basic examples of what we can do with FFmpeg.
ffmpeg -y -i input.mkv output.mp4
In this simplest example, FFmpeg produces MP4 output from MKV input.
-y denotes that we want to overwrite output.mp4 if it already exists.
-i marks the input. In addition to these two, FFmpeg supports many other popular multimedia file formats, including MXF, AVI, WAV, M4A, JPG, PNG, etc.
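When only the container needs to change, re-encoding can be skipped entirely with stream copying. Here is a minimal sketch; the first command simply synthesizes a short test clip with FFmpeg's lavfi testsrc2 source, in case you have no input.mkv at hand:

```shell
# Generate a 1-second synthetic test clip (skip this if you already have input.mkv)
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=128x96:rate=10 -c:v mpeg4 input.mkv

# -c copy remuxes: the compressed streams are copied into the new
# container without being decoded and re-encoded, so it is very fast
ffmpeg -y -i input.mkv -c copy output.mp4
```

This only works when the target container supports the codecs already in the source.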
ffmpeg -y -i input.mkv -vcodec libx264 -acodec flac output.mkv
It is common to use FFmpeg to transcode from one codec to another. Here, we specify the video codec with -vcodec and the audio codec with -acodec.
Take only video or audio
ffmpeg -y -i input.mkv -map 0:v output.mp4
-map 0:v selects the video streams from the first input (we have only one input file here anyway, marked by the index 0). Similarly, -map 0:a selects the audio streams.
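For example, to keep only the audio as an M4A file (the first two lines just synthesize a short clip with both video and audio for illustration):

```shell
# Create a short synthetic clip with a video and an audio stream
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=128x96:rate=10 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -c:v mpeg4 -c:a aac input.mkv

# Keep only the audio streams, dropping the video entirely
ffmpeg -y -i input.mkv -map 0:a output.m4a
```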
Resize the video
ffmpeg -y -i input.mkv -filter_complex scale=720:480 output.mp4
-filter_complex can be used to combine many filters together. Filters sit between the input and the output and make some change to the media flowing through them. Here we are using the scale filter and specifying the output width and height.
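If you only know the target width, scale can compute the height for you. A small sketch (again using a synthetic test clip so it runs standalone):

```shell
# Synthetic 160x120 test clip
ffmpeg -y -f lavfi -i testsrc2=duration=1:size=160x120:rate=10 -c:v mpeg4 input.mkv

# Width 320; -2 tells scale to pick a height that preserves the aspect
# ratio and stays divisible by 2, which many encoders require
ffmpeg -y -i input.mkv -filter_complex scale=320:-2 -c:v mpeg4 output.mp4
```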
ffmpeg -y -i input.mkv -ss 30.4 -to 40.15 output.mp4
-ss and -to are used to specify the start and end times in seconds.
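For long files it is worth knowing that placing -ss before -i seeks on the input side, which is much faster because FFmpeg jumps close to the start point instead of decoding everything before it. A sketch, paired with -t to give a duration instead of an end time:

```shell
# 5-second synthetic test clip
ffmpeg -y -f lavfi -i testsrc2=duration=5:size=128x96:rate=10 -c:v mpeg4 input.mkv

# Input-side seek: jump to roughly 2s into the file, then take 1.5 seconds
ffmpeg -y -ss 2 -i input.mkv -t 1.5 -c:v mpeg4 output.mp4
```

Note that with input-side seeking the timestamps are reset, so an output-side -to would then be relative to the seek point; -t avoids that confusion.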
Snapshot at time
ffmpeg -y -i input.mkv -ss 2.598 -vframes 1 output.jpg
We can give a filename with a .jpg extension, and -vframes 1 will tell FFmpeg to produce a single image at the specified time instant.
Thumbnails at interval
ffmpeg -y -i input.mp4 -filter_complex "fps=1/5,scale=320:180" thumbnail-%03d.jpg
The fps filter is used here to say that we need 1 frame every 5 seconds. We scale the frames to the required dimensions and give a pattern for the thumbnail filenames so they are numbered as we like.
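The same pipeline can produce a single contact sheet instead of separate files, by appending the tile filter. A sketch (the first command synthesizes a 30-second test clip so the example is self-contained):

```shell
# 30-second synthetic test clip
ffmpeg -y -f lavfi -i testsrc2=duration=30:size=320x240:rate=10 -c:v mpeg4 input.mp4

# One frame every 5 seconds, scaled down, then arranged on a 3x2 grid;
# -frames:v 1 writes a single sheet image
ffmpeg -y -i input.mp4 -filter_complex "fps=1/5,scale=160:120,tile=3x2" -frames:v 1 sheet.jpg
```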
Extract audio channels
ffmpeg -y -i input.mxf \
  -filter_complex "[0:a]asplit=2[a1][a2];[a1]pan=mono|c0=c0[ch1];[a2]pan=mono|c0=c1[ch2]" \
  -map [ch1] channel_1.wav \
  -map [ch2] channel_2.wav
Getting slightly complicated now, but the graph image above should make it easier to understand. We first fork the input audio with the asplit filter into two identical branches labeled [a1] and [a2]. Then we connect each of these to pan filters that are each set up to produce one mono channel. The first pan filter selects the first channel (c0) and the second one selects the second channel (c1), and we label the outputs from these pan filters [ch1] and [ch2] respectively. Lastly, we map [ch1] and [ch2] to produce two separate WAV files.
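When the input audio is a single stereo stream, the channelsplit filter can do the same job as the asplit-plus-pan combination in one step. A sketch, using a synthetic stereo WAV in place of the MXF input:

```shell
# Synthetic stereo input for illustration (-ac 2 duplicates the mono sine to stereo)
ffmpeg -y -f lavfi -i sine=frequency=440:duration=1 -ac 2 -c:a pcm_s16le input.wav

# channelsplit forks a stereo stream into its two channels directly
ffmpeg -y -i input.wav \
  -filter_complex "[0:a]channelsplit=channel_layout=stereo[left][right]" \
  -map "[left]" channel_1.wav \
  -map "[right]" channel_2.wav
```

The pan approach remains more flexible when you need to pick arbitrary channels out of multichannel audio.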
Complex graphs with shared filters
In this final example, we try to reuse common filters and produce simultaneous outputs in parallel.
ffmpeg -y -i input.mkv \
  -filter_complex "[0:v]format=yuv420p,yadif,split=3[in1][in2][in3];[in1]scale=1920:1080[hd];[in2]scale=720:576,hflip[sd];[in3]fps=1/5,scale=320:180[thumbnails];[0:a]aresample=48000,asplit=2[a1][a2]" \
  -map [hd] -map [a1] hd.mov \
  -map [sd] -map [a2] sd-flipped.mp4 \
  -map [thumbnails] thumbnail-%03d.jpg
As shown above, we apply the common operations early in the graph and try to reuse the filters whenever possible. For example, we reuse the format, yadif, and aresample filters for the HD and SD outputs, but do the scaling for them separately since they require different dimensions. In addition, we do a horizontal flip with hflip for the SD output only, just for fun! On the other hand, the thumbnails share some common filters (deinterlacing with yadif, for example) with the video outputs, but then need fps and scale to adjust the frame rate and size specific to thumbnails only.
FFmpeg supports a vast collection of other filters that can be combined to implement many advanced requirements. But I hope the above examples help you understand the basics and get started with this versatile and extremely powerful tool.
If you are interested in learning more, here is a short FFmpeg course on Udemy, with concepts explained with easy diagrams and step-by-step hands-on demo videos covering lots of practical use cases and examples.