Video editing

With AviSynth+

Installation

  1. Download an older AviSynth+ installer
  2. Install the old version of AviSynth+
  3. Download newest version of AviSynth+ as .7z
    • Open the file with 7zip
  4. Copy-paste its files to your AviSynth+ directory

Your first plugin: FFMS2

You will need FFMS2 to handle your video files.

First you install it:

  1. Download FFMS2
  2. Copy-paste the x86 and x64 plugin files to the plugins+ and plugins64+ folders respectively

Workflow

So how do you create and edit stuff?

Basically, what you do is:

  1. Have a video ready for editing
  2. Create an .avs file
  3. Write a script for editing your video file

Already, you can just open this .avs file in a video player and see the result! It’s pretty crazy.

Several programs, including many of Adobe’s, support .avs files. You can also encode1 your .avs script to a new video directly.

Let’s get started with an example.

Your first AviSynth script
  1. Have a test video file ready; its format shouldn’t matter
  2. Download and install AvsPmod
  3. Install ffmpeg
    • I don’t want to repeat the instructions from the mpv guide here
    • Installing ffmpeg can be tricky, so you can put off doing it if you want
    • You might need ffmpeg built with --enable-avisynth. If you’re on Windows, you should be fine and can move on

Alright. First, get the path of your video file, eg C:\Users\Cygnatus\Desktop\avisynthguide\test.mkv.

Now open AvsPmod.

Enter this and save the file as whatever.avs:

myvid = "C:\Users\Cygnatus\Desktop\avisynthguide\test.mkv"
LoadPlugin("C:\Program Files (x86)\AviSynth+\plugins64+\ffms2.dll")
FFmpegSource2(myvid, atrack=-1)
Trim(0, 500)
Subtitle("This is dope af")

Remember how we installed the FFMS2 plugin? Here’s what each line of the script does:

  1. Creates variable for your video file
  2. Loads FFMS2 plugin (x64)
  3. Opens your myvid
    • atrack=-1 loads the audio track
  4. Trims your video to its first 500 frames
    • AviSynth seeks through videos by frames, not time
  5. Superimposes text
    • “Subtitle” was probably not the best way to name the function

Now press F5 to update your video preview and see the result.

How cool is this?!

And we did it with tools that are 100% free, too.

If you look to the right of your video preview, you can see a UI for changing your function parameters.

A UI for changing the function parameters

You know what else you can do? Save your changes and open whatever.avs in something like mpv or MPC-HC.

It works!

Obviously, you can’t just upload the .avs file as it merely includes scripts and file references, but all your operations and work are contained in that tiny little file. You don’t need to operate on a giant video file and project that demands a lot of CPU and RAM juice.

To learn more about AviSynth (non-plus), I recommend the Let’s Play wiki. It’s a little old, so disregard some of its tips like using AviSource() and Import() for opening video files; they won’t open formats like MKV.

Clearly, the current Subtitle() text isn’t remotely satisfying. I’m not going to do a deep dive to explain how to bend it exactly to your will, but here is a better example of how you’ll use it:

Subtitle("THIS IS DOPE AF", \
    align=5, \
    size=128, font="Impact", \
    text_color=$FFFFFF, halo_color=$000000 \
)

(\ is used to escape line breaks inside a function call.)

A quick note on multithreading

AviSynth+ comes with support for multithreading; check out the docs on multithreading.

This is fairly overwhelming to get into so take comfort in the fact that your FFmpegSource2() has automatic multithreading:

[D]efaults to the number of logical CPU’s reported by the OS. Note that this setting might be completely ignored by libavcodec under a number of conditions; most commonly because a lot of decoders actually do not support multithreading.

FFMS2 docs

I’ll eventually take a look at how this works in detail, but for now I just want you to know that multithreading is a feature that does exist in AviSynth+.

External video player

AvsPmod can also open the video preview in an external player with F6. You can set this in the Program Settings.

One thing you will notice in AvsPmod when editing videos is that the previews don’t have audio. What I do instead is set VirtualDub as my external video player, hit F6 and check whether the audio’s as it should be.

Be aware that copy-pasting frame numbers doesn’t work; you’ll have to right click and pick Copy for some reason. Make sure you copy the right number.

You can also use Aegisub, but its controls take some getting used to so you may want to start out with VirtualDub first.

AvsPmod shortcuts

Video preview
Shortcut Command
F5 Refresh video preview
Shift+F5 Switch video/text focus
F6 Open video preview in external video player
Video navigation
Shortcut Command
Left/Right Move by 1 frame
Up/Down Move by 1 second2
Page Up/Down Move by 1 minute
Ctrl+B Bookmark frame
F2 Move to next bookmark
Shift+F2 Move to prev bookmark

There are no default shortcuts for moving to the first and last frames, but you can add your own.

  1. Go to Options -> Edit shortcuts.
  2. Go down the list almost halfway to Video -> Navigate -> Go to first/last frame.
  3. Let’s go with Ctrl + Left/Right.
Video playback
Shortcut Command
Ctrl+R Play/Pause
Shift+Num- Decrease speed
Shift+Num+ Increase speed
Shift+Num/ Normal speed
Shift+Num* Maximum speed
Troubleshooting

MediaInfo is a fantastic tool for getting a full breakdown of information about the video you want to edit. Drag and drop your video and choose Text under View. This also makes it easy to copy-paste when asking for help.

The colours are off and the image is distorted

You might be using the wrong pixel format. Pass the colorspace argument like so:

FFmpegSource2(myvid, atrack=-1, colorspace="RGB24")
The audio does not sync with the video

It could be a case of a variable framerate (not to be confused with variable bitrate) instead of a constant framerate. You can inspect your video with MediaInfo to find out.

If you’re a Twitch streamer, you should be streaming with a framerate of 60, so try fpsnum=60 and make it a default part of your scripts:

FFmpegSource2(myvid, atrack=-1, fpsnum=60)
My .avs video is stuttering

That your .avs preview stutters doesn’t mean the final encoding will.

Most likely, your player or computer can’t keep up with the on-the-fly processing it takes to render the video. Try encoding the video—or just the trimmed offending section—and see if it helps.

I need to change the sample rate of the video

Remember you can always inspect your videos’ audio sample rate in MediaInfo.

When converting between 44.1 kHz and 48 kHz, be sure to use SSRC() which has better quality than ResampleAudio().

I’m getting a useless error

Make sure you’re consistent in using either the 32- or 64-bit versions of your AviSynth+ and its plugins.

64 vs 32 bit

In general, 32-bit software will access plugins+/, and 64-bit software plugins64+/. To some extent, you can handle this by creating separate functions and .avsi files for plugins+ and plugins64+ that load the 32- and 64-bit logic respectively. Eg:

# cygnatus32.avsi
LoadPlugin("C:\Program Files (x86)\AviSynth+\plugins+\" + "ffms2.dll")

# cygnatus64.avsi
LoadPlugin("C:\Program Files (x86)\AviSynth+\plugins64+\" + "ffms2.dll")

All the more reason to abstract some of your code rather than hardcoding everything.

MeGUI (see below) can also provide you with very helpful error messages if you feed it your .avs file and let it encode. Same with ffmpeg (see below), although it’s not as friendly as MeGUI.

Encoding your video the hard way with ffmpeg

For now, change the plugin directory in your .avs file from plugins64+ to plugins+. This is because we installed the x86 version of ffmpeg instead of x64.

Open a command prompt by clicking the address field of the folder where you keep your .avs script and typing cmd. Then enter:

ffmpeg -i input.avs -c:v libx264 -crf 0 output.mp4

This may take a while. If you lose your patience, you can encode with a lossy method by increasing your crf; ffmpeg accepts 0–51, and 23 is the default.

Here is what I said about ffmpeg presets in my OBS guide:

The available presets are:

  • ultrafast
  • superfast
  • veryfast (default)
  • faster
  • fast
  • medium
  • slow
  • slower
  • veryslow
  • (placebo)

A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize). This means that, for example, if you target a certain file size or constant bit rate, you will achieve better quality with a slower preset. Similarly, for constant quality encoding, you will simply save bitrate by choosing a slower preset.

ffmpeg preset documentation

You can view the specific options for all ffmpeg presets in the repo a nice bloke compiled. Or this simplified preset overview.

The default in OBS is the veryfast preset, but if you find yourself with CPU usage to spare, try a slower preset—but don’t pick one that drops you below your 60/30 FPS. If you’re using a dedicated streaming PC, it makes no sense not to use a slower preset if your CPU can handle it just fine.

As for performance, the ffmpeg x.264 docs have this to say:

How do the different presets influence encoding time?

This depends on the source material, the target bitrate, and your hardware configuration. In general, the higher the bitrate, the more time needed for encoding.

Here is an example that shows the (normalized) encoding time for a two-pass encode of a 1080p video:

Encoding time for each preset plotted

Going from medium to slow, the time needed increases by about 40%. Going to slower instead would result in about 100% more time needed (i.e. it will take twice as long). Compared to medium, veryslow requires 280% of the original encoding time, with only minimal improvements over slower in terms of quality.

Using `fast` saves about 10% encoding time, `faster` 25%. `ultrafast` will save 55% at the expense of much lower quality.

Note that this refers to 2-pass encoding; for livestreaming, you’ll be using 1-pass encoding.

To learn more about this, I heartily recommend the OBS blog post.

When you think about getting a streaming PC, the preset should be what decides whether you go low-end or high-end. If you’re just going to use the default veryfast, maybe you should go with a CPU that’s just fast enough to run it at 1080p60 with Lanczos downscaling.

Always remember, the higher the bitrate, the less you have to worry about CPU presets. If you can record your 720p60 video in 25 Mbps instead of the usual 6 Mbps, you’re going to get a much better result just encoding that.

This is a more opaque approach, but it makes more sense when it comes to explaining the CPU demands.
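To make the bitrate-versus-file-size arithmetic concrete, here is a back-of-the-envelope sketch in plain Python (my own illustration, not part of any tool mentioned here):

```python
def mb_per_minute(mbps):
    """Rough recording size per minute of video at a given bitrate."""
    return mbps * 60 / 8  # megabits per second -> megabytes per minute

print(mb_per_minute(6))   # 45.0 MB/min at the usual 6 Mbps
print(mb_per_minute(25))  # 187.5 MB/min at 25 Mbps
```

At 25 Mbps, a ten-minute recording already runs close to 2 GB; that disk space is the real cost of recording at high bitrates.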

There are also configurations I won’t dwell on known as tunes:

Tunes are additional settings, configurations, profiles, whatever in OBS that are not enabled by default. We won’t be using them here either.

Screenshot of the **Tunes** dropdown in OBS

You can optionally use -tune to change settings based upon the specifics of your input. Current tunings include:

  • film – use for high quality movie content; lowers deblocking
  • animation – good for cartoons; uses higher deblocking and more reference frames
  • grain – preserves the grain structure in old, grainy film material
  • stillimage – good for slideshow-like content
  • fastdecode – allows faster decoding by disabling certain filters
  • zerolatency – good for fast encoding and low-latency streaming
  • psnr – ignore this as it is only used for codec development
  • ssim – ignore this as it is only used for codec development

For example, if your input is animation then use the animation tuning, or if you want to preserve grain in a film then use the grain tuning. If you are unsure of what to use or your input does not match any of tunings then omit the -tune option. You can see a list of current tunings with -tune help, and what settings they apply with x264 --fullhelp.

ffmpeg tune documentation

Oh yeah, there’s also something called profiles best described as a supported feature set with varying compatibility depending on what technology you’re working with. In increasing order of features and decreasing order of compatibility, the profile options are:

  • baseline
  • main
  • high

This is what I use:

ffmpeg -i input.avs -preset slow -c:v libx264 -crf 18 -c:a aac -ar 48000 -pix_fmt yuv420p output.mp4

(-strict 2, shorthand for -strict experimental, ie experimental features, isn’t required with this codec in newer versions of ffmpeg4. You probably want to use the libfdk_aac encoder instead of the native aac one, too, but it’s not as simple to use on Windows.)

This encodes at 16 FPS (ie one fourth of a video second per second) on an eight-year-old i5-760 CPU (ie no Hyper-threading). Use the FPS metric at the bottom of your ffmpeg progress line to gauge whether the encode is too slow for your patience. Just increase the crf or the speed of the preset until you find something tolerable.
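The FPS-to-wall-clock arithmetic is simple enough to sketch in plain Python (illustrative only; the numbers come from the example above):

```python
def encode_seconds(clip_seconds, clip_fps, encode_fps):
    """Estimate wall-clock encode time from the FPS ffmpeg reports."""
    total_frames = clip_seconds * clip_fps
    return total_frames / encode_fps

# A 1-minute 60 FPS clip encoded at 16 FPS takes 225 seconds.
print(encode_seconds(60, 60, 16))  # 225.0
```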

You should also check Task Manager to make sure ffmpeg uses as much of your CPU as you want it to; try right-clicking your CPU graph and choose Logical processors under Change graph to.

Default CPU graph display in Task Manager

CPU graph display in Task Manager with **Logical processors** display

To maximize CPU utilization, ffmpeg has a -threads option—that goes at the start as a global command—to specify the number of CPU threads used for encoding. ffmpeg should use all your threads by default, assuming you’re using a decent codec, but never expect software to do what you want it to. Unfortunately, there doesn’t seem to be official documentation about this, at least not in the top Google hits.

Another useful ffmpeg feature is support for multiple inputs and outputs. With this feature, you can create separate videos for different resolutions, bitrates, and platforms using this simple syntax:

ffmpeg -i input \
    -acodec ... -vcodec ... output1 \
    -acodec ... -vcodec ... output2 \
    -acodec ... -vcodec ... output3
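If you find yourself writing these multi-output commands often, you can generate them programmatically. Here is a hedged Python sketch that merely builds the argument list—the scale=-2:<height> filter and the output names are my own illustrative choices, not something the guide prescribes:

```python
def multi_output_cmd(input_path, outputs):
    """Build one ffmpeg argument list that encodes several outputs in one pass over the input."""
    cmd = ["ffmpeg", "-i", input_path]
    for out in outputs:
        # scale=-2:<height> keeps the aspect ratio and rounds the width to an even number
        cmd += ["-c:v", "libx264", "-vf", "scale=-2:%d" % out["height"], out["path"]]
    return cmd

cmd = multi_output_cmd("input.avs", [
    {"height": 720, "path": "output720.mp4"},
    {"height": 1080, "path": "output1080.mp4"},
])
print(" ".join(cmd))
```

You would still hand the resulting command to your shell; the sketch only assembles the arguments.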

ffmpeg is the Swiss army knife of video; you can do literally anything with it, so we won’t ever cover everything it can do. Like GIFs.

Word of advice! Unlike Latin, syntax order matters in ffmpeg. Go by the examples on the official ffmpeg documentation when you can.

If your encoding crashes, your CPU may not be able to keep up. Check your CPU usage, close some programs, and use a faster preset.

Make sure to also check whether the ffmpeg you’re calling is actually the one from C:\ffmpeg\bin—or wherever you put it—by opening PowerShell and typing either of these:

(Get-Command ffmpeg).Path
(gcm ffmpeg).Path

I found out that ImageMagick had changed my system to use its ffmpeg.exe, despite my PATH environment variable. Renaming the file to _ffmpeg.exe did the trick, although you’ll still have to make sure your ImageMagick works down the line.

Encoding your video the easy way with MeGUI

(WIP.)

When I’m making a dumb video, I’ll admit to just using MeGUI as backup when ffmpeg throws errors I can’t wrap my head around. It is also disgustingly easy to use once you get going.

MeGUI interface

The interface can seem overwhelming, but it’s quite manageable if you just take it from top to bottom.

MeGUI with script selected

First, pick your AviSynth script by dragging and dropping it to the app or by selecting it manually.

Settings

Go to Settings and change some defaults in Main Configuration:

You might want to experiment with FFMS Thread Count under External Program Configuration, but it appears to be an experimental feature. Leave it at the default 1 for now.

Hit Save and close the settings dialogue.

Video encoder config

Next up, click Config next to Encoder settings under Video encoding.

MeGUI default encoder settings

The encoder here is essentially ffmpeg, and what you see here is the configuration you’ll use for it to encode your video.

Start by creating a new preset with your own settings by clicking New below and naming it whatever you want.

You’ll want to direct your attention at two things here:

  1. the Quality setting
  2. the Preset setting

Quality corresponds to the crf setting in ffmpeg as seen in the dialogue’s output box below. Recall what I said about ffmpeg and crf before:

-crf: Constant rate factor (0=lossless)

[Y]ou can encode with a lossy method by increasing your crf; ffmpeg accepts 0–51, and 23 is the default.

I prefer setting this to 18 instead of the default 23.

Let’s revisit what a Preset is: a collection of options that provides a certain encoding speed to compression ratio. A slower preset provides better compression, so at a given file size or bitrate you get better quality, and at a given quality you save bitrate. See the ffmpeg preset excerpt in the ffmpeg section above for the full rundown.

Set this to Slow instead of the default Medium.

MeGUI custom encoder settings

Save your settings and move on. As always, you can lower both of these settings; it’s all a matter of how much bang for your buck you get. It’s also worth considering that I’m writing this guide using an almost 10-year-old CPU, the i5-760.

Change the File format to something other than MP4 if you’re so inclined.

Audio encoder config

Next, click on Config for Encoder settings under Audio encoding.

Create a new profile and change Bitrate to 160, since this is our default when streaming on Twitch.

Queues and analysis passes

You might have noticed three panes at the top of the main app:

  1. Input
  2. Queue
  3. Log

Click Queue, and you’ll see an empty list of jobs.

(WIP.)

Free graphical video editors

I haven’t used it myself, but OpenShot looks like a fun, free, and simple visual video editor.

Unfortunately, it currently doesn’t support .avs files.

Another editor which is also very popular is DaVinci Resolve. Downloading it requires going through an annoying registration, though.

It doesn’t support .avs files either—and the interface is a looot more confusing to people new to video editing, so definitely start out with OpenShot.

Aegisub

AviSynth and AvsPmod are excellent for coding, but aren’t ideal for finding the exact keyframes for video and audio, especially when you want to add subtitles.

Aegisub is the perfect tool for this. It’s been around for literally a million years, back when weebs had to sub their own anime, so it’s been well polished over the years.

Before you dive into tutorials and change the settings, go to Subtitle -> Styles Manager. Under Catalog of available storages at the top, click New and make your own. Then go to Preferences (Alt+O) and, under General and Default style catalogs, import your own style catalog wherever applicable.

Your catalog will be available under %APPDATA%\Aegisub\catalog as a .sty file.

Aegisub is so awesome it actually takes .avs files. However, its dependencies are a little old and won’t work with AviSynth+ without some work. You first need to move some .dll files. Assuming you’re using the 32-bit versions of both and installed them in Program Files (x86), do the following:

To get started with Aegisub, I recommend Bellmaker’s two videos. What I really like about Bellmaker is that he teaches you the shortcuts right off the bat so you get into the actual workflow immediately.

Once you’ve created your subtitle files, load them in AviSynth using the xy-VSFilter plug-in and the TextSub/VobSub function. (VSFilter docs.)

You can see a full list of subtitle plug-ins on the AviSynth wiki.

Free image editors

You think you have everything down as you get ready to upload your video, but then you suddenly remember you have to add a custom thumbnail, too.

I’ve found Gimp and Paint.net to get the job done just fine. The main thing to get right with a thumbnail is to add the title—hed or dek—of the video to the thumbnail in big, fat letters. The only thing you have to figure out is which font to use, so take your time to find one so you don’t have to fiddle with it just before you finish uploading your video.

Free audio editors

Audacity, Audacity, Audacity.

I won’t dwell on it besides three things.

First, you should always use WAV when you get the chance. There’s always AIFF if you so desire.

Second, the easiest way to pull the audio from a video is with ffmpeg; -vn drops the video stream:

ffmpeg -i input.mp4 -vn output.wav

Third, here are some shortcuts to live and die by:

Selection

Shortcut Command
Ctrl+T Remove everything outside selection
Shift+Left Extend selection on left
Ctrl+Shift+Right Contract selection on left
Shift+Right Extend selection on right
Ctrl+Shift+Left Contract selection on right

Playback

Shortcut Command
Shift+F5 Play short period before selection start
Shift+F6 Play short period after selection start
Shift+F7 Play short period before selection end
Shift+F8 Play short period after selection end
Ctrl+Shift+F5 Play short period before and after selection start
Ctrl+Shift+F7 Play short period before and after selection end

Seeking

Shortcut Command
Left/, Short seek backwards
Right/. Short seek forwards
Shift+Left/, Long seek backwards
Shift+Right/. Long seek forwards

Check out the manual for more Audacity shortcuts.

The nitty gritty

Resizing and sampling

As I point out in the optimization guide, resizing isn’t just resizing:

What also sucks is that we have to use compression algorithms for extremely generalized purposes. Maybe we’d be better off with one compression algorithm for movies and another one for videogames. But we all know how much diversity is contained in videogames alone: car games, twitchy FPSes, RTS, cinematic games, lush and epic narrative games, and so on.

A paper from 2014 suggests that we employ an algorithm to detect the nature of a scene and choose the optimal compression codec for it. And the idea of Dynamic HDR is to dynamically optimize HDR for a given scene instead of setting it merely once for a whole video.

One imagines videogames and streaming might one day warrant their own codec—or codecs.

Update Jan 04, 2018: Streamlabs have released their own OBS fork doing just that.

Oh, and there are also resizing or downscaling algorithms to consider. At least OBS Studio limits our options to Lanczos.

These are all the options in the otherwise user-friendly ImageMagick:

-sample
-resample
-scale
-resize
-adaptive-resize
-thumbnail

By default, ImageMagick’s convert -resize downscales images with a Lanczos filter, but upscales with a Mitchell filter. You can read the full ImageMagick guide to resampling filters if you want to go down that rabbit hole.

Imagine being in Hollywood and making sure no quality is lost during this.

When you record in OBS Studio with my recommended settings, you:

Normally, we just use the term “resize”; it’s what the setting is called in macOS’s Preview.app, but as with everything in computer science, naming things optimally is hard.

But we already know that Lanczos is the way to go. Job done, right?

Well, first off, Lanczos isn’t just Lanczos; this is what the AviSynth wiki on resizing says about the LanczosResize() function:

LanczosResize(clip clip, int target_width, int target_height [,
    float src_left, float src_top, float src_width, float src_height, int taps ] )
Lanczos4Resize(clip clip, int target_width, int target_height [,
    float src_left, float src_top, float src_width, float src_height ] )

LanczosResize is a sharper alternative to BicubicResize. It is NOT suited for low bitrate video; the various Bicubic flavours are much better for this.

Lanczos4Resize is a short hand for LanczosResize(taps=4). It produces sharper images than LanczosResize with the default taps=3, especially useful when upsizing a clip.

In other words, LanczosResize isn’t even the optimal Lanczos resize! You’ll want to use Lanczos4Resize() or LanczosResize(taps=4) instead.

Here is someone listing their preferred resizing functions to give you an idea of what you’re up against:

from soft to sharp (according to my experience):

  • Bilinear
  • Bicubic(b=1./3, c=1./3)
  • Spline16
  • Bicubic(b=0, c=0.75)
  • Spline36
  • Blackman(taps=4)
  • Spline64
  • Lanczos3
  • Lanczos4

Update: New scaling algorithms keep getting released; as of version 2.60 of AviSynth+, SincResize() is also available. Take a look at that and compare the results to Lanczos.

The great thing about writing your own resizing function in an AviSynth .avsi file is that you only have to change the algorithm in that one place instead of in all the files calling it.

As I point out in the quote from the optimization guide above, different algorithms will often be optimal for different situations. When you move up from bicubic resizing, you start seeing so-called ringing effects. Visit some of the screenshot example links at the bottom of the Resize() wiki to see how the results differ between resizing algorithms.

Modern resizing functions include anti-ringing measures to mitigate these issues. AviSynth is not moving as fast on this as we’d like, but a Resize8() function with built-in anti-ringing and native cropping is available and worth exploring. Because things are never easy with this stuff, the author is Chinese, so I hope your Mandarin isn’t rusty as you try to grok the original documentation. While the AviSynth wiki only mentions Lanczos, it looks like v1.2 of this filter also supports Sinc.

Resize8()’s default algorithms:

By default, “Lanczos4” is set for luma upscaling, “Lanczos” is set for chroma upscaling, “Spline36” is set for luma/chroma downscaling.

For now, we’re sticking to Lanczos and the Resize() function in AviSynth—after all, things are always changing and I’m just doing the best I can to catch up. Finding the right balance between stable and experimental features is also a personal preference.

Some also suggest that something other than Lanczos might be preferable for “lower bitrates”. Make of that what you will, and feel free to experiment for your own use cases.

Don Munsil’s elaboration on LanczosResize() is also excerpted in the wiki, emphases mine:

For upsampling (making the image larger), the filter is sized such that the entire equation falls across 4 input samples, making it a 4-tap filter. It doesn’t matter how big the output image is going to be—it’s still just 4 taps. For downsampling (making the image smaller), the equation is sized so it will fall across 4 destination samples, which obviously are spaced at wider intervals than the source samples. So for downsampling by a factor of 2 (making the image half as big), the filter covers 8 input samples, and thus 8 taps. For 3× downsampling, you need 12 taps, and so forth.

The total number of taps you need for downsampling is the downsampling ratio times the number of lobes, times 2. And practically, one needs to round that up to the next even integer. For upsampling, it’s always 4 taps.

Quick terminology primer:

Let’s break down what that quote just said. Of course, it’s way more complicated than that.

Sampling maths

Ceil() is a common function that rounds up to the next integer. Its opposite is Floor() which rounds down. Round(), of course, rounds to the nearest integer.

If we downsample from 1080p to 720p, we downsample by a factor of

1080/720 = 1.5

This means the number of taps we need is 4 * 1.5, ie 6, ie LanczosResize(taps=6).
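The same rule is easy to sanity-check in plain Python (a sketch of the arithmetic above, not an AviSynth API):

```python
import math

def lanczos_taps(src_height, dst_height):
    """Taps for LanczosResize: 4 when upsampling, 4 x ratio (rounded up) when downsampling."""
    if dst_height >= src_height:
        return 4  # upsampling always uses 4 taps
    return math.ceil(4 * src_height / dst_height)

print(lanczos_taps(1080, 720))  # 6
print(lanczos_taps(2160, 720))  # 12
print(lanczos_taps(720, 1080))  # 4
```

These values match the sampling table that follows.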

Sampling table

As a table with our horizontal base resolutions and vertical output resolutions, here are your ideal LanczosResize() taps:

Output/Base resolution 720p 1080p
720p - taps=6
1080p taps=4 -
1440p taps=4 taps=4
2160p taps=4 taps=4
Up- and downsampling with AviSynth
# Upsample 720p to 1080p
LanczosResize(1920, 1080, 4)

# Downsample 1080p to 720p
LanczosResize(1280, 720, 6)
# or
LanczosResize(1280, 720, Int(Ceil(4*Float(1080)/Float(720))))

If I’ve understood this correctly, you can express it as these functions:5

# eg plugins+/cygnatus.avsi
# eg plugins64+/cygnatus.avsi

function Downsample(clip c, int new_width, int new_height) {
    samples = Int(Ceil(4*Float(c.height)/Float(new_height)))
    return c.LanczosResize(new_width, new_height, samples)
}

function Upsample(clip c, int new_width, int new_height) {
    return c.LanczosResize(new_width, new_height, 4)
}

And use them like so:

# 1080p (or something else) to 720p, ie 1280x720
Downsample(1280, 720)

# 720p to 1080p, ie 1920x1080
Upsample(1920, 1080)

Update Feb 02, 2018: I refactored the resizing functions to a single function:

function CygResize(clip c, int new_width, int new_height) {
    # Set samples depending on up- vs downscaling
    samples = (c.width < new_width) \
        ? 4 \
        : Int(Ceil(4*Float(c.height)/Float(new_height)))

    return c.LanczosResize(new_width, new_height, samples)
}

If you’re a Twitch streamer, your recordings are probably only available in 720p60 at a 6,000 kbps bitrate. If, however, you record in a higher bitrate, you can take advantage of upsampling to upload your videos to YouTube in higher resolutions.

When you use something like ffmpeg to encode your .avs to .mp4, you can specify the maximum (constant) bitrate your target platform recommends.

How the hell do I magnify parts of my video?

Trim, crop and resize, basically.

Unfortunately, you can’t just apply a function to a select range of frames like with Subtitle(); you’ll have to splice the video and stitch it back together.

Say you want to feature the top right quadrant of your 720p video between frames 1000 and 1500.

Trim

If you aren’t already splicing your video, splice it into three pieces: before, during, and after magnification:

Trim(0, 999) ++ \
Trim(1000, 1500) ++ \
Trim(1501, 0)
Crop

The Crop() function works like so:

Crop(clip clip, int left, int top, int width, int height [, bool align ] )

Because we want to magnify the top right, we crop away the left half and leave the top alone. We’ll end up with a quadrant with a width of width/2 and a height of height/2. We can get these values from our active clip with last.width and last.height.

In other words:

Crop(last.width/2, 0, last.width/2, last.height/2)
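To double-check the numbers that call produces for a 1280x720 clip, here is the arithmetic as a tiny Python sketch (the helper name is my own, purely illustrative):

```python
def top_right_quadrant(width, height):
    """Crop() arguments (left, top, width, height) for the top-right quadrant."""
    return (width // 2, 0, width // 2, height // 2)

print(top_right_quadrant(1280, 720))  # (640, 0, 640, 360)
```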

For other sections of the screen, you can also use this Crop() variation:

Crop(clip clip, int left, int top, int -right, int -bottom [, bool align ] )

If you want to crop 50px from each side, you use:

Crop(50, 50, -50, -50)
Resize

Last, resize our quadrant to the full size of our original video.

We already made a neat function for this:

Upsample(1280, 720)
Putting it all together

Et voilà:

Trim(0, 999) ++ \
Trim(1000, 1500).Crop(last.width/2, 0, last.width/2, last.height/2).Upsample(1280, 720) ++ \
Trim(1501, 0)

If you used a resizing function directly, say BicubicResize(), instead of our own function, you could also shorten this like so:

Trim(0, 999) ++ \
Trim(1000, 1500).BicubicResize(1280, 720, src_left=last.width/2, src_top=0, src_width=last.width/2, src_height=last.height/2) ++ \
Trim(1501, 0)

Of course, you can also add animation to this to show off.

Further reading

  1. Do not get into my mentions over whether the proper term is “encoding” or “transcoding”, or even “re-encoding”, for digital-to-digital conversions. Some people, god help them, even use the term “transsizing”. ↩︎

  2. A second is usually 24, 25, 30 or 60 frames for obvious reasons.

    Of course, it’s never that simple; 24p is technically 23.976 FPS.

    One of AviSynth’s great features is how it helps you convert between all these formats. ↩︎

  3. There are two main factors to consider for video encoding: quality and size/bitrate, the central trade-off.

    • For quality where size/bitrate is secondary, use a constant rate factor (CRF).
    • For size/bitrate where quality is secondary, use a constant bitrate (CBR).

    As you can probably imagine, it’s not exactly either-or, but these are the fundamentals to think of when encoding with ffmpeg.

    For more info, check out the “Encoding Strategy & Passes (--pass)” section of Jason Robert Carey Patterson’s encoding guide. ↩︎

  4. “Still seeing -strict -2 or -strict experimental being used unnecessarily. It is no longer needed for the FFmpeg AAC encoder unless your ffmpeg cli tool is really old, and if that’s the case please update.” @FFmpeg ↩︎

  5. This glosses over the use of .avsi files that basically work like libraries or packages from other programming languages with their own scope. They are also loaded automatically.

    To create variables with a global scope, you need to do something different; create a regular .avs file defining your variables, and load it with Import(). Check out its optional bool utf8 parameter in AviSynth+. eg: Import("C:\Users\Cyg\AviSynth\foo.avs", utf8=true). ↩︎