Material ID: A user-assigned integer identifying the material at each pixel.

Texture Coords: Normalized texture coordinates stored as (u, v) pairs.

Normal Vector: The surface normal (nx, ny, nz), with components typically in the range [–1, +1].

Background Color: The color of the pixel if the frontmost layer were removed, used for antialiased Z-compositing.

Vector: The forward motion vector, an offset (vx, vy) that maps each pixel’s position in the current frame to that same pixel’s position in the next frame.

Back Vector: The backward motion vector, an offset (vx, vy) that maps each pixel’s position in the current frame to that same pixel’s position in the previous frame.

World Position: The position (wx, wy, wz) of the pixel in world coordinates.

Disparity: An offset (dx, dy) that maps a pixel in one eye to its corresponding position in the other eye, in either the Left > Right or Right > Left direction.
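
To make the offset convention concrete, here is a minimal Python sketch of how the Vector, BackVector, and Disparity values defined above are applied to a pixel position. It is an illustration only, not code from Fusion; the array layout, with each channel indexed as [y, x] and holding an offset pair, is an assumption made for the example.

    # Illustration only: per-pixel offset channels held as 2D arrays of (dx, dy) pairs.

    def follow_forward(x, y, vector):
        # Position of this pixel in the NEXT frame (Vector channel).
        vx, vy = vector[y, x]
        return x + vx, y + vy

    def follow_backward(x, y, back_vector):
        # Position of this pixel in the PREVIOUS frame (BackVector channel).
        vx, vy = back_vector[y, x]
        return x + vx, y + vy

    def other_eye(x, y, disparity):
        # Corresponding pixel in the other eye (Disparity channel),
        # Left > Right or Right > Left depending on which eye this image is.
        dx, dy = disparity[y, x]
        return x + dx, y + dy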

Some extra channels are used by specific Fusion nodes.

For example:

— Merge can use the Z channel to perform a depth merge. If the Coverage and BackgroundColor channels are present, it can do a better job on antialiased edges during the Z merge (a rough sketch of this idea follows this list).

— Most image-processing nodes (e.g., BrightnessContrast) have options on their common controls tab to limit their processing by MaterialID and ObjectID.

— The Fog and DepthBlur nodes make use of the Z channel.

— The Texture node makes use of the TexCoord channel.

— The Shader node makes use of the Normal channel.
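
As referenced above, here is a rough per-pixel sketch of a coverage-aware Z merge. It is a conceptual simplification written for this page, not Fusion’s Merge implementation; the array names, the premultiplied-RGBA convention, and the assumption that a smaller Z value means closer to the camera are all assumptions of the example.

    import numpy as np

    def depth_merge(fg_rgba, fg_z, fg_cov, bg_rgba, bg_z):
        # fg_rgba, bg_rgba : (H, W, 4) premultiplied RGBA
        # fg_z, bg_z       : (H, W)    per-pixel depth, assumed smaller = nearer
        # fg_cov           : (H, W)    Coverage: fraction of the pixel covered by the fg surface
        nearer = (fg_z <= bg_z)[..., None]            # per-pixel depth test
        cov = np.clip(fg_cov, 0.0, 1.0)[..., None]

        # Pick the nearer sample; where the foreground only partially covers the
        # pixel (an antialiased edge), let the other layer show through the
        # uncovered fraction. A fuller version would also use the BackgroundColor
        # channel to remove the background originally baked into those edge pixels.
        fg_over_bg = cov * fg_rgba + (1.0 - cov) * bg_rgba
        return np.where(nearer, fg_over_bg, bg_rgba)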

There are several ways to retrieve or generate those extra channels within Fusion.

For example:

— The Renderer3D node is capable of generating most of these channels.

— The OpticalFlow node generates the Vector and BackVector channels, and then TimeStretcher and TimeSpeed can make use of these channels.

— The Disparity node generates the Disparity channels, and then DisparityToZ, NewEye, and StereoAlign nodes can make use of the Disparity channels.

— The OpenEXR format can be used to import aux channels into Fusion or export them out of it, by specifying a mapping between the EXR channels and Fusion’s aux channels using CopyAux.
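
Before wiring up that mapping, it can help to check which channels a given EXR actually contains. The following sketch uses the Python OpenEXR bindings outside of Fusion; the file name and the choice of the Z channel are examples only, since every renderer names its aux layers differently.

    import OpenEXR
    import Imath
    import numpy as np

    path = "shot010_aux.0001.exr"            # hypothetical multichannel render
    exr = OpenEXR.InputFile(path)
    header = exr.header()

    # Every channel stored in the file; these are the names you map to
    # Fusion aux channels (Z, Vector, Disparity, and so on).
    print(sorted(header["channels"].keys()))

    # Read one channel (here Z) into a NumPy array for inspection.
    dw = header["dataWindow"]
    width = dw.max.x - dw.min.x + 1
    height = dw.max.y - dw.min.y + 1
    pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
    z = np.frombuffer(exr.channel("Z", pixel_type), dtype=np.float32).reshape(height, width)
    print("Z range:", z.min(), z.max())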


Optical Flow Workflows

The Optical Flow analysis is not a real-time process, and depending on your computer, the clip’s resolution, and the duration of the clip, it can take some time. Because of this, the general idea is to pre-generate the motion vectors, either by performing the analysis overnight or by using a render farm, and to save the results to an OpenEXR sequence. The Optical Flow toolset is designed around four types of nodes that either generate, destroy, pass through, or construct the motion vectors.
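
One way to set up that pre-generation pass is from Fusion’s script console. The sketch below is written against the Python scripting API under several assumptions: the tool names ("Loader", "OpticalFlow", "Saver"), the Clip[1] addressing of file paths, and the "OpenEXRFormat" value are taken from common example scripts and should be verified against your Fusion version; the file paths are hypothetical.

    # A sketch of pre-rendering motion vectors to an OpenEXR sequence,
    # run from Fusion's script console (where 'fusion' is predefined).
    comp = fusion.GetCurrentComp()
    comp.Lock()                                   # suppress dialogs while building the flow

    loader = comp.AddTool("Loader")
    loader.Clip[1] = "/renders/shot010/shot010_0000.exr"           # hypothetical source clip

    flow = comp.AddTool("OpticalFlow")            # analysis adds Vector/BackVector channels
    flow.Input = loader.Output

    saver = comp.AddTool("Saver")
    saver.Input = flow.Output
    saver.Clip[1] = "/renders/shot010_flow/shot010_flow_0000.exr"  # hypothetical output path
    saver.OutputFormat = "OpenEXRFormat"          # assumption: EXR so the aux channels can be kept
    # Remember to enable the Vector/BackVector channels in the Saver's OpenEXR export options.

    comp.Unlock()
    comp.Render()      # kick off the (possibly overnight) render of the whole comp range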

 

— OpticalFlow

— TimeSpeed, TimeStretcher

— SmoothMotion

— Repair Frame, Tween

— Advanced Optical Flow Processing