Frame Formats

One of the first tasks in most projects is loading footage from disk into Fusion. Depending on the type of work, this footage may be NTSC, PAL, HD, film or even multimedia.

Each of these formats has different properties, from pixel aspect ratios to field order, and resolution to color depth. Since Fusion can mix and match material of different formats together seamlessly in a single composite, it is important to note how Fusion configures and combines materials of different formats when loading and merging them together.


Setting The Frame Format

The first thing to do when starting on a new project is to set the preferences to match the intended format of the final output.

The Frame Format preferences are used to determine the default resolution used for new creator tools (i.e. Text or Background), pixel aspect for display and rotation, as well as the frame rate used for playback.

If the same frame format is used day after day, the global frame format preferences should match the most commonly used footage. For example, on an HD Altitude system where the majority of the footage is high definition, it makes sense to set the preferences to match the usual HD frame format.

The preferences are described in greater detail in their own chapter later in this manual. This chapter focuses solely on the behavior of the frame format preferences.


Setting Up The Default Frame Format for New Compositions

Open the Preferences dialog from the File menu. The preferences are organized into separate trees: one for the globals and new comps, and one for each composition currently loaded into Fusion.

Open the tree for globals by clicking on the + sign next to its label. Click on the Frame Format entry in the list to open its controls with a dialog like the one pictured below.

The options selected here determine the default frame format for each new composition. Changing frame format options in the Globals section does not affect currently open compositions. It affects only the default used when creating new compositions.


Changing the Frame Format in existing Compositions

Each composition also presents its own copy of the preferences settings in the tree menu to the left. Because each comp maintains a copy of its own preferences, there can easily be one comp with a default PAL format open while also working on a second comp set to Film defaults.

To change the settings for the current comp, access the Frame Format controls listed under that comp's name in the tree menu.


For additional details on the settings in this preferences panel, see the Preferences documentation.


Fields

Although it is standard to say that the frame rate for NTSC is 30fps (or 29.97fps) and PAL is 25fps, this is misleading: televisions actually display twice this number of images per second. This is accomplished by storing two separate images in each frame of video. Each image contained in the frame is called a Field.

Image:F61_FrameFormat_Fields.png

The two fields are combined into a single frame by alternating scanlines (rows of pixels).

When a television receives the frame and displays it, the television first shows every second scanline, leaving a blank line between each to avoid squashing the image. It then fills in the blank lines with the scanlines it missed in the first pass. In this manner, two images are stored in a single frame by interleaving the images into alternate scanlines. The television displays a new field every 1/60th of a second (NTSC) or every 1/50th of a second (PAL).

This technique dramatically increases the smoothness of motion but at the expense of image quality. Both NTSC and PAL employ fields in this manner, as do several of the HD standards. There are no commonly used film formats that use fields to increase apparent frame rate.

Although each frame of NTSC video is said to be 720 x 486 pixels, each frame actually contains two half-height fields, for an actual resolution of 720 x 243. PAL video frames are 720 x 576 pixels, with half-height fields at a resolution of 720 x 288. The ordering of these fields is different between NTSC and PAL.

When processing fielded video for effects and image processing, it is important that each field is treated as a separate frame. Because the field order differs between NTSC and PAL, Fusion must know the field order of the footage before it can separate each frame into its component fields. To set this, select the appropriate Field Order from the Process Mode control in the Loader.

If the topmost row of pixels in an image is considered part of the upper field and the second row part of the lower field, it is easy to see how each video format alternates fields.

NTSC Fields
With NTSC Fields, the first field is stored in the lower, or second, scanline. This is occasionally referred to as Odd Field Order because the first field is stored on the odd numbered scanlines (rows 1, 3, 5, 7...).
PAL Fields
With PAL Fields, the first field is stored in the upper, or first, scanline. This is occasionally referred to as Even Field Order because the first field is stored on the even numbered scanlines (rows 0, 2, 4, 6...).
HD Fields
With HD Fields, the first field is stored in the upper, or first, scanline, as with PAL.
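
As an illustration of how alternate scanlines are separated into fields, here is a minimal Python sketch using NumPy. The function name and the simplified NTSC/PAL handling are our own, purely for illustration, and not part of Fusion:

 import numpy as np

 def split_fields(frame, field_order):
     """Split an interlaced frame into its two fields by taking
     alternate scanlines. Row 0 is the topmost scanline."""
     even_rows = frame[0::2]  # rows 0, 2, 4... (upper field)
     odd_rows = frame[1::2]   # rows 1, 3, 5... (lower field)
     if field_order == "NTSC":
         return odd_rows, even_rows  # NTSC: first field on odd rows
     return even_rows, odd_rows      # PAL/HD: first field on even rows

 # A full NTSC frame is 720 x 486; each half-height field is 720 x 243.
 frame = np.zeros((486, 720, 3))
 field1, field2 = split_fields(frame, "NTSC")
 print(field1.shape)  # (243, 720, 3)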

Fusion and DFX+ will automatically process a frame as two separate fields if the loader's process mode is set to anything other than Full Frames. The second field is always shown as x.5 (as in frame 1, 1.5, 2, 2.5...).

Combining Fielded And Full Frame Images

Although the combining of fields and frames is handled by Fusion, it takes some experience to understand exactly what happens.

Accidentally processing fielded footage as if it were frames can seriously affect the quality of the results, particularly if warps or transformations are applied to that layer. Fields can be incorrectly blended together, causing ghosting and stuttering on playback.

Conversely, processing full frame footage as if it were fields does no serious harm but may lower the general quality of the results, mostly because there is less information for Fusion to work with when calculating blurs and other filter type effects.

Hold the mouse pointer over a tool's tile in the flow view to see if it is outputting fields or frames. The tooltip that appears will read Field x: Frame Size if the tool is processing fields, otherwise it will only read Frame Size. This information will also appear immediately on the status bar. Image:F61_FrameFormat_ToolTip.png

When combining field-processed footage with full frame-processed footage in a merge, the merge will output the same type of images, frames or fields as the tool connected to the background input. In other words, when merging frames over fields, the result will be fielded, and when merging fields over frames, the result will be full frames.

For example, picture a flow with two loaders, connected to a merge. The loader that is connected to the background input of the merge tool is set to process as NTSC fields. The foreground loader is set to process as full frames. The merge will output fields, and any tools connected to the output of the merge will process the footage as separate fields.

The merge tool is particularly good at handling foreground and background images with varying sizes, aspects and field orders. Most other multi-input tools (such as dissolve or channel boolean) should be given the same size and type of images.

Note

For this reason, the comp's overall process mode (whether or not the saved results contain fielded motion) is determined by default by the Process Mode setting of the bottom layer in the composition: the background or loader that all other layers are merged over. This can also be overridden by the Saver's Process Mode setting, as described below.

The basic rule of thumb is simple. Always set loaders to the type of footage they are loading. It is also recommended to set the generated source tools (such as background or fast noise) to Full Frames to maximize the amount of information available to Fusion.

Setting Field Processing For The Flow

Since the mode of the tool in the background-most position determines the processing and field ordering when merging images together, one way to guarantee the exact field order is to start every flow with a black Background tool set to the desired Process Mode, with its color set to R0, G0, B0, A0 (black with zero alpha).

Another way is to set the Process Mode in the Saver tool. This defaults to Auto, which uses the bottom layer's mode. It can, however, be forced to Full Frames, NTSC Fields or PAL Fields. This effectively overrides the default process mode and renders the entire flow with fielded motion or unfielded frames, as desired.

Note

This does not override how the images are loaded or created by other source tools, only how those images are processed. It changes whether the animation is rendered with field or frame accuracy, but not the size of the images that are processed. In all cases, footage will be re-interlaced, if necessary, and saved as frames on disk.

For example, a still image can be loaded as Full Frames, with title text (created with a full frame Text+ tool) merged over the top and set to scroll upward. Saving this using a Saver with Process Mode set to Auto will result in frame-accurate animation; the frames on disk will not contain fielded motion. Setting the Saver's Process Mode to NTSC Fields will result in field-accurate animation; the text will be merged over the background twice as often.

The resulting frames will have the appropriate scanlines extracted and interlaced together and the frames saved to disk will contain smoother, fielded motion.

As another example, consider a non-animated title (created as a full frame) merged over fielded video footage (loaded as fields). By default, this would be saved with field motion, as that is the mode of the bottom layer (the footage). Forcing the Saver's Process Mode to Full Frames renders only every second field: every other field of the video footage is dropped to avoid fielded motion in the results, and the remaining field is interpolated up to full frame size, then saved.
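
The rule at work in these examples can be summarized with a small sketch. This is illustrative Python, not Fusion's API; the function and mode names are assumptions made for the example:

 def resolve_process_mode(bottom_layer_mode, saver_mode="Auto"):
     """The flow inherits the process mode of its background-most
     tool, unless the Saver explicitly forces another mode."""
     if saver_mode != "Auto":
         return saver_mode
     return bottom_layer_mode

 # A full-frame title merged over fielded NTSC footage renders fielded...
 print(resolve_process_mode("NTSC Fields"))                  # NTSC Fields
 # ...unless the Saver forces full frames.
 print(resolve_process_mode("NTSC Fields", "Full Frames"))   # Full Frames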


Pixel Aspect

Pixel Aspect refers to the dimensions of an individual pixel. While computer screens and most film formats use square pixels, many video formats use rectangular pixels. When a computer is used to produce images for display on a device that does not use square pixels, it is important to account for the pixel aspect of the source material as well as the eventual display device.

Pixel aspect affects the way that images are displayed in the views, in addition to the way they are merged and transformed (especially rotation). If the pixel aspect of the footage is not set to match its format, the images will be distorted.

Fusion represents pixel aspect as a ratio between width and height. A pixel with an aspect of 1:1 is perfectly square. A pixel with an aspect of 1:2 is twice as high as it is wide. Listed below are the pixel aspects for some of the more common formats.

NTSC (D1) - 0.9:1
NTSC (widescreen) - 1.2:1
PAL (D1) - 1:0.9375
PALPlus (widescreen) - 1:0.703125

How Pixel Aspect Affects The Display Views

When an image is displayed with a non-square pixel aspect in the display views, Fusion must transform the image slightly to compensate for the fact that the image is viewed on a computer monitor with square pixels. This ensures that the correct geometry is always viewed; circular shapes are not squashed into ellipses, squares do not become rectangles.
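
For example, showing a frame undistorted on a square-pixel monitor means scaling its width by the pixel aspect. A quick worked example in Python (the helper function is ours, purely illustrative):

 def display_size(width, height, aspect_w, aspect_h):
     """Square-pixel size at which a frame must be shown so that
     its geometry is not distorted (width scaled by pixel aspect)."""
     return round(width * aspect_w / aspect_h), height

 # NTSC D1: 720 x 486 storage with 0.9:1 pixels -> 648 x 486 on screen.
 print(display_size(720, 486, 0.9, 1))     # (648, 486)
 # PAL D1: 720 x 576 storage with 1:0.9375 pixels -> 768 x 576 on screen.
 print(display_size(720, 576, 1, 0.9375))  # (768, 576)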

A very fast method is used to apply this transformation, which prevents it from slowing down the display and playback. The side effect to using a faster transformation is that the image quality may suffer slightly and occasional steps, or aliasing, may appear on diagonal edges.

If this type of aliasing appears in the image, try overriding pixel aspect compensation in the view by clicking on the display view toolbar button labelled 1:1.

Image:F61_FrameFormat_Smr.png

This will show the true square pixels without compensation. If the aliasing disappears, there will be nothing to worry about when the image is displayed in its final output format.

Alternatively, enable Smooth Resize (from the view context menu's Options submenu). This uses a filtered aspect transformation that eliminates the aliasing, but slows image display, sometimes quite noticeably.

Image:F61_FrameFormat_Smr2.png

Another way to detect apparent aliasing problems arising from aspect compensation is to view the image on a display that matches the image's pixel aspect. For example, when creating text for NTSC broadcast, viewing the image on the framebuffer of a supported device, such as the Bluefish444 or Video Toaster, will help identify whether there is a real problem.

Setting Pixel Aspect For Creator tools

When a new creator tool is added to the flow in the composition, the pixel aspect for the image created is determined by the default in the comp's frame format preferences. Set a new aspect for the creator tool with the Pixel Aspect control in the tool's Image tab. Right-clicking on the label of this control will display a menu of default values for each format, or a value can be typed in manually.

Image:F61_FrameFormat_BGAspect.png


Setting Pixel Aspect For Loader tools

The handling of Pixel Aspect for loaded footage is decided by the Pixel Aspect button array under the Import tab of the Loader's tool controls. The following options are available.

Image:F61_FrameFormat_LDAspect.png

From File
This is the default mode for new loaders. When the From File button is selected, Fusion first looks to see if the pixel aspect for the image is stored in the header of the image file on disk. Certain formats (notably TIFF) can contain this information. If the aspect is present, that value will be used. Otherwise, the default value specified by the frame format preferences for the flow is used.
Default
The pixel aspect defined in the flow's frame format preferences is used, even if pixel aspect information for the image is stored in the file header.
Custom
Clicking on this button will expose a pixel aspect control that can be used to manually set the aspect for this footage. Try right-clicking on this control to display a menu of default values for each format, or type a value in manually.

If the pixel aspect of a tool is uncertain, hold the mouse over the tool's tile in the flow and a tooltip will appear with details on the resolution, color depth and pixel aspect of the image last processed by that tool; the same information appears on the status bar.

Combining Images With Different Pixel Aspects

When combining footage with different pixel aspects with a Merge tool, the output of the tool will have the same pixel aspect as the background input, and the foreground input will be transformed to compensate, producing a correct image. Most other multi-input tools (such as dissolve and channel boolean) ignore pixel aspect differences.


Color Depth And Processing

In Fusion, the Color Depth of an image refers to the number of bits used to represent each color channel. An 8 bit color channel will therefore use 8 bits of data for each pixel in the Red, Green, Blue, and Alpha channels, for a total of 32 bits per pixel. The higher the color depth, the more colors are available to represent an image and the more accurate the manipulation of those colors will be.


Another factor to consider when discussing color depth is whether the color is stored and processed as integer values or floating point values. Integer processing limits the color so that it can never be brighter than white or darker than black. Float processing allows out of range data to exist. File formats such as OpenEXR, Cineon, HDRI and some variations of TIFF, FLX and RPF can all store out of range color values.

Supported Color Depths

8 Bit (Integer)
Generally, 8 Bit color processing is used for all video formats (although some equipment capable of producing video at 10 bit depths now exists). 8 bit is the lowest color depth and gradients produced at this depth may have visible steps that prevent them from appearing smooth. Also, significant gamma or color corrections may produce more visible banding.
16 Bit (Integer)
16 Bit color depth doubles the amount of precision used for images, eliminating problems with stepping in gradients and improving the resolution of the color. 16 bit color comfortably includes all of the colors that can be represented in standard digital film processing. Processing 16 bit per channel color takes longer and uses more memory than 8 bit processing.
Using 16 bit processing with footage that is stored in formats that only contain 8 bit data (for example, from TGA files, BMP files, AVI or QuickTime movies, etc.) will not reduce banding in the original data but will provide greater precision when processing the tool with additional effects. Additionally, many 10 bit and even 8 bit YUV video formats will see some benefit when loaded with 16 bits of RGBA rather than 8.
16 Bit (Float)
This color depth sacrifices some of the precision of 16 bit integer in exchange for the ability to store color values greater than white and less than black. This mode contains more than enough dynamic range for most film purposes, yet requires significantly less memory and processing time than 32 bit float images.
32 Bit (Float)
32 bit float is like 16 bit float, except with a much greater range of values for even greater precision.
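
The trade-offs are easier to see with a little arithmetic; here is a quick illustrative calculation in Python:

 # Distinct levels per channel and storage per RGBA pixel for the
 # integer depths; float depths trade levels for range instead.
 for bits in (8, 16):
     levels = 2 ** bits
     print(f"{bits}-bit integer: {levels} levels/channel, "
           f"{bits * 4} bits per RGBA pixel")
 # 8-bit integer: 256 levels/channel, 32 bits per RGBA pixel
 # 16-bit integer: 65536 levels/channel, 64 bits per RGBA pixel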

Setting Color Depth

As with image resolution, field order and pixel aspect, each loader or creator tool can define the color depth used to process the image it produces. The frame format preferences are used to set a default value for color depth, applied when a source tool is added to the flow.

There are three drop-down menus to configure color depth in the preferences. They specify the different color depths for the interactive session, final renders and preview renders.

Setting the Interactive and Preview depth to 8 bits per channel can improve system performance while assembling a flow, allowing the work to be faster, while final renders still get full 16 bit color resolution.

We do not recommend mixing integer and floating point color depths between interactive and final renders, since the way that float handles out of range values may produce significantly different results from integer processing.

When adding a creator tool such as background or text, the composition's frame format preferences are used to determine its color depth. Later, if the color depth in the composition's preferences is changed, the color depth of that tool will also change.

Override the default color depth used for source tools by setting the Color Depth control located under the tool's Common Controls tab. When any option other than Default is specified, that image will always be processed at the specified depth, regardless of the setting in the preferences.

Image:F61_FrameFormat_DepthBG.png‎

Loaders handle color depth somewhat differently than creator tools. The Color Depth control for loaders is found under the Import tab, not under the Common Controls tab. There is an additional option in the button array, labelled Format. This is the default setting for new loaders.

Image:F61_FrameFormat_DepthLD.png‎

Fusion uses the color depth that makes the most sense for the file format stored on disk. For example, load a TGA file from disk and the color depth for the tool will be set to 8 bits per channel. The TGA format is an 8 bit format so loading the image at a greater color depth would generally be wasteful. If a 16 bit TIFF is loaded, the color depth will be 16 bits. Loading a Cineon or DPX file defaults to 32 bit float, whereas OpenEXR generally defaults to 16 bit float.

As with source tools, setting the Color Depth control to Default will force the loader to process at whatever color depth is configured in the flow's preferences. The other color depths in the array can be selected to force that image to process at a specified color depth, regardless of the preference or format.

If the color depth at which a tool processes is uncertain, place the mouse over the tool's tile in the flow and a tooltip will appear listing the color depth for that tool; the same information appears on the status bar.

Image:F61_FrameFormat_ToolTip.png‎

Combining Images With Different Color Depths

Fusion can combine images with different color depths together in a single composition. When images of different color depths are combined, the image from the foreground input of the tool will be adjusted to match the color depth of the background.

Advantages Of Floating Point Processing

Floating point images use significantly more RAM and take much longer to render than any other color depth. So what advantage does Floating Point Processing offer to offset the disadvantages?

There are two major advantages to floating point processing. The first is that floating point values do not lose accuracy through integer rounding. The second is the preservation of out of range values.

Accuracy

To demonstrate how floating point values prevent loss of accuracy caused by integer rounding, imagine an 8 bit pixel that has a red value of 75. An 8 bit color value ranges from 0 (black) to 255 (white), so our pixel is a bit dark.

Imagine that the gain of that pixel is halved with a color gain tool. The pixel's red value is now half of 75, or 37.5. We cannot, however, store decimal or fractional values in 8 bits, so that value must be rounded to 37. If we then double the brightness of the pixel with another color gain tool, the pixel's red value becomes 37 multiplied by 2, or 74.

We lost a full value of precision due to integer rounding in this example, a problem which can result in visible banding over several color corrections. Similar problems arise when merging images together, or transforming them. The more operations that are applied to an image the more color precision is lost to rounding. The problem is much less noticeable at 16 bits per channel, although only because the differences are much smaller.

In floating point processing, black is represented as 0.0 and white as 1.0 (though, as discussed below, values outside this range can also be stored). The value for our example pixel was 75 in 8 bit processing, but the same pixel processed in floating point color depth would have a value of 0.2941176 (75 divided by 255).

Because floating point processing allows decimal or fractional values for each pixel, it is not required to round off the values of the pixel to the closest integer. As a result, color precision remains virtually perfect, regardless of how many operations are applied to an image.
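
This round trip is trivial to reproduce; the following Python sketch mirrors the example above (truncation stands in for the integer rounding):

 # Integer pipeline: every intermediate result must be a whole 8-bit value.
 pixel = 75
 pixel = int(pixel * 0.5)    # halve the gain: 37.5 truncates to 37
 pixel = pixel * 2           # double it again: 74, not the original 75
 print(pixel)                # 74 -- one level lost to rounding

 # Float pipeline: fractional values survive between operations.
 pixel = 75 / 255            # 0.2941176...
 pixel = pixel * 0.5         # halve the gain
 pixel = pixel * 2           # double it again
 print(round(pixel * 255))   # 75 -- no precision lost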


Out Of Range Values

For this example, imagine an 8 bit pixel that has a red value of 200 (bright red). Now add a color gain tool and double the brightness of the red channel. The result is 200 x 2 = 400. As mentioned above, however, 8 bit color values are limited to a range of 0 through 255, so the pixel's value will be clipped to 255, or pure red. If the brightness of the result is then halved, the result will be half of 255, or 127 (rounded), instead of the original value of 200.

When processing floating point colors, pixel values greater than white or darker than black are maintained. There is no value clipping. The pixel is still shown in the display view as pure red but, if float processing is used instead of 8 bit, the second operation where the gain was halved would have restored the pixel to its original value of 200.
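
Again, the behavior is easy to reproduce in a few lines of Python (an illustrative sketch of the arithmetic, not Fusion's internals):

 # 8-bit integer pipeline: the gain boost clips at 255 and the detail is gone.
 pixel = 200
 pixel = min(pixel * 2, 255)   # double the gain: 400 clips to 255
 pixel = pixel // 2            # halve it again: 127, not 200
 print(pixel)                  # 127

 # Float pipeline: the out-of-range value is carried, so nothing is lost.
 pixel = 200 / 255
 pixel = pixel * 2             # 1.568... -- brighter than white, but kept
 pixel = pixel * 0.5           # back below 1.0
 print(round(pixel * 255))     # 200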


Clipping Out Of Range Values

When processing in floating point, there may be situations where the out of range values in an image need to be clipped. The Brightness/Contrast tool provides checkboxes that can be used to clip out of range values to 0 or 1.

Image:F61_FrameFormat_Clip.png‎

For example, there may be files that contain out of range alpha values. Since the alpha channel represents the opacity of a pixel, it makes little sense for it to be more than completely transparent or more than fully opaque, and compositing such an image may lead to unexpected results. To easily clip alpha values below 0 and above 1, add a Brightness/Contrast tool set to Clip Black and Clip White, with only the Alpha checkbox selected.

Alternatively, clip the range by adding a Change Depth tool and switching to one of the integer color depths.


Detecting Out Of Range Values

Although floating point processing preserves out of range values, they are still shown as black or white on the display views. This can make it difficult to determine the overall dynamic range of an image. To discover if there are out of range values in a viewed image, click on the Show Full Color Range button in the display view toolbar.

Image:F61_FrameFormat_Normalize.png‎

Enabling this display mode will rescale the color values in the image so that the brightest color in the image is remapped to white and the darkest is remapped to black. Out of range colors will be brought back into visible range, and the overall contrast of the image will be lowered as a result. If there are out of range values in the image, the image will appear washed out while displayed in this mode.
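
Conceptually, this normalization is a simple rescale. A minimal NumPy sketch of the idea (ours, not Fusion's actual implementation):

 import numpy as np

 def show_full_color_range(img):
     """Rescale so the darkest value maps to 0.0 (black) and the
     brightest to 1.0 (white), pulling out-of-range values into view."""
     lo, hi = img.min(), img.max()
     return (img - lo) / (hi - lo)

 # A float image with a super-white highlight at 2.0:
 img = np.array([[0.0, 0.5], [1.0, 2.0]])
 print(show_full_color_range(img))   # values compressed into [0, 1]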

The 3D Histogram view type can also be helpful in visualizing out of range colors in an image. See the Display Views chapter for details.


When To Use Floating Point

Out of range values are frequently used in film work, as film has a very high dynamic range and maintains color detail even in heavily over- or under-exposed frames. While it is possible to work with film frames using integer data, doing so means either clipping all out of range values, which loses all detail in the image highlights and shadows and reduces the range of possible color corrections, or applying tricks to fit more of the color data within the limited integer range while still processing and viewing the image as it is intended to appear. Float processing makes working with film considerably easier by preserving highlight detail without the need for such tricks.

Image:F61_FrameFormat_ChangeDepth.png‎

Images with brighter than white color detail can also give very nice, natural results when blurred, glowed, color corrected, or even just when faded or dissolved.

There is also some value to using float color depths with NTSC, PAL or HD video. A good time to use float processing with video is when there will be a lot of color correction. Using float will help maintain precision by avoiding the rounding errors common to 8 bit processing (as described above).