Texture Depth Preference

(TODO - add screenshot)

The Texture Depth preference controls how Fusion stores image data on your graphics card. You will want to change it if you are working with float16/float32 footage on a non-Nvidia video card and:

  • you find playback to be too slow
  • you are working with shader LUTs that affect out-of-range colors, and those values are being crushed to [0, 1] in the view

This preference only applies to images viewed in 2D views (not 3D views). It was added in Fusion 5.1; before that, the behavior was effectively hardcoded to 'Auto'.


To understand this preference, you need to know what happens when an image is viewed in Fusion:

1) If hardware LUTs are disabled, a software LUT is applied to the image data. Hardware LUTs can be disabled using the Disable View LUT Shaders preference.

2) The image's data is uploaded into a texture on the graphics card.

3) If hardware LUTs are enabled, Fusion creates a hardware shader LUT and applies it.

4) The texture is then displayed in the view.

This preference affects what happens in step (2). Like images, textures may have different datatypes such as int8/int16/float16/float32.
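
As a rough illustration (not Fusion source code), here is how an image's datatype could map onto an OpenGL texture internal format at upload time. The enum and function names are made up for this sketch; the GL_* names are the standard OpenGL internal-format tokens for those depths, though whether Fusion uses exactly these formats is an assumption here.

    #include <cstdio>

    // Rough sketch (not Fusion source) of what "texture datatype" means in
    // step (2): at upload time the pixels land in an OpenGL texture whose
    // internal format fixes the per-channel storage.
    enum class ImageDepth { Int8, Int16, Float16, Float32 };

    const char *InternalFormatFor(ImageDepth depth)
    {
        switch (depth)
        {
            case ImageDepth::Int8:    return "GL_RGBA8";    //  8-bit integer per channel
            case ImageDepth::Int16:   return "GL_RGBA16";   // 16-bit integer per channel
            case ImageDepth::Float16: return "GL_RGBA16F";  // half-float per channel
            case ImageDepth::Float32: return "GL_RGBA32F";  // full-float per channel
        }
        return "GL_RGBA8";
    }

    int main()
    {
        // A float16 image uploaded at its native depth:
        std::printf("float16 image -> %s texture\n",
                    InternalFormatFor(ImageDepth::Float16));
        // The same image forced to int8 lands in GL_RGBA8, which is where
        // out-of-range values get clamped:
        std::printf("float16 image forced to int8 -> %s texture\n",
                    InternalFormatFor(ImageDepth::Int8));
    }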


Tweaking the Texture Depth preference affects two things:

LUTs: The datatype of the texture affects how LUTs are applied. For example, if you are viewing an HDR float16 image but Fusion is using an int8 texture, all the values outside [0, 1] get crushed. This is a problem if you are using LUT shaders, because the LUT shader is then applied to the already-crushed colors.
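
As a small, self-contained illustration of that crushing (a sketch of 8-bit quantization, not Fusion code), converting HDR values into int8 storage clamps everything to [0, 1] before any LUT shader ever sees it:

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>

    // Illustration only: 8-bit storage holds [0, 1] mapped onto 0..255,
    // so anything outside that range is clamped before it is stored.
    uint8_t ToInt8(float v)
    {
        float clamped = std::min(1.0f, std::max(0.0f, v)); // out-of-range values are lost here
        return static_cast<uint8_t>(clamped * 255.0f + 0.5f);
    }

    int main()
    {
        const float hdrValues[] = { -0.25f, 0.5f, 1.0f, 2.5f, 16.0f }; // e.g. pixels from a float image
        for (float v : hdrValues)
        {
            uint8_t stored = ToInt8(v);
            std::printf("image value %6.2f -> int8 texture %3u -> view reads back %.3f\n",
                        v, (unsigned)stored, stored / 255.0f);
        }
        // 2.50 and 16.00 both read back as 1.000, so a LUT shader applied
        // afterwards only ever sees the crushed values.
    }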

Playback Performance: The datatype of the texture also affects playback performance. On some cards playback is faster if you match the texture datatype to the image datatype, while on other cards playback is faster if you use an int8 texture.

During the Fusion 5.0 beta, users had playback problems, so we profiled many video cards and found that some had poorer than expected floating point upload performance. In particular, Nvidia cards performed well, but ATI Radeon cards achieved only half the bandwidth with floating point data that they did with int8 data, and other non-Nvidia/non-ATI cards also had driver problems with floating point data. ATI did not support the ARB_pixel_buffer_object extension that we had found improved playback performance on Nvidia cards. Furthermore, because of unimplemented OpenGL extensions, ATI drivers would not accept float16 data directly, so we were forced to do a time-consuming float16->float32 conversion and hand the float32 data to OpenGL, which then put it into a float16 texture. For these reasons we forced float32/float16 to int8 on non-Nvidia cards. That was the situation as of early 2006; we have not tested non-Nvidia cards since then to see whether newer drivers or newer cards have improved floating point playback.
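
To give a sense of why that workaround was expensive, the sketch below shows a generic IEEE 754 half-to-float decode (not the actual Fusion code); every channel of every pixel of every frame had to go through something like this on the CPU before upload:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Generic IEEE 754 half-to-float decode, shown only to illustrate the
    // per-pixel bit manipulation a CPU-side float16->float32 conversion implies.
    float HalfToFloat(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h & 0x8000) << 16;
        int      exp  = (h >> 10) & 0x1F;
        uint32_t man  = h & 0x3FF;
        uint32_t bits;

        if (exp == 0)
        {
            if (man == 0)
            {
                bits = sign;                                   // signed zero
            }
            else
            {
                exp = 1;                                       // subnormal half: renormalize
                while ((man & 0x400) == 0) { man <<= 1; --exp; }
                man &= 0x3FF;
                bits = sign | (uint32_t)(exp + 112) << 23 | man << 13;
            }
        }
        else if (exp == 31)
        {
            bits = sign | 0x7F800000u | man << 13;             // Inf / NaN
        }
        else
        {
            bits = sign | (uint32_t)(exp + 112) << 23 | man << 13; // normal number
        }

        float f;
        std::memcpy(&f, &bits, sizeof f);                      // reinterpret the bit pattern
        return f;
    }

    int main()
    {
        // 0x3C00 is 1.0 in half precision; 0x4000 is 2.0.
        std::printf("%g %g\n", HalfToFloat(0x3C00), HalfToFloat(0x4000));
    }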


Here is what the 3 options do:

Auto - put floating point images into a floating point texture on Nvidia cards, but into an int8 texture on other cards
int8 - put floating point images into an int8 texture (crushing the out-of-range colors)
Native - put floating point images into a floating point texture (preserving all the out-of-range colors)
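
Putting it together, the behaviour described above can be sketched roughly as follows; the names are hypothetical and this is an illustration of the documented behaviour for floating point images, not Fusion source code:

    #include <cstdio>

    // Hypothetical sketch of the decision described above, not Fusion source.
    enum class ImageDepth       { Int8, Int16, Float16, Float32 };
    enum class TextureDepthPref { Auto, Int8, Native };

    // Returns the datatype the texture gets in step (2) for a given image.
    ImageDepth TextureDepthFor(ImageDepth image, TextureDepthPref pref, bool isNvidia)
    {
        bool isFloat = (image == ImageDepth::Float16 || image == ImageDepth::Float32);
        if (!isFloat)
            return image; // the options above only describe floating point images

        switch (pref)
        {
            case TextureDepthPref::Int8:   return ImageDepth::Int8; // crushes out-of-range colors
            case TextureDepthPref::Native: return image;            // preserves out-of-range colors
            case TextureDepthPref::Auto:   return isNvidia ? image : ImageDepth::Int8;
        }
        return image;
    }

    int main()
    {
        bool nvidia = false; // e.g. an ATI Radeon
        ImageDepth d = TextureDepthFor(ImageDepth::Float16, TextureDepthPref::Auto, nvidia);
        std::printf("Auto on a non-Nvidia card stores float16 images in %s textures\n",
                    d == ImageDepth::Int8 ? "int8" : "floating point");
    }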