
The typical workflow for picking is:

1 View StereoAlign in the left view.

2 View the node upstream of StereoAlign in the right view.

3 Pick the Disparity value from the left eye in the right view.


Although this picking functionality does not operate any differently from normal picking of color channels, this issue may cause some confusion. If it helps, the analogous workflow mistake with color nodes would be a user trying to pick a gradient color for a Background node from a view showing the Background node itself (you are trying to pick a color for a node from its own output).

Another issue that you need to be aware of is which eye you are picking. To avoid problems, it’s a good idea to always pick from the left eye. The reason is that the Disparity channels for the left and right eyes are different, and when you pick from a horizontal/vertical stereo stack, Fusion has no way of knowing whether you picked the Disparity value from the left or right eye.

These are not hard and fast rules; rather, they are guidelines to keep you from shooting yourself in the foot. If you follow the reasoning above, you will see there are exceptions, such as picking Disparity from the left output of DisparityToZ, or Z from the left/right output of ZToDisparity, where everything works as expected.

Vector and Disparity Channels

The Vector and BackVector channels store the forward and reverse optical flow.

The Vector channel might be better named “forward vector” or “forward flow,” since the name “Vector” on its own is not technically precise: as the more mathematically inclined user might recognize, all of the channels except the scalar channels Z/ID are technically “vector” channels. A frame’s Vector aux channel stores the flow forward from the current frame to the next frame in the sequence, and its BackVector aux channel stores the flow backward from the current frame to the previous frame. If either the previous or next frame does not exist (either not on disk, or the global range of a Loader does not allow OpticalFlow to access it), Fusion fills the corresponding channel with zeros (transparent black).

The Disparity channel stores the displacement vectors that match pixels in one eye to the other eye. The left image’s Disparity channel will contain vectors that map left > right and the right image’s Disparity channel will contain vectors that map right > left.

For example:

(xleft, yleft) + (Dleft.x, Dleft.y) -> (xright, yright)
(xright, yright) + (Dright.x, Dright.y) -> (xleft, yleft)


You would expect for non-occluded pixels that Dleft = -Dright, although due to the disparity generation algorithm, this is only an approximate equality. Note that Disparity stores both X and Y values because rarely are left/right images perfectly registered in Y, even when taken through a carefully set up camera rig.
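The mapping above can be sketched in a few lines of Python. The NumPy arrays and helper names used here (left_disp, right_disp, map_left_to_right) are assumptions for illustration only, not part of Fusion’s scripting API; they simply show how a Disparity vector carries a pixel from one eye to the other, and how the Dleft ≈ -Dright expectation can be checked for a non-occluded pixel.

```python
import numpy as np

def map_left_to_right(x_left, y_left, left_disp):
    """Follow the left eye's Disparity vector to the matching pixel in
    the right eye. left_disp is assumed to be an (H, W, 2) array of
    (D.x, D.y) values stored as un-normalized pixel shifts."""
    dx, dy = left_disp[y_left, x_left]
    return x_left + dx, y_left + dy

def check_round_trip(x_left, y_left, left_disp, right_disp, tol=1.0):
    """For a non-occluded pixel, D_left should be approximately -D_right,
    so a left -> right -> left round trip should land near the start."""
    x_right, y_right = map_left_to_right(x_left, y_left, left_disp)
    dx, dy = right_disp[int(round(y_right)), int(round(x_right))]
    x_back, y_back = x_right + dx, y_right + dy
    return abs(x_back - x_left) <= tol and abs(y_back - y_left) <= tol
```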

Disparity and Optical Flow values are stored as un-normalized pixel shifts. In particular, note that this breaks from Fusion’s resolution-independent convention. After much consideration, this convention was chosen so the user wouldn’t have to worry about rescaling the Disparity/Flow values when cropping an image, or about working out scale factors when importing/exporting these channels to other applications. Because the Flow and Disparity channels store their values as pixel shifts, this can cause problems with Proxy and AutoProxy. The convention Fusion follows is that, for proxied images, these channels store unscaled pixel shifts valid for the full-sized image. So if you wish to access the disparity values in a script or via a probe, remember to always scale them by (image.Width/image.OriginalWidth, image.Height/image.OriginalHeight).
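For example, a script reading values from a proxied image might apply that scaling as in the sketch below. The Width/OriginalWidth attribute names follow the convention quoted above; the helper function itself is only an illustration, not part of Fusion’s scripting API.

```python
def proxy_scaled_disparity(disp_x, disp_y, image):
    """Disparity/Flow channels store pixel shifts valid for the
    full-sized image, so values sampled from a proxied image must be
    scaled down to the proxy resolution before use.

    `image` is assumed to expose Width/Height (current, possibly
    proxied size) and OriginalWidth/OriginalHeight (full size)."""
    scale_x = image.Width / image.OriginalWidth
    scale_y = image.Height / image.OriginalHeight
    return disp_x * scale_x, disp_y * scale_y
```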

When using Vector and BackVector aux channels, remember that all nodes expect these aux channels to be filled with the flow between sequential frames.

More precisely, if you have a sequence of three frames A, B, C, then:

B’s Vector channel will contain the flow B > C
B’s BackVector channel will contain the flow B > A
A’s Vector channel will contain the flow A > B
A’s BackVector channel is written with zeros, as there is no frame before A
C’s Vector channel is written with zeros, as there is no frame D to flow C > D
C’s BackVector channel will contain the flow C > B

NOTE: Currently DoD/RoI is not supported for all Fusion nodes.

When working with these channels, it is the user’s responsibility to follow these rules (or, for advanced users, to knowingly break them). If the channels do not follow this convention, nodes like TimeStretcher will not function correctly, since they expect the channels to contain the flow forward/backward by exactly one frame.