|PFTrack Documentation||Node Reference|
The Disparity Solver node can be used to estimate a disparity vector for each pixel in a left/right eye stereo pair. It has two inputs and two outputs, corresponding to the left-eye (first input) and right-eye (second input) of the stereo pair. The clips attached to each input must be of the same resolution and length.
Disparity maps generated by this node can be passed downstream for use elsewhere. The Keystone Fix, Colour Match, Sharpness Match, Disparity Adjust and Disparity-To-Depth nodes all require disparity maps at each frame to perform their processing.
The disparity at a pixel is a 2D vector pointing to the location of the same scene point in the other eye. Once disparity is known (from every pixel in the left-eye to a pixel in the right-eye, and vice-versa), it can be used to adjust the stereo pair in various ways, such as changing the perceived stereo effect or matching the colour of one eye to the other. It can also be converted to a depth value at every pixel once the relationship between the stereo cameras has been calibrated.
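As a rough illustration of the disparity-to-depth relationship mentioned above, here is a generic sketch for a parallel camera rig, not PFTrack's implementation; the function name, parameter names and units are assumptions:

```python
# Illustrative sketch (not PFTrack code): for a parallel stereo rig with
# calibrated cameras, depth follows the standard pinhole relation
#   depth = focal_length * baseline / disparity
def disparity_to_depth(disparity_px, focal_length_px, baseline):
    """Return depth in the same units as the baseline.

    disparity_px: horizontal disparity magnitude in pixels.
    focal_length_px: camera focal length expressed in pixels.
    baseline: distance between the two camera centres.
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinite distance.
        return float("inf")
    return focal_length_px * baseline / disparity_px
```

For example, with a focal length of 1000 pixels and a 6.5 cm baseline, a 10-pixel disparity corresponds to a depth of 650 cm.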
The following screengrabs show the disparity vectors overlaid on the left-eye of a stereo pair, along with a false-colour representation of the disparity vector, where the vector direction is encoded as a colour and the vector magnitude is encoded as intensity. Red arrows (and white pixels) correspond to occlusions, where the pixel in the left eye does not match to any pixel in the right eye, because the true match is either hidden behind another object or is not within the camera frustum and therefore not visible in the image.
For stereo pairs shot using parallel cameras, the disparity vector should be horizontal with no vertical component. The disparity vector for each pixel in the left-eye will point to the left (negative), and the disparity vector for each pixel in the right-eye will point to the right (positive), as illustrated in the left-hand figure below. The magnitude of each disparity vector is proportional to the distance from the cameras to the object, with zero disparity corresponding to the object being an infinite distance from the cameras.
For stereo pairs shot using converging cameras, the direction and magnitude of the disparity vectors will depend on whether the object is in front of or behind the convergence plane. For objects closer than the convergence point (middle figure below), the disparity vector in the left-eye will point left (negative), and in the right-eye it will point right (positive) as before. For objects at the same distance as the convergence point, the disparity vector will be zero, and for objects further than the convergence point (right-hand figure below) the direction of the disparity vectors is reversed, with vectors in the left-eye pointing right and vice-versa.
When converging cameras are used to shoot stereo, the disparity vectors will also have a non-zero vertical component to them (especially towards each corner of the image). This is due to the natural properties of perspective projection when the camera viewing axes are not aligned.
Overall performance is greatly affected by the resolution of the input images, with high resolution images taking longer to process than lower resolutions. For many shots, disparity can be calculated at a lower resolution and automatically scaled back to full resolution without affecting the accuracy of the results. This allows disparity to be calculated much more quickly than at full resolution. The region of interest for the current eye can also be set by editing it in the clip input node. Pixels outside this region will be ignored during disparity calculations.
As disparity is calculated for each frame between the left and right-eyes (and vice-versa), it is stored to disk using a compressed binary format. The Quantization parameter controls how much this data is compressed when written to disk. The actual size of each file will vary according to the image content, but for a typical 1920x1080 HD image the amount of data stored for each frame is as follows:
- No quantisation: 2 x 4 MB per frame.
- 1/100th pixel: 2 x 3.25 MB per frame.
- 1/10th pixel: 2 x 1.25 MB per frame.
- 1 pixel: 2 x 0.35 MB per frame.
Note that as the quantisation level increases, the accuracy at which the disparity vectors are stored decreases. For certain operations (such as colour matching) this is unlikely to affect the quality of the final results, but for other tasks, such as adjusting the stereo effect using the Disparity Adjust node or converting the disparity vectors to depth values using the Disparity-To-Depth node, care should be taken that the results are not adversely affected by increasing the level of quantisation.
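The trade-off described above can be sketched as follows. This is a hypothetical illustration of the general idea of quantising to a fixed fraction of a pixel, not PFTrack's storage format:

```python
# Hypothetical sketch of disparity quantisation: rounding each stored
# value to the nearest 1/100th, 1/10th or 1 pixel trades accuracy for a
# smaller compressed file on disk.
def quantise(disparity, step):
    """Round a disparity value to the nearest multiple of `step` pixels."""
    return round(disparity / step) * step
```

For instance, a disparity of 12.3456 pixels stores as roughly 12.35 at 1/100th-pixel quantisation, but as 12 at 1-pixel quantisation, a loss of about a third of a pixel.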
Once disparity has been calculated, it can be displayed in the Cinema window in several forms. The Display mode menu can be used to choose between Vectors, which shows each disparity vector as an arrow; Colours, which encodes the direction of each vector as a colour hue and its magnitude as intensity; Grey-Scale, which encodes the magnitude of each vector as pixel intensity; and Alignment Error, which displays the difference between the input image and a warped version of the image from the other eye, showing where the disparity vectors are unable to match the image correctly.
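The Colours mode follows a common vector-field visualisation technique. The sketch below shows the general approach (an assumption about the mapping, not PFTrack's exact implementation):

```python
import colorsys
import math

# Illustrative sketch of a direction-as-hue, magnitude-as-intensity
# encoding for a disparity vector (dx, dy).
def disparity_to_rgb(dx, dy, max_magnitude):
    # Map the vector angle onto the hue circle [0, 1).
    hue = (math.atan2(dy, dx) + math.pi) / (2 * math.pi)
    # Map the vector magnitude onto intensity, capped at 1.0.
    value = min(math.hypot(dx, dy) / max_magnitude, 1.0)
    return colorsys.hsv_to_rgb(hue, 1.0, value)
```

A zero-length vector maps to black, and vectors of equal direction but different magnitude share a hue while differing in brightness.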
Masks can be used to exclude areas of the image from the disparity calculation, or to indicate boundaries over which disparities will not be smoothed during the calculations. Each mask that is connected to the node will appear in the Mask list, where its behaviour can be controlled. The ordering of individual masks is important, because it specifies the relative depth ordering of the objects defined by each mask. Masks at the top of the mask list define objects that are farther away from the camera than masks at the bottom of the list.
When a mask is set to Exclude, pixels covered by the mask will be excluded from the disparity calculation. This is often useful when the shot contains a region of green-screen. The top-left image below shows the left-eye of a green-screen stereo pair with an overlaid mask (generated elsewhere using a colour keyer and imported into PFTrack as an image-based mask). The disparity map that is generated when the mask is set to Exclude is shown at the bottom-left.
When a mask is set to Boundary, pixels covered by the mask are assumed to belong to an object which exists at a significantly different depth from the pixels outside the mask. Boundary masks can prevent smoothing between areas of the image inside and outside the mask. The bottom-right image below shows the disparity map that is generated when the mask is set to Boundary. Comparing this to the disparity map generated when no masks are used, it is clear that the disparity estimates around the actor's head have been improved; previously they could not be estimated accurately because of the lack of detail in the green-screen areas of the image.
Note that masks should be created in both the left and right-eye clips of the stereo pair.
The disparity histogram in the centre of the interface displays a graph illustrating the binocular disparity values in the current frame. The histogram shows the number of pixels that have a particular disparity value, in the range (-max, +max), where max is the Max Disparity parameter value, and can be used to better understand the distribution of scene elements with respect to the convergence plane. A large peak in the disparity histogram means that many pixels in the image have that particular disparity value. The Max Disparity parameter can be changed by dragging the vertical yellow lines to the left or right using the left mouse button.
The image below shows disparity histograms for the left-eye in three separate frames in a clip. This clip was shot with a converging camera rig, and the first frame illustrates that there are scene elements distributed fairly evenly in front of (i.e. nearer than) and behind (i.e. further away than) the convergence plane, at which the disparity of a pixel will be zero. There are larger peaks at positive disparities, which indicate that more of the scene is further away from the camera than the convergence point.
The middle histogram is taken from later in the clip. From this, it can be seen that almost all of the elements in the scene have now moved further away from the camera than the convergence point. Finally, the right-hand histogram is taken from the end of the clip, where there is now a clear separation between scene elements much nearer to the camera than the convergence point, and other elements further away.
Note that when viewing the disparity histogram for the right-eye, the direction of disparity will be reversed, so positive disparity values become negative and vice-versa.
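The histogram described above amounts to counting pixels per disparity bin within the Max Disparity range. A minimal sketch of the idea (illustrative only; the function name and integer binning are assumptions):

```python
# Count how many pixels fall into each integer disparity bin between
# -max_disparity and +max_disparity, clamping values outside the range
# into the outermost bins.
def disparity_histogram(disparities, max_disparity):
    bins = {d: 0 for d in range(-max_disparity, max_disparity + 1)}
    for d in disparities:
        clamped = max(-max_disparity, min(max_disparity, round(d)))
        bins[clamped] += 1
    return bins
```

Values beyond the range pile up in the edge bins, which is exactly the clamping effect discussed in the next paragraphs.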
The disparity histogram can also be used to help set the Max Disparity parameter. After navigating to a frame with a strong stereo effect, the left-hand image below shows the disparity histogram generated for that single frame using an initial Max Disparity value of around 30 pixels. After increasing the Max Disparity parameter by dragging the right-hand yellow line to the right, it can be seen that many pixels in the image have been assigned the maximum value, indicating that the maximum value is probably incorrect.
After re-calculating the disparity map for the same frame using a new Max Disparity value of around 48 pixels, there is still a large peak at the maximum value so the parameter must be increased again (middle image). Only after increasing the Max Disparity value to just above 60 pixels and re-calculating the disparity map does the histogram show that no clamping is occurring (right-hand image). This indicates that this Max Disparity value is suitable for the current frame.
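The "keep increasing until no clamping occurs" check described above can be expressed as a simple rule of thumb. This sketch is an assumption about how such a check could work, not a PFTrack feature:

```python
# Heuristic: if a noticeable fraction of all pixels sits in the
# outermost histogram bins (at +/- max_disparity), real disparities are
# probably being clamped and Max Disparity should be increased.
def is_clamping(histogram, max_disparity, threshold=0.01):
    """histogram maps integer disparity bin -> pixel count."""
    total = sum(histogram.values())
    edge = histogram.get(max_disparity, 0) + histogram.get(-max_disparity, 0)
    return total > 0 and edge / total > threshold
```

A histogram with 10% of its pixels at the maximum bin would trigger the check, while one tapering smoothly to zero before the edges would not.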
The vertical range in the disparity graph can be adjusted by enabling the Use logarithmic scale button. This can be useful to see the distribution of less frequent disparity values more easily in the histogram graph.
Proxy: Set the proxy resolution for generating the disparity map. This can be used to trade accuracy against processing time. The default setting is Half, indicating that the disparity map will be generated at half resolution and then automatically scaled up to full resolution.
Quantization: The quantisation level to which disparity maps will be compressed when storing data to disk. This can be used to trade accuracy against storage space. The default setting is 1/100th pixel.
Frame range: The processing range for disparity calculations. Options are Clip, to generate a disparity map for each frame in the clip; From/To to generate a disparity map for a specific range of frames; and Current to generate a disparity map for the current frame only.
From: Set the From frame to the current frame when the frame range is set to From/To. The frame number can also be adjusted in the edit box.
To: Set the To frame to the current frame when the frame range is set to From/To. The frame number can also be adjusted in the edit box.
Clear: Clear all disparity maps from the node and remove the corresponding binary data files from disk.
Solve: Start the solver, generating disparity maps for the frames specified in the Frame range menu. To run the solver in background threads, hold the Shift key whilst clicking the button.
Edit ROI: Adjust the region of interest in the Cinema window using the left mouse button to restrict where disparity is calculated.
Use masks: Use the masks in the Masks list to influence the disparity solver.
Max disparity: The maximum magnitude to which disparity will be clamped at each pixel. The default value is 100, indicating that any disparity value outside the range -100 to 100 will be clamped. The Max disparity value can also be changed by dragging the vertical yellow lines in the Disparity Histogram.
Red, Green and Blue weight %: The relative influence of each colour channel to the disparity calculations. These can be increased or decreased when colour channels contain more or less useful information. The default values are 30% red, 60% green and 10% blue.
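The channel weighting above can be pictured as a weighted average of the colour channels into a single matching signal. This is an illustrative sketch of the general idea, not PFTrack's internal computation; the default percentages are taken from the documentation:

```python
# Combine colour channels into one intensity using relative weights
# (defaults mirror the documented 30% red, 60% green, 10% blue).
def weighted_intensity(r, g, b, wr=0.30, wg=0.60, wb=0.10):
    total = wr + wg + wb
    return (wr * r + wg * g + wb * b) / total
```

Raising the weight of a channel makes detail in that channel count for more when matching pixels between the two eyes, which is useful when, for example, the blue channel is noisy.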
Smoothness %: The amount of smoothing that will be applied to the disparity map. Increasing this value can reduce the amount of noise and errors in areas of the image where there is not enough detail to accurately estimate disparity. The default value is 30%.
Stereo weight %: The amount to which disparity values should be constrained to the current camera configuration if available. If the left and right-eye cameras have been tracked (or generated using the Build Stereo Camera node), increasing this value can reduce errors caused by incorrect disparity values. When no cameras are available and this value is increased above the default of 0%, it will be assumed that the stereo pair was shot using a parallel camera rig.
Update display whilst solving: When this option is enabled, the Cinema window will be updated to show the results of the disparity calculations at each frame. Note that this can increase the overall time it takes to process an entire clip; when speed is important, disabling this option will reduce the overall processing time.
Name: Displays the name of each active mask.
Colour: Displays the mask overlay colour for each active mask. Double click in this column of the selected mask to change its overlay colour.
State: Right-click in this column to change the behaviour of each mask to either Exclude or Boundary. When set to Exclude, no disparity estimate will be produced for pixels covered by the mask. When set to Boundary, no smoothing will be performed across the edge of the mask.
Move Closer: Move the selected mask closer towards the camera. Note that the ordering of masks is important only if they overlap.
Move Away: Move the selected mask away from the camera. Note that the ordering of masks is important only if they overlap.
Current clip: The current left or right-eye clip that is displayed in the Cinema window.
Display mode: The type of overlay that is used to display the disparity map for the current frame. Options are None, Vectors, Colours, Grey-scale and Alignment Error.
Vector density: The density of arrows that are displayed when the display mode is set to Vectors.
Disparity scale: The scale of disparity values for display in the Cinema window. Note that this does not affect the actual disparity map, only how it is displayed.
Transparency %: The amount of transparency used to overlay the disparity map for display.
Show ROI: Display the region of interest in the Cinema window.
Show occlusions: When this option is enabled, occluded pixels will be specially marked when displaying the disparity map. Pixels can be occluded if the corresponding point in the other eye is hidden from view because it is covered by a closer object in the scene or is outside the camera frustum and therefore not visible.
|Move Closer Clip||Shift+M|
|Display Alignment Error||Ctrl+5|