PFTrack Documentation: Node Reference
The Z-Depth Solver Node can be used to estimate the distance of every pixel in an image from the camera, producing a grey-scale depth map image encoding z-depth. Depth maps can be calculated from a single tracked camera that is undergoing translation and (optionally) rotation, and viewing a static scene.
The algorithm that is used to estimate depth requires finding correspondences between pixels in one frame and pixels in another. Because of this, the z-depth algorithm will function best when there is little variation in the illumination or shading in a scene when viewed from different camera positions. Shiny surfaces that reflect light in a specular way are less likely to be matched accurately, as are scenes viewed with different camera exposure levels or colour balances.
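To illustrate why exposure changes hurt correspondence matching, the following sketch compares a simple sum-of-squared-differences matching cost (a common textbook measure, not necessarily the one PFTrack uses) on identical patches with and without a brightness shift:

```python
import numpy as np

def ssd(a, b):
    # Sum of squared differences: a simple pixel correspondence cost.
    # Lower is better; zero means a perfect match.
    return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

patch = np.array([[10, 20], [30, 40]], dtype=np.float64)
same = patch.copy()
brighter = patch + 50  # the same surface viewed with a different exposure

print(ssd(patch, same))      # 0.0: a perfect match
print(ssd(patch, brighter))  # 10000.0: the exposure change breaks the match
```

Even though both patches show the same surface, the plain intensity difference treats the brighter one as a poor match, which is why consistent exposure and colour balance across frames helps the solver.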
Masks can be used to specify areas that should be excluded from the depth map, or define object boundaries across which no smoothing will be performed. Using masks to specify object boundaries can often improve the quality of depth maps.
As z-depth is calculated, it is written to disk in a floating-point file format. Typically, for a 2048x1556 image, these files occupy about 13 MB of disk space for each frame.
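The quoted figure is consistent with storing one 32-bit float per pixel (an assumption about the file layout that ignores any header or compression):

```python
width, height = 2048, 1556
bytes_per_pixel = 4  # one 32-bit float depth value per pixel (assumption)
size_bytes = width * height * bytes_per_pixel
print(round(size_bytes / 1e6, 2))  # 12.75, i.e. roughly 13 MB per frame
```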
Calculating z-depth from a tracked camera requires that the camera is undergoing some translation through the scene. Cameras that are stationary or only rotate around their centre cannot be used to estimate z-depth, for the same reasons why 3D tracker positions cannot be calculated from a rotation-only camera. When calculating z-depth from a single camera, the depth of moving objects cannot be accurately estimated because the 3D position of the object in one frame is only ever observed at one instant in time.
When calculating z-depth from a moving camera, the most important parameter to set correctly is the Lookahead value. This specifies the number of frames to look before and after the current frame to find a camera position from which to estimate a pixel depth using triangulation. If there is not sufficient translation between the current frame and the previous or next frames, the estimates of depth will be unreliable. However, increasing the lookahead value too much will mean that some parts of the scene visible from the current frame are no longer visible in the previous and next frames, which will again result in unreliable depth estimates.
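The sensitivity to camera translation can be seen from the textbook two-view triangulation formula for a laterally translating camera (illustrative only; PFTrack's solver is more sophisticated): the smaller the baseline between the two frames, the more a fixed pixel-matching error perturbs the recovered depth.

```python
def triangulated_depth(baseline_m, focal_px, disparity_px):
    # Textbook two-view triangulation: depth = baseline * focal / disparity.
    # This is a simplified sketch, not PFTrack's actual algorithm.
    return baseline_m * focal_px / disparity_px

# A 0.5 px matching error matters far more when the baseline is small:
for baseline in (0.05, 0.5):  # small vs large translation between frames
    d_true = triangulated_depth(baseline, 1000.0, baseline * 100.0)
    d_err = triangulated_depth(baseline, 1000.0, baseline * 100.0 + 0.5)
    print(baseline, d_true, round(d_err, 2))
```

With the small baseline the same half-pixel error shifts the recovered depth by roughly 9%, versus about 1% with the larger baseline, which is why a slowly moving camera benefits from a larger lookahead.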
For a camera that is moving slowly, it may be necessary to increase this lookahead value in order to ensure that the depth estimates are reliable. The diagram below shows the previous and next frames with a lookahead value of 2.
The Automatic Lookahead option can be enabled if required, and doing so will automatically estimate a suitable lookahead value for each frame.
The depth estimation algorithm can use optical flow hints to assist with estimating depth values for each pixel.
Masks can be used to exclude areas of the image from the depth map calculation, or to indicate boundaries between objects at different depths. The ordering of individual masks is important, because it specifies the relative depth ordering of the objects defined by each mask. Masks at the top of the mask list define objects that are farther away from the camera than masks at the bottom of the list.
When a mask is set to Exclude, pixels covered by the mask will be excluded from the depth map calculation. This is often useful when the shot contains a region of sky, or a moving object for which depth cannot be estimated (generally because only one camera is being used to estimate depth). The following example shows how an exclude mask is used to define the sky region in a frame (left), and the depth maps obtained with (middle) and without (right) the mask. Note that depth values in the sky region cannot be estimated accurately because the sky is a uniform blue colour and therefore the reliability of finding correct pixel correspondences is low.
When a mask is set to Boundary, pixels covered by the mask are assumed to belong to an object which exists at a significantly different depth from the pixels outside the mask. Boundary masks can prevent smoothing between areas of the image inside and outside the mask. The example below shows a boundary mask drawn along the edge of a tree trunk (left) and the depth maps obtained without the mask (middle) and with the mask (right). Note the improved depth estimated along the boundary of the masked object in the right-hand depth map.
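The effect of a boundary mask can be sketched in one dimension (hypothetical code, not PFTrack's algorithm): smoothing averages neighbouring depth values, except across pixels marked as a boundary.

```python
import numpy as np

def smooth_with_boundary(depth, boundary):
    # Average each depth value with its immediate neighbours, but never
    # across a marked boundary pixel. A 1D sketch of boundary-aware
    # smoothing; PFTrack's actual smoothing is not documented here.
    out = depth.astype(np.float64).copy()
    for i in range(len(depth)):
        vals = [depth[i]]
        if i > 0 and not boundary[i] and not boundary[i - 1]:
            vals.append(depth[i - 1])
        if i + 1 < len(depth) and not boundary[i] and not boundary[i + 1]:
            vals.append(depth[i + 1])
        out[i] = sum(vals) / len(vals)
    return out

row = np.array([1.0, 1.0, 1.0, 9.0, 9.0, 9.0])  # two objects at depths 1 and 9
no_mask = smooth_with_boundary(row, np.zeros(6, dtype=bool))
mask = np.array([False, False, False, True, False, False])
with_mask = smooth_with_boundary(row, mask)
print(no_mask[2])    # ~3.67: the far object's depth bleeds across the edge
print(with_mask[2])  # 1.0: the boundary stops the bleeding
```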
Name: The name of each active mask.
Colour: The mask overlay colour for each active mask. Double click in this column of the selected mask to change its overlay colour.
State: Exclude or Boundary. When set to Exclude, no depth estimate will be produced for pixels covered by the mask. When set to Boundary, no smoothing will be performed across the edge of the mask. Right-click in this column to change the behaviour of each mask.
Move Closer: Move the selected mask closer towards the camera. Note that the ordering of masks is important only if they overlap.
Move Away: Move the selected mask away from the camera. Note that the ordering of masks is important only if they overlap.
Frame range: The processing range for z-depth calculation. Options are Clip, to generate a depth map for each frame in the clip; From/To to generate a depth map for a specific range of frames; and Current to generate a depth map for the current frame only.
From: Set the From frame to the current frame when the Frame range is set to From/To. The frame number can also be adjusted in the edit box.
To: Set the To frame to the current frame when the frame range is set to From/To. The frame number can also be adjusted in the edit box.
Step: The number of frames to step when solving depth maps over a frame range of Clip or From/To. For example, setting this value to 1 will generate a depth map for every frame, whereas setting it to 4 will generate one depth map every 4 frames.
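The set of frames that receive a depth map for a given range and step can be sketched as:

```python
def frames_to_solve(start, end, step):
    # Frames that get a depth map when solving a From/To range with a
    # given Step value (a simple sketch of the documented behaviour).
    return list(range(start, end + 1, step))

print(frames_to_solve(1, 10, 1))  # every frame: [1, 2, ..., 10]
print(frames_to_solve(1, 10, 4))  # one depth map every 4 frames: [1, 5, 9]
```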
Layers: The initial number of depth layers that will be used to estimate depth maps. Note: this option is not available when using the Per-Pixel segmentation mode.
Lookahead: The number of frames to look ahead and behind of the current frame when estimating depth. For cameras that are moving slowly, this value can be increased to improve the quality of depth estimates.
Iterations: The number of iterations that the algorithm will run for. Increasing this value may improve the quality of depth map estimation for problematic scenes, but will also increase the time it takes to calculate each depth map. Note: this option is not available when using the Per-Pixel segmentation mode.
Smoothness %: The amount of smoothing to apply between adjacent areas in the image. Increasing this value will produce smoother depth maps, but potentially at the cost of decreased accuracy along the boundaries between objects at different depths. Masks can be used to define the boundaries of objects at different depths, as described in the Using Masks section.
Filter %: The amount of per-pixel smoothing to apply when generating the final depth map for each frame.
Reset: Reset the z-depth calculation parameters to their default values.
Edit ROI: Allow the region of interest (ROI) to be changed in the Cinema window. This specifies the region of the image in which z-depth values will be estimated.
Channels: The image channels to use when estimating depth.
Near plane: The distance of the near plane from the camera. Depth estimates will be generated between the near and far planes, so this value should be set appropriately before solving for any depth maps.
Far plane: The distance of the far plane from the camera. Depth estimates will be generated between the near and far planes, so this value should be set appropriately before solving for any depth maps.
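Since depth estimates are produced between the near and far planes, a grey-scale depth value can be related to a metric distance. The sketch below assumes a simple linear mapping from normalised grey values to depth; this is an illustrative assumption, not PFTrack's documented file encoding.

```python
def grey_to_depth(value, near, far):
    # Map a normalised grey-scale value in [0, 1] to a metric depth between
    # the near and far planes, assuming a linear encoding (an assumption
    # made for illustration only).
    return near + value * (far - near)

print(grey_to_depth(0.0, 1.0, 100.0))  # 1.0: the near plane
print(grey_to_depth(1.0, 1.0, 100.0))  # 100.0: the far plane
print(grey_to_depth(0.5, 1.0, 100.0))  # 50.5: halfway between the planes
```

This also shows why the planes should be set appropriately before solving: depth outside the near/far range cannot be represented.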
Segmentation: The way image pixels are segmented into groups when estimating depth. Options are Coarse, Normal, Fine and Per-Pixel. Fine will take longer to process than either a Normal or Coarse segmentation, but is likely to produce a more accurate result. Note that the amount of segmentation used can also affect how well the depth estimation algorithm is able to resolve depth estimates for regions of the image with little texture. The Per-Pixel option will generate a depth value at every pixel instead of using image segmentation.
Note: the Per-Pixel segmentation mode will produce a much finer resolution depth-map than the other modes, and employs OpenCL GPU accelerated processing. Because of the nature of the per-pixel algorithm, depth values may not be estimated for all pixels. In these cases, holes can be filled automatically for each frame by enabling the Automatic Hole Filling option.
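A minimal sketch of what hole filling does, in one dimension (hypothetical code; PFTrack's actual algorithm is not documented here): missing depth values are filled from the surrounding valid estimates.

```python
import numpy as np

def fill_holes_1d(depth):
    # Replace missing depth values (NaN) by linearly interpolating from the
    # nearest valid neighbours. A toy illustration of automatic hole filling.
    out = depth.copy()
    valid = ~np.isnan(out)
    idx = np.arange(len(out))
    out[~valid] = np.interp(idx[~valid], idx[valid], out[valid])
    return out

row = np.array([2.0, np.nan, np.nan, 5.0])
print(fill_holes_1d(row))  # [2. 3. 4. 5.]
```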
Auto lookahead: When enabled, the lookahead value will be estimated automatically for each frame.
Hole filling: When enabled, pixels that do not have a depth value will be filled automatically using information from surrounding areas. Note: this option is only available when using the Per-Pixel segmentation mode.
Compensate for illumination: When enabled, the depth map algorithm will attempt to compensate for moderate illumination differences between frames. Note: this option is not available when using the Per-Pixel segmentation mode, and in this case illumination will always be compensated for automatically.
Use optical flow hints: When enabled, optical flow will be used to provide a hint to the depth map solver.
Clear: Clear the depth map from the current frame and delete the data file from disk.
Clear All: Clear depth maps from all frames and delete the data files from disk.
Solve: Solve for new depth maps over the frames specified by Frame range. As a depth map for each frame is constructed, it will be displayed in the Cinema and Viewer windows. This operation can be run in the background by holding the Shift key when clicking on the button.
Display mode: The method that will be used to display depth maps in Viewer windows. The options are Triangle mesh, which will render a triangular mesh containing every pixel in the image, and Point Cloud which will render a point for every pixel in the image.
Display proxy: The resolution used to display triangular meshes and point clouds in Viewer windows. For high resolution images, selecting Half, Third or Quarter will increase rendering performance.
Grey-scale gamma: The amount of gamma correction that is used to display a grey-scale depth map in the Cinema window.
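Display gamma is the usual power-law correction; for example (a standalone sketch, not PFTrack code), raising the gamma lifts the darker mid-tones so near/far detail is easier to see:

```python
import numpy as np

def apply_gamma(grey, gamma):
    # Standard display gamma: out = in ** (1 / gamma), for values in [0, 1].
    return np.power(grey, 1.0 / gamma)

print(apply_gamma(np.array([0.25]), 2.0))  # [0.5]: dark values are brightened
```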
Transparency %: The amount of transparency that is used to display a grey-scale depth map in the Cinema window.
Show ground: When enabled, the ground-plane will be displayed.
Show horizon: When enabled, the horizon line will be displayed.
Show trackers: When enabled, 3D tracking points will be displayed in the Viewer windows.
Show geometry: When enabled, geometric mesh objects will be displayed in the Viewer windows.
Show depth map: When enabled, a grey-scale depth map will be displayed in the Cinema window if one is available for the current frame.
Show depth mesh: When enabled, a triangular mesh or point cloud will be displayed in the Viewer windows.
Show frustum: When enabled, the camera frustum for the current near/far planes will be displayed in the Viewer windows.
Project depth mesh: When this option is enabled and the Cinema is viewing a frame that does not contain a depth map, the triangular mesh from the nearest frame will be projected into the Cinema window at the current camera position.
Keyboard shortcuts can be customised in the Preferences. Shortcuts are available for the following actions: Show Depth Map, Show Depth Mesh and Project Depth Mesh.