PFTrack Documentation Node Reference  

Camera Solver

Overview  |  Preparing the shot  |  Automatic lens distortion correction  |  Solving multiple cameras  |  After the camera solve  |  
Influencing the camera solve  |  Tracking difficult shots  |  Auto-undistort  |  Solver controls  |  Display controls  |  
Camera controls  |  Auto-Undistort controls  |  Trackers controls  |  Constraint controls  |  The Errors graph  |  
The Coverage Panel  |  The Solver log

Overview

UI

The Camera Solver node can be used to estimate camera motion using a set of feature tracks generated by one or more Auto Match, Auto Track or User Track nodes.

Camera Solver Tree

The Camera Solver node can have multiple inputs and multiple outputs, and it can solve for more than one camera at the same time provided the set of trackers have been tracked in each input clip. Helper frames can also be used to assist with estimating tracker positions.

Constraints on tracker positions can be defined, ensuring a set of trackers exists at the same point in 3D space, or lies on a flat plane or a straight line, and lens distortion can be corrected for automatically.

The camera solving process can be influenced by the user in many ways, such as specifying a pair of initial frames to start from, specifying approximate feature distances from the camera, or even providing a hint to how the camera is moving. Error graphs are available to assess which features are not being solved accurately.

Note that automatic estimation of lens distortion coefficients requires a fairly large set of trackers, distributed over as much of the image area as possible. Without this, the lens distortion coefficient might not be estimated accurately.

The Camera Solver works by examining the motion paths of tracking points and trying to work out suitable camera parameters (such as focal length) and a motion transformation that can explain the paths. Because trackers have so much influence over the Camera Solver, it is very important to use a set of good quality tracker points.

Note that the Camera Solver is only able to function using the tracking points provided to it. If those tracking points are not in the correct position, or do not provide enough parallax information to resolve the camera motion, the results generated by the solver may not be what you expect.

This is especially true in situations where there is a low amount of parallax, or the trackers have not been well distributed over the scene. In these cases, there can sometimes be several different camera motions that "fit" the tracker positions, and the Camera Solver is unable to distinguish one from another. When this happens, further information (such as approximate tracker distances) must be provided in order to reduce the ambiguity in the motion.

Preparing the shot

The Camera Solver is able to function when four or more trackers are tracked between adjacent frames, although using more than four trackers will increase the accuracy of the solution, and will reduce the amount of error caused by noise in the tracker paths. Each tracking point should be tracked over multiple frames, and tracking points that exist over a long period are often those that provide most benefit to the solver.

For simple shots, the Auto Match or Auto Track nodes can be used to generate a set of tracking points automatically.

Alternatively, a User Track node can be used to manually create tracking points. This is often necessary in situations where there is complex camera motion, or the image data is such that automatically generated trackers do not exist for very many frames.

Parallax

In order to estimate the camera position accurately, tracking points should be placed on both foreground and background image features. This means that the parallax motion of the trackers can be used to estimate both the position of the camera and the distance from the camera to the tracking point.

There should also be an approximately even number of foreground and background trackers for the best results: Having too many trackers in one part of the scene may overwhelm the solver, decreasing the accuracy of the camera path.
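The value of parallax comes from a simple geometric fact: for a sideways camera move, the image shift of a feature is inversely proportional to its depth, so near features move more than far ones. A minimal sketch of this relationship (illustrative Python with hypothetical values, not part of PFTrack):

```python
def image_shift(focal_px, translation, depth):
    """Approximate image shift (in pixels) of a feature at `depth`
    when a pinhole camera with focal length `focal_px` pixels
    translates sideways by `translation` (same units as depth)."""
    return focal_px * translation / depth

f = 1500.0   # focal length in pixels (hypothetical)
t = 0.1      # 10 cm sideways camera move

near = image_shift(f, t, depth=2.0)    # foreground feature, 2 m away
far = image_shift(f, t, depth=50.0)    # background feature, 50 m away
# The foreground feature shifts far more than the background one; the
# solver uses this difference (parallax) to separate camera motion
# from feature depth.
```

This is why a shot with trackers at only one depth gives the solver little to work with: all the shifts are nearly equal, and many camera motions explain them equally well.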

Tripod-mounted shots

When tracking shots from cameras mounted on a tripod, it is often tempting to assume that the virtual camera used by PFTrack is not translating at all, and there is no parallax in the shot. In the real world, however, this is rarely the case, since the tripod mount point on the camera will not correspond exactly to its optical centre.

In these situations, it is often useful to first assess the amount of parallax in a shot, which can be done easily using the User Track node as follows:

1. Place a tracker on a foreground feature, and track it through as much of the clip as possible.
2. Whilst keeping the tracking point selected, enable the 'Centre View' option. This will centre the Cinema viewport on the tracking point.
3. Using the left mouse button, scrub left and right through frames in the [Scrub Bar](pftrack_movie_controls/md#scrub-bar).

By keeping the foreground tracker in a fixed location, you should be able to see how much parallax is in the shot by comparing it against the background motion.

Using large numbers of trackers

Sometimes, a better quality solve can be generated from six or more manually placed tracking points than from a set of 50 or so automatically tracked points.

The time it takes to solve for camera motion will depend on both the number of frames that require solving, and the number of trackers visible in each frame.

If a large number of trackers is required for other purposes further down the tree, it is often better to create a second tracker and camera solver node pair in which the 3D positions of the additional trackers can be generated whilst keeping the existing camera motion fixed.

Setting initial frames

The Camera Solver works by first constructing a partial solution between the initial frames. In order to get a good overall camera solve, it is important that this initial solution is fairly accurate, so the first step in tracking a problematic shot is to ensure a good initial solution can be obtained. This will then be extended outwards, adding more frames until the entire camera path is complete.

The solver can be halted once the initial solution is produced, making it easier to see if the starting point for the whole solution is accurate. Tools are provided to manually extend a solution outwards by one frame at a time, meaning small adjustments can be made manually to either the camera path (via the curve-editor) or the trackers (by adjusting their distance from the camera) whilst the solution is being completed.

For clips shot with a zoom lens, a good initial solution is more likely to be found if the initial frames are placed in part of the shot which is not zooming, if at all possible.

Setting camera parameters

The most important parameter that can be set before motion is solved is the camera focal length.

Note that entering a focal length value in a physical unit such as millimetres should only be done when the camera Sensor size is known. These two values are used together to calculate a field of view for the camera. If the sensor size is not accurate, the field of view will not be correct, and this may adversely affect the accuracy of the camera solve.

In order to change the sensor size of the camera, the value must be updated in the Clip Input node at the top of the tree where the clip parameters are defined.
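The relationship between focal length, sensor size and field of view described above is the standard pinhole formula. A short sketch (plain Python; the example lens and sensor values are hypothetical):

```python
import math

def field_of_view_deg(focal_mm, sensor_mm):
    """Field of view in degrees for a pinhole camera, given the focal
    length and the matching sensor dimension in millimetres."""
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

# A 24 mm lens on a 36 mm-wide full-frame sensor:
fov = field_of_view_deg(24.0, 36.0)     # ~73.7 degrees horizontal

# If the sensor width were wrongly set to 24 mm, the same focal length
# would yield ~53.1 degrees -- a very different camera, which is why an
# inaccurate sensor size degrades the solve.
fov_wrong = field_of_view_deg(24.0, 24.0)
```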

Lens distortion correction

In order to get the most accurate estimate of camera motion and tracker positions, it is important to account for any lens distortion present in the images. This can be done in several ways:

- Pre-correcting lens distortion in the clip using the Clip Input node

- Shooting calibration grids and building a distortion preset using the Movie Camera Preset editor

- Using Automatic distortion correction and specifying approximate bounds on the distortion coefficient
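A common low-order model for radial lens distortion scales each point's distance from the distortion centre by a factor that grows with radius, controlled by a single coefficient. The exact model PFTrack uses internally is not documented here; this is a conceptual sketch only:

```python
def distort(x, y, k):
    """Apply one-coefficient radial distortion to normalised image
    coordinates (x, y) measured from the distortion centre.
    `k` plays the role of the low-order distortion coefficient."""
    r2 = x * x + y * y
    scale = 1.0 + k * r2
    return x * scale, y * scale

# With k = 0.2, a point near the image corner moves noticeably,
# while a point at the distortion centre does not move at all:
corner = distort(0.8, 0.6, 0.2)   # pushed outwards to (0.96, 0.72)
centre = distort(0.0, 0.0, 0.2)   # unchanged: (0.0, 0.0)
```

Because the displacement grows with radius, trackers near the image edges carry most of the information about the coefficient, which is why the automatic estimator needs trackers spread over as much of the frame as possible.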

Solving multiple cameras

If additional Clip Input nodes are created and attached to the Camera Solver as secondary inputs, the cameras in those clips can be solved into the same scene as the primary camera. Secondary clips will be solved after the main camera path and tracker positions have been generated.

To do this, a User Track node must be created and attached to all input clips. An example tree for this situation is available in the Example Tree Layouts section.

Using Helper frames

If a Photo Input or Clip Input node is connected as the second input, the set of photos or images can be used as 'Helper Frames' to assist with solving the primary camera, provided that a User Track node has been used to position trackers in both the primary clip and the photos.

At least four trackers must be shared between clips before a helper frame can assist the camera solve, although using six or more will often provide a better solve.

For helper frames to be most effective, they should be taken from a similar position to the primary camera, showing a similar view of the scene but from a slightly different perspective.

This can be helpful in situations where the primary camera does not contain enough parallax to effectively solve for 3D feature positions, for example, if it was shot on a tripod. An example tree for this situation is available in the Example Tree Layouts section.

Tracker Constraints

Constraint groups can be created to restrict the positioning of trackers to exist at a single point in 3D space, or lie on a plane or straight line. The constraint table lists all available constraint groups, along with the constraint type (Point, Plane or Line) and the number of trackers participating in the constraint.

After the camera solve

After the camera motion has been solved, the 3D path and tracker positions can be viewed in the Viewer windows.

Trackers are also shown in the Cinema window coloured according to how well their projected position matches their 2D tracker locations.

Trackers that match their 2D location well (with a projection error less than 1 pixel) are coloured green, trackers with projection errors less than 2 pixels are coloured orange, and trackers with projection errors larger than 2 pixels are coloured red.

An error line is also drawn connecting the projected tracker position with its 2D tracker location. The difference between the projected tracker
position and the 2D location is referred to as the Residual Error.

The tracker list, displayed by clicking the Trackers tab, will also display the residual error and the distance of each tracker from the camera in the current frame.

Editing trackers and refining the solve

The Errors tab will display a graph showing the projection errors for each tracker, along with the average error for all trackers in white. Trackers can also be selected from here by clicking in the graph with the left mouse button.

The Coverage tab will also display keyframe and projection information for each tracker individually.

Trackers with large residual errors should be examined, and deactivated or edited up-stream if they correspond to points that are moving independently from the camera.

After edits, the camera motion and tracking points can be refined to improve the solution.

If all tracking points look good, but the camera motion is still incorrect after refinements, this may be an indication that other data such as the focal length or distortion parameters are wrong.

Influencing the camera solve

There are several steps that can be taken to influence the camera solve and refinement process, including:

1. Removing bad trackers

If trackers have not been tracked accurately, or are still present in frames where they should not be visible, this can adversely affect the accuracy of the camera solve. These trackers should be corrected or removed entirely before solving.

2. Enter a known camera focal length

In order to enter a focal length measured in a physical unit such as millimeters, the camera sensor/film back size must be set correctly. Entering a known focal length will make it easier to solve for camera motion, and the solver will also run at a faster speed.

3. Changing the initial frames

The initial frames should be set so that there is a noticeable amount of parallax in the tracker positions between the initial frames. There also need to be at least four trackers common to both initial frames for the solver to run, although it is recommended to use six or more if possible, as this will greatly increase the likelihood of getting a good initial solve.

Either one or two initial frames can be specified manually. If only one is specified, the other will be estimated automatically.

For shots with a variable focal length, it is also recommended that the initial frames be placed in a region where the camera focal length is not changing too much, although this is not a strict requirement.

4. Enter some tracker distances

Entering known (or approximately known) distances of trackers from the camera can be used to help obtain a good initial solution, or to correct situations where foreground and background features are confused.

To correct this, enter two or more tracker distances, one in the background and one in the foreground. Fairly large uncertainty values can be used if the actual tracker distances are not known.

5. Setting hard/soft tracker constraints

If a tracker has been generated automatically (by an Auto Track or Auto Match node), and its tracker path is seen to be very accurate compared to other trackers, it can be enabled as a Hard constraint.

This will mean the camera solver tries harder to ensure that the 3D tracker position matches the tracker path exactly.

Tracking points that are generated manually (by a User Track node) are assumed to be accurate and are set to Hard constraint by default. However, switching these to a Soft constraint can sometimes improve the solve for difficult shots.

6. Increasing tracker weights

Increasing tracker weight values can be used to help the solver focus more on certain trackers, also reducing their projection error (possibly at the expense of increasing error elsewhere, however).
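The effect of tracker weights can be pictured as a weighted least-squares cost: each tracker's squared residual is multiplied by its weight, so raising a weight makes that tracker's error count for more during refinement. This is a conceptual sketch of the idea, not PFTrack's actual solver code:

```python
def weighted_cost(residuals, weights):
    """Weighted sum-of-squares cost over per-tracker residual errors
    (in pixels). The default weight for every tracker is 1.0."""
    return sum(w * r * r for r, w in zip(residuals, weights))

residuals = [0.4, 1.8, 0.6]   # pixels, one value per tracker

default = weighted_cost(residuals, [1.0, 1.0, 1.0])
# Tripling the weight of the second tracker makes its error dominate
# the cost, so refinement will work hardest to reduce it -- possibly
# at the expense of the other trackers:
boosted = weighted_cost(residuals, [1.0, 3.0, 1.0])
```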

7. Creating a hint for camera motion

If suitable metadata is available in the source media that describes the camera motion (or focal length in the case of zoom shots), it can be used as a hint to the camera solver.

Alternatively, hints can be generated manually by placing an Edit Camera node up-stream from the Camera Solver node, and keyframing an approximate camera path (including camera translation and rotation).

The path does not need to be keyframed at every frame, but should match the overall camera motion fairly well. It is often sufficient to only keyframe every 10 or 20 frames or so, assuming the camera is not moving too much in-between.

A Test Object node can also be created to place objects to help animating the camera hint.
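Keyframing only every 10 or 20 frames works because the in-between positions can be interpolated. A minimal sketch of that idea, linearly interpolating camera translation between two keyframes (illustrative Python, not PFTrack code):

```python
def lerp_translation(key_a, key_b, frame):
    """Linearly interpolate camera translation between two keyframes.
    Each keyframe is (frame_number, (x, y, z))."""
    fa, ta = key_a
    fb, tb = key_b
    t = (frame - fa) / (fb - fa)
    return tuple(a + t * (b - a) for a, b in zip(ta, tb))

# Keyframes at frames 0 and 20; the hint at frame 5 lies a quarter of
# the way along the straight line between them:
pos = lerp_translation((0, (0.0, 0.0, 0.0)), (20, (4.0, 0.0, 8.0)), 5)
# pos == (1.0, 0.0, 2.0)
```

Rotation would be interpolated similarly (for example, by spherical interpolation of quaternions), which is why the hint only needs to match the overall motion rather than every frame.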

8. Extending motion outwards from an initial or partial solution

If an initial solution has been generated and looks good, it can be extended outwards to complete the shot manually using the 'Extend' buttons. This can also help identify parts of the shot which cause problems for the overall camera solve.

You can also extend motion outwards from a partial camera solve. For example, it may be possible to get a good camera solve from the first half of a shot by placing the end frame in the middle of a shot. Once this is done, reset the end frame to its original position and extend the solution outwards by clicking the 'Extend' button.

9. Creating constraint groups

Constraint groups can be used to help ensure a set of trackers exists at a single point in 3D space, or all lie on a flat plane or in a straight line.

10. Specifying a constraint on camera motion

Camera motion can be constrained so the camera is not translating, or is only moving in a certain path such as along a straight line or a flat plane.

11. Trimming trackers and refining the solution

Trackers can be disabled or adjusted after the solve, before a refinement pass is applied by clicking the 'Refine All' button.

The Error Graph can be helpful to identify trackers which are not solved well, or are solved well in certain frames but not others.

By removing the tracking points in the bad frames and then refining the entire solution, the overall error can often be reduced significantly.

Tracking difficult shots

Here are several recommendations that may help when attempting to track difficult shots:

1. Make sure the initial keyframes are in a good position. When estimating the position of the keyframes automatically, check their position and make adjustments if necessary. The initial keyframes should be placed in a region where there is a significant amount of parallax in the tracker motions, so avoid areas where the camera is not moving very much or is only rotating.

2. Make sure the trackers are well distributed over the image frame, and in both the foreground and background of the shot, and make sure there are not too many poor quality Auto Tracks.

3. Make sure there are not too many trackers positioned on independently moving objects. Note that sometimes it can be quicker to simply disable the offending trackers instead of drawing a mask around the object and re-tracking everything.

4. If the shot contains a significant amount of lens distortion, make sure it is corrected before solving, or at least set bounds on the low order coefficient that are approximately correct.

5. Make sure the camera focal length looks sensible, and enter an approximate value if necessary.

6. Try solving for an initial solution only to begin with, or only solving part of the whole shot. Once the initial or partial solution looks good, the camera focal length can be set to Known to prevent it being changed, and the solution can either be re-solved with known initial frames or extended outwards to fill the rest of the shot by clicking the 'Extend' button.

Auto-undistort

When the camera is set to use automatic lens distortion correction in the Clip Input node, you can indicate roughly how much distortion is present in the clip using the Range option and enable automatic undistort using the Estimate check-box.

The Range can be set to either Minimal, Moderate, Significant or Custom, where Minimal corresponds to a very small amount of distortion, and Significant corresponds to a much wider-angle lens.

Lower and upper bounds on the distortion coefficient are provided next to the Range control, and these can be edited manually when in Custom mode.

After solving the camera, the image in the Cinema window will automatically adjust in size to represent the full undistorted image. If desired, the size of the image can be fixed to match the original image by enabling the Crop to input image size option.

After solving, the actual distortion value calculated for the current frame is displayed to the right:

Auto Undistort

Handy Tip: if you aren't sure what distortion range to use, try guessing first, solving your camera, and then looking at the calculated distortion value. If it's at the maximum of your range, this means PFTrack tried to increase it further but couldn't.

In this case, adjust the Range control upwards by one setting to increase the maximum allowable value, solve again, and see if that gives better results.

For example, if you set the Range to Moderate (0.1 to 0.2) and then solve your shot and the final distortion estimate is 0.2, try changing the Range to Significant and solving again to see if this gives a better result. You can always undo afterwards if the result isn't any better.
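The check described in the tip above is simple enough to state precisely: if the solved coefficient sits at (or within a small tolerance of) the top of the allowed range, the range is probably too tight. A small sketch of that logic (illustrative Python; the tolerance value is an assumption):

```python
def at_range_maximum(estimate, range_max, tol=1e-3):
    """Return True if a solved distortion estimate is pinned at the
    top of its allowed range, suggesting the range should be widened
    and the shot re-solved."""
    return estimate >= range_max - tol

# Moderate range tops out at roughly 0.2 (per the example above):
flagged = at_range_maximum(0.2, 0.2)    # True: widen range, re-solve
inside = at_range_maximum(0.15, 0.2)    # False: estimate sits inside
```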

Solver controls

Current clip: The clip that is being displayed in the Cinema window. The Camera control tab will display information for the camera associated with the current clip.

Current group: The tracker group that will be used to solve for camera motion.

Start/end frames: Start and end frames to use for the camera solve.

S: Store the current frame number as either the start or end frame.

Initial frames: The pair of frames to use to construct the initial solution.

S: Store the current frame number as either the first or second initial frame. Either one or two initial keyframes can be specified manually. If only one is specified, the other is estimated automatically.

Set initial frames automatically: When enabled, the initial frames will be set automatically by searching for a span of frames that contains enough trackers to solve for camera motion.

Solve for initial solution only: When enabled, the camera solve will stop once the initial solution has been generated. The initial solution consists of the camera motion and tracker positions for all frames between the two initial frames.

Use tracker z-depth when available: This option is available if the input clip has a z-channel available, containing z-depth values at each pixel. Features tracked throughout such a clip will have a z-depth value associated with each track position, and the Camera Solver node can make use of this information to increase the accuracy and robustness of the camera solve. See the documentation for the Attach Z-Channel node for further information.

Preview: When enabled, a preview of the solution at the initial camera frames will be generated. This can be used when adjusting the initial frames to determine which values perform best.

Show matches: When enabled, only trackers that are present in both initial frames will be displayed in the Cinema window. This can be used when adjusting the initial frames to identify situations where there are enough trackers available to generate an initial solution.

Exhaustive: When enabled, the solver will spend more time adjusting the overall solution each time a new frame is added. This can result in better quality solves, but for long clips can also increase processing time significantly.

Solve All: Solve for camera motion and tracker positions. The camera solve can be run in the background by holding the Shift key whilst clicking on the Solve All button.

Solve Trackers: Solve for tracker positions only. This can be used in situations where additional trackers have been created up-stream, to estimate new tracker positions without re-solving for camera motion.

Refine All: Adjust tracker positions and camera motion to better match the tracker paths. Refinement can be run multiple times, and a longer refinement can run in the background by holding the Shift key whilst clicking on the button.

Refine Camera: Refine the current camera only, leaving trackers in their current positions. This can be useful to bring the camera into alignment with trackers after their distances are manually adjusted using the Push/Pull tool whilst holding the Shift key to adjust the distance of a tracker from the camera.

Unsolve: Un-solve the camera at the current frame. This will remove the translation and rotation keyframe, causing the camera path to be linearly interpolated from nearby keyframes. After a frame has been un-solved, it can be solved again using the Extend buttons described below.

Extend: Solve for any frames between the start/end frame that are currently un-solved. This can be used to extend a partial solution outwards into more camera frames, or when footage has been replaced with a longer clip. After the solution has been extended, it is often helpful to click the Solve Trackers and Refine All buttons to make sure as many trackers as possible are solved and reduce the overall solution error.

Extend <: Extend the current solution by one frame towards the start of the clip. After the solution has been extended, it is often helpful to click the Solve Trackers and Refine All buttons to make sure as many trackers as possible are solved and reduce the overall solution error.

Extend >: Extend the current solution by one frame towards the end of the clip. After the solution has been extended, it is often helpful to click the Solve Trackers and Refine All buttons to make sure as many trackers as possible are solved and reduce the overall solution error.

Display controls

Show Ground: When enabled, the ground plane will be displayed.

Show Horizon: When enabled, the horizon line will be displayed.

Show Geometry: When enabled, the geometric objects from up-stream will be displayed.

Show Projections: When enabled, projections of trackers that have been solved in 3D space, but are not tracked in the current frame will be displayed as white dots in the Cinema window.

Show Names: When enabled, selected tracker names will be displayed.

Show Info: When enabled, selected trackers will have position and residual error information displayed.

Marquee: Allow a tracker selection marquee to be drawn in the Cinema window or in a Viewer window. Holding the Ctrl key whilst drawing will ensure that previous selections are kept. Holding the Shift key will allow a lasso selection to be used instead of a rectangle.

Centre View: When enabled, the Cinema window will be panned so the projection of the first selected tracker is fixed at the centre of the window.

Orientation controls

Once the camera is solved, it can quickly be oriented using these controls. Alternatively, the Orient Camera node can be used, which provides a richer toolset.

Set Origin: Translate the entire scene so the ground-plane origin is at the average 3D position of all selected trackers.

Set Axis: When two trackers are selected, re-orient the ground-plane so an axis direction matches the tracker positions.

Set Plane: When three or more trackers are selected, fit an axis plane to the tracker positions.

Frame controls

Frame Menu: This menu can be used to speed up the solving process for very long shots, by only solving for a sub-set of the total number of frames at first and then adding the missing frames once the partial solution is complete. Options are Every Frame to solve for every frame in the clip (the default), Every 2 Frames, Every 5 Frames and Every 10 Frames to solve for one frame every 2, 5 or 10 frames in the clip.

Skip Frame: Skip the current frame during the camera solve. Once skipped, the button label will change to Un-Skip Frame to include the frame in the solve. When a frame is skipped, the camera motion will not be updated, and will instead be interpolated from nearby frames. Skipping frames can be useful when one frame is missing or corrupted due to some sort of image degradation.

Camera controls

The Camera tab contains information about the camera associated with the current clip. If more than one input clip is present, the current camera (and clip) can be changed using the Current clip menu option.

Focal length: Displays the camera focal length at the current frame. Focal length can be set as Known, Unknown or Initialised. If the focal length of the camera is known beforehand, entering the value here can often improve both the speed and accuracy of the camera solver. Setting focal length to Initialised will allow a minimum and maximum value to be specified in the focal range edit boxes. For cameras with a constant focal length, setting focal length to Initialised will also allow an initial value to be entered into the focal length edit box. Note that entering a focal length measured in any unit other than Pixels requires that the camera sensor width and height are set correctly.

R: If an input camera focal length was available up-stream, clicking this button will reset the current focal length to this value. This can be useful in situations where the input focal length is used as a hint to the camera solver.

Focal range: When focal length is set to Initialised, these edit boxes define the minimum and maximum allowable values of focal length.

Variable focal: Allow the camera focal length to vary throughout the clip. If focal length is set to Initialised, the minimum and maximum values over which focal length can vary may be entered into the Focal range edit boxes.

Field of view: The horizontal and vertical field of view at the current frame, measured in Degrees.

Sensor size: The horizontal and vertical sensor size. The sensor size can be changed in the Clip Input node.

Pixel aspect: The current pixel aspect ratio, which can be changed in the Clip Input node.

Helper: This checkbox indicates that a set of photos attached to the secondary input will be used as helpers to assist in the main camera solve.

Motion hints and constraints

The Translation and Rotation menus can be used to specify various hints and constraints for how the camera is moving. The first menu is used to control the amount of smoothing that is applied to either the translation or rotation components of motion. Smoothing options are None, Low, Medium and High.

The second menu can be used to specify either a constraint on the motion (for example, 'No Translation'), or indicate that a hint should be used. Translation constraints are as follows:

- No Translation: the camera is not translating at all, and there is no parallax in the shot. Note that this option is rarely used, since it refers to the position of the camera's optical centre. Even when mounted on a tripod, the camera will still be translating slightly, since the centre of rotation (the tripod mount point) will not correspond exactly to the camera's optical centre.

- Unknown: A freely moving camera

- Off-Centre: A camera that is mounted on a tripod, rotating around a point slightly offset from the true optical centre

- Small: A camera that translates a small distance compared to the distance from the camera to the tracking points

- Linear: Restrict camera motion to a straight line

- Planar: Restrict camera motion to a flat plane

- Metadata hint: When available, use metadata in the source media to provide a hint to camera translation

- Upstream hint: Use the upstream camera translation as a hint (for example, generated manually using the Edit Camera node).

Rotation constraints are similar, but are limited to No Rotation, Unknown, Metadata hint and Upstream hint.

Lock roll: Lock the camera roll (i.e. rotation around the Z axis). This can often increase the quality of the camera solve in situations where the camera is only rotating around the X (pitch) and Y (yaw) axes.

Focal smooth: Specify how smooth the camera focal length changes are for cameras with a variable focal length. Options are None, Low, Medium and High.

Constant focal length between initial frames: In situations where the camera focal length is varying in only part of the shot, a better quality solution can often be obtained if the initial frames are positioned such that the focal length is constant between those frames. If this can be done, enabling this option will mean the camera solver is more likely to find a good quality initial solution.

Auto-Undistort controls

When the camera is set to use automatic lens distortion correction in the Clip Input node, these controls can be used to indicate roughly how much distortion is present in the clip.

Range: This menu can be used to define the distortion range: Minimal, Moderate, Significant or Custom, where Minimal corresponds to a very small amount of distortion, and Significant corresponds to a much wider-angle lens. The lower and upper bounds on the distortion coefficient are provided next to the Range control, and these can be edited manually when in Custom mode. The actual distortion value found during the solve is displayed to the right next to the Estimate control.

Estimate: Enabling this option means the camera solver will attempt to estimate a suitable lens distortion coefficient during the camera solve.

Account for lens breathing: When enabled, the camera solver will attempt to automatically correct for additional motion in the image that occurs when focus changes also affect the camera focal length.

Trackers controls

The trackers list contains information about all trackers passed into the camera solve node.

Columns

Name: The tracker name.

Active: Indicates whether the tracker is active in the solve or not.

Hard: Indicates whether the tracker path should be considered as a hard or soft constraint on camera motion. By default, automatically generated trackers (from an Auto Track or Auto Match node) are defined as soft constraints, and manually placed trackers (from a User Track node) are defined as hard constraints. The camera solver assumes that the path provided by a tracker marked as a hard constraint does not contain errors. Trackers that are marked as soft constraints may have small errors in their tracker paths without affecting the overall camera motion.

Weight: The weight given to a particular tracker in the solution. The default value is 1.0. A higher value will mean the camera solver expends more effort to match a 3D tracker position to its path, possibly at the cost of decreased accuracy elsewhere. Changing the tracker weight can often help to lock a solution down onto a particular tracker.

Residual: The residual projection error (measured in pixels) for the tracker in the current frame. The projection error is the difference between the tracker path position and the projection of the 3D tracker point onto the camera plane. Ideally, the projection error should be close to zero for each tracker.

Distance: The distance from the current camera frame to the tracker's 3D position in space.

Frame: The frame number in which the tracker distance has been initialised.

Initialised: The distance to which the tracker has been initialised in the frame.

Uncertainty: The uncertainty in the initialised tracker distance.

Controls

Min/max tracker distance: For trackers that do not have an initial distance and uncertainty, a minimum and maximum distance can be entered here to assist the camera solver. This can be useful in situations where the camera is viewing a scene that is bounded (for example, by walls) and an approximate distance from the camera to the boundary is known.

All/None: Select all or none of the trackers from the list.

Activate: Activate all selected trackers in the camera solver. Active trackers will contribute to the solution. Trackers can also be activated individually by ticking the Active column in the tracker list.

Deactivate: Deactivate all selected trackers in the camera solver. Inactive trackers will not contribute to the solution. Trackers can be deactivated individually by un-ticking the Active column in the tracker list.

Weights: Display a popup window allowing the weight for all selected trackers to be set at the same time.

Distances: Display a popup window allowing the initialised distance and uncertainty for all selected trackers to be set at the same time.

Hard: Change all selected trackers to hard constraints. Trackers that are marked as hard constraints are assumed to have accurate tracker paths that do not contain any errors. Trackers can also be set to hard constraints individually by ticking the Hard column in the tracker list.

Soft: Change all selected trackers to soft constraints. Trackers that are marked as soft constraints may have small errors in their tracker paths without affecting the overall camera motion. Trackers can also be set to soft constraints individually by un-ticking the Hard column in the tracker list.

Push/Pull: When enabled, initialised tracker distances and uncertainties can be set interactively in a Viewer window, as described above.

Constraint controls

Create: Create a new empty constraint.

Delete: Delete the selected constraint.

Add To: When a constraint and one or more trackers are selected, add the selected trackers to the constraint.

Remove From: Remove the selected trackers from a constraint.

All/None: Select all or none of the trackers in the constraint.

Once created, constraints are listed on the left, including:

Name: The constraint name, which can be changed by double-clicking in the Name column.

Active: A check-box indicating whether the constraint is active or not.

Type: The type of constraint, which can be changed by right-clicking in the column and choosing an option from the popup menu.

Count: The number of trackers in the constraint.

Trackers should be selected in either the Cinema or Viewer windows and then added to the constraint.

The Errors graph

The error graph in Camera Solver

The errors graph plots the projection error (also called the Residual Error, measured in pixels) for each tracker in each frame, along with the average projection error for all trackers visible in a frame.

The projection error is the difference between the tracker path position and the projection of the 3D tracker point onto the camera plane. Ideally, the projection error should be close to zero for each tracker.
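As a sketch of how a residual of this kind is computed, the following uses an idealised pinhole camera; PFTrack's actual camera model also accounts for lens distortion and other parameters not shown here:

```python
def residual_error(point_3d, tracker_px, R, t, focal_px, principal):
    """Illustrative pinhole-camera residual, in pixels.
    point_3d:   solved 3D tracker position in world space
    tracker_px: 2D tracked position in pixels
    R, t:       camera rotation (3x3 nested list) and translation, world -> camera
    focal_px:   focal length expressed in pixels
    principal:  principal point in pixels"""
    # Transform the world point into camera space: cam = R @ p + t
    cam = [sum(R[i][j] * point_3d[j] for j in range(3)) + t[i] for i in range(3)]
    # Project onto the camera plane and convert to pixel coordinates
    px = focal_px * cam[0] / cam[2] + principal[0]
    py = focal_px * cam[1] / cam[2] + principal[1]
    # Residual is the distance between the projection and the tracked position
    return ((px - tracker_px[0]) ** 2 + (py - tracker_px[1]) ** 2) ** 0.5

# A point directly ahead of an unrotated camera projects exactly onto the
# principal point, so its residual is zero:
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
err = residual_error([0.0, 0.0, 5.0], (960.0, 540.0),
                     identity, [0.0, 0.0, 0.0], 1500.0, (960.0, 540.0))
```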

Selected trackers are shown in yellow, and unselected trackers are shown in blue. The average projection error graph is shown in white. The error graph can be translated and scaled by clicking and dragging with the right or middle mouse buttons.

A tracker may have a large projection error for several reasons:

1. If a tracker is marked as a Hard constraint and has a large projection error in most frames, it often means that the tracker path does not correspond to a fixed point in 3D space.

This can be caused by a tracker being positioned over a "virtual corner": an image feature that looks like a corner in the image, but is actually formed by the intersection of edges in the scene at different distances from the camera, so the apparent intersection moves as the camera moves. In these cases, the tracker should probably be de-activated.

2. If a tracker is marked as a Hard constraint and is tracked accurately, but has a projection error that increases significantly in certain frames, it often indicates that the camera position in those frames is incorrect.

Adding more trackers to the solution, or providing estimates of tracker distances or the camera focal length, can often help here.

3. Trackers that are marked as Soft constraints may also have a small overall error that increases significantly in certain frames. This is often caused by the tracker jumping onto a different image feature in those frames.

These frames do not influence the overall average error by much because of the soft constraint, and these jumps can be removed by trimming the error graph.

Error Graph controls

Click and drag the right mouse button in the error graph to pan the graph. Click and drag the middle mouse button to zoom (or use the mouse wheel, holding Alt/Option to zoom vertically instead of horizontally).

Show All: When enabled, error graphs will be shown for all trackers; otherwise, graphs will only be shown for selected trackers.

Trim: Display a trim line, allowing all trackers whose projection errors are larger than a particular value to be ignored during a camera solve or refinement.

Edit Trim Curve: Toggles between moving the trim line as a whole and editing the shape of the trim line to allow more flexible trimming where a single value for the entire sequence is not sufficient. The trim line can be edited using the standard controls for manipulating a Bezier curve. The R button resets the shape of the trim line.

Fit View: Scale and translate the error graph so all tracker error lines are visible.

Trimming trackers

The trim line can be moved up and down by dragging it with the left mouse button. Once activated, trimming will remain active even if the Trim button is disabled: the Trim button only controls whether the trim line is displayed, not whether trimming itself is active. To disable trimming, click the D button next to the Trim button (only available when the trim line is not displayed).

The Coverage Panel

The Coverage Panel displays information about the frames in which each tracking point has been tracked:

The coverage panel in Camera Solver

This can be used to evaluate how well tracking points are distributed throughout the clip, which will help to provide an accurate camera solve without any jumps in the camera path.

Coverage Keys display

By default, the Coverage Panel displays keyframe information showing how tracking points have been positioned and tracked. Each frame in which the tracker is present is shown with a blue square. Light-blue squares indicate where automatically generated tracking points were initially placed, and yellow squares show frames in which manually generated tracking points were keyframed.

Frames in which the tracker is visible but has not been positioned are displayed in dark red. It is important to ensure that tracking points have been positioned in all frames in which they are visible, as this can significantly affect the accuracy of the camera solve.

Coverage Error display

The coverage panel in Camera Solver

Alternatively, the Coverage Panel can display the projection error for each tracker by clicking the Errors button. This switches the colour-coding of each indicator to show the error of the solved tracking point in each frame: green for errors of less than 0.5 pixels, yellow for errors of around 1.5 pixels, and red for errors greater than 2.5 pixels.

Coverage Panel controls

The Coverage Panel can be panned horizontally or vertically by clicking and dragging with the right mouse button. Clicking and dragging with the middle mouse button will zoom horizontally or vertically, changing the number of tracking points and frames that are displayed in the panel.

The mouse wheel can also be used to zoom horizontally, or vertically if the Alt/Option key is held.

Clicking on an indicator with the left mouse button will select the tracking point and display that frame in the Cinema window. Holding the Ctrl key will allow multiple tracking points to be selected.

Double-clicking on an indicator with the left mouse button will select the tracking point and immediately switch to display the node which generated that tracking point. This can be used to quickly jump to a User Track node to manually adjust a tracking point to correct a tracking error.

All: Switch between displaying all trackers, or only those trackers visible in the current frame.

Keys: Display keyframe information, showing where targets have been tracked and manually positioned.

Errors: Display projection error information.

Name: Sort the tracking points by name, in alphabetical order.

Start: Sort the tracking points according to the first frame in which they are tracked.

End: Sort the tracking points according to the last frame in which they are tracked.

Hard: When both Hard and Soft constraint trackers are present, this button can be used to switch between displaying all trackers, or only those marked as a Hard constraint.

Fit: Fit the tracking points display to the window. This will zoom in or out as necessary, displaying as many tracking points and frames as will fit in the viewport.

The Solver log

This window contains useful information generated by the solver as the camera path is estimated, including the initial frames used to build the solution, the estimated field of view and focal length, and the average pixel error of each frame as it is solved. By default, the solver log is not stored in the project file, although this behaviour can be changed from within the General Preferences window.

Here is an example output when building the initial solution for a camera with unknown focal length:

Using initial frames 22 and 119..
FOV: 21.63 x 12.27 (focal= 38.62 mm) Error= 1.08448
FOV: 30.80 x 17.62 (focal= 26.78 mm) Error= 0.358187

In this case, a field of view of 30.8 x 17.6 degrees was found between frames 22 and 119, with an average pixel error of 0.35. This error is low, so it is likely that the initial solution will be accurate.
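The field of view and focal length reported in the log are related through the size of the film back. As an illustration, the standard pinhole relation fov = 2 * atan(filmback / (2 * focal)) reproduces the numbers above when a hypothetical film back width of roughly 14.75 mm is assumed (this film back value is an assumption chosen to match the example, not a value from the log):

```python
import math

def fov_degrees(focal_mm, filmback_mm):
    """Field of view (in degrees) from a focal length and the corresponding
    film back dimension, using the pinhole relation fov = 2*atan(w / (2*f))."""
    return math.degrees(2.0 * math.atan(filmback_mm / (2.0 * focal_mm)))

# With an assumed ~14.75 mm film back width, a 26.78 mm focal length gives
# roughly the 30.8 degree horizontal field of view reported in the log above:
horizontal_fov = fov_degrees(26.78, 14.75)
```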

Initial Solution:
FOV: 30.88 x 17.66 (focal= 26.71 mm)
Error: 0.20(0.48) 0.22(1.11)

After the initial solution was completed, a focal length of 26.7mm was found. The average error in the first initial frame was 0.2 pixels, and the largest error was 0.48. The average error in the second initial frame was 0.22, with a maximum of 1.11.

By way of contrast, here is another example from the solver log, where the shot has not been solved accurately. In this case, foreground and background distances were confused:

Using initial frames 22 and 119..
FOV: 22.19 x 12.59 (focal= 37.62 mm) Error= 2.03055
FOV: 55.73 x 33.13 (focal= 13.95 mm) Error= 1.84218
FOV: 80.80 x 51.16 (focal= 8.67 mm) Error= 1.5118

The field of view here is fairly large, which may be an indication that the solver has not been able to estimate the focal length accurately. Similarly, the error found here is much larger than in the previous case.

Initial Solution:
FOV: 93.02 x 61.33 (focal= 7.00 mm)
Error: 0.95(2.12) 0.87(3.99)

The errors in the first and last initial frames here are larger: the maximum errors in each frame are over 2 and almost 4 pixels respectively.

Once the initial solution has been completed, the solver log will display average pixel errors as additional frames are introduced into the solution. The field of view and focal length for each frame are also displayed. In the case of a bad quality solve, these may look something like:

Solved frame 15: FOV: 90.80 x 59.40 (focal= 7.28 mm) Error= 1.71392
Solved frame 14: FOV: 90.80 x 59.40 (focal= 7.28 mm) Error= 1.79394
Solved frame 13: FOV: 90.80 x 59.40 (focal= 7.28 mm) Error= 1.88736
Solved frame 12: FOV: 90.80 x 59.40 (focal= 7.28 mm) Error= 1.99767
Solved frame 11: FOV: 90.11 x 58.81 (focal= 7.36 mm) Error= 1.91009
Solved frame 10: FOV: 90.11 x 58.81 (focal= 7.36 mm) Error= 2.02863
Solved frame 9: FOV: 89.56 x 58.34 (focal= 7.44 mm) Error= 2.28063
Solved frame 8: FOV: 89.06 x 57.92 (focal= 7.50 mm) Error= 2.42212
Solved frame 7: FOV: 88.57 x 57.50 (focal= 7.56 mm) Error= 2.54336
Solved frame 6: FOV: 88.17 x 57.17 (focal= 7.62 mm) Error= 2.65623
Solved frame 5: FOV: 87.86 x 56.90 (focal= 7.66 mm) Error= 2.70416
Solved frame 4: FOV: 87.64 x 56.72 (focal= 7.69 mm) Error= 2.75542
Solved frame 3: FOV: 87.51 x 56.61 (focal= 7.71 mm) Error= 2.82041

These can be a useful indicator that the initial camera focal length is wrong. In this case, the error increases significantly as more frames are added, and the focal length is also changing. This is likely because the focal length estimated in the initial solution was wrong and no longer fits the structure of the scene.

After correcting for the inverted foreground/background problem in the shot by initialising two tracker distances, the solver log shows the following, with a low error and stable focal length estimate:

Solved frame 15: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.318843
Solved frame 14: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.333848
Solved frame 13: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.317608
Solved frame 12: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.352471
Solved frame 11: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.301172
Solved frame 10: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.329006
Solved frame 9: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.324059
Solved frame 8: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.334113
Solved frame 7: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.325696
Solved frame 6: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.37913
Solved frame 5: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.369218
Solved frame 4: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.347188
Solved frame 3: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.354713
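For long logs, the pattern described above (a stable focal length and low errors versus a drifting focal length and rising errors) can be checked programmatically. The following sketch parses log lines of the form shown above; the thresholds are illustrative assumptions, not values used by PFTrack:

```python
import re

# Matches lines such as:
#   Solved frame 15: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.318843
LINE = re.compile(r"Solved frame \d+:.*focal=\s*([\d.]+) mm\) Error=\s*([\d.]+)")

def looks_unstable(log_text, focal_tol_mm=0.1, max_error_px=1.0):
    """Flag a solve whose focal length drifts by more than focal_tol_mm,
    or whose per-frame error exceeds max_error_px (thresholds are
    illustrative assumptions)."""
    focals, errors = [], []
    for m in LINE.finditer(log_text):
        focals.append(float(m.group(1)))
        errors.append(float(m.group(2)))
    if not focals:
        return False
    focal_drift = max(focals) - min(focals) > focal_tol_mm
    high_error = max(errors) > max_error_px
    return focal_drift or high_error

bad = ("Solved frame 15: FOV: 90.80 x 59.40 (focal= 7.28 mm) Error= 1.71392\n"
       "Solved frame 9: FOV: 89.56 x 58.34 (focal= 7.44 mm) Error= 2.28063\n")
good = "Solved frame 15: FOV: 30.88 x 17.66 (focal= 26.71 mm) Error= 0.318843\n"
print(looks_unstable(bad), looks_unstable(good))  # True False
```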

Default Keyboard Shortcuts

Keyboard shortcuts can be customised in the Preferences.

Set First Initial Ctrl+A
Set Last Initial Ctrl+L
Preview Ctrl+R
Solve All Shift+S
Solve Trackers Shift+T
Refine All Shift+R
Refine Camera Shift+Y
All/None Trackers Shift+L
Activate Shift+A
Deactivate Shift+D
Hard Constraint Shift+H
Soft Constraint Shift+O
Push/Pull Shift+P
Show Ground Ctrl+G
Show Horizon Ctrl+H
Show Geometry Ctrl+E
Show Projections Ctrl+P
Show Names Ctrl+N
Show Info Ctrl+I
Show Frustum Ctrl+F
Marquee Shift+M
Centre View Shift+C
All Errors Shift+E
Trim Shift+I
Edit Trim Curve Alt/Option+I
Fit Shift+F
Next Clip C