1.1 Tracking and Solving a Camera in PFTrack


This article is part of Level 1: An Introduction to Matchmoving in PFTrack, one of The Pixel Farm’s Training Courses for PFTrack. Find out more and register for the next available class.

The Introduction to Matchmoving in PFTrack class explained the steps needed to go from a fresh install of PFTrack to a solved camera, an oriented scene, and a tested result. This article serves as an overview of those steps.


01. Getting to Know PFTrack 

02. Tracking Trees

03. It All Starts With a Clip

04. Tracking and Solving

– Determining Camera Movement

– Using Masks

– The Consistency Check

– Solving

05. Orienting the Scene

06. Testing the Result

07. Conclusion

– Further Reading

Tutorial Footage

The clip used during the class:

Footage: PFTPropWalk.zip


01. Getting to Know PFTrack

If you haven’t done so already, make sure to read the PFTrack Getting Started Guide PDF, which you can find in the doc folder of your installation directory.

The PDF guide will help you understand key parts of PFTrack’s user interface and explains how to perform tasks such as creating projects and importing clips.

02. Tracking Trees

PFTrack’s tree-based workflow means all work is performed in nodes, which are connected to form a tracking tree. Image data (a movie clip or a collection of still images) usually forms the root of the tree. The image data “flows” downstream into connected nodes, and arrows on the connections illustrate the direction of this virtual data flow.

The nodes generate more data, which is in turn passed further downstream, along with the image data and everything else accumulated so far in the tree.

The tree built during the live class is fairly straightforward:

Clip → Auto Track → Camera Solver → Orient Scene → Test Object → Export

The Auto Track node is used to track the clip. The trackers are passed into the Camera Solver node, which turns them into a 3D scene. The scene is then oriented and scaled in an Orient Scene node and passed into a Test Object node to test the quality of the result. Finally, it is passed into an Export node, where the clip, trackers, scene, and test objects are all available for export.
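As a loose analogy (not PFTrack’s actual API, and all names below are made up for illustration), the downstream data flow described above behaves like a pipeline of functions, each node adding its own results to the accumulated data before passing everything on:

```python
# Conceptual analogy only: each "node" receives the accumulated data from
# upstream, adds its own results, and passes everything downstream.
# The node names mirror the tree built in the class; this is NOT PFTrack code.

def auto_track(data):
    data["trackers"] = ["tracker data"]          # trackers join the flow
    return data

def camera_solver(data):
    data["camera"] = "solved 3D camera + scene"  # relies on upstream trackers
    return data

def orient_scene(data):
    data["camera"] += " (oriented and scaled)"
    return data

tree = [auto_track, camera_solver, orient_scene]
data = {"clip": "PFTPropWalk"}                   # the clip forms the root
for node in tree:
    data = node(data)
print(sorted(data))                              # clip, trackers and camera
                                                 # all reach the downstream end
```

The key point the analogy captures is accumulation: a downstream node such as Export sees not just the output of its direct parent, but everything produced along the way.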

You can learn more about the many possibilities with tracking trees in 1.2 Tracking Trees in PFTrack.

03. It All Starts With a Clip

Tracking trees are usually started with a clip. Dragging and dropping a movie clip from the Media Bins into the Tree View implicitly creates a Clip Input node, which represents that clip.

As the root of a tracking tree, the Clip Input node presents the first opportunity to provide any additional information you have about the clip.

As a rule of thumb, the more accurate information you can provide to PFTrack about the camera, lens, or location, the easier it becomes to get a high-quality result.

Of particular interest in the Clip Input node is the Camera Preset, which defines camera attributes such as its film back (the equivalent sensor size). If you know the focal length of the lens used during the shoot, setting the correct film back size is very important, as there is an inherent relationship between the focal length (when measured in mm) and the size of the camera’s sensor or film back. For a film back or sensor size other than 35mm, focal length information is often given both as the actual focal length and as its 35mm equivalent. The fact that those two values differ is a first indication of how the focal length depends on the size of the sensor. You can read more about this relationship, and how to set up a known film back size and focal length, in 1.3 The Importance of Film Back.
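To make the relationship concrete, here is a small sketch that converts an actual focal length to its 35mm equivalent using the crop factor (the ratio of the full-frame diagonal to the sensor diagonal). The Super 35 dimensions below are only an assumption for the example; exact sensor sizes vary between cameras, which is why checking your camera’s specifications matters:

```python
# Illustrative calculation of the 35mm-equivalent focal length.
# The crop factor is the ratio of the full-frame (36 x 24 mm) film back
# diagonal to the diagonal of the actual sensor.
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    full_frame_diag = math.hypot(36.0, 24.0)  # ~43.27 mm
    return full_frame_diag / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_length(focal_mm, sensor_w_mm, sensor_h_mm):
    return focal_mm * crop_factor(sensor_w_mm, sensor_h_mm)

# A 25mm lens on an assumed Super 35-sized sensor (~24.89 x 18.66 mm;
# actual values vary by camera model):
print(round(equivalent_focal_length(25.0, 24.89, 18.66), 1))
```

The same 25mm lens therefore frames the shot roughly like a ~35mm lens would on a full-frame sensor, which is exactly why the two focal length figures quoted for a camera differ.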

The Clip Input node can read ARRI and RED camera metadata from various file formats. You can also load Cooke /iData files containing camera and lens information. Check the Clip Input node’s reference help page for more information.

04. Tracking and Solving

With the Clip Input node acting as the root, a tracking tree is built to perform the steps necessary to extract a 3D scene: tracking, solving the camera, orienting the scene, and testing the result. These steps are outlined for a different clip in this tutorial.

Determining Camera Movement

When solving for camera motion, PFTrack analyses the movement of features through the sequence. To give a very simple example, if features move to the left, then the camera must be moving to the right, as we look through the lens.

This is true, as long as we look at features that never move of their own accord, such as a building. Other features, such as cars on the road or pedestrians, however, may be moving from right to left even if the camera is not moving at all! Tracking these kinds of features provides no information towards solving the camera motion. In the worst case, they might negatively influence the result.
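As a toy illustration of this idea (a deliberate simplification of what any real solver does), for a purely panning camera the implied camera motion is simply the opposite of the shared displacement of the static features:

```python
# Toy illustration only: for a purely panning camera, static features share
# (approximately) the same image-space displacement, and the implied camera
# motion is its opposite. Real camera solvers do far more than this.

def implied_pan(displacements):
    """Negate the average feature displacement to get the implied pan."""
    n = len(displacements)
    avg_dx = sum(dx for dx, _ in displacements) / n
    avg_dy = sum(dy for _, dy in displacements) / n
    return (-avg_dx, -avg_dy)

# Static features (e.g. on a building) drifting left by ~4 pixels per frame:
static_features = [(-4.0, 0.1), (-3.9, -0.1), (-4.1, 0.0)]
print(implied_pan(static_features))  # the camera is panning right
```

A feature on a pedestrian walking left would produce the same leftward displacement with no camera motion at all, which is precisely why such features contribute nothing (or worse) to the solve.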

Using Masks

The clip used during the class shows an actor carrying a box. His movement through the scene doesn’t provide any valuable information towards solving the camera. The traditional way of preventing features from being picked up on moving objects like the actor is to mask them out.

However, no masks are used during the class, as in many cases PFTrack can recognise and reject features that move in different ways compared to the rest of the frame, as is explained in the next section.

In cases where masks are required, they can be created for the Auto Track node (as well as many other nodes) in the Mask panel. Click the Mask button in the node’s editor to open the Mask panel.

If you would like more information on how to use masks in PFTrack, check out the Mask panel’s documentation by opening the Mask panel and clicking the Help button.

The Consistency Check

In PFTrack’s Auto Track node, trackers can be checked to see whether they follow an expected type of motion relative to the image. Trackers that do not move in the expected way, such as those tracking the actor, will be rejected automatically.

During the class, the Consistency parameter is left at its default, Local Motion. This compares trackers within an image area against each other and rejects those whose motion is inconsistent with the overall motion.
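As a rough sketch of the idea behind such a check (not PFTrack’s actual algorithm, and all tracker names here are invented), a tracker can be compared against the median motion of its neighbours and rejected when it deviates too far:

```python
# Hypothetical sketch of a "local motion" style consistency check: compare
# each tracker's frame-to-frame displacement with the median displacement,
# and reject trackers that deviate too far. PFTrack's real algorithm is
# more sophisticated; this only illustrates the principle.
from statistics import median

def consistent_trackers(displacements, threshold=2.0):
    med_dx = median(dx for dx, _ in displacements.values())
    med_dy = median(dy for _, dy in displacements.values())
    keep = {}
    for name, (dx, dy) in displacements.items():
        if abs(dx - med_dx) <= threshold and abs(dy - med_dy) <= threshold:
            keep[name] = (dx, dy)
    return keep

motion = {
    "wall_1": (-4.0, 0.1),
    "wall_2": (-3.8, -0.2),
    "ground": (-4.1, 0.0),
    "actor":  (6.5, 0.3),   # moves against the scene -> inconsistent
}
print(sorted(consistent_trackers(motion)))  # the actor tracker is rejected
```

The median makes the check robust: as long as most trackers sit on static parts of the scene, a minority of trackers on a moving actor stand out and can be discarded automatically, which is why no masks were needed for this clip.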


Solving

For the most part, the default parameters in the Auto Track and Camera Solver nodes are suitable for tracking and solving this clip. Only the Search mode in the Auto Track node is set to Better Accuracy, to improve the reliability of the tracking given the handheld nature of the footage and the motion blur present in some frames.

No parameters need to be adjusted in the Camera Solver node.

05. Orienting the Scene

Orienting the scene plays an important part when it comes to using the result in a third-party 3D application. Ideally, the scene’s ground plane would match the actual ground, or some other suitable surface.

Some scene orientation tasks can be performed in the Camera Solver node, with a dedicated Orient Scene node available for more fine-grained control of the scene’s coordinate system.
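To illustrate what orienting a scene means mathematically, the sketch below builds a rotation (via Rodrigues’ formula) that maps a ground-plane normal onto the +Y “up” axis. This is a hypothetical helper written purely for illustration; when working in PFTrack you do this interactively in the Orient Scene node rather than by hand:

```python
# Illustration only: a rotation matrix (Rodrigues' formula) that maps a
# ground-plane normal onto the +Y "up" axis, which is one way to think
# about levelling a solved scene's ground plane.
import math

def rotation_to_up(normal):
    """Return a 3x3 rotation matrix mapping 'normal' onto (0, 1, 0)."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    a = (nx / length, ny / length, nz / length)
    b = (0.0, 1.0, 0.0)
    # Rotation axis = a x b; sin/cos of the angle from cross/dot products.
    ax = a[1] * b[2] - a[2] * b[1]
    ay = a[2] * b[0] - a[0] * b[2]
    az = a[0] * b[1] - a[1] * b[0]
    s = math.sqrt(ax * ax + ay * ay + az * az)   # sin(angle)
    c = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]  # cos(angle)
    if s < 1e-12:                                # already aligned or opposite
        return ([[1, 0, 0], [0, 1, 0], [0, 0, 1]] if c > 0
                else [[-1, 0, 0], [0, -1, 0], [0, 0, 1]])
    k = (ax / s, ay / s, az / s)                 # unit rotation axis
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
    # R = I + sin(t) * K + (1 - cos(t)) * K^2
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            K2 = sum(K[i][m] * K[m][j] for m in range(3))
            R[i][j] = (1.0 if i == j else 0.0) + s * K[i][j] + (1 - c) * K2
    return R
```

Applying the resulting matrix to every camera and tracker position would level the scene, with the (assumed) ground normal pointing straight up; the Orient Scene node performs this kind of transform for you, along with translation and scaling.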

06. Testing the Result

The Test Object node allows you to position geometric objects in your scene to help judge the quality of the camera track. These artificial objects should act as if they were a part of the original location.

07. Conclusion

In this article, we have briefly recapped the steps taken in the live training to track and solve the clip.

Further Reading

The Getting Started in PFTrack and Auto Track and Camera Solve tutorials walk you through the basic steps again with a different clip.

1.2 Tracking Trees in PFTrack provides an introduction to key features of PFTrack’s Tree View that will help you build and organise more complex tracking trees.

In 1.3 The Importance of Film Back, the relationship between the camera’s film back and the focal length of a lens is explained in more detail.
