
The How and Why of Feature Tracking in PFTrack


Image showing a manual feature track

What’s the difference between automatic and manual tracking? Which is better? When should I use one instead of the other? And how do the differences affect the camera solver?


In this article we’ll take a look at some of the more technical details of how trackers are used in PFTrack, and suggest some ways of getting the most out of PFTrack’s advanced feature tracking tools.



 


What is a tracker?


A tracker defines the location of a single point in 3D space, as viewed by a camera in multiple frames. In PFTrack, trackers are generally created using two nodes: Auto Track and User Track. The Auto Track node is able to generate a large number of estimated trackers automatically, and the User Track node provides manual tracking tools for precise control over exactly where each tracker is placed in each frame.


PFTrack's node tree showing the Auto Track and User Track nodes
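To make the idea concrete, you can picture a tracker as a small record: one (initially unknown) 3D point, plus the 2D pixel position of that point in every frame where it was tracked. The Python sketch below is purely illustrative - it is not PFTrack's internal data structure - but it captures the essentials:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Tracker:
    """A single tracked feature: one 3D point seen in many frames."""
    name: str
    # 2D pixel position of the feature in each frame where it was tracked.
    observations: Dict[int, Tuple[float, float]] = field(default_factory=dict)
    # Estimated 3D position in the scene, filled in by the camera solver.
    point_3d: Optional[Tuple[float, float, float]] = None

    def is_visible(self, frame: int) -> bool:
        """Trackers can be hidden in frames where the point is occluded."""
        return frame in self.observations
```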

Trackers form the backbone of any camera solve, and they are used to work out how the camera is moving, along with its focal length and lens distortion if those are unknown. But how many trackers do you need, and what is the best way of generating them?



How are trackers used to solve the camera?


When solving for the camera motion in a fixed scene under normal circumstances, PFTrack needs a minimum of 6 trackers to estimate the motion from one frame to the next. This is the bare minimum, however, and we generally recommend using at least 8 or 10, especially if you’re not sure of the focal length, sensor size, or lens distortion of your camera. Using a few more than the minimum can also help smooth out transitions in the camera path at frames where one tracker vanishes and another appears.


A point cloud of solved feature points with a virtual camera following the camera path
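As an aside, standard two-view geometry hints at where a minimum like six comes from (this is a general counting argument, not a description of PFTrack's internal solver). Discounting overall scale, the relative motion between two frames has five unknowns, and an unknown focal length adds a sixth:

$$ \underbrace{3}_{\text{rotation}} + \underbrace{2}_{\text{translation direction}} + \underbrace{1}_{\text{focal length}} = 6 \ \text{unknowns} $$

Each tracker visible in both frames supplies one constraint on the geometry relating them, so you need at least six trackers to pin everything down.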

Trackers should be placed at points that are static in the real world (i.e. do not move in 3D space), such as the corner of a window frame or a distinguishable mark in an area of brickwork. This allows the 3D coordinates of the point to be estimated, which in turn helps to locate where the camera is in each frame.
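The estimation being described here is, at its heart, triangulation: intersecting the viewing rays of a static point from two or more camera positions. The following NumPy sketch of classic linear (DLT) triangulation is included only to illustrate the geometry - PFTrack's own implementation is not public:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices for the two frames.
    x1, x2 : (x, y) pixel positions of the same tracker in each frame.
    Returns the estimated 3D point.
    """
    # Each observation contributes two linear equations in the 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise
```

If the point moves in 3D space, no single position can explain all of its 2D observations, which is exactly why trackers belong on static features.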


To help with estimating camera motion, trackers also need to be placed in both the foreground and background of your shot, especially when trying to estimate focal length, as this provides essential parallax information to help the solve. It’s also important to have trackers placed in as many parts of the frame as possible, rather than just bunching them together in a single area. Think of your camera’s grid display as dividing your frame into a 3x3 grid of boxes - try to have at least one tracker in each box in every frame, and you’ll have good overall coverage.


An example of good manual user track placement in a clip
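That 3x3 rule of thumb is easy to check programmatically. The helper below is hypothetical (the function name and interface are made up for illustration), but it shows the idea: count trackers per grid cell for one frame, and treat any empty cell as a coverage warning:

```python
import numpy as np

def grid_coverage(points, width, height, rows=3, cols=3):
    """Count trackers in each cell of a rows x cols grid over the frame.

    points : iterable of (x, y) tracker positions in a single frame.
    Returns a rows x cols array of counts; a zero anywhere means that
    part of the frame has no tracker coverage.
    """
    counts = np.zeros((rows, cols), dtype=int)
    for x, y in points:
        c = min(int(x / width * cols), cols - 1)
        r = min(int(y / height * rows), rows - 1)
        counts[r, c] += 1
    return counts
```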


Not every tracker is equal


We’ll get into the details of how to generate trackers shortly, but before we do it’s important to understand that not every tracker is treated equally when solving the camera. The most significant distinction is whether a tracker is defined as a soft or a hard constraint on the camera motion.


Hard constraints mean the placement of the tracker in every frame is assumed to be exact. If you’ve generated trackers manually using the User Track node, they will be set as hard constraints by default. The solver will try to adjust the camera position and orientation so that the tracker’s 3D position lines up exactly with its 2D position in every frame when viewed through the camera lens.


A table demonstrating the difference between hard and soft constraints


On the other hand, trackers that are generated automatically with the Auto Track node are marked as soft constraints and don’t have to be placed exactly in every frame. The camera solver is able to recognise that some errors in the 2D positions exist and ignore them. These erroneous positions are often referred to as “outliers”, and might correspond to a temporary jump in the tracker position for a couple of frames, or the subtle motion of a background tree in the wind that causes the 3D location of the tracking point to change from frame to frame.
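A common way for solvers in general to express this distinction - and this is a generic sketch, not PFTrack's published internals - is through the loss applied to each tracker's reprojection error. A plain squared error demands an exact fit in every frame, while a robust, Huber-style loss grows only linearly for large residuals, so the occasional outlier frame barely influences the solution:

```python
import numpy as np

def reprojection_cost(residuals, hard, k=2.0):
    """Toy cost over one tracker's per-frame reprojection errors (pixels).

    hard=True  : squared error, so large residuals dominate and the
                 solver is forced to fit the tracker exactly.
    hard=False : Huber-style robust loss with threshold k, which
                 down-weights outlier frames instead of chasing them.
    """
    r = np.abs(np.asarray(residuals, dtype=float))
    if hard:
        return np.sum(r ** 2)
    quadratic = np.minimum(r, k)   # behaves like squared error near zero
    linear = r - quadratic         # grows only linearly past the threshold
    return np.sum(0.5 * quadratic ** 2 + k * linear)
```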


So now that we’ve explained some of the details about how the camera solver uses trackers, what is the best way of generating them? Auto-Track? User-Track? Or both? Ultimately, the answer to this comes down to experience with the type of shot you’re tracking, how much time you have to spend on it, and the final level of accuracy you need to complete your composite.


To get started, here are some guidelines that should help you quickly get the most out of PFTrack’s tools.



 


Automatic feature tracking


If you have all the time in the world to track your shot, then of course, manually placing each tracker in every frame is the way to go, as this ensures each one is placed exactly where it should be.


Alternatively, automatic feature tracking is a way of generating a large number of trackers very quickly, but because the tracking algorithm is attempting to quickly analyse the image data and work out the best locations to place them, not every tracker is going to be perfect.


Automatic feature tracks in action

With default settings, the Auto Track node tries to pick around 40 trackers in each frame and distributes them as best it can over the image area.
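One generic way to achieve that kind of spread - a hypothetical sketch of the bucketing idea, not PFTrack's actual selection algorithm - is to divide the frame into cells and keep only the strongest corner candidates in each:

```python
def pick_distributed_features(candidates, width, height,
                              per_frame=40, rows=5, cols=8):
    """Spread roughly `per_frame` features over the image.

    candidates : list of (score, x, y) corner candidates for one frame.
    Keeping only the best candidates per grid cell stops features from
    bunching up in a single highly textured region.
    """
    cells = {}
    for score, x, y in candidates:
        key = (min(int(y / height * rows), rows - 1),
               min(int(x / width * cols), cols - 1))
        cells.setdefault(key, []).append((score, x, y))
    quota = max(1, per_frame // (rows * cols))
    picked = []
    for cell in cells.values():
        cell.sort(reverse=True)       # strongest corners first
        picked.extend(cell[:quota])
    return picked[:per_frame]
```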


However, these trackers may end up being placed on objects that are moving independently from the camera, or at other locations that cannot be resolved to a single point in 3D space. For example, so-called “false corners” that result from the intersection of two lines at different distances from the camera can often be indistinguishable from real corners when looking at a single image.


Whilst the camera solver will ignore these outliers to a certain extent, having too many trackers falling into these categories can adversely affect the solve, so how should you deal with them?



Identifying errors


Whilst PFTrack will attempt to detect when tracking fails, not every glitch can be caught automatically, especially when your shot contains motion blur or fast camera movement. It’s always worth reviewing automatic tracking results to check for any obvious errors.


For example, the motion graphs in the Auto Track node can be used to quickly identify trackers that are moving differently from the others.


Motion graph in PFTrack showing a highlighted glitch
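The same check can be expressed in code: compare each tracker's frame-to-frame motion against the typical motion of the rest, and flag the ones that consistently disagree. This sketch is hypothetical (the names and threshold are illustrative), but it mirrors what you are looking for in the motion graphs:

```python
import numpy as np

def flag_outlier_motion(tracks, threshold=3.0):
    """Flag trackers whose 2D motion deviates from the consensus.

    tracks : dict mapping tracker name -> (N, 2) array of pixel
             positions over the same N frames.
    Returns the names whose median deviation from the per-frame median
    motion exceeds `threshold` pixels.
    """
    names = list(tracks)
    # Frame-to-frame motion vectors for every tracker: (T, N-1, 2).
    motion = np.stack([np.diff(tracks[n], axis=0) for n in names])
    consensus = np.median(motion, axis=0)        # typical motion per frame
    deviation = np.linalg.norm(motion - consensus, axis=2)
    return {n for n, d in zip(names, deviation) if np.median(d) > threshold}
```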

The “Centre View” tool can also be used to lock the viewport onto a single tracker. Scrubbing forwards and backwards through the shot will often expose motion that is subtly different from the background scene, which may indicate a false corner or other gradual object movement.


To fix or disable?


So now you’ve identified some trackers that need attention. What’s next?

The Auto Track node is built to quickly generate your tracking points, and the User Track node provides you with full control to address any issues and manually place trackers yourself.


Fixing a tracking point is easy enough - just use the Fetch tool in the User Track node to convert the automatic tracker into a manual one, and all the tools of the User Track node are available to you to adjust the tracker as needed.


Quick video demonstrating how to manually adjust Auto Tracks


You can manually correct every single one of your automatic trackers if you wish, but as we mentioned earlier, the Auto Track node generates many more trackers than are actually needed to solve the camera motion. This means you may well be spending a lot of time unnecessarily correcting trackers if you have a particularly tricky shot.


It can often be just as effective to quickly disable the bad trackers, especially if time is short. This is certainly the case if you’ve only got a few outliers, and also have other trackers nearby that don't need fixing.


Quick video demonstrating how to disable Auto Tracks


You could also use the masking tools in PFTrack to mask out any moving objects before automatic tracking, although it’s important to weigh the time it will take you to draw the mask against the time it takes to identify and disable a handful of trackers afterwards.



Remember that trackers should be distributed over as much of the frame as possible, and we recommend a minimum of around 10 in each frame, so keep this in mind when disabling. If you end up having to disable a lot of trackers and are approaching single figures, then a different strategy may be necessary: supervised tracking.



 


Supervised feature tracking


Ultimately, a lot of shots will need some level of manual, or 'supervised', tracking using the User Track node.


An image of a supervised feature track

This is especially important if you’re tracking an action shot with actors temporarily obscuring the background scene. One limitation of automatic feature tracking is that it can’t connect features from widely separated parts of the shot if something is blocking the camera’s view of them, or if the feature point moves out of frame for a significant length of time.


In these cases, human intervention is often necessary, and this is where the User Track node comes into play, allowing you to create trackers from scratch to perform specific tasks.


For example, you may have a shot where the camera pans away from an important area for a few seconds and then pans back. Or an actor may walk in front of an important point before moving out of frame. In these cases, you want to make sure the 3D coordinates of points at the beginning are the same as at the end. Creating a single tracker and manually tracking over frames where it is visible (whilst hiding the tracker in frames where it is not visible) will achieve this goal.


PFTrack's UI showing a gap in the tracked feature

The same guidelines apply when creating tracking points manually - try to distribute them over your entire frame, and make sure that you’ve got a good number of trackers in each frame.


Also, try not to have many trackers stop or start on the same frame (especially when they are treated as hard constraints), as this can sometimes cause jumps in your camera path during the solve that will require smoothing out. If you can’t avoid this, adding a couple of “bridging” trackers elsewhere in the image that are well tracked before and after the frame in question can often help.
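If you want to audit a long shot for this, a quick script can count how many trackers begin or end on each frame. The sketch below is hypothetical, but the frames it flags are natural candidates for a bridging tracker:

```python
from collections import Counter

def risky_transition_frames(tracks, limit=2):
    """Find frames where several trackers start or stop at once.

    tracks : dict mapping tracker name -> (first_frame, last_frame).
    Returns frames where more than `limit` trackers begin or end,
    which is where the camera path is most likely to jump.
    """
    events = Counter()
    for first, last in tracks.values():
        events[first] += 1
        events[last] += 1
    return sorted(frame for frame, n in events.items() if n > limit)
```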



 


Wrap Up


Hopefully, this article has shed some light on things to consider when tracking your points. In the end, this all comes down to experience, and as you track more shots, you’ll get a better feel for when to use specific tools, and whether to start with supervised tracking straight away or give the Auto Track node a go first.


If you are using automatic tracking, you can easily place an empty User Track node between the Auto Track and Camera Solver nodes to hold any user tracks that need adjusting, or any you need to create manually.


Also, don’t worry about getting every tracker perfect before you first attempt a camera solve. It’s often possible to try auto tracking first and see where that gets you, then consider how to address any problems and add a few user tracks to help the solver out.


PFTrack lets you adjust and change your trackers however you want. If you’ve almost got a solve but can see a bit of drift in a few frames, try creating a single manual tracker over those frames in a sparsely populated area of the image, then solve for the 3D position of that tracker alone, fix your focal length and refine your solution - you don’t have to solve from scratch every time.


If you’re interested in some more details, stay tuned for a follow-on post that will explain some of the finer details of the Camera Solver node, including how to detect and handle ambiguous camera motion, how to bootstrap solves using feature distances, and exactly what an initial frame is and when to change it.



 

If you're interested in trying out automatic or supervised tracking, or if you have questions about tracking that you'd like to discuss with others, join our community and start a conversation using the links below.


Download PFTrack here 

Start a discussion here 


