2.1 Solving Multiple Cameras Into a Single Scene

20 Jun, 2017 | KNOWLEDGE BASE, PFTRACK

This article is part of Level 2: Extended Matchmoving in PFTrack, from The Pixel Farm’s training course for PFTrack. Find out more and register for the next available live class.

When tracking multiple shots from the same location, it is often important that every shot shares the same coordinate system. This ensures that key locations remain identical regardless of which camera solve is being worked on. A straightforward way to give multiple cameras a shared coordinate system is to track and solve them into the same scene in PFTrack.

In the Extended Matchmoving in PFTrack class we used two different approaches to tracking multiple moving cameras into a single scene: first using shared user tracks, and then using survey photographs of the location. This article serves as an overview of the techniques used.

Finally, we tracked an independently moving object in one of the clips, resulting in a scene with three different moving elements (two cameras and one object).

Contents

01. Importing Clips and Photos

02. Solving Cameras Using Common Trackers

– Tracking Common Features

– Solving the Cameras

– After the Solve

03. Solving Cameras with Survey Photographs

– Using Still Images in the Tracking Tree

– The Photo Survey Node

– Orienting the Scene

– Solving the Movie Cameras

04. Tracking the Object

– Tracking the Object

– Solving the Object

05. The Result

06. Conclusion

– Further Reading

Training Footage

The clips and images used in this training.

Footage: PFTClocktower.zip


01. Importing Clips and Photos

The task achieved in the live training was to solve two moving cameras into one scene, so they would share a common coordinate system. For the photogrammetry approach, we also chose 38 still photos to survey the scene.

Importing the movie clips is a straightforward drag and drop operation from the File Browser into the Media Bins.

When importing still images, a little more care must be taken. Make sure to switch the File Browser into single frames mode before importing the stills. This is especially important when the still images have different orientations or non-sequential file name numbering. To avoid cluttering your Default media bin, you can drag and drop the whole directory rather than the individual images.

02. Solving Cameras Using Common Trackers

Using common trackers to solve two or more cameras into a single scene is an approach that requires no additional data and works whenever there are enough common features to track in each clip.

Tracking Common Features

To identify and track common features in multiple clips, all of the clips must be connected to a single User Track node. You can create common features in the node by placing and tracking appropriate trackers in all clips.

The screenshot below shows an example of suitable common features in the two clips.

The camera_trackers.txt file, which is included in the archive with the clips and still images, holds a number of trackers that are common to both clips, as well as some additional trackers unique to each clip. These can be imported and reviewed in the User Track node.
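To make the idea of common versus unique trackers concrete, here is a minimal sketch. The tracker names and per-clip groupings below are entirely hypothetical (they are not read from camera_trackers.txt); the point is simply that only features tracked in both clips tie the two camera solves into one coordinate system, while unique trackers add extra constraints to a single solve.

```python
# Hypothetical tracker names for each clip (not taken from camera_trackers.txt).
clip_a = {"clockface", "ledge_l", "ledge_r", "door_top", "lamp"}
clip_b = {"clockface", "ledge_l", "ledge_r", "door_top", "bench"}

common = clip_a & clip_b    # shared features: these tie the two solves together
unique_a = clip_a - clip_b  # extra constraints for clip A only
unique_b = clip_b - clip_a  # extra constraints for clip B only

print(sorted(common))  # → ['clockface', 'door_top', 'ledge_l', 'ledge_r']
```

In PFTrack this bookkeeping happens implicitly: a tracker placed and tracked in both clips inside the same User Track node is a common feature by construction.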

Solving the Cameras

Next, all of the User Track node’s outputs have to be connected to a Camera Solver node.

In the Camera Solver node both cameras can be solved simultaneously by clicking Solve All. Solve parameters can be adjusted individually for each camera before the solve.

Once the solve is complete, both camera movements can be inspected in the 3D views.

After the Solve

The scene can now be passed downstream in the tracking tree, for example into Orient Scene, Test Object and Export nodes. Each output of the Camera Solver node represents one camera, so all outputs need to be connected to any node that should affect all cameras. This is particularly important for nodes that may change the coordinate system, such as an Orient Scene node.
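The reason every output must pass through the same orientation can be sketched numerically. This is an illustration, not PFTrack internals: a reorientation is a rigid change (here a rotation about the vertical axis plus a translation, with made-up camera positions). Applied to all cameras it preserves their relative placement; applied to only one, the cameras drift apart in the shared scene.

```python
import math

def orient(p, angle_deg, offset):
    # Rigid change of coordinate system: rotate about the vertical (y)
    # axis, then translate. This stands in for what an Orient Scene
    # node does to the whole scene.
    a = math.radians(angle_deg)
    x, y, z = p
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    return (xr + offset[0], y + offset[1], zr + offset[2])

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

cam_a, cam_b = (1.0, 2.0, 0.0), (-1.0, 2.0, 3.0)  # made-up positions

# Both cameras re-oriented together: their relative distance is unchanged.
both = dist(orient(cam_a, 90, (0, 0, 4)), orient(cam_b, 90, (0, 0, 4)))
# Only one camera re-oriented: the shared coordinate system is broken.
one = dist(orient(cam_a, 90, (0, 0, 4)), cam_b)
```

`both` equals the original camera separation to floating-point precision, while `one` does not; the same argument applies to every tracker and point-cloud position in the scene.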

All cameras can be exported into a single 3D scene file, or individually.

03. Solving Cameras with Survey Photographs

Using survey still images of a location is a different and dedicated approach to solving multiple moving cameras into one scene. With this method, a Photo Survey node is used to set up a scene from still images, into which moving cameras will be sequentially solved in a Scene Solver node.

Using Still Images in the Tracking Tree

The Image Input node is the recommended way to use still images in a tracking tree. As noted above, this is particularly important when the images have different orientations or non-sequential file name numbering.

Still images are added to the Image Input node by dragging and dropping the images into the node’s editor.

Some operations and management can be performed in the Image Input node, such as specifying the orientation of images, or flip and flop operations. A full description of the operations is available in the node’s reference help page.

During the training course, the still images were flipped and flopped so they wouldn’t be displayed upside down in the Cinema.

The Photo Survey Node

The Photo Survey node creates a scene and point cloud from the still images in two steps: first, matching features are found automatically across the images; second, a point cloud is created from these features.
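Conceptually, the second step recovers a 3D position for each matched feature by intersecting the viewing rays from the photos that see it. The sketch below is an illustration of that idea, not PFTrack's implementation: it triangulates one point as the midpoint of the closest approach of two rays, using made-up camera positions and ray directions.

```python
# Small vector helpers (3-tuples).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    # Find t1, t2 minimising |(o1 + t1*d1) - (o2 + t2*d2)|, then return
    # the midpoint of the two closest points on the rays.
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b  # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, mul(d1, t1))
    p2 = add(o2, mul(d2, t2))
    return mul(add(p1, p2), 0.5)

# Two made-up photo positions, both looking at the point (0, 0, 5):
print(triangulate((-1, 0, 0), (1, 0, 5), (1, 0, 0), (-1, 0, 5)))
# → (0.0, 0.0, 5.0)
```

Repeating this over thousands of matched features (and refining the camera poses at the same time) is what yields a survey point cloud.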

EXIF data should be read from the image files, if available.

Orienting the Scene

In the class, we orient the scene at this stage, before solving the movie cameras. However, it is not necessary to orient the scene before using the Scene Solver node; this could also be done at a later stage.

To scale the scene in the live training, we use a known distance between two trackers. The distance between the two hinges on the piece of metal leaning against the wall was measured on location. A known distance between two selected trackers or point cloud points can be provided to the Orient Scene node to scale a scene.
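The arithmetic behind this kind of scaling is simple and can be sketched as follows. The tracker positions and the measured distance below are made-up values, not the ones from the training footage: a uniform scale factor is the ratio of the distance measured on location to the same distance in the unscaled solve.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Solved tracker positions in arbitrary solve units (hypothetical values).
hinge_top = (0.12, 1.80, 0.41)
hinge_bot = (0.12, 1.15, 0.40)

measured_m = 0.85  # distance measured on set, in metres (hypothetical)

scale = measured_m / dist(hinge_top, hinge_bot)
# Multiplying every camera and point position by `scale` puts the
# whole scene in metres.
```

This is why a single measured distance is enough: one ratio fixes the scale of everything that shares the coordinate system.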

Solving the Movie Cameras

The Scene Solver node is a specialised node to solve moving cameras into a scene that has been surveyed photographically with the Photo Survey node. The Photo Survey point cloud has to be connected to the Scene Solver’s first input, with movie cameras connected to additional inputs.

After the Photo Survey dataset has been initialised for the node (which has to be done once for each node), additional cameras can be tracked and solved into the scene.

04. Tracking the Object

There are several ways of tracking the motion of an independent object within PFTrack. If a geometric model of the object is available, geometry tracking could be used. A more traditional way involves tracking the object with user tracks before solving for the motion in an Object Solver node, which is the approach taken in the live training.

Tracking the Object

Using trackers to track the object in a User Track node follows the same steps as outlined above. However, since the trackers are used to solve an object as opposed to a camera, a new motion group must be created.

The file box_trackers.txt holds the trackers used in the live training for comparison.

Solving the Object

The Object Solver node converts 2D tracks into object motion in the same way a Camera Solver can be used to solve a camera from 2D tracks.

If the object is only visible from one camera, the Object Solver cannot determine the scale of the object automatically. It could be either a large object far away or a small object closer to the camera. In this case, the Distance From Camera orientation mode can be used to scale the object.

During the class, we scale the object such that its distance from the camera is approximately 3.5 m in the first frame.
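The idea behind the Distance From Camera mode can be sketched with a few lines of arithmetic. The positions below are made-up, and this is an illustration of the scale ambiguity rather than PFTrack's implementation: the solve fixes the object's direction from the camera but not its distance, so its position is rescaled along that direction until the first-frame distance matches the chosen value.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

camera = (0.0, 1.6, 0.0)                 # solved camera position (made up)
object_first_frame = (0.4, 1.2, 1.9)     # object position, ambiguous scale
target_m = 3.5                           # known distance in the first frame

# Rescale the object's position relative to the camera so the
# first-frame distance equals the target.
s = target_m / dist(camera, object_first_frame)
scaled = tuple(c + s * (o - c) for c, o in zip(camera, object_first_frame))
```

After this, `dist(camera, scaled)` is 3.5 m, and the same factor would be applied to the object's motion on every frame so that the scale stays consistent.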

05. The Result

The resulting scene from the second approach contains all locations of the still images, the point cloud, the two moving cameras and the moving object.

06. Conclusion

In this article, we have briefly recapped the techniques covered in the training course to arrive at the final scene. Whilst we “only” matched three moving elements (plus the many still images), the number of cameras and objects you can add to a scene is virtually unlimited.

Further Reading

In 2.2 Creating a Triangular Mesh from Still Images you can read how the point cloud created in the Photo Survey node can be used to create a geometric model of the clocktower front.

In 2.3 Using Multiple Views to Solve Nodal Pans you can learn how the approach of tracking common features in multiple views helps to extract 3D information from nodal pans.
