Creating a Triangular Mesh in PFTrack


An updated version of this tutorial is available as part of our PFTrack Training Course.

In this tutorial you will learn how to create a mesh model from multiple source clips in PFTrack’s Photo Mesh node. After creating the dense triangular mesh, you will simplify its geometry to pass the model downstream to create a texture in the Texture Extraction node and export your solved scene and the textured model.


01. Set Up the Scene

02. Create the Mesh

– Create the Photo Mesh Node

– Adjust the Bounding Box

– Create Depth Maps

– Review the Depth Maps

– Build the Mesh

– Review the Mesh

03. Using the Mesh

– Export the Full Resolution Mesh

– Simplify the Mesh

– Create a Texture for the Mesh

– Export the Mesh and Texture

Tutorial Footage



01. Set Up the Scene

Follow the steps in this tutorial to set up the scene with the two clips. Complete at least the first four sections, so that your scene is created from the still images and oriented.

When you are done, your tree should look similar to the one in the screenshot.

02. Create the Mesh

The Photo Mesh node is used to create a dense triangular mesh from one or more solved cameras or point clouds. Note that if you are using more than one input, the cameras must be part of the same coordinate system.

Create the Photo Mesh Node

Create a Photo Mesh node and connect both outputs of the Orient Scene node.

Adjust the Bounding Box

The first step towards creating your mesh is to define which part of the scene to use by adjusting the bounding box. By default, the bounding box encompasses most of the scene, but more often than not you will only be interested in creating a mesh of a specific object.

Select Edit to start editing the bounding box. Split the view into 4 windows to get a better overview of your scene. The white edges define the bounding box. Click and drag with the left mouse button to adjust the nearest, highlighted face of the box. Hold the Control key on Windows or Linux, or the Command key on Mac, to adjust the face opposite the nearest.

Edit your bounding box so that it fits the church structure and doesn’t include much else.

When you’re done, de-select Edit.

Create Depth Maps

Next, click Create to create a depth map for each of the 12 still images. This is the first step towards creating the mesh; in the second step, the mesh will be constructed from points taken from each of the depth maps. As the depth maps are created, you will see them displayed in the perspective and orthogonal windows, as well as listed in the Depth Maps table.

Creating depth maps involves complex computation. Even though the algorithms take advantage of all available GPU and CPU resources, it can nevertheless be a time-consuming process. When dealing with moving cameras in particular, the Path Sampling parameter can be used to reduce the workload. At its default value of 1, a depth map is created for every frame. If you increase this value to 2, for example, a depth map is created for every other frame only; a value of 5 would create a depth map for every fifth frame, and so on.
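The effect of Path Sampling can be pictured as a simple stride over the frame range. The sketch below is a hypothetical illustration of that selection rule, not PFTrack's actual implementation:

```python
def frames_with_depth_maps(num_frames, path_sampling=1):
    """Return the frame indices that receive a depth map for a given
    Path Sampling value (1 = every frame, 2 = every other frame, ...)."""
    return list(range(0, num_frames, path_sampling))

# 12 frames with Path Sampling set to 5:
print(frames_with_depth_maps(12, 5))  # → [0, 5, 10]
```

With a moving camera and hundreds of frames, raising Path Sampling trades depth-map coverage for a proportional reduction in processing time.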

Review the Depth Maps

Once this is done, switch back to two horizontal viewports to examine the depth maps in more detail in the perspective window. As you step through the images you can see the vertices of the current depth map in colour. In some frames, parts of the sky have been picked up as well.

Click the depth map confidence button in the Depth Maps Display section. The mesh is now shown with colours indicating confidence values. Increasing the Min. Confidence setting will hide all depth map pixels below that threshold. You can interactively increase and decrease values in text boxes by clicking and dragging with the left mouse button. Change the value to about 50%. Click the depth map colours button to see the vertex colours again.
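Conceptually, the Min. Confidence setting acts as a simple filter on depth-map points. The following sketch is a hypothetical illustration of that thresholding, assuming each point carries a confidence value between 0 and 1; it is not PFTrack code:

```python
def filter_by_confidence(points, min_confidence=0.5):
    """Keep only depth-map points whose confidence meets the threshold.

    Each point is a (x, y, z, confidence) tuple, with confidence in [0, 1].
    """
    return [p for p in points if p[3] >= min_confidence]

points = [(0, 0, 1.0, 0.9),   # solid, well-matched point
          (1, 0, 2.0, 0.3),   # weak match, e.g. sky or low texture
          (2, 1, 1.5, 0.6)]
# With Min. Confidence at 50%, only the 0.9 and 0.6 points survive:
print(filter_by_confidence(points, 0.5))
```

Low-confidence points often correspond to featureless regions such as sky, which is why raising the threshold cleans them out of the display.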

To prevent the sky being picked up in the first place, you could have masked it out with, for example, a Fuzzy Selection mask.

Build the Mesh

Before building the mesh, you have the option to adjust a separate bounding box for the mesh. This can be useful if you only want to create a mesh from parts of the depth maps. This will not be necessary for this tutorial so you are ready to build the mesh by clicking Create in the Mesh section of the UI.

Review the Mesh

Before taking a closer look at the mesh, turn off the display of both the Mesh and Scene bounding boxes by de-selecting the display buttons in the Display column of both the Scene and Mesh sections. You can also turn on surface normals and turn off vertex colours to better see the detail in the mesh. Remember that it only took 12 still images to create this mesh; you could expand it and create a more detailed mesh by adding more images.

03. Using the Mesh

You can now export the mesh from within the Photo Mesh node, or simplify it to pass it downstream and use the simplified version in PFTrack.

Export the Full Resolution Mesh

You can now export the full resolution mesh in the Mesh Export tab. The available export formats are FBX, Open Alembic, OBJ and PLY. Click the button to open a file browser and choose the export location and filename, then click Export Mesh to export.

Simplify the Mesh

Please note, this feature is available from PFTrack 2016.09.16.

To pass the mesh downstream in PFTrack, it needs to be simplified first. Switch back to the Simplification tab. The Number of triangles box holds the target triangle count for the mesh; you can see the current number of triangles displayed in the top left of the Cinema. Leave the value at its default of 100,000 and click Simplify.

Once mesh simplification is completed, you can turn Show simplified on and off to switch between viewing the simplified mesh and the full resolution mesh. Only the simplified mesh will be passed down the tracking tree in PFTrack.

Create a Texture for the Mesh

Create a Texture Extraction node and make sure it is connected to the Photo Mesh node. First, you need to generate a UV Map for the mesh. Select the Photo Mesh entry in the Objects table, then choose Custom from the UV Projection menu and click Generate. The Custom setting instructs the node to use the UV map generated during the simplification process in Photo Mesh.

You can then view and edit the UV map by clicking Edit UV. First you have to un-pack the individual UV charts by clicking the Un-Pack button. This will separate the custom UV map into non-overlapping regions. You can also change the resolution of the texture if desired, then close the window again.

Click Extract to start the texture extraction. The Best frame per triangle entry from the Extract From menu means that for each triangle UV group, PFTrack will use the best of the available still images to generate the texture. Current frame on the other hand would only use the current frame to texture the model. Once the texture extraction is completed, change the Render style in the Display section to Textured so you can see the textured model in the viewers.
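The Best frame per triangle strategy can be pictured as picking, for each triangle, whichever source image scores highest for it (for example, by how directly the camera faces the triangle). The sketch below is a hypothetical illustration of that selection, assuming a precomputed score per triangle per frame; it is not PFTrack's implementation:

```python
def best_frame_per_triangle(scores):
    """For each triangle, pick the index of the highest-scoring frame.

    scores[t][f] is the suitability of frame f for texturing triangle t,
    e.g. based on viewing angle and resolution.
    """
    return [max(range(len(frame_scores)), key=frame_scores.__getitem__)
            for frame_scores in scores]

# Two triangles, three frames: triangle 0 is best covered by frame 2,
# triangle 1 by frame 0.
scores = [[0.2, 0.5, 0.9],
          [0.8, 0.1, 0.4]]
print(best_frame_per_triangle(scores))  # → [2, 0]
```

Current frame, by contrast, would texture every triangle from the single frame currently displayed, regardless of how well it covers each triangle.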

Click the Help button to find out more about the Texture Extraction node or any other node in PFTrack.

Export the Mesh and Texture

Create an Export node and connect it to the Texture Extraction node. You can see the model listed in the Objects tab, and the texture in the Textures tab. Click Export Scene to export.
