2.2 Creating a Triangular Mesh from Still Images


This tutorial is part of Level 2. Extended Matchmoving in PFTrack (overview) of The Pixel Farm’s Training Course for PFTrack.

In the Extended Matchmoving in PFTrack class we have used PFTrack’s photogrammetry tools to help us solve multiple camera motions into a single scene. These tools can also be used to create textured triangular meshes. This tutorial will walk you through the necessary steps to create a mesh from the scene created during the class.


01. The Photo Mesh Node

02. Using Masks

– Using a Fuzzy Selection Mask

03. Generate Depth Maps

– Specify the Scene Bounding Box

– Create the Depth Maps

04. Build the Mesh

05. Using the Mesh

– Export the Full Resolution Mesh

– Simplify the Mesh

06. Beyond Photogrammetry

07. Conclusion

Tutorial Footage

To follow this tutorial you will need to download and use the footage below.

Footage: PFTClocktower.zip


01. The Photo Mesh Node

You can create dense triangular meshes from one or more solved cameras or Photo Survey point clouds in the Photo Mesh node. Note that if you are using more than one input, the cameras must share the same coordinate system, as we have done in the live session.

Create a Photo Mesh node from the Photogrammetry group and connect it to the Orient Scene node below the Photo Survey node.

02. Using Masks

Many of the photos used in the live session contain large areas of blue sky. In cases like this, it is often helpful to mask out the sky to restrict mesh creation to the foreground (a green-screen shot would be a similar case). Fuzzy Selection masks provide a quick way of creating masks for the sky, as they select areas of similar colour. In the Photo Mesh node, click the Mask button to open the Mask panel.

This section will now walk you through the necessary steps to create and adjust a Fuzzy Selection mask. Click the Help button whilst still in the Mask panel to open the mask documentation for more information on masks in PFTrack.

Using a Fuzzy Selection Mask

Click the Fuzzy Selection button to create the mask. Then move the centre point into the area of the sky that you want to mask out. You can see how pixels of similar colour to the centre point are automatically included in the mask.

The Colour Falloff value controls how similar pixel colours must be to the centre point to be included in the mask. Adjust the vertical Colour Falloff handle until the sky on the right is completely covered. The exact Colour Falloff value needed depends on where you have placed the centre point.

In most images, areas of the sky are separated by the clocktower. You could add a second Fuzzy Selection mask, or create a second selection point by holding the Shift key and clicking with the left mouse button.
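Conceptually, a fuzzy selection is a colour-distance threshold: every pixel whose colour is close enough to a seed point's colour is included, and multiple seed points combine as a union. The sketch below illustrates that idea only; it is not PFTrack's actual implementation, and the image, seed positions, and falloff value are made up.

```python
import numpy as np

def fuzzy_mask(image, seeds, falloff):
    """Boolean mask of pixels within `falloff` (Euclidean RGB distance)
    of any seed pixel's colour. A rough stand-in for a Fuzzy Selection,
    where Colour Falloff controls how similar colours must be to the
    centre point."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for (row, col) in seeds:
        seed_colour = image[row, col].astype(float)
        dist = np.linalg.norm(image.astype(float) - seed_colour, axis=-1)
        mask |= dist <= falloff   # union of all seed selections
    return mask

# Toy image: left half "sky" blue, right half grey "building".
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (80, 140, 230)   # sky colour
img[:, 2:] = (120, 120, 120)  # building colour
sky = fuzzy_mask(img, seeds=[(0, 0)], falloff=30.0)
print(sky[:, :2].all(), sky[:, 2:].any())  # True False
```

Raising the falloff widens the range of colours pulled into the mask, which is why the value you need depends on where the centre point sits in the sky gradient.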

When creating your masks, pay attention to the keyframes and the in and out frames of each mask, so as not to accidentally mask out areas of the image you would like included in your model. A new keyframe is created for every frame on which you change a mask property.

A sample selection of possible masks for this image collection is shown in the screenshot below. The actual masks you create may vary.

Once you have made sure all parts of the sky that are visible in the photos have been masked out, close the Mask panel by clicking the Mask button again.

03. Generate Depth Maps

Creating a mesh in the Photo Mesh node consists of two steps. First, you generate a depth map for each frame; the geometry is then built from these depth maps in a second step. Before creating the depth maps, you can adjust the scene bounding box to specify the area of interest in the scene.

Specify the Scene Bounding Box

Select Edit to start editing the bounding box.

It often helps to split the view into 4 windows to get a better overview of your scene. The white edges define the bounding box. Click and drag with the left mouse button to adjust the nearest, highlighted face of the box. Hold the Ctrl key (Windows/Linux), or Command key (Mac), to adjust the face opposite the nearest.

Note that for the best results, the bounding box should be roughly aligned with the major axes of the scene. For this reason, it may be necessary to use an Orient Scene node before the Photo Mesh node to adjust the overall scene orientation, ensuring the vertical Y axis is pointing upwards, and either the X or Z axis is aligned in the major horizontal direction. For this example, this has been done during the live training.

Edit your bounding box so that it fits the church structure and doesn’t include much else. The screenshot below shows an example bounding box for this scene.

When you’re done, de-select Edit.
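The bounding box simply restricts reconstruction to points inside an axis-aligned volume, which is why a box aligned with the scene's major axes can fit the structure tightly. As a rough illustration (not PFTrack code, with entirely hypothetical coordinates), filtering a point cloud by such a box looks like this:

```python
def inside_box(point, box_min, box_max):
    """True if a 3D point lies within an axis-aligned bounding box.
    Coordinates below are made up for illustration."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

# Hypothetical bounds roughly enclosing a tower-like structure.
box_min, box_max = (-5.0, 0.0, -5.0), (5.0, 30.0, 5.0)
points = [(0.0, 12.0, 1.0),   # on the structure
          (40.0, 2.0, 0.0)]   # distant background clutter
kept = [p for p in points if inside_box(p, box_min, box_max)]
print(len(kept))  # 1
```

A tight box excludes background clutter like the second point above, saving computation and keeping stray geometry out of the result.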

Create the Depth Maps

Next, click Create to create the depth maps for each image of the image collection. As the depth maps are created, you will see them displayed in the perspective and orthographic windows as well as listed in the Depth Maps table.

Creating depth maps is computationally intensive. Even though the algorithms take advantage of all available GPU and CPU resources, it can nevertheless be a time-consuming process.

Once the depth maps are completed, you can review them in more detail in the perspective and orthographic views. As you step through the images, you can see the vertices of the current depth map in colour.
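At its core, multi-view depth estimation relies on triangulation between camera positions: a point's depth is inversely proportional to how far it shifts between views. The simplified rectified-stereo form of that relationship is Z = f·B/d, sketched below with made-up focal length and baseline values (this is the general principle, not PFTrack's specific algorithm):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified stereo relation Z = f * B / d. A larger disparity
    (pixel shift between views) means the point is closer to the
    cameras. All values here are illustrative."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 1200 px focal length, 0.5 m camera baseline.
print(depth_from_disparity(1200.0, 0.5, 20.0))  # 30.0 (metres)
print(depth_from_disparity(1200.0, 0.5, 60.0))  # 10.0 (metres)
```

This also hints at why well-separated camera positions help: a longer baseline produces larger, more measurable disparities for the same depth.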

04. Build the Mesh

Before building the mesh from the depth maps, you have the option to adjust a separate bounding box for the mesh (displayed in purple) by clicking the Edit button in the Mesh section. This can be useful if you only want to create the mesh from a localised part of the depth maps, but it shouldn’t be necessary in this example.

Click the Create button to start building the mesh.

Once the mesh has been completed, you can take a closer look in the Cinema and the 3D views. The top left corner of the Cinema displays further information about the mesh, such as the number of vertices and triangles.

Turn on shading and turn off vertex colours to better see the detail in the mesh.

05. Using the Mesh

You can export the full resolution mesh from within the Photo Mesh node, or simplify it to pass it downstream and use the simplified version in your tracking tree.

Export the Full Resolution Mesh

The full resolution mesh, with its often millions of vertices and triangles, is too large to be passed downstream in the tracking tree. Instead, you can export it directly from within the Photo Mesh node. The available export formats are FBX, Open Alembic, OBJ and PLY. Click the button to open a file browser to choose the export location and filename, then click Export Mesh to export.

Simplify the Mesh

If you require a lower resolution mesh, or want to pass the mesh downstream in the tracking tree, you can simplify it in the Simplification tab.

During simplification, you can also generate a texture, as well as a normal map, occlusion map and displacement map, which will help you retain as much apparent detail in the simplified mesh as possible. These options are covered in detail in this article.

The simplified mesh will be referenced by the name you give it in the Mesh name field. Choose the target number of triangles and the resolution for your texture map. Then select the options for the Colour map, Normal map, Occlusion map and Displacement map. Finally click Simplify to start the simplification process.
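The target triangle count determines how aggressively the mesh is decimated. With illustrative numbers only (not figures from the tutorial footage), reducing a 4-million-triangle mesh to 100,000 triangles keeps just 2.5% of the faces, which is why the texture, normal and displacement maps matter for preserving apparent detail:

```python
def reduction_ratio(original_tris, target_tris):
    """Fraction of triangles retained after simplification.
    Both counts are hypothetical examples."""
    return target_tris / original_tris

ratio = reduction_ratio(4_000_000, 100_000)
print(f"{ratio:.1%}")  # 2.5%
```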

Once completed, you can toggle Show simplified to display the original or simplified mesh in the viewports.

The simplified mesh can now also be passed downstream in your tracking tree and included in the exported scene with all the cameras and object motion.

06. Beyond Photogrammetry

While this tutorial focuses on generating the mesh from a Photo Survey point cloud, the Photo Mesh node is not limited to just photogrammetry. Meshes can be created from any solved camera in PFTrack, and a single mesh can also be created from multiple cameras that share a coordinate system, like the ones solved in the Level 2. Extended Matchmoving in PFTrack class.

When dealing with moving cameras instead of an image collection, it is often not necessary to create a depth map for every single frame. In these cases, the Path Sampling parameter can be used to reduce the workload. At its default value of 1, a depth map is created for every frame. If you increase the value to 2, for example, a depth map is created only for every other frame; a value of 5 would create a depth map for every fifth frame, and so on.
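The effect of Path Sampling is simply a stride over the frame range. This short sketch (the parameter name is borrowed from the node; the frame range is hypothetical) shows which frames receive depth maps:

```python
def sampled_frames(first, last, path_sampling=1):
    """Frames that get a depth map for a given Path Sampling value:
    1 uses every frame, 5 uses every fifth frame, and so on."""
    return list(range(first, last + 1, path_sampling))

print(sampled_frames(1, 10, 1))  # every frame from 1 to 10
print(sampled_frames(1, 10, 5))  # [1, 6]
```

For a slow-moving camera, neighbouring frames see nearly the same geometry, so skipping frames this way trades little coverage for a large reduction in computation.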

07. Conclusion

Building on the scene created in the Level 2. Extended Matchmoving in PFTrack class, you have learned how to extend it to generate a textured triangular mesh.
