#postshot #exportscripts
This is an experimental export script for exporting cameras and points from PFTrack 24.12.19 and later to Jawset Postshot (https://www.jawset.com/) for Gaussian Splat training.
Before using the script, please review the usage guidelines below to get the best results.
Download zip file
Shot Setup
The script can export movie or photogrammetry cameras, along with tracking points and point clouds. Download and unzip the file into your Documents/The Pixel Farm/PFTrack/exports folder and relaunch PFTrack. This will create a new export format in the Scene Export node called "Jawset Postshot (.json)".
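If you prefer to script the installation, a minimal Python sketch along these lines would unzip the download into the exports folder. The zip filename used here is hypothetical, and the Documents path may differ on your platform:

# Minimal sketch: unzip the export script into PFTrack's exports folder.
# The zip filename "postshot_export.zip" is hypothetical; use the file you downloaded.
import zipfile
from pathlib import Path

exports_dir = Path.home() / "Documents" / "The Pixel Farm" / "PFTrack" / "exports"
exports_dir.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile("postshot_export.zip") as zf:
    zf.extractall(exports_dir)

print("Installed to", exports_dir, "- relaunch PFTrack to see the new export format.")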
When setting up your cameras in PFTrack, it is important to ensure you are correcting for lens distortion.
Gaussian Splatting is initialised from your point dataset and trains a radiance field to match your image data, so the results you get will depend strongly on the quality of your input cameras and points. A sparse set of points may not initialise the training as well as a dense point cloud, and datasets with low parallax or coverage may not give the best results when viewed from angles other than your original cameras. Please refer to the Postshot user guide for capturing guidelines: https://www.jawset.com/docs/d/Postshot+User+Guide
Point density
You should make sure your shot has enough tracking points to initialise the Postshot training. If you've just used a few User Track points or a small number of Auto Track points, you will probably get better results by adding some more to your shot.
You can do this easily by placing an empty Auto Track node upstream from your Camera Solver before solving. Then, after you've solved your camera and are happy with the result, go back to your Auto Track node and generate more tracking points, increasing the Target Number to 500 or more. In your Camera Solver, select all your tracking points and click the Solve Trackers button to solve for their 3D positions whilst keeping the camera fixed.
Alternatively, you can use the Select Frames node to decimate your movie clip into a set of photos, then use the Photo Cloud node to create a dense point cloud, and attach both the camera and the dense point cloud to the export node as shown here:
Tree layout for adding a dense point cloud
The Select Frames node could also be used before exporting your movie camera to reduce the number of frames being loaded into Postshot if you have a very long image sequence.
Exporting from PFTrack
Select the "Jawset Postshot (.json)" export format and make sure to enable the "Undistorted clip" Distortion export option, setting the image format to either JPEG, TIFF or OpenEXR with suitable frame number padding.
We recommend using TIFF or OpenEXR as this will ensure invalid pixels around the boundary of your undistorted images are written with a zero in the alpha channel and ignored by Postshot. Alternatively, make sure your undistorted images are cropped to the original image size during the solve to reduce empty pixels as much as possible.
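If you want to confirm that the boundary pixels in an exported TIFF really carry zero alpha, a quick check with Pillow and NumPy might look like the sketch below. The filename is hypothetical, and the exact pixel layout depends on your export settings:

# Sketch: check that an exported undistorted TIFF has an alpha channel and
# that some pixels are marked invalid (alpha == 0). Filename is hypothetical.
from PIL import Image
import numpy as np

img = Image.open("undistorted.0001.tif")
arr = np.array(img)

if arr.ndim == 3 and arr.shape[2] == 4:
    zero_alpha = np.count_nonzero(arr[..., 3] == 0)
    print(f"{zero_alpha} pixels have zero alpha and should be ignored by Postshot")
else:
    print("No alpha channel found - Postshot will treat every pixel as valid")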
After exporting, you will find a .json file and a .ply file in the export folder containing your camera and point data respectively, along with your undistorted images in the clips folder.
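As a quick sanity check before importing, you could inspect the exported files from Python. The filenames below are hypothetical and the JSON schema is not assumed beyond being valid JSON; the PLY point count is read from the file header:

# Sketch: quick look at the exported camera (.json) and point (.ply) files.
# Filenames are hypothetical; adjust them to match your export.
import json

with open("export/cameras.json") as f:
    cameras = json.load(f)
print("Top-level JSON keys:", list(cameras) if isinstance(cameras, dict) else type(cameras))

# Read the PLY header to find the vertex (point) count.
with open("export/points.ply", "rb") as f:
    for raw in f:
        line = raw.decode("ascii", errors="ignore").strip()
        if line.startswith("element vertex"):
            print("Point count:", line.split()[-1])
        if line == "end_header":
            break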
Importing into Postshot
You can drag-and-drop the entire export folder directly into Postshot, but it is important to ensure no other files are present in the folder. macOS users in particular should remove any .DS_Store files created when opening the folder in Finder, as they will prevent the dataset from loading and produce an "invalid string position" error message.
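A small Python sketch like the one below can clear any .DS_Store files from the export folder before you drag it into Postshot. The folder path is hypothetical; point it at your actual export folder:

# Sketch: remove macOS .DS_Store files from the export folder (and subfolders)
# so Postshot does not fail with an "invalid string position" error.
from pathlib import Path

export_dir = Path("export")  # hypothetical path to your PFTrack export folder

for ds_store in export_dir.rglob(".DS_Store"):
    ds_store.unlink()
    print("Removed", ds_store)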
In Postshot, make sure to enable the "Treat Zero Alpha as Mask" option to ensure the boundary pixels in the undistorted images are ignored during training. Please refer to the Postshot user guide for all other settings.