Data Capture for Photogrammetry: The Spooky Asset Project
This Halloween we got into the swing of things: as the winter nights drew in, we felt like spreading some spook! The best way to do this was by using PFTrack’s comprehensive photogrammetry toolset to create some quality assets. The project was a good opportunity to experiment with different methods of data capture, since our chosen objects and their differing characteristics demanded an adaptive approach.
Borrowing from the biology department of a nearby school, we were able to source an anatomically accurate skull. This was a great subject for the Halloween project, not only because of the obvious connection between skulls and Halloween, but also because of its intricate details. The fine details present in the teeth and cheekbones would be a great test of the photogrammetry tools.
To create a sufficiently detailed data set, several factors must be considered. Primarily, uniformity in the relative distance between camera and object was important to give the tracking algorithm the best chance to accurately model the object. At the same time, we needed to recreate a moving camera. To this end, we repurposed the humble bar stool to act as a turntable and set up the Nikon D800 on a tripod. Images were taken at low, medium and high angles to cover the entirety of the model.
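The capture geometry described above can be sketched as a short planning script. The elevation angles, shots per ring, and radius below are illustrative stand-ins, not the values used on the shoot; the point is that every position sits on a sphere of constant radius, keeping the camera-to-object distance uniform:

```python
import math

def plan_turntable_views(elevations_deg=(15, 40, 65), shots_per_ring=24, distance=0.6):
    """Plan camera viewpoints for a turntable capture.

    Returns (x, y, z) camera positions around the subject, one ring of
    shots per elevation angle (low, medium, high). The fixed radius keeps
    the camera-to-object distance uniform across the whole data set.
    """
    views = []
    for elev in elevations_deg:
        phi = math.radians(elev)
        for i in range(shots_per_ring):
            theta = 2 * math.pi * i / shots_per_ring
            x = distance * math.cos(phi) * math.cos(theta)
            y = distance * math.cos(phi) * math.sin(theta)
            z = distance * math.sin(phi)
            views.append((x, y, z))
    return views

views = plan_turntable_views()
```

On a turntable the subject rotates instead of the camera, but the relative geometry is the same: each turntable increment corresponds to one position on a ring.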
We then followed the typical photogrammetry workflow: Camera Solve, Orient Scene, Photo Mesh, Texture Extraction and Export. It was necessary to mask out the skull’s background during the Camera Solve. Normally masking is done later, when building the depth maps and creating the model’s mesh, but because we were using the turntable and tripod it had to happen at this stage. The turntable was essentially feeding the tracking algorithm contradictory information: the background stayed still while the subject rotated, which disrupted the solve. With the background masked out, PFTrack was able to focus solely on the skull and treat its rotation as the movement of the camera.
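PFTrack’s masking is done with its own tools, but the idea behind separating subject from background in a locked-off turntable shot can be sketched in a few lines. This is a minimal illustration, not PFTrack’s implementation: with a static camera, any pixel that changes across the sequence belongs to the rotating subject, and any pixel that stays constant belongs to the background.

```python
def static_background_mask(frames, tol=10):
    """Illustrative sketch only (not PFTrack's method): given a list of
    equal-sized greyscale frames (lists of rows of ints) shot from a
    locked-off camera, mark as subject (True) every pixel whose value
    varies across the sequence by more than `tol`.
    """
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            values = [f[y][x] for f in frames]
            # pixel varies across frames -> part of the rotating subject
            if max(values) - min(values) > tol:
                mask[y][x] = True
    return mask
```

In practice this is exactly why the background had to be excluded: those constant pixels told the solver the camera was static, contradicting the apparent motion of the skull.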
To give the asset pack more versatility and as a practical demonstration of the Mesh Simplification tool, we created the assets at two different resolutions.
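PFTrack’s Mesh Simplification tool handles the decimation itself; purely to illustrate the idea of producing a lower-resolution asset from a high-resolution one, here is a sketch of one generic decimation scheme (vertex clustering), which is not necessarily the algorithm PFTrack uses:

```python
def simplify_by_clustering(vertices, triangles, cell=0.1):
    """Illustrative vertex-clustering decimation (not PFTrack's tool):
    vertices falling in the same grid cell are merged into one
    representative, and triangles that collapse as a result are dropped.
    A coarser `cell` size yields a lower-resolution mesh.
    """
    cell_of = lambda v: tuple(int(c // cell) for c in v)
    reps, remap = [], {}
    for v in vertices:
        key = cell_of(v)
        if key not in remap:
            remap[key] = len(reps)   # first vertex in a cell represents it
            reps.append(v)
    index = [remap[cell_of(v)] for v in vertices]
    # keep only triangles whose three corners remain distinct after merging
    tris = [t for t in (tuple(index[i] for i in tri) for tri in triangles)
            if len(set(t)) == 3]
    return reps, tris
```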
High resolution skull –
Low resolution skull –
The pumpkin threw up some interesting challenges, mainly because it is, in essence, an orange sphere with little to differentiate one side from the other. That said, our pumpkin still carried interesting surface detail: detail that would be laborious for a traditional artist to replicate authentically, and that would also aid the photogrammetry process.
By adjusting the pick threshold we were able to increase the number of trackable points, producing a dense point cloud and mesh. However, on closer examination of the texture-mapped model, a flaw in our capture method was revealed. Because the pumpkin was being photographed on the turntable, and because we were shooting outside, the lighting environment was rotating relative to the model, creating a type of ‘halo’ that was baked into the model’s texture. The skull had escaped this lighting problem thanks to its material: polyurethane is less reflective than the outer skin of a pumpkin, so it was resistant to the ‘halo’ phenomenon.
It was decided that the turntable and tripod had to be ditched; instead, we would physically move the camera around the subject. It was no longer necessary to mask out the pumpkin’s background during the Camera Solve, meaning that background information could be used during the process to create a more accurate track. Distance between camera and subject was not as important a factor in creating the pumpkin model because there was less fine detail to capture.
Also, by physically moving around the subject we avoided creating the ‘halo’ lighting effect because the light cast on the subject remained constant throughout the data set.
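The ‘halo’ reasoning above can be demonstrated with a few lines of Lambertian shading arithmetic. The surface normal, light direction, and frame angles are illustrative values: on a turntable the subject rotates under a fixed light, so in the subject’s own frame the light direction changes every shot and the same surface point is shaded differently in each photo, whereas with an orbiting camera both subject and light stay put.

```python
import math

def lambert(normal, light):
    """Simple Lambertian shading term: n . l, clamped to zero."""
    return max(sum(n * l for n, l in zip(normal, light)), 0.0)

def rotate_z(v, angle):
    """Rotate a 3D vector about the vertical (turntable) axis."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

normal = (1.0, 0.0, 0.0)   # one surface point on the subject
light = (1.0, 0.0, 0.0)    # fixed outdoor light, world space

# Turntable: the light direction in the subject's frame rotates each shot,
# so the same point's shading differs between photos -> baked-in 'halo'.
turntable = [lambert(normal, rotate_z(light, math.radians(a)))
             for a in range(0, 360, 90)]

# Orbiting camera: subject and light both static -> constant shading.
orbit = [lambert(normal, light) for _ in range(4)]
```

The `turntable` values swing between fully lit and fully shadowed for the same surface point, which is exactly the inconsistency that the texture-extraction stage bakes into the model.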
Our previous tests of the photogrammetry workflow had demonstrated that gravestones are excellent subjects. Follow the download link here to access our gravestone asset pack. The gravestones provide a rich palette of texture that would be difficult for a traditional artist to replicate, and they also seemed a good fit for the project thematically.
The gravestone data set was captured during one of the team’s morning commutes. Being caught ill-prepared for the situation gave us another opportunity to test the software in a way we had not initially intended. The results show the adaptability of PFTrack in managing data sets produced from various sources: in this project’s case, from a 36-megapixel camera to a smartphone.
The majority of our data was captured using the Nikon D800, but as we have touched upon briefly, we also used an iPhone. Overall, the Nikon is preferable, as far more detail can be picked up in the photogrammetry pipeline from the data set it produces. Its 36-megapixel sensor, coupled with a fixed-focal-length prime lens, maximises the camera’s performance and results in low-distortion, high-micro-contrast images: ideal for producing tracking data.
Not all PFTrack users have access to such a sophisticated piece of technology, but one could safely assume that all have access to a smartphone camera. As we have demonstrated with the gravestone asset, users can still generate great results with a smartphone; however, to generate the best possible models it is preferable to use a higher-quality camera, such as the one mentioned, to capture your data.
You can find the asset pack available to download here.