Good evening all,
I am a collision investigator and am trying to determine the proper method to track a vehicle's dashcam when LiDAR survey data of the roadway and environment is available.
Importantly, I am trying to get a good track for the camera, and then set the coordinate system and scale based on the 3D coordinates of the LiDAR survey. That way, I know the camera position in real-world coordinates, so the vehicle can be placed on the roadway and speeds can be calculated based on frame timing.
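As a sketch of that speed calculation: once the camera is solved in survey coordinates, speed reduces to distance travelled between frames divided by the frame interval. The function name and numbers below are illustrative only, and PFTrack does not expose this as code:

```python
import math

def speed_from_track(p1, p2, fps):
    """Estimate vehicle speed from two solved camera positions.

    p1, p2: (x, y, z) camera positions in survey coordinates (metres),
            assumed to be one frame apart.
    fps:    frame rate of the video.
    Returns speed in metres per second.
    """
    dist = math.dist(p1, p2)   # straight-line distance travelled
    dt = 1.0 / fps             # time between consecutive frames
    return dist / dt

# Example: camera moves 0.5 m between consecutive frames of 30 fps footage
v = speed_from_track((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 30.0)
# v = 15.0 m/s (i.e. 54 km/h)
```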
My current node structure goes:
1. Clip Input (where the frames from the video are imported as JPGs).
2. User Track. Here I carefully created 60 trackers that span substantial segments of the 4.6 video. Coverage looks good, with at least 10 trackers per frame.
3. Survey Solver. Here I imported the LiDAR and attached the 3D coordinates from the survey to 14 of the trackers, in hopes of defining the coordinate system and scale.
Is this the proper workflow?
When I solve with the Survey Solver, it does not properly track the camera. The residuals on the trackers are huge, with the largest around 450 and the average around 100. I accounted for distortion with the straight-line method, but the video here was shot with a GoPro in linear mode, so I'm not expecting enough distortion to blow up the project.
Thanks for any help!
Lou
Simon,
I just wanted to report back and let you know that hiding the trackers appropriately was indeed the issue, and that solved things quickly.
Thanks so much!
Lou
Simon,
Thank you for the prompt and detailed reply!
I was not hiding my trackers… hopefully that’s it! I will also focus on setting up the camera better, and try the Estimate Focal node.
I have the camera in-hand and a calibration sheet from PhotoModeler, so I ought to try a distortion correction routine as well. Is there a video or article detailing that process?
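For reference, a PhotoModeler-style calibration sheet typically supplies radial coefficients (k1, k2, k3) for a Brown-Conrady model. A minimal sketch of applying and inverting that radial term on normalized image coordinates (illustrative only; this is not PFTrack's internal routine, and tangential terms are omitted):

```python
def brown_distort(x, y, k1, k2, k3=0.0):
    """Forward Brown-Conrady radial distortion on normalized image coords."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return x * factor, y * factor

def brown_undistort(xd, yd, k1, k2, k3=0.0, iters=10):
    """Invert the radial model by simple fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```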
I’m confident in my scan data attachments, as the data was generated by a Leica RTC360, is dense, and I’ve QC’d the attachments a few times.
Thanks also for the note regarding the Enterprise solution.
I’ll attempt your suggestions and report back. Thanks again,
Lou
Hello Lou,
Your workflow looks correct from your description, so the reason your solve isn't working is probably related to the setup of your camera, your 2D tracking points, or how you are attaching them to your LiDAR scan.
First of all, you should check your camera setup in the Clip Input node. If you've entered a known focal length, make sure that the sensor size is also correct for your camera, as both these pieces of information are necessary for PFTrack to work out the field of view. You can see a readout of the horizontal and vertical field of view if you have entered these values. If you are unsure, you could either keep the focal length as unknown, or try using the Estimate Focal node to estimate the focal length from a pair of well-defined vanishing points in one frame of your clip.
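For reference, the field-of-view readout follows directly from the focal length and sensor size under a pinhole camera model. A minimal sketch (sensor dimensions below are illustrative):

```python
import math

def field_of_view(focal_mm, sensor_w_mm, sensor_h_mm):
    """Horizontal and vertical field of view (degrees) for a pinhole model."""
    h = 2.0 * math.degrees(math.atan(sensor_w_mm / (2.0 * focal_mm)))
    v = 2.0 * math.degrees(math.atan(sensor_h_mm / (2.0 * focal_mm)))
    return h, v

# Example: 3 mm focal length on a small action-camera sensor (~6.17 x 4.55 mm)
h_fov, v_fov = field_of_view(3.0, 6.17, 4.55)
```

This is why an incorrect sensor size silently corrupts the solve even when the focal length itself is right: the solver works from the resulting field of view, not the focal length alone.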
Next, you should take a look at your 2D trackers. Whilst it's important to ensure each tracker is positioned correctly in the frames where its feature is visible, it's just as important to make sure every tracker is hidden in the frames where its feature is not visible. You can quickly check this in the User Track node by opening the Coverage panel and making sure there are no red indicators for any of your trackers in any frames. Red indicators show frames where the tracker has been neither tracked nor manually positioned. If you see any of these, either complete the track or hide the tracker in those frames using the Hide buttons.
After that, the next thing to check would be the 3D LiDAR points you've attached your trackers to in the Survey Solver node. Make sure your trackers are attached to the correct point in your scan, especially if you have many similar-looking areas in the scene, such as repeated identical road markings. If your LiDAR scan is not dense enough to contain the precise point you need for your tracker, you should also set the Uncertainty value to compensate for the difference.
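As an illustration of choosing an Uncertainty value: the offset between where the tracked feature really sits and the nearest available scan point gives a reasonable starting figure. This is a sketch of the idea only, not a PFTrack API:

```python
import math

def attach_with_uncertainty(target, scan_points):
    """Find the scan point closest to the ideal tracker location and report
    the offset, which suggests an Uncertainty value for a sparse scan.

    target:      (x, y, z) where the tracked feature really sits
    scan_points: iterable of (x, y, z) LiDAR points
    Returns (nearest_point, distance).
    """
    nearest = min(scan_points, key=lambda p: math.dist(p, target))
    return nearest, math.dist(nearest, target)

# Example: sparse scan where the nearest point is 5 cm from the true feature
pt, d = attach_with_uncertainty(
    (1.0, 2.0, 0.0),
    [(0.0, 0.0, 0.0), (1.0, 2.05, 0.0), (3.0, 1.0, 0.0)])
# d = 0.05, so an Uncertainty of at least ~5 cm would be appropriate
```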
Finally, you mentioned that you've attached 14 out of your 60 trackers to your LiDAR. The accuracy of your solve is going to depend on how many of these 14 trackers are present in each frame of your clip. Whilst PFTrack only needs 3 surveyed trackers to solve for the camera position and rotation (assuming your focal length is known), more are recommended where possible, and they should be well distributed in your 3D space.
If you've got some frames where all your surveyed trackers are lying along a straight line (or a flat plane if your focal length is unknown), this can cause problems. An example of this would be 3 trackers attached to points on road markings in a straight line. This is known as a degenerate configuration for the solver, and will prevent PFTrack from accurately calculating the position and orientation of your camera since multiple solutions exist for your particular configuration of points. In this case you'll need to attach additional tracking points to other locations in your scan that don't lie on the line to remove the ambiguity.
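A quick way to sanity-check a set of surveyed points for this kind of degeneracy is to test whether they all lie along a single line. A minimal sketch (the tolerance here is absolute, so scale it to your survey units):

```python
import math

def are_collinear(points, tol=1e-6):
    """Return True if all 3D points lie (within tol) on a single line,
    i.e. a degenerate configuration for a survey solve."""
    if len(points) < 3:
        return True
    p0 = points[0]
    # Reference direction: from p0 to the farthest other point
    ref = max(points[1:], key=lambda p: math.dist(p, p0))
    d = tuple(r - a for r, a in zip(ref, p0))
    for p in points[1:]:
        v = tuple(q - a for q, a in zip(p, p0))
        # Cross product of d and v is ~zero iff p lies on the line through p0
        cross = (d[1] * v[2] - d[2] * v[1],
                 d[2] * v[0] - d[0] * v[2],
                 d[0] * v[1] - d[1] * v[0])
        if math.hypot(*cross) > tol:
            return False
    return True

# Three points along one road marking: degenerate
print(are_collinear([(0, 0, 0), (1, 0, 0), (2, 0, 0)]))              # True
# Adding an off-line point (e.g. a sign post) removes the ambiguity
print(are_collinear([(0, 0, 0), (1, 0, 0), (2, 0, 0), (1, 3, 1)]))   # False
```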
You can use the Preview Initial Solution option to see what your solve looks like at each frame before you start. Enabling this option will make PFTrack try to position the camera in the current frame and update the Cinema and viewer windows accordingly, and you can change frames to see how this is working elsewhere in your clip. If you find a frame where the camera position looks wrong (or the field of view is way off), see if you can update your trackers in that frame to improve the solution.
If you'd like us to look at your particular dataset, you can post a download link to it here, but we'd need to see your footage and LiDAR files to make any specific suggestions.
Alternatively, your company could also consider purchasing an Enterprise PFTrack license as, amongst other benefits, this will allow you to open private support tickets and securely send your files directly to our team for assistance with specific shots.