PFTrack Documentation | Node Reference
The Stereo Survey Solver node can be used to estimate stereo camera motion from a set of feature tracks, generated by a User Track node, where each feature has a specific 3D survey position.
The Stereo Survey Solver node can have multiple inputs and outputs. The first two inputs must correspond to the left and right-eye clips. Any additional inputs can provide an existing solved camera that can be used to generate survey coordinates if they are not available. For example, the third input could contain another moving camera viewing the same scene that has been solved with the Camera Solver node. In this case, the Stereo Survey Solver node can be used to track the stereo camera into the same world space by generating appropriate survey coordinates for each tracker.
Alternatively, the solved camera in the third input could be generated using a Photo Survey node and many reference frames of the set. In this case, the Photo Survey node would estimate the position of each reference frame and construct a 3D point cloud for the set. These camera positions can then be used by the Stereo Survey Solver node to generate survey coordinates for trackers in the same world space, allowing the stereo camera to be tracked.
The Stereo Survey Solver node can also be used to generate an ASCII survey data file containing the survey coordinates for individual trackers.
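The exact layout of the exported file is not documented here, but a typical ASCII survey file pairs each tracker name with its X/Y/Z survey coordinates. The following sketch assumes a simple whitespace-delimited layout of that kind (one tracker per line); it is an illustrative format, not necessarily the one PFTrack writes.

```python
# Hypothetical ASCII survey data layout: one tracker per line,
# "name x y z" with whitespace-delimited fields. This format is an
# assumption for illustration, not PFTrack's documented export format.

def write_survey(path, trackers):
    """trackers: dict mapping tracker name -> (x, y, z) survey position."""
    with open(path, "w") as f:
        for name, (x, y, z) in trackers.items():
            f.write(f"{name} {x:.6f} {y:.6f} {z:.6f}\n")

def read_survey(path):
    """Parse the same whitespace-delimited layout back into a dict,
    skipping any line that does not have exactly four fields."""
    trackers = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 4:
                name, x, y, z = parts
                trackers[name] = (float(x), float(y), float(z))
    return trackers
```

Reading the file back with the same field order makes it easy to move survey coordinates between projects or into other tools.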
Many of the controls and parameters in the Stereo Survey Solver node are described in the documentation for the Survey Solver node, so please refer to that section for further details.
The additional features specific to the Stereo Survey Solver node are as follows:
Interocular distance: The distance between the left and right-eye cameras. Interocular distance can be set to Known, Unknown or Initialised. An initialised distance will be limited to lie between the minimum and maximum values of the Interocular range.
Interocular range: The minimum and maximum allowable interocular distances. Enabling Variable will allow the interocular distance to vary throughout the clip.
Vertical offset: The fixed vertical offset between the centre of projection of the left and right-eye cameras. The vertical offset corresponds to a distance along the Y (up) axis of the left-eye camera.
Depth offset: The fixed depth offset between the centre of projection of the left and right-eye cameras. The depth offset corresponds to a distance along the Z (forward) axis of the left-eye camera.
Pitch offset: The fixed rotation angle between the forward axes of the left and right-eye cameras. This corresponds to a rotation around the horizontal axis of the left-eye camera.
Roll offset: The fixed rotation angle between the up axes of the left and right-eye cameras. This corresponds to a rotation around the forward axis of the left-eye camera.
Convergence distance: The distance from the cameras at which their forward axes converge. Convergence can be set to Known, Unknown, Initialised or Parallel. An initialised distance will be limited to lie between the minimum and maximum values of the Convergence range. Parallel convergence corresponds to the cameras having parallel forward axes that never converge.
Convergence range: The minimum and maximum allowable convergence distances. Enabling Variable will allow the convergence distance to vary throughout the clip.
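The rig parameters above can be pictured with a simple geometric model: the right-eye camera is displaced from the left-eye camera by the interocular distance along X, the vertical and depth offsets along Y and Z, and each camera is toed in by an angle determined by the convergence distance. The sketch below is an illustrative model of that geometry, not PFTrack's actual solver code.

```python
import math

def convergence_angle(interocular, convergence_distance):
    """Toe-in angle (radians) for each camera so that the two forward
    axes cross at the convergence distance. Passing None models the
    'Parallel' setting, where the axes never converge."""
    if convergence_distance is None:
        return 0.0
    # Each camera sits half the interocular distance from the rig
    # centre, so its axis must rotate by atan(half_io / distance).
    return math.atan2(interocular / 2.0, convergence_distance)

def right_eye_offset(interocular, vertical_offset=0.0, depth_offset=0.0):
    """Translation of the right-eye centre of projection relative to
    the left-eye camera: X (interocular), Y (up), Z (forward)."""
    return (interocular, vertical_offset, depth_offset)
```

For example, a rig with a 6.5 cm interocular distance converging at 2 m toes each camera in by atan(0.0325 / 2.0), a little under one degree.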
Keyboard shortcuts can be customised in the Preferences.
Set Initial Frame | Ctrl+A |
Solve All | Shift+S |
Refine All | Shift+R |
All/None Trackers | Shift+L |
Show Survey | Shift+H |
Activate | Shift+A |
Deactivate | Shift+D |
Enable | Shift+B |
Disable | Shift+N |
Set Position | Shift+W |
Show Ground | Ctrl+G |
Show Horizon | Ctrl+H |
Show Geometry | Ctrl+E |
Show Names | Ctrl+N |
Show Info | Ctrl+I |
Move Pivot | Shift+P |
Marquee | Shift+M |
Centre View | Shift+C |
All Errors | Shift+E |
Fit | Shift+F |
Attach | Shift+T |
Show LIDAR | Ctrl+L |
Show LIDAR 3D | Ctrl+K |
Next Clip | C |
Fly mode | Shift+G |
Translate mode | Shift+H |
Rotate mode | Shift+J |
Scale mode | Shift+K |