Hi!
I have a tracked shot that doesn't have much movement or depth in it. There are layers in the shot, but I'd count them all as middle ground, just at different distances. The solve it gives me is very stable and sticks to the points nicely. But when I look at the 3D information, I can see that the points are in the right spots but almost inverted in depth.
So things that are a little further away get a little closer, and things that are closer get a little further away. Is there a way to tell PFTrack to just invert the depth, or to tell it that these points are at distance x and those at distance y? Just so it understands that the pillars are in front and the door is behind them, and not the other way around.
Am I making sense?😅
Hi Jonas,

Yes, you can initialise some distances in the Trackers list in the Camera Solver node. If you know an approximate foreground distance and a background distance, just enter these along with some uncertainty values and re-solve. The solver will try to pick the camera motion that satisfies those distances. You can also use the Push/Pull tool to visually adjust the tracker distances in the viewer window. There's more information in the Camera Solver node documentation - just have a look at the section called "influencing the camera solve".

As for why this can happen: if you've not got much depth or parallax in your shot, there can often be multiple combinations of tracker distances and camera motion that all match your 2D tracker positions. In these cases, your 2D tracker points alone aren't enough to define a unique solution, so you have to provide some additional information to the solver to help it out.
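To make the ambiguity concrete, here's a minimal numpy sketch (not PFTrack code, and all the numbers - focal length, depths, motion values - are made-up assumptions). It uses the standard first-order small-motion model for points near the image centre, where the horizontal image displacement is roughly u = f * (t_x / Z - w_y) for a small camera translation t_x and pan w_y. With only two trackers and low parallax, an inverted depth assignment can reproduce the observed 2D displacements exactly, by compensating with a reversed translation and a slightly different pan:

```python
import numpy as np

f = 1000.0                      # focal length in pixels (assumed value)
Z_true = np.array([4.0, 8.0])   # "true" depths: pillar at 4 units, door at 8
t_x, w_y = 0.02, 0.001          # "true" camera translation and pan (small motion)

# "Observed" 2D displacements generated from the true configuration
u_obs = f * (t_x / Z_true - w_y)

# Now assume the depths the wrong way round (door in front of the pillar)
Z_wrong = np.array([8.0, 4.0])

# Solve the 2x2 linear system u = f*t_x/Z - f*w_y for a new (t_x, w_y)
A = np.column_stack([f / Z_wrong, -f * np.ones(2)])
t_fit, w_fit = np.linalg.solve(A, u_obs)

u_fit = f * (t_fit / Z_wrong - w_fit)
print("observed displacements:", u_obs)
print("inverted-depth fit:    ", u_fit)
print("max residual:", np.abs(u_obs - u_fit).max())  # ~0: both depth orderings fit
```

Running this prints a residual of essentially zero: the inverted depths match the 2D tracks just as well as the correct ones under this model. That's exactly why providing approximate distances (with uncertainties) breaks the tie - it tells the solver which of the equally valid combinations to prefer.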