PFTrack Support Community


Using GoPro metadata as IMU for a different camera

I've been playing with the idea of using the gyro data from action cameras for a long time. (Scattered across the web there is plenty of evidence of people doing it, but nothing about the "how".) The idea is simple: attach the GoPro to the main camera rig and record at the same time as the footage (perfect sync doesn't matter, since the IMU records at 240 samples per second), then sync and export the IMU data as a .csv file, which is how I've seen other people deliver motion data. But how do we use that .csv file inside PFTrack? How do people do this the proper way?

Say we have a Lumix GH5S with the GoPro attached to the hot shoe (or an Insta360, or a DJI Osmo), and we have both the footage and the .csv file. How can we use that within PFTrack? Having to embed that data externally through EXR seems unnecessarily complicated when we could just input the .csv directly.

We have even tried using a 360 camera to track a "perfect" point in space as a reference for the hero cam... but again, there is no way to make that data useful, since there is no way to connect it to the clip input, the camera solver, or the auto track in the first place!


Hello Angel, welcome to the community.


There are essentially two separate problems to solve here:


  1. Loading the metadata from a file format and device that PFTrack doesn’t natively support.

  2. Applying that data to another camera, in this case the main (hero) camera, which is physically offset from the IMU.

The first step is to write a small Python script to parse the CSV and translate the data into something PFTrack understands. You’ll need to inspect what the CSV actually contains and determine how its information maps to PFTrack’s coordinate system. Pitch, yaw, roll data comes in many different conventions, with different axis orders, sign conventions, and units, so you’ll have to work out exactly how the IMU defines its rotations.
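
As a rough illustration, here is a minimal sketch of that parsing step in plain Python. The column names (time, pitch, yaw, roll) and the axis remapping are assumptions for illustration; you would need to match them to whatever your camera's exporter actually writes, and to PFTrack's coordinate conventions.

```python
import csv
import math

def load_imu_csv(path):
    """Parse an IMU export into a common convention.

    Assumes columns named 'time', 'pitch', 'yaw', 'roll' in degrees;
    real exports vary, so check the header row of your own file.
    """
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["time"])
            # Hypothetical remap: the IMU's axis order and sign
            # conventions rarely match the tracker's, so each angle
            # may need swapping or negating.
            pitch = math.radians(float(row["pitch"]))
            yaw   = -math.radians(float(row["yaw"]))   # example sign flip
            roll  = math.radians(float(row["roll"]))
            samples.append((t, pitch, yaw, roll))
    return samples
```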


Frame synchronisation is another important consideration: you need to make sure each line in the CSV corresponds to the correct frame of the hero shot. This can get tricky with streaming formats like MP4, which are common on these action cameras and can have variable frame timing.
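
For instance, assuming a constant frame rate and a known time offset between the two recordings (both of which are exactly the things you need to verify per shot), you could resample the IMU stream onto frame times with linear interpolation:

```python
def resample_to_frames(samples, fps, frame_count, offset=0.0):
    """Linearly interpolate IMU samples onto frame timestamps.

    samples: list of (time, pitch, yaw, roll) sorted by time
    offset:  seconds between IMU time zero and frame 0; this must be
             measured per recording, e.g. with a clap or a flash
    """
    per_frame = []
    j = 0
    for frame in range(frame_count):
        t = offset + frame / fps
        # Advance to the pair of samples straddling time t.
        while j + 1 < len(samples) and samples[j + 1][0] < t:
            j += 1
        t0, *a = samples[j]
        t1, *b = samples[min(j + 1, len(samples) - 1)]
        w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        per_frame.append(tuple(x + w * (y - x) for x, y in zip(a, b)))
    return per_frame
```

Note that naive linear interpolation of Euler angles misbehaves near the ±180° wrap; interpolating quaternions is safer if the rig rotates quickly.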


The benefit of EXR is that the metadata is embedded directly in each frame, exactly where it's needed.
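
If you do go the EXR route, the standard OpenEXR Python bindings can copy a frame and attach custom string attributes to its header. Below is a minimal sketch using the classic OpenEXR module; the attribute name "imuRotation" is made up for illustration, and you would need to check PFTrack's documentation for the attribute names it actually reads.

```python
import OpenEXR

def embed_rotation(src_path, dst_path, pitch, yaw, roll):
    """Copy an EXR frame and stash a per-frame rotation in its header."""
    src = OpenEXR.InputFile(src_path)
    header = src.header()
    # Read every channel in its stored pixel type so it round-trips.
    names = list(header["channels"].keys())
    data = {n: src.channel(n, header["channels"][n].type) for n in names}
    # Custom string attribute; the key is hypothetical, not a PFTrack name.
    header["imuRotation"] = f"{pitch} {yaw} {roll}".encode()
    dst = OpenEXR.OutputFile(dst_path, header)
    dst.writePixels(data)
    dst.close()
    src.close()
```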


Once the data is successfully imported, you could use it as a guide or hint to help solve the hero camera. In practice, IMU or gyro metadata often isn’t precise enough to be used directly for compositing, but it can provide a useful starting point for the camera solve.


You also mentioned an alternative workflow involving a “perfect point.” I’m not entirely sure what’s meant by that term, but it sounds similar to using a survey point. You could potentially track your 360° footage, export the resulting point cloud, and then re-import that data into PFTrack as survey points. From there, you could use the Survey Solver node to help align and solve the hero camera.
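
As a rough sketch of that last step, converting a tracked point cloud into a plain text point list for survey import might look like the following. The input here is assumed to be an ASCII PLY file, and the "name x y z" output layout is a guess at a generic survey-point format; confirm the exact layout PFTrack's survey import expects before relying on it.

```python
def ply_to_survey_txt(ply_path, out_path):
    """Convert an ASCII PLY point cloud to a 'name x y z' text list.

    The output layout is illustrative only; check what PFTrack's
    survey import actually expects.
    """
    with open(ply_path) as f:
        lines = f.read().splitlines()
    # Skip the PLY header (everything up to and including 'end_header').
    start = lines.index("end_header") + 1
    with open(out_path, "w") as out:
        for i, line in enumerate(lines[start:]):
            x, y, z = line.split()[:3]
            out.write(f"point_{i:04d} {x} {y} {z}\n")
```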
