In the winter of 2019 I was in Montreal, Canada, visiting friends, and happened to have my GoPro Fusion 360 camera with me, so I decided to walk around a parkette (small park) at the corner of Place Phillips and René-Lévesque Blvd W.

Final 3d model

The camera was set to record a JPG time-lapse at a 1 sec interval, and about 950 shots were taken. The camera has two fish-eye lenses, which translates to about 1900 images in total.

Sidenote: I considered capturing the images in DNG format, but the camera takes around 7-10 sec to save each DNG and I didn’t feel like waiting around.

The sensor resolution is 3000 x 3000 pixels, but only about 75% of that is usable for reconstruction: part of each frame is black, and the extreme edges of the fish-eye circle aren’t of much use for depth map creation.

Sidenote: I didn’t use the GoPro software to stitch the images together. I worked with the JPG images straight out of the camera.

Before doing any alignment steps, the images from each lens were grouped into two lens groups in RC.
The Prior Pose was also set to Unknown for all the images. This step is critical: the Fusion 360 has a GPS receiver, and the front-camera images get tagged with GPS data. While you might think this is useful, the accuracy is quite horrible and it only messes up the alignment. That GPS data needs to be stripped away, and that’s what setting the Prior Pose to Unknown does.
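Setting the Prior Pose to Unknown handles this inside the RC project, but another option would be to strip the tags from the files themselves before import. This is a rough stdlib-only Python sketch of that idea: it drops JPEG APP1 segments wholesale, which removes GPS along with all other EXIF metadata (including lens info), so it’s a blunt instrument, not what I actually did.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF, including GPS) segments from a JPEG byte stream."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            # Unexpected data; copy the rest verbatim and stop parsing.
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, copy it all
            out += jpeg_bytes[i:]
            break
        seg_len = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i:i + 2 + seg_len]
        if marker != 0xE1:  # drop APP1 (EXIF/GPS); keep every other segment
            out += segment
        i += 2 + seg_len
    return bytes(out)
```

You would run this over each JPG and write the result back out before creating the RC project.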

The alignment process took a few passes. After trials with different “Distortion Models” in RC’s alignment settings, I found that an initial alignment with Brown 3 aligned about 25% of the images, and a subsequent alignment using Brown 4 with tangential2 brought that up to 50% of all the images. This was the best alignment I could achieve; running further alignments only added a few more images to the main component.

After the alignment is done, it’s business as usual. The 3d reconstruction resulted in a 110 million poly model, which was then decimated to the final 3.4 million poly model.

Then unwrap to the desired level, create textures and export the mesh.

The only extra step I took was color correcting the exported textures, since the Fusion 360 images are captured with a very flat color profile (as can be seen in the image above). I brought the exported texture images into a photo editing app, increased Saturation by a lot, bumped up Contrast, and re-saved the texture images.
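The same adjustment can be scripted if you have many texture files. Here’s a rough per-pixel sketch using Python’s stdlib colorsys; the gain values are placeholders for illustration, not the exact amounts used on the real textures.

```python
import colorsys

def punch_up(rgb, sat_gain=1.6, contrast_gain=1.2):
    """Boost saturation and contrast of one RGB pixel (0-255 per channel).

    sat_gain and contrast_gain are arbitrary illustration values,
    not the exact amounts applied to the real texture images.
    """
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    s = min(1.0, s * sat_gain)                                # saturation boost
    l = min(1.0, max(0.0, 0.5 + (l - 0.5) * contrast_gain))  # contrast about mid-grey
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))
```

In practice you’d map this over every pixel of each texture (or do the equivalent in an image editor, as I did); the point is just that a flat profile needs both the saturation and the contrast pushed up.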