Photogrammetry - Detailed 3D Scans from Photographs

Over in the Projects forum I shared how I used a technique called photogrammetry to create scans of an object from photographs.
I figured I'd make a new thread to show a bit more of the photogrammetry process for anyone interested.
Here's a screenshot of the resulting STL from a pair of Ray-Bans I scanned using photogrammetry.

Photogrammetry software doesn't work on glossy or transparent objects. It also needs lots of small details for the software to reference.
I've found it works best when I spray the parts with a washable chalk spray and then use a toothbrush to flick on splatters.

You should be able to walk around your object to take photos; however, I got the best results when I shot the object on a green screen with a turntable.
Photogrammetry software expects to see the background move with each change in angle, so the creases in the green screen will confuse the software.
To avoid that, I used a Photoshop batch command to delete the green screen from my photos.
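The green-screen removal step doesn't have to be done in Photoshop. Here's a minimal, hypothetical sketch of the same chroma-key idea in plain Python; the thresholds (`dominance`, `min_green`) are made-up illustrative values, not whatever the Photoshop action uses:

```python
# Hypothetical chroma-key sketch: flag green-screen pixels so they can be
# dropped before the photos are fed to the photogrammetry software.
# Pixels are (R, G, B) tuples; background pixels become None (transparent).

def is_green_screen(pixel, dominance=1.5, min_green=100):
    """Treat a pixel as background when green clearly dominates R and B."""
    r, g, b = pixel
    return g >= min_green and g > dominance * max(r, b, 1)

def key_out_green(pixels):
    """Replace green-screen pixels with None, keeping only the object."""
    return [None if is_green_screen(p) else p for p in pixels]
```

For example, `key_out_green([(30, 200, 40), (120, 110, 100)])` keeps the greyish second pixel and knocks out the bright green first one. A real batch job would run this per pixel over every photo and write PNGs with an alpha channel.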

Having a detailed 3D scan is especially useful on parts that have compound curvature or geometry that’s difficult to measure.

I tried both Meshroom (free) and Autodesk RePhoto (paid) and got similar results from both.
The resulting scans seem to be quite accurate. I don't have a way to measure accuracy precisely, but I'd guess they're within about +/- 0.2mm.
Not bad for free software!
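One way to put a rough number on that accuracy estimate is to compare a few caliper measurements against the same dimensions taken off the scanned mesh. This is just a sketch; the measurement values below are made up for illustration, not real data from the Ray-Ban scan:

```python
# Hedged sketch: sanity-check scan accuracy by comparing reference caliper
# measurements (mm) against the same dimensions measured on the scanned mesh.
# All values here are illustrative, not real measurements.

def max_deviation(measured_mm, scanned_mm):
    """Largest absolute difference between reference and scan dimensions."""
    return max(abs(m - s) for m, s in zip(measured_mm, scanned_mm))

calipers = [140.0, 52.3, 4.8]    # e.g. temple length, lens width, rim thickness
scan     = [140.1, 52.2, 4.95]   # same dimensions read off the STL

dev = max_deviation(calipers, scan)
within_spec = dev <= 0.2         # the +/- 0.2mm estimate from the post
```

With these example numbers the worst deviation is 0.15 mm, so the scan would fall inside the +/- 0.2mm guess. The more dimensions you spot-check, the more trustworthy the estimate.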

Here’s a great video from Prusa that explains the process in more detail.

Thanks for sharing. The video explains the whole process clearly.

About the accuracy: do the dimensions reflect the real-world physical dimensions? When you say 0.2mm, do you mean over the total length of the glasses, for example, or the thickness of the rim?

The Ray-Ban scans look amazing. How long did it take, and on what kind of processor?

Meshroom is indeed a great photogrammetry solution, and COLMAP is not bad either for free and fast software!
But the best solution for product scanning is RealityCapture, and they have a free version as well.

Overall accuracy is about 0.15-0.2 mm, but there are some deviations in underlit areas, as well as in unpredictable locations caused by the feature-extraction algorithms, which you need to tweak as much as possible for optimal results.
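To give a feel for the feature-extraction step being tweaked here: photogrammetry pipelines detect keypoints in each photo, describe them as compact descriptors, and then match descriptors between photos, typically rejecting ambiguous matches with a ratio test. Below is a toy, pure-Python illustration of that matching step; the binary descriptors and the 0.7 ratio threshold are illustrative assumptions, not values from any particular pipeline:

```python
# Toy illustration of descriptor matching with a ratio test, the kind of
# step photogrammetry pipelines let you tune. Descriptors are small ints
# standing in for binary feature descriptors; values are made up.

def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(desc_a, desc_b, ratio=0.7):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only when the best distance is clearly smaller than
    the second-best (ratio test). Assumes desc_b has at least 2 entries."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:  # unambiguous match only
            matches.append((i, best[1]))
    return matches
```

For example, `match([0b10110010, 0b01001101], [0b10110011, 0b11110000, 0b01001100])` pairs each left-image descriptor with its clear nearest neighbour on the right. Raising the ratio keeps more (but less reliable) matches; lowering it keeps fewer, cleaner ones, which is roughly the trade-off you tune when a scan misbehaves in some areas.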

I have written an extensive article reviewing a range of solutions for one of my bigger clients here:

Check it out!