You may have read our earlier show report on the Embedded Vision Summit. Now the Embedded Vision Alliance has published two more videos that highlight our presence at the show. The first video gives an overview of the demonstrations we had on display. The second is a video of the 30-minute presentation on Structure from Motion that we gave at the conference. Structure from Motion uses a combination of algorithms that extracts depth information from a single moving 2D camera.

Our demonstrations:

Structure from Motion tutorial intro video:

The full 30-minute Structure from Motion presentation is available at the Embedded Vision Alliance website.

The tutorial discusses how you can use a calibrated camera, feature detection, and feature tracking to calculate an accurate camera pose and a 3D point cloud representing the captured scene. This 3D scene information can be used in many ways, such as automated car parking, augmented reality, and positioning. In the talk, Marco introduces the theory behind Structure from Motion, presents some representative applications that use it, and explores an efficient implementation for embedded applications.
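The depth-recovery step at the heart of this pipeline can be illustrated with a small sketch. The example below is not from the talk: it assumes a hypothetical calibrated camera (intrinsics `K` are made up), uses synthetic 3D points standing in for tracked features, and recovers the point cloud from two camera poses via standard linear (DLT) triangulation.

```python
import numpy as np

# Hypothetical intrinsics for a calibrated camera (illustrative values only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(P, X):
    """Project 3D points X (N,3) through a 3x4 camera matrix P; return (N,2) pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of matched pixel pairs into 3D points."""
    pts = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        # Each observation contributes two rows to the homogeneous system A X = 0.
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]                    # null vector of A = solution in homogeneous coords
        pts.append(X[:3] / X[3])      # dehomogenize
    return np.array(pts)

# Two camera poses: identity, and a small sideways translation (the "motion").
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Synthetic scene points standing in for detected-and-tracked features.
X_true = np.array([[0.0, 0.0, 5.0],
                   [1.0, -0.5, 6.0],
                   [-1.0, 0.5, 4.0]])
x1, x2 = project(P1, X_true), project(P2, X_true)

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # in this noise-free case the scene points are recovered
```

In a real system the second pose is not known in advance; it is estimated from the tracked features themselves (for instance via the essential matrix), and the triangulated cloud is then refined jointly with the poses.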

You can visit the Embedded Vision Alliance website for many more resources about computer vision for embedded applications.