Demo videos

Demo: Structure from Motion at the Embedded Vision Summit 2016

Marco Jacobs demonstrates a robust implementation of Structure from Motion that videantis developed together with VISCODA. This algorithm allows the capture of 3D information from standard monocular cameras and can be used for automotive, augmented and virtual reality, drones, and other applications. The demo shown runs in real time at very low power on a chip that videantis developed.

Demo: Pedestrian Detection at the Embedded Vision Summit 2016

Marco Jacobs demonstrates the company’s latest embedded vision technologies and products at the May 2016 Embedded Vision Summit. Specifically, Jacobs demonstrates an implementation of pedestrian detection based on OpenCV’s HOG/SVM routine. The demonstration shown runs in real time at very low power on a chip that videantis has developed.

Demo: Structure from Motion at CES 2015

This demonstration was developed together with VISCODA. Structure from Motion is a technique that allows the capture of 3D information using a standard, single, 2D camera and can be used for automotive, augmented reality, and positioning applications.

Demo: Pedestrian detection using HOG/SVM, and Haar-based object detection

Our pedestrian detection demonstration implements the Histogram of Oriented Gradients (HOG) technique, combined with a Support Vector Machine (SVM) classifier. The Haar-based object detector is demonstrated here performing face detection, but it can also be trained to detect other types of objects.
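The SVM stage of such a detector is simply a linear decision function applied to the HOG feature vector of each candidate window. As a minimal sketch (not the videantis implementation; the feature vector and trained parameters below are hypothetical toy values, and real HOG vectors have thousands of dimensions):

```python
import numpy as np

def svm_score(features, weights, bias):
    """Linear SVM decision value: positive means the window is classified
    as containing a pedestrian."""
    return float(np.dot(weights, features) + bias)

# Hypothetical example: a tiny 4-dimensional "HOG" feature vector and
# made-up trained SVM parameters, purely to show the classification step.
features = np.array([0.2, 0.8, 0.1, 0.5])
weights = np.array([1.0, 2.0, -0.5, 0.3])
bias = -1.0

score = svm_score(features, weights, bias)
is_pedestrian = score > 0.0
```

In a full detector this score is evaluated for every position and scale of a sliding window, followed by non-maximum suppression of overlapping detections.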

Demo: Low-delay, High Intra H.264 video encode and decode for Ethernet AVB

This demonstration shows video encode of a live video stream, transmission over an Ethernet AVB link, then decode and display on the other side. The overall glass-to-glass latency is under 10 ms, with the latency within the encoder or decoder under 1 ms.

Tutorial: Embedded Vision Summit - 3D from 2D: Structure from Motion

Structure from motion uses a unique combination of algorithms that extract depth information using a single 2D moving camera. Using a calibrated camera, feature detection, and feature tracking, the algorithms calculate an accurate camera pose and a 3D point cloud representing the surrounding scene. This 3D scene information can be used in many ways, such as for automated car parking, augmented reality, and positioning.
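Once features have been tracked and the camera poses estimated, the 3D point cloud is recovered by triangulating each tracked feature from two (or more) views. A minimal sketch of linear (DLT) triangulation, assuming already-calibrated projection matrices (the camera setup and point below are hypothetical, not from the videantis demo):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null-space of A gives X
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Hypothetical setup: first camera at the origin, second camera translated
# 1 unit along x (the motion that "structure from motion" exploits).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])                      # ground-truth 3D point
x1 = X_true[:2] / X_true[2]                             # projection in view 1
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]   # projection in view 2

X_est = triangulate(P1, P2, x1, x2)
```

With noise-free points the reconstruction is exact; with real tracked features the SVD gives the least-squares solution.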

Tutorial: Embedded Vision Summit – HOG+SVM

This video explains the Histogram of Oriented Gradients method, a key algorithm for object detection that has been used in automotive, security, and many other applications. The tutorial shows how it is implemented in real time on the videantis high-performance, low-cost, and low-power parallel vision processor, including performance results.
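The core of HOG is computing, per image cell, a histogram of gradient orientations weighted by gradient magnitude. A simplified numpy sketch of that step (nearest-bin voting and central differences; the full algorithm as used in OpenCV adds block normalization and interpolation between bins):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one HOG cell (unsigned gradients, 0-180 deg).
    cell: 2D array of grayscale pixel intensities."""
    # Horizontal and vertical gradients via central differences.
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]

    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    # Vote each pixel's magnitude into its orientation bin.
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist

# Hypothetical 8x8 cell containing a vertical step edge: all gradient
# energy is horizontal, so every vote lands in the 0-degree bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
hist = hog_cell_histogram(cell)
```

Concatenating these per-cell histograms over a detection window yields the feature vector that the SVM classifier scores.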

Tutorial: Embedded Vision Summit – Feature Detection

This tutorial provides an overview of commonly used feature detectors and explains in detail how the Harris feature detector works. It then explains a pyramidal implementation of the Lucas-Kanade algorithm to track these features across a series of images. The tutorial also explains how videantis has optimized and parallelized the OpenCV versions of these algorithms, resulting in a real-time, power-efficient embedded implementation on a videantis unified video/vision processor.
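The Harris detector scores each pixel with R = det(M) - k * trace(M)^2, where M is the local structure tensor of the image gradients; corners are pixels where R peaks. A simplified numpy sketch (central differences and a 3x3 box window instead of the Gaussian weighting of the original formulation; the test image is a hypothetical bright square):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2,
    with M box-summed over a 3x3 window of gradient products."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0

    def box3(a):
        # Sum over each pixel's 3x3 neighborhood (border left at zero).
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(
            a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        )
        return out

    Sxx, Syy, Sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Hypothetical image: a bright square on a dark background. Edges give
# a negative response, flat areas zero, and the square's corners peak.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

The detected peaks would then be handed to the pyramidal Lucas-Kanade tracker, which follows each feature from frame to frame at multiple image scales.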

Tutorial: Embedded Vision Summit – Software Approach for Easing Embedded Acceleration of OpenCV Applications

Mark Kulaczewski, VP System Integration, describes our OpenCV implementation at the Embedded Vision Summit.

Thanks to the Embedded Vision Alliance and Design and Reuse for shooting the videos.