Demo videos

Below you’ll find an overview of demo, tutorial and presentation videos that are available online. Are you integrating or developing for videantis processors? Here’s an overview of our development resources.

Demo: Deep Learning and Computer Vision at CES 2018

Videantis demonstrates the company’s latest embedded vision technologies and products at the January 2018 Consumer Electronics Show. Marco Jacobs gives an introduction to videantis, its position in the market, and its low-power, scalable deep learning and vision processor family. Videantis demonstrations include SLAM, CNNs, pedestrian detection, and video coding for automotive and consumer applications.

Tutorial: Embedded Vision Summit – 360-degree Video Systems

360-degree video systems use multiple cameras to capture a complete view of their surroundings. These systems are being adopted in cars, drones, virtual reality, and online streaming systems. At first glance, these systems wouldn’t seem to require computer vision since they’re simply presenting images that the cameras capture. But even relatively simple 360-degree video systems require computer vision techniques to geometrically align the cameras – both in the factory and while in use. Full video here.
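The geometric alignment the tutorial refers to ultimately comes down to projective warps between overlapping camera views. As an illustrative sketch only (not the videantis implementation), the core operation – mapping points through a 3×3 homography – looks like this in NumPy:

```python
import numpy as np

def apply_homography(H, pts):
    """Map an Nx2 array of (x, y) points through a 3x3 homography H.

    This projective transform is the basic building block for
    geometrically aligning two overlapping camera views before
    stitching them into a single 360-degree panorama.
    """
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    # Divide out the projective scale to return to (x, y)
    return mapped[:, :2] / mapped[:, 2:3]
```

In a real alignment pipeline, H would be estimated from matched features between adjacent cameras (in the factory, or continuously while in use); here it is simply assumed to be known.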

Demo: Structure from Motion at the Embedded Vision Summit 2016

Marco Jacobs demonstrates a robust implementation of Structure from Motion that videantis developed together with VISCODA. This algorithm allows the capture of 3D information from standard monocular cameras and can be used for automotive, augmented and virtual reality, drones, and other applications. The demo shown runs easily in real time at very low power on a chip that videantis developed.

Demo: Pedestrian Detection at the Embedded Vision Summit 2016

Marco Jacobs demonstrates the company’s latest embedded vision technologies and products at the May 2016 Embedded Vision Summit. Specifically, Jacobs demonstrates an implementation of a pedestrian detection algorithm based on OpenCV’s HOG/SVM routine. The demonstration shown runs easily in real time at very low power on a chip that videantis has developed.

Tutorial: Embedded Vision Summit – Current State and Future of ADAS

Just as horse carriages were replaced by cars in the 1920s, human operators in our cars will be replaced by electronics in the 2020s. The benefits are tremendous: self-driving cars save lives, save time and save cost. For car manufacturers, this will be a gradual change. With each new model year, they’re adopting increasingly sophisticated advanced driver assistance systems (ADAS) that aid the driver, instead of taking full control. Full video here.

Demo: Structure from Motion at CES 2015

This demonstration was developed together with VISCODA. Structure from Motion is a technique that allows the capture of 3D information using a standard, single, 2D camera and can be used for automotive, augmented reality, and positioning applications.

Demo: Pedestrian detection using HOG/SVM, and Haar-based object detection

Our pedestrian detection demonstration implements the Histogram of Oriented Gradients (HOG) technique, combined with a Support Vector Machine (SVM) classifier. The Haar-based object detector is demonstrated here performing face detection, but can also be trained to detect different types of objects.
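The heart of HOG is simple: each small cell of the image votes its gradient magnitudes into orientation bins. The following is a minimal NumPy sketch of that per-cell histogram (the function name and central-difference gradients are illustrative choices, not the videantis or OpenCV implementation):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Histogram of Oriented Gradients for a single cell.

    Gradients are computed with central differences; each pixel votes
    its gradient magnitude into an unsigned-orientation bin covering
    0..180 degrees, as in the classic HOG formulation.
    """
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]

    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    bin_width = 180.0 / n_bins
    idx = np.minimum((ang / bin_width).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, idx.ravel(), mag.ravel())     # accumulate votes
    return hist
```

A cell containing a vertical edge produces horizontal gradients only, so all of its energy lands in the first orientation bin – which is exactly the directional selectivity that lets the downstream SVM distinguish pedestrian silhouettes from background.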

Demo: Low-delay, High Intra H.264 video encode and decode for Ethernet AVB

This demonstration shows video encode of a live video stream, transmission over an Ethernet AVB link, then decode and display on the other side. The overall glass-to-glass latency is under 10 ms, with the latency within the encoder or decoder itself under 1 ms.

Tutorial: Embedded Vision Summit – 3D from 2D: Structure from Motion

Structure from motion uses a unique combination of algorithms that extract depth information using a single 2D moving camera. Using a calibrated camera, feature detection, and feature tracking, the algorithms calculate an accurate camera pose and a 3D point cloud representing the surrounding scene. This 3D scene information can be used in many ways, such as for automated car parking, augmented reality, and positioning.

Tutorial: Embedded Vision Summit – HOG+SVM

This video explains the Histogram of Oriented Gradients method, which is a key algorithm for object detection, and has been used in automotive, security and many other applications. This tutorial shows how it is implemented in real-time on the videantis high-performance, low-cost, and low-power parallel vision processor, including performance results.
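After the per-cell histograms are formed, the HOG pipeline normalizes blocks of cells and feeds the resulting descriptor to a linear SVM; the detection decision is just a dot product plus a bias. A minimal sketch of those two steps (illustrative function names and shapes, not the videantis implementation):

```python
import numpy as np

def normalize_block(cell_hists, eps=1e-6):
    """L2-normalize a block of concatenated HOG cell histograms,
    giving the descriptor some invariance to illumination changes."""
    v = np.concatenate(cell_hists)
    return v / np.sqrt(np.dot(v, v) + eps)

def svm_score(descriptor, weights, bias):
    """Linear SVM decision value for one detection window.

    A positive score means the window is classified as containing the
    trained object; a sliding-window detector evaluates this for every
    window position and scale.
    """
    return float(np.dot(weights, descriptor) + bias)
```

Because the classifier is linear, the per-window cost is a single dot product over the descriptor – one reason HOG+SVM maps well onto a parallel embedded processor.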

Tutorial: Embedded Vision Summit – Feature Detection

This tutorial provides an overview of commonly used feature detectors, and explains in detail how the Harris feature detector works. It then explains a pyramidal implementation of the Lucas-Kanade algorithm that tracks these features across a series of images. The tutorial also explains how videantis has optimized and parallelized the OpenCV versions of these algorithms, resulting in a real-time, power-efficient embedded implementation on a videantis unified video/vision processor.
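The Harris detector scores each pixel with R = det(M) − k·trace(M)², where M is the structure tensor of the local image gradients: R is large and positive at corners, near zero in flat regions, and negative on edges. A compact NumPy sketch (using a simple 3×3 box window instead of the usual Gaussian, for brevity – an illustrative simplification, not the OpenCV or videantis version):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0  # central differences
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box3(a):
        # Sum each value over its 3x3 neighborhood (box filter).
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

The corners found this way are exactly the points that are well-conditioned for Lucas-Kanade tracking, since the same structure tensor M must be invertible for the tracker's flow equations to have a unique solution.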

Tutorial: Embedded Vision Summit – Software Approach for Easing Embedded Acceleration of OpenCV Applications

Mark Kulaczewski, VP System Integration, describes our OpenCV implementation at the Embedded Vision Summit.

Thanks to the Embedded Vision Alliance and Design and Reuse for shooting the videos.