Show report: Embedded Vision Summit

“The Embedded Vision Summit is bigger than ever,” I wrote in last year’s show report. The same held true this year. With over 40 exhibitors and 1,000 registrations, the show again grew 30-40%. Besides attracting more people, there was also more to see: the exhibition ran for two days instead of one, and there were two days filled with three parallel tracks of talks. A third day even added extra workshops.

The show’s growth reflects the state of the computer vision industry. You may not realize it, but you’re probably already using computer vision techniques many times a day. Your phone’s camera automatically focuses on the faces in view. Apps like Snapchat or Face Swap analyze the captured images using computer vision and manipulate them in unique ways.
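To make that concrete, here is a minimal face-detection sketch using OpenCV’s bundled Haar cascade, one classic technique for finding faces in an image. It is illustrative only: phones and apps use their own, typically far more sophisticated, detectors, and photo.jpg is a placeholder input.

# Classic Haar-cascade face detection with OpenCV (illustrative sketch;
# not the method any particular phone or app actually uses).
import cv2

# Pre-trained frontal-face cascade that ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) face bounding boxes.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
print("Found %d face(s)" % len(faces))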

Continue reading our full show report

Video of our automotive talk

Just as cars replaced horse-drawn carriages in the 1920s, electronics is bound to replace the human operator in the 2020s. The benefits are tremendous: self-driving cars save lives, save time, and reduce cost. For car manufacturers, the change will be gradual. With each new model year, they’re adopting increasingly sophisticated advanced driver assistance systems (ADAS) that aid the driver instead of taking full control. These systems use cameras and computer vision techniques to understand the vehicle’s surroundings. In our talk we presented an overview of the state of ADAS today and gave a glimpse into the future. We highlighted technology trends, challenges, and lessons learned, with a focus on the crucial role that computer vision plays in these systems.

See the 3-minute preview or watch the full 25-minute talk after free registration.

Demo videos: structure from motion and pedestrian detection

The summit organizers were kind enough to shoot two videos of our demos at the show. We demonstrated a robust implementation of Structure from Motion that we developed together with VISCODA. This algorithm recovers 3D information from standard monocular cameras. We also showed a pedestrian detection algorithm. Both algorithms run at very low power on our vision processor and can be used in a wide variety of applications, from automotive to drones to surveillance.
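For readers who want to experiment with pedestrian detection themselves, here is a minimal sketch using OpenCV’s stock HOG-plus-linear-SVM people detector. This is a well-known baseline technique, not our optimized implementation for the videantis vision processor; street.jpg is a placeholder input.

# Baseline pedestrian detection with OpenCV's HOG + linear-SVM people
# detector (illustrative sketch, not videantis' optimized implementation).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")  # placeholder input frame

# Returns bounding boxes and confidence weights for detected people.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("pedestrians.jpg", frame)
print("Found %d pedestrian(s)" % len(boxes))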

View the demo videos

Industry news

dentCHECK and Nailbot win Vision Tank
The Vision Tank, a unique spin on the Shark Tank reality show, introduced companies that incorporate visual intelligence into their products. The judges awarded first prize to dentCHECK, a tool that analyzes surface deformation problems. The audience award went to Nailbot, which turns a smartphone into a portable nail salon that prints instant custom nail art directly onto fingernails.
See all competitors

Computer vision and intelligence big themes at Google I/O
Sundar Pichai, who took the I/O stage for the first time since becoming CEO in October, said he’s on a “journey” from mobile to AI. Machine vision is certainly one key component of that AI. Computer vision also featured prominently in the major announcements from Google I/O: Tensor Processing Units, Google Tango, and the Cloud Vision API (a minimal request sketch follows below).
Playlist of the 171 videos from Google I/O
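To give a feel for the Cloud Vision API, here is a minimal label-detection request against its public REST endpoint, sketched in Python without the official client library. It assumes you have created an API key in the Google Cloud console; YOUR_API_KEY and photo.jpg are placeholders.

# Minimal Cloud Vision API label-detection request (a sketch, not the
# official client library). YOUR_API_KEY and photo.jpg are placeholders.
import base64
import json
import urllib.request

API_KEY = "YOUR_API_KEY"
URL = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

# The API expects the image bytes base64-encoded inside a JSON body.
with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = json.dumps({
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}).encode("utf-8")

req = urllib.request.Request(
    URL, data=body, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result["responses"][0].get("labelAnnotations", []))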

Google runs neural networks in your browser
Want to play around with a neural network without having to install software, learn how to run scripts, or write code? Take a look at the high-level deep learning simulator that Google put on the web. You can experiment with different training data and add nodes or extra layers with just a few clicks. A tiny code equivalent of the kind of network it trains is sketched below.
Play around with deep learning in your browser
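For those curious what such a network looks like under the hood, here is a tiny two-layer network in plain NumPy that learns the classic XOR pattern, roughly the kind of model the browser playground trains. This is our own minimal sketch for illustration, not Google’s code.

# A tiny two-layer neural network in plain NumPy, learning XOR.
# Illustrative sketch only; hyperparameters chosen for simplicity.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(size=(2, 4))  # hidden layer, 4 nodes
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # output layer, 1 node
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]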

Upcoming events

AutoSens, September 20-22, Brussels. Come see us at this key automotive sensor event.
Ethernet & IP @ Automotive Technology Day, September 20-21, Paris. Meet us at the automotive Ethernet event of the year.

Schedule a meeting with us by sending an email to sales@videantis.com. We’re always interested in discussing your video and vision SoC design ideas and challenges. We look forward to talking with you!