Last week the Embedded Vision Summit West took place again in Santa Clara, California. The summit is organized by the Embedded Vision Alliance, an organization formed to inspire and empower embedded system designers to use embedded vision technology. This was the fourth summit, and it was bigger than ever: close to 500 attendees, 30 companies exhibiting, and two tracks of talks. We've been members since the Alliance's early days, and the summits are a great way to see what the industry is up to and to show off our latest technologies.

Talks

The focus is on practical applications of computer vision. There are no academic research talks detailing algorithm variations that raise the bar from a theoretical standpoint. Instead, lots of people talk about how to bring computer vision applications to the devices people use daily. The summit is all about low-cost silicon implementations that can run for many hours on a battery. Academia hasn't yet come up with algorithms that are computationally light, and that's unlikely to happen soon. The human brain is pretty efficient, yet we still use about 50% of our trillions of synapses to process the images our eyes capture. Efficient silicon implementations are key to bringing these complex algorithms to market.

It's clear that embedded vision is taking off. The image sensor market is growing at double the pace of the semiconductor market as a whole, and intelligent algorithms are processing many of the images these cameras capture.

The keynote speakers were excellent, with talks by Facebook and Google.

Key driving applications from the show:

  • Automotive: cameras are replacing your eyes and helping you drive safely on the road.
  • Mobile: augmented reality, gesture interfaces, depth cameras, and smart camera applications all rely on embedded vision; they are making their way into mobile phones and tablets first, and will then spread to other application areas.
  • A new class of emerging applications: autonomously flying and tracking consumer drones, and always-on camera-enabled wearable devices.

At our booth we displayed key computer vision algorithms such as face detection, feature detection and tracking, and multi-format HD codecs, all running on the same low-power platform. We showed how we seamlessly accelerate the OpenCV library, offloading the host processor with a 1000x power saving and a 100x performance gain. For automotive applications, we showed pedestrian detection and our low-delay (<1ms!) H.264 10/12-bit codec. Also on display was our development system, which holds our 40nm 10-core SoC and can run stand-alone, hooked up to a PC, or connected to an Android-based tablet.
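For readers who haven't used OpenCV, the calls we accelerate look like ordinary library code. Here's a minimal sketch (in Python, using OpenCV's standard Haar-cascade face detector) of the kind of per-frame workload that gets offloaded; the file names are illustrative, and this is not the code of our demo:

```python
import cv2

# Standard Haar-cascade face detector that ships with OpenCV.
# (cv2.data.haarcascades points at the bundled cascade files.)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame.png")  # illustrative input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides face-sized windows over an image pyramid;
# this per-frame scan is exactly the kind of workload an embedded
# vision processor can take over from the host CPU.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.png", img)
```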

Our own Marco Jacobs presented the talk “Implementing Histogram of Oriented Gradients on a Parallel Vision Processor”. Object detection is one of the core problems in computer vision, and the Histogram of Oriented Gradients method (Dalal and Triggs, 2005) is a key algorithm for it, used in automotive, security, and many other applications. We gave an overview of the algorithm and showed how it can be implemented in real time on our high-performance, low-cost, low-power parallel vision processor. We demonstrated the standard OpenCV HOG with a linear SVM for human/pedestrian detection, running in real time on VGA sequences. The SVM vectors came from OpenCV and were trained using the Daimler Pedestrian Detection Benchmark and INRIA Person datasets.
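For reference, here's what that pipeline looks like on the OpenCV side: a minimal sketch using OpenCV's built-in HOG descriptor with its default pre-trained people detector. The input file and detection parameters are illustrative, not the exact settings of our demo:

```python
import cv2

# OpenCV's built-in HOG person detector: HOG features (Dalal & Triggs,
# 2005) fed to a pre-trained linear SVM.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("pedestrians.mp4")  # illustrative input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # VGA, as in the demo

    # Sliding-window detection over an image pyramid; winStride and
    # scale trade detection accuracy against speed.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("HOG pedestrian detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

On a general-purpose CPU, the detectMultiScale call dominates the frame time; that sliding-window scan is precisely the kind of data-parallel workload that maps well onto a parallel vision processor.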

Day 2 – member meeting

The day after the summit was reserved for a members-only meeting. Again, there were some great speakers, notably Johnny Lee from Google, who gave a demonstration of Google's Project Tango. Johnny's team focuses on bringing computer vision to the mobile world.

The Tango devices use a wide variety of cameras, including fish-eye and depth sensors, integrated into the mobile phone. The key is that they can calculate the camera pose very accurately, which enables indoor navigation and compelling augmented reality games.

Roger Lanctot, Associate Director at Strategy Analytics, gave an overview of the vision opportunities in, on, and around automobiles. The key takeaway: cameras are growing faster in automotive than anywhere else, and most of them are used for computer vision.

All in all, it was another great week in the Bay Area. It's clear that embedded vision is going places, and that our highly efficient video and vision processing architecture is a key component in bringing such systems to market.