Last week the Embedded Vision Summit West took place again in Santa Clara, California. The summit is organized by the Embedded Vision Alliance, an organization formed to inspire and empower embedded system designers to use embedded vision technology. This was the fourth summit, and it was bigger than ever: close to 500 attendees, 30 companies exhibiting, and two tracks of talks. We've been members since the early days, and the summits are a great way to see what the industry is up to and to show off our latest technologies.

It's clear that embedded vision is taking off. The image sensor market is growing at twice the pace of the semiconductor market as a whole, and intelligent algorithms are processing many of the images these sensors capture.
The keynotes were excellent, with speakers from Facebook and Google.
Key driving applications from the show:
- Automotive: cameras are replacing your eyes and helping you drive safely on the road.
- Mobile: augmented reality, gesture interfaces, depth cameras, and smart camera applications all rely on embedded vision; they are making their way into mobile phones and tablets first, then spreading to other application areas.
- A new class of emerging applications: consumer drones that fly and track autonomously, and always-on, camera-enabled wearable devices.

Our Marco Jacobs presented the talk “Implementing Histogram of Oriented Gradients on a Parallel Vision Processor”. Object detection in images is one of the core problems in computer vision, and the Histogram of Oriented Gradients (HOG) method (Dalal and Triggs, 2005) is a key algorithm for it, used in automotive, security, and many other applications. We gave an overview of the algorithm and showed how it can be implemented in real time on our high-performance, low-cost, low-power parallel vision processor. We demonstrated the standard OpenCV-based HOG with a linear SVM for human/pedestrian detection on VGA sequences in real time. The SVM support vectors came from OpenCV, trained on the Daimler Pedestrian Detection Benchmark Dataset and the INRIA Person Dataset.
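For readers who want to try the baseline pipeline themselves, here is a minimal sketch using OpenCV's built-in HOG + linear SVM people detector (the default detector trained on the INRIA Person Dataset). The input file name is a placeholder, and this illustrates the standard CPU reference implementation, not the parallel-vision-processor version from our talk.

```python
import cv2

# Default HOG descriptor: 64x128 detection window, 8x8 cells, 9 orientation bins.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("pedestrians.mp4")  # placeholder VGA test sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Slide the HOG window over an image pyramid; each detection is a
    # bounding box plus an SVM confidence score.
    rects, weights = hog.detectMultiScale(
        frame, winStride=(8, 8), padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("HOG pedestrian detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Running this on a VGA sequence gives a feel for how compute-heavy the sliding-window HOG pipeline is on a general-purpose CPU, which is exactly the workload a parallel vision processor is designed to accelerate.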
Day 2 – member meeting
The day after the summit was reserved for a members-only meeting. Again, there were some great speakers, notably Johnny Lee from Google, who gave a demonstration of Google's Project Tango. Johnny's team focuses on bringing computer vision to the mobile world.

They use a wide variety of cameras, including fish-eye and depth cameras, integrated into the mobile phone. The key thing is that they can calculate camera pose very accurately, which enables indoor navigation and cool augmented reality games.
Roger Lanctot, Associate Director at Strategy Analytics, gave an overview of the vision opportunities in, on, and around automobiles. The key point: camera adoption is growing faster in automotive than anywhere else, and most of those cameras are used for computer vision.
All in all, it was a great week in the Bay Area again. It's clear that embedded vision is going places, and that our highly efficient video and vision processing architecture is a key component for bringing such systems to market.
