“The Embedded Vision Summit is bigger than ever,” I wrote in our last show report, and in 2015, and in 2014. This year the show grew only a little: once again about 1,000 attendees, roughly the same as last year, which may reflect the rising cost of attending. The slower growth in attendance doesn’t reflect the state of the industry, though: more and more products are adopting embedded vision, and this remains the key conference of the year on the topic.
There were more speakers this year, with 90 presentations across six tracks, one more track than last year. I recommend going through the slides, although there are quite a few. All the presentations are available for download on the Alliance’s website: registered users can log in and download the proceedings (242MB) or watch the presentation videos. Not registered yet? Sign up here; it’s free. Our talk from last year on computer-vision-based 360-degree video systems is still available too.
Many presentations discussed real-world applications of embedded vision. Examples ranged from cloud-connected intelligent baby monitors to applications in retail, manufacturing, and robots for the smart home; iRobot, for instance, mentioned it has sold over 20 million vacuum robots.
Prof. Takeo Kanade, probably best known for his seminal work on optical flow, presented “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision”, giving an overview of his past and more recent work. Dr. Kanade concluded that computer vision is in a perfect storm today: image sensors are ubiquitous, good algorithms are freely available on the net, and low-power, high-performance vision processors are coming to market. Another keynote speaker was Dean Kamen, the famous inventor who developed the first drug infusion pump and, of course, the Segway.
The Alliance’s chairman, Jeff Bier, gave an excellent talk outlining the status of the industry: “The Four Key Trends Driving the Proliferation of Visual Perception”, showing that computer vision today works well enough for real-world applications, can be deployed at low cost with low power, and is increasingly used by non-specialists.
Since its founding in 2011, the Embedded Vision Alliance has grown to over 90 members, adding almost 30 in the past year alone. Almost 60 exhibitors filled the show floor, where many companies showed their latest products and demonstrations.
At our booth we showed our latest deep learning v-MP6000UDX processor and several vision demonstrations running on our silicon: SLAM/structure from motion, CNN-based object detection and recognition, pedestrian detection for automotive and surveillance applications, and our optimized vision library, which accelerates vision and imaging algorithms and enables a power reduction of 1000x compared to CPUs or GPUs. Low power is key to embedding deep learning and computer vision in small camera modules and battery-operated devices.
We witnessed embedded vision not only at the show but also outside it. While spending the week in Silicon Valley, we saw five self-driving vehicles on the road, a different one every day. Whether they were all driving themselves or primarily gathering data was hard to tell, but it’s clear that lots of companies are investing in this area. Videantis has a wealth of experience in automotive ADAS and self-driving vehicles, so it was great to see this technology on the road.
All in all, it was another excellent conference, with two days of learning and talking about computer vision and two days of workshops. It was great to speak with our customers and industry colleagues again, to discuss what happened this past year and what’s needed for next year’s products. Embedded vision continues to gain speed, and we’re proud of our highly efficient deep learning and vision processing architecture, which we announced at CES 2018 and which is doing very well in the market. Of course, we would like to thank the EVA organization again for running such a smooth conference and for driving this industry forward. See you all next year!