Show report: Embedded Vision Summit bigger than ever
That’s what I wrote on our blog last year, and it was true again this year: the summit was bigger than ever, by a large margin. Even though entrance fees were higher than last year, the event almost doubled in size. About 750 engineers, algorithm designers, systems engineers, marketers, and business people spent a long but productive day together at the Santa Clara Convention Center.
The field of computer vision, and embedding it into high-volume consumer products, is growing rapidly, and the summit is one of the key events of the year.
There were three presentation tracks, and workshop attendance more than tripled compared to last year. The show floor was busier too, with over 30 exhibitors showing their latest demonstrations.
Trends from the show
One clear trend that everyone talked about was convolutional neural networks (CNNs). Also known under the rather enchanting term “deep learning”, these algorithms detect and classify objects in images. While CNNs seem to be at the peak of their hype cycle, the results are impressive: Baidu, Microsoft, and Google are tripping over each other to reach the highest scores on the ImageNet benchmark, with Baidu announcing at the show that it had regained the lead. CNNs are clearly more than just hype. Still, the focus of these companies today is simply on reaching the highest detection rates, not on finding the right trade-offs between accuracy, power, and silicon area. Even if, or perhaps when, detection rates reach 100%, there’s still plenty of work left. Because there’s so much hype around CNNs, quite a few people think any problem in computer vision can be solved with them, and that’s certainly not the case. Object detection and recognition is an important and difficult part of computer vision, but by no means the only algorithm required in typical embedded vision systems.
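To make the detection-and-classification building blocks concrete, here is a minimal NumPy sketch, purely illustrative and not tied to any vendor’s implementation, of the three layer types a CNN stacks: convolution, a ReLU non-linearity, and max pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (technically cross-correlation, as in
    CNN layers) of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear unit: max(x, 0)."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

A real network learns many such kernels per layer and stacks dozens of layers; the trade-off the text mentions is exactly here, since every extra kernel costs multiply-accumulates, memory bandwidth, and power.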
Augmented reality and wearables
Oculus Rift, Magic Leap, Microsoft’s HoloLens: you’ve probably heard of them. Many high-profile, well-funded companies are targeting this space, which Ivan Sutherland pioneered over 50 years ago. While no consumer product in this category has yet seen mass adoption, it’s a very exciting field that relies heavily on image sensors and computer vision to seamlessly mix the real world with the synthetic. While none of these companies were exhibiting, augmented reality was a frequent topic of conversation.
Algorithms keep evolving
One thing that won’t change is that algorithms will keep changing, and the scope of computer vision keeps growing with them. While some people are considering hard-wiring vision algorithms into silicon, most agree that the algorithms will continue to evolve and improve over time. This calls for power-efficient, software-programmable architectures such as our v-MP4000HDX vision processor architecture.
Cars are dangerous machines, and most of us spend many hours operating them. Computer vision has the potential to make cars much safer and easier to operate, without continuous monitoring by us humans. No wonder automotive is the key driver behind computer vision. This year the automotive industry found its way to the Embedded Vision Summit, with several key car manufacturers and automotive suppliers sending representatives to the show.
Another trend at the show was the increase in system designers attending. We spoke with drone and toy developers, factory automation companies, car manufacturers such as Volkswagen and Audi, and many more. It’s no longer just the embedded community that attends the event. This is a very good sign for the show: the whole value chain is now clearly involved in bringing next-generation vision-enabled products to market.
Videantis at the summit
Our booth was next to Altera’s, in one of the busiest sections of the show floor. We showed a concept demonstration that we jointly developed with Altera, combining an Altera Cyclone V FPGA with the videantis v-MP4280HDX vision processor. The combined solution ran a wide variety of computer vision functions, such as pedestrian detection, face detection, feature detection, and optical flow.
In addition, we developed a technology demonstration together with VISCODA, an algorithms partner of videantis. VISCODA’s Structure from Motion algorithm allows the capture of 3D information using a standard, single 2D camera. The application was also shown running on the videantis v-MP4280HDX processor in real time at very low power consumption. We presented an accompanying talk on the subject, titled “3D from 2D: Theory, Implementation and Applications of Structure from Motion”, which was very well attended. The Embedded Vision Alliance has recorded the talks and will put them on YouTube later.
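To give a feel for the geometry behind Structure from Motion (a generic textbook sketch, not VISCODA’s algorithm), the core step is triangulating a 3D point from its projections in two views of the moving camera. A minimal linear (DLT) triangulation in NumPy, assuming the two 3x4 camera projection matrices are already known:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.

    P1, P2 : 3x4 camera projection matrices for the two views
    x1, x2 : corresponding 2-D image points (u, v) in each view
    Returns the 3-D point in non-homogeneous coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous
    # point X, derived from x cross (P @ X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value (the null space of A for noise-free input).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In a full Structure from Motion pipeline this is the last step: features are first detected and matched across frames, the camera motion is estimated from those matches, and only then are the matched points triangulated into a 3D point cloud.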
In addition, videantis had the following technologies on display at the show:
- High-performance, scalable, licensable video/vision processor IP
- Pedestrian detection using histogram of oriented gradients descriptors
- Face detection using Haar object detection
- Low-delay, 10/12-bit video codecs for Ethernet AVB
- Seamless OpenCV acceleration providing a 1000x power reduction
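The pedestrian-detection demo in the list above builds on histogram-of-oriented-gradients (HOG) descriptors. As a generic illustration of the idea, and not the videantis implementation, the sketch below computes the orientation histogram for a single image cell, the basic building block a HOG descriptor concatenates over a detection window:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Gradient-orientation histogram for one image cell.

    Each pixel votes its gradient magnitude into one of n_bins
    unsigned-orientation bins covering [0, 180) degrees.
    """
    # Per-pixel image gradients (rows = y, columns = x).
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    # Quantize each orientation into a histogram bin.
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist
```

A full detector normalizes these histograms over blocks of cells and feeds the concatenated vector to a classifier; the Haar-based face detection demo listed above follows the same detect-by-sliding-window pattern but uses rectangular intensity-difference features instead of gradient histograms.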
Final versions of all Summit presentations, including ours, are now available for download from the Alliance website as a single ZIP file. Registered users can log in and download the file (93 MB). Not registered yet? Registration is free here.
All in all, this was another great day at the Santa Clara Convention Center. Embedded vision is going places, and our highly efficient video and vision processing architecture is a key component for bringing such systems to market. It was great to meet our customers and industry colleagues again and discuss the future of embedded vision and how our products bring value to the industry.
Thanks also to the Embedded Vision Alliance for running an excellent show. We’re very much looking forward to next year’s event, hopefully with over 1,000 people attending.