This was the first year that the show was held in Detroit, at the Cobo Center. In previous years, the event took place in Germany, the automotive capital of Europe. Attendance seemed a bit lower than last year in Stuttgart, but that doesn’t mean technology adoption is slowing down. On the contrary: Ethernet is clearly coming to all cars. Last year, the first production car with Ethernet AVB was announced: the BMW X5. Besides BMW, many more models with Ethernet inside are now in production, including at Volvo and Mercedes. Using Ethernet in the car has many advantages, including lower cabling cost and ease of integration. More and more companies are addressing this opportunity and were showing their Ethernet-based PHYs, switches, and image sensing solutions.
Videantis showed a variety of video coding and computer vision applications, all running on the same multicore, extremely low-power, high-performance v-MP4000HDX silicon. Since videantis licenses its processor IP, much like ARM does, our product can be integrated into an SoC for automotive cameras, into a companion SoC next to the head unit, or into the main applications processor. Low power is especially key for integration into cameras: power consumption translates into heat, and since these cameras are weatherproof and sealed, heat is hard to dissipate. Any extra heat makes the image sensor generate more noise. Besides lowering power consumption, integrating our architecture into the camera has another advantage: the video analytics algorithms can run on the raw, uncompressed images before they’re encoded and transmitted. As a result, the ADAS algorithms don’t suffer from the artifacts introduced during compression and instead retain their accuracy.
Structure from motion
Structure from motion is a technique that extracts depth information from a single standard camera. Using feature detection and feature tracking, the algorithm calculates the camera pose and a 3D point cloud of the scene. The resulting information can be used for parking assist, for instance. For a video impression of what’s possible, see the parking assist demo on VISCODA’s YouTube channel or Audi’s Piloted Parking. Our demonstration runs all the structure-from-motion pixel operations on the uncompressed images on the camera side, then transmits the calculated data over an Ethernet link, along with compressed video, to a host for further processing.
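To illustrate the depth-recovery step described above, here is a minimal sketch of linear (DLT) triangulation: once features have been tracked between two camera poses, each tracked point can be lifted to 3D. This is a generic textbook formulation, not the videantis implementation; the intrinsics, baseline, and test point are made-up values.

```python
import numpy as np

# Assumed pinhole intrinsics (hypothetical values for illustration).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point observed in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Camera 1 at the origin; camera 2 shifted 0.2 m along x — the motion
# between frames provides the baseline that makes depth observable.
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-0.2, 0.0, 0.0]))

X_true = np.array([0.5, -0.1, 4.0])   # a point 4 m in front of the car
x1, x2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the triangulated point matches the original exactly; a full pipeline would first estimate the second camera's pose from the tracked features themselves.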
H.264 High Intra encode/decode
In addition, we showed our low-delay, H.264 High Intra, 10/12/14-bit, DRAM-less codec. The codec provides significantly higher picture quality than what’s achievable with JPEG compression. We showed a setup that encodes images coming directly from the camera, transmits them over a twisted-pair Ethernet wire, then decodes and shows them on the display. The codec itself adds less than 1 ms of delay, which is important since any latency can hurt downstream video analytics and the actions taken on their results, or cause laggy displays.
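A quick back-of-envelope calculation shows why intra-only coding keeps the delay this low. The numbers below are illustrative assumptions, not measured videantis figures: an inter-predicting codec must buffer reference frames, while an intra-only codec can start emitting bits after buffering only a few lines of the image.

```python
# Illustrative latency arithmetic (assumed values, not product specs).
fps = 30
frame_ms = 1000.0 / fps          # one 30 fps frame lasts ~33.3 ms

# A codec using inter prediction with, say, two buffered reference
# frames adds whole frames of delay before the first bit leaves:
inter_delay_ms = 2 * frame_ms    # ~66.7 ms

# An intra-only codec needs no reference frames; assuming it buffers
# roughly one 16-pixel macroblock row of a 1080-line image before
# encoding, the structural delay is a small fraction of a frame:
mb_rows = 1080 // 16             # 67 macroblock rows
intra_delay_ms = (1 / mb_rows) * frame_ms   # well under 1 ms
```

Under these assumptions the intra-only path stays below the 1 ms budget, while even a modest inter-prediction pipeline costs tens of milliseconds.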
Pedestrian detection
Another demo on display was our pedestrian detection implementation, which uses the Histogram of Oriented Gradients (HOG) method combined with a Support Vector Machine (SVM). The HOG algorithm is computationally quite demanding, but we’re able to run it in real time, at 30 fps, on our low-power silicon.
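To make the HOG-plus-SVM pipeline concrete, here is a simplified single-scale sketch in numpy — not the optimized videantis implementation. Gradients are binned into orientation histograms per cell, the histograms are flattened into a descriptor, and a linear SVM then scores each detection window with a single dot product (weights and window size here are placeholder assumptions; the standard pedestrian window is 128x64 pixels).

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Orientation histograms over cell x cell pixel blocks (no
    block normalization, for brevity)."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned gradients
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    h, w = img.shape
    ch, cw = h // cell, w // cell
    hist = np.zeros((ch, cw, bins))
    for i in range(ch * cell):                     # accumulate per cell
        for j in range(cw * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist.reshape(-1)

def svm_score(desc, weights, bias):
    """A linear SVM at detection time is just a dot product."""
    return float(desc @ weights + bias)

# One 128x64 detection window of random pixels (stand-in for real data).
img = np.random.default_rng(0).random((128, 64))
desc = hog_descriptor(img)       # 16x8 cells * 9 bins = 1152 values
```

The per-pixel loops are written for clarity; a real-time implementation vectorizes the accumulation and adds block normalization and multi-scale scanning.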
Haar-based object detection
We also showed our implementation of Haar-based object detection. We demonstrated the algorithm trained as a face detector, but the same algorithm can be trained on automotive datasets to detect objects such as pedestrians or cars.
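The core trick that makes Haar-based detection fast is the integral image: once computed, the sum over any rectangle costs four lookups, so the rectangular Haar features can be evaluated at every window position cheaply. The sketch below shows that mechanism with a two-rectangle edge feature; it is a generic illustration of the technique, not the videantis implementation.

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] from four integral lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_edge_feature(ii, top, left, h, w):
    """Two-rectangle Haar feature: left half minus right half."""
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))

# Tiny synthetic image to exercise the lookups.
img = np.arange(36, dtype=np.float64).reshape(6, 6)
ii = integral_image(img)
```

A full detector evaluates thousands of such features in a cascade, rejecting most window positions after only a few cheap tests.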
Our OpenCV library includes over 100 functions that have been optimized for our multicore, high-performance architecture. Vision programmers simply call the OpenCV API functions on the host, after which the entire heavy-duty pixel processing is accelerated on our very efficient processor.
All in all, another very successful show, and we’re excited to continue working with the customers we met there. We look forward to next year’s show, targeted to be held in Asia.