Show report: Image Sensors Automotive
The Image Sensors conferences have been held since 2008 and have grown to become one of the main image sensor conference series in Europe. This was the first conference from this organization to focus entirely on automotive applications. The two-day conference in June attracted about one hundred attendees, all with key roles in the automotive supply chain. Everyone working with cameras for automotive seemed to be there: semiconductor IP companies, image sensor makers, automotive SoC vendors, Tier 1s, and the OEMs. The sixteen presentations addressed a wide range of topics, covering not only image sensors but also the other components that make up an automotive imaging system.
The conference was held in Brussels, and Belgium won their world cup game during the event. For many hours in the evening, cars were honking and people were shouting to celebrate the victory.
The following recurring themes were discussed most often at the conference.
ADAS (advanced driver assistance systems)
Driver assistance systems such as adaptive cruise control, lane departure warning, and pedestrian protection often use an image sensor to understand the environment in the vicinity of the car. ADAS is one of the big drivers behind the growth of image sensors in the automotive industry, and was therefore an often-discussed subject at the conference. The videantis processor architecture is ideally suited to efficiently running the computer vision algorithms that form the technology basis for these systems. Image analysis algorithms such as feature detection, feature tracking, and object detection run very efficiently on the videantis architecture, at power and performance levels not seen before.
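To make the feature detection step mentioned above concrete, here is a minimal pure-Python sketch of a Harris-style corner detector on a toy grayscale image. This is an illustrative implementation of the general technique, not videantis production code; the window size and the constant k are common textbook choices.

```python
def gradients(img):
    """Central-difference gradients Ix, Iy for an image given as lists of lists."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return ix, iy

def harris_response(img, k=0.04):
    """Corner response R = det(M) - k*trace(M)^2, with M summed over a 3x3 window."""
    h, w = len(img), len(img[0])
    ix, iy = gradients(img)
    resp = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            resp[y][x] = det - k * trace * trace
    return resp

# Toy image: a bright square on a dark background. The square's corner
# should give the strongest response; its edges should not.
img = [[0] * 8 for _ in range(8)]
for y in range(4, 8):
    for x in range(4, 8):
        img[y][x] = 255
resp = harris_response(img)
corner = max((resp[y][x], (x, y)) for y in range(8) for x in range(8))
print("strongest corner at", corner[1])  # → (4, 4)
```

Edges produce a large gradient in only one direction, so det(M) stays small there; only true corners, where both gradient directions are strong, score highly. This selectivity is what makes such features useful for tracking objects from frame to frame.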
Viewing applications
Besides increasing safety, cameras can also be used for viewing applications. Most cars have three mirrors, and of course the windows, to see what's going on around the car, but cameras and displays can give the driver a much better overview. Top view systems combine and reproject image data from multiple image sensors into a single "bird's-eye view" that presents the full surroundings of the car in one image on a display. Another new application is replacing the mirrors with cameras, which cause much less drag, resulting in a car that uses less energy (your car will go about one mile farther per gallon). For such applications, the unified videantis architecture can perform video compression and decompression for distribution in the car over Ethernet, or can do the warping and stitching of multiple camera images for viewing applications.
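The reprojection at the heart of a top-view system can be sketched with a planar homography: each ground-plane pixel in the output image is mapped through a 3x3 matrix into a source camera image. The matrix below is a made-up example; real systems derive it from camera calibration. A minimal sketch:

```python
def apply_homography(H, x, y):
    """Map point (x, y) through the 3x3 homography H (projective transform)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # perspective division

# Illustrative homography: a pure 2x scaling, so (10, 20) maps to (20, 40).
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 10.0, 20.0))  # → (20.0, 40.0)
```

In a real bird's-eye-view pipeline this mapping is evaluated (or table-looked-up) per output pixel for each of the cameras, after which the overlapping regions are blended into one seamless image.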
Image quality
Image quality is inherently subjective: "beauty is in the eye of the beholder." The traditional way of measuring image sensor quality is with PSNR or MTF, but these metrics don't necessarily reflect what's really important. There are many different ways to measure image sensor picture quality, and a couple of presentations focused on this topic.
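For reference, PSNR, the traditional metric mentioned above, is straightforward to compute between a reference image and a distorted copy. The sketch below uses tiny 1-D sample lists for brevity; the formula is the same for 2-D pixel data.

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [10, 50, 100, 200]
noisy = [12, 48, 103, 198]
print(round(psnr(ref, noisy), 1))  # → 40.9
```

A higher number means the distorted image is numerically closer to the reference; the criticism raised at the conference is that two images with the same PSNR can look very different to a human observer.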
Functional safety
Safety is key in automotive. Over one million people die on the roads each year due to accidents, and the electronic components inside the car should be as close to 100% reliable as possible. Recalls and liability are a big issue for the OEMs. The ISO 26262 standard focuses on the functional safety of electronic systems in vehicles, and was one of the topics of this conference since it has technical implications. We've been shipping our IP into automotive applications for many years, so we're well aware of the issues. Ask us how we address automotive safety.
Low power
Cars get really hot in the sun, and there's not much heat dissipation possible inside a camera module, so very low-power silicon is essential. When the system gets too hot, the sensor is affected, things may melt, and components stop working. Low power has always been a design criterion for us, primarily to extend battery life in handheld applications. It's clear that low power consumption is key for automotive too.
Sensor fusion
Many different sensors are used in automotive: radar, laser, infrared, visual, ultrasonic, and depth sensors. Each of these sensors has its own strengths and weaknesses. Combining, or fusing, the data from multiple sensors can yield a more robust vision-based system. Due to the flexible nature of our architecture, we can address such sensor fusion applications well.
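One simple and widely used fusion technique is inverse-variance weighting: two independent estimates of the same quantity, say the distance to a vehicle ahead from radar and from a camera, are averaged with weights proportional to each sensor's confidence. The variances below are illustrative numbers, not real sensor specifications.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Combine two noisy estimates; the less noisy sensor gets more weight."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused result is more certain than either input
    return fused, fused_var

# Radar says 25.0 m (variance 0.25); the camera says 26.0 m (variance 1.0).
dist, var = fuse(25.0, 0.25, 26.0, 1.0)
print(round(dist, 2), round(var, 2))  # → 25.2 0.2
```

The fused distance lands closer to the radar's estimate, reflecting radar's better range accuracy, while the camera still contributes, which is exactly the complementary-strengths argument made at the conference.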
The next Image Sensors conference is in San Francisco in September. We're looking forward to connecting with people there.