Over the decades I’ve been developing electronics, I’ve heard this a lot: “The algorithms our engineers developed are way beyond what our competition has.” I heard it when developing image enhancement algorithms, I heard it about video codecs and about audio quality. And now I’m seeing the same ‘my algorithm is better than yours’ claims in the field of AI. The annual ImageNet Large Scale Visual Recognition Challenge even makes a competition out of it, ranking everyone’s algorithms on their ability to detect and classify images correctly. Thousands and thousands of papers are published in the field of deep learning every year, each claiming a new technique that improves some aspect of deep learning. The result: algorithms keep getting better and better. This is great for the consumer, as it enables our electronics to improve and provide an ever-better user experience. In automotive it’s even more important, since it’s not just about user experience there, but also about safety. Better algorithms mean the chances of getting into an accident are smaller too.
So, what’s the problem?
One problem is that our vehicles last for at least ten years. Over those years, the algorithms will improve a lot, so there needs to be a way to upgrade the vehicle with the latest software. Tesla does this well, providing a means to update the car’s software over the air. They not only change the infotainment system’s features or the battery management algorithms overnight, but also the safety-related self-driving algorithms. Other vehicle OEMs are quickly following and adopting the same over-the-air upgrade capabilities.
But there’s one big assumption here: that the hardware can run these new algorithms. And not just run them, but run them just as efficiently as the old ones. If the processors can’t run the new algorithms efficiently, they can’t run in real time, which is key since there can be no delays when controlling a vehicle on the road. However, since deep-learning-based AI algorithms require many teraops of computation, many semiconductor companies have been hard-wiring them. Hard-wiring an algorithm provides an easy path toward high performance while remaining low power and cost efficient. This is crucial for bringing these algorithms to consumer price points, keeping the systems relatively small, and avoiding active cooling fans that are prone to break down. However, hardwired implementations give up one key trait: they can’t be upgraded to the latest algorithms. They’re implemented not in software but in fixed electronic circuits in hardware.
At videantis we combine software programmability with efficiency, providing extreme performance and low power along with the ability to upgrade the algorithms. Ever since we started the company in 2004, our processor architecture has been fully software programmable. It’s a lot more work to design, develop, and optimize such an architecture and the accompanying suite of software development tools, but it pays off. Our customers experience this firsthand: we see that videantis-based semiconductors stay in the market longer, are used for a wider range of use cases, and are always upgraded to run the latest algorithms.