When chips get hot, thermal management quickly becomes difficult. The chips need more complex power grids, more expensive packages, and often active cooling fans. Many GPU-based systems even need water cooling. And even if you can get away with passive cooling alone, a little heat still greatly affects the housing and mechanical design: there needs to be enough surface material to dissipate the heat, which makes the final device larger, heavier, and more expensive. In addition, if your power comes from a battery, that battery needs more capacity too, again adding cost and bulk. That’s a lot of reasons to keep power consumption low.

And chips do burn a lot of power, especially when they’re running compute-intensive deep learning workloads. Deep learning algorithms require many tera-operations of multiplications per second and move massive amounts of data. At videantis, we designed our processor architecture from the ground up to consume as little energy as possible. Optimizing for low power has always been in our DNA, with a history of designing for battery-operated mobile phone applications.

But besides these implications for form factor and design complexity, there’s another reason to keep power consumption low: it’s good for our planet, since most of our energy comes from fossil fuels that release carbon dioxide into the atmosphere. “But chips just consume a few Watts,” I hear you say. “That’s nothing compared to the energy that our heating, our cars, and manufacturing industries use!” Well, let’s take a deeper look, do a quick quantitative analysis, and see what the impact really is. We’ll focus on the automotive use case, since we’ve been doing quite a bit of work there and already have millions of vehicles with videantis technology inside on the road.

Let’s first look at how much gas is needed to run our deep learning algorithms. A typical lifetime for a new vehicle is roughly 300,000 km. At an average speed of 50 km per hour, this means such a car is in operation for 6,000 hours. So every Watt a chip uses adds up to 6 kWh over the lifetime of the vehicle. A liter of gasoline holds about 9.5 kWh in theory, but since a typical engine only has a thermal efficiency of about 25%, each liter consumed delivers roughly 2.4 kWh of useful work. So each Watt that a chip burns requires about 2.5 liters of gas over the vehicle’s lifetime (6 kWh / 2.4 kWh per liter). For a typical GPU-based ADAS system that consumes 250 W, this adds up to about 630 liters of gas.
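
As a quick sanity check, here’s that back-of-envelope arithmetic as a short Python sketch; all numbers are the assumptions stated above, not measurements:

```python
# Lifetime fuel burned per Watt of chip power (assumptions from the paragraph above)
lifetime_km = 300_000                  # typical vehicle lifetime
avg_speed_kmh = 50                     # assumed average speed
hours_in_operation = lifetime_km / avg_speed_kmh            # 6,000 hours

kwh_per_liter = 9.5                    # energy content of gasoline
engine_efficiency = 0.25               # typical engine thermal efficiency
usable_kwh_per_liter = kwh_per_liter * engine_efficiency    # ~2.4 kWh of useful work per liter

kwh_per_watt = hours_in_operation / 1000                    # 1 W for 6,000 h = 6 kWh
liters_per_watt = kwh_per_watt / usable_kwh_per_liter       # ~2.5 liters of gas per Watt

adas_power_w = 250                     # typical GPU-based ADAS system
print(f"{liters_per_watt:.1f} l/W -> {adas_power_w * liters_per_watt:.0f} l of gas")  # ~630 l
```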

In addition, the weight of the compute module also contributes to gas usage. Each kilogram the vehicle has to carry around over its lifetime translates into another 12 liters of gas burned. For a typical system that weighs 2.5 kg, this amounts to 30 liters of fuel. In total, these big ADAS systems require about 660 liters of gas over their lifetime in the vehicle. Since each liter of gasoline turns into 2.34 kg of CO2 during combustion, the total comes to about 1.5 tons of CO2 per vehicle.
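
Adding the weight term and converting the total fuel into CO2 in the same sketch (again using only the figures quoted above):

```python
# Add the weight contribution and convert the total fuel into CO2 (figures from the text)
liters_for_compute = 630          # ~630 l for the 250 W compute load (previous sketch)
liters_per_kg = 12                # extra fuel burned per kg carried over the vehicle's lifetime
module_weight_kg = 2.5            # typical ADAS compute module
liters_for_weight = module_weight_kg * liters_per_kg         # 30 liters

total_liters = liters_for_compute + liters_for_weight        # ~660 liters
co2_kg_per_liter = 2.34           # CO2 released per liter of gasoline burned
print(f"{total_liters:.0f} l of gas, {total_liters * co2_kg_per_liter / 1000:.1f} t of CO2")
```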

The vision processing efficiency of the videantis processor solution is much higher than that of the systems on the market today. Benchmarking has shown that systems based on the videantis processor architecture typically provide 50x better deep learning and visual computing performance per Watt, along with a weight reduction of about 80%. This results in a saving of at least 1 ton of CO2 per vehicle over its lifetime.
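
Using the same back-of-envelope numbers, the per-vehicle saving under these assumptions works out roughly as follows:

```python
# Per-vehicle saving under the assumptions above: 50x performance per Watt, 80% less weight
liters_per_watt = 2.5             # from the fuel calculation above
liters_per_kg = 12                # extra fuel per kg carried over the vehicle's lifetime
co2_kg_per_liter = 2.34           # CO2 per liter of gasoline burned

baseline_liters = 250 * liters_per_watt + 2.5 * liters_per_kg                    # ~655 l
videantis_liters = (250 / 50) * liters_per_watt + (2.5 * 0.2) * liters_per_kg    # ~18.5 l
saving_tons = (baseline_liters - videantis_liters) * co2_kg_per_liter / 1000
print(f"saving per vehicle: {saving_tons:.1f} t of CO2")    # ~1.5 t, i.e. at least 1 ton
```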

There are about 300 million vehicles on the road in the EU today. If all of them used the videantis processing architecture instead of systems similar to NVIDIA’s or Tesla’s, they would save about 300 million tons of CO2 combined over their lifespan. It takes about 2 million acres of trees to offset such a carbon footprint, a forest the size of a small country. It’s another good reason for our engineers at videantis to optimize for low power.
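
And scaled to the EU fleet, sticking with the conservative figure of one ton saved per vehicle:

```python
# Fleet-wide figure, using the conservative "at least 1 ton per vehicle" from above
eu_vehicles = 300_000_000
saving_per_vehicle_tons = 1.0
print(f"about {eu_vehicles * saving_per_vehicle_tons / 1e6:.0f} million tons of CO2 saved")
```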