Show report: Linley Spring Processor Conference 2018

The Linley Group has been covering processors for what seems like forever. They’re especially known for their excellent Microprocessor Report, which has been running since 1987. These MPR articles provide a thorough analysis of the inner workings and feature sets of microprocessors, which are increasingly application-specific. The Linley Group also organizes two events a year: a spring and a fall processor conference. We joined about 400 microprocessor engineers at the spring processor conference in Santa Clara last month.

There were 21 talks in a single track, spread over two days. We presented our talk “A Scalable Platform for Deep Learning and Visual Compute” on the first day of the conference. There was also an exhibition on day 1, where we showed our low-power deep learning processor demos. The conference covers a range of topics centered on processors and semiconductor technologies for use in deep learning, embedded, communications, automotive, IoT, and server designs.

Linley’s opening talk

Linley Gwennap himself opened the conference with an overview of market, technology, equipment design, and silicon trends. Linley first gave an overview of where AI is providing value to the consumer, which in turn is influencing processing architectures. The data center is leading there, providing voice interfaces, more accurate search engines, face recognition, language translation, spam filtering, and more. In these cases, queries from phones or PCs are sent to data centers for processing, where AI is quickly becoming a larger portion of the workload, so data centers need to be optimized for this new workload. FPGAs, GPUs (e.g. NVIDIA), and ASICs (e.g. Google’s TPU) are showing the way there.

As a next key application area for AI, Linley highlighted the automotive industry. According to Linley, the smart car of the very near future includes 6-10 cameras, radar, and lidar sensors, and processes this data using deep learning neural network techniques to identify road markings, signs, vehicles and pedestrians, and determine speed and direction.

A third key trend is to embed AI into client computing. While connected devices can run this functionality in the data center, it’s cheaper and more scalable to place this functionality inside the edge device. This also reduces latency and improves reliability and privacy. High-end smartphones already include AI accelerators, and these are likely to trickle down to lower-cost phones. PCs are likely to follow in 2-3 years’ time according to Linley.

Lastly, Linley showed the IoT domain as a key application area that’s adopting AI. Intelligent cameras, drones, and voice interface devices such as Alexa are in this category. Industrial applications include smart metering, parking, vending, and asset tracking. The intelligent consumer IoT industry is still slow to develop, says Linley, primarily because devices are too expensive, difficult to use, or don’t deliver a compelling use case.

Linley then gave an overview of the AI engines available for licensing on the market. His slide showed videantis as providing the highest-performance solution, as well as the smallest configuration, thanks to the fine-grained scalability of our architecture.

Our talk: A Scalable Platform for Deep Learning and Visual Compute

While some of the talks discussed important SoC design topics like security, memory, RISC-V, and interconnect fabrics, the key topics that almost all presentations touched on were AI and deep learning. About half of the presentations at the conference covered this exciting new area. We presented in the autonomous cars session. We gave an overview of the image processing and image understanding pipeline, showing how CPUs, GPUs, and hard-wired processing architectures can’t address the requirements. Our processor architecture scales in performance from ultra-low power and small area with a single core up to extreme performance levels beyond 25 TMACs per second with 256 cores. In our talk we showed that the architecture can run a wide variety of visual computing applications, including deep learning, computer vision, imaging, and video compression/decompression. This simplifies the overall design and allows targeting of different use cases. As examples, we showed a smart rear camera and a driver monitoring application running on the same SoC.

Wrapping up

It was a great show again, with lots learned. For those we met there: it was great talking to you. The complete proceedings are available for free after filling out a request form. Computer vision, AI, cameras, automotive, and intelligent sensing were all key topics at the show, as the proceedings make clear. We’re excited to be one of the leaders in this space and to be working with some of the biggest names in the industry. Looking forward to seeing you again next time!

17/05/2018 / Marco Jacobs