Sorting Out Embedded Vision Systems
Ron Wilson, Editor-in-Chief at Altera, recently published an article titled “Sorting Out Embedded Vision Systems” that includes a section about our talk at the Embedded Vision Summit on the theory, implementation, and applications of Structure from Motion. The Embedded Vision Alliance also reprinted the article on their website (free login required). The article presents an overview of a computer vision processing pipeline, illustrated with examples from videantis and Dyson, and ends with an overview of convolutional neural networks.
The first two paragraphs of the article follow below:
Papers at this year’s Embedded Vision Summit suggested the vast range of ways that embedded systems can employ focused light as an input, and the even vaster range of algorithms and hardware implementations they require to render that input useful. Applications range from simple, static machine vision to classification and interpretation of real-time, multi-camera video. And hardware can range from microcontrollers to purpose-built supercomputers and arrays of neural-network emulators.

Perhaps surprisingly, most of the systems across this wide spectrum of requirements and implementations can be described as segments of a single processing pipeline. The simplest systems implement only the earliest stages of the pipeline. More demanding systems implement deeper stages, until we reach the point of machine intelligence, where all the stages of the pipeline are present, and may be coded into one giant neural-network model.

Read the rest of the article on the Altera website.
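The pipeline-of-stages idea in the excerpt above can be made concrete with a small sketch: systems of different sophistication run different-length prefixes of the same chain of stages. This is purely illustrative and not from the article; all stage names and data here are hypothetical toy stand-ins.

```python
# Illustrative sketch: a vision pipeline as composable stages, where a
# system's capability corresponds to how deep a prefix of stages it runs.
# All stage names and "features" here are toy placeholders.

def capture(raw):
    # Stage 1: acquire pixel data (here, just a list of 8-bit values).
    return raw

def preprocess(pixels):
    # Stage 2: e.g. normalize pixel values to the range [0, 1].
    return [p / 255 for p in pixels]

def extract_features(pixels):
    # Stage 3: toy "feature" -- mean brightness of the frame.
    return sum(pixels) / len(pixels)

def classify(feature):
    # Stage 4: trivial decision based on the extracted feature.
    return "bright" if feature > 0.5 else "dark"

def run_pipeline(raw, stages):
    # Feed the output of each stage into the next.
    data = raw
    for stage in stages:
        data = stage(data)
    return data

ALL_STAGES = [capture, preprocess, extract_features, classify]

# A simple machine-vision system might stop after preprocessing:
simple = run_pipeline([0, 128, 255], ALL_STAGES[:2])
# A full classification system runs every stage:
full = run_pipeline([200, 220, 240], ALL_STAGES)
```

The point mirrors the article's framing: the "deeper" systems do not use a different pipeline, they simply extend the same one further toward interpretation.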