Microprocessor Report reviews our new v-MP6000UDX deep learning processor
The Linley Group’s Microprocessor Report did a thorough write-up on our new deep learning processor that we announced at CES. In the article, senior analyst Mike Demler labels videantis a vision specialist, reporting that while our competition has just entered ADAS and the autonomous-vehicle segment, we have already won designs and are well-established.
Mike provides an analysis of our processor’s performance, and concludes that our closest competitor tops out at 25% of the v-MP6000UDX performance, showing that videantis provides far more MAC capacity than any previously announced neural-network engine. Additional conclusions are that videantis offers much-finer-grain scalability and offers a complete vision-processing subsystem.
We will present our new architecture at the Linley Spring Processor Conference in Santa Clara, CA on April 11 – 12 and are looking forward to meeting you there.
Interested in reading the full article? Drop us an email and we will send you a copy of the 4-page analysis.
Article: Computer vision in surround view applications
We recently published “Computer Vision in Surround View Applications,” together with Embedded Vision Alliance member company AMD and our partner ADASens. In the article we go into more detail on how to “stitch” together multiple images capturing varying viewpoints of a scene in a variety of applications, including automotive, and show how surround view systems use computer vision techniques to accomplish this task.
High quality of results is a critical requirement, and one that’s a particular challenge in price-sensitive consumer and similar applications due to their cost-driven quality shortcomings in optics, image sensors, and other components. Quality and cost aren’t the only factors that bear consideration in a design, either: power consumption, size and weight, latency, and other performance metrics are also critical.
Looking for a position where you can learn a lot and make a big impact at the same time? At videantis we all take a regular deep dive into artificial intelligence, self-driving vehicles, and embedded vision. We’re growing quickly and are expanding our teams.
We are looking for stellar hardware and software engineers who can keep up with the pace of the rest of our team. Interested in taking on a challenge? Do you have experience in deep learning, embedded processing, low-power parallel architectures, or performance optimization? We’d love to hear from you.
Waymo posts 360-degree video of what their cars see
Waymo began as the Google self-driving car project in 2009. A key hurdle to mass adoption of self-driving cars is getting legislators and the public to trust that they really work. Waymo posted a new 360-degree video on YouTube that provides a nice visualization of how the car senses its surroundings using cameras, radar and lidar. Move your mouse or phone around to get a complete view of the surroundings. See video
Samsung Galaxy S9 counts calories in your food using the camera
“This year over 1 trillion videos and images will be snapped, edited and shared. We haven’t just updated the camera. We’ve completely rethought the entire camera experience for how you communicate today,” says Samsung. Besides pushing picture quality to new levels, phones are integrating more computer-vision-enabled features to enhance the user experience. Samsung’s phone can auto-translate text in a foreign language or identify your food and spit out a calorie count. Deep learning and computer vision are rapidly making our devices more intelligent.
See videos at Image Sensors World or an overview of Bixby Food.
Linley Spring Processor Conference | April 11 – 12, 2018, Hyatt Regency, Santa Clara, CA | The only event solely focused on processors for deep learning, embedded, communications, automotive, IoT, and server designs.
Embedded Vision Summit | May 22 – 24, 2018, Santa Clara, California | The premier event for product creators who want to bring visual intelligence to products.
Schedule a meeting with us by sending an email to firstname.lastname@example.org. We’re always interested in discussing your video, vision, and deep learning SoC design ideas and challenges. We look forward to talking with you!
Was this newsletter forwarded to you and you’d like to subscribe? Click here.