Come see us at ARM TechCon on October 1 & 2 in Santa Clara
Ranked as one of the must-attend events in the embedded industry, ARM TechCon is more than a conference. Join 3,600+ hardware designers and software developers at the Santa Clara Convention Center and learn about the latest in SOC design, video coding and computer vision processing.
Demonstrations at our booth
We’ll be showing our video coding and computer vision capabilities during the event at booth #813. Please stop by and see our HD multi-standard video encode/decode and computer vision demonstrations: low-delay codecs, object detection, optical flow, face detection, and many more.
Our experts are happy to discuss how to future-proof your SOC for the latest video coding standards and computer vision techniques. We look forward to speaking with you.
Set up a meeting in advance by contacting email@example.com
New video: pedestrian detection, low-delay H.264 and optical flow
We recently demonstrated our embedded vision products for automotive, mobile and home applications at the Embedded Vision Summit. In this video, we’re showing our pedestrian detection implementation running on the videantis v-MP4000HDX multicore architecture. In addition, we’re demonstrating feature detection combined with optical flow, and our low-delay H.264 High Intra automotive Ethernet AVB codec. For more details on the HOG object detector, watch our video tutorial on histogram of oriented gradients for pedestrian detection.
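The HOG detector mentioned above builds on a simple idea: divide the image into small cells and, in each cell, accumulate gradient magnitudes into a histogram of gradient orientations. As a rough illustration (not the videantis implementation), here is a minimal sketch of that per-cell histogram, following the unsigned 0–180° binning of Dalal & Triggs; the function name and parameters are our own for this example:

```python
import math

def hog_cell_histogram(cell, num_bins=9):
    """Orientation histogram for one cell of a grayscale image.

    Illustrative sketch of the HOG building block: gradient magnitudes
    are accumulated into unsigned-orientation bins spanning 0..180
    degrees. `cell` is a 2D list of pixel values.
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * num_bins
    bin_width = 180.0 / num_bins
    for y in range(1, h - 1):          # skip the border for simplicity
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            b = min(int(angle / bin_width), num_bins - 1)
            hist[b] += magnitude
    return hist

# A vertical edge: all gradients point horizontally (0 degrees),
# so the gradient energy lands in the first orientation bin.
cell = [[0] * 4 + [255] * 4 for _ in range(8)]
hist = hog_cell_histogram(cell)
```

A real detector normalizes these histograms over overlapping blocks, concatenates them into a feature vector, and feeds that to a classifier such as a linear SVM scanned over the image.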
View the demonstration video on the EVA website.
Employee interview: Andreas Dehnhardt, VP Applications Engineering, Co-founder
This is the second in a series of employee interviews. Today we speak to Andreas, who’s been leading the software engineering department at videantis since the company spun out in 2004. We started off by asking him which accomplishment he’s most proud of.
Andreas: I could tell you about all the software we’ve ported and optimized, or the clever new architectural features we’ve developed, but the truth is that it’s much simpler than that. What I’m most proud of is that everything we’ve delivered to our customers worked as promised and is competitive. We have many customers that are shipping products in volume, and there’s nothing more satisfying than being able to buy a product that you know has your technology inside. Whether it’s a car in a showroom, or a piece of electronics at the local Media Markt or Fry’s, I always think: “my stuff is in there”, and that’s something to be proud of.
New introductory article: embedded vision, we’re only at the beginning
Smart image analysis has enormous potential. An image sensor produces copious amounts of data. The embedded vision algorithms and platforms that can interpret and give meaning to this data enable completely new user experiences, on the mobile phone, in the home, and in the car. Image processing, according to Wikipedia, is “Any method of signal processing for which the input is an image, like a photo or frame of video, and the output is either an image, or a collection of characteristics”. This kind of image processing is all around us. Our mobile phones do it, for example, as do our TVs… and so do we.
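That definition covers two kinds of output: another image, or a set of characteristics. A toy example of each, assuming a tiny image stored as rows of RGB tuples (both helper names are made up for this illustration):

```python
def to_grayscale(pixel):
    """Image-to-image processing, one pixel at a time:
    RGB -> luma using the ITU-R BT.601 weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def mean_brightness(image):
    """Image-to-characteristic processing: reduce a whole
    grayscale image to a single number."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

rgb = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = [[to_grayscale(p) for p in row] for row in rgb]  # an image out
brightness = mean_brightness(gray)                      # a number out
```

Embedded vision pushes the second kind much further: instead of one number, the output is meaning, such as “there is a pedestrian here”.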
For us humans, the eyes perform simple image processing tasks: focusing (sometimes with the help of glasses), controlling exposure, and capturing color and light information. The interesting part starts in our brain, however, where we interpret the images and give meaning to them. Research suggests that roughly half of the brain’s cortex is involved in processing visual information. Apparently this is a compute-intensive task, as it requires lots of synapses and neurons performing their magic. But it does pay off.
Interesting industry news
Build your own autonomous car with a can of Coke
This is, without a doubt, a really stupid thing to actually try. So don’t. It is also, without a doubt, awesome to see it in action. Strapping a can of Coke to the steering wheel proves what we knew all along: Mercedes’s Active Lane Assist is basically a hands-off autonomous cruise system if you disable the safety timeout. So watch, be amazed, but don’t be stupid enough to try it yourself. Watch the video
Google’s Johnny Lee describes Project Tango: integrating 3D Vision into smartphones
Project Tango is an effort to harvest research in computer vision and robotics and concentrate that technology into a mobile platform. It uses vision and sensor fusion to estimate the position and orientation of the device in real time, while generating a 3D map of the environment. This is the best explanation of Project Tango that we’ve seen (registration required). In the video, Lee shows how the system works, what it’s capable of, and the target applications Google is going after.
Robot vacuums now adding cameras and computer vision
iRobot launched the Roomba floor-vacuuming robot over ten years ago, but 2014 seems to be the year that the mainstream domestic appliance companies are going after this market too. While the first models mainly used infrared and acoustic sensors, the new trend is to include cameras and embedded vision techniques to steer the robot. Samsung adds an upward-looking camera, Dyson includes a 360-degree panoramic camera, LG uses two cameras, and Miele also opted for an upward-looking camera. Read an overview of robot vacuums.
| IBC | Sep 13-17, 2014, Amsterdam, the Netherlands | Talk to us at this global broadcast event |
| ARM TechCon | Oct 1-3, 2014, Santa Clara, California, Booth 813 | Meet us at this key SOC design event |
| 2014 Ethernet & IP @ Automotive Technology Day | Oct 23-24, 2014, Detroit, Michigan, Booth 304 | Come see our automotive demonstrations |
Schedule a meeting with us by sending an email to firstname.lastname@example.org. We’re always interested in discussing your video and vision SOC design ideas and challenges. We look forward to talking with you!