News: videantis and gestigon partner to bring gesture recognition technology to the automotive market
Last week we announced our partnership with gestigon, which combines our low-power vision processing IP platform with gestigon’s unique skeleton tracking and gesture recognition algorithms. Besides targeting consumer electronics, the cooperation primarily aims to bring gesture and pose recognition technology to the automotive market, which is rapidly adopting a wide variety of vision-based technologies.
We also conducted an interview with gestigon’s CTO, Sascha Klement. Sascha spoke about what’s hard about gesture recognition and skeleton tracking, and gave a glimpse into the future with buttons in the car that react differently based on who’s turning the knob.
Next events: Mobile World Congress and the Embedded Vision Summit
Next week all eyes will be on the Mobile World Congress in Barcelona again. HTC, Samsung, Sony and LG are all expected to announce their new flagship devices. Besides these consumer products, the Mobile World Congress is the place to talk about the future of key mobile technologies like new cameras, computational imaging, computer vision, coding and displays.
Another upcoming event is the Embedded Vision Summit in May. Register now for an early-bird discount. We’ll have a booth there and are looking forward to showing our new demonstrations and low power vision solutions.
New videos: our demos at CES
The Embedded Vision Alliance shot three videos of our product demonstrations during CES 2015 in Las Vegas.
Structure from Motion: Developed together with VISCODA, Structure from Motion allows the capture of 3D information using a single, standard 2D camera.
Pedestrian detection: This demonstration shows our pedestrian detection, based on HOG/SVM. We also show a Haar-based object detector, which can be used for face detection or to recognize different types of objects.
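As background on the technique: a HOG (histogram of oriented gradients) detector works by binning local gradient orientations into per-cell histograms, which an SVM then classifies. A minimal, pure-Python sketch of the per-cell histogram step is below; the function name and the 9-bin, unsigned-orientation layout are illustrative conventions, not the videantis implementation.

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Compute an unsigned-orientation gradient histogram for one cell.

    cell: 2D list of grayscale intensities (e.g. an 8x8 patch).
    Returns a list of `bins` accumulated gradient magnitudes,
    covering orientations 0..180 degrees (unsigned gradients).
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):          # skip the border pixels
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # fold to 0..180
            hist[int(ang / 180.0 * bins) % bins] += mag     # vote by magnitude
    return hist

# A cell containing a vertical edge: all gradient energy lands in bin 0
# (horizontal gradient direction), the other bins stay empty.
edge_cell = [[0] * 4 + [255] * 4 for _ in range(8)]
hist = hog_cell_histogram(edge_cell)
```

In a full detector, these per-cell histograms are normalized over overlapping blocks and concatenated into the feature vector fed to the SVM.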
High Intra H.264 video coding: This demonstration shows 10/12-bit High Intra H.264 video encode and decode of a live video stream for Ethernet AVB applications. The overall glass-to-glass latency is under 10 ms, with the latency within the encoder or decoder under 1 ms.
Interesting industry news
Driverless cars now outperform skilled racing drivers
Engineers at Stanford University built a souped-up Audi TTS dubbed ‘Shelley’ that has been programmed to race on its own at speeds above 120 mph. When they tested it against David Vodden, the racetrack CEO and amateur touring class champion, Shelley was faster by 0.4 seconds.
Read the story and watch the video.
Machine vision algorithms beat humans at image recognition
First computers beat the best of us at chess, then poker, and finally Jeopardy. The next hurdle is image recognition — surely a computer can’t do that as well as a human. Check that one off the list, too. Microsoft and Google developed vision algorithms that beat us at recognizing objects in a wide variety of images.
Read more at EETimes or Kitguru.
Uber teams up with Carnegie Mellon on self-driving cars
It looks like Uber wants to start working on its own self-driving cars. With cars that drive themselves, you don’t need to own a car: you can (hypothetically) have access to one whenever you need it, without depending on the availability of a human driver, as Uber’s current service does.
Read the article.
Mobile World Congress, March 2-5, 2015, Barcelona: Set up a meeting at this key global mobile event.
Embedded Vision Summit, May 12, Santa Clara, California: Come see our demonstrations at the main embedded vision event of the year.
SAMOS, July 20, Samos, Greece: Marco Jacobs presents the invited keynote “Visual processing sparks a new class of processors”.
Schedule a meeting with us by sending an email to firstname.lastname@example.org. We’re always interested in discussing your video and vision SOC design ideas and challenges. We look forward to talking with you!