Breakthrough Technology for Extracting Insights from Raw Video Footage at Scale
Today, billions of video cameras are used by cities and businesses worldwide to collect potentially valuable information. Until now, however, tagging content and extracting useful information from raw video footage has been a laborious, cost-prohibitive process requiring an initial frame-by-frame analysis of objects by human beings. In 2017, U-M computer science and electrical engineering professor Jason Corso and then-Ph.D. student Brian Moore used a $1.25 million federal grant to build a “road sensing” video analytics platform that automates the identification of key video content—ranging from road signs and painted markings to counts of vehicles and pedestrians—at a rate roughly 2,000 times faster than human annotators. In 2018, with $2 million in seed-round venture capital funding from eLab Ventures, they launched Voxel51. Now, with 15 employees, the Ann Arbor-based company is marketing its video understanding platform to enable customers to deploy models at scale and make real-time, data-driven decisions that improve transportation and mobility solutions.
As academics, we have focused our research primarily on video understanding—on ways of leveraging AI, machine learning, and computer vision to create tools for dynamic video analysis. Through Voxel51, we do video understanding—providing a unique video-first platform that enables customers to connect their raw video data to our system, automatically process that data, and extract useful insights much more cost-effectively than they could on their own. And unlike competing companies, we take a friendly, hands-off approach to customer data.
– Jason Corso, Professor of Electrical Engineering and Computer Science; Co-Founder and CEO, Voxel51