3rd Open Call
Verchable’s contextual video understanding engine analyzes and searches video at scale. Powered by proprietary computer vision, the engine generates time-stamped, spatially aware metadata in real time to understand the events, actions, people, and objects within videos.
Taking video as input, the engine lets customers choose parameters, generate the metadata types they need, and then search across their large video libraries.
Discover the product’s business and technical approach:
At Verchable, we are building scalable, next-generation video understanding. Our proprietary multiple-object-tracking AI runs at 4,000 fps (150x faster than real-time video streaming) with low computational cost. We understand videos contextually, identifying people, actions, and events, and enabling real-time creation of metadata that is both spatially and temporally aware. Verchable’s engine is currently applied in the media and entertainment industry, enabling customers to analyze massive video libraries at scale, cheaper and faster than anything else on the market.
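To make the idea of time-stamped, spatially aware metadata concrete, here is a minimal sketch of what such records and a search over them could look like. This is purely illustrative: the record fields, labels, and the `search` helper are assumptions, not Verchable’s actual schema or API.

```python
from dataclasses import dataclass

@dataclass
class MetadataEvent:
    # Hypothetical record: one detection at one moment in a video
    video_id: str
    timestamp_s: float   # time offset into the video (time-series aware)
    bbox: tuple          # (x, y, w, h) in pixels (spatially aware)
    label: str           # e.g. "person", "ball", "goal celebration"
    confidence: float

def search(events, label, start_s=0.0, end_s=float("inf")):
    """Return events matching a label within a time window."""
    return [e for e in events
            if e.label == label and start_s <= e.timestamp_s <= end_s]

# A toy index of generated metadata (illustrative values only)
index = [
    MetadataEvent("match_01", 12.4, (320, 180, 64, 128), "person", 0.97),
    MetadataEvent("match_01", 12.4, (400, 200, 32, 32), "ball", 0.91),
    MetadataEvent("match_01", 75.0, (310, 170, 70, 130), "person", 0.95),
]

# Find every "person" detection in the first minute of the video
hits = search(index, "person", start_s=0.0, end_s=60.0)
print(len(hits))  # → 1
```

In a real deployment these records would live in a searchable store rather than a Python list, but the shape of the data — a label anchored to a timestamp and a bounding box — is what makes queries like "people in the first minute" possible.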
Verchable was founded in 2019 and is backed by Entrepreneur First and the University of Cambridge’s Accelerate program.