Google Releases Real-time Mobile Hand Tracking to R&D Community


    Google has released its mobile device-based hand tracking method, built on machine learning, to researchers and developers, something Google Research engineers Valentin Bazarevsky and Fan Zhang call a “new approach to hand perception.”

    First unveiled at CVPR 2019 back in June, Google’s on-device, real-time hand tracking method is now available for developers to explore—implemented in MediaPipe, an open source cross-platform framework for developers looking to build processing pipelines to handle perceptual data, like video and audio.

    The approach is said to provide high-fidelity hand and finger tracking via machine learning, inferring 21 3D ‘keypoints’ of a hand from just a single frame.
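    To get a feel for what those keypoints look like in practice, here is a minimal sketch using MediaPipe’s Python “Hands” solution, which packages this pipeline; the webcam index and confidence thresholds below are illustrative choices, not values from Google’s post.

```python
# Minimal sketch: reading 21 3D hand keypoints per frame with MediaPipe's
# Python "Hands" solution (assumes `pip install mediapipe opencv-python`).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam; index is illustrative
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 landmarks, each with normalized x, y and relative depth z.
                for i, lm in enumerate(hand.landmark):
                    print(i, lm.x, lm.y, lm.z)
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```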

    “Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands,” Bazarevsky and Zhang say in a blog post.


    Google Research hopes its hand-tracking methods will spark in the community “creative use cases, stimulating new applications and new research avenues.”

    Bazarevsky and Zhang explain that there are three primary systems at play in their hand tracking method: a palm detector model (called BlazePalm), a ‘hand landmark’ model that returns high-fidelity 3D hand keypoints, and a gesture recognizer that classifies keypoint configurations into a discrete set of gestures.
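    Google’s actual gesture recognizer isn’t spelled out as an API in the post, but the core idea of mapping a keypoint configuration to a discrete gesture can be illustrated with a simple heuristic; the finger-extension rule and gesture labels below are hypothetical simplifications, not Google’s classifier.

```python
# Illustrative sketch of the "keypoint configuration -> discrete gesture" idea.
# Landmark indices follow MediaPipe's hand model (0 = wrist, 8/12/16/20 = fingertips);
# the extension heuristic and gesture names are simplified assumptions.
from typing import List, Tuple

Point = Tuple[float, float, float]  # normalized x, y and relative depth z

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joints

def classify_gesture(landmarks: List[Point]) -> str:
    """Map 21 hand keypoints to a coarse gesture label (upright hand assumed)."""
    # A finger counts as extended if its tip sits above its PIP joint in image space.
    extended = [landmarks[tip][1] < landmarks[pip][1]
                for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)]
    count = sum(extended)
    if count == 0:
        return "fist"
    if count == 4:
        return "open palm"
    if extended[0] and count == 1:
        return "pointing"
    return "unknown"
```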


    Here are a few salient bits, boiled down from the full blog post: