Watch Meta demonstrate full-body VR tracking with just a Quest headset

Meta Reality Labs is making big strides in avatar rendering. Its latest research combines machine learning (ML) with sensor data from Quest VR headsets to show your full body, including arms, legs, torso, and head. The result is a remarkably realistic and accurate representation of the pose and movements of a person wearing a Quest 2 headset.

QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars

This is quite a feat, since only the positions and orientations of the Quest VR headset and its two controllers are used to estimate leg position and motion. No tracking bands are placed on the legs, and no external cameras are required for this remarkable system. Meta Research Scientist Alexander Winkler shared several videos on Twitter, along with links to the scientific paper on arXiv and a YouTube video with more detail.

In research terms, this is known as motion tracking from sparse sensors, and ML is particularly adept at extracting meaningful information from very little data when there are strong dependencies between what is known and what is unknown. Since we swing our arms for balance when walking and running, arm movement is a good indicator of what the legs are doing. Combined with head tilt and direction, the ML system can predict most human motion quite accurately.
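To give a feel for how sparse the input really is, here is a minimal, purely illustrative sketch in Python. This is not Meta's code or model; QuestSim actually trains a reinforcement-learning policy that drives a physically simulated character, and every name and number below (the 18-joint skeleton, the stub predictor) is an assumption made up for illustration. The point is the shape of the problem: three tracked devices in, a full skeleton out.

```python
# Illustrative sketch only (hypothetical, not Meta's implementation):
# full-body pose estimation from sparse sensors.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DevicePose:
    position: Tuple[float, float, float]       # (x, y, z) in meters
    orientation: Tuple[float, float, float, float]  # quaternion (w, x, y, z)

def flatten_sparse_input(head: DevicePose,
                         left: DevicePose,
                         right: DevicePose) -> List[float]:
    """Pack the three tracked poses into one feature vector.

    3 devices x (3 position + 4 quaternion) = 21 numbers -- all the
    signal the headset has about the wearer's entire body.
    """
    features: List[float] = []
    for dev in (head, left, right):
        features.extend(dev.position)
        features.extend(dev.orientation)
    return features

def predict_body_pose(features: List[float],
                      num_joints: int = 18) -> List[Tuple[float, ...]]:
    """Stand-in for a learned model.

    In a real system, a network trained on motion-capture data would
    fill in the unobserved joints here, exploiting correlations such
    as arm swing vs. leg phase. This stub just returns an identity
    rotation for each joint of a hypothetical 18-joint skeleton.
    """
    assert len(features) == 21, "expected head + two controllers"
    return [(1.0, 0.0, 0.0, 0.0)] * num_joints
```

The design point the sketch makes is how underdetermined the task is: 21 inputs must determine dozens of joint rotations, which is only possible because human motion is highly correlated.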

Body tracking in VR using a Quest headset.

More traditional approaches to tracking limbs rely on extra hardware, such as reflective markers placed on your legs and torso that are identified by external cameras, or bands with wireless beacons worn at various locations on your legs to transmit position and motion data.

While these methods work, they are typically sold as accessories that add cost and aren't well supported in most apps and games. If Meta can achieve results this good with just a Quest 2 headset, developers will be more likely to build body tracking into their games and apps.

Despite these significant advances, Meta Reality Labs admits that more work needs to be done. If you move fast enough, the ML model fails to correctly identify your pose. Unusual stances are difficult for the system to estimate, and if the virtual environment contains obstacles that don't exist in reality, the movement won't match. The overall effect seems very good, however, and it would be a nice upgrade to see a full body instead of floating torsos when chatting with friends in VR.

Hopefully, this technology will be ready for launch soon. With Meta's Quest Pro headset announcement expected in just a few weeks, the timing for full-body avatars seems perfect. The Meta Quest Pro is expected to track eye movement and facial expressions; combined with full-body tracking, the sense of presence you share with friends, family, and coworkers could be greatly enhanced in the near future.
