Meta demonstrates new research to optimize controller-free full-body tracking with new avatar model "AGRoL"
Meta AI, the artificial intelligence lab of Meta, is dedicated to developing forms of artificial intelligence that improve augmented reality and virtual reality technologies. According to a May 11 news release, Meta AI researchers demonstrated a new avatar body-posture solution called "AGRoL" (short for "Avatars Grow Legs"), which lets users' avatars in virtual worlds move with more natural and realistic walking, running, jumping and other motions.
Meta AI says that AGRoL is an ongoing research project, and there is still a lot of room for improvement and optimization. Meta AI hopes to provide users with a higher quality and more diverse virtual reality experience through continuous research and innovation.
With the inside-out tracking of VR headsets such as the Meta Quest 2 or Quest Pro, primarily the user's head and hands are tracked. Because the tracking cameras are integrated into the headset housing, the rest of the body remains largely invisible to motion detection.
In one study, Meta AI proposed a new avatar system that uses AI to synthesize fluid motion of the entire body from this sparse data, greatly surpassing previous avatar animations.
"Avatars Grow Legs", or "AGRoL" for short, is a diffusion model specifically designed to reconstruct whole-body motion from only a sparse upper-body signal. It is based on a multilayer perceptron (MLP) architecture and a new conditioning scheme for motion data.
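The idea of an MLP denoiser conditioned on sparse tracking data can be sketched roughly as follows. This is a minimal illustration, not the paper's actual architecture: the dimensions, the single hidden layer, and the timestep encoding are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 22 body joints x 6D rotation = 132 pose features per frame,
# 18 conditioning features per frame from the tracked head and hands.
FRAMES, POSE_DIM, COND_DIM, HIDDEN = 8, 132, 18, 64

def mlp_denoiser(noisy_motion, cond, t, W1, W2):
    """One denoising step: predict clean full-body motion from the noisy
    motion, the sparse-tracking conditioning, and the diffusion timestep t."""
    t_feat = np.full((noisy_motion.shape[0], 1), t / 1000.0)  # scalar timestep feature
    x = np.concatenate([noisy_motion, cond, t_feat], axis=-1)
    h = np.maximum(x @ W1, 0.0)   # hidden layer with ReLU
    return h @ W2                 # predicted clean pose sequence

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(0.0, 0.02, (POSE_DIM + COND_DIM + 1, HIDDEN))
W2 = rng.normal(0.0, 0.02, (HIDDEN, POSE_DIM))

noisy = rng.normal(size=(FRAMES, POSE_DIM))  # fully corrupted body pose sequence
cond = rng.normal(size=(FRAMES, COND_DIM))   # head/hand tracking signal per frame

pred = mlp_denoiser(noisy, cond, 999, W1, W2)
print(pred.shape)  # (8, 132)
```

During sampling, a step like this would be applied repeatedly, moving from pure noise toward a motion sequence consistent with the head and hand signals.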
According to the researchers, the model can predict accurate and fluid motion of the entire body, which could solve the problem of VR full-body tracking; even the lower body, which is traditionally much harder to track, should not be a problem. So far, artifacts have occurred only occasionally at ground contact.
Since AGRoL can run in real time, it should also work for online applications. This could soon make Horizon Worlds and other social VR apps without legged avatars a thing of the past.
The researchers used the AMASS motion-capture dataset to demonstrate the effectiveness of the model. Compared to other avatar systems such as AvatarPoser, AGRoL shows significantly lower rotation, position and velocity errors.
As a result, the motion generated by AGRoL is more accurate and smoother. Most importantly, AGRoL-based avatars are said to exhibit arm and leg jitter much less frequently.
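Position and velocity errors of the kind reported here are typically mean per-joint distances between predicted and ground-truth joint trajectories. A minimal sketch with synthetic data (the joint count, units, and noise level are illustrative assumptions, not the paper's evaluation setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint trajectories: (frames, joints, xyz) in metres.
gt = rng.normal(size=(30, 22, 3))
pred = gt + rng.normal(scale=0.01, size=gt.shape)  # prediction with small noise

# Position error: mean Euclidean distance per joint and frame.
pos_err = np.linalg.norm(pred - gt, axis=-1).mean()

# Velocity error: same distance on frame-to-frame differences; this
# penalises jitter even when per-frame positions are nearly correct.
vel_err = np.linalg.norm(np.diff(pred, axis=0) - np.diff(gt, axis=0), axis=-1).mean()

print(pos_err > 0 and vel_err > 0)
```

The velocity metric is what captures the "shaking" artifacts mentioned above: jittery limbs inflate frame-to-frame differences even when the average position error stays small.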