Are Facebook’s hyper-realistic avatars window dressing?

At the Meta Connect conference, Mark Zuckerberg showed off the progress of his Codec Avatar technology. But this ultra-realistic modeling of the face remains far from accessible.

“Please don’t make people believe that hyper-realism can be done on the cheap. Today, that is simply not true,” says Vincent Haeffner, production manager at Effigy, a studio specializing in digital creation and human modeling.

Yet this is exactly the technology Mark Zuckerberg tried to sell to the general public on October 11. During the Meta Connect conference, the Facebook boss appeared in the form of an ultra-realistic avatar, to guaranteed wow effect: the virtual features, facial expressions and skin texture seemed more real than life.

The Codec Avatar technology – now in version 2.0 – is also one of the major challenges Facebook must solve to immerse us in its universe, or rather its metaverse. Better still, the group hopes to do it “cheaply”, by having users scan their face with a smartphone in a few seconds.

Huge amount of data

Unveiled in 2019, the Codec Avatar project aims to break down the barriers of virtual reality and reproduce the authenticity of an in-person exchange between two people. The idea is to make the avatar indistinguishable from the person, thanks to extremely faithful modeling. But for that, a visit to a motion-capture studio is mandatory – heavy-duty equipment for ordinary mortals.

In its American laboratory in Pittsburgh, Pennsylvania, Facebook has installed two such capture rigs. In total, hundreds of cameras record data at a rate of around 180 GB per second. In 2019, the company said a capture session took about 15 minutes. To grasp the scale of such an operation: a computer equipped with a 512 GB hard drive would be saturated in the space of about three seconds.
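The article’s own figures give a sense of the scale: a 512 GB drive saturating in about three seconds implies a capture rate on the order of 170–180 GB per second. A quick back-of-the-envelope check (the exact rate below is an assumption inferred from that three-second figure, not an official spec):

```python
# Sanity check on the capture rig's data rate.
CAPTURE_RATE_GB_S = 180   # assumed aggregate rate of the camera rig
DRIVE_GB = 512            # consumer hard drive from the article

fill_time_s = DRIVE_GB / CAPTURE_RATE_GB_S
print(f"A {DRIVE_GB} GB drive fills in {fill_time_s:.1f} s")  # prints 2.8 s

# A full 15-minute session at that rate:
session_gb = CAPTURE_RATE_GB_S * 15 * 60
print(f"One session produces about {session_gb / 1000:.0f} TB")  # 162 TB
```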

This huge amount of data is then processed by photogrammetry. This technique makes it possible to determine the dimensions and volumes of an object – here, a face – from measurements taken on photographs shot from multiple perspectives.
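At its core, photogrammetry triangulates 3D positions from matching points seen by several cameras. Here is a minimal sketch of that principle with two toy cameras (the camera matrices and the point are illustrative values, not anything from Facebook’s actual pipeline):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coords x1, x2 in cameras P1, P2
    via linear triangulation (DLT): solve A @ X = 0 for homogeneous X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A
    return X[:3] / X[3]        # de-homogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 4.0])        # a point on the "face"
h1 = P1 @ np.append(X_true, 1)             # project into camera 1
x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1)             # project into camera 2
x2 = h2[:2] / h2[2]

print(triangulate(P1, P2, x1, x2))         # ≈ [0.3, -0.2, 4.0]
```

With hundreds of cameras instead of two, the same least-squares idea is solved over many more rows, which is part of why the processing is so heavy.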

10 months to create the Aya Nakamura avatar

“There are several stages,” explains Vincent Haeffner. “First you have to create the volume of the face. Then you have to set up a whole skeletal animation system, then the oral cavity, the mouth, the tongue. Once the model is ready to animate, it’s almost easy.”

The only problem: “there is no application that produces a mesh (the digital model of an object – editor’s note) automatically,” says Vincent Haeffner.

Today, the process remains artisanal, confirms Louis de Castro, the boss of Mado XR. Specializing in digital creation, this young French company worked on Aya Nakamura’s show in Fortnite at the beginning of October. The studio was responsible for the video broadcast on the giant in-game screens during the Franco-Malian singer’s interactive show.

“It took 10 months to create the video,” says Louis de Castro. “We scanned Aya Nakamura, recorded the performance and then animated the singer’s avatar.”

Further proof that creating a realistic 3D avatar is no quick task. Mark Zuckerberg is well aware of the obstacles his technology presents. That is why he unveiled Instant Codec Avatar.

This is a stripped-down version of the technology: no capture studio is needed, yet the result remains stunning, as the presentation video shows.

A smartphone would thus suffice, provided the lighting is good enough. Still, for its demonstration the company took care to use an iPhone, because Apple’s device has a lidar sensor that greatly helps capture the face correctly.

For two minutes, the person films themselves, first with a neutral face, then making expressions. The video is then sent to Facebook’s servers, which cut it into individual frames. This data is processed by a powerful computer or a dedicated server, and only several hours later does the person get an avatar ready to be animated. Unsurprisingly, Facebook is trying to shorten the time between a user sending their video and receiving their avatar.
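As an order-of-magnitude sketch of why the processing takes hours, consider what a two-minute clip turns into once cut into frames (the frame rate and resolution below are assumed values for illustration, not Meta’s actual specs):

```python
# What a two-minute phone capture amounts to once cut into frames.
FPS = 30                               # assumed smartphone frame rate
DURATION_S = 2 * 60                    # the two-minute capture
frames = FPS * DURATION_S
print(frames)                          # prints 3600 frames to process

# At 1080p, each raw RGB frame is about 6 MB before compression:
bytes_per_frame = 1920 * 1080 * 3
total_gb = frames * bytes_per_frame / 1e9
print(f"{total_gb:.0f} GB of raw pixels")   # prints 22 GB
```

Every one of those frames then goes through the photogrammetry-style reconstruction described earlier, which is what the servers spend those hours on.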

“This is one of the challenges of our business today: simplifying the processing that turns photos into a 3D object,” acknowledges the boss of Mado XR, who believes the Instant Codec Avatar technology could be deployed within a year.

But beyond this stripped-down solution, which may well suffice in the metaverse, Mark Zuckerberg’s ultra-realistic version poses a problem. It is not so much the modeling: since this summer, Epic Games has offered free tools to design your own avatar from photos. The result is certainly less refined than with Codec Avatar 2.0, but more realistic than Instant Codec Avatar.

Google is also in the race

The real difficulty for Meta comes from the sheer flow of data to be processed: it is too large to hope for real-time avatar animation in the metaverse. Especially since the Facebook boss plays with the reflections of light on his face, a feature that demands a great deal of computation.

Google will have to confront this same data-volume limitation with its Project Starline. The company is working on a screen that films a person and displays their interlocutor. The promise is to trick the human eye into simulating a real face-to-face conversation, but at a distance.

To achieve this, more than a dozen cameras and sensors film and track the person. The feat lies in compressing this data on the fly so it can be sent instantly to the interlocutor’s screen. On top of that, a 3D display simulates the depth of a person sitting in front of you.
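A rough calculation shows why that on-the-fly compression is the hard part (the camera count comes from the article; the resolution and frame rate are assumptions, not Google’s published specs):

```python
# Raw bandwidth of a dozen uncompressed camera feeds.
cams = 12                              # "more than a dozen cameras"
bytes_per_frame = 1920 * 1080 * 3      # assumed raw RGB at 1080p
fps = 60                               # assumed frame rate
raw_gbps = cams * bytes_per_frame * fps * 8 / 1e9
print(f"{raw_gbps:.0f} Gbit/s raw")    # prints 36 Gbit/s
```

Tens of gigabits per second is far beyond an ordinary internet link, so the streams must be compressed drastically, and in real time, before they leave the booth.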

The rendering is immersive but not yet perfect, according to The Verge. An early-access program has been set up to equip over 100 companies with the tool, affectionately referred to as the “magic window” by Andrew Nartker, director of product management for Project Starline.

Given their financial and technological barriers, these technologies do not yet seem within reach of the general public. Social interactions need more realism to overcome the feeling of isolation sometimes felt online. But for now, we’ll have to make do with avatars straight out of a cartoon… and without legs.
