EMOCA: Emotion Driven Monocular Face Capture and Animation (CVPR 2022)

5,668 views

Michael Black


A day ago

Comments: 17
@lelluc 2 years ago
This is wild, well done!
@cedricvillani8502 2 years ago
Just curious: why, after 6:49, did you decide not to show a continuous sentiment or emotion label, or overall analytics at the end showing how many times each label appeared? It would have been nice to see what the overall clip was estimating: was this a fight or an argument, was there deception going on, did this person seem to be in danger, or something else? ❤ Anyway, absolutely amazing work, congratulations 🎉 I love all the work you and your team do; I follow and read everything that comes out of your faculty. Wish I was there!
@MichaelBlackMPI 2 years ago
Super question. We were more focused on using emotion to get accurate 3D than on emotion recognition per se. But you are right that we could just show this, even though the processing is all single-frame and doesn't take the temporal nature into account. Emotions really evolve over time, so I think it is important to model that. My very first work on facial expressions, with Yaser Yacoob, used a very simple parametric model of face motion. From the parameters of the model over time, we recognized expressions surprisingly well for 1995! Here's the old video: kzbin.info/www/bejne/kJ-mmo2Ng9OBeZY
@cedricvillani8502 2 years ago
He failed the test, by the way: she was going back to her fiancé, not her husband.
@starstar-cr2hr 1 year ago
Thank you for the amazing work! I'm wondering if there's a way to apply this code to create a lively animated face, similar to Apple's Memoji, to replace a person's head in a video?
@starstar-cr2hr 1 year ago
Like, I would create a 3D animated character by analyzing the features of a person's face in a video. Using your code, I would then map the appropriate facial expressions onto this 3D character and replace the person's face with the animated figure. Does this sound feasible to you? Thanks in advance!
@mancumbus 1 year ago
Hello, do you maybe have a video on how to install the code if you're not a programmer? Or perhaps detailed instructions? I'm a 3D character animator in Maya and I'd be very interested to try it! Thank you!
@boulimermoz9111 2 years ago
Hello, thank you very much for your amazing work. Just asking: is there a way to apply this code and try this mocap system on my own 3D characters? Thank you very much.
2 years ago
In principle it's possible, yes, but our code does not have this functionality. You would have to attach the FLAME face model (which is what we use) to your characters in place of the character's head. This is not trivial, as there would probably be discontinuities around the neck that would then also have to be taken care of. By the way, if you're interested in full-body capture, be sure to check out projects such as PIXIE or SMPLify-X.
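The neck-seam discontinuity mentioned above could be handled with a simple distance-based blend between the two meshes. This is a hypothetical sketch, not EMOCA functionality: the function name, the assumption of one-to-one vertex correspondence, and the linear falloff are all illustrative choices.

```python
import numpy as np

def blend_head_onto_body(flame_verts, body_verts, seam_dist, blend_radius=0.05):
    """Blend FLAME head vertices into a body mesh near the neck seam.

    flame_verts, body_verts: (N, 3) corresponding vertex positions.
    seam_dist: (N,) distance of each vertex from the neck boundary.
    Vertices at the seam (distance 0) keep the body position; vertices
    farther than blend_radius take the FLAME position; in between we
    interpolate linearly so there is no visible step at the neck.
    """
    w = np.clip(seam_dist / blend_radius, 0.0, 1.0)[:, None]  # (N, 1) weights
    return w * flame_verts + (1.0 - w) * body_verts
```

In practice you would also need a consistent rig and matching topology, or a wrap/transfer step, before a per-vertex blend like this makes sense.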
@boulimermoz9111 2 years ago
@ Thank you very much, really appreciate it.
@ericmlevy 1 year ago
1. Can the model be exported without cropping to the ROI box? 2. What can be done to improve the temporal stability / reduce shakiness? Thank you!
@MichaelBlackMPI 1 year ago
1. The result is a full 3D FLAME head model; the cropping is only for display here. 2. EMOCA v2 is more stable (github.com/radekd91/emoca), and you can always run a one-euro filter if you still want more, but it's pretty stable. 3. Also check out MICA, which is very stable: justusthies.github.io/posts/mica/
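For readers unfamiliar with the one-euro filter mentioned above: it is a standard adaptive low-pass filter (Casiez et al.) that smooths jitter at low speeds while keeping lag small during fast motion. A minimal sketch for one scalar signal, e.g. a single FLAME expression or pose parameter per frame, assuming a fixed frame rate:

```python
import math

class OneEuroFilter:
    """Minimal one-euro filter for a scalar signal sampled at `freq` Hz."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq            # frames per second
        self.min_cutoff = min_cutoff  # baseline smoothing (Hz)
        self.beta = beta            # higher beta -> less lag on fast motion
        self.d_cutoff = d_cutoff    # cutoff for the derivative estimate
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Exponential-smoothing factor for a given cutoff frequency.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x):
        if self.x_prev is None:       # first sample: nothing to smooth yet
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq            # finite-difference speed
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev  # smoothed speed
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)  # speed-adaptive cutoff
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev       # smoothed value
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

To smooth EMOCA output you would run one such filter per parameter (or per vertex coordinate) across the frame sequence; the class and parameter names here are illustrative, not part of the EMOCA codebase.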
@phizc 1 year ago
To me it looks like it totally loses the identity compared to DECA. The expressions also look exaggerated and unlike those in the original image. It would have been interesting to see the rendered deformed mesh with the extracted textures.
@liam9519 2 years ago
Is this just DECA + an extra emotion detection model-based loss term?
@MichaelBlackMPI 2 years ago
Basically, yes. We take the DECA loss and add a term that says the emotional content of the rendered image should match that of the original image. This is enough to improve the 3D realism of the mesh, without any explicit 3D training. This is what I find exciting: emotion is a form of semantic "side information" (i.e., weak supervision) that is easy to get and can improve 3D shape estimation.
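The extra term described above can be sketched as a feature-matching loss: a frozen emotion-recognition network embeds both the input image and the differentiably rendered reconstruction, and the embeddings should agree. This is a hypothetical numpy illustration of the idea only; the real EMOCA loss is implemented in PyTorch with a pretrained emotion network, and the function and weight names here are invented.

```python
import numpy as np

def emotion_consistency_loss(feat_input, feat_rendered):
    """Mean squared distance between emotion features of the input image
    and of the rendered reconstruction (features assumed precomputed by
    a frozen emotion-recognition network)."""
    return float(np.mean((np.asarray(feat_input) - np.asarray(feat_rendered)) ** 2))

def total_loss(deca_loss, feat_input, feat_rendered, w_emo=1.0):
    """DECA's original loss plus the weighted emotion-consistency term."""
    return deca_loss + w_emo * emotion_consistency_loss(feat_input, feat_rendered)
```

The key property is that the emotion network stays fixed: gradients flow through the renderer into the 3D reconstruction, so matching emotion features acts as weak supervision on the expression parameters.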
@liam9519 2 years ago
@MichaelBlackMPI Thanks for the response! I was having a read through the supplementary material, and it seems this was not nearly as simple as my initial comment perhaps made it out to be :D Appreciate you open-sourcing the code too!
@MichaelBlackMPI 2 years ago
@liam9519 No worries. Happy to help.