FAQs

Your facial expression and FACS questions answered!

The following is a mixture of frequently and infrequently asked questions relating to the Facial Action Coding System (FACS) and facial expressions. The answers are geared largely toward animators, modelers, riggers, and other artists, as well as researchers and engineers working on face tracking tech and digital humans. Because there is a lot of text, browsing by section or using Ctrl+F to search keywords is recommended.

The Basics

What is FACS (Facial Action Coding System)?

FACS, or the Facial Action Coding System, is a classification system designed to name and describe visible movements of the human face. Because FACS is anatomically-based and documented in great detail, it allows us to break down complex facial expressions in an objective and standardized manner. 

Full explanation post coming soon!

Learning & Study Tips

Where do you recommend beginners study FACS?

Depends. If you’re an artist, you can study FACS through my website, FACS Cheat Sheet, and training services; however, you might not find as much bang for your buck in the official FACS Manual unless you’re willing to put in the time and money ($350) to handle 500 pages of textbook language designed for a different audience. The Manual will not have as much relevant information for artists, as it’s designed specifically to teach users how to code facial movements with structured research methods.

If you’re in academia or tech looking to do behavioral research, study FACS officially through the FACS Manual and supplement any missing visuals with my website and FACS Cheat Sheet. For academic researchers, Erika Rosenberg hosts workshops for aspiring facial coders.

FACS seems very complicated. Where and how should students like me start?

FACS may seem complicated, but the actual face is even more complicated. I created Face the FACS because the FACS Manual can be dense and there aren’t many accessible (or quality) resources out there. As mentioned above, you can take advantage of the resources on my website, both free and paid. You can also follow my socials for more free references, tips, and informational posts (Instagram: @manicexpression / Twitter: @melindaozel / LinkedIn).

Is there a FACS bible? Where can I find references of different FACS AUs (action units)?

I’m always adding to my FACS Cheat Sheet! This sheet is designed to showcase basic AUs. I have combination shapes available for Premium Members, and I often post free references on all of my social accounts (Instagram: @manicexpression / Twitter: @melindaozel / LinkedIn). To request specific AU combinations, fill out the form at the bottom of this page.

Any tricks to help artists not get overwhelmed when looking at the Face the FACS language? Greener artists find that they quickly get swept away by triggering too many shapes at once, which can end up counter-animating.

Start with sections of the face. Begin with the eyebrows – those are the easiest. Pay attention to the FACS names; they may sound scary at first, but once you realize their names give away the definition of their movements, they become your friend! For example – what does “inner brow raiser” mean? It means the inner part of your eyebrow is being raised! Study basic anatomy along with FACS. Practice performing the shapes on your own face. Start observing the movements in real life and when you watch movies / shows / social media videos. Save mouth-based movements for last.

As for getting carried away with triggering too many shapes at once – having a basic understanding of facial anatomy and FACS will allow you to study common movements and combinations that occur in the natural world. Real world observations will help guide your strategy for what to trigger and when.

What are your favorite resources / tutorials for becoming proficient in FACS?

I created Face the FACS because resources outside of the official FACS Manual almost always have inaccurate information and references. The manual itself can also be vague regarding certain facial actions. It took me years of self-study and observation to truly understand actions like “lip tightener,” “nasolabial furrow deepener,” “nostril compressor,” etc. The goal of this website is to minimize FACS confusion and combat false face news.

I became proficient in FACS through the FACS Manual and the mentors I had during my internship at the Zurich Interaction and Expression Laboratory: Willibald Ruch, Tracey Platt, and Jennifer Hofmann. From there, everything I learned was from experience, practice, and real-world observation.

For beginner anatomy resources, I recommend Human Anatomy for Artists: The Elements of Form by Eliot Goldfinger. I love how this book breaks down facial muscles in an effective yet simplistic manner. It is the clearest, most accurate book I have encountered, and it actually acknowledges facial muscle diversity. For advanced anatomy resources, you’ll have to go research paper hunting!

Are there any resources you could recommend for finding good facial references? We are building conversational AI. I work on the animation, expression, and body-idle movements. We mostly do our work manually and our characters often look unnatural.

My FACS Cheat Sheet and entire website are full of references for basic expressions, emotions, and speech. I’m available for consulting if you want help making your characters look more natural.

What is the best approach to learning FACS? There’s a lot of information. Are there any strategies or techniques to learn it more efficiently?

The best approach is to break down the face into sections, e.g. eyebrows, eyes, middle face, lower face. Make sure you learn each section properly the first time. This website was built for that exact purpose. Once you have a basic grasp of FACS, your next move is to be patient. Things will make sense over time if you maintain a scrutinizing eye.

I’ve been studying the face for over 10 years and am still confused by some things. Even anatomists struggle to agree on certain issues regarding facial muscle classification and breakdowns; so remember that next time you feel frustrated or overwhelmed!

Emotions, Behavior, & Culture

How does FACS help us recognize actual human emotions?

FACS helps us break down facial expressions and pinpoint which muscles are being used; it helps us recognize patterns and identify what is occurring on the face at different times and in different situations.

Is there a balance between breaking things down in art and ease of use for expressing emotion?

A lot of common facial expressions of emotion share the same facial actions; so it depends on the project. In some cases lumping things together as emotion sets will end up being clunky and limiting. In other cases, it may be helpful.

Are there cultural differences in FACS? There must be some differences in expressions and emotion presentation.

FACS is a system that describes muscle-based facial movements. Though we all have differences in facial muscle shapes, sizes, presence, configurations, and strengths – your muscles don’t change based on your culture; however, which facial actions you use to express yourself will vary, and those variations can be measured through FACS. The ability to identify patterns and differences in facial behavior is why FACS was developed in the first place!

Anatomy

Can you talk about how facial muscles produce certain expressions? Do the muscles all connect to bone, or do some connect to other muscles or fascia? What are some of the more complex muscle interactions involved?

Yes, I could talk about this for days and have multiple write-ups on this topic throughout this site. No, not all facial muscles are connected to bone. Some are; some aren’t. There is a ton of variation in terms of how facial muscles are connected. As facial muscle variation is one of my favorite research topics, I have compiled a library of information on facial diversity; however, due to the intensive time spent acquiring this body of research, I reserve my findings for advanced-level lectures or clients. That being said – I plan to make a cheat sheet laying out the origin and insertion of each facial muscle behind each facial action. Hope that will help 🙂 !

FACS is a fairly well documented system built around muscles, but it’s hard to find information about the layers on top of muscles. Do you have any advice for how we can learn more about things like facial fat pads and skin sliding or sagging?

Amen to that! There is definitely a lack of accessible resources on the topics you’ve described. I’ve seen some visuals and diagrams on Pinterest. There are a lot of good anatomy papers as well. You can never go wrong with the power of observation and documenting your own examples. When looking at facial anatomy diagrams, always remember that they are simplified representations of things that have great variability; this is why observing people in the natural world and reading research papers can be extremely beneficial.

 

Animation

When doing FACS for facial animation of subjects, do you need to create a custom set of FACS for each identity to maximize naturalness?

If your characters are on the more realistic end of the spectrum, it can be very effective to create variation options for expression shapes. For example, our foreheads all wrinkle differently when we raise our eyebrows. Though full eyebrow raises all involve inner brow raiser + outer brow raiser, the way that combination is expressed on each face will have unique characteristics. There are a number of ways to categorize and group these differences to create variety and options. I’ve worked with studios to build such solutions in the past and would be happy to do so again. Feel free to pitch a project to me at facetheFACS@melindaozel.com.
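For a rough picture of what grouping those differences can look like in a pipeline, here is a minimal Python sketch. The wrinkle-pattern categories, shape names, and profile structure are hypothetical placeholders, not any particular studio’s setup.

```python
# Minimal sketch (hypothetical categories and blend shape names): choosing a
# per-identity variant of the full brow raise (inner + outer brow raiser)
# based on how a given face tends to wrinkle its forehead.
BROW_RAISE_VARIANTS = {
    "continuous_furrows": "browRaise_full_continuous",
    "broken_furrows": "browRaise_full_broken",
    "faint_furrows": "browRaise_full_faint",
}

def brow_raise_shape(identity_profile: dict) -> str:
    """Return the blend shape target matching this identity's wrinkle pattern."""
    wrinkle_type = identity_profile.get("forehead_wrinkles", "continuous_furrows")
    return BROW_RAISE_VARIANTS.get(wrinkle_type, BROW_RAISE_VARIANTS["continuous_furrows"])

print(brow_raise_shape({"forehead_wrinkles": "broken_furrows"}))  # browRaise_full_broken
```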

What’s the typical process for animating the 3D face after having a FACS map? Can you go from a video of a person talking -> extract a dense facial landmark map -> 3D animated face?

As a non-animator, I am not able to answer what the typical process is; however, I can say that automated attempts to recreate speech from videos of people talking always make nuanced – but highly consistent – mistakes on various lip formations required for speech.

What are common mistakes you see animation teams make when adopting a FACS-style rig into a digital human pipeline?

Having inaccurate FACS shapes is the biggest issue. Other big issues include:

1. Not accounting for the secondary and tertiary movements of a shape – e.g. keeping the movement too isolated and stiff.

2. Breaking down a basic movement to give animators more control – then animators not remembering to recombine the pieces later when driving the expressions.

Some more subtle issues include mismatches between wrinkles and movement, surface-level movement not accounting for depth changes, etc.
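To make those two issues concrete, here is a minimal Python sketch. The shape names and weight values are hypothetical, not from any particular rig; the point is that split halves of a movement get recombined, and a secondary movement is layered in rather than leaving the main shape isolated.

```python
# Minimal sketch (hypothetical shape names and values): driving a smile so that
# split sub-shapes are recombined and a secondary movement is layered in.
def drive_smile(rig_weights: dict, intensity: float) -> dict:
    # The lip corner puller was split into left/right halves for animator control;
    # recombine both halves here instead of triggering only one side.
    rig_weights["lipCornerPuller_L"] = intensity
    rig_weights["lipCornerPuller_R"] = intensity
    # Secondary movement: a strong smile usually recruits the cheek raiser,
    # so layer in a fraction of it rather than keeping the mouth isolated and stiff.
    rig_weights["cheekRaiser_L"] = 0.4 * intensity
    rig_weights["cheekRaiser_R"] = 0.4 * intensity
    return rig_weights

print(drive_smile({}, intensity=0.8))
```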

Is there anything “surprising” that you’ve discovered in your studies that you wish more animators were aware of? Are there any FACS principles that you think get overlooked by new animators?

I discover surprising things all the time! In terms of discoveries that I wish more animators were aware of – I actually wish animators were aware of basic principles of facial movement. It’s a huge obstacle when an animator is ignorant of basic facial mechanics and cannot effectively communicate their needs to other departments, like rigging.

Are there ‘sub’ action units or smaller action units that have been defined by animators to break things down further?

There are many sketchy “sub” action units that are made for ease-of-use purposes. I say “sketchy,” because if these sub units are not well thought out and used with caution, they can result in unnatural facial movements.

As for legitimate breakdowns in line with anatomy, I’m sure they exist. I’ve had to break down my own list of sub action units to help better define speech movements. I’ve had animators reach out to me saying they’ve broken down the same movements to create realistic speech. One of the critical sub action units I’ve added to my FACS resources is a shape I call “vertical lip tightener.” You can Google it or search it on my FACS Cheat Sheet to learn more.

 

Rigging

Do most riggers create FACS rigs using blend shapes or joints?

The ones I’ve worked with mostly use blend shapes. Blend shapes give you higher-quality results but can be costly and bulky. Joints are cheaper but also cheaper looking.

I’m not sure how to phrase the question exactly… Keeping in mind that the point of a control rig is to simplify complex motions so an animator can work quickly and efficiently (otherwise we’d just be animating verts) – FACS seems to emphasize breaking things down into their individual muscle groups (the anatomical level). How do you think animators can control complex groups of muscles accurately but also ‘conveniently’?

Convenient shape combinations will need to be made on a case-by-case basis. It depends on the context – like character design, character behavior, desired audience, etc. I’d love to work with a rigger to create tools that can assist animators with complex shape building for specific scenarios.

I worked on an episode of Marvel’s “What If?” on a character that only had 5 shapes on the face rig. Do you think there is a stylization that can get away with less FACS to convey variety of emotions?

Amazing that there were only 5 shapes on the face rig! That puts a lot of things into perspective. I do think stylization can get away with fewer FACS shapes. It depends on the character. For example, Emojis don’t have noses; so that takes out a few shapes. If you wanted to animate Hello Kitty, you wouldn’t have to worry about the mouth or brow shapes. In some stylized humans, you could remove a handful of shapes that look similar enough to other shapes. Again, it depends!

What do you feel are the 3-5 most important and useful AUs to build a fundamental character (maybe something like a small collection of rocks) to express / emote / “speak”?

LOL. A rock? If it’s just speaking: some eyebrow expressions and some simple mouth movements. For me to tell you more, you’d have to be a client!

What is the recommended workflow for working with FACS rigs?

Make checklists. Have documentation handy. There are so many expressions and options that it can be easy to forget about things / become repetitive with the same movements.

What main expressions (AUs) would you use if you wanted to make the most optimized blend shape system (for example, if you have a game with very limited resources)? How toned down could you make this system in its most basic form?

Depends on the game. What’s the game about? What do the characters look like? What do you want your audience to feel and experience? Happy to consult.

Modeling

How does FACS relate to blend shapes? And are action units performance-friendly for games?

Blend shapes are typically based on FACS shapes or a combination of various FACS shapes. Many game companies use FACS for their technology and animation. You can set up controllers that represent action units.
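As a rough illustration of that relationship, here is a minimal Python sketch; the shape names and controller function are hypothetical placeholders rather than any engine’s actual API.

```python
# Minimal sketch (hypothetical shape names): blend shape weights keyed to FACS
# action units, with a controller that composes an expression from AU values.
blend_shape_weights = {
    "AU01_innerBrowRaiser": 0.0,
    "AU02_outerBrowRaiser": 0.0,
    "AU12_lipCornerPuller": 0.0,
}

def set_expression(weights: dict, aus: dict) -> dict:
    """Clamp each requested AU value to 0-1 and write it as a blend shape weight."""
    for au, value in aus.items():
        weights[au] = max(0.0, min(1.0, value))
    return weights

# A full eyebrow raise combines the inner and outer brow raisers.
set_expression(blend_shape_weights,
               {"AU01_innerBrowRaiser": 1.0, "AU02_outerBrowRaiser": 1.0})
print(blend_shape_weights)
```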

Are there any nuances to using FACS when creating a stylized yet realistic character such as Alita from Alita: Battle Angel?

Of course. Any time you’re changing the structures of the face, there are challenges in how to translate the movements and migrate important facial landmarks.

What face building modeler has the most accurate FACS representations? I am familiar with Character Creator and FaceBuilder.

These days I reserve evaluating tools on an as-needed basis for clients. Feel free to reach out if you’d like to work together: facetheFACS@melindaozel.com

VFX

In the animation / VFX world, we often work on scarred / disfigured characters. Have you studied cases of injuries and how those can affect FACS?

I’ve studied a select number of injury- and disease-based conditions that affect facial expressions. So far, the most research I’ve done is on something called globe luxation – a condition where your eyes can significantly protrude out of their sockets (AKA pop out of your face). It can be caused by thyroid disease or head trauma. There are so many other interesting cases to study. I take it one day at a time! If there are specific types of scarring or injuries you’d like to see more research on, please fill out the form at the bottom of this page 😀

Do you have any tips to avoid “uncanny” feelings on talking animals, like how to apply anthropomorphic movements without creating strange feelings? For example, the difference between the Planet of the Apes and Lion King movies?

If you make a lion look exactly like a lion but speak and sing exactly like a human, it’s going to be weird. There’s not much you can do about it besides compromise on the realism. Personally, I wish the trend of remaking formerly animated classics as photoreal films would stop. Personal feelings aside, if I put on my working hat, there are a number of hacks and compromises that can lessen the uncanniness – but I don’t think you can fully get rid of it. As for Planet of the Apes, it’s much more doable to anthropomorphize animals whose capabilities are similar to humans’.

Are there any particular FACS human shapes / poses you think the VFX industry struggles to reach in photoreal characters?

The most notable ones I see are cheek raiser and nose wrinkler; there are many other subtle shapes that could be improved as well.

Technology

How do you compare FACS to what is done in deep learning?

Most face tracking and emotion tracking companies rely on FACS for machine learning. They use FACS to map out their data pipelines. They collect data with FACS-based expressions. They train their models with FACS-based classification, etc. Even if you don’t use a FACS-based approach, you still need FACS to identify and communicate areas of failure in order to assess the data needed to improve the tech.
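As a rough illustration of what “FACS-based classification” means in practice, here is a minimal PyTorch sketch. It is an assumption about a generic setup, not any specific company’s pipeline: each output of the model predicts the presence of one action unit, and the loss is multi-label because several AUs can be active on the same face at once.

```python
# Minimal sketch (generic setup, not a specific company's pipeline): a
# multi-label classifier where each output predicts one FACS action unit.
import torch
import torch.nn as nn

NUM_AUS = 24  # hypothetical number of action units being detected

model = nn.Sequential(
    nn.Linear(512, 128),   # 512-d face embedding from some upstream face encoder
    nn.ReLU(),
    nn.Linear(128, NUM_AUS),
)

features = torch.randn(8, 512)                        # dummy batch of embeddings
labels = torch.randint(0, 2, (8, NUM_AUS)).float()    # 0/1 per AU, per face (dummy)

# Multi-label loss: AUs co-occur, so each output is an independent yes/no.
loss = nn.BCEWithLogitsLoss()(model(features), labels)
loss.backward()
print(loss.item())
```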

Are there any existing FACS datasets we could use to train a machine learning model with face and speech?

From what I’ve seen working in tech on face tracking, most datasets are riddled with impure data. You would need to be able to scrutinize the data and clean up a lot of trash. If you have the resources, creating in-house data collection and labeling systems is recommended. If you are building your own data teams, please read Bias in Emotion Tracking.

Do you recommend any applications that extract FACS robustly? Can you recommend any app for FACS extraction?

If by application that extracts FACS you mean – an automated FACS detector / tracker – I’d say no. I don’t strongly recommend any applications. Some applications can detect very simple, high intensity expressions; however they often fail at detecting subtle or layered expressions. I think we are a ways away from something that’s actually robust, especially due to the data negligence of most leading tech companies. Read more here and here.

The iPhone has 52 expressions – is that enough for good animation? If not, how do you use an iPhone to add more FACS shapes?

Are you going for a semi-stylized to very stylized character with basic movements? If so, it could be fine. If you’re going for more photoreal and expressive characters, it’s not going to get you to the level of quality you will need. As someone who is more on the observation and critiquing side of things, I could not tell you how to add more FACS shapes; however, I am available for consultation if you would like to tell me what needs you have for your characters / what you want to accomplish in terms of expressivity / what shapes you might need to add to get what you want: facetheFACS@melindaozel.com

We are having a boom of 3D live avatars, but they are missing a lot of facial expressions. How do you think we can improve real-time digital human performances?

There are two parts to this:

1. The tech. Face tracking tech is not fully where we want it to be. Successful live avatars can be achieved by harmonizing the tech with the art. Extensive research and observation needs to be done to assess how the tracking tech responds to different faces. Additionally, the goals of the avatars need to be evaluated: based on the context, which expressions are the priority? Depending on the situations the avatars are used for, different expressions will have different priority levels.

2. The art. Even though the tech is not where we want it to be, you can fake a lot of it through art. Faking it through the art side can be done once you have a clear idea of what’s not working in the tech and what you want to prioritize for your avatar expressions. When I was at Meta, I worked intensely with the data and studied how the tech performed on different faces. Being able to identify those deficiencies allowed me to effectively work with a modeler to mask recurring issues through the art side. Most successful-looking avatars are achieved by art masking tech issues.

How can different accents be supported by FACS, and how do different pronunciations impact facial expressions / movements for stylized characters?

The smallest meaningful units of sound in a spoken language are called phonemes. Different languages are made up of different phoneme sets. In art and speech tech, phonemes are grouped together by how they look when they are produced. In other words, phonemes are grouped together based on the general shapes our mouths form to produce them. These visually grouped phonemes are referred to as visemes. An example of distinct phonemes that form a single viseme is the bilabial sounds: B, M, and P. Though B, M, and P are distinct phonemes, when we produce those sounds, our lips take on essentially the same configuration. The production of B, M, and P looks the same; therefore, B, M, and P make up a single viseme. FACS shapes can be used to grossly represent a viseme, though finer details may be missing (discussed at the end of this answer).

Accents arise when different phonemes are applied to the same word. For example, consider the word “car.” The American English version pronounces the “a” in car differently from the British English version; therefore, American English uses a different phoneme. Additionally, the American English version pronounces the “r” while the British English version does not.

Though “car” is the same word with the same meaning in both American and British English, its pronunciation is made up of different phonemes depending on which English is being used.

If FACS shapes are used to create the skeleton of a viseme, and a word pronounced with different accents is made up of different phonemes, then that word is also often made up of different visemes. The varying viseme formula means different FACS shapes can be applied to the same words under different accents. This breakdown means it’s very important to consider the sounds of speech and the phonemic options of words – not the letters that make up the words themselves.
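To make the phoneme → viseme → FACS chain concrete, here is a minimal Python sketch. The phoneme symbols and viseme groupings are simplified, hypothetical examples; for clarity it only shows the rhotic difference in “car,” even though the vowel quality also differs between the two accents.

```python
# Minimal sketch (hypothetical phoneme symbols and viseme groups): the same word
# under two accents maps to different phonemes, and therefore different visemes,
# each of which would be built from its own set of FACS shapes.
PHONEME_TO_VISEME = {
    "b": "bilabial", "m": "bilabial", "p": "bilabial",  # B, M, P share one viseme
    "k": "velar",
    "aa": "open_vowel",
    "r": "rhotic",
}

def visemes_for(phonemes):
    return [PHONEME_TO_VISEME[p] for p in phonemes]

american_car = ["k", "aa", "r"]   # rhotic accent: the "r" is pronounced
british_car = ["k", "aa"]         # non-rhotic accent: no final "r"

print(visemes_for(american_car))  # ['velar', 'open_vowel', 'rhotic']
print(visemes_for(british_car))   # ['velar', 'open_vowel']
```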

The main issues with FACS and speech will come from phonemes produced outside of facial expression mechanics – e.g. guttural languages. Additionally, FACS in general has a few missing links when it comes to speech shapes – even for American English. This deficiency is why I’ve defined sub action units like “vertical lip tightener” and “Z-axis dimpler.” (See FACS Cheat Sheet for details.)

 

My Work

Do you practice facial exercises? If yes – how often?

I practice almost every day. Depending on what area of the face I’m studying at the time, what I practice changes. I don’t go through the full range of FACS actions every day, but I’m definitely always working out some facial muscle!

What’s the process of getting a portfolio review?

Check out my Portfolio Review page 😀

Did you play any role with Epic’s MetaHumans?

I did! I worked with some of their teams on improving the rig through facial anatomy training + consulting. Results should be visible once a later release of the product is out.

Off the top of your head, are there any movies or video games that you consulted for that you’re especially proud of? 

Many of the projects I’ve consulted on have NDAs or are not released yet. But as mentioned in the webinar, I was proud to work on Emoji design for Meta, a product that over a billion people use! I am also proud to have worked on an unreleased version of Epic’s MetaHumans and CG primates for upcoming films and TV shows. I’ll announce when they’re available!

Are there projects that speak to you more than others (dramatic/comedic, stylized/photorealistic, VR/2D)?

As a consultant, I enjoy both stylized and photoreal projects. They each have their own challenges. Same with drama and comedy! As a viewer, I like comedy and stylized more.

Miscellaneous

How would you approach zombie facial expressions?

I’d reserve zombie facial expressions for functional movements required for eating and attacking, since in classic zombie literature that’s primarily what they’re doing! Jaw and mouth would be the priority. You can also modify how those movements work based on varying levels of deterioration.

As a portrait photographer, I wonder where FACS would fit in with my work. Would it be useful for planning ahead of a photoshoot, or for analyzing the final images to see which best conveys the emotion you’re after? Would FACS take away honesty?

As a photographer, I wouldn’t say you need to be a FACS master, but having an eye for stiff / fake / uncomfortable expressions can be useful, especially when (1) deciding which shots to select for your clients and (2) helping your models look more relaxed.

I used to model for portrait photographers, and having good facial awareness allowed me to pose better. For example, I realized things like – holding a pose for too long activated additional muscles that I didn’t want to have in my shot; so I would ensure that I had good communication with my photographer / knew when they were able to snap the shot. Knowing when they would take the shot helped me hit the pose at the right time and make the expression look more natural. I also was able to reduce nervous expressions by relaxing my face when I felt it tensing up.

Similar things could be done on the photographer’s end as well. If you are not naturally skilled at reading people’s expressions / moods, FACS may be particularly beneficial. I have worked with many photographers who did not have a natural eye for picking out discomfort in poses, and they surely could have used some technical FACS knowledge.

Don't see what you're looking for? Ask your own question!