difficulties with AR development toolkits
If you or your team are using open-source face tracking kits to:
- animate faces
- overlay virtual content
- create expression-based events
. . . figuring out what’s what can be challenging – especially if you or your team do not have a strong background in:
- facial expressions
- face tracking
- the Facial Action Coding System (FACS)
vaguely-defined items
Face tracking software development kits (SDKs) can be confusing for a number of reasons. A large contributor to this confusion is the lack of detailed documentation available for users.
Human facial expressions are complex and rich with nuance. Understanding how to identify and break down expressions is not always simple or intuitive. Despite this reality, face tracking kits often only provide minimalist definitions for the expression shapes in their libraries.
Minimally-defined expression shapes:
- create room for user misinterpretation.
- increase the likelihood that users will confuse similar-looking shapes.
- limit the user’s potential to effectively use the product.
getting around the ambiguity
Expression shapes in most face tracking products (despite their names) are primarily FACS-based. For those who may be skeptical: FACS is derived from anatomy; so unless a face kit has completely annihilated the foundations of human facial anatomy, all shapes will have FACS equivalents. If you wish to foster a better understanding of the face tracking products you are using, you should familiarize yourself with FACS.
FACS naming is standardized and consistent. Each FACS shape has a detailed, well-defined, and heavily researched description. Becoming well-versed in FACS equips you with the tools you need to compensate for the ambiguity of most expression libraries.
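To make the idea concrete, here is a minimal sketch of how ARKit blendshape names can be paired with their approximate FACS Action Unit (AU) equivalents. The pairings below are common approximations drawn from the FACS literature, not an official Apple mapping; the `describe` helper is purely illustrative.

```python
# Illustrative (approximate) mapping from ARKit blendshape names to
# FACS Action Units. These pairings are common approximations, not an
# official Apple specification.
ARKIT_TO_FACS = {
    "browInnerUp":     "AU1 (Inner Brow Raiser)",
    "browOuterUpLeft": "AU2 (Outer Brow Raiser)",
    "browDownLeft":    "AU4 (Brow Lowerer)",
    "eyeBlinkLeft":    "AU45 (Blink)",
    "mouthSmileLeft":  "AU12 (Lip Corner Puller)",
    "jawOpen":         "AU26 (Jaw Drop)",
}

def describe(blendshape: str) -> str:
    """Return an approximate FACS description for an ARKit blendshape name."""
    return ARKIT_TO_FACS.get(blendshape, "no FACS approximation recorded")
```

With a table like this in hand, a minimally documented shape name such as `jawOpen` can be traced back to AU26's well-researched definition instead of being guessed at from the name alone.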
Whether or not you are FACS-savvy, if you want a clearer breakdown of ARKit facial expression shapes, this is the document for you 🙂