big tech’s homogeneous hiring habits are harming our data

In our consumer-focused digital world, we have manufactured an urgency to innovate and develop emerging products. With machine learning trending and a self-perpetuating pressure to move fast and break – I mean “build” – things, Big Tech is in constant need of builders. As a result, software engineers have become a highly coveted commodity, dominating headcount and inciting bidding wars across companies. However, as machine learning ambitions grow, data needs grow as well, transforming engineer-centric problems into cross-disciplinary matters. Projects yielding highly ambiguous data – like facial expressions for face tracking – demand an understanding of the data beyond the scope of engineering; they call for a cross-disciplinary marriage between engineering and the complementary field being applied. Therefore, it is critical for tech firms to take responsibility for data integrity by incorporating field experts into the product development process.

While it is not in Big Tech’s culture to prioritize non-engineering roles, other industries acknowledge the importance of topical expertise in machine learning. The biotech field, for example, relies on collaboration between medical experts and engineers. In that case, the need for collaboration is obvious, but in more obscure realms of specialization, the need is not always evident. As referenced earlier, one major area suffering from obscurity and lack of understanding is face tracking. Common use cases for face tracking – including face filters, feature detection for product advertising (e.g. lip detection for testing lipstick products or eye detection for glasses), and avatars – are relatively benign. However, with less “cute” use cases – emotion detection, behavior monitoring, and deception detection, applied in the criminal justice system, insurance sector, or cybersecurity world – a machine learning model’s performance becomes contentious, and if done the wrong way, dangerous. There are already many known issues with facial recognition technology and its unregulated usage across countries and industries. Since facial expression tracking does not simply identify a person but observes and draws conclusions about that person’s behavior, it has the capacity to be far more invasive.

Despite the fundamental importance of expression data for face tracking, Big Tech often fails to prioritize expression data quality. Product managers, engineering managers, user researchers, and software engineers commonly rely on their own superficial understanding and ad hoc searches rather than on the depth of understanding an expert could provide. While software engineers are masters at creating algorithms, they often possess only cursory knowledge of what goes into the data. Given their high-pressure workload and focus on their own specialized knowledge, it is not feasible for engineers to develop additional expertise in scrutinizing subtle expression data or mastering complex concepts in emotion research.

Determining what type of data is needed, how to collect it, and how to label it is a delicate process. If you choose to target the wrong data, it won’t matter how well you collect or label it. If you target useful data but collect it improperly, it will also fail. It will fail yet again if you do not label it precisely and/or accurately. Due to morphological differences in facial features, inherent biases in expression interpretation, and controversy among both emotion researchers and facial anatomists, any group developing face tracking algorithms with intentions beyond try-before-you-buy lipstick must accept accountability and ethical responsibility for data integrity.

Rather than ensuring the building blocks of their algorithms are well understood by those using them, the unfortunate state of Big Tech is to collect or acquire mass amounts of data and pass it on to third-party labelers. The labelers are typically outsourced, on contract, and almost always undervalued. To monitor label quality, the standard practice is to create and enforce various Key Performance Indicators, or KPIs, but because Big Tech chooses not to invest in people who can legitimately supervise the quality of advanced data, the KPIs are generally arbitrary and hold little merit. Compounding the problem, if engineers are not properly equipped with the depth of understanding to identify ground truths, what are they actually measuring? There’s unsupervised learning. And then there’s unsupervised engineering.
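To make the contrast concrete: a label-quality metric does not have to be arbitrary. One well-established alternative to ad hoc KPIs is inter-annotator agreement, which measures whether labelers even agree with each other before anyone claims a ground truth. The sketch below (a hypothetical illustration, not a pipeline any of the teams described here used) computes Cohen’s kappa, a chance-corrected agreement score, for two labelers coding the same set of expression clips:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical labelers coding the same eight expression clips.
rater_1 = ["smile", "smile", "neutral", "frown", "smile", "neutral", "frown", "smile"]
rater_2 = ["smile", "neutral", "neutral", "frown", "smile", "neutral", "smile", "smile"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # prints 0.6
```

A kappa near 1.0 means labelers consistently see the same thing; a low score signals that the labeling guidelines – or the labelers’ understanding of expressions – need expert attention before the data goes anywhere near a model.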

When I worked for one of the Big Five companies in Silicon Valley, I was constantly shocked by the nonchalant mindset regarding complex face tracking data. Though my colleagues were leading minds in algorithm development, they possessed a shallow understanding of facial anatomy, core emotion concepts, and expression behavior. Like any other highly specialized subject matter, understanding the nuances of human expression takes years of intensive study and experience. Despite the fact that I was the resident facial expression expert who had dedicated my life and career to understanding the nuances of the human face, I was regularly excluded from important meetings and planning sessions. I often caught coworkers haphazardly scrambling through outdated and inaccurate expression reference pages (which I am currently providing new solutions for here) in attempts to hack together a data pipeline plan. My expertise was frequently relegated to a supervisory role, and my skills were misused on irrelevant tasks like classifying beard types and hair color. When I flagged trends in data inconsistencies or foresaw hardware issues, I was sat down and offered a lesson in “how machine learning works.”

I see many job listings requesting X years of experience in face tracking, but requesting related experience is not enough. Just as an engineer’s past work in medical technology does not qualify them to be a medical practitioner, simply having worked on tracking technology does not qualify an engineer to be an authority on facial expressions or emotions. If there were more focus on hiring the appropriate experts, perhaps the struggle to find engineers with rare and specific experience would be alleviated; such alleviation could make room for more innovative collaboration between engineering and the complementary disciplines it can be paired with.

Big Tech’s tunnel-vision focus on engineering is a negligent habit that needs to change. While software engineers are indeed essential to the machine learning process, machine learning ambitions have brought us to a point where we must recognize the need for cross-disciplinary action. Data for systems contingent on nuance should not be taken lightly – especially when those systems carry potential for invasive use cases. If companies valued expertise in data subject matter with the same reverence and support they give engineering expertise, both the algorithms and the data from which they are built would be more comprehensive and less fallible. A lack of holistic data systems will leave us with unregulated products susceptible to bias, and an imbalance of investment skewed toward algorithms and away from data quality will lead to wasted engineering effort, deficient products, and the propagation of unethical technology.

Don’t put all your headcounts in one basket. Hire responsibly.
