India’s market for emotion recognition technologies has grown significantly in the last few years. AI start-ups are offering Indian employers a “dark personality inventory” that claims to identify “negative” traits like self-obsession and impulsiveness in potential hires. Human resources tech vendors claim to provide emotion recognition systems that classify the mood of employees as they walk into their workplace. Some companies are also investing in emotion recognition systems to monitor customer responses to ads, videos, and other stimuli. These inferences are made by quantifying a variety of input data – from facial expressions to gait, vocal tonality, and brain waves.

Emotion recognition technology involves the use of biometric applications that claim to be able to infer a person’s inner emotional state from external markers like facial expressions, vocal tones, and other biometric signals. Using machine-learning techniques, emotion recognition technology classifies inner emotional states into discrete categories like fear, anger, surprise, happiness, etc.


The rationales for utilising this technology in workplaces include marketing it as critical to enhancing workplace safety and promoting it as a way to ease pressure on HR departments. Business leaders see it as a natural evolution in which surveillance of employees shifts from employers to technology, freeing human beings to pursue more “strategic” decisions. This market is not unique to India. It is steadily growing across jurisdictions, from the European Union trialling it to detect deception at borders, to the Chinese market focusing on driving safety and education, to South Korean companies using it in hiring.

But this use of emotion recognition technology is neither a straightforward nor an innocuous pivot. It represents a fundamental shift from biometric systems simply identifying or verifying particular people to asking “What type of person is this?” or “What is this person thinking or feeling?” While this may primarily seem like a privacy and data protection issue, the real problems at hand are far deeper.

Fundamentally, emotion recognition systems are based on Basic Emotion Theory – a set of pseudoscientific assumptions which claim that there is a link between a person’s external appearance and their inner emotional state, and that such basic emotions are discrete and uniformly expressed across cultures. This represents the legitimisation of discredited scientific ideas such as physiognomy and phrenology through the vehicle of artificial intelligence. Simply addressing privacy or data protection concerns would, therefore, be a feeble response to these harms; the very existence of such technologies must be tested and scrutinised, and their use should potentially be banned.


As companies enthusiastically embrace emotion recognition systems, the repercussions of their choices transcend their workspaces in two important ways. First, emotion recognition technologies usher in a new phase of biometric surveillance by which those being surveilled are subject to unilateral and consequential assumptions about their characters and emotional states, with little to no avenue for meaningful accountability.

Individuals are increasingly transparent to entities that wield power over them, while the entities themselves become increasingly opaque and unaccountable. For instance, if an emotion recognition system flags you as temperamental and thrill-seeking – that is, a person with multiple “dark traits” – it can be difficult or impossible to contest or disprove this assessment.

Second, it represents the gradual but consistent normalisation and “improvement” of these technologies in the private sector, paving the way for future use in the public sector. As the face recognition trajectory showed us, the pipeline from private to public sector use is remarkably efficient given the significant appetite for adopting AI solutions in the latter.


Another consequence of this private-to-public trajectory is that it gives the private sector de facto power and influence over eventual uses in the public sector. As the only entities that know how to develop and operate seemingly complex and magical technology like emotion recognition, private companies will market their products as such and shape how public policy embraces such tech.


The seeds for this have already been sown. In 2021, the Uttar Pradesh police issued a tender for its Safe City Initiative requiring the winning bid to supply AI systems able to detect “distressed” women. Beyond this, the public sector is generally well primed to adopt, enforce, and invest in worker surveillance tools, a practice that has witnessed a sharp rise, particularly since the beginning of the pandemic.

Sanitation workers in Haryana have been provided with smart watches equipped with microphones and GPS trackers so that their movements can be seen and heard by supervisors. Poshan Tracker, an app launched by the Ministry of Women and Child Development, monitors Anganwadi workers and claims to bring transparency to India’s nutrition delivery services – despite repeated protests from the workers, who are not only surveilled but also overburdened and disadvantaged by the app.

The infrastructure and willingness for workplace surveillance are already thriving. Unless robust and thoughtful regulation is put in place, it is only a matter of time before newer technologies like emotion recognition enter the fold. In considering what regulation is needed, it is helpful to think of the harms along two axes – data and power – and to consider how the burden of action and transparency must be placed on the entities in power, not on those potentially subject to these systems.


Given the shortcomings and harms arising from the use of emotion recognition systems, the collection, analysis, use, sale, or retention of this kind of biometric data should be banned at the outset. India does not have data protection legislation at this time; the only legal requirement regarding sensitive personal data is found in Section 43A of the Information Technology Act, which puts in place a relatively feeble obligation on private actors to maintain “reasonable security practices and procedures.”

The underlying problem, though, is the very collection, use, and potential transfer of such data – working within the assumption that such data will be collected, and merely optimising for its responsible use, is ineffective at best. Data protection can, thus, be a useful lever only insofar as it outright bans the collection, use, sale, transfer, and retention of this type of sensitive personal data altogether.

On power, as discussed above, emotion recognition technologies can facilitate and widen asymmetries between employers and employees. This must be critically evaluated and explicitly addressed through regulation of the surveillance of individuals and groups in the workplace. Such regulation must limit what information AI tools can glean about workers in general, and preclude inferences made by emotion recognition tools in particular. Measures must also empower workers to resist, question, and opt out of algorithmic management tools, including applications that have an AI component but do not necessarily involve emotion recognition.


Emotion recognition technology is based on a legacy of problematic and discredited science and exacerbates power differentials in multiple ways. No amount of careful data protection practice can legitimise its use. Failing to put in place robust regulations for the private sector will eventually pave the way for public sector use. We saw this with face recognition: originally used to grant or deny individuals access to university labs and workplaces, it is now used to monitor peaceful protestors, make unilateral arrests, and fundamentally threaten civil space.

Vidushi Marda is the Co-Executive Director of REAL ML. Views expressed here are personal.

This article was first published on India in Transition, a publication of the Center for the Advanced Study of India, University of Pennsylvania.