As digital technology has become more integrated into every aspect of our daily lives, our relationship with our computing devices has evolved. In earlier eras of digital computing, our transactional interactions with these technologies were those of tool users. Early digital scholars argued that our use of computing technology and the Internet was more like using a “hammer” and that “the computer simply does not care what its user thinks or feels.” In the era of artificial intelligence (AI), the modern Internet, and social media, technology has become increasingly human-centred, designed not just to predict how we think and feel, but also to influence how we think and feel. The integration of these technologies and “smart” devices into our homes, our families, and even our bodies has “become part of our very humanity,” improving our daily lives, but also coming at a cost. These technologies have also been designed for commercial, corporate, and control purposes, collecting our data and selling our personal information and attention for financial benefit. The state has adopted AI and automated decision-making systems, not just to make government processes more efficient, but to surveil and police – often with negative consequences.
The next quest for human-centred technology is to move beyond our tool-like relationship with computers and to build technology that detects or recognizes human emotion and reacts to it – to design empathy into digital technology. Emotion AI promises the fulfillment of this vision, presenting great opportunities to benefit our lives, but also creating even greater risks of substantial harm.
Emotion AI, at its simplest, refers to technologies that (attempt to) detect or recognize human emotions through machine learning algorithms and other AI techniques. These technologies are being used and experimented with in various applications across sectors, such as health, education, law enforcement, the automotive industry, and the workforce. It is predicted that the industry will be worth over $90 billion by 2024. While there is excitement over the potential benefits that these technologies will bring, there is tremendous concern about their risks: privacy, surveillance and the collection of biometric data; bias and discrimination in datasets and algorithmic design; the risk of substantial harm from wrong decisions in high-stakes contexts like employment and housing; and, most concerning of all, skepticism about the underlying science, reliability, and accuracy of the predictions.
With these unresolved questions and concerns around the science and the potential to significantly affect people’s lives, policymakers need to anticipate the opportunities of these innovative technologies while mitigating their risks and harms by developing regulatory frameworks to govern the responsible and careful use of emotion recognition technologies. Policymakers have yet to recognize the impact of emotion AI, mostly because recent digital policy discussions have focused elsewhere – on related technologies like facial recognition technology (FRT), wearables, and digital surveillance – rather than directly on their intersection in the context of emotion recognition, which is often treated as a “value add” or synthesis of these more prominent technologies. As emotion AI becomes more widely adopted, policymakers should recognize that emotions are an intimate aspect of our lives as humans and develop governance frameworks for emotion AI, particularly in sensitive, high-risk use cases. Emotion AI is here now and will only become more integrated into our lives.
While this report will primarily use the term “emotion AI,” the concept is also known by many other names that will be used interchangeably, including: “emotional AI,” “artificial emotional intelligence,” “emotion recognition technology,” “empathic AI,” “affective AI,” and “affect recognition technology.”
There are multiple definitions for emotion AI and it would be useful to outline the most prominent ones:
| Source | Definition |
| --- | --- |
| Meredith Somers at MIT Sloan | “Emotion AI is a subset of artificial intelligence (the broad term for machines replicating the way humans think) that measures, understands, simulates, and reacts to human emotions.” |
| AI Now 2019 Report | “Affect recognition is an AI-driven technology that claims to be able to detect an individual’s emotional state based on the use of computer-vision algorithms to analyze their facial microexpressions, tone of voice, or even their gait.” |
| Andrew McStay | “Technologies capable of ‘interpret[ing] feelings, emotions, moods, attention and intention in private and public places.’” |
| Rosalind W. Picard | “Computing that relates to, arises from, or deliberately influences emotions.” |
| Bard | “If, as I will describe, Emotion AI involves a process of both reading and manipulating emotions for the purpose of either identifying or manipulative human behavior, then its product could fall into many different legal categories which would further shape its regulation.” |
The definition used in this report is broken down into 4 key factors:
Emotion AI technologies…
Emotion AI relies on a variety of biometric data collected by different sensors including cameras, wearables, and microphones. These kinds of data include, but are not limited to:
Emotion AI as a concept has its origins in the field of “Affective Computing,” pioneered by Rosalind W. Picard at the MIT Media Laboratory (“Lab”) with her 1995 article “Affective Computing” and a subsequent book in which she expanded on her ideas. Picard was interested in how our human-to-human social interactions depend not just on cognition but also on emotion – and how, in a similar way, “human-computer interaction has been found to be largely natural and social; people behave with computers much like they behave with people.” When humans work in an environment without emotion or validation, they can come to feel worthless. As humans increasingly interact with technologies in their daily lives, there is a need for empathy and the validation of “the user’s emotional abilities.” Picard and her partners at the MIT Media Lab – including Rana el Kaliouby, who later founded the company Affectiva – have continued to advocate for the idea that machines and AI need empathy as society adopts smart devices in all aspects of our lives. Ultimately, Picard argues that “The computer, the greatest tool humans have ever had, has the potential to adapt to what people want,” and that we need more “human-centered systems… toward embracing part of the spark that makes us truly human.”
Emotion AI is being used and developed for many different applications across different sectors and purposes. A number of use cases for the technology are low-risk and provide benefits for businesses and society at large. Emotion AI is being used to measure sentiment for advertisements, movies, and television shows. Sensors and cameras track how consumers feel about ads – does the graphic or commercial evoke happiness or sadness? Disney has used the technology to test early drafts of movies, measuring how audiences react to certain scenes. Is the scene as it is currently filmed and edited striking the emotions that the director and studio intended? Emotion AI is also being developed for automobiles, particularly for vehicle safety. These technologies can be especially beneficial as they aim to measure fatigue, drowsiness, “road rage,” or other extreme emotions that can impair a driver’s ability to operate the vehicle safely. The hope is that emotion AI can reduce the number of accidents and save lives. Additionally, through “cabin sensing” and measuring passengers’ emotions, the car can automatically adjust the internal environment to make passengers more comfortable, like dimming the lights if you are tired or playing relaxing music or reclining your seat if you are stressed. Automotive companies like GM, BMW, and Porsche are already testing these technologies. Affectiva, the emotion AI company founded by Rana el Kaliouby out of the MIT Media Lab, has been at the forefront of this commercialization of emotion AI, emphasizing its benefits over the technology’s flaws.
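To make the “cabin sensing” idea concrete, the sketch below shows, in purely illustrative terms, how a rule-based system might map inferred driver states to cabin adjustments. The DriverState fields, thresholds, and actions are hypothetical placeholders, not any vendor’s actual implementation.

```python
# Illustrative sketch of a rule-based "cabin sensing" response loop.
# The state model, thresholds, and actions are invented examples.
from dataclasses import dataclass

@dataclass
class DriverState:
    drowsiness: float  # 0.0 (alert) to 1.0 (asleep); hypothetical model output
    stress: float      # 0.0 (calm) to 1.0 (agitated); hypothetical model output

def respond_to_state(state: DriverState) -> list[str]:
    """Map an inferred driver state to illustrative cabin adjustments."""
    actions: list[str] = []
    if state.drowsiness > 0.7:   # arbitrary example threshold
        actions += ["sound alert", "suggest rest stop", "brighten cabin lights"]
    if state.stress > 0.6:       # arbitrary example threshold
        actions += ["play relaxing music", "recline seat slightly"]
    return actions

print(respond_to_state(DriverState(drowsiness=0.8, stress=0.3)))
# ['sound alert', 'suggest rest stop', 'brighten cabin lights']
```

The sketch also highlights a design assumption of such systems: every downstream action depends entirely on the accuracy of the inferred state, which is precisely what later sections call into question.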
However, many uses of emotion AI can be problematic and have the potential for substantial harm. Companies like Hirevue (which reportedly licensed technology from Affectiva and whose clients include companies like GE, Unilever, Delta, and Hilton) are using emotion AI to interview job candidates and score them on qualities like “dependability,” “emotional intelligence,” “cognitive ability,” “grit,” and smiling, with the stated intention of making “hiring less biased.” However, there is skepticism about its effectiveness. For example, there are questions about how the algorithm would handle a case where a medical procedure before the interview affects an applicant’s mood, perhaps leading them to display more negative emotions that are scored as undesirable. The technology could also be biased against racialized people and immigrants whose body language may differ because of cultural differences, and against Black faces, which are disproportionately rated as showing negative emotions, based on the research described later. Additionally, differently abled individuals whose faces might be affected may score lower under the hiring algorithm.
A mock-up of a report that hiring managers would receive after a recorded video interview, as per CNN.
Moreover, wearables like the Halo wrist band developed by Amazon are being used in workplaces to measure physiological responses like heart rate, sweat gland behaviour, skin temperature, and vocal tone. Companies like Kronos, Pulse, and Humanyze promise human resources departments at companies like Virgin, JPMorgan Chase, and Bank of America that their products can analyze employees’ emotional states, claiming that this can help “encourage” healthier lifestyles while also mitigating poor decisions at work when employees are upset. This is creating a new form of surveillance, where employers are always monitoring their employees and potentially using their biometric data to inform HR decisions like promotion or termination. Hiring and employment have a significant effect on an individual’s livelihood. The use of emotion AI to decide, or to help employers decide, whether an individual should be hired, promoted, or fired will significantly affect that individual’s ability to earn a living and will impact their dignity.
Emotion AI is being used in law enforcement and border control, replacing enforcement officials and using iris scans, FRT, and voice and tone analysis to verify individuals and detect deceitful behaviour. The Brookings Institution reported that the Canada Border Services Agency attempted to use “an experimental automated interviewing system called AVATAR” that failed to achieve the expected results. With firms like Clearview AI selling their FRT services to law enforcement agencies and emotion AI being marketed as a “value add” to these technologies to “identify behavioral patterns that indicate criminal intent,” emotion AI is creating a new form of surveillance and intrusion by the state into individuals’ private thoughts. Given the criticisms that AI is biased against racialized people and leads to discriminatory policing practices and false positives, the addition of emotion recognition to the law enforcement toolbox raises serious concerns that people’s rights will be further violated.
Of most concern is emotion AI’s use in children’s toys and education. Andrew McStay has researched how new children’s toys like Anki’s Cozmo use optical sensors and microphones to adjust “his mood,” such as seeming upset when he gets “hungry” or feeling joy when he wins a game with the child. With education moving towards distance learning and video classrooms, companies like Intel, Classroom Technologies, and Find Solution AI are developing AI that measures student engagement through a child’s emotions on video. By measuring a student’s emotions, these technologies promise to analyze a learner’s strengths and weaknesses, forecast their grades, and adapt lessons to improve upon mistakes. While there are promises that these toys can help a child develop emotional intelligence with a virtual friend, or that emotion AI in the classroom can improve a student’s learning, there are serious potential harms. Privacy is of paramount importance, as “the sharing of a child’s emotions and biometrics with third parties – even with parental consent – signals a serious change in the degree of transparency and trackability of people’s lives.” The tracking of children’s emotions is not just extraordinarily intrusive as surveillance inside the home; it is directed at children – some as young as toddlers – who have no conceptualization of consent and who are still in their developmental phase, learning how to navigate their emotions and the world at large. These technologies arguably violate the United Nations Convention on the Rights of the Child and, more generally, a child’s rights and dignity, and thus present the greatest danger of emotion AI.
Yet there are still other applications with potential benefits, which should be weighed against the potential costs. Emotion AI is being developed to assist in mental health care. Suicide prevention services, crisis hotlines, and psychological counsellors are under pressure to handle high demand for their services. Emotion AI systems can help individuals in crisis, giving people at risk of suicide someone to interact with and helping to prevent negative incidents. This technology is also being developed around principles of cognitive behavioural therapy, a psychotherapeutic technique used to treat depression, anxiety, and other mental illnesses. Emotion AI is also being developed to assist individuals on the autism spectrum who struggle to read social and emotional cues in others, helping them “in understanding and operating in the socioemotional world around them.” Lastly, emotion AI is being developed to assist stroke victims and facial palsy patients in their recovery and the reconstruction of their faces. However, while emotion AI promises to assist in health care, it can also substantially harm individuals when, for example, these technologies are used to make diagnostic decisions. Medically trained professionals and expert doctors have greater context about an individual’s health history and more experience with other patients, allowing them to make better diagnoses and prescribe better treatment than an emotion AI algorithm. A wrong decision by an emotion AI system could result in bodily harm or even death.
The use cases listed above are just some of the potential applications of emotion AI, but there are many more being developed and the technology’s adoption is only growing.
Biometric data is among the most intimate data relating to our humanity. Health and biometric data are our most sensitive and unique personal information and can be connected to and identified with specific individuals. As well, feelings and emotions might be legally interpreted as private thoughts and beliefs, which could be constitutionally protected by the right to liberty and the right against unreasonable search and seizure in the Canadian Charter of Rights and Freedoms (sections 7 and 8), as well as by the Fourth and Fifth Amendments of the U.S. Constitution. The Supreme Court of Canada (SCC) has described “an individual’s informational privacy interest” as protecting “a biographical core of personal information which individuals in a free and democratic society would wish to maintain and control from dissemination to the state, which tends to reveal intimate details of the lifestyle and personal choices of the individual,” such as “private thoughts, personal relationships, and romantic interests.” In R. v. Mills (2019), Justice Martin argued that R. v. Duarte (1990) should be considered in classifying “records of our private thoughts” through digital communication as being “secure against surreptitious state access… with no judicial supervision.”
As described above, bias and discrimination against racialized people and those with disabilities have been identified in policing algorithms and FRT. Research on commercially available application programming interfaces (APIs) for emotion AI demonstrated that these algorithms carry racial bias, scoring Black individuals as showing more negative emotions. This bias could lead to discriminatory practices that exclude individuals from marginalized groups and further exacerbate societal inequities.
Emotion AI is being used in automated decision-making systems to make, or to assist a person in making, important decisions. These technologies can prevent an individual from being hired, reject a person’s financial applications for loans or housing, falsely accuse someone of unlawful behaviour, deny an immigrant or refugee entry into the country, or produce a wrong health diagnosis. The consequences can include loss of income and housing, erroneous criminal charges, persecution in a person’s country of origin, bodily harm, or even death. The risk is even greater for children, who are still in their developmental phase and have no conceptualization of consent.
As discussed later in the report, there is substantial criticism of the basic scientific premises of emotion recognition. Studies show that emotion recognition is not consistently correct and is based on flawed scientific methodology. This calls into question whether emotion AI technology is effective and reliable in its predictions – and thus whether these technologies should be used at all or should be banned by governments.
Emotion AI and affective computing are built on the theoretical foundations of affect/emotion detection or recognition, a field led by American psychologist Paul Ekman. Ekman’s theories were inspired by ancient Greek physiognomy, eighteenth- and nineteenth-century anatomical science and neurology (such as the work of Guillaume-Benjamin-Amand Duchenne de Boulogne), and the 1960s work of Princeton psychologist Silvan Tomkins on “Affect Imagery.” At the heart of these theories is the assumption that affects are universal among humans. While “Tomkins acknowledged that the interpretation [sic] of affective displays depends on individual, social, and cultural factors,” these early thinkers believed affect could be objectively recognized and measured through universally shared emotional responses in the face and other signs throughout the body. Based on these assumptions, Ekman began research in the mid-1960s on recording facial expressions and attempting to recognize affect – funded by the U.S. Department of Defense. Lee Hough, the head of the Advanced Research Projects Agency’s (“ARPA”) behavioural sciences division, recognized the “potential in understanding cross-cultural nonverbal communication” for national defense, intelligence, and law enforcement, and thus invested $1 million USD (equivalent to roughly $8 million USD today). (A curious part of the story is that Hough invested in Ekman’s research partly as a way to divert funds amid criticism from U.S. Senators that his division was collecting information to overthrow the left-wing government in Chile.)
Ekman aimed to find “microexpressions – tiny muscle movements in the face” from which emotional responses could be inferred, using photography to codify facial expressions. In 1978, Ekman and his research partner published the Facial Action Coding System (FACS), which coded photographs of faces according to what they considered six basic emotion types: happiness, fear, disgust-contempt, anger, surprise, and sadness. FACS was developed from Ekman’s cross-cultural research with subjects from Chile, Argentina, Brazil, the U.S., and Japan. However, after criticisms of the methodology – that facial expressions might have been culturally learned through global mass media – Ekman also studied Indigenous people in Papua New Guinea who were isolated from Western culture, hoping that this research would prove the universality of affect expressions. Most modern emotion recognition research and technologies are based on FACS and the assumption of the universality of those six basic emotions.
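To illustrate how FACS-style coding feeds emotion classification, the sketch below uses prototypical action-unit (AU) combinations commonly cited in the EMFACS tradition for the six basic categories. The exact mappings vary across sources, and both the dictionary and the matching rule are simplified illustrations, not Ekman’s actual procedure or any commercial system’s logic.

```python
# Prototypical action-unit (AU) combinations often cited for the six basic
# emotion categories (EMFACS-style); mappings vary by source and are shown
# here for illustration only.
PROTOTYPICAL_AUS = {
    "happiness": {6, 12},                 # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},              # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},           # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},           # brow lowerer, lid tighteners, lip tightener
    "disgust":   {9, 15, 16},             # nose wrinkler, lip corner depressor, lower lip depressor
}

def match_emotion(observed_aus: set[int]) -> list[str]:
    """Return emotion labels whose full prototype appears in the observed AUs."""
    return [label for label, proto in PROTOTYPICAL_AUS.items()
            if proto <= observed_aus]

print(match_emotion({6, 12, 25}))  # ['happiness']
```

The point of the sketch is that the whole pipeline rests on the assumption that such prototypes are universal and diagnostic – exactly the assumption the critiques below dispute.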
The breakthrough of FACS rapidly led to the automation of its labour-intensive measurement processes through early-1980s machine learning technology like Igor Aleksander’s WISARD. With the help of ARPA, Ekman inspired two teams to continue the work on automated FACS. The first was Terry Sejnowski and Marian Bartlett, the latter of whom became the lead scientist at Emotient – a leading emotion AI company that was purchased by Apple in 2016. The second team comprised psychologist Jeffrey Cohn and computer vision researcher Takeo Kanade, who first developed the Cohn-Kanade (CK) emotional expression dataset, which continues to be improved and used to train emotion AI technologies. Since then, Ekman’s work and FACS have influenced modern technologies and culture, from new lie detection software used by law enforcement post-9/11, to consulting for Pixar on making animated faces more “lifelike,” to inspiring books and television shows. Yet, for all the influence Ekman has had on the field, his work has drawn controversy: one example is the Screening of Passengers by Observation Techniques (SPOT) program, used after 9/11 to detect terrorists, which was criticized by the U.S. Government Accountability Office and civil liberties groups for leading to racial profiling, false positives, and potentially “no clear successes” despite its expensive price tag.
Ekman’s work, the technologies his research has inspired, and thus the basic scientific premise of emotion AI have been strongly criticized over the years. The core challenge is that the methodology of emotion recognition research is flawed, its basic assumptions are not true, and – along with biases and other problems in training datasets – algorithmic emotion recognition is scientifically inaccurate and unreliable, with high error rates. With the proliferation of this technology in making decisions on sensitive areas like housing, employment, and loan applications, wrong decisions could be life-changing. A leading critic of Ekman’s research and of the emotion recognition field at large is psychologist and neuroscientist Dr. Lisa Feldman Barrett. In 2019, Barrett and other researchers published a meta-analysis of over 500 research papers in the corpus of academic literature on recognizing emotions from emotional expressions. The paper critiques “the common view that instances of an emotion category are signaled with a distinctive configuration of facial movements that has enough reliability and specificity to serve as a diagnostic marker of those instances.” Barrett found significant issues in the reliability, specificity, and generalizability of both “expression production” (“how people actually [sic] move their faces during episodes of emotion”) and “emotion perception” (“which emotions are actually inferred from looking at facial movements”).
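One way to paraphrase the reliability and specificity criteria at issue – an interpretive gloss, not the notation of Barrett et al. – is:

$$
\text{reliability} \;\approx\; P(\text{facial configuration } c \mid \text{emotion } e),
\qquad
\text{specificity} \;\approx\; P(\text{emotion } e \mid \text{facial configuration } c)
$$

On this reading, a facial configuration can serve as a “diagnostic marker” of an emotion only if both quantities are high, and the meta-analysis found that neither condition holds robustly outside controlled laboratory settings.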
While accuracy rates in measuring “expression production” are high for both human coders and algorithms under ideal laboratory conditions, accuracy drops to below 83% “when coding facial actions in still images or in video frames taken in everyday life.” Emotions measured by “dynamic changes in the autonomic nervous system (ANS)” – using bodily responses other than facial expressions (cardiovascular, respiratory, perspiration, blood flow, electrical activity in the brain, heart rate, and temperature, among other measures) – also did not reliably code emotions correctly. An analysis of thirty-seven articles that measured “healthy adults from the United States and other developed nations,” testing whether specific facial expressions co-occur with feelings classified under the six basic emotion types (under laboratory conditions), found a correlation coefficient (r) of 0.31, indicating weak reliability in accurately measuring these factors. Fear was the least accurately measured emotion category, with an r of 0.11. Results from measuring facial expressions in natural, real-world settings were similar.
Barrett contrasts emotion perception – inferring emotions from facial expressions – with emotion recognition, which only occurs “if the referred inference has been verified as valid,” meaning that the inferred emotion correctly identifies what is shown in the photograph and the subject validates the result as correct. A meta-analysis of studies that measured “healthy adults from the United States and other developed nations” and used human participants to infer emotions, given a range of emotion categories to choose from, found strong reliability when the individuals were from the same culture and medium reliability when they were from different cultures. However, the results vary across emotion categories: happiness was most accurately predicted, with an average consistency of 93.4% (within culture) and 87.6% (across cultures), while fear was the least accurate, with rates of 71.7% (within) and 58.3% (across). Experiments using methodologies other than the “choice-from-array” method found even less reliability. While emotion perception may be relatively more reliable than expression production, the research shows that emotion recognition is not as accurate as is commonly presumed (except when participants can choose between a range of words or scenarios). Yet, because the results are stronger than chance, researchers and computer scientists often cite them to justify their technologies as relatively reliable – despite the fact that these results are produced under ideal conditions by human participants.
Barrett summarizes these findings by concluding that facial expressions are not reliable “fingerprints” of specific emotions from which to make confident and accurate inferences. Instead, the evidence reinforces the critique that emotion recognition requires much more context – including other biometric and environmental data as well as cross-cultural understanding – than facial expressions alone. Barrett has often pointed to the examples of scowling – often perceived as anger, but which can also signal concentration or a negative reaction to a bad joke – and smiling – usually associated with happiness, but which can also carry tinges of sadness or be used sarcastically. While a result with greater-than-chance reliability “warrants publication in a peer-reviewed journal,” its lower reliability in absolute terms is problematic where the inferences have large impacts on a person’s livelihood. Thus, Barrett cautions technology companies and developers who “are spending millions of research dollars to build devices to read emotions from faces” that “the science of emotion is ill-equipped to support any of these initiatives… no matter how sophisticated the computational algorithms.”
While Barrett’s research challenges the core assumptions behind emotion recognition, other studies have attempted to measure the outcomes and accuracy of existing emotion AI systems. A 2019 study by De’Aira Bryant and Ayanna Howard measured the performance of eight commercially available emotion recognition systems (Affectiva, Google Vision API, Microsoft Emotion API, Amazon Rekognition, Face++, Kairos, Sighthound, and Skybiometry) on five datasets of children’s faces with varying levels of diversity. Eight factors of diversity were measured in the datasets (age, gender, ethnicity, gaze, geographic location of recruitment, clothing, pose, and number of emotion classes) to develop a diversity rating for each dataset, from 0 (no diversity) to 1 (full diversity). The datasets’ ratings varied between 0.36 and 0.67, demonstrating that a lack of diversity persists as a common issue with AI datasets, with potentially negative impacts. The authors conducted a black-box test of the emotion recognition systems and found wide variance in accuracy, with particularly low rates for certain factors – results that could have substantial consequences for children, one of the most vulnerable populations. In terms of a raw measure of accuracy (stated as Matching Score, “MS,” or True Positive Rate by the authors), most systems reported an MS above 90% in correctly classifying “happy,” but there were shockingly low scores elsewhere: Affectiva with an MS of 8.88% for “fear,” Amazon with an MS of 10.06% for “sad,” and Kairos with an MS of 18.55%. Average MS across the different emotion types varied between 43.49% for Kairos and 66.84% for Google. With such weak accuracy in recognizing children’s emotions on the most readily available emotion recognition technologies, it is extremely concerning to use these technologies on children.
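As a rough illustration of the kind of black-box evaluation described above, the sketch below computes a per-emotion matching score as a true positive rate. The function is generic, and the sample labels are made-up placeholders rather than data from the Bryant and Howard study.

```python
# Per-emotion "matching score" (true positive rate): correct predictions
# divided by the number of images whose true label is that emotion.
from collections import defaultdict

def matching_scores(true_labels: list[str], predicted_labels: list[str]) -> dict[str, float]:
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for truth, pred in zip(true_labels, predicted_labels):
        totals[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {emotion: correct[emotion] / totals[emotion] for emotion in totals}

# Made-up placeholder labels, not values from the study.
truth = ["happy", "happy", "fear", "fear", "sad"]
preds = ["happy", "happy", "neutral", "fear", "happy"]
print(matching_scores(truth, preds))
# {'happy': 1.0, 'fear': 0.5, 'sad': 0.0}
```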
Dr. Lauren Rhue conducted a similar black-box study of two popular emotion recognition programs and found that these algorithms suffer from racial bias. Official photographs of NBA players’ faces provide a unique dataset with ideal characteristics: NBA players are all male and mostly – if not all – between 19 and 35 years old, and their photographs are taken professionally, well-lit, and facing front. Using more than 400 of these photographs from the site Basketball Reference, Rhue ran the faces through Face++ and Microsoft’s Face API for emotion analysis. The results showed that Black players were more often categorized as having negative emotions compared to white players. Microsoft’s API scored Black players as three times more contemptuous than white players, while Face++ perceived Black players as twice as angry, three times as afraid, and 20% less happy. After controlling for smiling by matching Black and white players with similar smiles and other characteristics – for example, pairing Black player Darren Collison with white player Gordon Hayward – Collison is rated by Face++ as 20% less happy and 180 times angrier than Hayward. Microsoft’s API fares only slightly better, rating Collison as 5% less happy than Hayward, with a “negligible amount of contempt associated with his facial expression (0.1%).”
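The matched-comparison logic behind this kind of analysis can be sketched roughly as follows. The column names and values are invented placeholders, not the actual Face++ or Microsoft API schema, and the pairing rule is a simplification of the study’s matching on smiles and other characteristics.

```python
# Hedged sketch of a smile-matched comparison of emotion scores across groups.
import pandas as pd

# Invented placeholder scores standing in for emotion-analysis API output.
scores = pd.DataFrame({
    "player": ["A", "B", "C", "D"],
    "group":  ["black", "white", "black", "white"],
    "smile":  [0.82, 0.80, 0.31, 0.33],   # smile intensity returned by the API
    "anger":  [0.12, 0.02, 0.25, 0.05],
    "happy":  [0.70, 0.88, 0.20, 0.35],
})

def matched_gap(df: pd.DataFrame, column: str, tolerance: float = 0.05) -> float:
    """Average Black-minus-white difference in `column` over smile-matched pairs."""
    gaps = []
    for _, row in df[df.group == "black"].iterrows():
        matches = df[(df.group == "white") & ((df.smile - row.smile).abs() <= tolerance)]
        if not matches.empty:
            gaps.append(row[column] - matches[column].mean())
    return sum(gaps) / len(gaps)

print(matched_gap(scores, "anger"))  # positive: higher anger scores for Black players
print(matched_gap(scores, "happy"))  # negative: lower happiness scores for Black players
```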
Nonetheless, these critiques of emotion AI have not stopped the commercialization of this technology. As Crawford argues, “There are powerful institutional and corporate investments in the validity of Ekman’s theories and methodologies. Recognizing that emotions are not easily classified, or that they’re not reliably detectable from facial expressions, could undermine an expanding industry.” Yet companies and developers remain focused on increasing accuracy rates rather than addressing the core problems with the theoretical foundations of emotion recognition. As researcher Arvid Kappas argues, “This is not an engineering problem that could be solved with a better algorithm.”
The emergence of emotion AI has raised conversations around the ethics and responsibilities surrounding the design, development, and use of these technologies. A report authored by Gretchen Greene for the Partnership on AI raises questions that relevant stakeholders need to begin contemplating. Greene asks how emotion AI fits into existing debates around privacy, data collection, and other AI technologies. Building on the critiques by Barrett and Rhue, Greene questions whether the scientific accuracy of emotion recognition is good enough for real-life use, and how emotion AI could discriminate against people based on “disability, national origin, religion, race, age, sexual orientation, or gender.” We must be critical when emotion AI can “impact an important right or opportunity, like access to jobs, housing, or education” – domains that were covered by anti-discrimination legislation in the civil rights era – as well as when it can “reveal sensitive health information or other information that should have special protection,” such as mental illness diagnoses and “depression detection.”
In “An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence,” Desmond Ong outlines a multi-stakeholder framework for guiding the ethical development and use of emotion AI. The core of Ong’s framework is the separation of roles and responsibilities between developers (the researchers, computer scientists, engineers, and designers developing the technology), operators (the companies or other institutions who deploy the technology “and to whom the AI reports the output of any emotion analysis to”), emoters (the individuals whose data are being collected and whose emotions are being analyzed), and regulators. Ong proposes two pillars to guide ethical emotion AI: 1) Provable Beneficence and 2) Responsible Stewardship. The principle of Provable Beneficence is the primary responsibility of developers, mandating that they prove the AI is “beneficial and does no harm” and makes credible predictions. Provable Beneficence also means that the AI is scientifically valid and generalizable, that biases are minimized, and that there are transparency and accountability measures. The principle of Responsible Stewardship is the primary responsibility of operators, who have “an ethical responsibility to the Emoters to ensure proper use and care of their data” when collecting information and making decisions based on the AI’s emotion analysis. Responsible Stewardship also means “(i) adhering to a pre-specified purpose, (ii) studying whether the intended effects differ from actual outcomes, (iii) being judicious about privacy, consent, and data ownership, and (iv) maintaining quality assurance.” Based on these ethical principles, Ong makes recommendations for regulators, including appointing subject-matter experts, adopting “a more agile and responsive model of regulation,” developing an auditing program (such as the US National Institute of Standards and Technology’s Face Recognition Vendor Test program), regulating the advertising of emotion AI, and paying particular attention to use cases where the emoter cannot opt out or where opting out is difficult (such as “AI-assisted hiring, employee and student monitoring, public safety surveillance”).
With research raising questions about the effectiveness, accuracy, and scientific basis of emotion recognition technologies, as well as the potential for substantial harm to individuals from wrong decisions, there is a clear need for policymakers to address and regulate emotion AI. However, the technology and its impacts are not well understood; there has been little discussion or awareness in policy circles and little evidence that policymakers see this as a priority. The few jurisdictions that are beginning to address the issues surrounding emotion AI vary in their approaches, focusing on governing biometric data or job interview hiring rather than on the core problems of emotion AI directly. Moreover, there is a clear gap in Canadian legislation and regulation effectively addressing emotion AI technologies and their potential harms.
It is the responsibility of government to regulate emotion AI, as the private sector is failing to self-regulate and the uses of these technologies span different industries. Developers and operators are using emotion AI without guidance on the technology’s appropriateness under the law, anti-discrimination rules, or human rights. There has not been widespread discussion in industry, outside of academia, about codes of ethics that firms, developers, and operators should abide by for responsible use of emotion AI. Only government can hold emotion AI stakeholders accountable and legally responsible for the appropriate use of their technologies, mitigate risks, and provide legal redress for harms.
Future regulation might focus on “meaningful transparency and understanding for data subjects about what personal data is implicated in system processing, and how,” potentially through mandatory, comprehensive Algorithmic Impact Assessments (AIAs), which aligns with Ong’s principle of Provable Beneficence. And with biometric data being especially sensitive, further development of a policy framework for the use and collection of this data is necessary. In the United States, as of 2021, there is “no federal law [that] regulates the collection of biometric data, including facial recognition data.” However, Illinois, Washington, and Texas have some form of legal protection for biometric data that can serve as an example for Canadian legislation.
Developing an effective regulatory framework must directly address the core problems surrounding emotion AI, including privacy issues, lack of accuracy, bias and discrimination in datasets and algorithmic designs, potential harms, and the questions around the basic scientific premises of the technology. Governance would require a number of different approaches. The first is to investigate the extent of the problem in Canada and to gain a better understanding of how emotion recognition technology works, what the risks are, and who the relevant stakeholders are before government can adequately and effectively address emotion AI. The second is to address privacy issues by updating current privacy regulation or developing new legislation, particularly to govern biometric data used for emotion AI. Most importantly, effective regulation must mitigate the risk of substantial harm in high-stakes contexts by imposing thresholds for the appropriate use of emotion AI, appeal mechanisms and legal redress for harms, or – as a blunt policy instrument – an outright ban on the use of these technologies. Guided by these approaches, both federal and provincial governments must jointly address emotion AI issues through their own jurisdictional spheres, as emotion AI transcends different sectors and policy areas. Regulating emotion AI would not only address emotion recognition technology directly, but – in addressing biometric data in privacy law – would also govern FRT and wearables, and – in addressing substantial harms – automated decision-making systems.
While there have been policy debates about regulating FRT, emotion AI has not been raised in policy conversations in Canada. However, in the discussions of FRT, there have been debates on how, or whether, existing privacy legislation addresses biometric data collection and use, as well as on “no-go zones” – data collection and use practices that are inappropriate. At the federal level, the Personal Information Protection and Electronic Documents Act (PIPEDA) governs the commercial collection and use of data and creates a privacy regulatory framework for Canada. As a provincial example, Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) governs privacy in that province. While these acts attempt to address important factors related to emotion AI, existing privacy regulation is not clear in its application to the modern digital era, nor does it address the core issues surrounding emotion AI.
The Office of the Privacy Commissioner (OPC) has contemplated how PIPEDA applies to biometric data but has yet to provide comprehensive and concrete guidance on how it interprets the legislation with respect to biometrics. The OPC released a discussion paper in 2011 examining “Biometrics and the Challenges to Privacy,” but it is not legally binding. The OPC sees biometrics as “a range of techniques, devices and systems that enable machines to recognize individuals, or confirm or authenticate their identities.” However, that definition – focused on identification and authentication – is too narrow and does not consider how technology is beginning to use biometric data for modern commercial purposes like emotion AI and its use cases. Advertising, for example, does not use biometrics for authentication, but rather to collect and analyze data about how a consumer experiences a product. The OPC recognizes three main challenges biometrics pose to privacy. The first is “covert collection,” which concerns individuals’ awareness that their biometric data is being collected and whether they are informed and asked for consent. The second is “cross-matching,” where the OPC is concerned that collected data is used for a different purpose than the initial objective. Lastly, the OPC is concerned about “secondary information,” where the purpose of the collection is unclear and the data “divulg[es] additional information,” such as health issues or socio-economic status.
The OPC applies a four-part test for the appropriateness of a biometric system: necessity, effectiveness, proportionality, and alternatives. A future emotion AI regulatory framework would require operators and developers to prove that the technology is necessary to solve a specific problem, to demonstrate the effectiveness of the system and its success or failure rates, and to show that the loss of privacy is proportional to the benefits. The fourth principle – “less privacy-invasive alternatives” – is less salient for emotion AI, as there is often a stated purpose for the system. On the other three factors, while it is important for emotion AI operators and developers to address the purpose and the benefits of the technology, the effectiveness test is where emotion AI will most likely fail. As discussed above, the low accuracy rates of emotion AI are problematic where even low failure rates can have a substantial impact on people’s livelihoods. However, while the paper shows that the OPC is studying the issue, it does not constitute official guidance on the interpretation of PIPEDA with respect to biometrics, nor does it impose any legally binding regulation.
The OPC has considered the application of “no-go zones” to privacy issues and released guidance documents on how the OPC interprets PIPEDA for “obtaining meaningful consent” and “inappropriate data practices.” In particular, the OPC cites subsection 5(3) of PIPEDA as the key clause regulating problematic collection and usage of data, which states that:
“An organization may collect, use or disclose personal information only for purposes that a reasonable person would consider are appropriate in the circumstances.”
The guidance document specifically mentions that it is inappropriate to collect and use data for “Profiling or categorization that leads to unfair, unethical or discriminatory treatment contrary to human rights law,” “Collection, use or disclosure for purposes that are known or likely to cause significant harm to the individual,” and “Surveillance by an organization through audio or video functionality of the individual’s own device.” These three factors are the most important for emotion AI. While the OPC recognizes that “profiling or categorization… that could lead to discrimination based on prohibited grounds contrary to human rights law” would be inappropriate, the OPC is ambiguous about what that entails, stating instead that “determining whether a result is unfair or unethical will require a case-by-case assessment.” The OPC does explain what constitutes “significant harm” under subsection 10(7) of PIPEDA, defined as “bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on (one’s) credit record and damage to or loss of property.” This definition provides a starting point for discussing what is inappropriate for emotion AI, though it could be more explicit about how it relates to other anti-discrimination and human rights law in Canada, as well as to harms specific to emotion AI. Lastly, the OPC provides guidance on covert surveillance through an individual’s device, as well as on “so-called consent” in practices that are “grossly disproportionate to the business objective.” This outlines, for example, how emotion AI surveillance in the workplace or at school through wearables, smartphones, or other devices must have a clear, targeted purpose, obtain explicit consent, and collect only the data necessary for that purpose.
These documents provide a starting point for discussing how existing privacy regulation could apply to emotion AI. However, none of these documents unambiguously applies to emotion AI, nor do they address the core problems with emotion recognition technology. Moreover, the OPC’s guidance is not legally binding; it indicates how the Commissioner would interpret existing legislation in its investigations, whose results can be appealed and contested in the courts. And while this analysis has focused on federal regulation, the Information and Privacy Commissioner of Ontario has been mostly silent on these issues. There is a clear regulatory gap that fails to address emotion AI and its potential harms in an effective way. Personal information and privacy legislation needs to be updated for modern digital technology, particularly to address how the collection and use of data – especially biometric data – should be governed and whether it is appropriate to use that data for emotion recognition.
A good first step would be for Innovation, Science and Economic Development Canada (ISED) to strike a task force to study emotion AI technology and its implications, including the prevalence of these technologies’ use on Canadians. While research has uncovered the use of emotion AI by private corporations in other jurisdictions, there has been less investigation of the technology’s use in Canada. Without data about the extent of emotion AI’s use, as well as stakeholder perspectives, it would be difficult to design an effective policy framework. However, emotion AI has garnered little attention in policy discussions and the issue lacks a champion to advocate for action.
The task force could partner with and fund non-governmental organizations and research institutes – like the Schwartz Reisman Institute or the Citizen Lab – to study emotion AI, produce a basic explanation of how it works, and assess its potential impact on individuals. Their findings could be used to establish a working definition of emotion AI and identify potential policy solutions. The process would also allow industry stakeholders to explain the benefits of the technology and concerned advocacy groups to voice their concerns about its harms. The task force would be mandated to identify gaps in the existing regulatory framework and make recommendations.
The federal government can establish a registry, requiring operators of emotion AI systems to register with a federal regulatory body and undergo a third-party AIA or audit. The AIA would have to report on a number of factors about the emotion AI system, including:
However, there are potential problems in implementing this policy solution. The first is that it is unclear which regulatory body would take on this responsibility. Some political figures, like NDP Member of Parliament Charlie Angus, recommend that the Canadian Radio-television and Telecommunications Commission (CRTC) be responsible for regulating AI. However, this would arguably be beyond the mandate of the CRTC and would massively expand the powers of the agency. It would also be difficult to create a new regulatory body and set up a new system, leading to high initial and operational costs.
As many of the existing emotion AI use cases are in areas that affect employment, housing, and health, regulation would fall under provincial jurisdiction under the Canadian constitution. Provinces like Ontario are currently undergoing reviews of their privacy legislation to update regulations for the digital age. The provinces should use this opportunity to consider the harms emotion AI can pose in areas of their constitutional jurisdiction and ban the use of emotion AI and automated decision systems in cases that would be discriminatory and violate civil and human rights.
The provinces can learn from other jurisdictions developing regulation on the use of emotion AI in hiring practices. In 2020, Illinois’ Artificial Intelligence Video Interview Act came into effect, mandating that applicants be informed about the AI measuring their “fitness” for the job, how the technology works, and “what ‘general types of characteristics’ it considers when evaluating candidates,” and requiring explicit consent from the applicant before proceeding. The law also limits access to the recorded video to those who need it and requires companies to delete videos within a month of an applicant’s request. However, Aaron Rieke – a technology rights advocate from Upturn – notes that Illinois’ law does not “guarantee that you can opt out of an AI-based review of your application,” nor does it mandate that the employer make an alternative arrangement. Nor does the law outline recourse for an employer’s violation of the regulation.
More recently, in November 2021, New York City passed a bill that, when it takes effect in 2023, will mandate the disclosure of the use of AI in hiring and allow alternative arrangements for candidates who decline. Employers and vendors may be fined up to $1,500 USD for violating the regulation. The bill also requires that these AI hiring technologies undergo “bias audits,” defined as “an impartial evaluation by an independent auditor… [which tests the] automated employment decision tool to assess the tool’s disparate impact.” However, these audits have been criticized by advocacy groups for being ambiguous and too narrow in scope. The Center for Democracy and Technology argues that the law only applies to the hiring process, even though such tools could still impact “compensation, scheduling, working conditions, and promotions.” The law is also unclear about who would conduct the audits, which could allow employers and vendors to escape strict enforcement. Brookings noted that other jurisdictions are following Illinois and New York City in regulating AI-based hiring, including California and the District of Columbia, and that the federal Equal Employment Opportunity Commission (EEOC) – after pressure from U.S. Senators – recently announced an initiative to investigate these technologies and how they interact with federal civil rights legislation.
If adopted in the Canadian context, policy would have to be enacted at the provincial level, led by Ministries of Labour, Health, and Municipal Affairs and Housing. This regulatory framework would not just concern employment policy but also privacy policy – an area that the Ontario government has begun to review for the digital age. A white paper published in 2021 by the Ontario Ministry of Government and Consumer Services considers the commercial use of “automated decision systems” (ADS) – defined as “any technology that assists or replaces the judgement of human decision-makers” with the use of predictive analytics, machine learning, and other AI techniques – which would cover such hiring technologies. In defining ADS, the paper specifically references “employment decisions… [and] assess[ing] candidates for jobs.” The paper proposes that the organization using an ADS would provide an explanation of the decision and what and how personal data was used to make that decision. The proposal would also prohibit the use of ADS “to make a decision that could significantly affect the individual” without explicit consent. While there is no clear definition for what would constitute a decision that would “significantly affect the individual,” proposed legislation could explicitly state that employment decisions would be covered by the policy. Additional measures proposed by the white paper include the right of the individual to “request the correction of personal information,” comment and contest the decision, and request a human review of the decision. The proposal does leave open the possibility of allowing organizations to collect and use data “establishing… an employment or relationship between the organization and the individual,” which could be read as allowing the use of AI in hiring.
Considering the issues with the scientific foundations of emotion AI, the risk of discrimination against equity-seeking groups, and the potential for significant harm from wrong decisions, provincial governments should enact privacy policies and – as a particular example – employment policies. In particular, this legislation should establish explicit rights to transparency, especially measures for disclosure, contesting decisions, requesting human reviews of decisions, and legal redress for proven harms. Additionally, as per the Center for Democracy and Technology’s concerns, new policies must also cover the use of AI for decisions regarding “compensation, scheduling, working conditions, and promotions.” The white paper’s proposed language currently leaves open the use of an employee’s personal information for “managing or terminating an employment or volunteer-work relationship,” which, as worded, could allow employers to use emotion AI technology to determine pay raises, promotions, and other working conditions based on attributes that an employer deems agreeable.
In developing new privacy policies, provincial governments can also expand the definition of “significantly affect the individual” to cover the use of emotion AI in applications traditionally governed by anti-discrimination civil rights legislation or health law, especially in policy areas under provincial jurisdiction. This includes equal treatment under the Ontario Human Rights Code, 1990 on issues of “occupancy of accommodation” and “employment,” but also financial applications such as loans and insurance. Health decisions should also be covered under this policy framework, especially the diagnosis of mental disorders and other illnesses. These diagnostic decisions should be left to medical professionals, who have more context about an individual’s health and livelihood than an AI technology. The application of emotion AI in these cases would “significantly affect the individual” and could be life-changing. Therefore, it is recommended that provincial governments regulate the use of emotion AI in applications for housing, in job hiring and other employment decisions like promotions or terminations, and in financial applications such as loans or insurance, as well as prohibit these technologies from making health diagnoses. It is also important to protect children by prohibiting the use of emotion AI on children under the age of 18, especially in toys and in schools. Taken together, this policy proposal creates a regulatory framework that ensures proactive disclosure and consumer rights while mitigating harms.
Data used by emotion AI is often biometric in nature. Canadian governments should develop new regulations – either by updating existing privacy laws or enacting new legislation – that govern the collection and use of biometric data, and in particular its use for emotion recognition. It is important to create a clearer understanding of the responsibilities of firms, developers, and operators of emotion AI and to place a higher duty of care on the handling of these more sensitive data.
A prominent example of the regulation of biometric data is Illinois’ Biometric Information Privacy Act, which defines a “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” but notably does not include other data like human biological samples, genetic information, or other health information (though these are covered by other legislation referenced in the Act). For the purposes of emotion AI, a report by the Partnership on AI references the California Consumer Privacy Act (CCPA)’s definition of “biometric information,” which “includes many kinds of data that are used to make inferences about emotion or affective state, including imagery of the iris, retina, and face, voice recordings, and keystroke and gait patterns and rhythms.” Biometric data should also be expanded to include other data used by emotion AI to infer emotion, including “dynamic changes in the autonomic nervous system (ANS)” measured through bodily responses other than facial expressions (cardiovascular, respiratory, perspiration, blood flow, electrical activity in the brain, heart rate, and temperature, among other measures). However, some of these physiological data can also be classified as health data, which has its own set of regulatory principles that allow for legitimate collection and usage. While it is important to regulate the use of these physiological data for emotion AI applications, their inclusion in the language of new biometric data regulation would have to be carefully crafted. There are other cases of emotion AI that are not biometric, particularly sentiment analysis of text or graphics. However, as biometric data are among the most intimate personal information about an individual, and as their collection and use have the potential for substantial harm, regulation specifically targeting biometric data is necessary.
At a minimum, the collection of biometric data and their use in ADS and emotion AI systems must require the explicit consent of the individual (not mere notice), the right to an explanation and to appeal the decision, and alternative accommodations if the individual refuses. Use of biometric data for inferring emotion must carry a higher level of responsibility, instilling Ong's ethical principles of Provable Beneficence and Responsible Stewardship as well as the legal principle of the "duty of care" and clear liability for harms. A 2004 article by Robert J. Currie in the Canadian Journal of Law and Technology analyzed the applicability of the Canadian common law principle of the duty of care to novel cases involving technology. The principle – defined in Donoghue v. Stevenson as "tak[ing] reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour" – has already been established for relationships between doctors and patients or lawyers and clients. However, negligence case law has yet to develop comprehensively for cases involving digital technology, despite the fact that "personal injury is still personal injury; pure economic loss arising from negligent misrepresentation does not become something different because the representation was made by way of e-mail instead of orally." A useful case study raised by Currie is the liability assumed by the company RealNetworks, whose RealPlayer media player contained vulnerabilities that exposed users to cyberattacks. Currie argues, "RealNetworks obviously owes a duty of care to registered users of its product, because negligent construction of the product could foreseeably cause harm to the ultimate consumer, and because products liability is an established category of negligence." Thus, the proposed biometric data legislation should outline the responsibilities of developers and operators of emotion AI in preventing or mitigating harms to individuals, and its language should clarify the liability assumed by developers or operators in cases of personal injury.
As a stronger measure, it is recommended that the biometric data legislation explicitly include language addressing technologies or systems that use biometric data "to recognize, predict, infer, or analyze an individual's emotional state," and that applications covered under this language be prohibited.
In its 2019 report, the AI Now Institute calls for a ban on emotion AI "in important decisions that impact people's lives and access to opportunities." AI Now specifically cites the "contested scientific foundations" of emotion AI as justification for governments to go beyond strict regulation to an overall prohibition of the use of these technologies "in high-stakes decision-making processes." In particular, the report lists specific examples of recommended prohibited uses, "such as who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school." Meredith Whittaker, co-director of AI Now, argued that "We need to ensure, when these systems are used in sensitive contexts, that they are contestable, that they are used fairly… and that they are not leading to increased power asymmetries between the people who use them and the people on whom they're used."
The critical uncertainty about the underlying scientific foundations of emotion AI, together with the challenges to its accuracy and reliability, raises the question of whether governments should explicitly prohibit the use of AI technology for emotion recognition, and to what extent. Given the doubts about the basic scientific premise that emotions can be consistently and correctly inferred from biometric data, governments should consider an outright ban on all uses of emotion AI technology. If the technology does not work reliably and the fundamental science is flawed, why should it be used at all?
At the very minimum, Canadian governments must prohibit the use of emotion AI in high-risk contexts where emotion recognition technologies can significantly affect and harm individuals' livelihoods and threaten their fundamental human rights. A flawed technology that cannot consistently and reliably live up to its claims should not be used to decide housing, employment, or health outcomes, and particularly should not be used on children. This policy should be a cornerstone of new biometric data legislation addressing emotion AI, starting with a ban on the use of these technologies where there is potential for "significant harm," as defined in subsection 10(7) of PIPEDA. However, regulation of emotion AI needs to go beyond that definition, as it relates only to "harm, humiliation, [or] damage," which is more of an ex-post approach outlining how individuals can seek legal remedies after the harm has been done. The definition must align more closely with "significantly affect" in order to regulate emotion recognition technologies proactively, before the damage is done. The Ontario government should also use the opportunity of its review of FIPPA to regulate the use of ADS in applications that can "significantly affect" an individual, and to include explicit language about technologies that attempt to "recognize, predict, infer, or analyze an individual's emotional state." Canadian governments should also govern the use of emotion AI by the public sector, government agencies, border control, and law enforcement. The use of emotion recognition technologies by government is extraordinarily intrusive in individuals' interactions with the state, where government decisions can significantly affect an individual. It is questionable whether any use of emotion AI by government is ever necessary beyond cost-effectiveness. The potential for harm is disproportionately greater than any potential benefits in terms of more efficient government processes or claimed increases in effectiveness for law enforcement, simply because of the power dynamic between individuals and the state.
However, Canadian governments should consider allowing the use of emotion recognition technologies in low-risk contexts that will not significantly harm individuals. The use of emotion AI in advertising, marketing, and media for sentiment analysis of products, commercials, TV shows, or movies carries a low risk of harm and is more akin to conducting user research, particularly if the data are used in aggregate and without personally identifiable information. Detecting a driver's emotional state and fatigue provides more benefits than potential harms by alerting individuals when they need to rest, hopefully leading to lower rates of car accidents. And with careful, strict regulation and explicit consent, emotion AI can offer health benefits as assistive technology, such as helping stroke victims in their recovery. While it is beyond the scope of this report, governments could also allow regulated experimentation with emotion AI: the concept of a "regulatory sandbox" – which allows for real-life testing of technology products under certain conditions and with strict oversight – might be useful in tandem with a broader biometric data legislative framework.
To implement new biometric data legislation, the federal OPC should have its permanent funding increased by $10 million in the 2023-2024 fiscal year, rising gradually to $20 million per year by 2026. This estimate is in line with the projections in the Fall Economic Statement 2020, which included funding for the implementation of the previously proposed Consumer Privacy Protection Act (which has yet to be enacted).
In Ontario, the Information and Privacy Commissioner of Ontario should receive an $8 million funding increase in the 2023-2024 fiscal year over its 2019-2020 budget of $20 million, increasing to $12.5 million per year by 2026.
This report aims to begin the conversation and to raise these issues with Canadian policy professionals. It is recommended that Canadian governments, both federal and provincial, address emotion AI and its impacts in anticipation of the increasing adoption of these technologies. Emotion AI is arguably more concerning than FRT because of the skepticism about the underlying science, reliability, and accuracy of its predictions, and because its potential for harm is much greater. Canadian governments should prioritize privacy, anti-discrimination, and civil and human rights principles in future regulation, and must ban the use of emotion AI in high-risk applications and by government. Most importantly, Canada must not let emotion AI harm children.
Emotion AI is here. The technology is already being used in many aspects of our lives without society being aware. It promises benefits for driving safety, advertising, and health. But it also has the potential to truly harm people's livelihoods by denying individuals opportunities for employment, housing, immigration, or safe refuge. It could have negative health impacts, resulting in injury or death. And its use on children, who are particularly vulnerable to emotional manipulation, is extremely concerning. The policies recommended in this report may be difficult to implement, and they are admittedly blunt policy solutions, but the cost of doing nothing is too great. This is not just a matter for the future – emotion AI is here in Canada, and its use will only grow in the coming years.
Bard, Jennifer S. “Developing Legal Framework for Regulating Emotion AI.” Boston University Journal of Science and Technology Law 27, no. 2 (2021): 271–311.
Barrett, Lisa Feldman, Ralph Adolphs, Stacy Marsella, Aleix M. Martinez, and Seth D. Pollak. “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements.” Psychological Science in the Public Interest 20, no. 1 (July 2019): 1–68. https://doi.org/10.1177/1529100619832930.
Bryant, De’Aira, and Ayanna Howard. “A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity.” In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 377–82. Honolulu HI USA: ACM, 2019. https://doi.org/10.1145/3306618.3314284.
Chan, Milly. “This AI Reads Children’s Emotions as They Learn.” CNN Business, February 17, 2021. https://edition.cnn.com/2021/02/16/tech/emotion-recognition-ai-education-spc-intl-hnk/index.html.
Chen, Angela, and Karen Hao. “Emotion AI Researchers Say Overblown Claims Give Their Work a Bad Name.” MIT Technology Review, February 14, 2020, 7.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press, 2021.
Crawford, Kate, Roel Dobbe, Theodora Dryer, Genevieve Fried, Ben Green, Elizabeth Kaziunas, Amba Kak, et al. “AI Now 2019 Report.” New York: AI Now Institute, 2019. https://ainowinstitute.org/AI_Now_2019_Report.html.
Currie, Robert J. “Of Neighbours and Netizens, Or, Duty of Care in the Tech Age: A Comment on Cooper v. Hobart.” Canadian Journal of Law and Technology, 2004, 9.
Daniels, Jeff. “Lie-Detecting Computer Kiosks Equipped with Artificial Intelligence Look like the Future of Border Security.” CNBC, May 15, 2018. https://www.cnbc.com/2018/05/15/lie-detectors-with-artificial-intelligence-are-future-of-border-security.html.
Department of Finance Canada. “Supporting Canadians and Fighting COVID-19: Fall Economic Statement 2020.” Government of Canada, 2020.
Dusseldorp, Joseph R., Diego L. Guarin, Martinus M. van Veen, Nate Jowett, and Tessa A. Hadlock. “In the Eye of the Beholder: Changes in Perceived Emotion Expression after Smile Reanimation.” Plastic and Reconstructive Surgery 144, no. 2 (August 2019): 457–71. https://doi.org/10.1097/PRS.0000000000005865.
Engler, Alex. “Why President Biden Should Ban Affective Computing in Federal Law Enforcement,” August 4, 2021. https://www.brookings.edu/blog/techtank/2021/08/04/why-president-biden-should-ban-affective-computing-in-federal-law-enforcement/.
Government of Illinois. Biometric Information Privacy Act, 740 ILCS 14/1 § (2008). https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57.
Government of Ontario. “Modernizing Privacy in Ontario: Empowering Ontarians and Enabling the Digital Economy - White Paper,” 2021.
Greene, Gretchen. “The Ethics of AI and Emotional Intelligence: Data Sources, Applications, and Questions for Evaluating Ethics Risk.” Partnership on AI, July 30, 2020.
Guariglia, Matthew. “Police Use of Artificial Intelligence: 2021 in Review.” Electronic Frontier Foundation, January 1, 2022. https://www.eff.org/deeplinks/2021/12/police-use-artificial-intelligence-2021-review.
Heaven, Will Douglas. “Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.” MIT Technology Review, July 17, 2020. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.
Heilweil, Rebecca. “Illinois Says You Should Know If AI Is Grading Your Online Job Interviews.” Vox, January 1, 2020. https://www.vox.com/recode/2020/1/1/21043000/artificial-intelligence-job-applications-illinios-video-interivew-act.
In Machines We Trust. “AI Reads Human Emotions. Should It? (Part 2) (Transcript),” 2020. https://www.technologyreview.com/2020/10/14/1010474/ai-reads-human-emotions-should-it/.
———. “How Close Is AI to Decoding Our Emotions? (Part 1) (Transcript),” 2020. https://www.technologyreview.com/2020/09/24/1008876/how-close-is-ai-to-decoding-our-emotions/.
Information and Privacy Commissioner of Ontario. “A Year Like No Other: Championing Access and Privacy in Times of Uncertainty - Information and Privacy Commissioner of Ontario 2020 Annual Report,” June 24, 2021. https://www.ipc.on.ca/wp-content/uploads/2021/05/ar-2020-e.pdf.
Intel on AI. “Emotion and AI with Rana El Kaliouby.” Intel on AI, December 9, 2020.
Jee, Charlotte, and Will Douglas Heaven. “The Therapists Using AI to Make Therapy Better.” MIT Technology Review, December 6, 2021.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades,” 2018. http://gendershades.org/index.html.
Kaliouby, R. E., R. Picard, and S. Baron-Cohen. “Affective Computing and Autism.” Annals of the New York Academy of Sciences 1093, no. 1 (December 1, 2006): 228–48. https://doi.org/10.1196/annals.1382.016.
Kendrick, Molly. “The Border Guards You Can’t Win over with a Smile.” BBC News, April 17, 2019. https://www.bbc.com/future/article/20190416-the-ai-border-guards-you-cant-reason-with.
Knight, Will. “Job Screening Service Halts Facial Analysis of Applicants.” WIRED, January 12, 2021. https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/.
Lee, Nicol Turner, and Samantha Lai. “Why New York City Is Cracking down on AI in Hiring.” Brookings (blog), December 20, 2021. https://www.brookings.edu/blog/techtank/2021/12/20/why-new-york-city-is-cracking-down-on-ai-in-hiring/.
McKelvey, Fenwick, Brenda McPhail, and Reza Rajabiun. “AI Accountability Can’t Be Left to the CRTC.” Policy Options, February 2, 2022. https://policyoptions.irpp.org/magazines/february-2022/ai-accountability-crtc-oversight/.
McStay, Andrew. “Wearables-at-Work: Quantifying the Emotional Self.” Privacy Laws & Business, March 2017. https://doi.org/10.13140/RG.2.2.22218.98248.
McStay, Andrew, Vian Bakir, and Lachlan Urquhart. “Emotion Recognition Briefing Note - All Party Parliamentary Group on Artificial Intelligence.” Emotional AI Lab, July 7, 2020.
McStay, Andrew, and Gilad Rosner. “Comment on Children’s Rights In Relation To Emotional AI And The Digital Environment,” n.d.
McStay, Andrew, and Gilad Rosner. “Emotional AI and Children: Ethics, Parents, Governance - 2020 Report.” Emotional AI Lab, 2020.
———. “Emotional Artificial Intelligence in Children’s Toys and Devices: Ethics, Governance and Practical Remedies.” Big Data & Society 8, no. 1 (January 2021): 205395172199487. https://doi.org/10.1177/2053951721994877.
Metz, Cade. “Google Glass May Have an Afterlife as a Device to Teach Autistic Children.” The New York Times, July 17, 2019. https://www.nytimes.com/2019/07/17/technology/google-glass-device-treat-autism.html.
Metz, Rachel. “There’s a New Obstacle to Landing a Job after College: Getting Approved by AI.” CNN News, January 15, 2020. https://www.cnn.com/2020/01/15/tech/ai-job-interview/index.html.
Office of the Privacy Commissioner of Canada. “Data at Your Fingertips Biometrics and the Challenges to Privacy,” February 2011. https://www.priv.gc.ca/en/privacy-topics/health-genetic-and-other-body-information/gd_bio_201102/.
———. “Guidance on Inappropriate Data Practices: Interpretation and Application of Subsection 5(3),” May 24, 2018. https://www.priv.gc.ca/en/privacy-topics/collecting-personal-information/consent/gd_53_201805/.
———. “News Release: Privacy Commissioner Issues New Guidance to Help Address Consent Challenges in the Digital Age,” May 24, 2018. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2018/nr-c_180524/.
Ohlheiser, Abby, and Karen Hao. “An AI Is Training Counselors to Deal with Teens in Crisis.” MIT Technology Review, February 26, 2021. https://www.technologyreview.com/2021/02/26/1020010/trevor-project-ai-suicide-hotline-training/.
Ong, Desmond C. “An Ethical Framework for Guiding the Development of Affectively-Aware Artificial Intelligence.” arXiv:2107.13734 [cs], July 28, 2021. http://arxiv.org/abs/2107.13734.
Orlowski, Andrew. “How HR Departments Impose Tech Tyranny in the Woke Workplace.” The Telegraph, March 12, 2021. https://www.telegraph.co.uk/business/2021/03/12/hr-departments-impose-tech-tyranny-woke-workplace/.
Picard, Rosalind W. “Affective Computing.” M.I.T Media Laboratory Perceptual Computing Section Technical Report. Cambridge, Mass: MIT Media Laboratory, 1995.
———. Affective Computing. Cambridge, Massachusetts: MIT Press, 2000.
Poster, Mark. “CyberDemocracy: Internet as a Public Sphere.” In What’s the Matter with the Internet? Electronic Mediations, v. 3. Minneapolis: University of Minnesota Press, 2001.
R v. Mills, [2019] 2 SCR 320 (Canada).
R v. Patrick, [2009] 1 SCR 579 (Canada).
Rhue, Lauren. “Racial Influence on Automated Perceptions of Emotions.” SSRN Electronic Journal, 2018. https://doi.org/10.2139/ssrn.3281765.
Simonite, Tom. “New York City Proposes Regulating Algorithms Used in Hiring.” WIRED, January 8, 2021. https://www.wired.com/story/new-york-city-proposes-regulating-algorithms-hiring/.
Stark, Luke, and Jesse Hoey. “The Ethics of Emotion in Artificial Intelligence Systems.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 782–93. Virtual Event Canada: ACM, 2021. https://doi.org/10.1145/3442188.3445939.
Tracy, Phillip. “Emotion Tracking Remote Learning Spyware Could Ding Your Kid for Looking Bored in Math.” Gizmodo, April 18, 2022. https://gizmodo.com/remote-learning-spyware-tracks-student-emotions-1848806568.
Urquhart, Lachlan, and Diana Miranda. “Policing Faces: The Present and Future of Intelligent Facial Surveillance.” Information & Communications Technology Law, October 28, 2021, 1–26. https://doi.org/10.1080/13600834.2021.1994220.
Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press, 2001.
Wu, Chih-Hung, Yueh-Min Huang, and Jan-Pan Hwang. “Review of Affective Computing in Education/Learning: Trends and Challenges: Advancements and Trends of Affective Computing Research.” British Journal of Educational Technology 47, no. 6 (November 2016): 1304–23. https://doi.org/10.1111/bjet.12324.
Zetlin, Minda. “AI Is Now Analyzing Candidates’ Facial Expressions During Video Job Interviews,” February 28, 2018. https://www.inc.com/minda-zetlin/ai-is-now-analyzing-candidates-facial-expressions-during-video-job-interviews.html.