
Applying AI to Electronic Devices: Emotion Recognition and Protection Perspectives for Vulnerable Individuals

 
 

The rapid proliferation of digitization processes across society over the past two decades – marking a further shift from a digital to an algorithmic society[1] – has driven a steady expansion in the use of artificial intelligence systems, generating new challenges as well as increased risks, especially in the area of data protection.

In this regard, the area most recently affected concerns digital mobile services and applications for the care and well-being of specific groups of individuals (especially the elderly). On the one hand, such services pursue the general interest of improving the quality of life of different categories of patient-users; on the other, they do not reliably safeguard informational self-determination and privacy[2].

When employed for research and innovation, however, digital technologies can yield greater and better safeguarding of users’ health, ensuring a high level of protection[3].

AI can indeed be used in many different ways, including through algorithms[4] – especially in the healthcare field – that enable the rapid collection and processing of large amounts of data and information.

We are facing a full-scale revolution[5] that presents considerable risks and raises numerous questions, especially in view of the sensitivity of the data involved (which often concerns individuals’ health), the attendant privacy requirements, and possible cognitive distortions due to incorrect assumptions in the machine learning process[6].

Among the various uses of AI, the use of algorithms for Active and Healthy Aging[7] is worth noting: it promises significant research achievements as well as tools capable of improving the health and quality of life of individuals[8] of advanced age.

AI systems are therefore now widely used as preventive tools in the pursuit of, for example, healthy aging. This occurs primarily through digital infrastructures and, more specifically, through so-called wearable devices[9]: devices aimed at supporting active aging in one or more areas of the social or personal sphere, with the elderly user freely selecting the activities to engage in according to their own aspirations and motivations. Getting older raises social, medical, and psychological issues that call for specific research profiles and for strategies capable of guaranteeing the elderly a better life perspective, such as enabling them to participate autonomously and actively in social life (and, ultimately, to feel like an integral part of society).

Recent years have seen the development of numerous active aging initiatives and policies, highlighting the inseparable link between digitization and health.

Such projects address a crucial aspect, and a cause of frailty in the elderly: the need to constantly monitor one’s health and psycho-physical state using methods and devices that are as minimally invasive as possible[10].

In this context, AI technologies used for emotion recognition (a field of research that aims to develop algorithms and systems capable of interpreting human emotions[11], often from non-verbal signals such as facial expressions, body language, or voice modulation) are gaining increasing importance. They can leverage the benefits of digital transformation but can also lead to significant disadvantages, since such practices risk exploiting the vulnerability of the very groups of individuals whose privacy most needs protection.
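
To make the technique concrete, the following minimal sketch (in Python) illustrates the typical pipeline of such systems: pre-extracted non-verbal features are fed to a classifier that outputs an emotion label. All data, labels, and feature meanings here are synthetic and hypothetical; tellingly, on features carrying no real emotional signal, accuracy stays near chance, echoing the reliability concerns discussed below.

```python
# Purely illustrative sketch: a toy emotion classifier over pre-extracted
# non-verbal features (e.g., facial-landmark geometry). All data, labels,
# and feature meanings are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "sad"]  # hypothetical label set

# Synthetic stand-in for landmark-derived features (mouth curvature,
# brow distance, voice pitch variance, ...).
X = rng.normal(size=(300, 8))
y = rng.integers(0, len(EMOTIONS), size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With features that carry no real emotional signal, accuracy hovers near
# chance (~1/3), mirroring the scientific-validity concerns in the text.
print("accuracy:", round(clf.score(X_test, y_test), 2))
print("predicted label for one sample:", EMOTIONS[clf.predict(X[:1])[0]])
```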

Indeed, the use of such applications reveals the need for responsible development of these technologies to avoid over-reliance or, worse, outright dependence of the elderly on AI.

Many initiatives have been undertaken in this regard, and just as many acts have been adopted by the European Union[12] in recent years, to establish a regulatory framework governing the prerequisites, limits, and methods of use of AI systems and the spread of digitization processes (specifically in the health sector), so as to ensure greater protection.

Regarding devices for detecting the user’s emotional state[13], one of the matters most recently debated by European institutions (and others)[14] has therefore been the development of an appropriate and transparent framework for the design and subsequent use of AI systems whose function is to recognize individuals’ emotions (the so-called Artificial Intelligence Act, hereinafter the AI Act)[15]. Admittedly, for some years now, specific critical issues have emerged regarding how these systems function and the various risks associated with their use.

A preliminary aspect concerns the forms of discrimination these devices could produce[16]: AI systems built around emotion-recognition patterns peculiar to the culture that developed them risk being unreliable when employed by users from different cultures.

A second point concerns, instead, the individual’s inability to refute the outcomes produced by the algorithm: it would be particularly hard to prove that the algorithm erred when analyzing facial expressions or collecting the user’s biometric data. The final concern relates to the risk of violating fundamental rights and, specifically, the rules on personal data processing. The operation of such technologies requires the processing of large amounts of special-category personal data (especially biometric and health data), which, when combined, can easily lead to the identification of the user, with possible infringement of their right to privacy[17].
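
The identification risk arising from combined attributes can be shown with a toy example. The sketch below uses invented records and attribute names: each attribute alone is shared by several people, but their combination singles out one individual.

```python
# Toy illustration with invented data: attributes that are individually
# innocuous can, once combined, single out one person (re-identification).
records = [
    {"heart_rate_pattern": "A", "gait": "slow",  "zip": "20121"},
    {"heart_rate_pattern": "A", "gait": "brisk", "zip": "20121"},
    {"heart_rate_pattern": "B", "gait": "slow",  "zip": "20121"},
]

def matching(observed: dict) -> list[dict]:
    """Return every record consistent with the observed attributes."""
    return [r for r in records
            if all(r[k] == v for k, v in observed.items())]

print(len(matching({"zip": "20121"})))                  # 3: hidden in the crowd
print(len(matching({"zip": "20121", "gait": "slow"})))  # 2: narrowing down
print(len(matching({"zip": "20121", "gait": "slow",
                    "heart_rate_pattern": "A"})))       # 1: uniquely identified
```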

On this point, and for better coordination between the two bodies of law[18], the AI Act appears to be of paramount importance: complementary to existing data protection legislation, it aims to promote AI systems that are secure, transparent, traceable, and non-discriminatory, and therefore capable of guaranteeing effective protection of data subjects while avoiding violations of privacy, security, and the fundamental rights of the individual.

In particular, for AI systems used for emotion recognition, the risks are compounded by concerns surrounding the scientific validity of their functioning – concerns related not to the technology itself, but to the theory underlying the very possibility of automatically recognizing something as complex and changeable as human emotions. Demands for an outright ban on these systems[19] have therefore seemed entirely legitimate.

In light of these considerations, it should also be noted that the use of AI systems for the collection and analysis of emotions raises distinct concerns, relating both to the protection of individuals’ personal data and to the risk that such systems could be used to manipulate emotions.

In such scenarios, it is necessary to identify remedies and specific measures for developing preventive control and accountability tools (in compliance with the European General Data Protection Regulation[20]) to monitor these technologies. EU-level rules on AI have accordingly been drawn up to govern a conscious use of emotion-recognition devices, favoring less privacy-invasive settings. Such rules are intended to protect the vulnerable by guaranteeing data security, informed user consent, and restrictions on the use of collected personal data, so as to prevent abuse or discrimination[21].
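
What such accountability tools might look like in software can be sketched as follows. This is a hypothetical illustration only: the class and function names are invented, drawn neither from any library nor from the legal texts, and simply show consent-gated, purpose-limited processing with an audit trail.

```python
# Hypothetical sketch (invented names, not a reference implementation):
# consent-gated, purpose-limited processing of biometric samples with an
# audit trail, in the spirit of GDPR accountability and purpose limitation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    allowed_purposes: set[str]  # e.g. {"well-being-monitoring"}
    granted_at: datetime

@dataclass
class AuditLog:
    entries: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.entries.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def process_emotion_sample(sample: bytes, purpose: str,
                           consent: ConsentRecord, log: AuditLog) -> int:
    """Process a biometric sample only if the stated purpose is covered
    by the user's informed consent; every decision is logged."""
    if purpose not in consent.allowed_purposes:
        log.record(f"DENIED user={consent.user_id} purpose={purpose}")
        raise PermissionError(f"no consent for purpose: {purpose}")
    log.record(f"PROCESSED user={consent.user_id} purpose={purpose}")
    return len(sample)  # placeholder for the actual analysis

# Usage: a request for an unconsented purpose is rejected and logged.
consent = ConsentRecord("elder-001", {"well-being-monitoring"},
                        datetime.now(timezone.utc))
log = AuditLog()
process_emotion_sample(b"\x00\x01", "well-being-monitoring", consent, log)
try:
    process_emotion_sample(b"\x00\x01", "advertising", consent, log)
except PermissionError as exc:
    print("rejected:", exc)
print(*log.entries, sep="\n")
```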

Indeed, concerning devices for detecting the user’s emotional state, and despite numerous demands for the inclusion of an absolute ban on emotion recognition in the AI Act, the EU legislator, while acknowledging several concerns, imposed a ban only in specific areas (law enforcement; border management; the workplace and educational institutions[22]). As a result, the latest draft of the AI Act, adopted in June 2023 by the European Parliament, does not contain an absolute ban on emotion recognition through AI.

Specifically, Recital 26c highlights «serious concerns about the scientific basis of AI systems aiming to detect emotions» and recognizes that «emotions or expressions of emotions and perceptions thereof vary considerably across cultures and situations, and even within a single individual». Three fundamental shortcomings of these technologies are then listed: limited reliability, as emotions cannot be unequivocally associated with a set of movements or biological/biometric indicators; lack of specificity, as physical or physiological expressions do not uniquely correspond to certain emotions; and limited generalizability, as the expression of emotions is influenced by context and culture. The amended text also broadens the definition of “emotion recognition system” in Art. 3, paragraph 34, specifying that it is «an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data».

There is therefore a need to better balance the opportunities that AI-based emotion recognition offers for improving human-computer interaction, and to develop safeguard measures that avoid placing vulnerable people in even more difficult conditions.

In this regard, the aim is to ascertain whether, in practice, within the framework of the AI Act and after the provisional agreement[23] reached between the EU Parliament and Council, the limitation on the use of such systems, as opposed to their total ban, represents a “lifesaver”.

Consider, for example, the use of some particularly innovative platforms tied to wearable devices or, again, specific applications popular in the U.S.[24], such as SimSensei[25], used in fields such as mental health to improve patients’ mood, enable a better understanding of their emotional condition, and facilitate the diagnosis of psychological disorders[26].

These applications must satisfy specific requirements from the moment they are first developed: compliance with the principles of privacy by design and by default[27]; protection of and respect for the processing of ordinary and special-category personal data; consistency in the use of biometric data for emotion recognition; adequate transparency toward the patient-user; and specific monitoring, with a dedicated prior impact-assessment mechanism capable of concretely verifying the intelligibility of the AI system. This in turn requires evaluating the different aspects of the legal framework for personal data and AI, especially in the relationship between the EU and the US, which, while highlighting the different predispositions of the regulatory models adopted in the two systems, nevertheless leaves room for unexpected convergences.
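
As a purely illustrative sketch of what privacy by default (Art. 25 GDPR) could mean for such an application, the hypothetical configuration below starts from the least privacy-invasive settings and allows less protective ones only after an explicit, informed opt-in; every name is invented for the purpose of the example.

```python
# Hypothetical privacy-by-default configuration for an emotion-recognition
# wearable. Every field defaults to the least invasive option; relaxing a
# setting requires explicit, informed opt-in. All names are invented.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PrivacyConfig:
    on_device_processing: bool = True    # raw biometrics never leave the device
    store_raw_biometrics: bool = False   # keep only derived, aggregated scores
    retention_days: int = 7              # shortest retention compatible with care
    share_with_third_parties: bool = False

def relax(config: PrivacyConfig, user_opted_in: bool, **changes) -> PrivacyConfig:
    """Permit less protective settings only with explicit informed consent."""
    if not user_opted_in:
        raise PermissionError("settings may be relaxed only with explicit consent")
    return replace(config, **changes)

default = PrivacyConfig()  # protective out of the box, no user action needed
print(default)
custom = relax(default, user_opted_in=True, retention_days=30)
print("retention after opt-in:", custom.retention_days)
```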

In conclusion, the AI Act is an important step forward in regulating the use of AI, including emotion recognition devices. Although there are positive aspects to the use of such technologies, it is essential to consider the negative consequences arising from their use and to take appropriate measures to protect the processing of personal data and prevent discrimination. The implementation of effective regulations and controls can ensure a responsible and ethical use of AI in the context of emotion recognition. Therefore, it is recommended that technologies be developed responsibly, taking into account the vulnerabilities[28] of those involved and the sensitive concerns regarding their privacy.

The aim is to create AI systems that do not discriminate against individuals/users and that focus primarily on the needs of vulnerable people (including the elderly), to bring them closer to inclusive and non-discriminatory technology[29].

Essentially, the hope is to strike a balance between the technical aspects of developing new AI solutions and their optimal use, directed solely at benefiting individuals who find themselves in particularly fragile and vulnerable conditions.

 

References

[1] G. CERRINA FERONI (Italian Data Protection Authority), Intelligenza artificiale e ruolo della protezione dei dati personali, 2023, https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9855742.

[2] R. CALO, Privacy, Vulnerability and Affordance, DePaul L. Rev., 66, 2017, p. 592.

[3] V. SALVATORE, L’Unione europea disciplina l’impiego dell’intelligenza artificiale e dei processi di digitalizzazione anche al fine di promuovere la tutela della salute, in V. SALVATORE (ed.), Digitalizzazione, intelligenza artificiale e tutela della salute nell’Unione europea, Giappichelli, Torino, 2023, p. 4.

[4] S. WACHTER, The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination, Tulane Law Review, 2022, 97, p. 9; H. STEEGE, Algorithm-Based Discrimination by Using Artificial Intelligence: Comparative Legal Consideration and Relevant Areas of Application, European Journal of Privacy Law & Technology, 1, 2021, p. 57.

[5] D. CHANG, AI Regulation for the AI Revolution, Singapore Comparative Law Review, 2023, 130-170.

[6] W. N. PRICE II, Problematic Interactions between AI and Health Privacy, no. 4, 2021, 925-936.

[7] M. McTEAR; K. JOKINEN; M. M. ALAM; Q. SALEEM; G. NAPOLITANO et al., Interaction with a Virtual Coach for Active and Healthy Ageing, Sensors, 23, 2023, p. 2748, https://doi.org/10.3390/s23052748; A. BRUNZINI; M. CARAGIULI; C. MASSERA; M. MANDOLINI, Healthy Ageing: A Decision-Support Algorithm for the Patient-Specific Assignment of ICT Devices and Services, Sensors, 23, 2023, p. 1836, https://doi.org/10.3390/s23041836; M. MENASSA; K. STRONKS; F. KHATAMI; Z. M. ROA DÍAZ; O. PANO ESPINOLA et al., Concepts and Definitions of Healthy Ageing: A Systematic Review and Synthesis of Theoretical Models, eClinicalMedicine, 2023, p. 56, https://doi.org/10.1016/j.eclinm.2022.101821; A. PALIOTTA, Successful, Active and Healthy Ageing: differenze e similarità nell’approccio al tema dell’invecchiamento, Vita e Pensiero, 2022, 473-492; G. ÅGREN; K. BERENSSON, Healthy Ageing – A Challenge for Europe, 2006, 9-225, https://ec.europa.eu/health/ph_projects/2003/action1/docs/2003_1_26_frep_en.pdf.

[8] EHDEN (European Health Data & Evidence Network), Protecting People while Using their Health Data for Research, a Framework for Understanding Relative Responsibilities and Roles, 2022, https://www.ehden.eu/protecting-people/.

[9] A. ZINZUWADIA; J. P. SINGH, Wearable devices—addressing bias and inequity, The Lancet Digital Health, 2022, https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00194-7/fulltext.

[10] L. COLONNA, Artificial Intelligence in the Internet of Health Things: Is the Solution to AI Privacy More AI?, Boston University Journal of Science and Technology Law, 27, no. 2, 2021, 312-344.

[11] European Data Protection Supervisor (EDPS), TechDispatch on Facial Emotion Recognition, issue 1, 2021, p. 1; S. B. DAILY, Affective Computing: Historical Foundations, Current Applications and Future Trends, in Emotions and Affect in Human Factors and Human Computer Interaction, Elsevier, 2017, 213-231.

[12] M. EBERS, Standardizing AI: The Case of the European Commission’s Proposal for an Artificial Intelligence Act, in L. A. DI MATTEO; N. CANNARSA; C. PONCIBÒ (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics, Cambridge Law Handbooks, 2022; A. RENDA; A. ENGLER, What’s in a Name? Getting the Definition of Artificial Intelligence Right in the EU’s AI Act, CEPS Explainer, 2023.

[13] Y. CAI; X. LI; J. LI, Emotion Recognition Using Different Sensors, Emotion Models, Methods and Datasets: A Comprehensive Review, Sensors, Basel, 2023, 1-36; T. R. MOSLEY, AI Isn’t Great at Decoding Human Emotions. So Why Are Regulators Targeting the Tech?, MIT Technology Review, 2023, https://www.technologyreview.com/2023/08/14/1077788/ai-decoding-human-emotions-target-for-regulators/.

[14] The White House, FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/; Congressional Research Service, L. A. HARRIS; C. JAIKARAN, Highlights of the 2023 Executive Order on Artificial Intelligence for Congress (R47843), 2023, https://crsreports.congress.gov.

[15] European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, {SEC(2021) 167 final} - {SWD(2021) 84 final} - {SWD(2021) 85 final}, COM(2021) 206 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206; European Parliament, Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI, 2023, https://www.europarl.europa.eu/news/it/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.

[16] M. HILDEBRANDT, Discrimination, Data-Driven AI Systems and Practical Reason, European Data Protection Law Review (EDPL), 7, 3, 2021, 358-366.

[17] G. D’ACQUISTO, Intelligenza artificiale, obiettivo regole privacy per renderla “umana”, Agendadigitale, 2021, https://www.agendadigitale.eu/cultura-digitale/intelligenza-artificiale-e-protezione-dati-le-regole-per-comprendere-il-senso-della-tecnologia/.

[18] S. WACHTER; B. MITTELSTADT, A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI, Columbia Business Law Review, no. 2, 2019, 469-620.

[19] V. MARDA; E. JAKUBOWSKA, Emotion (Mis)Recognition: Is the EU Missing the Point?, European Digital Rights (EDRi), 2023, https://edri.org/our-work/emotion-misrecognition/; E. JAKUBOWSKA, Prohibit Emotion Recognition in the Artificial Intelligence Act, Access Now, 2022, 1-5, https://www.accessnow.org/wp-content/uploads/2022/05/Prohibit-emotion-recognition-in-the-Artificial-Intelligence-Act.pdf.

[20] EUR-Lex, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation), https://eur-lex.europa.eu/eli/reg/2016/679/oj.

[21] Council of Europe, Ad hoc Committee on Artificial Intelligence (CAHAI), Report CAHAI(2020)23, 2020, 2-56, www.coe.int/cahai, «The prevention of harm is a fundamental principle that should be upheld, in both the individual and collective dimension, especially when such harm concerns the negative impact on human rights, democracy and the rule of law. The physical and mental integrity of human beings must be adequately protected, with additional safeguards for persons and groups who are more vulnerable. Particular attention must also be paid to situations where the use of AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens»; Parliamentary Assembly of the Council of Europe, Preventing Discrimination Caused by the Use of Artificial Intelligence, Resolution 2343 (2020), https://pace.coe.int/en/files/28807/html.

[22] Art. 5 AI Act.

[23] European Parliament, Amendments Adopted by the European Parliament on 14 June 2023 on the Proposal for a Regulation of the European Parliament and of the Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.

[24] N. SHEN, AI Regulation in Health Care: How Washington State can Conquer the New Territory of AI Regulation, Seattle Journal of Technology, Environmental & Innovation Law (SJTEIL), 13, no. 1, 2023, 1-20; N. NI LOIDEAIN; R. ADAMS; D. CLIFFORD, Gender as Emotive AI and the Case of ‘Nadia’: Regulation and Ethical Implications, SSRN, 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3858431.

[25] D. DEVAULT; R. ARTSTEIN; G. BENN; T. DEY et al., SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support, in Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2014, 1061-1068.

[26] L. MONTALBANO, Brain-Machine Interfaces and Ethics: A Transition from Wearable to Implantable, Journal of Business and Technology Law, 16, no. 2, 2021, 191-222; C. BURR, N. CRISTIANINI, J. LADYMAN, An Analysis of the Interaction Between Intelligent Software Agents and Human Users, Minds and Machines, 28, 2018, p. 735.

[27] Art. 25 GDPR.

[28] G. MALGIERI; J. NIKLAS, Vulnerable Data Subjects, Computer Law & Security Review, 37, 2020, p. 1.

[29] European Parliament, EU AI Act: first regulation on artificial intelligence, 2023, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
