REFLECTIONS ON THE LEGAL DEFINITION OF ARTIFICIAL INTELLIGENCE
-
7 January 2022
Introduction
The question of what can be classified as artificial intelligence (“AI”) is a highly debated one, and there is no uniformly agreed-upon definition of AI. In fact, what is perceived as AI is a dynamic issue dependent on technological development as well as society’s understanding of it. The so-called “AI Effect” or “odd paradox” explains this dynamic relationship as follows:
“AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges. The same pattern will continue in the future. AI does not “deliver” a life-changing product as a bolt from the blue. Rather, AI technologies continue to get better in a continual, incremental way.” [1].
Despite the AI Effect and the resulting dynamic scope of AI, different definitions have been offered in the literature to facilitate the understanding of the concept. According to the commonly cited definition by Nilsson, “artificial intelligence is that activity devoted to making machines intelligent” [2], while Poole and Mackworth define AI as “the field that studies the synthesis and analysis of computational agents that act intelligently” [3]. These broad definitions may be of assistance in recognizing AI in daily life, but they also suffer from the inherent vagueness of the concept of “intelligence”. Acknowledging this ambiguity, both Nilsson and Poole & Mackworth proceed to explain their understanding of intelligence. Nilsson states that “intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” [2], whereas Poole & Mackworth note that an agent acts intelligently provided the following conditions are met: “(i) what it does is appropriate for its circumstances and its goals, taking into account the short-term and long-term consequences of its actions, (ii) it is flexible to changing environments and changing goals, (iii) it learns from experience, and (iv) it makes appropriate choices given its perceptual and computational limitations” [3]. Demystifying “intelligence” certainly helps in determining whether we have an AI, but, again, the ambiguity survives in expressions such as “learning” and “appropriateness”.
Broad and vague definitions are not always problematic. On the contrary, when the purpose is to explain a concept or convey a public message, they are often preferable to strict and technical expressions, as they simplify the task by leaving out marginal cases and withstand the passage of time much longer. However, in such cases, naming something as AI does not trigger obligations or responsibilities, which becomes a crucial difference when it comes to law and the regulation of AI.
The regulation of AI is another ongoing debate, the prerequisite of which is to identify what falls under the term “AI”. Similar to the technical debate, there is no uniformly agreed-upon legal definition of AI either. Unlike an ordinary classification made for illustrative or statistical purposes, a legal classification is mostly linked to a set of rights and obligations. Given the nature of legal provisions, using broad and ambiguous expressions might cause unexpected consequences. Thus, clearly delineating the scope of a provision that would apply to a specific concept such as AI is an exercise that might have significant real-life consequences.
Accordingly, in this piece, I will first explain that not all technological developments merit the introduction of a new regulation. Secondly, I will argue that the majority of definitions provided in the Turkish literature are too broad to capture the regulatory concerns related to modern AI. I will proceed with the argument that we should stop seeking a one-size-fits-all definition of AI and focus on the unique and/or novel features of the system at hand for each regulatory question. Finally, I will conclude by identifying the features that I find most relevant when determining whether a given AI is already regulated or whether there is a need for new regulation.
A. When should we introduce new rules?
Regulation of technology is not a recent question. Back in 1996, John Perry Barlow published the famous “A Declaration of the Independence of Cyberspace” [4], which is one of the most prominent pieces of the so-called cyberlibertarian movement. Interaction in cyberspace was indeed different from interaction in the physical world, which had been the norm until then. Barlow expressed how disruptive he found cyberspace vis-à-vis traditional state intervention with the following paragraphs:
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
…
Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.
…
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.”[4]
Barlow was not wrong in his emphasis on cyberspace’s ability to remove physical borders. The Internet made it possible for all of us to connect, communicate and interact with other people all around the world almost in real time, and proved him right. However, he erred by ignoring that actors in cyberspace kept their connections to the physical world on various levels and that some general legal regimes, such as tort law, the rules on unfair competition and contract law, among others, may apply not only to physical matter but also to intangible things, data, or other emerging technologies, regardless of how disruptive they are claimed to be.
Today, more than twenty-five years after Barlow’s declaration, we can see how differently cyberspace has been shaped compared to his vision, as we are living in a world where cyberspace itself and most of our interactions therein are highly regulated. Some of these regulations are long-existing legal regimes or principles, whereas others have been introduced as a response to novel issues. The application of the well-established freedom of expression to cyberspace and the specific provisions on distance sales agreements constitute examples of the former and the latter, respectively. Furthermore, a new regulation may be structured as a hybrid comprised of existing and new rules simultaneously. To illustrate, liability regimes for internet service providers generally borrow traditional concepts such as intent or the link of causality from the civil liability doctrine. I believe that this nuance, a proper analysis and synthesis of existing rules and newly adopted ones, is key to efficient and smooth regulation.
In order to determine how a new piece of technological development is placed within the existing regulatory framework, we need to prefer substance over form and go beyond the label of the technology to examine how it does what it claims to do. The recent rise of cryptocurrencies provides a good example of such an analysis. Although they are mostly labeled as cryptocurrencies, with the emphasis on “currencies”, it is generally accepted that, legally, they are neither money nor currency [5-8]. On the other side of the spectrum, even though it was initially claimed that they were not electronic money either [9], it is now argued that some crypto assets may qualify as such both under European Union (“EU”) law [5] and under Turkish law [6]. Similarly, some crypto assets can be classified as securities or capital market instruments, which removes the need to introduce a new regulation from scratch [10, 11]. This analysis not only saves the time and resources that would otherwise be wasted on a redundant piece of regulation but also prevents potential conflicts in advance.
Taking one step further into the core issue of this piece, it must be shown how form obscures substance in the case of AI. In most cases, AI takes the form of a computer program, which is familiar to most legal systems to some extent. Computer programs are mainly protected by copyright under both EU law and Turkish law. Civil liability regimes apply to damages and other liability issues arising from the use of computer programs as tools by natural persons. They are also easily governed by different types of contracts. Then, for the purposes of regulating AI, the question becomes whether and under what conditions an AI becomes more than a computer program, exceeds the scope of the current regulatory framework, and, thus, merits the introduction of a new regulation. This question also goes directly to the heart of our argument on existing legal definitions of AI. A useful legal definition of AI should distinguish between an ordinary computer program labelled as AI and an AI that differs from ordinary computer programs, which shall be examined in detail in the following sections.
B. Ending the misleading quest to humanize AI
There is no legal provision specifically defining or addressing AI under Turkish law. On the other hand, there are various definitions provided in the literature. As far as our research is concerned, all opinions associate AI with the ability to imitate humans to some extent. In this group, Kangal and Kapancı define AI with reference to the ability of a machine to imitate human behavior [12, 13]. Caşın et al., Akkurt, Aksoy, and Kılıçarslan state that AI is used to refer to the ability of a machine or an IT system to imitate cognitive activities, such as learning and problem-solving, that are mainly associated with human intelligence [14-17]. Bak and Bozkurt Yüksel grant some weight to the perception created in the public and define AI as machines that undertake tasks that appear intelligent when carried out by humans [18, 19]. Çekin defines AI by referring to autonomy in addition to the imitation of human behavior [20]. Ozan Özparlak refers not to imitating but to automating human behavior [21]. The Turkish National AI Strategy 2021-2025 goes a step further to synthesize all these definitions and states as follows:
“AI is defined as the ability of a computer or computer-controlled robot to perform various activities in a manner similar to that of intelligent creatures. The term AI is used for systems equipped with human cognitive abilities such as reasoning, meaning discovery, generalization or learning from past experiences in dynamic and uncertain environments”. [22]
We argue that the majority of these definitions are problematic on different grounds. However, before elaborating further on this issue, the question of what makes AI different should also be examined. Although AI as an expression dates back to the 1950s [23], the recent proliferation and increased recognition of AI is due to a shift in the computing paradigm. Traditional instruction-based programming relies on the coder to explicitly hard-code, step by step, all instructions that will be executed on a given input to create an output. Naturally, in such a setting the output is, in principle, foreseeable, and the coder has a significant level of control over both the code and the output. The difference of modern AI is that it relies on a subset of algorithms that operate not by following explicit instructions but by analyzing data and recognizing patterns therein. These are called “machine learning (“ML”) algorithms”, and they primarily try to create outputs that are similar to the data they have been trained on. To put things into perspective, consider a simple operation on two integers “a” and “b” to find the resulting integer “c”: in traditional programming, the coder explicitly instructs the code to “add a to b”, “subtract a from b” or “multiply a by b” to calculate “c”, whereas ML algorithms are provided with a set of values of “a”, “b” and “c” and are then expected to find what the operation is. Once found, this operation is built into a model that can subsequently be run with new inputs. As the model to which new inputs are fed is created by ML algorithms, the coder has less control over the operations of the code and, thus, the output is less foreseeable. The more data an ML algorithm is fed, the less predictable or foreseeable its output becomes. Hence, what primarily differs between ordinary computer programs and those based on ML algorithms is the degree of control the coder possesses over the operations and the level of foreseeability of the output.
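To make this contrast concrete, the following is a minimal sketch in Python of the two paradigms described above. It assumes that NumPy and scikit-learn are available; the variable names and the choice of linear regression are illustrative only and are not drawn from any particular AI system discussed in this piece.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Traditional programming: the coder hard-codes the operation itself,
# so the output is fully foreseeable.
def compute_c(a: int, b: int) -> int:
    return a + b  # the rule "add a to b" is written explicitly by the coder

# Machine learning: the coder only provides example values of "a", "b" and "c";
# the algorithm infers a model approximating the underlying operation.
examples = np.array([[1, 2], [3, 5], [10, 4], [7, 7], [2, 9]])
targets = np.array([3, 8, 14, 14, 11])  # in every example, c = a + b

model = LinearRegression().fit(examples, targets)

# The learned model is then run on new inputs; the coder never wrote "a + b".
print(compute_c(6, 8))           # 14, by explicit instruction
print(model.predict([[6, 8]]))   # approximately 14, by a learned pattern
```

In this toy setting the learned model happens to recover the operation almost exactly; with noisier or richer data, the coder’s control over what the model actually computes, and hence the foreseeability of its outputs, decreases accordingly.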
Having explained the general functioning of modern AI and its main differences from traditional programming, the problematic aspects of the above-mentioned definitions may be explored in detail. First of all, the reference to machines or computers is confusing and misleading, since the characteristic feature of modern AI lies within its novel programming paradigm. A physical, robot-like apparatus is not required for AI, and, in most cases, the hardware component is nothing more than a simple host or storage space. Secondly and similarly, the constant reference to the imitation of human behavior or cognitive tasks is also misleading, given that AI is based solely on statistical calculations. AI neither thinks nor learns as humans do. It does not carry out cognitive tasks either. It simply analyses data and recognizes patterns so that it can create a mathematical model through which users can create outputs that are similar to the ones created by humans. On a different note, despite this similarity, one must also be cautious about defining AI by referring only to the nature of the outputs. Computer programs based on traditional programming can also produce similar outputs. Making calculations or playing games are examples of tasks that are traditionally associated with humans, but standard calculators and the first version of Arthur Samuel’s checkers-playing machine were actually based on traditional programming and not on ML techniques [24].
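To illustrate why output-focused definitions fall short, below is a minimal sketch in standard-library Python with illustrative names; it is a stand-in for the game-playing examples above, not Samuel’s program. It is a fully hard-coded strategy for a simple take-away game whose moves may look intelligent even though the program follows nothing but explicit instructions.

```python
# A hard-coded optimal strategy for a simple take-away game: players remove
# 1-3 stones in turns and whoever takes the last stone wins. The known winning
# rule is to leave the opponent a multiple of four.
def best_move(stones_left: int) -> int:
    # No data, no training, no pattern recognition: just a rule the coder
    # wrote out in advance, so every output is fully foreseeable.
    remainder = stones_left % 4
    return remainder if remainder != 0 else 1  # 1 is an arbitrary fallback

for pile in (21, 10, 7, 4):
    print(pile, "->", best_move(pile))  # the moves look clever, yet nothing is learned
```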
C. Functional definition of AI
It is apparent that definitions referring to the imitation of human behavior or cognitive skills are not useful, considering that they are inaccurate from the procedural point of view and, from the output-focused perspective, wide enough to cover ordinary computer programs labelled as AI alongside AI proper. In this regard, we argue that we should stop drafting broad and impractical definitions solely for the sake of having a definition. Instead, we should focus on drafting multiple but functional definitions that are tailor-made, taking the relevant regulatory concerns into account.
What does a functional definition look like? Firstly, it should distinguish between ordinary computer programs that are already regulated and AIs. Secondly, it should take AI in the form of a computer program as its default starting point but provide additional provisions for cases where an AI is integrated into a physical apparatus allowing it to interact with the physical world. Thirdly, it should take into account the interaction between the characteristics of modern AI and the relevant subject matter. If the regulation is about liability issues related to AI systems, the focus should be on the unforeseeability of the outputs and its effects on the link of causality, whereas in a regulation on AI-generated works, the focus should be on the control of the actors involved in the generation process and its effects on originality.
Conclusion
Not every novel technology merits a new regulation. Before jumping to the conclusion that there is a genuine need for the introduction of new rules, the question of whether and to what extent the existing regulatory framework could apply to and handle the risks associated with the technology in question should be answered. The situation is the same with AI. The definitions currently provided in the Turkish legal literature are not practical, as they are mostly very broad and suffer from the misleading inclination to refer to the imitation of human behavior or human cognition. A legal definition of AI should distinguish between what is already regulated and what should be regulated; avoid referring to human behavior or cognitive tasks; and be functional in the sense that it refers to the features relevant to the purpose as well as the subject matter of the regulation at hand.
References
1. STUDY PANEL, Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence. 2016, Stanford University.
2. NILSSON, N.J., The Quest for Artificial Intelligence: A History of Ideas and Achievements. 2010, Cambridge; New York: Cambridge University Press. 562.
3. POOLE, D.L. and A.K. MACKWORTH, Artificial Intelligence: Foundations of Computational Agents. 2010, New York: Cambridge University Press.
4. BARLOW, J.P., A Declaration of the Independence of Cyberspace. 1996.
5. EUROPEAN BANKING AUTHORITY, Report with Advice for the European Commission. 2019.
6. GÜÇLÜTÜRK, O.G., Türk Hukukunda Kripto Varlıkların Para ve Elektronik Para Niteliğinin İncelenmesi. REGESTA, 2020. 4(3): p. 383-408.
7. KARAMANLIOĞLU, A., Son Gelişmeler Işığında Kripto Paraların Hukuki Niteliği ve Kripto Para Borsalarına Dair Tespit ve Önerileri. Kadir Has Üniversitesi Hukuk Bülteni, 2021.
8. TURANBOY, A., Kripto Paraların Ortaya Çıkmaları ve Hukuki Nitelikleri. Banka ve Ticaret Hukuku Dergisi, 2019. XXXV(3): p. 47-62.
9. EUROPEAN BANKING AUTHORITY, EBA Opinion on ‘virtual currencies’. 2014.
10. SEC, Report of Investigation Pursuant to Section 21(a) of the Securities Exchange Act of 1934: The DAO. 2017.
11. YÜKSEL, S.H. and O.G. GÜÇLÜTÜRK, Kripto Varlıkların İlk Arzı (ICO) ve Türk Hukukunda İlgili Düzenlemelerin Tespiti, in Gelişen Teknolojiler ve Hukuk I: Blokzincir, E.E. AKSOY RETORNAZ and O.G. GÜÇLÜTÜRK, Editors. 2020: İstanbul. p. 181-254.
12. KANGAL, Z.T., Yapay Zeka ve Ceza Hukuku. 2021.
13. KAPANCI, K.B., Özel Hukuk Perspektifinden Bir Değerlendirme: Yapay Zeka ve Haksız Fiil Sorumluluğu, in Gelişen Teknolojiler ve Hukuk II: Yapay Zeka, E.E. AKSOY RETORNAZ and O.G. GÜÇLÜTÜRK, Editors. 2021. p. 127-193.
14. AKKURT, S.S., Yapay Zekanın Otonom Davranışlarından Kaynaklanan Hukuki Sorumluluk. Uyuşmazlık Mahkemesi Dergisi, 2019(13): p. 39-59.
15. KILIÇARSLAN, S., Yapay Zekanın Hukuki Statüsü ve Hukuki Kişiliği Üzerine Tartışmalar. Yıldırım Beyazıt Hukuk Dergisi, 2019(2): p. 363-389.
16. CAŞIN, M.H., D. AL, and N.D. BAŞIR, Yapay Zeka ve Robotların Eylemlerinden Kaynaklanan Cezai Sorumluluk Sorunu. Ankara Barosu Dergisi, 2021. 2021(1): p. 1-73.
17. AKSOY, H., Yapay Zekalı Varlıklar ve Ceza Hukuku. International Journal of Economics, Politics, Humanities & Social Sciences, 2021. 4(1): p. 10-27.
18. BAK, B., Medeni Hukuk Açısından Yapay Zekanın Hukuki Statüsü ve Yapay Zeka Kullanımından Doğan Hukuki Sorumluluk. Türkiye Adalet Akademisi Dergisi, 2018(35): p. 211-232.
19. BOZKURT YÜKSEL, A.E. and B. BAK, Futurist Hukuk. 2018.
20. ÇEKİN, M.S., Yapay Zeka Teknolojilerinin Hukuki İşlem Teorisine Etkileri. 2021: On İki Levha Yayınları.
21. OZAN ÖZPARLAK, B.O., Büyük Veri Çağında Yapay Zeka Sistemlerinin Çalışma İlişkilerinde Kullanımı: Hukuki Bir Değerlendirme. 2021: On İki Levha Yayınları.
22. MINISTRY OF INDUSTRY AND TECHNOLOGY and DIGITAL TRANSFORMATION OFFICE, National Artificial Intelligence Strategy. 2021.
23. MCCARTHY, J., et al. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 1955. 27.
24. SAMUEL, A.L., Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 1959. 3(3): p. 210-229.
-
Osman Gazi Güçlütürk
Lecturer (Dr.), Kırklareli University Faculty of Law, Department of Information Technology Law
[email protected]