Regulating artificial intelligence: challenges and perspectives

Friday, March 31, 2023

Updated at 07:49

This paper was presented at the conference "Regulating artificial intelligence: challenges and perspectives" (March 30, 2023), coordinated by Professor Emmanuel R. Goffi, Co-Director of the Global AI Ethics Institute.

I would like to present some central points of my recent research, developed at the Institute for Advanced Studies (Oscar Sala Chair) of the University of São Paulo and at the Ethikai Institute (ethics as a service), and to point out perspectives for further research I intend to keep pursuing on AI governance, compliance, AI ethics, and the protection of fundamental rights.

Regarding the question put to Panel 3, "Divergent perspectives: Is a regulation of AI desirable and/or possible?", my answer is that it is necessary; therefore it must be made feasible, and we should make every effort to achieve it. In my research, and also at the Ethikai Institute, we are developing a framework for risk impact assessment, designed for the adequate protection of fundamental rights in AI contexts and for addressing these challenges specifically in Brazil (and in countries of the Global South), since we understand this will help in the development of an inclusive, democratic, multicultural, and thus anthropophagic AI.

The framework is based on the so-called Epistemologies of the South (Aníbal Quijano) and on the multiple dimensionality of fundamental rights. We also intend to contribute to the discussion of some important European Union documents on the regulation of AI, where we see certain fragilities in the protection of fundamental rights, responding to criticism of the fixed AI risk levels adopted in documents such as the AI Act, the White Paper on AI, and so on.

First of all, we understand it is crucial to ask why AI should be regulated through hetero-regulation rather than by self-regulation alone. In my view, this is because we need an inclusive and sustainable vision in order to understand the difference. Along this line, it would be useful to combine efforts, in the sense of Angela Davis's idea of intersectionality: we need laws along with ethical principles, and we also need self-regulation through good practices, governance, and compliance.

Above all, the laws we need are not only principle-based ones but also laws that support the practices just mentioned. Furthermore, what is needed is the so-called proceduralization of such practices, as in the area of data protection under the GDPR (the European Union's General Data Protection Regulation), which accomplishes this much better than the LGPD in Brazil. Despite its positive aspects, our law shows its fragility precisely by not providing proceduralization and incentives for good compliance practices. As we know by now, procedures are essential to apply principles in concrete cases, taking their particular circumstances into account.

In the book "The Rule of Law in Cyberspace," coordinated by Gilmar Ferreira Mendes and Thomas Vesting, the importance of legislating cyberspace is widely emphasized, and the same logic should be applied to AI, avoiding the usual motto of tech companies, "move fast and break things," as well as the Chinese approach of regulating only after AI has caused damage.

After all, we live in the information society, the data society, society 5.0, in the era of post-humanism and transhumanism, during the "re/turn of/to the non-human" (Grusin). At the same time, new challenges and opportunities arise with AI, with most companies and governments doubting whether they are prepared to deal with such issues as AI ethics and AI governance in an environmentally sustainable and socially inclusive way. On the other hand, there is not enough scientific research in the humanities focused on the necessary interdisciplinary approach, gathering experts from all the main fields involved, for us to adequately address the complexity of these issues.

There is, therefore, a need to elaborate the epistemological and methodological foundations for the construction of compliance instruments focused on the principle of prevention and the adequate protection of fundamental rights. Only then can we really start talking about algorithmic justice and effective respect for fundamental rights, in the sense of considering such rights not only in their individual scope but also as collective and social rights, recognizing their multidimensionality. We must also consider the environmental impact of new technologies in order to develop the sustainable, inclusive, and democratic practices of governance and compliance that are needed.

In this sense, the principle of prevention stands out as a privileged instrument for protecting rights from the threats posed by new technologies, promoting the adoption of the good practices that compliance prescribes.

Besides that, a technological design focused on long-term environmental sustainability is a market differential, a competitive advantage, since it involves the requirements of trustworthy AI, that is to say, AI under human control, with transparency, explainability, and accountability. It also strengthens the Democratic State of Law, since it corresponds to an effective, systemic protection of the fundamental rights of all segments of the population, vulnerable groups in particular.

Along this line, we have come to propose broadening the current concept of the "smart city" into that of a "smart polis," adopting the classical sense of the city as a space for the realization of citizenship. This involves recovering the public space for the better realization, promotion, and respect of human and fundamental rights, since citizenship is the implementation of such rights and the possibility of exercising them.

Returning to my research, it is important to say that it deals with the development of a new "framework" model to protect fundamental rights systemically without hindering innovation and international competitiveness. The goal is to think long term, committed to sustainability, while also considering the multicultural perspective, technodiversity, intercultural digital ethics, and the socio-cultural context of countries of the Global South through the aforementioned Epistemologies of the South, since those countries face greater institutional and democratic fragility. In a nutshell, the purpose of this new model is to contribute to the construction of a sustainable and responsible governance of AI algorithms.

The issues related to ethics, compliance, governance, and regulation of artificial intelligence are at the forefront of current and urgent demands from businesses and governments. These issues are polyvalent and require a multidisciplinary, correspondingly multidimensional analysis, demanding coordination among technical, legal, and philosophical resources of the highest quality. Additionally, a multidisciplinary and holistic perspective (as advocated by Jean-Pierre Dupuy) is needed, emphasizing plurality, diversity, and the specific socio-cultural contexts of the countries of the Global South.

This approach is essential to develop governance instruments aimed at protecting fundamental rights in their multiple dimensionality: from the subjective aspect, they are individual, collective, and social; there is also their objective, institutional aspect, related to the shaping of the Democratic State of Law. Such protections should necessarily be part of a design logic for addressing AI governance in a sustainable, inclusive, and environmentally friendly way. This broadens and democratizes the discussion, as is to be expected in the diversity-oriented epistemic approach we seek.

The current debate on algorithmic governance is a major concern of companies and governments facing AI. It involves the study of new metrics, "frameworks," and methodologies for evaluating AI models, focusing on the effective protection of fundamental rights in order to prevent potential violations by AI. This requires a shift from the current paradigm, which centers on technical requirements alone, such as accuracy and efficiency, towards an analysis and application that also takes into account social, cultural, and ethical aspects.

The aim here is to move towards an analysis centered on risk assessment and the protection of rights, not only through a human-centered AI, avoiding an anthropocentric perspective, but going even further, by turning to a "planet-centered" or "life-centered" AI. Furthermore, it aims to establish the epistemological and methodological foundation for an AI governance model with greater flexibility and, therefore, sustainability (Klimburg & Almeida, 2019), avoiding the ossification of the system in the future, in the face of further developments. This would be a flexible, modular-procedural governance system, which we believe is most needed and which we aim to achieve.

Few international instruments are centered on the protection of fundamental rights, and some do not even mention possible violations already in sight; moreover, they are often limited to certain applications of AI, and frequently only to the public sector or to the company in question. This gap highlights the urgent need to develop a consolidated methodology, supported by an adequate epistemological and hermeneutical basis involving a better understanding of fundamental rights, that is sustainable in the long term, in order to protect such rights effectively. The development of standards, of methodologies for the creation of a framework, and of certifications for responsible AI governance practices is therefore urgent. We want to join the efforts in this direction.

This perspective is not limited to the EU. In 2019, the proposed US "Algorithmic Accountability Act" would have required, in certain cases, the development of impact assessments. In 2021, the National Institute of Standards and Technology (NIST) was tasked by Congress with developing an "AI risk management framework" to guide the "reliability, robustness, and trustworthiness of AI systems" used in the federal government. The 2021 report of the National Security Commission on Artificial Intelligence recommended that government agencies using AI systems prepare "ex ante" and "ex post" risk assessments to provide greater public transparency.

In the field of data protection, the "privacy by design" principles developed by Ann Cavoukian ("Privacy by Design: The 7 Foundational Principles," 2009) are widely accepted and applied. While there is already talk of "fundamental rights by design," the usual proposals are limited to mentioning only some fundamental rights (such as privacy) and only some elements of trustworthy AI (transparency, explainability). However, there is a need for an adequate "framework" that focuses on the preventive and effective protection of all fundamental rights that AI may potentially affect, extending the scope of trustworthy AI to take sustainability and inclusion into account.

Moreover, applying such principles will help to address another "gap" in current research, namely the difficulty of translating ethical principles into concrete practices, moving "from principles to actions," thereby preventing the discussion from being limited to ethical principles lacking practical effectiveness and avoiding what has been called "ethical laundering." By applying a "framework" based on the new "fundamental rights by design" principles that we foster, which take into account the multidimensionality of fundamental rights, it becomes possible to achieve systemic and sustainable protection.
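To make this movement "from principles to actions" more tangible, consider a minimal, purely hypothetical sketch in Python (all names and checks below are illustrative assumptions, not the actual instrument under development): an abstract principle is proceduralized into concrete, verifiable checks, and counts as satisfied only when every check has been carried out.

from dataclasses import dataclass, field

# Hypothetical sketch: each abstract principle is "proceduralized"
# into concrete checks that an assessment team can actually verify.

@dataclass
class Check:
    description: str      # concrete verification step
    passed: bool = False  # result recorded during the assessment

@dataclass
class Principle:
    name: str
    checks: list[Check] = field(default_factory=list)

    def satisfied(self) -> bool:
        # A principle without concrete checks is not actionable.
        return bool(self.checks) and all(c.passed for c in self.checks)

# Illustrative example: proceduralizing "non-discrimination".
non_discrimination = Principle(
    name="non-discrimination",
    checks=[
        Check("disparate-impact metrics computed for affected groups"),
        Check("training data audited for representation of vulnerable groups"),
        Check("contestation channel available to affected individuals"),
    ],
)

for check in non_discrimination.checks:
    check.passed = True  # in practice, results come from the assessment itself

print(non_discrimination.satisfied())  # True only if every check passed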

Our proposal aims precisely to apply new "fundamental rights by design" principles to design and compliance instruments, acting proactively through "ex ante" mechanisms and focusing on the development of new foundations for AI applications. The proposal is inspired by the debate between Robert Alexy and Mart Susi in the book "Proportionality and the Internet" and intends to elaborate new elements of "fundamental rights by design" in order to include new variables from a non-European perspective.

Therefore, ours is a proposal mainly for Brazil and for Global South governance, based on the "modular" governance model presented in Virgilio A. F. Almeida and Urs Gasser's paper "A Layered Model for AI Governance." It is a perspective we have characterized as anthropophagic and tropicalist, in allusion to important modernist cultural movements launched in Brazil in the last century by artists such as Oswald de Andrade and Caetano Veloso.

This model is more flexible and better suited to the unprecedented problems of our datafied society. The modular approach to AI governance advanced by those authors is similar to the present "modular-procedural" one: for instance, both proposals have a procedural character and aim to provide adequate solutions to the new problems of the present time.

The theoretical framework follows the directives found in the most recent documents produced by the EU and other countries that adopt a strong level of protection for fundamental and human rights, such as the "AI Act," the "White Paper on AI," "Unboxing AI - 10 steps to protect human rights," the "Report of the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression" (on AI and its impact on freedoms), and "Governing data and artificial intelligence for all: Models for sustainable and just data governance" of the European Parliament. These documents provide essential AI concepts such as "data justice" and "human/fundamental rights impact assessments."

Important models for the elaboration of impact reports with a fundamental and human rights approach (Artificial Intelligence Impact Assessment - AIIA) include the proposals of the Centre for Information Policy Leadership (CIPL) and of the Dutch Platform for the Information Society, as well as the Responsible AI Impact Assessment Tool (RAIIA) developed by the International Technology Law Association.

The proposal we favor emphasizes that the protection of fundamental rights will not rule out innovation, and it aims to overcome the myth-making that surrounds AI. Just as myths about racial democracy, the absence of racism, and a consolidated democracy are commonly spread in Brazil, there are also mythological productions in the field of AI: the idea that hetero-regulation would be unnecessary and would hinder innovation, or that whatever legislation must exist should provide only a weak level of protection of rights, or be simple and generic.

As we are not convinced of the soundness of those conceptions, our proposal argues that a team of heterogeneous experts, with independence, autonomy, and specific knowledge of fundamental rights, ethical principles, and governance/compliance, working in a multidisciplinary way, is necessary to better prepare and evaluate AI impact assessments, observing the requirements of multiculturalism, holism, and inclusion (legitimacy).

Our framework turns out to be more flexible because of the procedure to be observed in each concrete case, and it may be changed in the future depending on technological development, if technology becomes more secure and reliable, with less potential to violate the fundamental rights of vulnerable populations. It does not consider only the AI application in itself and in isolation, but also its context of use and the population involved that may suffer damage to their fundamental rights, analyzing this for each specific application of AI.
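As a purely hypothetical illustration of this context-sensitive logic (the field names and the scoring rule below are assumptions made for the sake of the example, not the actual framework), the same AI application can receive different risk levels depending on its context of use and on the populations it may affect, and the assessment can simply be re-run as the technology or its deployment changes:

from dataclasses import dataclass

# Hypothetical sketch: risk is assessed per application *in context*,
# not for the technology in the abstract, and can be reassessed later.

@dataclass
class AIUseContext:
    application: str                 # e.g. "face recognition"
    context_of_use: str              # e.g. "public security" vs "photo tagging"
    affected_populations: list[str]
    includes_vulnerable_groups: bool

def risk_level(ctx: AIUseContext, high_risk_contexts: set[str]) -> str:
    # Illustrative rule only: context and affected people drive the level.
    if ctx.context_of_use in high_risk_contexts and ctx.includes_vulnerable_groups:
        return "high"
    if ctx.context_of_use in high_risk_contexts or ctx.includes_vulnerable_groups:
        return "medium"
    return "low"

HIGH_RISK = {"public security", "credit scoring", "hiring"}

# The same application yields different levels in different contexts.
surveillance = AIUseContext("face recognition", "public security",
                            ["general public"], includes_vulnerable_groups=True)
tagging = AIUseContext("face recognition", "private photo tagging",
                       ["app users"], includes_vulnerable_groups=False)

print(risk_level(surveillance, HIGH_RISK))  # high
print(risk_level(tagging, HIGH_RISK))       # low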

In conclusion, the inclusion of fundamental rights, balancing, and the application of proportionality in the elaboration of compliance documents such as the LIA (Legitimate Interests Assessment), the DPIA (Data Protection Impact Assessment), and the AIIA (AI Impact Assessment), with a focus on fundamental rights, should be carried out by a team with specific knowledge, independence, multiculturalism, and multidisciplinarity. Such documents can contribute to improving the quality of AI products and services, as well as their legitimacy.