The Council of Europe’s Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law


3/2024


This contribution presents an analysis of the Council of Europe Convention on Artificial Intelligence in the context of European and global AI regulation. The analysis is particularly pertinent in light of recent developments in AI, such as the release of ChatGPT and the finalisation of the EU Regulation on AI. The study examines the potential of the Convention, highlighting how its scope extends beyond the high-risk systems covered by the EU Regulation on AI. The Convention reaffirms the importance of establishing regulatory principles, rights and guarantees, as well as of adopting a risk-based approach. Although the Convention remains overshadowed by the AI Act and EU legislation, it can serve as a significant framework of reference for EU Member States, offering a quasi-constitutional and certainly symbolic value.


The paper presents an analysis of the Council of Europe Convention on Artificial Intelligence in the context of European and global AI regulation. This analysis is particularly pertinent in light of recent developments in AI, such as the release of ChatGPT and the finalisation of the EU Regulation on AI. The study examines the potential of the Convention, noting that its scope extends beyond the high-risk systems covered by the EU AI Act. The Convention reaffirms the importance of establishing regulatory principles, rights and guarantees, as well as adopting a risk-based approach. While the Convention remains overshadowed by the EU AI Act and EU legislation, it can also be a significant framework for EU Member States, offering a quasi-constitutional and certainly symbolic value.
Summary: 1. The Council of Europe joins the regulation of artificial intelligence with the “AI Convention”.- 2. The normative, interpretative and symbolic value of an AI Convention.- 3. The general provisions: purpose, the controversial application to the private sector and the exclusions for research, defence and national security.- 4. The important regulation of “principles” and the focus on “affected subjects” and groups and collectives.- 5. The rights to documentation, records or notifications and the guarantee of an independent authority.- 6. Relevant risk and impact assessment and mitigation obligations.- 7. Some common clauses in rights conventions and the “Monitoring and Cooperation Mechanism”.- 8. The interrelation of the Convention with EU law and in particular with the EU AI Act.- 9. To conclude: the Convention incorporates the “lyric” into the “prose” of the EU Regulation.- 10. Bibliographical References.

1. The Council of Europe joins the regulation of artificial intelligence with the “AI Convention”[1]

It is worthwhile to consider the Council of Europe’s Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (hereafter AI Convention) in light of the growing regulation of artificial intelligence (AI). AI regulation has been a topic of discussion within the European Union (EU) since 2016, particularly following the publication of the AI White Paper in 2020[2]. This was followed by the proposed AI Regulation (RIA) in 2021[3]. The culmination of this regulatory process was reached on 9 December 2023, with an agreement and subsequent finalisation of the text, its final approval and publication in July 2024[4]. The EU regulation (EU AI Act) sparked AI regulation in other parts of the world. Thus, on 16 June 2022, Canada introduced the Digital Charter Implementation Act 2022[5], which includes the Artificial Intelligence and Data Act (AIDA)[6]. In December 2022, Brazil initiated the processing of Bill No. 2338 in the Senate, aimed at regulating AI[7]. There is also, among other developments, the prospect of weak regulation by the UK following its exit from the EU[8]. These initiatives seem to have been awaiting the adoption of the EU AI Act.

Especially with the emergence of ChatGPT at the end of 2022, the process has accelerated. In China, for example, the Interim Measures for the Management of Generative Artificial Intelligence Services, adopted on 13 July 2023 and in force since 24 August, are noteworthy[9]. In Canada, the Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI Systems was announced and opened for voluntary adoption in September 2023[10]. In the context of the G7, there is the so-called “Hiroshima Process”, substantiated in the agreement on the International Code of Conduct for Organisations Developing Advanced AI Systems of 30 October 2023, a voluntary instrument for AI developers[11]. On the same day, the United States adopted the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence of 30 October 2023[12]. More partially, at state level, on 3 January 2024 California introduced its Artificial Intelligence Accountability Act[13], and Illinois has its Artificial Intelligence Video Interview Act (820 ILCS 42/)[14]. In Latin America there are multiple legislative proposals[15].

The adoption of the AI Convention in 2024 should be viewed in this context, particularly in light of the long shadow cast by the EU AI Act, although there had already been some background since 2019.

Thus, the Council of Europe, through its Committee of Ministers, established the Ad hoc Committee on Artificial Intelligence (CAHAI)[16] on 11 September 2019. Subsequently, the Committee on Artificial Intelligence (CAI) was given a mandate from January 2022 to December 2024[17], with the Terms of Reference of the CAI[18]. The CAI was tasked with creating a «cross-cutting binding legal instrument»[19] that promotes innovation and establishes «robust and clear» principles for the design, development and application of AI systems, recognising that it «cannot currently regulate all aspects of the development and use of AI systems».

This mandate was clearly intended to transcend borders, seeking to create an «instrument attractive not only to states in Europe but to as many states as possible in all regions of the world», involving “Observers”[20] such as Israel, Canada[21], the United States, Japan, the Global Partnership on Artificial Intelligence (GPAI), Internet companies, and civil society organisations. They participate in the drafting and may accede to the AI Convention once it enters into force, as stipulated in its article 31.

The CAI worked on the basis established by CAHAI between 2019 and 2021[22]. Recital 9 underlines the urgency of «a globally applicable legal framework». This basis defined essential contents[23] which have been articulated in successive documents: the purpose and scope of the AI Convention; definitions of AI system, lifecycle, provider, user and «AI subject»; fundamental principles, including procedural safeguards and rights of AI subjects; additional measures for the public sector, as well as for AI systems posing «unacceptable» and «significant» levels of risk; a mechanism for monitoring and cooperation between the parties; and final provisions.

A revised “Zero Draft” was published on 6 January 2023 (4th plenary meeting)[24]. On 7 July 2023 (6th meeting), the Consolidated Working Draft[25], the basis for future negotiations, was released. On 18 December 2023 (8th meeting), the Draft Framework Convention was released[26]. The final text of the Convention was essentially agreed in the framework of the 20 March 2024 session of the Committee of Ministers, and then at the 29 April meeting. It appears that there was a certain risk of non-agreement by the United States, which lobbied hard for the Convention not to apply to private parties[27]. The Convention was finally adopted on 17 May 2024 at the 133rd Session of the Committee of Ministers in Strasbourg. The Convention is listed as No 225 in the series of treaties[28], there is an Explanatory Report[29], and it opens for signature on 5 September 2024 at the Conference of Ministers of Justice in Vilnius. It is a long convention comprising seventeen recitals and thirty-six articles divided into eight chapters[30].

2. The normative, interpretative and symbolic value of an AI Convention

It is a Convention of minimum standards, with few obligations and rights. The CAHAI stated the objective of creating a «common legal framework containing certain minimum standards for the development, design and implementation of AI in relation to human rights, democracy and the rule of law»[31]. It speaks of «common general principles and standards» (Recital 9) and of the «framework character of the Convention, which may be supplemented by other instruments» (Recital 11).

As to the degree of concreteness of the obligations in the text of the AI Convention, Article 4 generally states that «Each Party shall adopt or maintain measures to ensure that activities within the life cycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and its domestic law». Article 5 generically presents the obligation to take measures with respect to the «Integrity of democratic processes and respect for the rule of law», as does Article 7 the obligation «to respect human dignity and individual autonomy». On as many as 31 occasions it is stated that the parties shall take or adopt «necessary», «appropriate», «political» or «legislative or other measures». These are common expressions in the international context. As will be discussed, all measures «shall be graduated and differentiated» according to the impacts and risks generated by AI systems under a risk-based approach (Art. 1.2).

Although in general the AI Convention is not characterised by clear obligations and specific rights, there are several reasons to take the Convention into account normatively. As will be shown, the general “principles” regulated in Chapter III have a high potential in the hands of legal operators, all the more so since the EU AI Act does not ultimately regulate principles. In addition, Chapter IV regulates some rights and guarantees. Article 16 (Chapter V) on risk and impact assessment and mitigation imposes the most relevant obligations of the entire AI Convention. Furthermore, the direct effect of treaties, their quasi-constitutional value, their potential interpretative effect and their integration into State law should not be lost sight of. Thus, it should be remembered that, after their ratification and publication, international treaties are incorporated into domestic law (Art. 96 of the Spanish Constitution), allowing them to be directly invoked before authorities and courts[32]. It is for these reasons that, although it is not easy to derive rights and obligations from the AI Convention without intermediate legislation by the States, the potential of its direct application and its pre-eminence over domestic rules must be recognised (arts. 29 and 30 of Law 25/2014 in Spain). Furthermore, and especially, it is important to highlight the quasi-constitutional value of treaties related to fundamental rights in accordance with Article 10.2 of the Spanish Constitution, as they become mandatory elements for interpreting the Constitution and its rights. Thus, the Treaty «becomes in a certain way the constitutionally declared content of rights and freedoms» (STC 36/1991, FJ 4) and determines «the exact outlines of their content» (STC 28/1991, FJ 5). Once ratified, therefore, a convention has considerable normative and interpretative value.

Even beyond their normative legal value, the symbolic and meta-legal value of the Council of Europe’s rights treaties should not be forgotten, as they represent a commitment to shared values and set ideals and political guidelines. Their symbolic value and their ability to guide the interpretation of fundamental rights and to promote policies and legislation are crucial. Unlike regulations with more specific provisions, such as the EU AI Act or the GDPR, this AI Convention shares these characteristics and this potential.

3. The general provisions: purpose, the controversial application to the private sector and the exclusions for research, defence and national security

Chapter I, with its General Provisions, highlights the general objective that «activities within the lifecycle of artificial intelligence systems should be fully consistent with human rights, democracy and the rule of law» and mentions the graduation of obligations according to the severity of «adverse impacts» (Art. 1.1 and 2). Article 16 elaborates on this risk approach.

The scope of application of the AI Convention is associated with the concept of “artificial intelligence system”: a «machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments». «Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment» (art. 2). There is international convergence on the definition of AI, reflected in the evolution of the EU Regulation and the OECD adjustments of November 2023[33], so no divergences are foreseen. The AI Convention applies to AI «systems that have the potential to interfere with human rights, democracy and the rule of law» (Art. 3.1). The AI Convention may imply obligations for systems that are not “high risk”, since it does not deal with this notion, which, as is well known, is essential to trigger the obligations of the EU AI Act.

The application of the AI Convention to the private sector has been a particularly contentious issue that conditioned its final approval due to strong lobbying by the US[34]. The version of 7 July 2023 did not specify anything in this regard. The 18 December draft set out options. Option A did not differentiate between sectors, implying possible obligations in both. Option B referred to «activities […] carried out by public authorities or entities acting on their behalf», although the obligation of the parties to enforce the Convention in respect of private entities was expressly stated. Option C did not distinguish, although it reinforced for the public sector the need for appropriate measures and provided for «progressively» adopting measures for «private parties». The EDPS took a positive view and assumed that the AI Convention covered «both public and private providers and users […], irrespective of whether the providers and users of AI systems are public or private entities» (Nos 23 and 24). It also welcomed the fact that «additional measures for the public sector» were to be established, which would also cover private entities when they provide public services (No. 25)[35].

A compromise agreement has finally been reached, which offers the signatory parties – if I may say so – an “à la carte” choice of commitment for the private sector. The final version of Article 3.1 starts from the application «by public authorities, or private actors acting on their behalf» (a). For what is not covered by this subparagraph (a), subparagraph (b) states that it will «address the risks […] of artificial intelligence systems by private actors […] in a manner conforming with the object and purpose of this Convention».

And in particular, it refers to what each State declares at signature or ratification, choosing to apply the Convention «to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this subparagraph». As a safeguard clause, it is stated that this possibility of limiting the scope of the AI Convention for the private sector «may not derogate from or limit the application of its international obligations undertaken to protect human rights, democracy and the rule of law».

The exclusions from application for research and in respect of national security are not total. It should be noted that the exclusion of research is an issue that changed during the passage of the EU AI Act. The Commission’s initial proposal did not mention – and therefore did not exclude – research. The exclusion of AI systems for «the sole purpose of scientific research and development» was introduced in the EU Council’s version of December 2022 (Art. 2.5a). In June 2023, the Parliament delimited this exception[36], even proposing the possibility of delegated acts to specify the scope of the exclusion. This has not been reflected in the final text. The final published text limits the scope of the research exception. The EU AI Act «shall not apply to AI systems or models, including their output results, developed and put into service specifically for the sole purpose of scientific research and development» (Art. 2.6 EU AI Act) and «shall not apply to any research, testing or development activity relating to AI systems or AI models prior to their placing on the market or putting into service» (Art. 2.8 EU AI Act). However, the exemption is conditional on compliance with «ethical and professional standards» (Recital 25) and «conformity with applicable Union law» (Art. 2.8 EU AI Act). Furthermore, the research exception does not in any case cover tests under real-world conditions (Art. 2.8 EU AI Act). This exclusion is logical, since the objective of the regulation is to regulate the placing on the market of AI systems, whereas in research the objective – in principle – is not to place the product on the market. It should be remembered that data protection legislation likewise allows modulations and relaxations for research, but never a full exclusion[37].

For the AI Convention in relation to research, there were several options in the December 2023 Draft: exclusion «unless the systems are tested or otherwise used» (Option A, 3.2); inclusion, «where the design, development, use and decommissioning of AI systems involve research, such research shall be included within the scope of this Convention» (Option B, 3.3); while Option C (3.2) delegated the issue to domestic legislation.

In my opinion, the normative solution should have delimited various criteria, such as the necessary public utility of the research, the absence of any placing on the market of the AI system, or the absence of any use of the AI system that may impact general or uncontrolled environments during the research. Finally, Article 3.3 states as a general rule that the AI Convention «shall not apply to research and development activities regarding artificial intelligence systems not yet made available for use». However, it is obvious that the exclusion of research cannot derogate from the validity of recognised fundamental rights, nor from other applicable legislation. Therefore, Article 3.3 states that the Convention applies «if testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law». Furthermore, the research exception is «Without prejudice to Articles 13 and 25(2)», i.e. without prejudice to innovative measures such as sandboxing «under the supervision of their competent authorities» (Art. 13) or exchanges of information between the parties (Art. 25(2)).

With regard to national security and defence, this is an area in which very strong discretion is granted to states[38]. However, such limitations or discretion do not in themselves imply the exclusion of the application of the law or of rights; where such an exclusion is intended, it must be expressly established. This has been done in the EU AI Act. Thus, the EU AI Act «shall not affect the national security competences of the Member States». Furthermore, the EU AI Act does not apply to AI systems developed or used «exclusively for military, defence or national security purposes» (Art. 2.3). In the final version, this exclusion was broadened, as the EU AI Act will not apply even if these military systems are placed on the market by any type of entity. It should be recalled that in these areas of national security, the application of European data protection law, which is often applied to the field of AI, is also excluded (Article 2(2)(a) GDPR)[39].

In the case of the AI Convention there was no total exclusion for national defence and security. The draft Convention of 7 July 2023 referred to «restrictions, derogations or exceptions» (Chapter III)[40]. The 18 December version provided for a general exclusion for defence, but not a total exclusion for national security. Thereafter, there were several nuanced alternatives granting more or less power to the Parties with respect to this exclusion[41]. For its part, the EU Council in its Decision on the Convention recalls the exclusive responsibility of each State in this matter[42]. Before the adoption of the final text, I considered it positive that, at least potentially, the validity of the AI Convention in matters of national security was recognised, albeit with a minimal formula. Finally, Article 3.2 opts for a strong exclusion of national security in the hands of each state, but an exclusion that must be compatible with applicable international law[43].

4. The important regulation of “principles” and the focus on “affected subjects” and groups and collectives

The regulation of general principles applicable to all AI systems in the AI Convention is of particular relevance, even more so than may initially appear. In this regard, it is worth recalling that for years, among dozens of declarations and documents, some essential ethical principles of AI have become visible and distilled[44]. A Harvard study[45] analysed more than thirty of the main international and corporate declarations on AI ethics and synthesised them into privacy, accountability, security, transparency and explainability, equity and non-discrimination, human control, professional responsibility, human values and sustainability. The Convention is positive in that it goes beyond declarations in the field of soft law and regulates these principles, moving, if I may say so, from the muses of ethics to the theatre of law.

In the AI Convention the principles are integrated under the formula that «Each Party shall» adopt them. Thus, Chapter III “establishes” «common general principles which each Party shall apply […] to the extent remedies are required by its international obligations and consistent with its domestic legal system» (as an “explanatory note”). Eight articles (Articles 6 to 13) express and affirm such «principles»: human dignity and individual autonomy (Article 7), transparency and oversight (Article 8), accountability and responsibility (Article 9), equality and non-discrimination (Article 10), and privacy and protection of personal data (Article 11). In the December 2023 draft, one article regulated the preservation of health [and the environment] (art. 11)[46], but it is no longer in the final version. The principles of «reliability» (Art. 12) and safe innovation (Art. 13) are included. There are few concrete rules, such as «the identification of content generated by artificial intelligence systems» (Art. 8). The December 2023 draft mentioned «applicable national and international standards and frameworks on personal data protection and [data governance]» (Art. 10)[47]. In what is now Article 11.2, it is required that «effective guarantees and safeguards have been put in place for individuals, in accordance with applicable domestic and international legal obligations». Article 12 on «reliability» previously affirmed «safety, security, accuracy, performance, quality […] integrity, data security, governance, cybersecurity and robustness requirements» and now requires only «adequate quality and security». Finally, the Convention stresses that «each Party is called upon to enable the establishment of controlled environments for developing, experimenting and testing artificial intelligence systems under the supervision of its competent authorities» (Art. 13).

It is normal for a Convention to be a flexible and open regulation, characterised by broad mandates or cross-references to existing law. Nevertheless, its normative potential is important, especially as the EU AI Act has not ultimately regulated legal “principles” of general application to AI. As is generally known, principles play an essential role in shaping and structuring the legal order: they inform the legal order and are key tools for the interpretation and application of rules; they are a source of inspiration for resolving conflicts and for creating and supporting new interpretations. Precisely for this reason, normative principles have particular potential in digital and disruptive sectors where there is considerable uncertainty, as in the case of AI.

Moreover, we should not forget that in a field as related as data protection, the “principles” (Art. 5 GDPR) have played and continue to play these general roles, being the fundamental pillars for more than thirty years. In fact, the data protection “principles” have constituted concrete rules applicable to processing operations. Indeed, their mere non-compliance directly implies the commission of infringements.

In the case of the EU, no principles were finally regulated in the articles and their proclamation was limited to a recital. There were no principles either in the Commission’s proposal of 2021 or in the Council’s text of December 2022. However, the EU Parliament in June 2023 included in a new Article 4a the «General principles applicable to all AI systems» (Amendment 213) in some detail: «human intervention and oversight» (a), «technical robustness and security» (b), «privacy and data governance» (c), «transparency» (d), «diversity, non-discrimination and fairness» (e) and «social and environmental well-being» (f). These principles applied to all AI systems – high-risk or not – and also to foundation models. It should be noted that this version of the EU Parliament regulated these principles in the regulatory text, but «without creating new obligations» (Art. 4a.2), although they were to inspire standardisation processes and technical guidelines (Art. 4a.2). Finally, the text of the EU AI Act does not regulate principles. The seven “non-binding” principles are mentioned in Recital 27, but they are expressly excluded from «the legally binding requirements of this Regulation», although they will be projected «where possible, in the design and use of AI models» and «should serve as a basis for the development of codes of conduct». The significance of the principles of the AI Convention is thus greater than that of those of the EU AI Act.

As to the recognition of “affected subjects”, the definition of “artificial intelligence subject”[48] appeared in the “Zero Draft” of 6 January 2023 (Article 2(e)) and was “welcomed” by the EDPS[49]. This approach contrasted sharply with the EU AI Act, which has been criticised for its total disregard of those affected by an AI system. It went so far as to be stated that «While the AIA focuses on the digital single market and does not create new rights for individuals, the convention could fill these gaps […] while the European Commission emphasises economics and market integration, the Council of Europe focuses on human rights, democracy and the rule of law»[50]. As a reflex effect, following this Zero Draft of January 2023, the definition of «affected person» was included in June among the definitions of the EU AI Act, along with various rights in the Parliament’s amendments (Amendment 174, Art. 3.1, 8a). «Affected persons» are ultimately not defined in the EU AI Act, but they do form part of the «Scope of application» of Article 2.1(g) EU AI Act[51]. In the versions of 7 July and 18 December 2023, as well as in the final text of the Convention, the concept of «AI subject» has disappeared, although the term is used in Article 14.2 (a) and (b) with regard to the right to provide information to those affected.

A more relevant issue is the recognition and protection of collectives and groups affected by AI. As I have had occasion to insist for years, it is necessary to go beyond an approach limited to the direct impact of the automated system or AI on the individual[52]. It is necessary to take into account the structural and massive impact of the use of AI systems, which in many cases are used to support general decision-making in both the public and private sectors. With regard to groups, it is essential to encourage and, where necessary, ensure specific transparency and the participation of civil society and affected groups in the various uses of AI systems. This is what I proposed for the Charter on Digital Rights in a section on «Guarantees of social impact»[53]. Mantelero[54] is particularly inspiring on this issue. The UNESCO Recommendation on AI also insists on the need for a collective approach and, above all, on the inclusion of social participation mechanisms in relation to the use of AI systems[55].

Canada has highlighted a gap in the AI Convention, underlining the importance of «moving from individual privacy to collective privacy»[56]. The EDPS, in his conclusions, urged the incorporation into the AI Convention of «the specification that social or group risks posed by AI systems should also be assessed and mitigated» (No 6)[57].

The truth is that this sensitivity to affected groups and collectives can be seen in the Convention, although the guarantees are not delineated with great precision. In any case, it represents a significant advance on the current situation, paving the way for and motivating States to develop and refine these collective protection mechanisms. Thus, Article 16 regulates the need to identify and assess risks and to «consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted» (Art. 16.2(c)).

Civil society participation is mentioned in Article 5 on the «Integrity of democratic processes and respect for the rule of law». Paragraph 2 requires the protection, «in the context of activities within the lifecycle of artificial intelligence systems», of «individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions»[58]. Even more noteworthy, Article 19 on «public consultation» provides that «Each Party shall seek to ensure that important questions […] are, as appropriate, duly considered through public discussion and multistakeholder consultation in the light of social, economic, legal, ethical, environmental and other relevant implications». There are even some more indirect references of interest[59].

5. The rights to documentation, records or notifications and the guarantee of an independent authority

Although it is claimed that one of the “key aspects” of the AI Convention is its focus on rights[60], in general terms it does not establish new subjective rights. There are more than forty references or mentions of pre-existing «human rights» or «rights» already recognised. There are also particular references to «equality and non-discrimination» (Arts. 10 and 17), to «privacy and data protection» (Art. 11) and to the «Rights of persons with disabilities and of children» (Art. 18), but no substantive content or new rights are regulated.

As an exception, certain obligations regarding documentation, records and other measures to ensure effective «remedies» in Article 14, as well as the «Procedural safeguards» of monitoring and notification in Article 15, may be considered as rights in the AI Convention.

In 2014, in the US context, Crawford and Schultz coined «data due process» to redefine the guarantees of due process in the context of algorithmic decision-making. This new right included the guarantee of minimum information to those affected about what the algorithm predicted, on the basis of what data, and the methodology used[61]. Following this line, Article 14 on «Remedies» obliges states to «adopt or maintain measures» that guarantee effective remedies. To this end, the generation of documentation, records and evidence is guaranteed, as well as their accessibility to those affected. Thus, the need is affirmed «to ensure that relevant information regarding artificial intelligence systems which have the potential to significantly affect human rights and their relevant usage is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons» (Art. 14.2(a)). Furthermore, this information must be «sufficient to enable the persons concerned to challenge the decision(s) taken» (Art. 14.2(b)). The fact that these guarantees, which will have to be specified by the parties to the Convention, are expressly regulated is undoubtedly positive.

However, I believe that these obligations cannot be generalised to all AI systems. To that end, a restrictive interpretation of the reference in Article 14 to systems that may be «significantly affecting» human rights could be envisaged.

Among the «remedies» (Chapter IV), Article 15 regulates «procedural safeguards». In the December 2023 Draft, the first paragraph was based on «the importance of human review/[oversight]» and, as a consequence, «[where an artificial intelligence system reports or takes decisions [or acts] that substantially affect human rights], effective procedural safeguards, guarantees and rights […] shall be made available to affected persons»[62]. The final version omits the reference to human oversight and merely states that in such cases «effective procedural guarantees, safeguards and rights, in accordance with the applicable international and domestic law» must be available.

Although it is not expressed as concretely as it should be, and is weakened by the omission of human oversight, Article 15(1) has clear connections with the guarantees that European law recognises with respect to automated decisions: Article 22 GDPR[63], Article 11 of Directive (EU) 2016/680 and the new Article 9(1) of Council of Europe Convention 108, which also recognised this «right» in 2018. Now, the guarantees of Article 15 of the AI Convention would apply not only to a decision «based solely on automated processing»: they explicitly extend also to situations where the AI «significantly impacts upon the enjoyment of human rights», even if the decision is not directly automated but is substantially based on the system's proposal. This is in line with the approach the CJEU adopted in December 2023[64]. It should be noted that the wording of the EU AI Act changed over time as to what counts as a high-risk system. Initially, a system was high risk if it was used for high-risk purposes. Subsequently, it was high risk «unless the information output from the system is merely incidental to the relevant action or decision to be taken» (Art. 6.3 EU AI Act, Council version, December 2022). In the final version of the EU AI Act, the issue becomes more nuanced. Even if the system is used for high-risk purposes, AI systems for «a limited procedural task» (a), or where the AI system «aims to improve the outcome of a previously performed human activity» (b), are no longer high risk. Nor is a system high risk if it detects patterns or deviations from patterns «and is not intended to replace or influence previously completed human assessment, without proper human review» (c). Finally, the high-risk safeguards of the EU AI Act do not apply if the system is «intended to perform a preparatory task for an assessment relevant to the use cases listed in Annex III» (Art. 6.3(d) EU AI Act).
The interpretation given to the AI Convention in this regard will undoubtedly be important.

Article 15(2) adds the obligation, and potentially the right, that «persons interacting with artificial intelligence systems are notified that they are interacting with such systems rather than with a human being». In the EU, since 2018 the HLEG has included within transparency a right to «communication», essentially a «right to know that [persons] are interacting with an AI system» (no. 78)[65]. Article 50 of the EU AI Act lists this notification among the various «transparency obligations for certain AI systems». Article 15(2) of the AI Convention seems to follow international soft law, where communication is bifurcated into, on the one hand, a right or obligation of «notification when interacting with an AI system» or «principle of interaction»[66] and, on the other, the safeguards against automated decisions in Article 15(1).

The recognition of rights and guarantees in Article 15 is undoubtedly a positive step. However, it would be advisable to further specify their requirements, contents and minimum powers.

The important guarantee of an independent authority, before which effective remedies can be brought, should also be highlighted. In this regard, Article 13 of the European Convention on Human Rights (ECHR) recognises the right to an effective domestic remedy before a national authority to guarantee the rights of the ECHR itself (and not others)[67]. This remedy must enable an independent, though not necessarily judicial, authority to examine the merits of the petition and, if appropriate, to grant adequate reparation[68]. Along these lines, Article 14(3) of the AI Convention on «Remedies» provides for «an effective possibility for the persons concerned to lodge a complaint with the competent authorities». The final version no longer expressly states that this will be «before the supervisory mechanism referred to in Article 26, in accordance with its national law»[69]. In any case, Article 26(1) on «Effective oversight mechanisms» is equally relevant: «Each Party shall establish or designate one or more effective mechanisms to oversee compliance with the obligations in this Convention». Each Party shall also «ensure that such mechanisms exercise their duties independently and impartially and that they have the necessary powers, expertise and resources to effectively fulfil their tasks of overseeing compliance with the obligations in this Convention, as given effect by the Parties» (Art. 26(2)). If there is more than one mechanism, they shall cooperate with each other (Art. 26(3)).

The EU AI Act initially did not require the independence and impartiality of the «market surveillance authority». However, Parliament's amendment 123 to Recital 77, and amendment 558, which introduced a new Article 59(4) (Article 70 in the final version), state that such authorities «shall exercise their powers independently, impartially and without bias in order to preserve the objectivity of their activities and functions» («full independence» in Recital 159). It is striking that the first authority expressly created for these functions in the EU, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), regulated by Spanish Law 28/2022 and in particular by Article 8 of Royal Decree 729/2023 of 22 December, does not even remotely meet the independence criteria set out in both the EU AI Act and the AI Convention[70].

This is a courageous regulation, sufficiently specific given the nature of the Convention. The EDPS underlines that the Convention must expressly «grant the competent supervisory authorities adequate powers of investigation and enforcement» (nos. 47 and 49 and conclusion 10), takes into account that a convergence of sectoral authorities with competences over the use of AI is very likely to occur, so that cooperation will be necessary (no. 46)[71], and notes the need for «cross-border cooperation between competent authorities to be designated by the parties to the agreement» (conclusion 11).

6. Relevant risk and impact assessment and mitigation obligations

The regulation of risk and impact assessment is possibly the most relevant and noteworthy aspect of the AI Convention. Since 2018, the EU has opted for a risk-based approach under the European AI banner of Ethics & Rule of Law by design (X-by-design)[72], as did the EU High-Level Expert Group with its comprehensive assessment list[73]. The 2020 AI White Paper spelled out the risk model, which was concretised a year later in the EU AI Act proposal. This approach implies that the greater the impact or risk of the AI system, the more obligations and safeguards are imposed. Under the compliance philosophy, possible risks and harms are identified, and appropriate preventive and proactive mechanisms must be put in place to avoid their occurrence[74]. Particularly noteworthy are Mantelero's contributions[75], first in the framework of data protection and then for AI systems.

The risk-based approach was taken up between 2019 and 2021 by the Ad hoc Committee on Artificial Intelligence (CAHAI), among others through the participation of Mantelero[76]. In December 2021 the CAHAI defined the «Core Elements» of the future Convention and made clear that it was to «focus on the prevention and/or mitigation of risks […] the legal requirements for the design, development and use of AI systems must be proportionate to the nature of the risk they pose to human rights, democracy and the rule of law» (Part I, no. 5). It also devoted several passages to the issue (V, nos. 18-21 and XII, nos. 45-53). The possibility of including a «Human Rights, Democracy and Rule of Law Impact Assessment» (HUDERIA) was introduced, although it was stated that it «need not be part of a possible legally binding instrument» (no. 19)[77], and the document considered «complementing it with a non-legally binding cross-cutting instrument to assess the impact of AI systems» (no. 45) alongside «national or international legislation, with other compliance mechanisms, such as certification and quality labelling, audits, regulatory sandboxes and periodic monitoring» (no. 47). Under this proposal, the impact assessment would only be carried out «if there are clear and objective indications of relevant risks», on the basis of an initial review of all AI systems (no. 48), and should be updated «systematically and regularly» (no. 49). The document detailed the main steps to be carried out: risk identification, impact assessment, governance assessment, and mitigation and evaluation (no. 50)[78], and in particular the minimum elements of the impact assessment (no. 51)[79]. It also stressed the importance of involving civil society and ensuring participatory formulas (no. 53).
On this basis, the “zero draft” of 6 January 2023 reaffirmed the «risk-based approach» (no. 5), which was welcomed by scholars[80] and by the EU in its Decision on the Convention, as it serves to «minimise risks […] while avoiding unnecessary and disproportionate burdens or restrictions» (Annex, no. 4). For its part, the EDPS states that «the Commission should aim to include in the Convention a methodology for assessing the risks posed by AI systems in key areas» (no. 19)[81].

It is therefore appropriate to note the regulation of the «assessment and mitigation of risks and adverse impacts» in the Convention. The general obligation to take «legislative, administrative or other measures», which «shall be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law» (Art. 1.2), is set out at the very outset. The matter is then regulated in what is arguably the most relevant and noteworthy part of the AI Convention: Article 16 of Chapter V imposes a general obligation to adopt measures «for the identification, assessment, prevention and mitigation of risks» (Art. 16.1). These measures «shall be graduated and differentiated» (Art. 16.2) or, as the Draft put it, «shall take into account the risk-based approach referred to in Article 1». Specific obligations include taking «due account of the context and intended use of artificial intelligence systems» (a) and of the «severity and probability of potential impacts» (b)[82], and to «consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted» (c). Measures must also «apply iteratively throughout the activities within the lifecycle of the artificial intelligence system» (d). This is less specific than the obligation stated in the Draft: «recording, monitoring and due consideration of impacts». Oversight is now stated generically, as follows: measures must «include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law» (e). Documentation duties are also affirmed: «documentation of risks, actual and potential impacts, and the risk management approach» (f). However, the final version omits the transparency duties, as it no longer provides, as the Draft did, for «where appropriate, publication of information on the efforts» taken.
In the December 2023 Draft, as a logical consequence of the identification and assessment of risks, it was stated that parties should «require, where appropriate, testing of artificial intelligence systems before making them available for first use and when they are significantly modified» (g). In the final version, Article 16(3) states that each Party «shall adopt or maintain measures that seek to ensure that adverse impacts of artificial intelligence systems to human rights, democracy, and the rule of law are adequately addressed» and that these measures «should be documented and inform the relevant risk». Among the measures to be taken, it also lists «testing of artificial intelligence systems before making them available for first use and when they are significantly modified» (Art. 16.2(g)).

Also, where risks are considered «incompatible», «Each Party shall assess the need for a moratorium or ban or other appropriate measures» (Art. 16.4). It is striking that, unlike for example the EU's essential Article 5 EU AI Act, which sets out an extensive list of prohibitions, the AI Convention contains no prohibition of specific AI systems. This is despite the fact that the CAHAI in December 2021 initially suggested banning certain AI applications in whole or in part.

7. Some common clauses in rights conventions and the “Monitoring and Cooperation Mechanism”

The AI Convention includes the usual minimum standard clauses of a rights convention[83]: it cannot be interpreted as «limiting, derogating from or otherwise affecting» rights already guaranteed in a Party's domestic law or in other treaties (Art. 21), nor as preventing «a wider measure of protection» (Art. 22). Of particular importance for the European Union and its Member States, it provides for the preferential application of agreements or treaties that exist between the parties on the matter (Art. 27.1)[84].

There is no hard compliance mechanism for the AI Convention, but rather a “Monitoring and Co-operation Mechanism” (Chapter VII), with reference to a «Conference of the Parties» (Art. 23) for regular consultations on problem identification, supplements and amendments, interpretative recommendations, exchange of information, dispute settlement, co-operation and even public hearings. The Convention also provides for international cooperation (Art. 25) and invites accession, with accession by non-member states of the Council of Europe expressly envisaged (Art. 31). «Amendments» (Art. 28) and «Dispute settlement» (Art. 29, not applicable between EU states as regards matters governed by EU law) are also regulated.

As regards «Signature and entry into force», it is significant that ratification by only five states is required, of which at least three must be members of the Council of Europe, which evidences the intention to bring the AI Convention into force as soon as possible (as with, for example, the Cybercrime Convention, ETS No. 185).

8. The interrelation of the Convention with EU law and in particular with the EU AI Act

In principle, everything regulated by the AI Convention is already protected, and further safeguarded, by EU law, in particular by the EU AI Act and the GDPR. It is nonetheless crucial to consider the overlap and interaction between the AI Convention and EU law: EU actions and the provisions of the Convention must be read together to ensure consistency and avoid friction between them. In any case, the quasi-constitutional value of the AI Convention and its potential to generate interpretations or stimulate regulation should not be underestimated. There are also specific aspects of the AI Convention that may go somewhat beyond EU regulation.

As regards measures to ensure consistency between EU law and the Convention, in the negotiations the EU sought to ensure «1. that the Convention is compatible with EU single market law and other areas of EU law» and «3. that the Convention is compatible with the proposed Artificial Intelligence Act (EU AI Act)»[85]. The European Commission drew attention to the fact that «there is a very significant overlap between the preliminary draft of the Convention and the EU AI Act»[86]. For the EDPS this was an «important opportunity to complement the proposed AI Act by strengthening the protection of fundamental rights»[87]. For this reason, the EDPS advocated «the inclusion in the Convention of provisions aimed at strengthening the rights of the persons concerned»[88].

The EU proposed that «negotiations should be conducted on behalf of the Union» to ensure «coherence and uniformity»[89]. The Council emphasised to the Member States that «in the negotiations of the convention they should […] support, with full mutual respect, the Union's negotiator»[90] and «cooperate closely throughout the negotiation process, in order to ensure the unity of the external representation of the Union» (Art. 2 of Council Decision (EU) 2022/2349 of 21 November 2022). In any case, to avoid friction it was considered «necessary» to include «a disconnection clause» allowing the EU Member States that become Parties to the AI Convention to regulate the relations between them under EU law: «As between the EU Member States the proposed AI Act should prevail»[91]. Thus, the AI Convention specifically provides that «Parties which are members of the European Union shall, in their mutual relations, apply European Union rules governing the matters within the scope of this Convention without prejudice to the object and purpose of this Convention and without prejudice to its full application with other Parties» (Art. 27.2, «Effects of the Convention», Chapter VIII, Final Clauses). However, the aforementioned Article 22 requires the application of the most favourable law and the «wider measure of protection», and I believe that in some areas the AI Convention could raise the standard and guarantees of Union law. In those cases, the AI Convention would have to be applied, except as regards «mutual relations» (Art. 27).

Finally, the possible accession of the European Union itself to the AI Convention is regulated[92]. The Draft provided that, «within the spheres of its competence», «the European Union shall participate in the Conference of the Parties with the votes of the Member States» (Art. 26.4)[93]. However, the final version no longer includes this provision.

As for the overlap and interaction of the Convention with EU law, measures have been taken so that any conflicts between these rules will not be significant. As a starting point, the EU AI Act and the GDPR already regulate, with greater guarantees, what the AI Convention regulates. However, there are some specific elements of the Convention that may go somewhat beyond the EU AI Act and EU law:

– In terms of scope of application, the AI Convention contains obligations that are not limited to “high-risk” AI systems, as is the case for almost all EU AI Act obligations. In particular, the risk and impact assessment and mitigation safeguards (Article 16) could be required for non-high-risk systems.

– Similarly, while national security is excluded from the scope of the EU AI Act and the GDPR, the AI Convention does not radically exclude its application in that field, although the difference is almost negligible.

– The regulation of the “principles” in the AI Convention has remarkable interpretative potential, whereas the EU AI Act mentions them only in a recital and excludes their legal effects.

– In relation to rights, the guarantees in Article 14 of access to documentation, records and evidence for the purposes of a remedy could become relevant.

– The Convention also extends the scope of the Article 15 safeguards to cases where an AI system «significantly impacts» a human decision, whereas the EU AI Act has introduced many nuances on this issue. In such cases, I consider that the Convention itself should be applied as the more protective regulation (Art. 22), subject to the provisions of Article 27. In any case, it seems unlikely that there will be, so to speak, any short-circuit or misunderstanding between the AI Convention and the EU States or the EU itself as a party.

9. To conclude: the Convention incorporates the “lyric” into the “prose” of the EU Regulation

We are at a crucial moment for the regulation of artificial intelligence. The EU's long-standing action in this regard, especially its new EU AI Act, the development of the technology and the impact of tools such as ChatGPT have accelerated regulation at a global level. In 2024, the process of adopting an AI Convention of the Council of Europe will be completed. This is a major attempt to harmonise and build a minimum standard framework for Europe, with a worldwide scope; an effort focused on human rights, democracy and the rule of law, rather than on the economy and the market.

Europe wants to position itself as a shining beacon, with the EU AI Act as a cornerstone not only for the whole continent but also globally. For non-EU member States the AI Convention can obviously play an important normative and legal role. And for the EU and its ratifying Member States, the Convention has the potential to complement and even enhance and strengthen the protection of rights. It has been stressed in this respect that the AI Convention is not limited to high-risk AI systems. Moreover, it integrates “principles” into the legal and policy domain that go beyond the agreed principles of AI ethics. These legal principles have great potential in the hands of legal operators to distil concrete rules and obligations, as has been the case for decades with data protection principles. In addition, the AI Convention gives persons affected by AI certain rights and, in particular, guarantees and mechanisms for the defence of their rights before independent authorities. Particularly relevant is the risk-based approach and, in particular, Article 16, which requires continuous assessment and mitigation of the risks and adverse impacts of any kind of AI system.

But beyond these specific contributions that can complement the EU AI Act, the value of the Convention goes further. If I may say so, the Convention puts the lyric to the prose that is the EU AI Act. While the EU AI Act establishes the foundations and structures of a safe and trusted AI ecosystem, the Convention focuses on its impact on individuals and democratic society. The EU AI Act is methodical, detailed and precise; it charts a clear path through technical and legal complexity, setting firm standards and concrete obligations for providers and users or deployers of AI systems. On the lyrical side, by contrast, the Convention rises to normatively integrate the fundamental values, ethical principles and human rights that should guide the evolution of AI. The Convention has not only a symbolic and meta-legal value: it is also a normative instrument, capable of quasi-constitutional integration into the legal systems of the States Parties, with great interpretative potential. This is why the AI Convention supersedes the dozens of declaratory and soft-law instruments that had become superfluous, unwieldy and even tedious.

It is not possible to foresee how far and when the Convention will be able to deploy its potential. However, with its adoption alone, Europe takes the lead in regulating artificial intelligence, starting from rights and democracy and incorporating “lyric” into the “prose” of the EU Regulation.

10. Bibliographical References

A. Brandusescu, R. Sieber, Comments on Preliminary Discussions with the Government of Canada on Council of Europe Treaty Negotiations on Artificial Intelligence, 31 August 2023. SSRN: https://ssrn.com/abstract=4559139.

J. A. Castillo Parrilla, Group privacy: a challenge for the right to data protection in the light of the evolution of artificial intelligence, in Derecho Privado y Constitución, no. 43, 2023, pp. 53-88, doi: https://doi.org/10.18042/cepc/dpc.43.02

L. Cotino Hueso, Derechos y garantías ante el uso público y privado de inteligencia artificial, robótica y big data, in M. Bauzá (dir.), El Derecho de las TIC en Iberoamérica, La Ley – Thompson-Reuters, Montevideo, Uruguay, 2019, pp. 917-952, at http://links.uv.es/BmO8AU7.

L. Cotino Hueso, Ethics in the design for the development of reliable artificial intelligence, robotics and big data and their usefulness in law, in Revista Catalana de Derecho Público, no. 58, 2019. http://dx.doi.org/10.2436/rcdp.i58.2019.3303.

L. Cotino Hueso, La primera sentencia del Tribunal de Justicia de la Unión Europea sobre decisiones automatizadas y sus implicaciones para la protección de datos y el Reglamento de inteligencia artificial (The first judgment of the Court of Justice of the European Union on automated decisions and its implications for data protection and the Artificial Intelligence Regulation), in Diario La Ley, January 2024.

L. Cotino Hueso, Los tratados internacionales y el Derecho de la Unión Europea. Integración y relaciones con el ordenamiento español, in J.M. Castellá Andreu (ed.), Derecho Constitucional Básico, VII Ed., Huygens, Col. Lex Academica, Barcelona, 2023, pp. 241-258.

L. Cotino Hueso, Guide to Data Protection in IA and Data Spaces, ITI, Valencia, 2021.

L. Cotino Hueso, Nuevo paradigma en la garantías de los derechos fundamentales y una nueva protección de datos frente al impacto social y colectivo de la inteligencia artificial, in L. Cotino Hueso (ed.), Derechos y garantías ante la inteligencia artificial y las decisiones automatizadas, Thompson-Reuters Aranzadi, FIADI (Federación Iberoamericana de Asociaciones de Derecho e Informática), Cizur, 2022.

L. Cotino Hueso, A. Gómez de Ágreda, Criterios éticos y de Derecho Internacional Humanitario en el uso de sistemas militares dotados de inteligencia artificial, in Novum Jus, Vol. 18, No. 1, 2024.

K. Crawford, J. Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, in Boston College Law Review, Vol. 55, No.93, 2014.

J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, M. Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center for Internet & Society Research at Harvard University. January 2020, https://dash.harvard.edu/handle/1/42160420.

High-Level Expert Group on AI (HLEG), Ethics guidelines for trustworthy AI, 2019, no. 78, https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419.

H. van Kolfschooten, C. Shachar, The Council of Europe’s AI Convention (2023-2024): Promise and pitfalls for health protection, in Health Policy, 2023.

D. Leslie, C. Burr, M. Aitken, J. Cowls, M. Katell, M. Briggs, Artificial Intelligence, Human Rights, Democracy and the Rule of Law: a Primer, The Council of Europe and The Alan Turing Institute, June 2021, https://rm.coe.int/primer-en-new-cover-pages-coe-english-compressed-2754-7186-0228-v-1/1680a2fd4a.

A. Mantelero, From group privacy to collective privacy: towards a new dimension of privacy and data protection in the big data era, in L. Taylor, B. Van Der Sloot, L. Floridi (eds.), Group privacy, Springer, Verlag, 2017.

A. Mantelero, Toward a New Approach to Data Protection in the Big Data Era, in U. Gasser, J. Zittrain, R. Faris, R. Heacock Jones, Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse, Berkman Center for Internet and Society at Harvard University, Cambridge (MA), pp. 84 ff.

A. Mantelero, Beyond Data. Human Rights, Ethical and Social Impact Assessment, Springer, Information Technology and Law Series IT&LAW 36, 2022, https://link.springer.com/book/10.1007/978-94-6265-531-7

R. Martínez Martínez, Inteligencia artificial desde el diseño. Retos y estrategias para el cumplimiento normativo, in Revista catalana de dret públic, nº 58, 2019, pp. 64-81.

A. Palma Ortigosa, Decisiones automatizadas y protección de datos personales. Especial atención a los sistemas de inteligencia artificial, Dykinson, 2022.

A. Roig I Batalla, Las garantías frente a las decisiones automatizadas del Reglamento general de Protección de Datos a la gobernanza algorítmica, J.M. Bosch, Barcelona, 2021.

P. Sánchez-Molina, El origen de la cláusula de la mayor protección de los derechos humanos, in Estudios de Deusto, Vol. 66/1, 2018, pp. 375-391, doi: http://dx.doi.org/10.18543/ed-66(1)-2018pp375-391.

P. Valcke, V. Hendrickx, The Council of Europe’s road towards an AI Convention: taking stock, in Law, Ethics & Policy of AI Blog, 25 January 2023, https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-Council-of-Europe-draft-convention.

J. Ziller, El Convenio del Consejo de Europa de inteligencia artificial frente al Reglamento de la Unión Europea: dos instrumentos jurídicos muy diversos, in L. Cotino Hueso, P. Simó Castellanos (coords.), Tratado sobre el Reglamento de Inteligencia Artificial de la Unión Europea, Aranzadi, 2024.

J. Ziller, The Council of Europe Framework Convention on Artificial Intelligence vs. the EU Regulation: two quite different legal instruments, in CERIDAP, 2, 2024, https://ceridap.eu/the-council-of-europe-framework-convention-on-artificial-intelligence-vs-the-eu-regulation-two-quite-different-legal-instruments/?lng=en.

  1. This study is the result of research from the following projects: MICINN Project “Public rights and guarantees against automated decisions and algorithmic bias and discrimination” 2023-2025 (PID2022-136439OB-I00) funded by MCIN/AEI/10.13039/501100011033/; Project “Algorithmic law” (Prometeo/2021/009, 2021-24 Generalitat Valenciana); “Algorithmic Decisions and the Law: Opening the Black Box” (TED2021-131472A-I00) and “Digital transition of public administrations and artificial intelligence” (TED2021-132191B-I00) of the Recovery, Transformation and Resilience Plan. Generalitat Valenciana CIAEST/2022/1 stay, Public Law and ICT Research Group Catholic University of Colombia; Digital Rights Agreement-SEDIA Scope 5 (2023/C046/00228673) and Scope 6. (2023/C046/00229475).
  2. White Paper. On Artificial Intelligence – A European approach for excellence and trust, COM(2020) 65 final, Brussels, 19.2.2020, at https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en.
  3. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). On the regulatory process, please follow the documents at https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence. This version follows the text finally adopted by COREPER on 9 February 2024 and approved by the Internal Market and Civil Liberties Committees of the EU Parliament on 13 February. The recitals and articles mentioned are prior to their final numbering in the final publication. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/AG/2024/02-13/1296003EN.pdf.
  4. Its name is, finally, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj.
  5. https://www.parl.ca/legisinfo/en/bill/44-1/c-27.
  6. The text at https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading. The status of the bill at https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act and the AIDA companion document at https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document.
  7. Senate Bill No. 2338, regulating AI; initial text (in Portuguese) at https://legis.senado.leg.br/sdleg-getter/documento?dm=9347622&ts=1702407086098&disposition=inline&_gl=1*1ifop8*_ga*MTg0Njk3ODg3MS4xNzA1Mzk4MDU2*_ga_CW3ZH25XMK*MTcwNTM5ODA1Ni4xLjAuMTcwNTM5ODA1Ni4wLjAuMA. Procedure at https://www25.senado.leg.br/web/atividade/materias/-/materia/157233.
  8. For example, UK Government, A pro-innovation approach to AI regulation, Department for Science, Innovation and Technology and Office for Artificial Intelligence, 29 March 2023, updated 3 August 2023, https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
  9. http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
  10. Launched by the Minister for Innovation, Science and Industry. Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems.
  11. https://www.mofa.go.jp/files/100573473.pdf.
  12. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. See also the information note https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.
  13. California Senate Bill 896, https://legiscan.com/CA/text/SB896/id/2868456.
  14. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68#:~:text=An%20employer%20may%20not%20use,use%20of%20artificial%20intelligence%20analysis.
  15. There are regulatory proposals from individual legislators and others linked to national AI strategies in, among other countries, Peru, Chile and Colombia.
  16. Thus, at the 1353rd meeting of the Committee of Ministers, Decision CM/Del/Dec(2019)1353/1.5, 11 September 2019.
  17. https://www.coe.int/en/web/artificial-intelligence/cai.
  18. Terms of reference of ad hoc Committee (CM(2021)131-addfinal) Terms of reference https://rm.coe.int/terms-of-reference-of-the-committee-on-artificial-intelligence-for-202/1680a74d2f.
  19. This can be found, among others, at https://dig.watch/processes/convention-on-ai-and-human-rights-council-of-europe-process. Also, see https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/.
  20. It should be recalled that Canada, the United States, Japan, Mexico and the Holy See are “observers”, while Israel is an Observer of the Parliamentary Assembly. https://www.coe.int/es/web/about-us/our-member-states.
  21. Regarding Canada’s position there are two studies: Canadian Bar Association, Council of Europe Treaty Negotiations on Artificial Intelligence, Canadian Bar Association, Privacy and Access Law Section, Immigration Law Section, Ethics and Professional Responsibility Subcommittee, October 2023, https://www.cba.org/Our-Work/Submissions-(1)/Submissions/2023/October/October/Council-of-Europe-Treaty-Negotiations-on-Artificia. Also, A. Brandusescu, R. Sieber, Comments on Preliminary Discussions with the Government of Canada on Council of Europe Treaty Negotiations on Artificial Intelligence (31 August 2023), available at SSRN: https://ssrn.com/abstract=4559139.
  22. About CAHAI, see https://www.coe.int/en/web/artificial-intelligence/cahai. CAHAI, First progress report, 1384th meeting, 23 September 2020, https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016809ed062. Of interest are the preparatory CAHAI studies, Towards regulation of AI systems, Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, Compilation of contributions DGI (2020)16, CAHAI Secretariat, December 2020, https://rm.coe.int/prems-107320-gbr-2018-compli-cahai-couv-texte-a4-bat-web/1680a0c17a. Also noteworthy is the Feasibility study, CAHAI, Feasibility study on a legal framework on AI design, development and application based on CoE standards, 17 December 2020, https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da. Also, D. Leslie, C. Burr, M. Aitken, J. Cowls, M. Katell, M. Briggs, Artificial Intelligence, Human Rights, Democracy and the Rule of Law: a Primer, The Council of Europe and The Alan Turing Institute, June 2021, https://rm.coe.int/primer-en-new-cover-pages-coe-english-compressed-2754-7186-0228-v-1/1680a2fd4a.
  23. CAHAI, Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, 3 December 2021. https://rm.coe.int/cahai-2021-09rev-elements/1680a6d90d.
  24. CAI, Revised zero draft [framework] convention on artificial intelligence, human rights, democracy and the rule of law, Strasbourg, 6 January 2023, https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f.
  25. CAI, Consolidated Working Draft of the Framework Convention, Strasbourg, 7 July 2023, https://rm.coe.int/cai-2023-18-consolidated-working-draft-framework-convention/1680abde66.
  26. CAI, Consolidated working draft, 18 December 2023 https://rm.coe.int/cai-2023-28-draft-framework-convention/1680ade043.
  27. G. Volpicelli, International AI rights treaty hangs by a thread, in Politico, 11 March 2024, https://www.politico.eu/article/council-europe-make-mockery-international-ai-rights-treaty/.
  28. https://rm.coe.int/1680afae3c.
  29. Council of Europe, Explanatory Report to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, March 2024, https://rm.coe.int/1680afae67.
  30. A Preamble with seventeen recitals and thirty-six articles structured in eight chapters: General Provisions (I); General Obligations (II); Principles (III); Remedies (IV); Assessment and Mitigation of Risks and Adverse Impacts (V); Implementation (VI); Monitoring and Cooperation Mechanism (VII); and Final Clauses (VIII).
  31. Parts II and III, no. 11, of CAHAI, Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, cit.
  32. On this subject, see my chapter: L. Cotino Hueso, Los tratados internacionales y el Derecho de la Unión Europea. Integración y relaciones con el ordenamiento español, in J. M. Castellá Andreu (ed.), Derecho Constitucional Básico, VII Ed., Huygens, Col. Lex Academica, Barcelona, 2023, pp. 241-258.
  33. The “new” definition in “AI terms & concepts”, https://oecd.ai/en/ai-principles, should be taken into account.
  34. G. Volpicelli, International AI rights treaty hangs by a thread, cit.
  35. EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, 13 October 2022, p. 8, https://edps.europa.eu/system/files/2022-10/22-10-13_edps-opinion-ai-human-rights-democracy-rule-of-law_en.pdf.
  36. Recital 2f (new), Amendment 11 and in the scope regulation in Article 2 – paragraph 5d (new).
  37. The topic is analysed in depth in my study: L. Cotino Hueso, Guía de Protección de Datos en IA y Espacios de Datos, ITI, Valencia, 2021.
  38. On this subject, see my study: L. Cotino Hueso, G. de Ágreda, Criterios éticos y de Derecho Internacional Humanitario en el uso de sistemas militares dotados de inteligencia artificial, in Novum Jus, Vol. 18, No. 1, 2024.
  39. Article 2(2)(a) GDPR excludes from its scope processing «in the course of an activity which falls outside the scope of Union law» and, in this sense, Recital 16 gives as an example «activities in relation to the common foreign and security policy of the Union».
  40. https://rm.coe.int/cai-2023-18-consolidated-working-draft-framework-convention/1680abde66.
  41. Thus, there is flexibility in the application of the Convention to «protect essential national security interests, including through foreign intelligence and counter-intelligence related activities» (Option A); or simply for «national security interests» (Option B). In Option C, the party «may restrict» the application of the Convention «to protect essential national security interests».
  42. Recital 10, with reference to Article 216(2) TFEU. Council Decision (EU) 2022/2349 of 21 November 2022 authorising the opening of negotiations on behalf of the European Union for a Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, https://eur-lex.europa.eu/legal-content/ES/TXT/HTML/?uri=CELEX:32022D2349.
  43. «A Party shall not be required to apply this Convention to activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests, with the understanding that such activities are conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes».
  44. The 2017 European Parliament Resolution on Civil Law Rules on Robotics is a clear manifestation of these principles. The EU’s Ethics Guidelines for Trustworthy AI, the OECD’s Recommendation of the Council on Artificial Intelligence of 22 May 2019 and UNESCO’s Recommendation on the Ethics of Artificial Intelligence of November 2021 certainly stand out. Already in 2018, the AI4People project counted 47 internationally proclaimed ethical principles and distilled them into five: beneficence («do good»), non-maleficence («do no harm»), autonomy or human agency («respect for the self-determination and choice of individuals»), justice («fair and equitable treatment for all») and explicability. All this can be followed in my study: L. Cotino Hueso, Ethics in the design for the development of reliable artificial intelligence, robotics and big data and their utility from the law, in Revista Catalana de Dret Públic, no. 58, 2019: http://dx.doi.org/10.2436/rcdp.i58.2019.3303.
  45. J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, M. Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center for Internet & Society Research at Harvard University, 2020, https://dash.harvard.edu/handle/1/42160420.
  46. «Article 11 – Preservation of health [and the environment]. Each Party shall adopt or maintain measures to preserve health [and the environment] in the context of activities within the lifecycle of artificial intelligence systems».
  47. The EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, cit. recommends an explicit reference to compliance with EU law (3.3) and that there should be a data protection by design and by default approach (paragraph 6, p. 13 et seq. nr. 36 et seq.).
  48. Thus, in Article 2 e.: «“subject of artificial intelligence” means any natural or legal person whose human rights and fundamental freedoms or related legal rights guaranteed by applicable national or international law are affected by the application of an artificial intelligence system, including decisions taken or substantially informed by the application of such a system».
  49. EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, cit. p. 10 nr. 28.
  50. P. Valcke, V. Hendrickx, The Council of Europe’s road towards an AI Convention: taking stock, in Law, Ethics & Policy of AI Blog, 25 January 2023, https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-Council-of-Europe-draft-convention.
  51. «affected persons who are located in the Union».
  52. See La creación dinámica de grupos algorítmicos, la privacidad colectiva y de grupo, in my study: L. Cotino Hueso, Nuevo paradigma en la garantías de los derechos fundamentales y una nueva protección de datos frente al impacto social y colectivo de la inteligencia artificial, L. Cotino Hueso (ed.), Derechos y garantías ante la inteligencia artificial y las decisiones automatizadas, Thompson-Reuters Aranzadi, FIADI (Federación Iberoamericana de Asociaciones de Derecho e Informática), Cizur, 2022. On the subject recently, J. A. Castillo Parrilla, Group privacy: a challenge for the right to data protection in light of the evolution of artificial intelligence, in Derecho Privado y Constitución, no. 43, 2023, pp. 53-88, doi: https://doi.org/10.18042/cepc/dpc.43.02.
  53. See my contribution to the CSIS Network, CSIS contribution to the public consultation on the Digital Rights Charter, December 2020 https://bit.ly/3paF0H0.
  54. In addition to the work reviewed, see, for example, A. Mantelero, From group privacy to collective privacy: towards a new dimension of privacy and data protection in the big data era, in L. Taylor, B. Van Der Sloot, L. Floridi (eds.), Group Privacy, Springer Verlag, 2017, chap. 8.
  55. In particular paragraphs 28, 30 (equality), 46 and 47 (governance), 50, 52, 53, 60 and 64 (ethical impact) or No 91 (gender).
  56. A. Brandusescu, R. Sieber, Comments on Preliminary Discussions with the Government of Canada on Council of Europe Treaty Negotiations on Artificial Intelligence, cit. p. 6.
  57. EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, cit., no. 6, paragraph 8 of ‘Conclusions’, p. 15.
  58. The December 2023 draft affirmed «fair access to public debate [and the ability of individuals to make decisions free from undue/[harmful and malicious] outside influence or manipulation».
  59. The December 2023 draft included «whistleblower protection» in what was Article 19, which could be a mechanism for guaranteeing collective interests. Article 20 generically states «digital literacy and appropriate digital skills for all segments of the population» (Art. 21).
  60. Thus A. Brandusescu, R. Sieber, Comments on Preliminary Discussions with the Government of Canada on Council of Europe Treaty Negotiations on Artificial Intelligence, cit. p. 3 on «Rights mechanisms».
  61. K. Crawford, J. Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, in Boston College Law Review, Vol. 55, 2014, pp. 93 ff., at p. 125.
  62. «Recognising the importance of human review / [oversight], each Party shall ensure that, [where an artificial intelligence system substantially informs or takes decisions [or acts] impacting on human rights], effective procedural guarantees, safeguards and rights, in accordance with the applicable domestic and international law, are available to persons affected thereby».
  63. Of particular reference in this regard is the Article 29 Working Party, Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, of 3 October 2017, revised on 6 February 2018. I have analysed this provision in particular in L. Cotino Hueso, Derechos y garantías ante el uso público y privado de inteligencia artificial, robótica y big data, in M. Bauzá (dir.), El Derecho de las TIC en Iberoamérica, La Ley – Thompson-Reuters, Montevideo, Uruguay, 2019, pp. 917-952, accessed at http://links.uv.es/BmO8AU7. See in particular A. Palma Ortigosa, Decisiones automatizadas y protección de datos personales. Especial atención a los sistemas de inteligencia artificial, Dykinson, 2022 and A. Roig i Batalla, Las garantías frente a las decisiones automatizadas del Reglamento general de Protección de Datos a la gobernanza algorítmica, J.M. Bosch, Barcelona, 2021.
  64. The judgment of the CJEU of 7 December 2023 is the first to deal centrally with Article 22 GDPR and adopts a criterion along the lines of including the guarantee of this article in cases in which the automated system determines the basic elements of the decision to be taken. On this subject, it is worth following my recent study: L. Cotino Hueso, La primera sentencia del Tribunal de Justicia de la Unión Europea sobre decisiones automatizadas y sus implicaciones para la protección de datos y el Reglamento de inteligencia artificial (The first judgment of the Court of Justice of the European Union on automated decisions and its implications for data protection and the Artificial Intelligence Regulation), in Diario La Ley, January 2024.
  65. High-Level Expert Group on AI (HLEG), Ethics guidelines for trustworthy AI, 2019, no. 78, https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419.
  66. Thus, it is worth following the already mentioned Harvard study, J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, M. Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, cit.
  67. Council of Europe, Guide on Article 13 of the European Convention on Human Rights. Right to an effective remedy, updated 31 August 2022, https://ks.echr.coe.int/web/echr-ks/article-13.
  68. The ECtHR judgment of 3 April 2001, Keenan v. United Kingdom, para. 122 summarises the content of this precept.
  69. In the version of 18 December, letter (c) is indicated as a possibility (in red).
  70. Article 8 only affirms «Autonomy» and «Technical independence» as «Principles of action of the Agency» and that «the Agency shall act with full autonomy». Despite these statements, the statute in no way guarantees either functional or organisational independence, but rather dependence on the Secretary of State.
  71. Thus, No 46 notes «the heterogeneity of the areas covered by AI systems (ranging from labour and employment to financial services, education and health care, administration of justice, fraud prevention, etc.), there is a need for structured systems and institutionalised cooperation between different competent authorities (in particular between data protection authorities and competent sectoral authorities)».
  72. A comprehensive analysis in my study: L. Cotino Hueso, Ethics in Design for the development of reliable artificial intelligence, robotics and big data and their utility from the law, cit.
  73. HLEG, Ethics guidelines for trustworthy AI, cit., especially Chapter III and listing, pp. 33-41.
  74. I focus on this in L. Cotino Hueso, Nuevo paradigma en la garantías de los derechos fundamentales y una nueva protección de datos frente al impacto social y colectivo de la inteligencia artificial, cit.
  75. Regarding the data domain A. Mantelero, Toward a New Approach to Data Protection in the Big Data Era, in U. Gasser, J. Zittrain, R. Faris, R. Heacock Jones, Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse, Berkman Center for Internet and Society at Harvard University, Cambridge (MA), pp. 84 ff. For AI, A. Mantelero, Beyond Data. Human Rights, Ethical and Social Impact Assessment, Springer, Information Technology and Law Series IT&LAW 36, 2022, https://link.springer.com/book/10.1007/978-94-6265-531-7. In Spain, among others R. Martínez Martínez, Inteligencia artificial desde el diseño. Retos y estrategias para el cumplimiento normativo, in Revista catalana de dret públic, nº 58, 2019, pp. 64-81.
  76. A. Mantelero, Chapter III, pp. 61-119, in CAHAI, Towards regulation of AI systems, Global perspectives on the development of a legal framework on Artificial Intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, cit., p. 72.
  77. CAHAI, Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, cit., no. 19, p. 5.
  78. «(1) Risk identification: Identification of risks relevant to human rights, democracy and the rule of law. (2) Impact assessment: Assessment of impact, taking into account the likelihood and severity of effects on those rights and principles. (3) Governance assessment: Assessment of the roles and responsibilities of duty-bearers, rights-holders and stakeholders in the implementation and governance of mechanisms to mitigate impact. (4) Mitigation and evaluation: Identification of appropriate mitigation measures and ensuring continuous evaluation».
  79. Namely: level of autonomy, underlying technology, intended and potentially unintended use, complexity of the system, transparency and explainability, human oversight and control, data quality, the robustness/security of the system, the involvement of vulnerable individuals or groups, geographic and temporal scope, the assessment of the likelihood and extent of potential harm and its reversibility, and whether it is a “network line”.
  80. Albeit superficially, H. van Kolfschooten, C. Shachar, The Council of Europe’s AI Convention (2023-2024): Promise and pitfalls for health protection, in Health Policy, 2023, p. 3, or A. Brandusescu, R. Sieber, Comments on Preliminary Discussions with the Government of Canada on Council of Europe Treaty Negotiations on Artificial Intelligence, cit. p. 2, which considers it as one of the «main results».
  81. EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, cit. On the subject, p. 11 et seq.
  82. The draft stated: «the severity, duration and reversibility of the possible risks».
  83. On the subject P. Sánchez-Molina, El origen de la cláusula de la mayor protección de los derechos humanos, in Deusto Studies, Vol. 66/1, 2018, pp. 375-391, doi: http://dx.doi.org/10.18543/ed-66(1)-2018pp375-391.
  84. «If two or more Parties have already concluded an agreement or treaty on the matters dealt with in this Convention, or have otherwise established relations on such matters, they shall also be entitled to apply that agreement or treaty or to regulate those relations accordingly, so long as they do so in a manner which is not inconsistent with the object and purpose of this Convention».
  85. Annex to Council of the EU, Council Decision (EU) 2022/2349 of 21 November 2022 authorising the opening of negotiations on behalf of the European Union for a Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, https://eur-lex.europa.eu/legal-content/ES/TXT/HTML/?uri=CELEX:32022D2349.
  86. European Commission, Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union with a view to a Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, COM/2022/414 final, Brussels, 18 August 2022, p. 5 https://eur-lex.europa.eu/legal-content/ES/TXT/HTML/?uri=CELEX:52022PC0414.
  87. EDPS, Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, cit., p. 11, p. 7.
  88. Ibid., section 3.2, p. 8, nos. 18 et seq.
  89. European Commission, Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union with a view to a Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, cit., p. 5.
  90. Recital 11, Council of the EU, Decision (EU) 2022/2349, cit.
  91. Ibid., Council of the EU, Decision (EU) 2022/2349, cit., p. 6.
  92. Thus, in the Secretariat’s proposal, Council of the EU, Decision (EU) 2022/2349, cit., p. 4; in particular, in the Annex, p. 3: «(3) That the Convention enables the European Union to become a party to it».
  93. «[Within the areas of its competence, the European Union shall exercise its right to vote with a number of votes equal to the number of its member States which are Contracting Parties to this Convention; the European Union shall not exercise its right to vote in cases where the member States concerned exercise theirs, and conversely]».