4/2023

The impact of the AI Act on public authorities and on administrative procedures


This contribution summarises the main amendments adopted by the European Parliament during its first reading of the EU’s Proposal for a Regulation on Artificial Intelligence (AI Act). It outlines the impact of this Act, if adopted with such amendments, on automated administrative decision-making (“adm-ADM”), and examines the margin Member States will have to supplement such provisions in their respective national administrative procedure acts. It concludes that the AI Act is a necessary piece of legislation and that, if adopted with some of the Parliament’s amendments, it will adequately regulate the development and use of AI systems by European public authorities, setting a high regulatory standard that can be reinforced by national legislators.
Summary: 1. Introduction.- 2. The AI Act and adm-ADM after the European Parliament’s amendments.- 2.1. The key aspects of the AI Act and its application to public authorities.- 2.2. The amendments adopted by the European Parliament.- 2.3. The amendments of the European Parliament having a direct impact on adm-ADM and on administrative procedures.- 3. The Administrative Procedure Acts after the AI Act.- 4. Conclusion.

1. Introduction[1]

The Proposal for a Regulation on Artificial Intelligence (AI Act) presented by the European Commission in April 2021[2] and currently in the final phase of the legislative procedure, after the amendments of the Council[3] and the European Parliament (EP)[4], is generating high expectations and heated debates around the world. It is expected to be approved before the EP elections in June 2024, following the ongoing trilateral negotiations (trilogues) between these institutions. Spain, currently holding the six-month presidency of the Council and committed to advancing the AI Act’s adoption during its tenure, has already passed regulations governing the agency that will be designated as the national supervisory authority when the AI Act enters into force[5].

This contribution first outlines the major impact that the AI Act, if adopted, will have on automated decision-making by public authorities across Europe (hereinafter administrative ADM or “adm-ADM”), especially if it includes some of the amendments adopted by the EP (para. 2). The second part examines the margin Member States will have to supplement the provisions of the AI Act in their national administrative procedure acts (para. 3). It concludes by underlining the importance of the adoption of the AI Act with some of the EP’s amendments in order to have an adequate regulatory framework for the development and use of AI systems by European public authorities (para. 4).

2. The AI Act and adm-ADM after the European Parliament’s amendments

2.1. The key aspects of the AI Act and its application to public authorities

The AI Act, based on Article 114 of the TFEU (approximation of laws to achieve the internal market) and aimed at guaranteeing the free movement of AI systems, applies equally to public and private actors that develop such systems («providers») or that use them for professional purposes («users» or «deployers» according to the new wording of the EP). Public authorities will usually be considered “users”, but they will be “providers” when they develop their own AI systems in-house or purchase tailor-made AI systems.

With an approach typical of product safety legislation, the Commission’s Proposal bans certain AI systems (Art. 5) and, above all, imposes numerous obligations on providers (and, to a lesser extent, users) of the high-risk systems listed in its Annexes II and III. Annex II refers to AI systems which are safety components of certain products already covered by EU law, e.g. machines, toys, medical devices, vehicles and aircraft, while Annex III contains a list of what are known as stand-alone AI systems, which are not linked to other products and which relate to certain use cases that are considered to be particularly dangerous. Many of these high-risk use cases of Annex III concern public authorities, such as those related to the management of critical infrastructure, access to educational and vocational training institutions, assessment of students, selection, promotion and dismissal of workers, access to public services and public benefits, or the different use cases related to law enforcement and the management of migration, asylum and border control.

Such high-risk systems must be subject to a conformity assessment before being placed on the market, which (in almost all Annex III use cases) should normally be carried out by the provider itself and not by third parties. They must also be registered in a centralised and publicly accessible database to be managed by the Commission (Art. 60). As is generally the case in product safety legislation, such high-risk systems will be presumed to comply with the obligations of the AI Act when they conform to the technical standards to be developed by the European standardisation bodies (CEN, CENELEC and ETSI).

Other, non-high-risk AI systems are only subject, in certain specific cases, to the transparency obligations under Art. 52 (e.g. informing individuals when interacting with a chatbot or when an AI system generates deepfakes).

The Commission’s Proposal also provides that the supervision of compliance with all these prohibitions and obligations will be the responsibility of the Member States (through the national supervisory authorities) and, in the case of Union authorities, of the European Data Protection Supervisor (EDPS). The national supervisory authorities and the EDPS may impose heavy fines in the event of non-compliance, the maximum amount of which is set by the AI Act itself (up to 30 000 000 euros or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher; and up to 500 000 euros in the case of EU authorities).

2.2. The amendments adopted by the European Parliament

The EP, in the more than 700 amendments it has adopted, has not altered the key aspects of the AI Act described in the previous section, but it has introduced very important changes that affect public authorities developing and using AI systems. The most important are the following.

  1. In Art. 3(1)(1), the EP provides a new definition of AI in line with the OECD Recommendation of 2019[6], based on the autonomy of the system and not on specific techniques (Annex I, which enumerated such AI techniques, is therefore deleted). According to the new version of recitals 6 and 6a, AI systems will normally, but not necessarily, include machine learning.
  2. A new Art. 4a includes a list of general principles applicable to the development and use of all AI systems and not only to high-risk systems. This is important because most AI systems developed or used by public authorities will not be classified as high-risk. Such principles include, among others, the principle of «human agency and oversight», which requires that AI systems function «in a way that can be appropriately controlled and overseen by humans»; the principle of «transparency», including «appropriate traceability and explainability», and information of «affected persons about their rights»; and the principle of «diversity, non-discrimination and fairness»[7].
  3. The new version of Art. 5 significantly extends the list of prohibited AI systems, many of them potentially used by public authorities, such as real-time remote biometric identification systems in publicly accessible spaces (which are completely banned, without the exceptions foreseen in the Commission’s Proposal; moreover, post remote biometric identification systems in publicly accessible spaces are only admitted where they are authorised by a judge); predictive policing AI systems; AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage (the controversial practice carried out by the company Clearview AI); and AI systems that infer the emotions of natural persons in the areas of law enforcement and border management, and in workplaces and education institutions.
  4. Regarding high-risk AI systems, those most intensively regulated by the AI Act, the new version of Art. 6 adds an “extra layer” to the classification as high-risk. This means that AI systems related to the areas and use cases listed in Annex III (the aforementioned stand-alone AI systems) shall only be considered high-risk «if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons»[8]. The new Art. 6(2) allows providers who consider that their AI system does not pose such a significant risk to submit a brief reasoned notification (one page suffices according to recital 32a) to the national supervisory authority and to place the system on the market without having to comply with the obligations of the AI Act if they do not receive objections from the national supervisory authority within three months. This “extra layer” was first introduced by the Council and has been criticised by many NGOs[9]. Note that this “exemption” may be invoked only by providers, not by users/deployers of AI systems.
  5. At the same time, regarding Annex III, the EP has added some new high-risk use cases, has empowered the European Commission not only to add but also to amend and delete use cases through delegated acts, and has expressly included not only AI systems that «make decisions», but also those that «materially influence decisions»[10]. This last clarification is important and shows that the AI Act covers not only fully automated decisions, but also so-called semi-automated decisions and procedures, in which the final decision is adopted by a human on the basis of the output of an automated system. It is also interesting that the last high-risk use case of Annex III (point 8, referring to AI systems used by judges) is extended to administrative appeals and ADR mechanisms. Although the wording of Amendment 738 is not very clear, the new recital 40 seems to confirm this and that, according to the legislator, judicial (and also administrative) remedies must in any case be decided by humans.
  6. The EP has also moderated some of the substantive requirements imposed on high-risk systems in Arts. 8 ss., making them less categorical. For example, the new version of Art. 10(3), instead of requiring training datasets to be «relevant, representative, free of errors and complete», establishes that they shall be «relevant, sufficiently representative, appropriately vetted for errors and be as complete as possible in view of the intended purpose»[11].
  7. Of course, the EP’s most widely publicised amendment has been the new Art. 28b on foundation models. This provision seeks to address concerns about ChatGPT and other generative AI systems, which did not exist when the Commission’s proposal was presented in April 2021. This Article imposes stringent obligations on providers (not users) of such models, even if they are not classified as high-risk. This regulation of foundation models is the most contested aspect of the AI Act in the much-discussed open letter signed last June by 150 prominent European businesses[12], which has in turn generated a counter-letter, also signed by 150 civil society organisations, supporting the AI Act and many of the amendments of the EP[13]. It is in any case hard to imagine public authorities developing such complex models and therefore being subject to these obligations.
  8. Other important amendments adopted by the EP are the requirement that national supervisory authorities be fully independent (which increases the credibility of their control of compliance with the AI Act by other national authorities), the significant strengthening of the institutional position and powers of the European AI Board, making it a fully-fledged independent agency (“European AI Office”), and the increase in the fines that the EDPS can impose on EU authorities in case of non-compliance with the AI Act (up to 1.5 million euros).

2.3. The amendments of the European Parliament having a direct impact on adm-ADM and on administrative procedures

Separate mention should be made of a number of amendments adopted by the EP which have a direct impact on automated decision-making by public authorities and the administrative procedures they must follow in order to take binding single-case decisions.

The EP, following the suggestions of several NGOs[14] and the EDPS/European Data Protection Board[15], has included some obligations directly aimed at protecting natural persons affected by the use of high-risk AI systems. This is an important change, because the Commission’s Proposal only referred to providers and users of AI systems, and not to affected persons, who are now defined in Art. 3(1)(8a) as «any natural person or group of persons who are subject to or otherwise affected by an AI system».

In a paper drafted before the EP’s amendments I suggested including some of these obligations (in particular, the duty to inform the affected parties and the public, as well as the duty to conduct impact assessments before and after automating administrative decision-making) in a new specific Title of the AI Act on the use of AI systems by the EU administration, which could be based on the legal basis of Art. 298 TFEU[16]. The EP has chosen to extend them to all types of public and private users (deployers) of AI systems, as will be seen next.

  1. The first important measure is the obligation of all (private and public) users/deployers to inform affected persons that they are subject to the use of a high-risk system of Annex III. According to the new paragraph 6a of Art. 29, this obligation exists not only where the decision is fully automated, but also where the AI system is used to assist a human in making the decision. This Article also specifies that this information shall include the intended purpose and the type of decisions the AI system makes. It also obliges deployers to inform the affected person about the right to request an explanation, which will be discussed below.
  2. This obligation to inform the individual affected person is supplemented by the new obligation of (only) public authorities (and private undertakings designated as gatekeepers under the Digital Services Act (DSA)[17]) to register their use of high-risk AI systems in the aforementioned EU database of high-risk systems envisaged in Art. 60 (new Art. 51(1a)). This is an important transparency obligation for public authorities that will allow control of administrative high-risk systems by public watchdogs and that goes beyond the specific administrative procedures that must be followed to adopt single-case decisions. As has been seen, according to the Commission’s Proposal only providers were obliged to register AI systems in this database.
  3. A second important measure to protect affected persons is the new right to an explanation envisaged in Art. 68c. According to this new provision, deployers (in our case public authorities) that use high-risk AI systems to adopt decisions with legal effects or that adversely affect a natural person must give the affected person a clear and meaningful explanation. What has to be explained is the role of the AI system in the decision-making procedure, the main parameters of the decision taken and the related input data. This explanation must only be given at the request of the affected person and may be excluded, in justified cases, by Union or Member State law.
  4. A third relevant measure included by the EP to protect affected persons is their right to lodge a complaint with the national supervisory authority if they consider that the AI systems relating to them infringe the AI Act (new Art. 68a). This complaint is without prejudice to any other administrative or judicial remedy that may exist.
  5. Last but not least, the EP has also introduced in the new Art. 29a the widely demanded[18] obligation for users of high-risk systems to carry out a fundamental rights impact assessment (FRIA) prior to their first use. Such an impact assessment must take into account the specific context of use of the AI system and includes the duty to consult widely with the national supervisory authority and relevant stakeholders, who shall have six weeks to submit comments. Public authorities (and gatekeepers according to the DSA) must publish a summary of this impact assessment when they register the use of the AI system in the aforementioned EU database of high-risk systems of Art. 60. If a data protection impact assessment must also be carried out, it can be included as an addendum to this FRIA.

It is important to underline that according to the version of Art. 83(2) amended by the EP, all these requirements and those mentioned in the previous sections will not only be applicable to new high-risk AI systems used by public authorities, but also to those that have been put into service before the approval of the AI Act. Art. 83(2) gives providers and deployers of such AI systems two years after the entry into force of the AI Act (which will take place twenty days after its publication) to comply with them. This period is extended to four years in case of the EU large-scale IT systems listed in Annex IX. According to the Commission’s Proposal, pre-existing AI systems used by public authorities only had to comply with it if they were subject to significant changes, which is still the case for AI systems used by private parties.

3. The Administrative Procedure Acts after the AI Act

It follows from what has been seen that the AI Act (especially if it ends up including the EP’s amendments) will have a major impact on EU and Member State public authorities when using or developing AI systems. It will prohibit some AI systems that many public authorities would want to use. It will impose numerous substantive and procedural obligations on them when developing or using high-risk systems, including obligations to conduct a prior impact assessment with extensive consultation before using the system, to register the use of the system in a European database, and to inform and provide a detailed explanation to natural persons affected by decisions based on such systems. The AI Act will also impose certain transparency obligations on them when using other, non-high-risk AI systems listed in Art. 52 (such as chatbots), and compliance, whenever they use any AI system, with the general principles mentioned above. Compliance with all these prohibitions and obligations will be supervised by independent national supervisory authorities and, in the case of EU authorities, by the EDPS, who may impose heavy fines on them.

On the other hand, the broad concept of provider used in the AI Act prevents public authorities from circumventing the obligations of this act when commissioning external contractors to develop tailor-made AI systems.

All these requirements go far beyond Art. 22 GDPR[19], which remains applicable but which only covers fully automated decisions (not those taken by humans on the basis of automated systems) and decisions based on the processing of personal data (not those based on other big data, as is usually the case with AI machine-learning systems). Moreover, Art. 22 only requires that fully automated decisions taken by public authorities be subject to a specific legal authorisation and to suitable measures to safeguard the rights and freedoms and legitimate interests of the data subject.

The requirements imposed by the AI Act are in my view perfectly compatible with the administrative procedure requirements arising from Art. 41 of the Charter of Fundamental Rights of the EU and national Administrative Procedure Acts (APAs)[20]. In particular, the right to an explanation of how the system works is consistent with the administration’s duty to state reasons, which has emerged as a major deterrent to the use of opaque machine-learning algorithms in administrative decision-making. On the other hand, the flexible approach of the AI Act provisions allows them to be complied with both in the framework of fully automated administrative procedures, and in the more usual (and also dangerous) case of semi-automated decisions, taken by a human being but under the determining influence of a computer system.

An important question then arises: what margin will national legislators have to develop the provisions of the AI Act regarding the use of AI systems by their public authorities? In my opinion, the AI Act establishes minimum guarantees concerning the use of AI systems by public authorities that cannot be reduced, but which can be increased by national legislators.

The free movement of AI systems that meet the requirements of the AI Act does not prevent a national (or even regional) legislator from pursuing, in its APA, different policy options, e.g.:

  1. To extend the requirements imposed by the AI Act on high-risk systems to other types of systems used by public authorities that do not merit such a classification according to Annex III, establishing e.g. the obligation to carry out a simplified impact assessment or to register them in a local or national database.
  2. To extend to legal persons affected by decisions based on AI systems the safeguards that the AI Act (under the influence of data protection law and of the legal basis of Art. 16 TFEU, mentioned alongside Art. 114 TFEU in the first citation of the preamble of the AI Act) only provides for natural persons. Natural and legal persons usually have the same procedural rights in their relations with public authorities according to national APAs, and legal persons should also be able to defend themselves adequately when they are subject to a poorly designed or trained AI system used by the administration.
  3. To add further requirements to the use of AI systems by public authorities, e.g. that a specific legal basis exists (as required by the German APA for fully automated decisions[21]), that the final decision must necessarily be taken by a human being (as required in general by the Austrian APA[22]) or, at least, that a human being must intervene when the person concerned submits arguments in the course of the hearing or lodges an administrative appeal prior to judicial review.
  4. Or to prohibit the use of AI systems by public authorities in certain circumstances, e.g. when exercising discretionary powers, as provided by the German APA in relation to fully automated decisions[23], or by a legislative draft amending the Estonian APA, which, in cases involving discretion, only allows the use of previously programmed expert systems and not machine learning[24].
  5. National APAs may of course also freely regulate automated systems that are not covered by the AI Act, i.e. those that do not qualify as AI, which still make up most of the automated systems used by public authorities all over Europe.

All these adaptations could also be taken into account in a specific APA for the EU administration, such as the one drafted within the ReNEUAL network[25], which was also formally demanded by the EP in its Resolutions of 15.1.2013 and 9.6.2016[26].

These and other policy choices that restrict the use of AI systems and other ADM-systems by public authorities seem perfectly admissible and are not prohibited by the AI Act. Admittedly, recital 1 states (also in the EP’s version) that the AI Act «ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation». But this does not seem to be aimed at preventing Member States from conditioning the use of AI systems by their national public authorities, but rather at preventing them from imposing additional restrictions on the development and use of such systems in the private sector. The free movement of goods and services is conceived for citizens and businesses, not for public authorities, which cannot invoke against their national legislator the fact that a European Regulation entitles them to develop and use a certain software system without additional limitations.

Whenever the debate has arisen on the suitability of adopting a European codification of administrative procedure to be observed by all national administrations when implementing Union law, significant doubts have been raised about EU competence, on the ground that such a codification would infringe the so-called institutional and procedural autonomy of the Member States, and it has been considered more prudent to limit it to the procedures of the Union administration, which enjoy the solid legal basis provided by Art. 298 TFEU[27]. In the same vein, Art. 41 of the Charter only applies directly to the EU administration, even though the CJEU has extended the principle of good administration that emerges from it to national administrations as well. Against this background, it would not make much sense for the EU legislator to be willing and able to deprive the Member States of their competence to shape the administrative procedure to be observed by their public authorities by means of a piece of legislation such as the AI Act, which is limited to regulating a certain type of software. If anything, the opposite question could be raised: whether the AI Act can actually impose the procedural obligations examined above on the various national administrations.

For the sake of clarity and for the avoidance of doubt, it would be desirable for the AI Act to expressly allow Member States and Union law to increase the guarantees for persons affected by AI systems used by national and Union public authorities, as the new Art. 2(5c) proposed by the EP does in relation to workers. It would make no sense for workers to enjoy greater protection than that provided by the AI Act while citizens and legal persons dealing with public authorities do not.

In any case, considering the high interest most public authorities have in the rapid development of AI tools, it is quite possible that the AI Act will end up being the main regulatory framework and that national APAs will not add further requirements and limitations.

4. Conclusion

In conclusion, even if the AI Act still needs some fine-tuning in the trilogues, it is a necessary piece of legislation that should be adopted with the EP’s amendments indicated in section 2.3. With such amendments, it will adequately regulate the development and use of AI systems by public authorities, establishing a high regulatory standard that can be further developed by the national APAs.

There are high expectations worldwide regarding the approval of this Act. As Europeans, we must not disappoint them and should be able to pass the AI Act in the coming months, before the parliamentary term expires.

  1. This paper is the written version, with some changes, of the presentation given at the Conference “The future of European public law under the influence of automated decision-making” held on 14-15 September 2023 at the University of Luxembourg as final conference of the INDIGO research project (PCI2020-112207 / AEI / 10.13039/501100011033).
  2. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 21 April 2021, COM(2021) 206 final, 2021/0106 (COD).
  3. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – General approach (6 December 2022), ST 15698 2022 INIT.
  4. Amendments adopted by the European Parliament on 14 June 2023 on the Proposal for a Regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), P9_TA(2023)0236. A very useful comparative table of the Commission’s initial proposal and the amendments adopted by the Council and the Parliament can be found at the bottom of this website: https://www.kaizenner.eu/post/aiact-part3 (last visited: 24 September 2023).
  5. Royal Decree 729/2023 of 22 August, approving the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (Real Decreto 729/2023, de 22 de agosto, por el que se aprueba el Estatuto de la Agencia Española de Supervisión de Inteligencia Artificial).
  6. OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, p. 7.
  7. Amendments adopted by the European Parliament on 14 June 2023 on the Proposal for a Regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, cit.
  8. Ivi.
  9. EDRi et al., EU Trilogues: The AI Act must protect people’s rights, 12 July 2023, available at https://edri.org/our-work/civil-society-statement-eu-protect-peoples-rights-in-the-ai-act-trilogue-negotiations/ (last visited: 24 September 2023), p. 4.
  10. Amendments adopted by the European Parliament on 14 June 2023 on the Proposal for a Regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, cit.
  11. Ivi.
  12. Open letter to the representatives of the European Commission, the European Council and the European Parliament: Artificial Intelligence: Europe’s chance to rejoin the technological avant-garde, available at https://www.theverge.com/2023/6/30/23779611/eu-ai-act-open-letter-artificial-intelligence-regulation-renault-siemens (last visited: 24 September 2023).
  13. EDRi et al., cit.
  14. EDRi et al., cit., p. 2, and the previous documents to which it links.
  15. EDPB-EDPS, Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), 18 June 2021.
  16. O. Mir, Algorithms, Automation and Administrative Procedure at EU Level, in University of Luxembourg Law Research Paper No. 2023-08, 4 September 2023, available at SSRN: https://ssrn.com/abstract=4561009 or http://dx.doi.org/10.2139/ssrn.4561009, pp. 10-18.
  17. Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).
  18. See e.g. EDRi et al., cit., p. 2 and the Urgent Appeal to approve a solid Fundamental Rights Impact Assessment in the EU Artificial Intelligence Act signed by more than 150 European scholars and circulated on 12 September 2023.
  19. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
  20. On the importance of the classical principles of administrative procedure related to the fundamental right to good administration and enshrined in many national APAs to protect citizens adequately when faced with adm-ADM (particularly the duty of careful investigation of the case by the administration, the right of the interested parties to be heard, their right of access to the file and the duty of the administration to give reasons for its decisions) see O. Mir, cit., passim.
  21. § 35a of the German Federal APA (Verwaltungsverfahrensgesetz, VwVfG). See J.-P. Schneider, F. Enderlein, Automated Decision-Making Systems in German Administrative Law, in CERIDAP, 1, 2023, p. 100.
  22. § 18(3) of the Austrian APA (Allgemeines Verwaltungsverfahrensgesetz, AVG). See F. Merli, Automated Decision-Making Systems in Austrian Administrative Law, in CERIDAP, 1, 2023, pp. 42 ss.
  23. § 35a of the German Federal APA. See J.-P. Schneider, F. Enderlein (n. 21), pp. 100-101.
  24. § 7(3)(5) of the Estonian APA according to the Draft Act to Amend Administrative Procedure Act and Other Acts in Relation Thereto (634 SE). See I. Pilving, Guidance-based Algorithms for Automated Decision-Making in Public Administration: the Estonian Perspective, in CERIDAP, 1, 2023, pp. 59 ss., 68 ss.
  25. P. Craig, H. Hofmann, J.-P. Schneider, J. Ziller (edited by), ReNEUAL Model Rules on EU Administrative Procedure, Oxford University Press, 2017. Available online and in different languages at http://www.reneual.eu (last visited: 24 September 2023).
  26. European Parliament resolution of 15 January 2013 with recommendations to the Commission on a Law of Administrative Procedure of the European Union (2012/2024(INL)); European Parliament resolution of 9 June 2016 for an open, efficient and independent European Union administration (2016/2610(RSP)). See more recently European Parliamentary Research Service, Digitalisation and administrative law. European added value assessment, PE 730.350, November 2022.
  27. See notes 21 and 22; O. Mir, Arguments in Favour of a General Codification of the Procedure Applicable to EU Administration, PE 432.776, March 2011, p. 24.

Oriol Mir Puigpelat

Full Professor of Administrative Law, "Pompeu Fabra" University of Barcelona.