Lo sviluppo delle tecnologie digitali consente alle autorità amministrative nazionali di tutta l’UE di utilizzare strumenti di decisione automatizzata o persino sistemi basati sull’intelligenza artificiale nell’esercizio dei loro poteri pubblici in tutti i settori del diritto amministrativo. Sebbene alcuni di questi settori rientrino nell’ambito di applicazione del diritto dell’UE e altri rimangano formalmente di competenza interna, gli strumenti e i processi digitali utilizzati dalle amministrazioni sono sempre più disciplinati dalla normativa dell’UE, in particolare dal GDPR e dall’AI Act. Quando le autorità nazionali si avvalgono di tali tecnologie regolamentate, esse agiscono nell’ambito di applicazione del diritto dell’UE, indipendentemente dal settore di policy sostanziale interessato. Ciò comporta implicazioni di natura costituzionale. Agendo sulla base della legislazione digitale dell’UE, le autorità nazionali determinano l’applicabilità della Carta dei diritti fondamentali dell’Unione europea, che vincola gli Stati membri ogniqualvolta essi attuino il diritto dell’Unione. La digitalizzazione e l’automazione dei procedimenti amministrativi ampliano pertanto la portata pratica dei diritti fondamentali dell’UE. Gli standard europei in materia di diritti fondamentali acquisiscono così la capacità di incidere sulla progettazione e sullo sviluppo degli strumenti di ADM (Automated Decision-Making) o di intelligenza artificiale, nonché sul ragionamento giuridico e sul controllo giurisdizionale delle decisioni generate dall’ADM o assistite dall’IA. Il contributo sostiene che ciò favorisca una forma di armonizzazione delle prassi amministrative nazionali guidata dalla tecnologia, che emerge indirettamente dalla regolazione degli strumenti digitali, e ne analizza le implicazioni sia per le autorità amministrative sia per i giudici nazionali.
The development of digital technologies enables national administrative authorities across the EU to use automated decision-making tools or even systems based on artificial intelligence in the exercise of their public powers across all fields of administrative law. While some of these fields fall within the scope of EU law and others remain formally domestic, the digital tools and processes used by administrations are increasingly regulated by EU legislation, most prominently the GDPR and the AI Act. When national authorities rely on such regulated technologies, they act within the scope of EU law, irrespective of the substantive policy field concerned. This has constitutional implications. By acting under EU digital legislation, national authorities trigger the applicability of the EU Charter, which binds Member States whenever they implement EU law. Digitalisation and automation in administrative proceedings thus expand the practical reach of EU fundamental rights. The EU standard of fundamental rights thereby acquires the capacity to shape the design and development of ADM or AI tools, as well as the reasoning and judicial review of ADM-generated or AI-assisted decisions. The article argues that this standard fosters a form of technology-driven harmonisation of national administrative practices that emerges indirectly from the regulation of digital tools, and it analyses its implications for both administrative authorities and national courts.
1. Introduction
The development of modern technologies and the ever-increasing capacity of digitalization enable national administrative authorities across the EU to use automated decision-making (ADM) tools or even systems based on artificial intelligence (AI) in the exercise of their public powers. These tools can be used for all administrative activities, including administrative decision-making, in a variety of fields, such as taxation, social security, market regulation, competition and migration. While some of these fields are governed by EU law, others remain outside its scope and are governed solely by national law.
Digital tools, systems and processes that can be used within national administrations are subject to ever-expanding EU legislation regulating data protection, digital governance and AI. The most prominent examples of this legislation are currently the GDPR and the AI Act. When a national administrative authority uses technology subject to such regulations, it acts under EU law, regardless of the field of administrative law in which it decides.
This has constitutional implications, because using the regulated technology or processes in administrative cases pulls such cases into the scope of application of the Charter of Fundamental Rights of the EU (EU Charter). The EU Charter is applicable in national proceedings only when the authority is implementing EU law, and acting in accordance with the digital legislation governing ADM or AI tools constitutes such implementation. Thus, the increasing digitalization, technological development and automation of administrative practices open the door to the application of the EU standard of fundamental rights across all policies, reinforcing the relevance of the EU Charter throughout the lifecycle of modern tools – from their development and design to the reasoning and review of ADM-generated or AI-assisted decisions. Furthermore, the mandate for national administrative authorities to comply with digital legislation and the EU Charter creates potential for the harmonization of certain administrative practices and procedures. This harmonization does not stem from common procedural rules but emerges indirectly through the regulation of the technology used.
Against this background, this article examines the implications of this “technology-driven harmonization” of certain aspects of national administrative law for the practice of both national administrative authorities and the national courts that review their actions. The article proceeds in three steps. First, it summarizes the ways in which the scope of application of EU law, and therefore of the EU Charter, in national proceedings is defined. Second, it demonstrates how digital legislation, specifically the GDPR and the AI Act, contributes to extending the scope of application of the EU Charter by regulating technology, devices, and automated processes employed by national authorities. Third, it analyses the most relevant provisions of the EU Charter with regard to automation and the deployment of AI in national administrative decision-making, examining how and to what extent these provisions can influence national administrative practices and contribute to their harmonization.
2. Delimitating the scope of application of EU law and of the EU Charter
The provisions of the EU Charter – and the EU standard of fundamental rights protection more generally – do not apply universally and unconditionally within the EU Member States. As Article 51(1) of the EU Charter explicitly states, the EU Charter is binding on the Member States «only when they are implementing Union law». The meaning of “implementing” has been clarified by the case law of the Court of Justice, which holds that Member States are required to respect fundamental rights defined by EU law only when acting within the scope of EU law, and that «the applicability of European Union law entails applicability of the fundamental rights guaranteed by the Charter»[1]. The underlying rationale is that Member States, when implementing EU law, function as “agents” of the Union and are therefore subject to the EU’s standard of fundamental rights[2].
However, it may be challenging to determine in concrete cases whether a national authority is acting within the scope of EU law and thus whether it is required to apply the EU Charter at the national level. In its case law, the Court of Justice has generally identified two situations in which the EU standard of fundamental rights is applicable to the Member States: first, when they are fulfilling an obligation arising from EU law and acting as required by it (the “Wachauf”[3] type of situation, or the “agency situation”); and second, when they are derogating from EU law and must nonetheless respect its fundamental rights obligations (the “ERT”[4] type of situation, or the “derogation situation”)[5]. Nonetheless, even this categorisation is neither entirely clear nor beyond dispute, and legal scholars have already proposed alternative and more nuanced categorisations[6].
Even the agency situation alone can be divided into sub-categories based on the type of EU source from which the obligation stems. First, a national authority applies an EU provision that is directly applicable, typically a provision of an EU regulation. Second, the authority also acts as an agent of EU law when it indirectly gives effect to obligations stemming from EU law through national implementing measures, most commonly by applying national acts that transpose EU directives[7].
The definition of the first type of the agency situation is rather unequivocal: the national authority acts within the scope of application of EU law insofar as the given case is covered by a directly applicable EU provision, and that includes both the situation where the authority actually applies that provision and the situation where it fails to do so. However, the second type, i.e. identifying situations involving the indirect implementation of EU measures through national legislation, can be rather challenging, especially when the case involves the application of multiple legal sources from both the EU and national levels, or when a substantive EU rule is enforced through a national procedural norm. The Court of Justice has already clarified that the indirect application of EU law also encompasses situations where a Member State adopts measures within the discretionary powers conferred by an EU act[8]. As was explicitly stated in the landmark Fransson case, indirect implementation of an EU commitment also occurs when a Member State enforces substantive EU law by applying its own procedural or sanctioning rules[9]. In concrete cases, however, it may still be unclear whether the applicable national provisions are indeed giving effect to a commitment stemming from EU law or how far the reach of EU law extends, and where, or in which respects, the matter remains governed solely by national law and hence by the national catalogue of fundamental rights.
As the Court of Justice frequently emphasises, in order to determine whether a case falls within the scope of the EU Charter, national authorities must assess «whether the national legislation at issue is intended to implement a provision of European Union law, what the character of that legislation is, and whether it pursues objectives other than those covered by European Union law, even if it is capable of indirectly affecting that law, and also whether there are specific rules of European Union law on the matter or capable of affecting it»[10]. It also notes that the implementation of EU law «presupposes a degree of connection between an act of EU law and the national measure at issue which goes beyond the matters referred to or the indirect effects of one of the matters on the other, having regard to the assessment criteria laid down by the Court»[11]. In other words, the Court of Justice requires a sufficient link of a national measure to the original EU commitment or a certain degree of proximity between them. For example, in Ispas[12], the Court of Justice concluded that EU law also applies to the issue of access to the administrative file in proceedings concerning the collection of VAT, because the VAT framework is governed by EU law, and thus the national enforcement of VAT rules must comply with the general principles of EU law, including the rights of defence. The national authorities must comply with such principles, even if the applicable EU legislation «does not expressly provide for such a procedural requirement»[13]. In other words, the mere fact that EU VAT rules are being enforced establishes a sufficient link to the national procedural rules applied for that purpose, meaning that those procedural rules must comply with EU fundamental rights standards.
Even though the cases might have been complex, the logic of the reasoning is rather clear. However, what can become difficult to assess are situations where a national authority applies EU procedural rules that themselves serve to implement fundamental rights guaranteed by the EU Charter. This is notably the case with procedural directives in areas such as asylum or criminal law. A good example is Directive (EU) 2016/343 on the strengthening of certain aspects of the presumption of innocence[14], which elaborates upon the right to effective judicial protection (Article 47 of the EU Charter), since it aims «to enhance the right to a fair trial in criminal proceedings by laying down common minimum rules concerning certain aspects of the presumption of innocence and the right to be present at the trial»[15]. In such cases, the applicability of the EU Charter becomes self-reinforcing: applying an EU procedural rule (directive) that is itself designed to implement a fundamental right (Article 47 of the EU Charter) triggers the applicability of the EU Charter as a whole.
Interestingly, rules stemming from procedural directives apply even to cases that are purely national in terms of their subject matter and do not contain any cross-border elements or other aspects of EU law. For example, as the Court of Justice confirmed in Moro[16], the provisions of Directive 2012/13 on the right to information in criminal proceedings[17] are applicable in all cases, whether or not they entail a cross-border element. Criminal procedural directives are designed to support the system of mutual recognition and to ensure that procedural rights are generally safeguarded, so that when mutual recognition is triggered, the authority in the second Member State can trust the fairness of the proceedings in the first. While the primary purpose of these directives is to lay the groundwork for potential cross-border situations, they apply even in purely domestic cases with no cross-border element. In any case, the harmonization they pursue is limited to specific aspects of criminal procedure. This means that the mere claim that the information obligation has been breached does not render the entire criminal case subject to EU law. Criminal proceedings as a whole are governed by national law, and most applicable rules, both substantive and procedural, are unaffected by EU law. EU law only intervenes in partial aspects related to providing information through a specific provision rooted in an EU procedural directive. Therefore, only these aspects fall within the scope of EU law and the EU Charter[18].
As the extensive case law illustrates, the precise meaning of “implementing EU law” remains subject to ongoing contestation and clarification. The most complex and problematic cases dealt with by the CJEU, however, concern national measures transposing EU directives. In these cases, doubts and uncertainty arise from the fact that legal obligations stemming from EU law are implemented indirectly through national provisions, which makes it difficult to determine whether a national rule, which the national authority primarily applies, is sufficiently close to the original commitment required by EU law.
However, this uncertainty should not arise when national authorities find themselves in the first type of the agency situation, namely when they act under EU regulations that are directly applicable in the Member States under Article 288 TFEU. When a national authority applies a regulation, its action falls clearly within the scope of EU law and automatically triggers the applicability of the EU Charter. Even this category, however, may prove difficult where the directly applicable provision of a regulation primarily concerns procedural elements of national decision-making. In such cases, the provision affects the domestic process only indirectly or in a limited manner, and the resulting ambiguities and implementation challenges resemble those typically associated with procedural directives.
3. Digital legislation pulling the national administrative action into the scope of the EU Charter
The issue of the scope of application of EU law – and hence of the EU Charter – also arises with regard to regulations governing the digital sphere and in the context of the digitalisation of national administrations covered by these regulations: in order to secure more efficient, faster and more objective decision-making, national administrative authorities are increasingly relying on tools that enable the automated processing of data, or are even employing more sophisticated AI tools. The use of these modern tools is governed by two key EU regulations: the General Data Protection Regulation (GDPR)[19] and the AI Act (AIA)[20].
The GDPR sets out the conditions for processing personal data, and it is applicable to both private and public actors. It follows that if a national administrative authority processes personal data, it is considered either a “controller” [Art. 4(7) GDPR] or a “processor” [Art. 4(8) GDPR] and therefore falls within the scope of this directly applicable EU legislative act. More specifically, the GDPR addresses situations in which personal data is processed automatically and this processing leads to a decision that significantly affects an individual. Article 22 GDPR prohibits automated individual decision-making, including administrative decision-making, unless it is authorized by EU or national law. If authorized, such decision-making must meet criteria that safeguard the individual’s rights, freedoms and legitimate interests. Furthermore, individuals who are subject to fully or partially automated decision-making must be informed that the automation has taken place and provided with meaningful information about the logic of the automated processes[21].
Article 22 GDPR is thus the crucial point here: Whenever a national authority employs automated processing of personal data, it acts under this provision. As a result, automated processing of personal data in national administrative decision-making means that a national authority acts under EU law.
Similarly, whenever a national administrative authority deploys a high-risk AI system as part of its activities, it qualifies as a “deployer” in accordance with Article 3(4) AIA. If the authority develops its own AI system, it is considered a “provider” pursuant to Article 3(3) AIA. It follows that in both cases the authority is subject to the requirements and duties of this regulation, such as transparency, record-keeping, human oversight, robustness and accuracy. If a high-risk AI system is used to generate a decision that significantly affects an individual, that person has the right to obtain a clear and meaningful explanation of the role the AI system played in the decision-making procedure and of how it affected the result (Article 86(1) AIA). Again, by acting under the obligations of the AIA, a national authority acts within the scope of EU law.
This does not, however, mean that any automated process or the use of any type of AI necessarily falls under these regulations. The GDPR governs the automated processing of personal data, which excludes the processing of data relating to legal persons, synthetic data, or data of a technical or economic nature. Similarly, the AIA contains rules for the deployment of high-risk AI systems, and it is foreseeable that some AI systems used by administrative authorities will not qualify for this category. In any event, using digital tools that are governed by these regulations in administrative decision-making means acting under them, and hence “implementing EU law”, which also triggers the applicability of the EU Charter.
This introduces a significant conceptual shift: when a national administrative authority engages in automated processing of personal data or deploys a high-risk AI system in its decision-making, the case inevitably falls within the scope of EU law – and thus the EU Charter – regardless of the substantive area of the case at hand. It follows that the applicability of the EU Charter can be triggered not only by the substantive legal domain that is covered by EU law (such as VAT, asylum, or labour law), but also by the use of tools governed by directly applicable EU regulations, namely the GDPR and the AIA. As a result, the EU Charter becomes applicable even in cases that would otherwise be seen as falling outside the substantive scope of EU law, including areas such as social security in purely domestic contexts or direct taxation.
However, in such situations, the EU Charter provisions will only apply to those aspects of the case that are directly related to the use of ADM or AI tools covered by the relevant digital legislation, not to the substance of the decision. In other words, the commitments stemming from the GDPR and the AIA are not about requiring the effective enforcement of substantive EU rules through national implementing legislation or procedural law. Rather, they impose duties on national authorities to use certain digital technologies, such as automated processing of personal data or AI systems, in conformity with EU standards, irrespective of whether the underlying administrative activity is based on EU or national law.
Here, the logic would be comparable to that applied by the Court of Justice in the Moro case[22] and others involving EU criminal procedural directives, as mentioned above: While EU law does not govern the substantive layer of domestic criminal proceedings, it does require national procedures to comply with EU procedural legislation and, consequently, with fundamental rights as they are guaranteed by EU law. Aimed ultimately at establishing standards that facilitate mutual recognition, these directives partially harmonize aspects of national criminal procedure. In a similar vein, the harmonization introduced by the GDPR and the AIA aims to set standards for the use of certain technology, and it also targets only some elements of the national administrative activity.
However, unlike the harmonization of certain aspects of criminal procedure through dedicated procedural directives, the harmonization through digital legislation emerges indirectly. This legislation regulates the use of digital tools rather than imposing explicit procedural rules to be followed by administrative authorities. Neither the GDPR nor the AIA is a procedural code addressed to national administrations; rather, they establish rules for processing personal data and for deploying AI across all sectors, including public administration. Consequently, implications of a rather procedural nature – such as transparency requirements, the provision of meaningful information or explanations, and human review and oversight – apply to all technology-driven decision-making, whether the technology is used by a private entity for business purposes or by an administrative authority to produce administrative decisions or similar outputs.
Arguably, this normative framework results in a partial harmonization of administrative procedure, which emerges not through general procedural legislation, but through the regulation of specific digital tools. Accordingly, in the context of automation and AI, it is the use of digital tools regulated by EU law that brings national administrative actions within the scope of the EU Charter. More specifically, the design and use of such tools, the enforcement of rights under the GDPR (including requirements of transparency and human oversight), and the fulfilment of obligations under the AIA at the national administrative level must comply with the relevant Charter rights.
Ultimately, the deployment of modern tools in national administrative decision-making creates a layered system of fundamental rights protection. National law and its catalogue of fundamental rights must be observed with regard to the substance of the case, while EU law and the EU Charter govern the way the tool is used.
4. Automation or AI in national administrative action and the relevant EU Charter provisions
It follows from the preceding analysis that when national administrative action relies on ADM tools or involves the deployment of high-risk AI systems, national authorities must comply with the EU Charter, insofar as the use of these technologies is governed by EU law. While the EU Charter applies as a whole, certain rights are particularly relevant in this context due to their close connection with digital legislation and the types of risks that are involved therein. These include overarching substantive rights such as the right to privacy and data protection, and the right to non-discrimination, which are especially critical when administrative action is mediated or supported by technology. In addition, specific procedural rights come to the forefront, especially those concerning the operation of national administrative authorities and courts. These are the right to good administration (which includes the duty to give reasons), and the right to effective judicial protection. Although both sets of rights are important, they affect the use of digital technologies in administrative decision-making processes differently. Substantive rights are likely to influence the parameters of ADM or AI tools, while procedural rights will govern the subsequent actions of administrative and judicial authorities.
4.1. Substantive rights
The fundamental rights most likely to be affected by automation and the deployment of AI in administrative practice are the right to the protection of personal data (Article 8) and the principle of non-discrimination (Article 21).
4.1.1. Right to the protection of personal data
The applicability of Article 8 of the EU Charter in the context of the GDPR seems circular. It operates on a logic similar to that of the procedural directives in criminal law, as previously explained: applying the GDPR, which gives more concrete expression to the fundamental right to the protection of personal data anchored in the EU Charter[23], triggers the applicability of the EU Charter itself. Compliance with the GDPR is therefore considered respect for this fundamental right, and vice versa, a breach of the GDPR can constitute a violation of this right. Consequently, in the context of automated decision-making under the GDPR, invoking the right to the protection of personal data can appear conceptually redundant. Nevertheless, the question arises whether the right to the protection of personal data (potentially in combination with the right to privacy enshrined in Article 7) can play a distinct role alongside the provisions of the GDPR, and whether invoking this right could add substantial value to a legal dispute[24].
As the CJEU insists, fundamental rights review cannot be reduced to a review of legislative compliance, because the EU Charter serves as the ultimate constitutional benchmark against which all legislative and administrative action must be assessed. As emphasized by the CJEU, e.g. in Digital Rights Ireland[25] or Schrems I[26], secondary legislation such as the GDPR must be interpreted in light of Articles 7 and 8 of the EU Charter. It follows that the right to the protection of personal data enshrined in the EU Charter retains autonomous normative significance even where administrative action formally complies with the GDPR. In practice, there may be situations in which an authority or a court interprets a GDPR provision in light of the Charter rights or, where the GDPR text appears ambiguous or incomplete, attempts to address a potential normative lacuna through fundamental rights.
In the case of ADM use in national administrative decision-making, it is possible to imagine a scenario in which a narrow reading of Article 22 GDPR and the associated safeguards for the automated processing of personal data, including the right to explanation, does not offer adequate protection of the privacy and personal data of the individual, and these provisions should therefore be interpreted more broadly to provide such protection[27]. Article 8 of the EU Charter can also be invoked if a national authority interprets Article 22 GDPR too narrowly, arguing that it applies only to fully automated administrative decisions, which would exclude partially automated decisions from the given safeguards and leave individuals without effective legal protection. There might also be a claim that, although the automated decision-making appears to be formally in compliance with the GDPR, it interferes with the fundamental rights to privacy and to the protection of personal data in a disproportionate manner or even violates the essence of these rights. The Charter provision could also be used to challenge the legality of Article 22 GDPR and other relevant provisions. In all these scenarios, the right to the protection of personal data constitutes not only the legal basis for the GDPR, but also a benchmark for assessing the legality of its provisions and a standard for interpreting how automated decision-making should be employed in administrative practice[28]. Furthermore, the EU Charter embeds a duty of proportionality assessment that goes beyond formal compliance with the GDPR[29]. Even if the processing of personal data complies with the legal bases under Articles 6 and 9 GDPR and the automation respects the safeguards laid down in Article 22 GDPR, there remains a possibility that such action would interfere with Article 8 of the EU Charter in a disproportionate manner[30].
As emphasised by the CJEU in Digital Rights Ireland[31], any limitations on the right to data protection must be strictly necessary and proportionate.
In contrast to the GDPR, the AIA does not form a legal basis for processing personal data. Nonetheless, as recital 69 AIA reminds, «[t]he right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system». This means that compliance with Article 8 of the EU Charter must be ensured from the initial design and development of the tool to its deployment and ongoing use within national administrative decision-making. In practice, this requires that the use of AI tools comply not only with the right to the protection of personal data, but also with the GDPR, which further implements that right.
At the initial design and development stage, the right to data protection influences the purpose for which the AI tool is created, the categories of data used, and the entire model architecture. For instance, the categories of data incorporated into the tool must adhere to the principles of purpose limitation and data minimisation (Article 5 GDPR), and sensitive data may only be used if strictly necessary[32]. The entire system must also be traceable to enable data subjects to exercise their rights of access, rectification and objection. During the training stage, it is necessary to ensure data cleaning and bias testing and to document the whole process. If large datasets are reused, compliance with Article 6(4) GDPR is required, which demands an assessment of whether the further processing is compatible with the purpose for which the data were initially collected. Finally, when the AI tool is deployed within actual administrative decision-making, the right to personal data protection requires that individuals affected by AI-assisted decisions have meaningful safeguards. This includes the right to be informed about the automated data processing through an AI tool in accordance with Articles 13, 14, 15 and 22 GDPR. Furthermore, it must be possible to contest such a decision and to alter the outcome of the processing based on human oversight. Therefore, whether the national authority uses a simpler ADM tool to process personal data or deploys a more sophisticated AI tool, the right to the protection of personal data, as enshrined in the EU Charter, will act as a normative constraint throughout the entire process, from developing the tool to making the final decision. This development is likely to lead to a certain degree of harmonization in the national procedures and technology-related practices of administrative authorities.
Regardless of the area of administrative law in question (and whether it falls within the scope of EU law), national authorities must comply with the normative constraints arising from the EU Charter. The fundamental right to the protection of personal data therefore has a horizontal, cross-sectoral effect. All ADM or AI tools must be developed in a manner that ensures compliance with this right, which must also be reflected in the procedural frameworks within which ADM-based or AI-assisted outputs are generated, assessed and relied upon. National courts are subsequently responsible for supervising this alignment and verifying that the protection guaranteed by the EU Charter is effectively realised in practice.
4.1.2. Non-discrimination
Unlike the right to the protection of personal data, the principle of non-discrimination (Article 21 of the EU Charter) is not fully embedded in the GDPR, let alone the AIA. While the principle of non-discrimination is reflected in several provisions of these regulations, its incorporation remains partial. For instance, Article 9 of the GDPR prohibits the processing of special categories of personal data that may reveal characteristics commonly associated with discrimination, such as racial or ethnic origin, political opinions, religious beliefs, or sexual orientation. In the context of automated decision-making, data controllers are encouraged to implement appropriate safeguards to prevent discriminatory outcomes, as noted in Recital 71. The AIA is more explicit in this regard, as its recitals repeatedly affirm that its rules must be non-discriminatory and applied in a non-discriminatory manner. Moreover, Article 77 grants national authorities charged with protecting fundamental rights – including the principle of non-discrimination – access to the technical documentation required by the AIA. However, even though both regulations seek to align with the principle of non-discrimination, compliance with them does not preclude violations of that principle. There is a particular risk that ADM tools may be designed in a way that enables discriminatory outcomes[33]. Similarly, AI systems trained on ostensibly neutral data may nevertheless produce discriminatory effects due to historical biases encoded in the training data or its labelling[34].
This creates an opportunity for greater engagement with Article 21 of the EU Charter when national administrative authorities use automation. In such cases, the EU Charter can serve as an important normative source for safeguarding non-discriminatory outcomes of automated administrative decision-making, particularly when there is a risk that automation or the use of AI could have a negative and disproportionate impact on certain individuals based on their race, gender, or sexual orientation. Article 21 may be invoked not only ex post, in a review procedure seeking to correct an allegedly discriminatory administrative decision, but may and should also be used ex ante as a standard governing the design and parameters of the ADM or AI tool. The EU Charter will therefore intervene in the otherwise technical dimension of the algorithm and the configuration of the specific software used as an ADM tool, or in the training and functioning of an AI system.
In its case law, the Court of Justice has already highlighted the legal relevance of the very design and programming of tools used for the automated processing of data, insisting that «the pre-established models and criteria on which that type of data processing are based should be, first, specific and reliable, making it possible to achieve [intended] results»[35]. This implies that both programming an ADM tool and building an AI system have considerable normative significance, because the way these tools are designed predetermines the outcome of the process[36]. Consequently, when developing ADM or AI tools, the principle of non-discrimination must be observed carefully, since flaws in design or training can introduce bias into the outcome of the decision-making process, rendering the administrative decision incompatible with the EU Charter. As scholars tend to agree, ADM tools are highly susceptible to discrimination because it is practically impossible to eliminate all potential biases[37]. AI tools based on machine learning techniques, in particular, can absorb and replicate real-world discrimination. From a technical point of view, problems with the tool can arise at various stages of its development. For example, if the sample is unrepresentative, the patterns identified by the tool will also be unrepresentative. If incorrect features are assigned to data, the tool will learn from these errors. And problems will emerge if characteristics within the data that should not influence the outcome are given incorrect or biased labels[38]. The technical development of the tool and the quality of the training data are therefore of high legal relevance. If a national administrative authority decides to deploy an ADM or AI tool within its decision-making, it must ensure that the tool – regardless of whether it has been developed by the authority itself or procured from a private company – complies with all the rules necessary to avoid discrimination.
As the CJEU has already highlighted, automated analysis cannot use sensitive personal data, as defined in Article 9 of the GDPR, such as racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, or data concerning health and sex life or sexual orientation, because such automated processing would violate Articles 7, 8 and 21 of the EU Charter[39]. This should consequently apply not only to the actual processing of data, but also to the design and programming of the tool, including the training of an AI tool. Furthermore, the CJEU insists on regular reassessment of the tool and its results «to ensure that those pre-established models and criteria and the databases used are reliable and up to date»[40]. An authority relying on ADM or AI tools must therefore monitor and evaluate them consistently in order to avoid discriminatory outputs.
As with the right to personal data protection, compliance with the right to non-discrimination will probably lead to converging standards across Member States regarding system design, transparency, oversight and remedies for individuals affected by automated decisions.
4.2. Fundamental procedural rights
The EU Charter and its procedural standards operate as an extension of secondary legislation, influencing administrative procedures at the national level. The administrative authority must comply with the right to good administration enshrined in Article 41 of the EU Charter whenever automated processing of personal data takes place within administrative decision-making or a high-risk AI system is deployed. Although this provision formally addresses only EU institutions, the Court of Justice considers the principle of good administration to be part of the general principles of EU law[41]. The content of Article 41 should therefore also apply to national authorities. Furthermore, administrative decisions that rely on ADM tools or AI systems under the GDPR or the AIA will be subject to judicial review in accordance with Article 47 of the EU Charter, which enshrines the right to effective judicial protection.
4.2.1. Right to good administration
The right to good administration, as enshrined in Article 41 of the EU Charter, guarantees that every person has the right to have their affairs handled impartially, fairly, and within a reasonable time. It also includes specific procedural entitlements, such as the right to be heard, the right of access to one’s file and the duty of the administration to give reasons for its decisions, as well as the duty of care, which requires administrative authorities to «examine carefully and impartially all the relevant aspects of the individual case»[42].
Overall, this right, extended to national authorities through the general principles of EU law, establishes an expectation of procedural fairness whenever national authorities implement EU law. In the context of automated or AI-assisted decision-making, this requires that the individual be informed that an ADM or AI tool has been used[43]. Furthermore, all relevant facts must be carefully assessed, and the standards must not be lower than those applied when the entire case is processed by human officials. The right to good administration also requires human oversight of the ADM-based or AI-assisted outcome[44]. This oversight must be effective, in the sense that the outcome must be modified if inaccuracies are detected[45].
One component of the right and principle of good administration that is particularly important in the context of automation and AI is the duty of public authorities to give reasons for their decisions. Knowing the reasons on which an administrative decision is based is not only important for the affected individuals; it is also crucial at the stage of judicial review of that decision. As the CJEU puts it, the duty to give reasons is «a corollary of the principle of respect for the rights of the defence, which is a general principle of EU law»[46]. An individual affected by an administrative decision must be able to understand its grounds, so the reasons for the decision must be specific and concrete[47]. In parallel, a court examining the legality of that decision must be able to reconstruct the reasoning that led to it, because a substantive review can be performed only if that reasoning is comprehensible to the court. In the words of the CJEU, the duty to give reasons «is an essential procedural requirement that must be distinguished from the question whether the reasoning is well founded, which is a matter of the substantive legality of the contested act»[48].
The duty to give reasons is closely linked to the concept of explanation[49], which is enshrined in both the GDPR and the AIA[50]. Under these regulations, the administrative authority must provide meaningful information about how the tool or algorithm used for automated data processing works[51]. As the Court of Justice put it in Dun & Bradstreet, the right to meaningful information must be understood «as a right to an explanation of the procedure and principles actually applied in order to use, by automated means, the personal data of the data subject with a view to obtaining a specific result», and such an explanation «must be provided by means of relevant information and in a concise, transparent, intelligible and easily accessible form»[52]. However, a technical explanation is not enough; it is necessary to provide a “legal explanation”, i.e. information which can be assessed from a legal point of view[53]. In the words of the CJEU, the duty to provide meaningful information «cannot be satisfied either by the mere communication of a complex mathematical formula, such as an algorithm, or by the detailed description of all the steps in automated decision-making, since none of those would constitute a sufficiently concise and intelligible explanation»[54]. The required standard should therefore be the “human standard for explanation”[55]. This means that any output generated by or with the help of ADM or AI tools must be legible and meaningful to a human being, who must then be able to assess its legality.
However, as remarked by Hildebrandt, «an explanation is not the same as a justification»[56]. While the GDPR and the AIA require sufficient transparency of the criteria, formulas, parameters and other components of the ADM tool, the requirement for proper justification of the decision goes further. Simply making the decision-making process transparent will not guarantee that the decision will clearly explain the reasons behind it. Therefore, an ADM or AI-assisted administrative decision must also contain a traditional legal justification. This justification must go beyond an explanation of how the tool works or how the data is processed. It must present what facts were considered, how they were assessed, how they are linked to legal provisions and how they support the final decision. In other words, although the duty to provide an explanation and the duty to give reasons may overlap in certain practical respects, they are not equivalent. The requirement to state reasons for an administrative decision entails a more demanding standard of legal justification than merely explaining the technical functioning of the algorithm or the tool.
Conceptually, it is also useful to distinguish between two levels of automated output, which may overlap in practice but impose different requirements in terms of justification and, consequently, in the assessment of the legality of administrative decisions. At the first level, the tool generates either a partial result that feeds into the final decision, or the final decision itself. Here, information must be provided on the input data, the tool’s parameters and the processing method, and a court will ultimately evaluate whether these are in accordance with the law.
At the second level, the tool generates a text containing the reasoning and legal justification, including the presentation of the relevant facts and their evaluation, the interpretation of the applicable legal provisions, and a description of how these lead to the final decision. At this stage, the court will conduct a traditional review to assess whether the automatically generated text is coherent, understandable, and free from internal contradictions or illogical considerations. The court will treat the text in the same way regardless of whether it was generated on the basis of pre-set rules or through machine learning.
4.2.2. Right to effective judicial protection
As already mentioned above, the requirements for the right to effective judicial protection are closely linked to the principle of good administration. For judicial review of an ADM-based or AI-assisted administrative decision to be effective, a certain quality of the administrative output must be ensured[57]. This requires traceability of automated decision-making, auditability of algorithms, preservation of documentation and intelligible reasoning that can be reconstructed by a judge. Therefore, the demands of judicial review under EU law, stemming from Article 47 of the EU Charter, reinforce the demands on administrative decision-making assisted by new digital tools[58].
At the same time, effective judicial protection is a right that individuals affected by EU law can invoke directly. Article 47 therefore highlights the need for those individuals to be able to challenge ADM-based or AI-assisted administrative decisions in a meaningful way. This is also linked to the requirements of good administration, since affected persons must be provided with all relevant information about how the tool contributed to the final decision, and they must have effective means of contesting any inaccuracies that may have arisen with regard to both the input data and the method by which they were processed.
In other words, the effectiveness of judicial protection ultimately depends on the quality of the explanations and reasoning provided in the administrative decision[59]. Both the affected individual and the judge conducting the review must be able to understand the administrative decision and its basis, regardless of whether it was made entirely by a human official or with the help of digital tools. In this regard, Article 47 puts further pressure on administrative authorities to use systems that comply with explainability requirements, to maintain safeguards for effective human oversight that allows for the necessary corrections, and to ensure that the final decision is accompanied by a proper legal justification[60].
In this regard, the right to effective judicial protection, enshrined in the EU Charter, plays an important systemic and constitutional role[61]. While secondary legislation, such as the GDPR and the AIA, provides for obligations mostly related to the technical aspects of automation in administrative decision-making, Article 47 sets a higher standard, filling potential normative gaps and guiding the interpretation of the rules embedded in that legislation. It also precludes practices that would render judicial review ineffective. Overall, the requirements of effective judicial protection should ensure that national administrative action employing ADM or AI tools complies with the rule of law[62].
Using ADM or AI tools in national administrative decision-making can not only lead to the convergence of administrative processes and administrative adjudication across Member States; it can also have a spillover effect. Since the deployment of technology governed by EU law extends the scope of EU law to otherwise internal situations, the category of cases in which national courts may submit preliminary references under Article 267 TFEU is enlarged. National courts can pose questions related to the rules on the proper use of technology as set out in the relevant EU legislation. Moreover, they can seek guidance on how national settings and conditions should be reconciled with the requirements of effective judicial protection. This could create a new channel for rule-of-law-oriented preliminary references, particularly in Member States where judges seek clarification of their role under EU law and its institutional implications[63].
5. Conclusion
The current or future use of ADM or AI tools by national administrative authorities cannot escape the normative force of the EU Charter. Whenever a national authority deploys modern technologies that are governed by EU legislation, it is acting within the scope of EU law. And whenever a national authority acts within the scope of EU law, it must also comply with the EU Charter. Consequently, regardless of whether the authority is dealing with cases that are substantively governed by EU law, or with so-called purely internal cases, the deployment of certain modern technologies in the decision-making process triggers the applicability of the EU Charter. The EU Charter is therefore apt to influence the technological and procedural aspects of such decision-making, such as the design of ADM or AI tools, the modalities of data storage and processing, the automated generation of outputs and the structuring of legal reasoning produced by these tools or with their assistance. The EU Charter also affects the standard of the duty of administrative authorities to give reasons and the conditions under which administrative decisions are subject to judicial review.
Therefore, the EU Charter has the potential to act as a catalyst for the technology-driven harmonisation of administrative practices across the Member States, since it sets standards for the design and development of ADM and AI tools, promotes transparency of processes and emphasises the requirement that ADM-generated or AI-assisted administrative decisions be reviewable. The EU Charter hence not only constrains national administrations, but also influences the emerging model of automated administrative decision-making across the EU.
Conceptually, this produces a specific form of unification of administrative rules and practices that is not based on traditional techniques of integration. It is not harmonisation through comprehensive procedural codes or partial procedural rules, and it also differs from harmonisation through mutual recognition or through coordination among national authorities. It is more accurate to speak of a technology-driven convergence of rules that flows rather indirectly from the EU legislation governing the technologies used in administrative decision-making and from the accompanying duty to respect the EU Charter. In order to comply with both sets of rules, national authorities must inevitably adapt their processes, their institutional settings and potentially their entire decision-making culture. In this way, the extension of the scope of application of the EU Charter through the deployment of digital tools governed by EU law can lead to a partial unification of administrative processes at the national level, derived from the obligation to comply with fundamental rights and principles anchored in EU law.
- Court of Justice, judgment of 26 February 2013, C‑617/10, Åkerberg Fransson, ECLI:EU:C:2013:105, para. 21. ↑
- Cf. J. Weiler, The constitution of Europe: «do the new clothes have an emperor?» and other essays on European integration, Cambridge University Press, Cambridge; New York, 1999, pp. 120–121. ↑
- Court of Justice, judgment of 13 July 1989, 5/88, Wachauf, ECLI:EU:C:1989:321. ↑
- Court of Justice, judgment of 18 June 1991, 260/89, ERT, ECLI:EU:C:1991:254. ↑
- The terms “agency situation” and “derogation situation” used e.g. in K. Lenaerts, J. A. Gutiérrez-Fons, ‘The Place of the Charter in the EU Constitutional Edifice’, in S. Peers et al., The EU Charter of Fundamental Rights: A Commentary, Hart, Oxford, 2014, pp. 1567–1568. ↑
- L. Besselink, The Member States, the National Constitutions and the Scope of the Charter, in Maastricht Journal of European and Comparative Law, 8, 2001, pp. 68–80; A. Cuyvers, The Scope, Nature and Effect of EU Law, in E. Ugirashebuja et al., East African Community Law, Brill, Nijhoff, 2017, pp. 161–181; B. De Witte, The scope of application of the EU Charter of Fundamental Rights, in M. González Pascual et al., The right to family life in the European Union, Routledge, London, 2017; M. Dougan, Judicial review of Member State action under the general principles and the Charter: Defining the scope of Union law, in Common Market Law Review, fasc. 52, Issue 5 2015, pp. 1201–1245; B. Pirker, Mapping the Scope of Application of EU Fundamental Rights: A Typology, in European Papers – A Journal on Law and Integration, 3, 2018, pp. 133–156. ↑
- Cf. Court of Justice, Opinion of AG Bobek of 7 September 2017, C-298/16, Ispas, ECLI:EU:C:2017:650, para. 32. ↑
- Court of Justice, judgments of 21 December 2011, C-411/10 and C-493/10, N.S. and Others, ECLI:EU:C:2011:865, paragraphs 65 to 68; and of 13 June 2017, C-258/14, Florescu and Others, ECLI:EU:C:2017:448, para. 48. ↑
- Court of Justice, judgment of 26 February 2013, C-617/10, Åkerberg Fransson, ECLI:EU:C:2013:105. ↑
- Court of Justice, judgment of 8 November 2012, C-40/11, Iida, ECLI:EU:C:2012:691, para. 79, referring to C-309/96, Annibaldi, ECLI:EU:C:1997:631, paragraphs 21 to 23. See also judgment of 5 May 2022, C-83/20, BPC Lux 2, ECLI:EU:C:2022:346, para. 27; and of 22 January 2020, C-177/18, Baldonedo Martín, ECLI:EU:C:2020:26, para. 59, and the case-law cited. ↑
- Court of Justice, judgments of 28 October 2021, C-319/19, Komisija za protivodejstvie na korupcijata i za otnemane na nezakonno pridobitoto imuštestvo, ECLI:EU:C:2021:883, para. 44; of 22 April 2021, C-485/19, Profi Credit Slovakia, ECLI:EU:C:2021:313, para. 37; of 22 January 2020, C-177/18, Baldonedo Martín, ECLI:EU:C:2020:26, paragraphs 57 to 59; and of 16 July 2020, C-686/18, Adusbef and Others, ECLI:EU:C:2020:567, paragraphs 51 and 52. ↑
- Court of Justice, judgment of 9 November 2017, C-298/16, Ispas, ECLI:EU:C:2017:843. ↑
- Ibid., para. 26. Cf. judgments of 17 December 2015, C-419/14, WebMindLicenses, ECLI:EU:C:2015:832, para. 84; and of 22 October 2013, C-276/12, Sabou, ECLI:EU:C:2013:678, para. 38. ↑
- Directive (EU) 2016/343 of the European Parliament and of the Council of 9 March 2016 on the strengthening of certain aspects of the presumption of innocence and of the right to be present at the trial in criminal proceedings, OJ L 65, 11.3.2016, pp. 1–11. ↑
- Ibid., recital 9. ↑
- Court of Justice, judgment of 13 June 2019, C-646/17, Gianluca Moro, ECLI:EU:C:2019:489. ↑
- Directive 2012/13/EU of the European Parliament and of the Council of 22 May 2012 on the right to information in criminal proceedings, OJ L 142, 1.6.2012, pp. 1–10. ↑
- As to the opposite argument, cf. Court of Justice, Opinion of AG Campos Sánchez-Bordona of 10 July 2019, C-467/18, Rayonna prokuratura Lom, ECLI:EU:C:2019:590, para. 1 (not endorsed by the Court of Justice). ↑
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp. 1–88. ↑
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024. ↑
- Articles 14(2)(g) and 15(1)(h) GDPR. ↑
- Court of Justice, judgment of 13 June 2019, C-646/17, Gianluca Moro, ECLI:EU:C:2019:489. ↑
- Court of Justice, judgments of 29 July 2019, C-40/17, Fashion ID, ECLI:EU:C:2019:629, para. 66; and of 28 April 2022, C-319/20, Meta Platforms Ireland, ECLI:EU:C:2022:322, para. 73. ↑
- Cf. I. Bantekas, V. Bratsiakou, Automated Decision-Making Systems and Black Box Challenges Under European Union Administrative Law, in Fordham International Law Journal, 49, 1/2026, pp. 28–29. ↑
- Court of Justice, judgment of 8 April 2014, C-293/12 and C-594/12, Digital Rights Ireland, ECLI:EU:C:2014:238. ↑
- Court of Justice, judgment of 6 October 2015, C-362/14, Schrems I, ECLI:EU:C:2015:650. ↑
- Cf. A. Roßnagel, P. Richter, Art. 5. Principles relating to processing of personal data, in I. Spiecker genannt Döhmann et al., General Data Protection Regulation: Article-by-Article Commentary, Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden, 2023. ↑
- Cf. G. Rudolf, P. Kovač, The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications, in Central European Public Administration Review, 22, 2/2024, pp. 83–108, p. 92. ↑
- H. C. H. Hofmann, Automated Decision-Making (ADM) in EU Public Law, in H. C. H. Hofmann, F. Pflücke, Governance of automated decision making and EU law, Oxford University Press, Oxford, 2024, pp. 1–32, p. 18. ↑
- Cf. L. Enqvist, M. Naarttijärvi, Discretion, Automation, and Proportionality, in M. Suksi, The Rule of Law and Automated Decision-Making, Springer International Publishing, Cham, 2023, pp. 147–178. ↑
- Court of Justice, judgment of 8 April 2014, C-293/12 and C-594/12, Digital Rights Ireland, ECLI:EU:C:2014:238. ↑
- Cf. G. Rudolf, P. Kovač, The Role of Automated Decision-Making in Modern Administrative Law: Challenges and Data Protection Implications, p. 91, cit. ↑
- Cf. U. Franke, First- and Second-Level Bias in Automated Decision-making, in Philosophy & Technology, 35, 2/2022, p. 21; A. Vetrò et al., A data quality approach to the identification of discrimination risk in automated decision making systems, in Government Information Quarterly, 38, 4/2021; R. Xenidis, Beyond bias: algorithmic machines, discrimination law and the analogy trap, in Transnational Legal Theory, 14, 4/2023, pp. 378–412. ↑
- A.Z. Huq, Constitutional Rights in the Machine Learning State, in Cornell Law Review, 105, 2020, pp. 1875–1954, pp. 1924–1925. Referring to S. Corbett-Davies et al., The Measure and Mismeasure of Fairness, in Journal of Machine Learning Research, 24, 2023, pp. 1–117, pp. 17–19. ↑
- Court of Justice, judgment of 6 October 2020, C-511/18, C-512/18 and C-520/18, La Quadrature du Net, ECLI:EU:C:2020:791, para. 180, referring to Opinion 1/15 of 26 July 2017, EU-Canada PNR Agreement, ECLI:EU:C:2017:592, para. 172. ↑
- Cf. O. Mir, Algorithms, Automation and Administrative Procedure at EU Level, in H. C. H. Hofmann, F. Pflücke, Governance of automated decision making and EU law, Oxford University Press, Oxford, 2024, pp. 53–78, p. 58. ↑
- E. Abrusci, R. Mackenzie-Gray Scott, The questionable necessity of a new human right against being subject to automated decision-making, in International Journal of Law and Information Technology, 31, 2/2023, pp. 114–143, p. 124; J. Adams‐Prassl et al., Directly Discriminatory Algorithms, in The Modern Law Review, 86, 1/2023, pp. 144–175, p. 144. ↑
- A.Z. Huq, Constitutional Rights in the Machine Learning State, pp. 1924–1925, cit. Referring to S. Corbett-Davies et al., The Measure and Mismeasure of Fairness, pp. 17–19, cit. ↑
- Court of Justice, judgment of 6 October 2020, C-511/18, C-512/18 and C-520/18, La Quadrature du Net, ECLI:EU:C:2020:791, para. 165. ↑
- Ibid., para. 182, referring to Opinion 1/15 of 26 July 2017, EU-Canada PNR Agreement, ECLI:EU:C:2017:592, paragraphs 173 and 174. ↑
- Court of Justice, judgment of 8 May 2019, C-230/18, PI v Landespolizeidirektion Tirol, ECLI:EU:C:2019:383, paragraphs 56 to 58. ↑
- Court of Justice, judgment of 21 November 1991, C‑269/90, Technische Universität München, ECLI:EU:C:1991:438, para. 14. ↑
- Cf. Recital 71 GDPR. ↑
- Cf. O. Mir, Algorithms, Automation and Administrative Procedure at EU Level, pp. 63–65, cit. ↑
- Court of Justice, judgment of 6 October 2020, C-511/18, C-512/18 and C-520/18, La Quadrature du Net, ECLI:EU:C:2020:791, para. 182. ↑
- Court of Justice, judgments of 22 November 2012, C‑277/11, M., ECLI:EU:C:2012:744, para. 88; and of 11 December 2014, C‑249/13, Boudjlida, ECLI:EU:C:2014:2431, para. 38. ↑
- Ibid. ↑
- Court of Justice, judgments of 5 May 2022, C-54/20 P, Commission v Missir Mamachi di Lusignano, ECLI:EU:C:2022:349, para. 69; of 10 March 2022, C‑167/19 P and C‑171/19 P, Commission v Freistaat Bayern and Others, ECLI:EU:C:2022:176, para. 77; and of 30 November 2016, C‑486/15 P, Commission v France and Orange, ECLI:EU:C:2016:912, para. 79. ↑
- Cf. J. C. Covilla, Artificial Intelligence and Administrative Discretion: Exploring Adaptations and Boundaries, in European Journal of Risk Regulation, 16, 1/2025, pp. 36–50. ↑
- Article 13(2)(f), Article 14(2)(g), Article 15(2)(h) GDPR; Article 86(1) AIA. Cf. S. Wachter et al., Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, in International Data Privacy Law, 7, 2/2017, pp. 76–99, pp. 82–83. ↑
- O. Mir, The AI Act from the Perspective of Administrative Law: Much Ado About Nothing?, in European Journal of Risk Regulation, 16, 1/2025, pp. 63–75. ↑
- Court of Justice, judgment of 27 February 2025, C-203/22, Dun & Bradstreet Austria, ECLI:EU:C:2025:117, para. 58. ↑
- H. P. Olsen et al., What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration, in H.-W. Micklitz, Constitutional Challenges in the Algorithmic Society, Cambridge University Press, Cambridge, 2022, p. 222. See also Z. Ződi, Algorithmic explainability and legal reasoning, in The Theory and Practice of Legislation, 10, 1/2022, pp. 67–92. ↑
- Court of Justice, judgment of 27 February 2025, C-203/22, Dun & Bradstreet Austria, ECLI:EU:C:2025:117, para. 59. ↑
- H. P. Olsen et al., What’s in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration, p. 226, cit. ↑
- M. Hildebrandt, Algorithmic regulation and the rule of law, in Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376, 2018. ↑
- H. C. H. Hofmann, Automated Decision-Making (ADM) in EU Public Law, p. 20, cit. ↑
- Cf. J.C. Covilla, Artificial Intelligence and Administrative Discretion: Exploring Adaptations and Boundaries, pp. 42, 47, cit. ↑
- Cf. R. Williams, Rethinking Administrative Law for Algorithmic Decision Making, in Oxford Journal of Legal Studies, 42, 2/2022, pp. 468–494, p. 481. ↑
- Cf. O. Mir, The impact of the AI Act on public authorities and on administrative procedures, in CERIDAP – Rivista Interdisciplinare Sul Diritto Delle Amministrazioni Pubbliche, 4, 2023, p. 249. ↑
- Cf. D.-U. Galetta, H. C. Hofmann, Evolving AI-based automation-the continuing relevance of good administration, in European Law Review, 48, 6/2023, pp. 617–635, p. 628. ↑
- Cf. V. Roeben, Judicial Protection as the Meta-norm in the EU Judicial Architecture, in Hague Journal on the Rule of Law, 12, 1/2020, pp. 29–62. ↑
- See, e.g., Court of Justice, judgment of 20 March 2020, C‑558/18 and C‑563/18, Miasto Łowicz, ECLI:EU:C:2020:234. ↑