Federico Nassuato

PhD in Administrative Law at the University of Udine.

The use of algorithms and A.I. systems in administrative action has strongly challenged the requirements of administrative due process. In the absence of national statutory rules on administration by algorithm, administrative courts have established a set of principles (the so-called “principles of algorithmic legality”) to protect the legal position of citizens involved in administrative procedures, borrowing them mostly from the EU General Data Protection Regulation (GDPR). Case law specifically requires public bodies to comply with: a) the citizen’s right of access to meaningful information concerning automated decision-making; b) the citizen’s right not to be subject to a decision based solely on automated processing; c) the prohibition of algorithmic bias. After a brief overview of the content of these principles, this paper analyses their relationship with Article 21-octies, par. 2 of Law No. 241/1990. It asks whether the courts have understood these principles as reinforced procedural rules designed to avoid the “weakening” effect that Article 21-octies provides with regard to the procedural impropriety of non-discretionary decisions. In particular, it asks whether the strengthening of procedural rules is aimed at counterbalancing the lack of substantive legality resulting from public bodies’ exercise of implied powers in using algorithms, or whether it should rest on a different legal reasoning.