Algorithms

The use of Artificial Intelligence (AI) in financial markets requires a balanced and proactive approach. The distinction between weak and strong AI systems highlights the need to adapt sectoral legislation to the rapid growth in the autonomy of algorithms. The challenge is to balance natural technological development with market security. Balancing human responsibility with the socialization of damages, and with bold innovative solutions such as the recognition of legal personality for advanced AI systems or “smart law” hypotheses, would help jurists manage the new dynamics of financial markets with less uncertainty.


This paper examines the implementation of artificial intelligence in decision-making processes within public administration, with a focus on addressing the challenges of transparency, accountability, and the intelligibility of AI-generated decisions. The paper discusses the importance of imputability in decisions made with deep learning algorithms. It emphasises that granting public administrations full control over the training dataset, source code, and knowledge base ensures the imputability of the decision. This control enables administrations to validate the relevance and accuracy of the algorithm's training data, address potential biases, and comply with legal and ethical requirements. The paper then proposes the use of Large Language Models (LLMs) as a solution to enhance the transparency and motivation of AI-assisted decisions. It highlights that LLMs can generate articulate and comprehensible textual outputs that closely resemble human-generated decisions, allowing for a deeper understanding of the decision-making process. Furthermore, the paper emphasises the significance of providing access to the training dataset, source code, and individual administrative precedents to enhance transparency and accountability. It argues that by offering these components, stakeholders can evaluate the validity and reliability of AI-assisted decisions, fostering trust in the decision-making process.


The Author briefly reviews the major problems of administrative justice in the Third Millennium. These include: digitalization and the judicial review of automated decisions by administrative judges; the relationships between national and EU law; legal measures to reduce delays in judgments without affecting their effectiveness; and the uncertainty of rules and the excessive discretion left to judges.


This article addresses the legal order of the digital sphere, focusing on its constitutional dimension, and in particular on the new freedoms that have come into being with the advent of technology, especially digital platforms. There are doubts about the effectiveness of regulatory intervention, such as that attempted by the EU, whose purpose seems to focus on sanctioning violations rather than on promoting freedoms. The article emphasizes the positives rather than the negatives of a digital legal order, while recognizing the problems of the digital market, where large companies hold a dominant position and behave anti-competitively. The article also addresses the issue of democracy on the Internet and the challenge of misinformation.


In spite of its image of developed e-governance, advanced automated decision-making (ADM) systems have not been widely used in Estonian public administration, and there is still no general legal framework for them. The draft bill to amend the Administrative Procedure Act, which was presented to Parliament in 2022, also takes a rather cautious approach to the issue, significantly limiting the automation of discretionary decisions and, in particular, the use of self-learning algorithms. Automated administrative decisions would not be discouraged by the application of procedural principles inherent in the rule of law, such as the right to be heard and the duty to state reasons. However, for the automation of discretionary decisions in appropriate cases, a solution has been proposed whereby typical cases would be solved in a fully automated way by means of predefined algorithms based on internal administrative guidelines. This solution is not a universal magic bullet for every situation, but it may allow for a certain degree of innovation, provided appropriate procedural and organisational safeguards are respected. Fundamental preconditions for this are the categorical separation of the guideline from the algorithm, as well as the publication of the guideline. An optimal model of public accountability has to encourage authorities to take appropriate precautions when implementing algorithms.


Automated decision-making has been discussed in Austrian administrative law for more than 40 years. The focus has always been on the administrative act (in the sense of a formal individual decision) and the corresponding procedure. In this area, there are established principles, although new technologies raise new questions. Beyond the administrative act, we are still very much in the dark.


This article aims to analyse the decision-automation systems currently used by public administrations in Italy. After an analysis of the legal framework, the different systems are classified and illustrated; in particular, the case of the so-called “good school” algorithm is discussed. The conclusions dwell on the reasons for the scarce use of these tools in the Italian landscape, due in part to the slow and uneven digitisation of the public sector.


The use of algorithms and AI systems in administrative action has strongly challenged the requirements of administrative due process. In the absence of national statutory rules on administration by algorithm, administrative courts have established a set of principles (the so-called “principles of algorithmic legality”) in order to protect the legal position of citizens involved in administrative procedures, borrowing them mostly from the EU General Data Protection Regulation (GDPR). Case law specifically requires public bodies to comply with: a) the citizen’s right of access to meaningful information concerning the automated decision-making; b) the citizen’s right not to be subject to a decision based solely on automated processing; c) the prohibition of algorithmic bias. After a brief overview of the content of these principles, this paper analyses the relation between them and Article 21-octies, par. 2, of Law No. 241/1990. It asks whether the courts have understood them as reinforced procedural rules intended to avoid the “weakening” effect provided for by Article 21-octies with regard to the procedural impropriety of non-discretionary decisions. In particular, it asks whether the strengthening of the procedural rules could be aimed at counterbalancing the lack of substantive legality, due to the exercise of implied powers by public bodies in using algorithms, or whether it should be based on a different legal reasoning.
