Automated Decision-Making Systems in Spanish Administrative Law

1/2023

Spanish legislation on automated decision-making is among the most advanced in the world. It entered into force in 2007 and was carried over into the current administrative procedure legislation of 2015. At present, more than one third of the Spanish public sector uses such systems. Some problems of compliance with general administrative law remain, for example as regards the right to rectify an application, the right to submit observations on a draft rejection decision, the statement of reasons for automated decisions, or the transparency of programming. Although hundreds of automated decisions have been taken, litigation in this area has proved merely episodic. The paper examines the requirements laid down by Spanish law for the adoption of automated decision-making systems by the public administration: prior and express approval, external control of the system, the electronic signature of the decision, etc.; and the critical issues that still remain.


Spain’s legislation on automated administrative action is among the most advanced in the world. Enacted in 2007, it has been carried over unchanged into the current administrative procedure legislation of 2015. Currently, more than one third of the Spanish public sector uses these systems. Some problems of compliance with general administrative law remain, such as the right to rectify applications, the right to present arguments against a draft rejection decision before it is issued, the statement of reasons for automated decisions, or the transparency of programming. Yet although thousands of automated decisions are taken every day, judicial litigation in this area has so far been merely anecdotal. This paper describes the requirements established by Spanish law for the implementation of automated decision-making systems by Public Administrations: their express prior approval, the auditing of the system, the electronic signature of decisions, and so on; and the critical aspects that still remain.

Summary. 1. Legal framework; – 1.1. Arts. 41 and 42 of Act 40/2015, of 1st October, on Legal Regime of Public Sector; – a) Concept of “automated administrative action”; – b) Prior determination by the administration of the legal requirements. Are algorithms regulations? The lack of transparency; – c) The body competent to implement AI systems and to which their activity is attributed; – d) Supervision and quality control. The lack of legal supervision; – e) Auditing of the information system and its source code. Special reference to the personal data protection regime; – f) E-signature of automated decisions (art.42 LRJSP); – 1.2. Art.23 of Act 15/2022, of 12th July, for Equal Treatment and Non-discrimination; – 1.3. Amendment of the Workers’ Statute; – 1.4. Creation of the Spanish Agency for the Supervision of AI; – 2. Soft law: charter of digital rights; – 3. Case-law and main problems in the current situation; – 3.1. The “Bosco” case; – 3.2. The VioGén algorithm; – 3.3. Automatic initiation of sanctioning procedures; – 3.4. Deprivation of safeguards in automated grant award procedures; – 3.5. Non-compliance with the duty to state reasons for administrative acts.

1. Legal framework

1.1. Arts. 41 and 42 of Act 40/2015, of 1st October, on Legal Regime of Public Sector

Spanish[1] legislation on administrative procedure was deeply amended in 2015. Until then, there were two main rules:

  • A law regarding the administrative procedure and the legal regime of the public sector, when acting by traditional means (paper): Act 30/1992, of 26th November, on Legal Regime of Public Sector and Common Administrative Procedure.
  • A law regarding those matters, when acting by electronic means: Act 11/2007, of 22nd June, on Citizens’ Electronic Access to Public Services.

In 2015, these rules were replaced by two new ones, with a different regulatory scheme:

  • Act 39/2015, of 1st October, on Common Administrative Procedure (LPAC).
  • Act 40/2015, of 1st October, on Legal Regime of Public Sector (LRJSP)

These rules consider that the Administration’s normal means of action is electronic. And although they represent a change in the regulatory structure, they introduce only a few developments in the matter under consideration.

The main rules establishing a specific legal regime for automated administrative action and AI are arts.41 and 42 LRJSP (former arts.39 and 18 of the 11/2007 Act, respectively). They do not regulate AI as such, but the so-called “automated administrative action”[2]. It is well known that not all automated administrative action is AI. But any AI system uses automated processes at least at some stage. These are, therefore, the main provisions regulating the use of AI by the public sector in Spain. Let us start with art.41, which reads as follows[3]:

«Article 41. Automated administrative action.

1. Automated administrative action is understood to be any decision or action carried out entirely by electronic means by a public administration within the framework of an administrative procedure, and in which a public employee has not been directly involved.

2. In the case of automated administrative action, the competent body or bodies −as appropriate−, shall be established in advance for the definition of the specifications, programming, maintenance, supervision and quality control and, where appropriate, auditing of the information system and its source code. It shall also indicate the body to be held responsible for the purposes of appeal».

The main issues arising from this provision are as follows.

a) Concept of “automated administrative action”.

The concept of automated administrative action provided by Section 1 of art. 41 is essentially sound. Some aspects should be highlighted.

The concept is taken from the Annex of Act 11/2007, which contained definitions, including this one. The only difference is that the 2007 Act referred to natural persons, while Act 40/2015 refers to public employees. In general, although it may seem vague, the definition is considered quite suitable[4]. In particular, the provision is aware that it is impossible to adopt an administrative action absolutely devoid of human intervention, at least in the design and management of the information system. Therefore, it is enough that such human intervention be indirect.

Secondly, the concept is also delimited by the context in which the automated administrative action system is applied: it is, necessarily, «in the framework of an administrative procedure». Therefore, this legal regime does not apply to automated systems, or AI systems, used for the improvement of the material provision of public services. It only concerns formalised administrative activity.

It is important to note that this definition does not refer only to automated decisions (administrative acts) in the strict sense, but also to algorithms, AI systems or automated actions that support decision-making, i.e. that are used within the administrative procedure as an input to be taken into account in reaching the final decision. This clarification is very important, because it considerably broadens the scope of application, preventing the Administration from limiting the legal requirements to cases in which the final decision itself is adopted in an automated manner.

b) Prior determination by the administration of the legal requirements. Are algorithms regulations? The lack of transparency

The automated action system must be expressly established. This is an extremely important requirement, because it prevents the implementation of de facto systems.

This requirement has sparked discussion among scholars about the legal nature of algorithms: are algorithms regulations? The main players in this debate have been Boix-Palop, who considers that they are, and Huergo-Lora, who argues that they are not[5]. The question is not purely theoretical; on the contrary, it is the practical consequences that matter: if an algorithm is a regulation, it must be adopted through the procedure for drafting regulations, which opens the process to public participation, and also requires official publication of the rule before it enters into force. In short, if algorithms were regulations, the problem of their transparency would be solved.

At present, and without denying that Boix-Palop’s approach is very suggestive, the majority position seems to be that algorithms are… algorithms[6]. In other words, they are not, in themselves, acts or regulations. The specific legal nature is predicated of the administrative action into which the algorithm is inserted, not of the algorithm itself.

An algorithm is a tool, a mere means of action. An algorithm cannot be a regulation because it cannot innovate the legal system: it necessarily has to fit within the existing legal framework.

Thus, if an algorithm creates new criteria or rules for decision-making, it exceeds the function and the scope within which it may operate and becomes invalid.

Thus, the system of automated administrative action can be established by means of an administrative decision or by means of a regulation, even one with the force of law. For example, Article 13.2 of Royal Decree 203/2021, of 30th March, on the Electronic Functioning of the Public Sector, states that «at the national level, the determination of an administrative action as automated shall be granted by decision of the head of the competent administrative body», i.e. by means of an administrative act[7]. However, automated administrative actions have also been implemented through regulations with the force of law: for example, Royal Decree-Law 2/2021, of 26th January, on the Reinforcement and Consolidation of Labour Measures, has regulated the initiation of administrative procedures in the Social Security field by means of automated official reports of infringement[8].

This situation implies complete opacity regarding the algorithm or the programming of the AI system, because it does not have to be made public in any way. This runs counter to what experts argue, both in comparative law and in Spanish law: they insist on the need for transparency in this matter, so that it can be verified that the system does not incur bias (discrimination) or any other infringement of the legal system[9]. In the final section of this report, we will discuss a current legal proceeding in Spain on this issue.

c) The body competent to implement AI systems and to which their activity is attributed.

Art. 41.2 LRJSP states that the administrative body in charge of the system, and to which its action is imputed for the purposes of appeal, must be determined before the system starts operating. This is a requirement of the highest relevance.

More specifically, the recent and aforementioned Article 13.2 of Royal Decree 203/2021, of 30th March −applicable only to the national Administration−, provides that the decision establishing areas of automated administrative action «shall specify the appeals that may be lodged against the action, the administrative or judicial body, where appropriate, before which they are to be submitted, and the time limit for submitting them». This is a guarantee known in Spanish administrative law as “pie de recurso” −literally, “foot of appeal”−, which requires that the notification of administrative acts include this information.

It should be noted that these provisions rule out the possibility of regarding automated administrative action as “autonomous”, since it will always be attributed to a specific administrative body.

Moreover, automated administrative action is thus equated with the formalised administrative action of issuing administrative acts, imposing the same guarantees in both cases.

d) Supervision and quality control. The lack of legal supervision.

This provision also lays down the requirement that the system be monitored. This duty is imposed not explicitly but implicitly, as is clear from the text of the article if we consider the purpose of the regulation[10]. The rule thus embodies the requirement of human supervision, widely demanded in both Spanish and comparative law[11].

In my opinion, the problem is not only that there should be human supervision. This supervision must also be multidisciplinary: it should be made up of specialists from different fields. In particular, it must specifically include legal supervision. This is a point on which I have been particularly insistent: it is necessary that the team carrying out the supervision include legal experts to ensure compliance with the legislation[12].

In most cases, the public sector lacks the means to carry out AI developments itself and has to rely on procuring these solutions from the market. However, a screening of public procurement tenders for AI systems in Spain shows that they do not provide for the inclusion of legal experts in the work teams. There are engineers, mathematicians, computer scientists… even experts in communication (for publicity campaigns to publicise the implementation of the system), but jurists are conspicuous by their absence.

e) Auditing of the information system and its source code. Special reference to the personal data protection regime.

Article 41.2 LRJSP also requires that the information system and its source code be audited. But it has been pointed out that the criteria for such an audit are not yet clear[13].

There is, however, one particular area where some methods have been developed: the protection of personal data. In this workshop, some proposals are set out. As far as Spanish law is concerned, we must cite the documents drawn up by the Spanish Data Protection Agency (Agencia Española de Protección de Datos): in particular, the guidelines “Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción” (2020), and “Requisitos para auditorías de tratamientos que incluyan inteligencia artificial” (2021)[14].
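These guidelines are methodological documents rather than code, but the kind of check an audit team might run can be sketched briefly. The following Python fragment is a purely illustrative example, not taken from the AEPD documents: it computes the selection rate per group and the disparate-impact ratio over a hypothetical log of automated decisions, one of the simplest indicators a multidisciplinary audit team (including its legal experts) could use to flag potential discrimination.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Selection rate per group and disparate-impact ratio.

    `decisions` is a list of (group, granted) pairs, e.g. ("A", True).
    A ratio well below 1.0 signals that the audit team should look closer.
    """
    totals, granted = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            granted[group] += 1
    rates = {g: granted[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

# Toy audit log of automated decisions: (protected attribute, application granted?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates, ratio = disparate_impact(log)
print(rates)   # {'A': 0.666..., 'B': 0.333...}
print(ratio)   # 0.5 -> worth flagging for review
```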

f) E-signature of automated decisions (art.42 LRJSP).

Article 42 LRJSP regulates the electronic signature that must be incorporated into the automated administrative action. It provides as follows:

«Article 42. Signature systems for automated administrative action.

In the exercise of competence in automated administrative action, each public Administration may determine the cases in which the following electronic signature systems may be used:

a) Electronic seal of a public administration, body, agency or public law entity, based on a qualified electronic certificate that meets the requirements of electronic signature legislation.

b) Secure verification code (SVC) assigned to the Public Administration, body, public agency or public law entity, under the terms and conditions established, allowing in any case the verification of the integrity of the document by accessing the corresponding electronic office»[15].

The main consequence of this rule is that decisions adopted by means of automated administrative action will incorporate an electronic signature system. This makes it possible to identify the body to which the action is attributed, which is relevant for appeal purposes.
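The secure verification code mechanism lends itself to a brief illustration. The following sketch is a minimal, hypothetical model of how an electronic office might issue and verify such codes; it is not the actual Spanish implementation, and the random-code and hashing choices are assumptions made only for the example.

```python
import hashlib
import secrets

# Hypothetical registry held by the electronic office ("sede electrónica"):
# it maps each secure verification code (SVC) to the hash of the issued document.
_registry: dict[str, str] = {}

def issue_document(content: bytes) -> str:
    """Issue an automated decision and return its secure verification code."""
    svc = secrets.token_urlsafe(16)                       # random, non-guessable code
    _registry[svc] = hashlib.sha256(content).hexdigest()  # integrity reference
    return svc

def verify_document(svc: str, content: bytes) -> bool:
    """Check at the electronic office that the presented copy is unaltered."""
    stored = _registry.get(svc)
    return stored is not None and stored == hashlib.sha256(content).hexdigest()

decision = b"Resolution no. 123/2023: application granted."
code = issue_document(decision)
print(verify_document(code, decision))           # True
print(verify_document(code, b"tampered text"))   # False
```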

Otherwise, the text is self-explanatory and does not merit comment for the purposes of this report.

1.2. Art.23 of Act 15/2022, of 12th July, for Equal Treatment and Non-discrimination.

Just last summer, a new provision concerning algorithms and AI was approved. The Article cited in the heading reads as follows:

«Article 23. Artificial Intelligence and automated decision-making mechanisms.

1. Within the framework of the National Artificial Intelligence Strategy, the Charter of Digital Rights and European initiatives on Artificial Intelligence, public administrations shall encourage the implementation of mechanisms so that the algorithms involved in decision-making used in public administrations take into account criteria of minimisation of bias, transparency and accountability, whenever technically feasible. These mechanisms will include their design and training data, and address their potential discriminatory impact. To this end, impact assessments will be promoted to identify potential discriminatory bias.

2. Public administrations, within the framework of their competences, shall prioritise transparency in the design, implementation and interpretability of decisions taken by algorithms involved in decision-making processes.

3. Public administrations and companies will promote the use of Artificial Intelligence that is ethical, reliable and respectful of fundamental rights, following in particular the recommendations of the European Union in this regard.

4. A quality label for algorithms will be promoted».

It should be noted that this provision, due to its wording, seems more like a soft-law norm than a rule, since it does not set out imperative requirements or rules for action, but rather mere guidelines.

Moreover, it imposes these guidelines only “whenever technically feasible”, which does not solve the main problem currently posed by AI as decision-making support: “black box” algorithms, for which it is not possible to satisfy these requirements. Such algorithms are therefore not prohibited; they are simply exempted from the requirements of this provision.
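Article 23 prescribes no concrete methodology for these impact assessments. Purely as an illustration of what a record covering «design and training data» and «potential discriminatory impact» might look like, the following hypothetical sketch defines a minimal structure and a completeness check; every field name is an assumption, not a requirement drawn from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    """Hypothetical record for an Art. 23-style impact assessment."""
    system_name: str
    purpose: str
    training_data_description: str                      # origin, period, known gaps
    protected_attributes_reviewed: list[str] = field(default_factory=list)
    discriminatory_impact_found: bool | None = None     # None = not assessed yet
    mitigation_measures: str = ""

    def missing_items(self) -> list[str]:
        """Points still pending before the assessment can be considered complete."""
        missing = []
        if not self.training_data_description:
            missing.append("describe the training data")
        if not self.protected_attributes_reviewed:
            missing.append("list the protected attributes reviewed")
        if self.discriminatory_impact_found is None:
            missing.append("assess potential discriminatory impact")
        elif self.discriminatory_impact_found and not self.mitigation_measures:
            missing.append("document mitigation measures")
        return missing

aia = AlgorithmImpactAssessment(
    system_name="subsidy-screening-bot",                # invented example
    purpose="pre-screen subsidy applications",
    training_data_description="",
)
print(aia.missing_items())
# ['describe the training data', 'list the protected attributes reviewed',
#  'assess potential discriminatory impact']
```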

1.3. Amendment of the Workers’ Statute

On 11 May 2021, Royal Decree-Law 9/2021 was approved, amending the Workers’ Statute, the core of Spanish labour law; it was subsequently replaced by Act 12/2021, of 28th September, once passed by Parliament. It adds a new provision to Article 64.4 of the Workers’ Statute (Royal Legislative Decree 2/2015, of 23 October), which lays down the information rights of workers’ representatives. Specifically, it recognises their right to:

«d) Be informed by the company of the parameters, rules and instructions on which algorithms or artificial intelligence systems are based, whenever they affect decision-making that may have an impact on working conditions, access to and maintenance of employment, including profiling».

This legal reform follows an Italian ruling concerning Deliveroo’s riders, which had a major impact in Spain[16]. The ruling annulled the system for assigning orders to riders because it was not possible to know how the algorithm worked.

The precept is an evident boost to the transparency of algorithms and AI systems.

As regards the Spanish public sector, it applies to public employees who do not have civil-servant status but are employed under labour contracts, and whose employment relationship is therefore governed by the Workers’ Statute.

1.4. Creation of the Spanish Agency for the Supervision of AI.

The National Artificial Intelligence Strategy (see below) envisages the creation of a Spanish Agency for the Supervision of Artificial Intelligence (AESIA). The General National Budget for 2022 allows the government to initiate the procedures for its creation[17], and grants an allocation of EUR 5 million for this purpose. The procedure has already begun[18].

The regulation establishing the Agency’s competences has yet to be adopted. But it is certain that the Agency will eventually take over the tasks conferred on the national supervisory authority by the draft European Regulation.

2. Soft law: charter of digital rights

On 14th July 2021, the Spanish government approved the Charter of Digital Rights[19]. In its introductory part this document states: «the Charter is not normative in nature but aims to recognise the very new challenges of application and interpretation posed by the adaptation of rights to the digital environment, and to suggest principles and policies relating to rights in the digital environment».

In terms of content, we are interested in Section XVIII of the Charter, which is entitled: «Digital rights of citizens in their relations with public administrations». In particular, paragraphs 6 and 7, relating to AI, provide:

«6. The rights of citizens in relation to artificial intelligence recognised in this Charter shall be promoted within the framework of administrative action. In all cases, the following rights are recognized:

a) Decisions and activities in the digital environment shall respect the principles of good administration, and the right to good digital administration, as well as the ethical principles driving the design and uses of artificial intelligence.

b) Transparency on the use of artificial intelligence tools, and on their functioning and scope in each specific procedure −and, in particular, on the data involved, their margin of error, their scope of application and their decisive or non-decisive nature. The law may regulate the conditions of transparency and access to the source code, in particular to verify that it does not produce discriminatory results.

c) To receive an understandable explanation in natural language of the decisions taken in the digital environment, with justification of the relevant legal rules, the technology used and the criteria for their application to the case. The individual shall have the right to obtain the reasons for the administrative decision, or an explanation of it, when it departs from the criteria proposed by an automated or intelligent system.

d) Discretionary decision-making is reserved to human persons, unless provision is made in the rules for automated decision-making with appropriate safeguards.

7. A digital rights impact assessment will be required in the design of algorithms in the case of automated or semi-automated decision-making».

The wording of the text is regrettable. The final version was improved in some aspects, taking into account the proposals of a group of scholars belonging to the Administrative Law and Artificial Intelligence Network[20].

The document has had a high media impact, but little effectiveness so far. Although its approval has been considered positive, critics argue that, instead of proclaiming new rights in a non-binding Charter, what should be guaranteed is the effectiveness of the rights that citizens already have under the common administrative procedure legislation[21].

3. Case-law and main problems in the current situation

Numerous AI systems have been implemented in recent years in Spain, especially by the national government and by regional governments (autonomous communities), but there are also examples of municipalities that have implemented these systems; recent reports state that one third of the Spanish public sector has already implemented AI systems[22].

Despite this, and the existence of a legal framework since 2007, judicial litigation has been virtually non-existent. This does not mean that there are no controversial cases, or social and scholarly criticism. But they have not been followed up by judicial appeals.

3.1. The “Bosco” case.

It is the only significant case to date in the public sector. “Bosco” is a computer system developed by the Ministry of Energy and used by energy companies to grant the so-called “bono social”, i.e. a discount on electricity and gas bills for the benefit of certain groups: large families, low incomes, etc.

The CIVIO Foundation (a non-profit organisation) detected that the algorithm was failing, for example by denying the social bonus to widows, so it asked the Ministry for access to the technical specifications, the performance tests and the source code, in order to verify its functioning. The request was based on the right of access to public information, recognised in Act 19/2013, of 9th December, on Transparency, Access to Public Information and Good Governance, which also establishes limits on this right.

When access was denied, the decision was appealed to the Transparency Council, an independent administrative body. The Council partially upheld the appeal, allowing access to the technical specification and performance tests, but refusing access to the source code, in order to protect industrial property.

The Foundation lodged a judicial appeal against the Council’s decision. It was resolved by ruling 143/2021, of 30th December (ordinary procedure 18/2019), of the Contentious-administrative Central Court (Juzgado Central de lo Contencioso-administrativo) nr. 8, which dismissed the appeal. In this case, the Ministry argued that access to the source code posed a risk to national security, since the system shares information with sensitive databases that would thereby be put at risk.

The ruling has been appealed to a higher court, the “Audiencia Nacional”, which has not yet handed down its judgment. It has come to light, however, that the Transparency Council has changed its position and now supports the appeal, on the basis of segmenting the source code: removing the elements that affect national security and disclosing those that explain the rules or criteria applied by the algorithm.
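The segmentation idea can be visualised in code: the normative criteria applied by the system can live in a module with no security-sensitive content, separate from the code that queries protected databases, so that only the former would need to be disclosed. The sketch below is a purely hypothetical illustration of that separation; it bears no relation to the actual “Bosco” source code, and all the eligibility criteria shown are invented.

```python
# eligibility_rules.py -- the segment that could, in principle, be disclosed:
# it only encodes the normative criteria applied by the system (all invented here).
def qualifies_for_social_bonus(household: dict) -> bool:
    if household.get("large_family"):
        return True
    if household.get("widowed") and household.get("income", 0) < 18_000:
        return True
    return household.get("income", 0) < 12_000

# data_access.py -- the segment that would remain confidential: credentials,
# endpoints and queries against sensitive national databases.
def fetch_household_data(applicant_id: str) -> dict:
    raise NotImplementedError("connects to protected registries; not disclosed")

print(qualifies_for_social_bonus({"widowed": True, "income": 15_000}))   # True
```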

The resolution of this case is awaited with great expectation. If the “Audiencia Nacional” upholds the appeal, it will be a great triumph for all those who have been calling for greater transparency in the use of algorithms and AI by the public sector.

3.2. The VioGén algorithm

This algorithm was developed in 2007 by the Ministry of the Interior to assess the risk of gender-based violence. Since then, it has been applied more than 5 million times in police stations, despite the fact that Spain is one of the countries with the lowest rate of feminicide in the world[23]. It is also used in criminal courts, to grant precautionary measures for the protection of victims.

The system has been widely criticised[24]. Victim advocacy organisations consider that the system tends to massively under-rate risk: in approximately 45% of cases the risk is rated as “not appreciated”. An analysis of 126 cases of murdered women found that 55 of them had received an insufficient protection order based on the “VioGén” assessment. In addition, “VioGén” has been found to adapt the risk assessed for each victim according to the police resources available.

The Ministry has denied access to the algorithm, but the matter has not been brought to court.
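Since the questionnaire, weights and thresholds of “VioGén” are not public, any code can only illustrate generically how threshold-based risk scoring of this kind behaves: weighted yes/no indicators are summed and mapped to a discrete level, and under-rating occurs when relevant indicators are missing or thresholds are set too high. Everything in the following sketch (indicators, weights, thresholds) is invented for the purpose of the example.

```python
# Purely illustrative: the real VioGén indicators, weights and thresholds are not public.
WEIGHTS = {
    "previous_aggression": 3,
    "threats_with_weapon": 4,
    "escalation_last_month": 2,
    "victim_perceives_danger": 2,
}
LEVELS = [(9, "extreme"), (6, "high"), (4, "medium"), (2, "low")]

def risk_level(answers: dict[str, bool]) -> str:
    """Map weighted yes/no answers to a discrete risk level."""
    score = sum(weight for key, weight in WEIGHTS.items() if answers.get(key))
    for threshold, label in LEVELS:
        if score >= threshold:
            return label
    return "not appreciated"   # the outcome criticised as too frequent

print(risk_level({"previous_aggression": True, "escalation_last_month": True}))  # 'medium'
print(risk_level({"victim_perceives_danger": True}))                             # 'low'
print(risk_level({}))                                                            # 'not appreciated'
```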

3.3. Automatic initiation of sanctioning procedures.

Royal Decree-Law 2/2021, of 26th January, on the Reinforcement and Consolidation of Labour Measures, has regulated the initiation of administrative procedures in the Social Security field by means of automated official reports of infringement. The information systems of the Labour and Social Security Inspectorate can detect facts which may constitute an offence, allowing for the automated initiation of sanctioning procedures.

The controversy in this case arises because, even if the person concerned does not take any action (e.g. does not submit any allegations), the procedure can be pursued up to the point of imposing a sanction. Although this may seem surprising, it is nothing new: the same processing scheme applies to some traffic offences, such as exceeding the speed limit and being photographed by a speed camera; in that case, too, the sanctioning procedure is initiated automatically and can lead to the same consequences.
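The underlying logic of automatic initiation is simple: when the information system detects a fact matching a typified infringement, it opens the file and, if no allegations are submitted in time, the proposal becomes the sanction. The following sketch models that flow for the speed-camera example; the tolerance margin and fine amount are invented, and the code is only a conceptual illustration.

```python
from dataclasses import dataclass

@dataclass
class SpeedReading:
    plate: str
    measured_kmh: float
    limit_kmh: float

def detect_infringement(reading: SpeedReading) -> dict | None:
    """Automated detection: open a sanctioning file when the limit is clearly exceeded."""
    if reading.measured_kmh <= reading.limit_kmh * 1.05:      # invented tolerance margin
        return None
    return {
        "plate": reading.plate,
        "facts": f"{reading.measured_kmh} km/h in a {reading.limit_kmh} km/h zone",
        "proposed_fine_eur": 100,                             # invented amount
        "allegations_received": False,
    }

def resolve(case_file: dict) -> str:
    """If no allegations arrive within the time limit, the proposal becomes the sanction."""
    if case_file["allegations_received"]:
        return "review the allegations before deciding"
    return f"sanction imposed: {case_file['proposed_fine_eur']} EUR"

case = detect_infringement(SpeedReading("1234-ABC", 132.0, 120.0))
print(resolve(case))   # sanction imposed: 100 EUR
```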

3.4. Deprivation of safeguards in automated grant award procedures.

Automated systems are also criticised for removing certain guarantees for citizens in administrative procedures.

This is the case with robotic systems for the granting of subsidies. For example, in 2020, the regional government of Andalusia implemented 35 robots to process the procedures for granting subsidies to individual entrepreneurs as a result of the pandemic. The robots relieved 20,000 civil servants of some 100,000 hours of work, and the entire procedure, from application to payment, was reduced to two months. Discounting the cost of the robots, the system saved 1.3 million euros.

But not everything is positive. As the system was automated, applicants were deprived of the right to rectify their application and of the right to present arguments against a draft refusal of the aid. If any information held by the system was incorrect, the aid was refused and the applicant was forced to challenge the refusal.
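The lost safeguard can also be expressed procedurally: between the automated check and any refusal there should be a rectification step giving the applicant a time limit to cure defects, rather than an immediate denial. The sketch below contrasts the two flows; the required fields are invented and the code is a conceptual illustration, not the Andalusian system.

```python
REQUIRED_FIELDS = ("tax_id", "bank_account", "activity_census")   # invented fields

def automated_check(application: dict) -> list[str]:
    """Return the fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not application.get(f)]

def decide_without_safeguard(application: dict) -> str:
    # The criticised behaviour: any defect leads straight to refusal.
    return "refused" if automated_check(application) else "granted"

def decide_with_rectification(application: dict) -> str:
    # The flow required by general administrative law: give the applicant
    # a time limit to cure defects before any refusal is issued.
    missing = automated_check(application)
    if missing:
        return "rectification requested (10 days): " + ", ".join(missing)
    return "granted"

app = {"tax_id": "X1234567Z", "bank_account": ""}
print(decide_without_safeguard(app))    # refused
print(decide_with_rectification(app))   # rectification requested (10 days): bank_account, activity_census
```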

3.5. Non-compliance with the duty to state reasons for administrative acts.

The opacity of algorithms is often seen as a problem of transparency. But in my view, it is even more serious: it is a breach of the duty to give reasons for administrative acts.

The duty to state reasons for administrative acts is a central guarantee of administrative law. It is a requirement related to the right of defense. Failure to comply with it renders the administrative act invalid. It is unnecessary to develop this issue here.

I myself[25], as well as some of my colleagues at Pablo de Olavide University[26], have been particularly insistent on the idea that algorithms, automated administrative action and AI do not escape the need to give reasons for the decisions based on them. Other scholars have also noted this point[27].

This is not the time to discuss this issue at length. Suffice it to say that, as I have tried to explain in those works, the duty to state reasons has particularities in the way it is articulated in the case of automated or AI-supported action, but it applies in any case. Under the current regulatory framework (the common administrative procedure legislation), this entails the impossibility of allowing black-box algorithms to support the decision-making process, because it is impossible to know why the system takes one decision and not another. We can think of new ways of giving reasons in these cases, but they will need to be explicitly regulated.
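When the supporting system is interpretable, the duty to state reasons can be honoured by having it emit, together with the outcome, the specific rules and facts on which it relied. The following sketch is a minimal, hypothetical illustration of that idea; the rules and thresholds are invented, and the point is simply that a black-box model could not produce an equivalent trace.

```python
def decide_with_reasons(applicant: dict) -> tuple[str, list[str]]:
    """Return an outcome together with the reasons that support it (invented rules)."""
    reasons = []
    if applicant.get("income", 0) > 20_000:                 # invented threshold
        reasons.append("income exceeds the 20,000 EUR ceiling set in the call")
    if not applicant.get("up_to_date_with_tax"):
        reasons.append("applicant is not up to date with tax obligations")

    if reasons:
        return "refused", reasons
    return "granted", ["all the legal requirements of the call are met"]

outcome, reasons = decide_with_reasons({"income": 25_000, "up_to_date_with_tax": True})
print(outcome)              # refused
print("; ".join(reasons))   # income exceeds the 20,000 EUR ceiling set in the call
```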

  1. ORCID 0000-0002-6734-8672. This paper is a result of the research Project UPO-1381574, «Artificial Intelligence and Administrative Law», granted in the framework of Operational program FEDER Andalusia 2014-2020.
  2. On automated administrative action, see especially the works of A. Cerrillo-Martínez, Robots, asistentes virtuales y automatización de las administraciones públicas, Revista Galega de Administración Pública, nr. 61, 2021, pp.271 ff.; “Automatización e inteligencia artificial”, in I. Martín-Delgado, El procedimiento administrativo desde la perspectiva de la innovación tecnológica, Madrid: Iustel-IVAP, 2020, pp.389 ff.; and “Actividad administrativa automatizada y utilización de algoritmos”, in F. Castillo-Blanco, S. Fernández-Ramos and J.M. Pérez-Monguió (Eds.), Las políticas de buen Gobierno en Andalucía (I): Digitalización y transparencia, Sevilla: Instituto Andaluz de Administración Pública, 2022, pp.259 ff. On the matter, see also A. Soriano-Arnanz, “Decisiones automatizadas y discriminación: aproximación y propuestas generales”, Revista General de Derecho Administrativo, nr. 56, 2021 (also online, INAP); by the same author, “Decisiones automatizadas: problemas y soluciones jurídicas. Más allá de la protección de datos”, Revista de Derecho Público: Teoría y Método, vol.3, 2021, pp.85 ff.; F.J. Bauzá-Martorell, “Identificación, autenticación y actuación automatizada de las Administraciones públicas”, in E. Gamero-Casado (Ed.), Tratado de procedimiento administrativo común y régimen jurídico básico del sector público, vol.1, Valencia: Tirant lo Blanch, 2017, pp.787 ff.; and R. Gómez-Padilla, “Comentario al art.41 LRJSP”, in M.A. Recuerda-Girela (Ed.), Régimen jurídico del sector público y procedimiento administrativo común, Cizur Menor: Aranzadi-Thomson-Reuters, 2016, pp.1222 ff.

    And there are two works published under the 11/2007 Act, but still useful: I. Martín-Delgado, “Naturaleza, concepto y régimen jurídico de la actuación administrativa automatizada”, Revista de Administración Pública, nr. 180, 2009, pp.353 ff.; X. Uríos-Aparisi and I. Alamillo-Domingo, La actuación administrativa automatizada en el ámbito de las Administraciones públicas. Análisis jurídico y metodológico para la construcción y explotación de trámites automáticos, Barcelona: Escola d’Administració Pública de Catalunya, 2011.

  3. All translations of rules into English are by the author.
  4. See particularly I. Martín-Delgado (2009), pp.367 ff.
  5. See A. Boix-Palop, “Los algoritmos son reglamentos: la necesidad de extender las garantías propias de las normas reglamentarias a los programas empleados por la Administración para la toma de decisiones”, Revista de Derecho Público: Teoría y Método, vol.1, 2020, pp.223 ff.; A. Huergo-Lora, Una aproximación a los algoritmos desde el Derecho administrativo, in the book edited by himself, La regulación de los algoritmos, Aranzadi-Thomson-Reuters, Cizur Menor, 2020, pp.64 ff.
  6. See A.D. Berning-Prieto, La naturaleza jurídica de los algoritmos, in E. Gamero-Casado (Ed.), Inteligencia artificial en el sector público (retos, límites y medios), Valencia: Tirant lo Blanch, 2023 (in press).
  7. This rule only applies to the national administration, not to the autonomous communities or municipalities.
  8. See below, section 3.3.
  9. On the need for transparency in algorithms and AI, quoting just some relevant Spanish authors, see J. Valero-Torrijos, “Las garantías jurídicas de la inteligencia artificial en la actividad administrativa desde la perspectiva de la buena administración”, Revista Catalana de Dret Public, nr. 58, 2019, pp.82 ff.; L. Cotino-Hueso, Hacia la Transparencia 4.0: el uso de la Inteligencia Artificial y big data para la lucha contra el fraude y la corrupción y las (muchas) exigencias constitucionales», in C. Ramió (Ed.), Repensando la Administración digital y la innovación pública, Madrid: Instituto Nacional de Administración Pública, 2021, pp. 169 ff.; I. Martín-Delgado, La aplicación del principio de transparencia a la actividad administrativa algorítmica, in E. Gamero-Casado, (Ed.), Inteligencia artificial en el sector público (retos, límites y medios), Valencia: Tirant lo Blanch, 2023 (in press); A. Cerrillo-Martínez, El impacto de la inteligencia artificial en las Administraciones Públicas: estado de la cuestión y una agenda, in A. Cerrillo-Martínez, M. Peguera-Poch, Retos jurídicos de la inteligencia artificial, Cizur Menor: Thomson Reuters Aranzadi, Cizur Menor, 2020, pp. 83 ff.; J. Ponce-Solé, Inteligencia artificial, Derecho administrativo y reserva de humanidad: algoritmos y procedimiento administrativo debido tecnológico, in Revista General de Derecho Administrativo, nr.50, 2019 (also in INAP, open access); I Criado, J. Valero and J. Villodre: Algorithmic transparency and bureaucratic discretion: The case of SALER early warning system, Information Polity, vol.25, nr. 4, 2020, p.452; M.E. Gutiérrez-David, Administraciones inteligentes y acceso al código fuente de los algoritmos públicos. Conjurando riesgos de cajas negras decisionales, in Derecom, nr. 30, 2021, pp. 143 ff.
  10. There are other examples of implicit duties imposed on the Administration in the matter of eGovernment. For example, art.13 a) of 39/2015 Act, states that citizens have the right “To communicate with Public Administrations through an electronic General Access Point”. No rule regulates this General Access Point. But it has been understood as a point that integrates all the services and procedures of each public administration, thus facilitating citizens’ relations by electronic means. The Constitutional Court, in its ruling 55/2018, of 24th May, considers that the national legislator can compel all public administrations to have such an access point.
  11. See J. Ponce-Solé, “Reserva de humanidad y supervisión humana de la inteligencia artificial”, El Cronista del Estado Social y Democrático de Derecho, nr.100, 2022, pp.58 ff.; and bibliography cited by him.
  12. See E. Gamero-Casado, “Compliance (o Cumplimiento Normativo) de desarrollos de Inteligencia Artificial para la toma de decisiones administrativas”, Diario La Ley (Wolters Kluwer), Nº 50, Sección Ciberderecho, 19/04/2021.
  13. See J. Bone, “Auditoría de Inteligencia Artificial”, (31/07/2020).
  14. On data protection and algorithms and AI, see, recently, D. Terrón-Santos and J.L. Domínguez-Álvarez, i-Administración pública, sistemas algorítmicos y protección de datos, Madrid: Iustel, 2022.
  15. “Electronic office” refers to a special type of electronic access point established under Spanish law (art.38 LRJSP), the “sede electrónica”, literally “electronic headquarters”. Electronic offices must meet a number of IT requirements and safeguards. The use of electronic offices is mandatory for the exercise of administrative powers, i.e. for formalised administrative action.
  16. This is the ruling of the Ordinary Court of Bologna of 31/12/2020.
  17. The 130th additional provision of Act 22/2021, of 28th December, on the General State Budget for 2022.
  18. See the press release on the Government’s website.
  19. In addition to the Charter of Digital Rights, reference can be made to the Artificial Intelligence Strategies approved in Spain, and especially, the National Strategy: Estrategia Nacional de Inteligencia Artificial, ENIA, 2020. However, its contents have no relevance for the purposes of this report.
  20. Red Derecho Administrativo e Inteligencia Artificial (Red DAIA), to which I myself belong. To see the proposal document, follow this link.
  21. Indeed, e-government is leading to a serious loss of safeguards, which must be addressed as a matter of priority; see E. Gamero-Casado, Reflexiones introductorias: de la administración electrónica a la digital (o la historia interminable), in A. Cerrillo-Martínez, La administración digital, Madrid: Dykinson, 2022, pp.36 ff.
  22. It is a report promoted by Microsoft and prepared by EY, Inteligencia artificial en el sector público. España. Perspectivas europeas para 2020 y años siguientes; it found 213 relevant AI use cases in Spain in 2019; of course, in practice there are many more. See also some relevant cases in JRC Datasets.
  23. See statistics in Forum Libertas, 21/06/2022 (https://www.forumlibertas.com/espana-uno-de-los-paises-europeos-con-menos-feminicidios/)
  24. See the article published in elDiario.es, el 09/03/2022.
  25. See E. Gamero-Casado, Compliance (o cumplimiento normativo) de desarrollos de Inteligencia Artificial para la toma de decisiones administrativas, in Diario La Ley (Wolters Kluwer), Nº 50, Sección Ciberderecho, 19/04/2021; and Necesidad de motivación e invalidez de los actos administrativos sustentados en inteligencia artificial o en algoritmos, in Almacén de Derecho, 04/02/2021.
  26. See R. Navarro-González, La motivación de los actos administrativos, Cizur Menor: Aranzadi-Thomson-Reuters, 2017, pp.287 ff.; and A.D. Berning-Prieto, Validez e invalidez de los actos administrativos en soporte electrónico, Cizur Menor: Aranzadi-Thomson-Reuters, 2019, pp.267 ff.
  27. See I. Martín-Delgado (2009), p.377; X. Uríos-Aparisi and I. Alamillo-Domingo: La actuación administrativa automatizada en el ámbito de las Administraciones públicas. Análisis jurídico y metodológico para la construcción y explotación de trámites automáticos, Barcelona: Escola d’Administració Pública de Catalunya, 2011, pp.21-31, 88 and 133; and A. Cerrillo-Martínez: ¿Son fiables las decisiones de las Administraciones públicas adoptadas por algoritmos?, in European review of digital administration & law, vol.1, nr.1-2, 2020, pp.31 ff.

Eduardo Gamero Casado

Full Professor of Administrative Law, University of Pablo de Olavide, Seville.