Il presente contributo analizza la crescente automazione dei poteri sanzionatori delle pubbliche amministrazioni, con particolare attenzione all’impiego di sistemi basati sull’intelligenza artificiale per l’individuazione delle violazioni e l’irrogazione delle sanzioni. Muovendo da esempi empirici tratti dall’esperienza spagnola – in particolare dall’applicazione automatizzata delle zone a basse emissioni a Barcellona e dalle procedure di ispezione del lavoro – il lavoro esamina le implicazioni di tali pratiche per i principi fondamentali del diritto amministrativo. La tesi che si sostiene è che gli attuali quadri normativi non affrontino adeguatamente i rischi specifici connessi all’automazione dell’esercizio del potere sanzionatorio, quali i falsi positivi, la rigidità nell’applicazione delle sanzioni e l’erosione delle garanzie procedimentali. Si propone perciò lo sviluppo di strumenti di tutela giuridica che siano specificamente calibrati ed idonei a conciliare l’efficienza tecnologica con le esigenze fondamentali dello Stato di diritto e la tutela dei diritti dei singoli.
This paper analyses the growing automation of administrative sanctioning powers in public administrations, with particular attention to the use of AI-driven systems for detecting legal infringements and issuing penalties. Drawing on empirical examples from Spain – especially the automated enforcement of low-emission zones in Barcelona and labour inspection procedures – it examines the implications of these practices for fundamental principles of administrative law. The paper argues that current regulatory frameworks insufficiently address the specific risks posed by automated sanctioning, such as false positives, rigidification of penalties, and erosion of procedural guarantees. It therefore advocates the development of tailored legal safeguards capable of reconciling technological efficiency with the core requirements of the rule of law and the protection of individual rights.
1. Introduction
The automation of tasks and processes, especially fostered by artificial intelligence, has expanded rapidly across the public sector. For years, public administrations have successfully deployed chatbots to provide information and guidance to citizens[1], adopted automated systems to detect and punish traffic infringements[2], and used algorithmic tools for fraud detection[3], to name only some of the most paradigmatic and widely documented examples.
Moreover, the European Commission’s comprehensive assessment of 940 AI use cases across Europe illustrates the breadth and maturity of these deployments[4].
This finding is also reinforced by Grimmelikhuijsen & Tangi[5], whose study draws on insights from over 500 public managers across various European countries. Their results highlight a paradigm shift whereby AI is no longer perceived as merely an aspirational tool but rather as an operative component within contemporary public services, increasingly integrated across a wide range of activities.
Within this broader trend, automation has begun to transform one of the most sensitive spheres of public administration powers: the monitoring of legal compliance and the imposition of administrative penalties. Using a variety of technical tools – video surveillance systems, speed radars, environmental sensors, automated number plate recognition[6], large-scale data analytics, and algorithmic decision systems[7] – administrations can now conduct continuous monitoring, automatically verify potential infringements, and even issue sanctioning decisions – or penalty proposals – without human involvement. This transformation has been associated with the rise of Smart City infrastructures, leading Sharp to coin the term “smart administrative punishment”[8].
Empirical evidence reveals how widespread these practices have become. Examples include the automated control of polluting vehicles entering Barcelona’s low-emission zone[9]; algorithmic infringement-detection systems within Spain’s Labour Inspectorate[10]; automated speeding sanctions in many countries, most notably France[11]; and successive legislative reforms in Lithuania since 2018 expanding automated administrative punishment across traffic and tax domains[12]. A review of 250 EU cases by Van Noordt & Misuraca[13] further documents computer-vision systems used in the Netherlands and Belgium to detect mobile phone use while driving.
Automated detection and punishment systems appear to improve rule compliance, as they address traditional administrative limitations such as limited human monitoring capacity or the low perceived likelihood of punishment[14].
This paper examines the legal implications of this transformation in the functioning of public administrations, whose operations are becoming increasingly automated, particularly with regard to their supervisory and sanctioning activities.
In this context, a preliminary clarification is necessary. It should be noted that this study will focus exclusively on the administrative sanctioning powers, understood as the authority of public administrations to punish breaches of (administrative) law that are classified as administrative infringements, rather than criminal offences. Accordingly, the paper only reflects on the legal and regulatory framework governing administrative sanctions or penalties, most commonly imposed in the form of fines by public officials or administrative bodies, as opposed to criminal penalties imposed by judicial authorities in response to the commission of a crime.
According to the Council of Europe Recommendation No R(91)1 of the Committee of Ministers to member states on administrative sanctions, those can be defined as «administrative acts which impose a penalty on persons on account of conduct contrary to the applicable rules, be it a fine or any punitive measure, whether pecuniary or not». As for context, the same recommendation states that «administrative authorities enjoy considerable powers of sanction as a result of the growth of the administrative state as well as a result of a marked tendency towards decriminalisation».
Naturally, interactions may arise between criminal and administrative sanctioning powers. The most paradigmatic concern regarding these interactions is the overlap of sanctioning powers, which raises the risk of punishment through both administrative and criminal channels[15]. Undoubtedly, an automated system of administrative surveillance may detect conduct amounting to a criminal offence. In such cases, however, the administrative inspectorate will refer the file to the Public Prosecutor’s Office or to the competent judicial authority, and any subsequent investigation and, where appropriate, sanctioning will be conducted in accordance with the safeguards and rules of criminal procedure, avoiding the risk of overlapping penalties, in accordance with the applicable national or regional rules.
Nevertheless, the focus of this article lies exclusively on the manner in which public administrations are automating both the detection of legal infringements and the imposition of sanctions, leaving aside the interactions with criminal investigations and sanctions, since in those cases the regulatory framework would be different. Accordingly, the instruments and procedures discussed will be limited to those operating within the administrative sphere of the State’s exercise of its ius puniendi.
With these considerations in mind, the paper begins by illustrating how, in Spain, the automation of administrative supervisory and enforcement activities is already envisaged within the legal system. While a general regulatory framework governing automated administrative action exists, certain specific areas are subject to more detailed rules that have significantly developed the regulation of automated control functions. Two such examples are the Barcelona Metropolitan Area ordinances on low-emission zones and the Labour Inspectorate procedures for labour law violations.
In both cases, AI-based digital systems are capable of autonomously identifying regulatory infringements and, without any human intervention, generating proposals for administrative sanctions, which are subsequently notified to the alleged offender.
After presenting these two paradigmatic examples of the growing automation within Spanish public administrations in section 2, the paper reflects, in section 3, on the implications of a regulatory framework adapted to automated sanctioning practices, particularly in light of the rights and principles governing administrative procedures. Special attention is paid to those guarantees that operate within the context of the exercise of administrative sanctioning powers.
Section 4 focuses on presenting the argument that the current legal framework regulating AI and automated administrative action does not establish specific guarantees for the automation on the area of administrative sanctioning power, and yet, in view of experiences such as those in Spain described in section 2, it is necessary to move forward in regulating certain specific minimum safeguards for digital systems that are capable of identifying infringements and proposing appropriate administrative sanctions for such breaches. That regulatory proposal would be drafted under the basis of the traditional safeguards and guarantees that are at risk according to what will be presented in section 3.
Finally, section 5 reflects on the growing tension between efficiency-driven automation in public administration and legal safeguards, with some concluding remarks that highlight the need of further legal developments in order to deal with the problems inherent to technology in the use of administrative sanctioning powers.
In this paper, in order to use a more neutral term, “penalty” will be the preferred word, although “sanction” and even “fine” are also used. The first two terms are treated as synonyms, while “fine” is reserved for those sanctions consisting in the payment of a sum of money to the public administration.
2. Two Spanish Rules in Force Allowing Automated Controls and Penalty Proposals
This section addresses Spanish regulations at different levels that provide for fully automated actions to detect infringements and propose applicable penalties. Specifically, a selection has been made of two legal instruments: a municipal ordinance and a state law. Those legal instruments foresee the possibility of carrying out administrative control actions without human intervention.
In the two selected cases, a high degree of automation is clearly observed, unlike other provisions in the Spanish legal system which, although regulating occasional automated functions, reserve a final decision-making role for the competent public official or authority. One such case is, for example, the innovative Valencian Law 22/2018 of 6 November, on the General Inspection of Services and the Alert System for the Prevention of Misconduct, which regulates a digital red-alert system, albeit with a very limited degree of automation.
In contrast, in the two areas presented below, the regulated actions can be fully automated, without human intervention, and may have legally binding sanctioning effects on infringing individuals. The two selected areas are the control of low-emission zones in Barcelona and the Labour Inspectorate procedures for labour law violations. These areas are currently regulated by quite recent rules that show the improvements and advancements in automating administrative sanctioning powers, particularly as regards the detection of infringements and the issuing of sanctioning proposals. However, this automation race, as will become especially clear in the case of the low-emission zone, involves risks.
2.1. Barcelona’s Ordinances on Low-Emission Zone
Spanish Law 7/2021 of 20 May, on climate change and energy transition, establishes in Article 14 that municipalities meeting certain characteristics, which will be addressed below, were required to establish, before 2023, low-emission zones within their respective sustainable urban mobility plans, introducing mitigation measures aimed at reducing emissions arising from mobility.
The same Article 14 of Law 7/2021 defines low-emission zones in the following terms: «the area delimited by a public authority, in the exercise of its powers, within its territory, of a continuous nature, in which restrictions on vehicle access, circulation, and parking are applied in order to improve air quality and mitigate greenhouse gas emissions, in accordance with the classification of vehicles by their emission levels as established in the current General Vehicle Regulations».
Pursuant to Article 14.3 of the aforementioned Law 7/2021, this obligation applies, as a general rule, to all municipalities with more than 50,000 inhabitants and to island territories. Exceptionally, it may also apply to municipalities with more than 20,000 inhabitants, but only when the limit values of the pollutants regulated in Royal Decree 102/2011 of 28 January, on the improvement of air quality, are exceeded.
Some of the most densely populated areas in Spain, such as Madrid and the metropolitan area of Barcelona, had already begun to restrict the circulation of polluting vehicles in urban centers, establishing low-emission zones by means of municipal ordinances in 2019, anticipating what would later be imposed as a general obligation throughout Spain by the aforementioned 2021 Law.
Those initial ordinances were challenged, and, essentially, the courts ultimately annulled, in whole or in part, the local ordinances establishing the low-emission zones due to deficiencies and flaws in the reasoning or in the procedure followed (see the judgments of the High Courts of Justice of Madrid of 27 July 2020, ECLI:ES:TSJM:2020:9774 and ECLI:ES:TSJM:2020:8151; and of Catalonia of 21 March 2022, ECLI:ES:TSJCAT:2022:1576)[16]. In what follows, references will be made exclusively to Barcelona, which is where the automated control system Chronos-Eco is used, to which I will refer later in this section.
Following that adverse judicial ruling from March 2022, the Barcelona City Council worked on a new ordinance, which was finally approved in 2023. This new 2023 Barcelona low-emission zone ordinance was also challenged before the High Court of Justice of Catalonia, which, in a judgment of 24 November 2025 (ECLI:ES:TSJCAT:2025:6428), dismissed the appeal against the Ordinance of the Barcelona City Council of 27 January 2023, which establishes the criteria for vehicle access, circulation, and parking within the Barcelona Low-Emission Zone and promotes zero-emission mobility, thereby upholding the legality of the new regulation.
Article 10 of the 2023 Ordinance establishes the prohibition on circulation within the low-emission zone on specific days and hours of the week: «1. With the aim of improving air quality and mitigating greenhouse gas emissions, as well as mitigating the impacts of climate change and protecting public health, access to and circulation within the Barcelona low-emission zone by the most polluting vehicles is prohibited. 2. The measure established in section 1 applies from Monday to Friday on working days, between 07:00 and 20:00». There is the possibility of obtaining occasional access authorizations, as well as justified situations in which access by polluting vehicles would be permitted, as detailed in other articles (Articles 12, 14, and 15).
For the purposes of this study, it is particularly relevant to refer to Article 16, as it provides the legal basis for the use of an automated system to monitor compliance with access restrictions, and to identify and sanction offenders. Its literal wording is as follows: «Compliance with the provisions of this Ordinance shall be monitored through a license-plate recognition system and the City Council’s technological platform using automated means, without prejudice to the powers assigned to the Urban Guard and in compliance with the applicable regulations on the capture and use of images and the protection of personal data».
In these respects, the 2019 and 2023 ordinances are equivalent, and the same automated technological control solution has been used in the metropolitan area of Barcelona from 2019 to the present. Therefore, despite the change in the regulatory framework, it can be said that the Chronos-Eco system has been monitoring access to the Barcelona low-emission zone since 2019.
The technical solution used to carry out this monitoring consists of cameras installed around the perimeter of the low-emission zone, connected to a system that processes the images in order to identify the license plates of vehicles entering the zone. These data are cross-checked against the database containing the traffic environmental labels assigned according to the pollution levels produced by the vehicle associated with that license plate, so as to determine automatically whether the obligation not to access the zone with a polluting vehicle is being complied with; and, in the event of non-compliance, to automatically validate the penalty by identifying the corresponding infringement and sanction, so that the proposal of penalty can be notified to the registered owner of the vehicle[17].
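No technical documentation of the Chronos-Eco system's internals is publicly available, but the decision logic just described can be sketched, purely for illustration, roughly as follows (the label set, registry structure, function names, and fine amount below are assumptions drawn from the Ordinance's description, not the actual implementation):

```python
# Purely illustrative sketch of the automated control logic described in the
# text. RESTRICTED_LABELS and the data structures are hypothetical; this is
# NOT the actual Chronos-Eco system.

RESTRICTED_LABELS = {"NONE"}  # assumption: vehicles without an environmental label


def check_entry(plate, label_registry, authorised_plates):
    """Return a penalty proposal if the detected plate belongs to a vehicle
    barred from the low-emission zone, or None if no infringement is found."""
    if plate in authorised_plates:  # occasional access authorisation (cf. Art. 12)
        return None
    # Cross-check the plate against the environmental-label database;
    # unregistered plates are treated as unlabelled.
    label = label_registry.get(plate, "NONE")
    if label in RESTRICTED_LABELS:
        # Fixed fine under Article 20 of the 2023 Ordinance.
        return {"plate": plate, "infringement": "Art. 10", "fine_eur": 200}
    return None
```

Notably, a pipeline of this kind operates on the license plate alone: a vehicle being towed through the zone is indistinguishable from one circulating under its own engine, which is precisely the false-positive pattern examined in the remainder of this section.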
So, what happens if a vehicle that is prohibited from accessing the low-emission zone due to its high level of pollution is detected entering it? In this regard, the Ordinance establishes a system of infringements and penalties. Article 20 provides that «failure to comply with the prohibition established in Article 10 constitutes a serious infringement, punishable by a fine of 200 euros». Furthermore, exceptionally, when air quality is worse, the penalty is set at 260 euros, in accordance with the literal wording of section 2 of Article 20: «Where the infringement is committed during an air pollution episode declared by the competent authority of the Government of Catalonia, the amount of the fine shall be 260 euros».
These penalties are also supported by Articles 80 and 81 of Royal Legislative Decree 6/2015 of 30 October, approving the consolidated text of the Law on Traffic, Motor Vehicle Circulation, and Road Safety, which generally provides for fixed-amount fines (200 euros, with a 30% increase in cases of greater severity).
Since its inception, the automated monitoring and enforcement system of the low-emission zone has been controversial. From the very early stages of its operation, it was common for proposed fines to be issued for access to the zone by polluting vehicles detected by the cameras through license-plate recognition, even though those vehicles were not in breach of the access restriction regime: they were being towed by a tow truck, on their way to a repair workshop or to a scrapyard. The vehicle was therefore not circulating within the area and polluting the air, since it was not being propelled by its own engine. At the time, in 2020, the press described these fines as anomalous (“insólita”) fines[18].
This problematic situation, far from being resolved, appears to have persisted, which ultimately led, in 2023, to the Barcelona City Council publicly requesting tow truck operators to cover up license plates in order to avoid these erroneous and anomalous automated penalties.[19] It is, however, striking that the burden of taking action to mitigate the shortcomings of the public administration’s automated systems has been shifted onto private operators (tow truck drivers), instead of adopting internal administrative measures to resolve the problem, such as assigning a public officer to manually review and verify all proposed sanctions.
Moreover, a case of an erroneous automated fine was brought to the attention of the local ombudsman, the Sindicatura de Greuges of Barcelona, which resolved the citizen’s complaint in May 2024. In that case, a fine proposal was automatically issued to a car owner for allegedly driving a polluting car within the low-emission zone when, once again, it was the municipal tow truck that was transporting the vehicle within the zone.
This was the picture included in the fine proposal:
Source: Sindicatura de Greuges de Barcelona (2024).
As can be clearly seen, the allegedly polluting vehicle has no driver: it is being towed by a tow truck and, furthermore, if it were moving under its own power, it would be travelling in the wrong direction, as indicated by the directional arrows painted on the road surface.
Throughout the entire process, from image capture to the issuance of the proposed sanction, as confirmed by the municipal Sindicatura de Greuges, no employee of the City Council viewed the captured images. As a result, the system amounted to fully automated enforcement, in which errors occurred that would have been evident to a human reviewer had the images been examined. This situation was, according to the Sindicatura de Greuges de Barcelona, contrary to the principles of good administration and effectiveness of public service.
To conclude this section, I would like to point out that this case, concerning the low-emission zone, may be paradigmatic in highlighting two aspects: the tendency toward uniformity – or the suppression, if not disregard, of the role of discretion – in the determination of sanctions, through fixed amounts that can be applied in a fully automated manner; and the risk of digital false positives, which, although potentially avoidable as they would be evident to a human observer or controller, are multiplied by the use of high-volume automated systems (such as traffic surveillance) and could have widespread consequences.
To prevent these problems, the regulations would need to establish additional safeguards. Surprisingly, however, the legislation provides no such additional guarantees against these risks (loss of adaptation of the sanction to the specific case; the risk of receiving an unjust sanction due to a false positive), nor has the public administration implemented organizational or procedural measures to meaningfully address them, allowing improper sanctions to occur for years, as we have seen.
2.2. Labour Inspectorate Procedures for Labour Law Violations
In the sectoral field of labour, which, together with tax and traffic control, is one of the areas of public administration where automation has developed most extensively in Spain, regulations specifying actions that may be carried out through automated decisions have existed since 2009. In particular, regarding unemployment benefits, this possibility was introduced through Royal Decree-Law 10/2009 of 13 August, which regulates the temporary unemployment protection and insertion program, in its first final provision[20].
However, the most significant development in automation, at least from the perspective relevant here, that of administrative monitoring and enforcement functions, occurred in 2021, when the Spanish rules on the Labour and Social Security Inspectorate were amended. Article 53 of Royal Legislative Decree 5/2000 of 4 August, approving the consolidated text of the Law on Infringements and Sanctions in the Social Order, provides, following that 2021 reform[21], that infringement reports issued by the Labour and Social Security Inspectorate may be generated in an automated manner.
In the same vein, Royal Decree 928/1998 of 14 May, approving the General Regulations on Procedures for the Imposition of Sanctions for Social Order Infringements and for Social Security Contribution Settlement Files, was also amended in 2021 in order to elaborate further on automated action, with a specific regulatory framework in Articles 43 et seq., incorporating notable safeguards regarding human intervention in cases of disagreement with the automated system’s criteria.
It is envisaged that the automated system of the Inspectorate may carry out the checks automatically and even issue the infringement reports on its own (Articles 44 and 45 of Royal Decree 928/1998). These reports are notified and, if the indicated fine has not been paid and no explicit disagreement with the content of the report has been expressed, the system may automatically generate the decision containing the proposed penalty, so that the subsequent steps of the labour administrative sanctioning procedure may be pursued before the competent labour authority.
Accordingly, this Royal Decree provides that «If allegations are made invoking facts or circumstances different from those stated in the record, inadequacy of the factual account in said record, or unfairness for any reason» in response to the automated infringement report, then «the case must be assigned to an officer with inspection functions, to report on them». In consequence, in this case, the role of the human inspector appears to be relegated to that of a mere reviewer of the automated system, intervening only in the event of the recipient’s disagreement with the automated control received.
However, it is not specified which inspection and control activities, with respect to compliance with labour and Social Security obligations, will be carried out in an automated manner, as this task is deferred to a subsequent administrative act. In this regard, it is stated that this will be determined by «a resolution of the Director of the State Body for Labour and Social Security Inspectorate, to be published in the electronic headquarters», specifying the cases in which such automated action will be used (Article 43 of Royal Decree 928/1998).
This is the formal, legal description found in the legislation regulating labour inspection activities. However, this is not sufficient to determine whether these provisions have had any practical effectiveness or implementation, that is, whether labour inspection and sanctioning activities are actually being automated. Unfortunately, very little information is currently available in the public domain on this matter.
The European Labour Authority (ELA) conducted a study visit to Spain on 6 February 2024 in order to examine the automated procedure on formal infringements operated by the Spanish Labour and Social Security Inspectorate. ELA published the corresponding review and, according to that paper, the Spanish inspectorate does indeed run a pilot automated penalty system for employers that commit formal infringements, although at the time of the visit it was still operating under human oversight, despite being designed to work autonomously[22].
According to the description in the ELA’s executive summary of the Study Visit to Spain: Automated Procedure on Formal Infringements, the main objective of the project is to leverage existing data sources, such as Social Security registration and declaration information, to automatically and proactively identify corporate behaviors that breach uncontested formal obligations, such as notifications of worker registrations within established deadlines. These types of non-compliances, referred to as formal infringements, do not require complex or subjective investigation by an inspector at the initial stage, making them particularly suitable for automated processing.
Operationally, the pilot system deployed by the Labour Inspectorate uses algorithms and technological tools to cross-check data from various official sources to detect indications of formal non-compliance. When the presence of such an infringement is confirmed, the system automatically generates a proposed sanction, which can be translated into a notification sent electronically to the offending employer. According to the ELA’s paper, «Infringement notices are mass-generated, and individuals are promptly notified, receiving an invoice that offers a 40% penalty reduction. The electronic notifications speed up the process. Notified individuals can either pay with the reduction and end the procedure or submit allegations. If they submit allegations, the files are transferred to an inspector or sub-inspector. If no action is taken, an automated resolution proposal is forwarded to the competent authority for a final decision»[23].
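The procedural branching quoted in the ELA summary can be condensed, again as a purely hypothetical illustration (function names, return strings, and structure are invented for exposition, not the Inspectorate's actual system), as follows:

```python
# Hypothetical sketch of the procedural flow described in the ELA executive
# summary. Names and wording are invented for illustration only.

def reduced_fine(amount):
    """Apply the 40% early-payment reduction mentioned in the ELA summary."""
    return round(amount * 0.60, 2)


def next_step(paid, allegations):
    """Route a notified formal infringement according to the recipient's reaction."""
    if paid:
        return "procedure ended (reduced fine collected)"
    if allegations:
        # Human intervention is triggered only by the recipient's disagreement.
        return "file transferred to a human inspector or sub-inspector"
    # No reaction: the automated resolution proposal moves forward.
    return "automated resolution proposal forwarded to the competent authority"
```

The sketch makes the design choice visible: human review is reactive rather than systematic, entering the procedure only when the notified party objects.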
Although the process is designed to become fully automated, at the current stage the sanctions generated are reviewed by human inspectors before being communicated, thereby ensuring a space for human supervision to address potential errors or misinterpretations that could arise from exclusive reliance on digital computational tools.
This regulatory framework governing labour inspection activities thus reveals the importance of providing agile mechanisms enabling alleged offenders to challenge the soundness of the automated outcome before a human inspector. As seen above, AI mistakes and false positives can be glaringly obvious errors to a human being.
This solution can, at least partially, mitigate the problem of false positives through this additional safeguard, although it will be necessary to ensure that the submission of such arguments is a very simple procedure and does not impose an excessive burden on the alleged infringer. Even so, the caution with which the system is being deployed is evident, since, despite having had legal authorization since 2021 to operate with full autonomy, in 2024 the system was still operating in a pilot phase with limited autonomy, subject to final human supervision of each sanctioning decision proposed by the digital system.
3. New Law for a New Era: Moving Forward or Moving Backwards?
Crucially, this digitally-driven transformation in administrative controls is also altering the regulatory architecture of public powers on punishing law infringements. New punishing regimes designed for automated enforcement display features that depart from long-standing administrative traditions. The ordinance establishing low-emission zones in Barcelona exemplifies this shift: traditional sanction ranges with discretionary minimum and maximum amounts were replaced by fixed, uniform penalties, eliminating the individualised case-by-case assessment.
This form of normative rigidification is likely to expand to other areas in order to enable automated action, as suggested by developments in Denmark’s digitally ready legislation strategy, which seeks to draft legal rules in ways that facilitate automation. As Gøtze[24] has argued, such digitally ready legislation tends to reduce administrative discretion and to constrain contextual evaluation, shrinking the space for case-specific assessment which, one may add, has traditionally been central to punishing practices under the principle of proportionality. Indeed, these changes raise serious legal and constitutional concerns.
Automated sanctioning may be incompatible with core components of the proportionality doctrine, particularly the obligation to tailor sanctions to the circumstances of the specific case and to prevent arbitrary or excessive penalties. By way of example, in the Spanish case, automated administrative punishment entails a direct conflict with the principle of proportionality in at least two respects that the Spanish courts have upheld in connection with the necessary individualisation of penalties, in order to avoid disproportionate sanctions or, more generally, administrative decisions adopted without duly taking into account the circumstances of a given case.
Firstly, Spanish Constitutional Court Ruling 199/2014, of 15 December (ECLI:ES:TC:2014:199), held that the administration had violated the principle of proportionality in sanctioning by setting fixed fine amounts, since penalties need to be individualised. As stated in that ruling: «the application of this key determined a fixed amount of the fine, disregarding the principle of proportionality, with no reference whatsoever to the individualisation of the sanction».
A second interesting ruling addresses another relevant safeguard in relation to administrative punishment: the Spanish Constitutional Court has developed the protection of the punished subject against potential arbitrariness of the public administration, holding that penalties must be adapted to the specific case through adequate reasoning as to why the amount imposed is appropriate in each concrete instance, and that constitutional review of the decision remains possible. As stated in a 2021 ruling: «indeed, the individualisation of the penalty, within a range that does not infringe the requirement of predetermination, corresponds to the punishing administration, and the role of this Court in an eventual appeal proceeding must be limited to verifying the constitutional correctness of the reasoning developed to justify the specification of the administrative penalty in light of the circumstances present in the case, expressly taking into account that the mere passage of time may cause the amount of penalties to lose relevance, in proportion to the conduct that may be subject to punishment»[25].
A third relevant ruling is found in the Supreme Court Judgment of 5 June 2025 (ECLI:ES:TS:2025:2596), which established that it is possible to apply the penalty corresponding to a less serious category of infringement when, if the penalty applicable to the formally classified offence were imposed, the outcome would be excessive in light of the circumstances of the case. As stated by the Supreme Court: «the Chamber considers that there are sufficient arguments to regard this as a new criterion for modulating the imposition of administrative sanctions in justified cases where the strict application of the legal classification of a predetermined sanction may lead to a disproportionate result». In this regard, the principle of proportionality is given precedence over the principle of typification, understood as an element of the broader principle of legality in sanctioning matters.
In short, the Constitutional Court has established that standardising punishments in the form of fixed sanctioning amounts infringes the principle of proportionality by preventing their individualised application to the specific case, and it affirms that such case-specific application must be supported by adequate reasoning capable of dispelling any trace of arbitrariness. Going a step further, the Supreme Court even introduces a new requirement derived from the proportional application of administrative penalties on a case-by-case basis, consisting in assessing whether a less serious category of infringement should be recognised in the particular case in order to adjust the penalty accordingly.
Looking at other legal systems, further issues linked to proportionality can be identified which clearly require a case-by-case assessment before deciding on the sanction. For instance, in the case of the Czech Republic, constitutional doctrine established in 2003 implies that a fine cannot be imposed in an amount that would ultimately be liquidating (i.e. ruinous) for the sanctioned entrepreneur[26].
Moreover, at the European Union level, it has been noted that since the Greek Maize case of 1989[27] there has been a consistent trend, including through secondary law instruments, to require that sanctions, whether imposed through criminal or administrative proceedings, be «effective, proportionate and dissuasive» (Judgment of the Court of 21 September 1989, Case C-68/88, Commission v. Greece, ECLI:EU:C:1989:339, paragraph 24). In this regard, a solid and more advanced construction now exists concerning the “principle of proportionality” as such, insisting, among other aspects, on the extension of the principle to the assessment of the factors that may be taken into account in the fixing of fines (Judgment of the Court (Sixth Chamber) of 19 June 2025, Case C-671/23, M v Lietuvos bankas, ECLI:EU:C:2025:457, paragraphs 45-63).
From a rights-based perspective, automated sanctioning, that is, issuing a fine proposal whenever an infringement is automatically detected, threatens two rights regarded as relevant safeguards within the common core of administrative law in Europe: the right to be heard and the duty to give reasons when an administrative decision affects someone, especially in a detrimental manner[28]. Moreover, both rights are recognised in common rights charters in Europe: the European Court of Human Rights has confirmed their inclusion under the right to a fair trial in Article 6 of the European Convention on Human Rights[29], and both are also expressly recognised as part of the right to good administration under Article 41(1) of the EU Charter of Fundamental Rights.
Moreover, the full automation of the sanctioning procedure would not, in principle, be compatible with data protection rights.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, known as the General Data Protection Regulation (GDPR), establishes in Article 22 the right not to be subject to a decision based solely on automated processing, including profiling, when such a decision produces legal effects concerning an individual or similarly significantly affects them.
However, this right may be limited, and full automation may be permitted where provided for by European Union law or by the law of a Member State, although in any case suitable measures must be established to safeguard the data subject’s rights, freedoms and legitimate interests. Nevertheless, no further details are provided, in the case of this exception, on when and how to meet the protection standard required to counterbalance the exemption from the right not to be subject to fully automated processing. There has also been discussion on whether automated administrative action frameworks comply with Art. 22 of the GDPR[30], a question that may even be answered in the negative where legislation on administrative automation is scarcely regulated and citizen safeguards are underdeveloped or absent, as Reichel has argued in relation to the clause in Article 28 of the Swedish Administrative Procedures Act[31].
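The structure of Article 22 GDPR described above can be condensed into a simple decision rule. The following Python sketch is a reading aid only, not a legal test: the boolean inputs and the function name are deliberate simplifications introduced for illustration.

```python
def art22_prohibition_applies(solely_automated: bool,
                              legal_or_significant_effect: bool,
                              eu_or_member_state_law_basis: bool,
                              suitable_safeguards: bool) -> bool:
    """Simplified reading of Art. 22 GDPR: the prohibition covers a decision
    based solely on automated processing with legal or similarly significant
    effects, unless a Union or Member State law authorises it AND lays down
    suitable measures to safeguard the data subject's rights and interests."""
    if not (solely_automated and legal_or_significant_effect):
        return False  # Art. 22(1) is not engaged at all
    legislative_exception = eu_or_member_state_law_basis and suitable_safeguards
    return not legislative_exception

# A fully automated fine with no legislative basis: the prohibition bites.
print(art22_prohibition_applies(True, True, False, False))  # True
# The same decision, authorised by national law with safeguards: permitted.
print(art22_prohibition_applies(True, True, True, True))    # False
```

The second call illustrates the point made in the text: the exception only works where the authorising legislation actually provides the counterbalancing safeguards.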
In short, it seems that these requirements, which have been developed over decades to limit the arbitrary use of administrative sanctioning power, are at risk of losing their validity, in parallel with the expansion of automated administrative sanctioning.
In accordance with the guarantees set out above, in order to determine the amount of a fine, the public administration must first exercise its discretion to propose the appropriate penalty in view of the circumstances of the specific case, and it cannot do so without hearing the person who is to be punished.
Faced with this contradictory situation, several lines of thought and action are possible. Firstly, it could be argued that the implementation of digital tools that automate administrative control and sanctions operates informally, that is, outside administrative procedures in the strict sense, and that the fines generated are merely proposed penalties with no legal effect, since they can be voluntarily accepted or challenged. That could indeed be a valid line of thinking, but it overlooks a fundamental issue: in practice, these proposed sanctions often become firm administrative penalties with full legal effect simply through the passage of time without allegations having been submitted. It also overlooks the fact that, for the person to whom the penalty is proposed, submitting arguments may require the same technical advice as filing a formal administrative appeal. This line of thinking therefore relies excessively on formal categories, ignoring that, at a practical level, the difference between a proposed penalty and a formal administrative sanction is not so great for the person being punished.
Secondly, it may be thought that those principles (in particular, proportionality) were designed for a society very different from the present one, now shaped by AI and extensive automation possibilities, and that technological progress should also transform the way administrative oversight is carried out and administrative sanctioning powers are exercised. After all, the overall impact of automation, from the perspective of effectiveness, seems positive in reducing law infringements, as some papers illustrate[32], which could justify, to some extent, an optimistic view of digitalization and automation within public administrations. Consequently, this view would maintain that the traditional safeguards no longer fit the new material or practical context and should be reframed so as also to validly encompass automated decision-making, reformulating those safeguards in light of the new reality of administrative oversight. This more utilitarian view prioritizes outcomes, efficacy, and efficiency at the expense of other aspects it would consider less relevant once weighed against them. According to the aforementioned survey by Grimmelikhuijsen & Tangi[33], it is relatively common for public managers to believe or expect that AI will help their organisation reap benefits. From a legal standpoint, however, this position cannot be upheld, since the guarantees I have described regarding the exercise of administrative sanctioning powers are linked to fundamental rights, and lower-ranking rules cannot limit or conflict with these guarantees.
Thirdly, one may adopt a rigid formal legal stance, arguing that, if fully automated administrative sanctioning action does not currently fit within the legal safeguards outlined above, it is unfeasible and no public administration should employ it for the time being, until automated systems allow for fair assessments adaptable to the specific case and for effective interaction with the offending individual.
This approach, the most protective of legal guarantees and the most rigid from a legal standpoint, would not accommodate current techniques of automated administrative oversight, since these systems are generally based on fixed-amount fines (potentially contrary to the principle of proportionality as applied to the sanctioning power) and on infringements detected through isolated objective parameters, without interpretation of the overall context of the case or of aspects requiring subjective assessment (for example, they take into account only whether a car’s license plate crossed a specific point where passage was prohibited). Moreover, they allow no dialogue with the system through which the person to be fined could be heard before the sanction proposal is issued and notified.
In light of all these possible approaches, and the risks they involve, is there room for technologies that automate certain aspects of administrative oversight and the exercise of sanctioning powers, while still providing citizens with sufficient legal safeguards and ensuring an adequate level of legal certainty for administrative action?
My personal view at this point is that automated administrative sanctioning may ultimately become legally valid and consistent with the core safeguards developed in legal doctrine. However, the path is not straightforward, and much progress remains to be made within legal systems: given the current state of technology, additional safeguards will be needed to compensate for the limitations these systems may have. Indeed, as will be seen in the following section, regulatory frameworks are not fully aligned in regulating automated administrative action, and there is room for further developing a coherent set of safeguards to address the issues identified.
4. Automation and AI: the regulatory framework and the need for safeguards in automated punishment systems
Despite the scale and speed of these transformations, regulatory responses remain fragmented and conceptually underdeveloped. In fact, legal approaches across Europe – and even among different regions within the same country – are disparate and sometimes even diametrically opposed.
In this regard, the starting point is that automated decision-making does not always require the use of AI. As has been widely analysed and accepted, automated public action will in some cases rely on AI, while in other cases it will be based on less sophisticated systems that do not fall within the definition of AI[34].
Starting from this premise, the fragmentation of the regulatory framework governing automated administrative sanctioning systems becomes evident: state-level legislation – or, where applicable, regional or local rules – on automated decision-making will always apply, while AI-specific regulations (European, national and, where relevant, regional or local) may additionally apply, but only when the system qualifies as AI.
Moreover, another element of controversy lies in the lack of a unified theoretical and definitional framework concerning automated administrative action. When referring to automated sanctioning, at least three distinct dimensions can be identified: automation in detection (e.g., cameras, recognition algorithms); automation in the legal qualification of facts (systems that classify conduct as an infringement); and automation in decision-making (where the digital system determines the penalty applicable to the infringement). Even the generation of the document to be notified to the punished person may be subject to automation.
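The three dimensions just distinguished can be pictured as consecutive stages of a processing pipeline. The sketch below is purely illustrative: every name, data structure and amount is an assumption introduced for exposition, not drawn from any real system described in this paper.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified model of the three automatable dimensions:
# detection, legal qualification, and decision-making.

@dataclass
class Observation:
    plate: str               # licence plate read by a camera
    in_restricted_zone: bool # whether the plate crossed the perimeter
    plate_authorised: bool   # whether the plate is on the permitted list

def detect(obs: Observation) -> bool:
    """Dimension 1: automated detection (cameras, recognition algorithms)."""
    return obs.in_restricted_zone and not obs.plate_authorised

def qualify(detected: bool) -> Optional[str]:
    """Dimension 2: legal qualification of the facts as an infringement."""
    return "unauthorised access to low-emission zone" if detected else None

def decide(infringement: Optional[str]) -> Optional[int]:
    """Dimension 3: decision-making; a fixed amount illustrates the
    rigidification discussed in the text (no individualised assessment)."""
    FIXED_FINE_EUR = 100  # hypothetical amount, for illustration only
    return FIXED_FINE_EUR if infringement is not None else None

obs = Observation(plate="0000AAA", in_restricted_zone=True, plate_authorised=False)
print(decide(qualify(detect(obs))))  # the same fixed amount in every case
```

Because each stage can be automated independently, a given system may automate detection only, detection plus qualification, or the full chain, which is precisely why a unified definitional framework matters.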
In the legal traditions of certain countries, such as Spain[35] or Germany[36], automated administrative action refers only to fully automated processes, without human involvement, which in our field would mean covering all automatable stages of the cycle, from the detection of non-compliance to the drafting of the proposed sanction. Consequently, these systems, which can operate without human oversight and deliver decisions with negative effects on individuals, seem to require a higher and more specific set of safeguards than systems that automate that control and punishing activity only partially.
With regard to the regulation of automated administrative systems, with or without AI, legislative responses vary significantly. In Spain, basic state legislation allows automation in all cases, subject to certain safeguards[37]. However, in some Autonomous Communities (the Spanish regions), the use of automated systems is expressly limited. Thus, Article 44.2 of Catalan Law 26/2010 of 3 August, on the legal regime and administrative procedure of the public administrations of Catalonia, provides that «only those acts that can be adopted through programming based on objective criteria and parameters are susceptible to automated administrative action». In a similar vein, Balearic Law 7/2024 of 11 December, on urgent measures for the administrative simplification and rationalisation of the public administrations of the Balearic Islands, provides in Article 66.2 that activities involving value judgments may not be carried out by means of automated administrative actions.
Likewise, in Germany, fully automated administrative acts have been permitted since 2017, but with a limit: when using automated means, «the administrative authority must not have any discretion» in the activity carried out (Section 35a VwVfG).
As an intermediate solution, we can consider the Lithuanian case. Paužaitė-Kulvinskienė & Strikaitė-Latušinskaja present developments in the Code of Administrative Offences of the Republic of Lithuania No. XII-1869 of 2015, which permits automated sanctioning in certain specifically listed cases considered «typically clear, objective, and indisputable», notably in the traffic and tax domains[38].
Furthermore, where such automated administrative sanctioning systems are based on AI, additional rules, specifically tailored to AI, apply. As an interesting example, we can consider Galician Law 2/2025 of 2 April, on the development and promotion of artificial intelligence in Galicia. As established in Article 1.2, this statute lays down «the legal framework governing the use of artificial intelligence by the Galician public sector in the exercise of its activities and in its relations with citizens, businesses, entities, and other public administrations».
Among other noteworthy aspects, attention may be drawn to the recognition of the right to request the review of decisions made by AI systems on the grounds of technical deficiencies, unreliable methods, or bias (Article 25 of Galician Law 2/2025); as well as the establishment of safeguards to prevent bias in AI systems, and the limitation on the use of AI systems for the adoption of automated administrative acts, such that, in principle, they may not be employed when such acts require a subjective assessment of the relevant circumstances or legal interpretation (Article 12 of Galician Law 2/2025).
However, the possibility of resorting to the use of AI in such cases is envisaged, provided that the additional safeguards established in Article 12.5 of Galician Law 2/2025 are implemented. Among these safeguards is the requirement to design the system «so that it does not allow unsupervised alteration of the system’s or model’s operation and provides simple and easily understandable information about its functioning, enabling affected individuals to comprehend and challenge the outcome».
Moreover, going further on AI-specific rules, we can consider the safeguards for AI considered as high-risk systems under the EU AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828).
The AI Act defines high-risk systems in Article 6, distinguishing between safety-related systems and those listed in Annex III. While some systems used by public authorities are considered high-risk, the types analysed in this paper generally perform tasks that may not fall under this classification. Annex III does refer to law enforcement functions, but, according to its definitions, this concerns criminal investigations: section 6 refers to law enforcement and to functions carried out by law enforcement authorities, generally in relation to criminal offences. Additionally, «law enforcement authority» is defined in Article 3(45) and «law enforcement» in Article 3(46), both emphasising the connection with the detection and prosecution of crimes and the enforcement of criminal sentences. Consequently, since the administrative authorities using these systems lack jurisdiction to prosecute crimes, their primary purpose being limited to detecting and punishing administrative infringements, these systems cannot fall under that category of high-risk systems. While they could incidentally detect criminal activity, this is not their main function.
On the other hand, section 5 of Annex III refers to AI systems related to the access to and enjoyment of public services and benefits. In particular, subsection (a) contemplates systems used by public authorities to grant or revoke essential benefits or services. AI-based control systems could obviously be used to detect fraud in public assistance, such as the systems used by the Labour Inspectorate in Spain to detect whether a person is fraudulently receiving subsidies or benefits. This could lead us to consider, at least hypothetically, whether that clause might be interpreted broadly so as to apply high-risk AI safeguards to systems that may produce such indirect consequences, even if their primary purpose is different, such as detecting fraud. Hence, if the system detects a fraud, and the consequence of the fraud is the revocation of the benefit, are we dealing with a high-risk AI system?
In this regard, although the provision is not entirely clear, we must consider that subsection (b) specifically excludes from high-risk status systems used to detect financial fraud, which, to some extent, can have the same revoking effect (for benefits or subsidies) as a consequence of detecting fraud.
Consequently, we can say at present that the systems used for administrative punishment in this context do not clearly meet the criteria of Annex III and are unlikely to be classified as high-risk. This points to a recommendation: AI-based systems used to detect and/or punish administrative infringements should be included in Annex III, since their consequences for citizens can be seriously detrimental.
Finally, to sum up, when automated action rules and AI regulations are considered together, some common ground emerges: the limitation of digital means in relation to tasks or activities implying discretion or the need for some form of relevant subjective judgment. In this vein, Ponce argues in favour of a «legal reservation of the exercise of discretionary powers in favour of human beings», in order to ensure «the human exercise of empathy in the diligent weighing of facts, rights, and interests that constitutes the proper administration of discretionary powers», which he has termed a reservation of humanity (‘reserva de humanidad’)[39].
Sanctioning activity is particularly subject to discretion and requires individualized assessment, as established by the principle of proportionality. For this reason, it seems difficult to conceive that it could be fully automated, given the legal safeguards surrounding the use of technology.
However, as some examples have shown, not all sectoral areas are likely to require the highest level of human rigor when detecting infringements. As shown by the approach followed in the Lithuanian Code or by the “formal infringement” approach of the Spanish Labour Inspectorate, it may be possible to identify a small number of infringements that can be detected automatically without contextual interpretation, paired with fixed, low-value penalties intended to have minimal impact on the offender’s assets. Nevertheless, it will always be necessary to weigh the safeguards lost when the individualized logic of justice is bypassed by not tailoring the sanction to the specific case. Moreover, another problem emerges, as shown by the low-emission zone in Barcelona: even when the infringement is objective and easy to detect through simple data-crossing, the system can produce false positives, both through bugs and through misreading the context (e.g. a polluting car being carried on a tow truck).
In consequence, even when automated compliance control is confined to a few specific infringements, extra safeguards need to be in place in response to this phenomenon. These safeguards should, at least, ensure that individuals are informed of the automated nature of control activities and guarantee the possibility of promptly appealing any automated decision to a human authority or inspector.
A positive example can be found in the aforementioned Royal Decree 928/1998 of May 14, as amended in 2021. Firstly, automatically generated infringement reports must explicitly indicate that they result from an automated administrative process and must specify the means used to verify the supporting facts (Article 45). Secondly, any objections to these reports must be reviewed by human inspection staff (Article 47.3), ensuring human oversight. These objections can be submitted without legal representation and at no cost to the individual, although seeking legal counsel would entail personal expenses not recoverable from the administration.
Awareness that an alleged infraction and a penalty were identified and generated automatically can be highly valuable when preparing an objection. The individual may simply argue that the system made an error, attaching relevant documentation to support their claim that no infraction occurred, or, simply, pointing out the incoherence between the legal reasoning and the facts observed in the data input (e.g. the polluting car being towed through the low-emissions perimeter). Human review of the objection opens the door to identifying false positives, annulling improperly proposed penalties, and ideally prompting internal review of the AI system to address the source of the error.
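The towed-car scenario shows why human review of objections is decisive: the automated check sees only isolated parameters, while a reviewing inspector can weigh the context. The sketch below is hypothetical (the function names and boolean inputs are assumptions made for illustration, not a description of any actual system):

```python
# Hypothetical contrast between an isolated-parameter check and the
# contextual review performed by a human inspector at the objection stage.

def automated_check(plate_in_zone: bool, plate_authorised: bool) -> bool:
    """The system asks a single question: did an unauthorised plate cross
    the perimeter? The wider context is invisible to it."""
    return plate_in_zone and not plate_authorised

def human_review(fine_proposed: bool, vehicle_was_towed: bool) -> bool:
    """An inspector reviewing the objection can annul the proposal when the
    context shows no infringement occurred (the car was not circulating)."""
    return fine_proposed and not vehicle_was_towed

proposal = automated_check(plate_in_zone=True, plate_authorised=False)
print(proposal)                                        # True: false positive
print(human_review(proposal, vehicle_was_towed=True))  # False: annulled
```

The gap between the two functions is exactly the space the safeguards discussed above are meant to fill: the context that invalidates the proposal exists, but only the human stage can take it into account.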
Furthermore, the right to request a review of AI-based decisions is explicitly recognized in certain legislation, such as Article 25 of the Galician Law 2/2025. This provision allows individuals to challenge decisions based on technical deficiencies, unreliable methods, or biases in the system, strengthening the protection of rights against errors or unfairness generated by automated processes.
However, discouragingly, these safeguards related to the automation of public administration are implemented in a sector-specific rather than a general, cross-European manner. Consequently, their application is limited to certain sectoral control activities or to specific digital tools.
In general, legal scholarship has made significant efforts to study AI, and there have also been some interesting developments in AI regulation, such as the EU AI Act and some national rules. However, the study and development of legal systems regarding the use of these automated digital tools for administrative sanctioning remains incomplete. This is partly due to the lack of a comprehensive development of administrative sanctioning powers in Europe, and especially because the treatment of systems that may be automated without AI has not been addressed at European level, despite their growing practical application.
Administrative sanctioning law, in general terms, has historically lacked its own distinct identity, having been largely constructed with inspiration from criminal law, treating both punishing sources as complementary components of a single ius puniendi, governed by equivalent rules and principles.
Nonetheless, for more than a decade now, scholars have begun to speak of an administrative sanctioning law that is freeing itself «from the traditional constraints imposed by criminal law»[40]. Moreover, it has been observed – sometimes with a degree of caution – that in several areas sanctioning is also a way to administrate[41], highlighting the clear lack of alternative means to ensure compliance with legal norms. This occurs alongside the expanding use of sanctioning powers by public administrations in diverse areas, aimed at deterring potential infractions[42].
In light of this landscape, marked by regulatory fragmentation and the insufficiency of current safeguards, it is essential to move towards a coherent and harmonised system for the regulation of digital systems that automate administrative punishing powers in Europe.
Firstly, a reform of national administrative procedure laws is required to establish a common minimum standard of safeguards for any automated administrative action with sanctioning effects, regardless of whether it employs AI or simpler digital systems. This standard should include, at least: 1) a “reservation of humanity” that limits full automation to infringements of an objective and rule-bound nature, excluding those that require contextual or discretionary assessments, reducing to a minimum the use of fixed sanctions (avoiding the rigidification of penalties) and always providing reasons for their appropriateness, since fixed sanctions cannot be adapted to the specific circumstances of each case and therefore place essential elements of the principle of proportionality under strain; and 2) the right to meaningful human oversight, allowing any citizen to challenge an automated decision through a simple, free procedure that leads to effective review by an inspector empowered to correct “false positives” and, where appropriate, to promote the reprogramming or updating of the digital system to prevent future errors.
Secondly, this procedural standard should be connected with AI regulations, particularly the EU AI Act, through the explicit inclusion in its Annex III of AI systems used for the detection of administrative infringements and the imposition of sanctions as high-risk systems. This classification would entail the application of enhanced requirements for transparency, human oversight, risk management, and traceability throughout the system’s entire lifecycle.
5. Concluding Remarks: Automation, Sanctioning Power and the Future of Administrative Law
5.1. The growing tension between efficiency-driven automation and classical legal safeguards
The progressive automation of administrative monitoring and sanctioning functions marks a profound transformation in the exercise of public power. This digital transformation is not marginal or speculative. As demonstrated by the empirical examples analysed in this paper, automated administrative sanctioning is already a concrete reality. These developments illustrate how automation increasingly substitutes human judgment in decisive procedural stages, including the identification of infringements, the qualification of conduct, and the proposal of penalties.
One of the main attractions of technologies aimed at automated controls is that they make it possible to implement effective enforcement tools. They can even allow the introduction of new legal duties that could not otherwise be effectively imposed, given the practical impossibility of properly monitoring compliance. In the case of low-emission zones, for example, the non-automated alternative would be occasional random checks on vehicles entering the restricted areas, with police officers stopping some vehicles to verify, through a database, whether their license plates are on the list of low-polluting vehicles.
However, traditional (or mainly human) enforcement of such an obligation would require a large number of officers and cause significant disruption for road users, further hindering mobility. It should also be noted that automated monitoring involves fewer burdens or inconveniences for those subject to control, as the system verifies infringements without interrupting the activity being carried out. Thus, access-control systems for low-emission zones do not require vehicles to stop, as mentioned before; likewise, the Labour Inspectorate system does not involve inspectors entering company offices, nor does it interrupt staff activity to obtain documentation or responses to other requests from inspection personnel.
These benefits appear positive at first glance. However, as noted earlier, there is a clear tension with certain rights and principles, particularly the principle of proportionality in the context of sanctioning powers, as well as the rights to be heard and not to be subjected to fully automated decision-making. Automation thereby challenges long-established assumptions about discretion, procedural guarantees, and individualised justice that have traditionally structured administrative sanctioning, which presupposes deliberation, contextual evaluation and meaningful human judgment.
Automated sanctioning systems, however, operate on fundamentally different logics. They are designed to process large volumes of data, apply predefined parameters, and generate standardised outcomes with minimal or no human intervention. This technological architecture favours uniformity, speed, and scalability, but systematically constrains contextual sensitivity, discretionary balancing, and dialogical interaction.
This tension becomes particularly visible in the rigidification of sanctioning frameworks, as illustrated by the replacement of sanction ranges with fixed penalties in order to enable full automation. While such normative choices may facilitate technological deployment, they risk undermining constitutional doctrines of proportionality. In this respect, automation does not merely affect procedural techniques but reshapes the very substance of administrative sanctioning law, and additional safeguards should be in place in order to reestablish a balance.
5.2. False positives and mass enforcement: the problem of systemic error
The empirical cases examined in this paper also reveal that automated sanctioning systems significantly amplify the risks associated with false positives and systemic error. In traditional enforcement models, human oversight and limited enforcement capacity functioned as implicit filters that contained the scale of erroneous sanctions. By contrast, automated systems are capable of producing sanction proposals on a massive scale every day, meaning that even low error rates may translate into substantial numbers of unjust penalties.
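The scaling effect can be illustrated with a back-of-the-envelope calculation. The figures below are hypothetical assumptions chosen only to show the arithmetic, not data from the cases discussed in this paper.

```python
# Hypothetical illustration of how a low error rate scales under mass
# enforcement: a small false-positive rate applied to a large daily volume
# of automated checks still yields many erroneous sanction proposals.

daily_checks = 100_000   # vehicles checked per day (assumed)
error_rate = 0.005       # 0.5% false-positive rate (assumed)

erroneous_sanctions_per_day = daily_checks * error_rate
erroneous_sanctions_per_year = erroneous_sanctions_per_day * 365

print(erroneous_sanctions_per_day)   # 500.0
print(erroneous_sanctions_per_year)  # 182500.0
```

Even an error rate that would be considered negligible in a manually filtered system thus produces, under these assumed volumes, hundreds of incorrect sanction proposals per day.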
The Barcelona low-emission zone example illustrates this phenomenon. Automated licence-plate recognition systems repeatedly generated fines against vehicles that were objectively not circulating, but merely being towed. These errors, which would have been immediately apparent to a human observer, persisted over time due to the absence of systematic human verification. The result was a form of algorithmically produced administrative injustice, where individuals bore the burden of contesting manifestly incorrect penalties.
This dynamic highlights a structural shift in the allocation of procedural burdens. In automated systems, the default assumption increasingly becomes the correctness of the machine-generated outcome, while individuals are expected to activate complex objection mechanisms in order to rectify errors. Such an inversion of procedural logic raises serious concerns from the perspective of good administration and effective protection of rights, especially when sanctions are relatively small but massively replicated. In that regard, it is important to recall that the Sindicatura de Greuges de Barcelona stressed that foregoing control and oversight of the digital system's outcomes runs counter to citizens' right to good administration[43].
5.3. Fragmentation and insufficiency of current regulatory frameworks
Another major conclusion of this study concerns the fragmentation and inadequacy of the current legal framework governing automated administrative sanctioning. While general provisions on automated administrative action and AI regulation exist at both national and European levels, they remain largely disconnected from the specificities of sanctioning procedures. As a result, they fail to provide targeted safeguards proportionate to the intensity of interference involved in administrative penalties.
Neither the general administrative automation clauses nor the GDPR’s framework on automated decision-making sufficiently address the concrete procedural challenges posed by automated sanctioning. Key issues such as mandatory human review and meaningful participation rights remain under-regulated.
Moreover, the coexistence of general automation law, sector-specific enforcement rules, and emerging AI regulations produces a highly complex normative environment. This fragmentation undermines legal certainty for both public authorities and individuals, complicates judicial review, and hinders the development of coherent standards of protection. The absence of a dedicated regulatory framework for automated administrative sanctioning thus emerges as one of the most pressing challenges identified in this paper.
- See A.G. Larsen, A. Følstad. The impact of chatbots on public service provision: A qualitative interview study with citizens and public service providers, in Government Information Quarterly, vol. 41(2), Elsevier BV, Amsterdam, 2024; and A. Brioscú, et al. A new dawn for public employment services: Service delivery in the age of artificial intelligence, in OECD artificial intelligence papers, 19, 2024, pp. 34-37. ↑
- Among many others, see L. Carnis, Automated Speed Enforcement: What the French Experience Can Teach Us, in Journal of Transportation Safety & Security, 3(1), 2011; A. Snow, Automated Road Traffic Enforcement: Regulation, Governance and Use: A review, RAC Foundation, London, 2017; and J. M. Martínez Otero, Hipervigilancia administrativa y supervisión automatizada: promesas, amenazas y criterios para valorar su oportunidad, in Revista Española de Derecho Administrativo, 231, 2024. ↑
- See S. Ranchordás & Y. Schuurmans. Outsourcing the Welfare State: The Role of Private Actors in Welfare Fraud Investigations, in European Journal of Comparative Law and Governance, 7:1, 2020, p. 6. ↑
- European Commission. Public Sector Tech Watch. Mapping Innovation in the EU Public Services, in Publications Office of the European Union, Luxembourg, 2024, available on http://www.european-union.europa.eu. ↑
- See S. Grimmelikhuijsen and L. Tangi. What factors influence perceived artificial intelligence adoption by public managers, Publications Office of the European Union, Luxembourg, 2024, available on http://www.publications.jrc.ec.europa.eu. ↑
- See J. Tang, et al., Automatic number plate recognition (ANPR) in smart cities: A systematic review on technological advancements and application cases, in Cities, 2022, p. 129. ↑
- A relevant discussion, based on the Spanish legal system and administrative practice with this type of automated system, can be found in M. Casino Rubio, La automatización de las actuaciones administrativas sancionadoras, in M. Vaquer Caballería, & J. Pedraza Córdoba (eds.). La Actuación Administrativa Automatizada: sus claves jurídicas, Tirant Lo Blanch, Valencia, 2025, pp. 559-560. ↑
- See V. Sharp, Smart Administrative Punishment: a Slippery Slope of Automated Decision-Making and its Economic Incentives in Public Law, in CERIDAP, 4/2025. ↑
- See section 2.1 of this paper. Also, a fine proposal issued by this kind of system is analysed in Sindicatura de Greuges de Barcelona. Resolució de la Sindicatura de Greuges. Queixa relativa al dret a una bona administració (gestió i recaptació), available on http://www.sindicaturabarcelona.cat. ↑
- See ITSS (Inspección de Trabajo y Seguridad Social), Plan Estratégico ITSS 2025-2027, available on www.oeitss.gob.es; and also see J. M. Goerlich Peset. Decisiones administrativas automatizadas en materia social: algoritmos en la gestión de la Seguridad Social y en el procedimiento sancionador, in Labos, vol. 2(2), 2021. ↑
- Some of the most representative papers on that matter are L. Carnis, Automated Speed Enforcement: What the French Experience Can Teach Us, in Journal of Transportation Safety & Security, 3(1), 2011; and L. Carnis, Automated Speed Detection and Sanctions System: Application and Evaluation in France, in Journal of Intelligent Transportation Systems, 12(2), 2008. ↑
- See, on the Lithuanian case, J. Paužaitė-Kulvinskienė, G. Strikaitė-Latušinskaja. Regulating automation: the legal landscape of ‘automated administrative orders’ in Lithuania, in Italian Journal of Public Law, vol. 17, issue 2/2025. ↑
- See C. Van Noordt & G. Misuraca. Artificial intelligence for the public sector: results of landscaping the use of AI in government across the European Union, in Government Information Quarterly, 2022, vol. 39(3), Elsevier BV, Amsterdam. ↑
- In this regard, empirical evidence is presented in Z. A. Cheng, Z. Dong, M. S. Pang, Automated Enforcement and Traffic Safety, in Management Science, 71(12), 2025. ↑
- This set of cases raises the well-known issue of ne bis in idem, i.e. the principle that the same person cannot be sanctioned twice for the same conduct on the same grounds. For a European take, see the European Court of Human Rights Judgment, 10 February 2009, case of Sergey Zolotukhin v. Russia. Moreover, Principle 3 of the Council of Europe Recommendation No R(91)1 of the Committee of Ministers to member states on administrative sanctions addresses this issue, stating that «A person may not be administratively penalised twice for the same act, on the basis of the same rule of law or of rules protecting the same social interest». In Spain, Article 31 of Law 40/2015 states that «Acts [meaning, in this context, acts that are qualified as infringements] that have already been sanctioned criminally or administratively may not be punished again in cases where there is an identity of the subject, the act, and the legal basis». ↑
- A reflection on this last judgment can be found in M. J. Montoro Chiner, Ordenanza relativa a la zona de bajas emisiones para preservar la calidad del aire en la ciudad de Barcelona: Análisis del instrumento ambiental, reflexión sobre el alcance de las restricciones impuestas y consideraciones sobre las sentencias del Tribunal Superior de Justicia de Cataluña, de 21 de marzo de 2022, in Cuadernos de derecho regulatorio, Vol. 1(1), 2023. ↑
- See AMB. Guía técnica para la implementación de zonas de bajas emisiones. Barcelona, AMB-FEMP, 2021, pp. 33-38, available on http://www.femp.femp.es. ↑
- See http://www.lavanguardia.com. ↑
- See http://www.elperiodico.com. ↑
- On this matter, with reference to other developments in the labour and social security sector, J. M. Goerlich Peset. Reglamento de inteligencia artificial e intervención pública en las relaciones laborales, in Labos, vol. 5, 2024, pp. 230–233. ↑
- J. M. Goerlich Peset. Decisiones administrativas automatizadas en materia social: algoritmos en la gestión de la Seguridad Social y en el procedimiento sancionador, in Labos, vol. 2(2), 2021, pp. 22-25. ↑
- The following lines, from that paper, define the kind of infringements that can be detected in an automated way (what they call “formal infringements”) and the current state of their usage: «Formal infringements’ is a type of non-compliance by employers in the Social Security system which is uncontroversial and does not require further investigation. It can be detected without direct intervention from an inspector at that stage. For example, if employers fail to notify workers’ registrations on time, this delay is automatically recorded in the social security’s files. The Spanish Labour and Social Security Inspectorate (ITSS) has a semi-automated procedure for detecting and sanctioning formal infringements in different areas, such as the improper use of bonuses in labour contracts by employers. The sanction is automatically generated, but it is still reviewed by inspectors before it is notified to the employers. The system is ready to be moved to a fully automated process but has not been implemented yet» (ELA, Study Visit to Spain: Automated Procedure on Formal Infringements, 2024, p. 1, available on http://www.ela.europa.eu). ↑
- ELA, Study Visit to Spain: Automated Procedure on Formal Infringements, 2024, pp. 4-5, available on http://www.ela.europa.eu. ↑
- See M. Gøtze. Danish Digital Design and the Gradual Erosion of Technology Neutral Administrative Law, in Central European Public Administration Review, 22(2), 2024. ↑
- Spanish Const. Court Ruling, jud. 28 January 2021, n.4. ↑
- See T. Krabec & R. Čižinska. Measuring the impact of an administrative fine on a company and its future survival: a case study from the Czech Republic, in Financial Internet Quarterly, 17(4), 2021. ↑
- H. Korkka-Knuts, S. Melander. Contours of a principled corporate sanction policy in the EU: Exploring a constitutionally justified balance between criminal and administrative sanctions, in New Journal of European Criminal Law, 16(1), 2025, p. 34. ↑
- See G. della Cananea, The Common Core of European Administrative Laws: Retrospective and Perspective, Brill: Leiden, 2023, pp. 98-99. ↑
- In this regard, see A. Andrijauskaitė. The Principles of Administrative Punishment under the ECHR. Doctoral dissertation, Vilnius, 2022, available on http://www.dopus.uni-speyer.de. ↑
- In this regard, see J. P. Schneider & F. Enderlein. Automated Decision-Making Systems in German Administrative Law, in CERIDAP, n. 1, 2023, pp. 105-106. ↑
- See J. Reichel. Regulating Automation of Swedish Public Administration, in CERIDAP, issue 1, p. 82. ↑
- See Z. A. Cheng, Z. Dong, M. S. Pang, Automated Enforcement and Traffic Safety, in Management Science, 71(12), 2025. ↑
- See S. Grimmelikhuijsen, L. Tangi, What factors influence perceived artificial intelligence adoption by public managers, Publications Office of the European Union, Luxembourg, 2024, available on http://www.publications.jrc.ec.europa.eu. ↑
- For all, see I. Martín Delgado, El impacto de la IA en la toma de decisiones administrativas. Una visión general a la luz del ordenamiento jurídico español, in Zegarra Valdivia (ed.), La inteligencia artificial y la actividad de la administración pública, CIACJ, Lima, 2025. ↑
- See Article 41 Law 40/2015 of 1 October, on the Legal Regime of the Public Sector (Régimen Jurídico del Sector Público), and also I. Martín Delgado. El impacto, op. cit. ↑
- See Article 35a of the General Administrative Procedures Act (Verwaltungsverfahrensgesetz – VwVfG). Also, see J. P. Schneider & F. Enderlein. Automated Decision-Making Systems in German Administrative Law, in CERIDAP, n. 1, 2023; C. Fraenkel-Haeberle. Fully Digitalized Administrative Procedures in the German Legal System, in European Review of Digital Administration & Law – Erdal, vol. 1, issue 1-2, 2020; and E. Buoso. Fully Automated Administrative Acts in the German Legal System, in European Review of Digital Administration & Law – Erdal, vol. 1, issue 1-2. ↑
- See Article 41 Spanish Law 40/2015, and also see A. Cerrillo i Martínez, Lección 21. Actuación automatizada, robotizada e inteligente, in: Manual de Derecho administrativo (2n ed.), Marcial Pons, Madrid, 2024. ↑
- See J. Paužaitė-Kulvinskienė & G. Strikaitė-Latušinskaja. Regulating automation: the legal landscape of ‘automated administrative orders’ in Lithuania, in Italian Journal of Public Law, vol. 17, issue 2/2025, p. 22. ↑
- See J. Ponce Solé. Inteligencia artificial, Derecho administrativo y reserva de humanidad: algoritmos y procedimiento administrativo debido tecnológico, in Revista General de Derecho Administrativo, 50, 2019. ↑
- A. Nieto. Derecho administrativo sancionador, Tecnos, Madrid, 5th ed., 2012, p. 15. ↑
- In this regard, see A. Huergo Lora. Diferencias de régimen jurídico entre las penas y las sanciones administrativas que pueden y deben orientar su utilización por el legislador, con especial referencia a los instrumentos para la obtención de pruebas, in A. Huergo Lora (ed.), Problemas actuales del Derecho administrativo sancionador, Iustel, Madrid, 2018, p. 29. ↑
- See H. Korkka-Knuts, et al. Reassessing Deterrence: The Effectiveness of Administrative Sanctions across three EU Regulatory Frameworks, in Helsinki Legal Studies Research Paper No. 90, Forthcoming in the European Law Review (2025), available on http://www.ssrn.com/abstract=5169603 or www.dx.doi.org. ↑
- See Sindicatura de Greuges de Barcelona, Resolució de la Sindicatura de Greuges. Queixa relativa al dret a una bona administració (gestió i recaptació), 2024, available on http://www.sindicaturabarcelona.cat. ↑