<i>Charting the Course Towards a New Legal Framework for Smart Cities</i> (2025)

A typical feature of technologies using autonomous forms of control is their growing complexity resulting from the increasing degree of automation, which, in relation to the process of their use in everyday life, will present new risks. These risks increase mainly when autonomous systems are used in urban public transport, where there is a higher risk of causing harm to the health and life of a large number of people at once. These risks are linked to the issue of the functioning of a civil liability regime that would reflect the distribution of risks in relation to the direct user, manufacturer or operator of the autonomous system through the expected standard of care. The paper shows in which direction and on what basis considerations could be taken when defining the expected level of care and to what extent the current setting of liability relationships remains applicable. Finally, we present pilot legislative changes in some countries regarding liability for autonomous vehicles and outline the new Product Liability Directive regime for software developers.
Summary: 1. Initial remarks.- 2. Planned and implemented autonomous public transport projects.- 3. Risks associated with the use of autonomous public transport systems and their allocation.- 4. Liability for damage caused by autonomous public transport systems and potential liable persons.- 4.1. Fault-based unlawful conduct as a legal basis for damage attribution.- 4.2. Attribution of liability based on grounds other than fault (liability of the operator, producer, and developer of autonomous public transport systems).- 5. New horizons and upcoming trends in recent legislation to address the issues of liability for autonomous systems.- 6. Conclusion.

1. Initial remarks

As we look towards a future where vehicles can see[1], think, and learn[2], seamlessly integrating into the framework of smart cities as autonomous public transport, we must carefully address important liability issues alongside the technical and technological challenges we face, including the collection, processing, and protection of data, the protection of the personality rights of all individuals involved in transportation[3], and significant ethical aspects[4].

It is precisely the question of liability for damage caused by various roboshuttles, autonomous buses, or self-driving trams, where human influence in driving will be eliminated or reduced to a minimum, that must not be overlooked when considering smart cities. This applies in cases where such vehicles cause harm to transported passengers (e.g., injury to life, health, or property) or to individuals who are not passengers but suffer damage caused by autonomous public transport systems.

Autonomous vehicles are regarded as one – if not the main – possible technology to form part of a multi-faceted approach to support the achievement of smart city goals[5]. Autonomous vehicles are hence one of the main “building blocks” of smart cities and are simultaneously a staple symbol of smart urbanisation and innovation in general.

2. Planned and implemented autonomous public transport projects

There are numerous projects already implemented in different areas of the world when it comes to autonomous public transport. In 2016 in Australia, the first live test of the autonomous bus RAC Intellibus was launched in the city of Perth. This bus operated regularly and concluded its service in 2023 without having caused a single accident. It was capable of transporting up to 15 passengers, with a maximum speed of 45 km/h[6].

In 2017, the first trial ride of the experimental Future Bus, equipped with CityPilot technology from Mercedes-Benz, took place in the Netherlands. This demonstration proved that autonomous public transport can indeed offer enhanced safety, fuel savings, and improved comfort. These rides were conducted with a driver present, as required by regulations, and were generally deemed successful[7].

Also in 2017, Techbox reported on a hybrid of a train and a bus developed in China. The concept, known as Autonomous Rail Rapid Transit (ART), represents a connected convoy of buses capable of running along a designated route without a driver[8].

In 2018, the first self-driving tram was tested in Germany at the InnoTrans 2018 trade fair. This tram, the Siemens Combino, conducted a test ride without passengers in regular traffic[9]. The tram utilised elements of artificial intelligence, although a human operator was still present. During the test ride, various simulated emergency situations were carried out, and the test was generally considered successful[10]. The city of Hamburg has the potential to become a pioneer in autonomous transport in Europe, as the first tests of autonomous buses took place there in 2019, and the city will continue testing additional autonomous vehicles in the coming years[11]. Later, in 2023, MAN announced that it had entered into a collaboration with Mobileye, a subsidiary of Intel. Their first joint objective is to launch the pilot operation of the first automated urban bus with a safety driver, starting in 2025, as part of the newly announced research project MINGA in Munich[12].

In 2019, Volvo, in collaboration with Nanyang Technological University, launched a pilot project for the world’s first fully autonomous electric bus in Singapore. This was the Volvo 7900 Electric, a fully electric bus with a capacity of nearly 80 passengers[13]. The trial operation concluded in 2023[14]. Also in 2019, Volvo Buses demonstrated an autonomous version of its 7900 Electric urban bus in Sweden, in collaboration with Keolis, near Gothenburg. This test operation, too, was successful[15].

A major success in smart cities’ public transport was reported from the United Kingdom in 2022. Fusion Processing announced the commencement of testing for the Dennis Enviro200 autonomous bus as part of the CAVForth project, which had been under development since 2020. Its members cite safety as one of their primary motivations, since up to 86% of traffic accidents on the roads of the United Kingdom are caused in some way by human error. The bus has a capacity of up to 36 passengers and operates along a route of approximately 22 km[16]. This bus is still in operation and has even won the “Self-Driving Industrial Vehicle of the Year” award[17]. However, the first fully autonomous bus service in the United Kingdom was launched in Scotland in 2023. It is described as the most ambitious and complex autonomous bus pilot project in the world, and these buses, too, are part of the CAVForth project implementation. The second phase of the project, known as CAVForth II, will extend the current route by an additional 5 miles. It is set to begin after the completion of the first phase, with a planned continuation at least until March 2025[18].

Norway launched the operation of autonomous buses in the city of Stavanger. This is not a pilot test but a fully operational bus service[19]. In 2022, the e-Atak electric bus was introduced as an autonomous vehicle that reaches speeds of up to 50 km/h and can transport up to 52 passengers. The bus is capable of autonomously performing tasks previously carried out by a human driver, both day and night, such as stopping at bus stops, opening doors for boarding and alighting, and similar functions[20].

In the field of public transport in Slovakia in particular, data collection for the development of autonomous bus control systems was conducted in Bratislava as part of the Pilot Project for Smart Mobility of the Future in the first half of 2021. This project was implemented at the Slovak University of Technology in cooperation with Žilina University[21]. The data collection was carried out using a bus equipped with sensors (cameras, radars), which gathered data from real-world traffic. These data will be used in the development of autonomous bus control systems to be implemented in the near future through a new platform[22].

This new platform, Smart Mobility Slovakia, which is also a multi-sector interest association of legal entities, is dedicated to creating conditions for, and actively advocating, the development of smart mobility[23]. This includes preventing traffic jams, addressing parking issues, reducing emissions, and supporting sustainability. Even though such systems are not yet part of the day-to-day operation of public transport, with leading automotive manufacturers focusing their research and development on autonomous mobility, there is high potential for autonomous control in public transport in the near future.

When it comes to plans for implementing semi-autonomous and autonomous public transport in the EU, according to the President of the German Federal Motor Transport Authority (KBA), Richard Damm, a significant breakthrough in autonomous transport is expected within the next few years. He states that by 2026 or 2027 at the latest, the first autonomous buses will be operating in European cities, transporting real passengers. According to him, “robot buses” are sufficiently safe, with only minimal residual risks remaining. He anticipates that within 5 to 10 years, autonomous transport, including buses and freight vehicles without drivers, will be used widely and regularly[24].

In Hong Kong it is expected that the largest autonomous bus transport system will be operational by the end of 2025. The total route length is expected to be approximately 850 meters, with a travel time of under 3 minutes. Each bus can carry a maximum of 16 passengers per trip. The initial goal of the system is to achieve a transport capacity of about 500 passengers per hour, with each bus operating a one-way route[25].

These are just a fraction of the innovations that are successively giving life to plans and ideas that not long ago were mere uncertain future possibilities. Autonomous public transport is no longer an intangible concept but is slowly yet steadily becoming part of our everyday lives. This naturally goes hand-in-hand with new challenges for civil liability regarding the possible risks of harm caused by these new integrated systems not controlled by a human driver.

3. Risks associated with the use of autonomous public transport systems and their allocation

Since civil liability fundamentally rests on the principle of attributability of harm caused to a subject other than the injured party based on special attribution grounds, assessing liability for damages caused by autonomous public transport vehicles during their operation primarily depends on evaluating the presence of such grounds of attribution.

A defining characteristic – and a problematic aspect[26] concerning the attribution of harm – is that the autonomy granted to such systems for decision-making and action ultimately results in limited predictability and imprecise traceability of their conduct. Even the programmer cannot reliably anticipate what behavioural algorithms[27] the autonomous system may acquire through learning or how it will specifically react in particular situations.

Given that predictability plays a crucial role in tort law, the occurrence of harmful consequences under such circumstances raises legitimate questions regarding the legal nature of liability, the entity to which such harm can be attributed, the determination of causality[28] (particularly in cases involving the complex interconnectivity of cooperating systems), and related issues. These are primarily discussed in the context of civil liability[29] for damages caused, while the criminal law aspects of autonomous control systems[30] are currently less prominently emphasised.

The core consideration in defining liability for autonomous public transport systems lies in determining to whom the erroneous behaviour of a public transport vehicle can be attributed and devising mechanisms for allocating the risks associated with their development, production, operation, and use. This allocation must respect and balance the legitimate interests of the various stakeholders[31] involved in relationships related to autonomous control systems.

The technical and technological specifics associated with using autonomous urban public transport in smart cities introduce new and, to varying degrees, specific risks, even in legal assessments. Tort law’s role is to determine how these risks will be allocated and who will primarily bear them in cases where damage arises within the operational scope of such an autonomous transport system.

Risks that are (or will be) relevant and necessarily factored into the framework for their allocation are linked to the high degree of automation inherent in these systems. These risks may arise from algorithmic errors or malfunctions in hardware[32]. A critical risk factor within autonomous control systems is integrating computational control with components that, in the “real” world, can exhibit forceful effects[33] (whether due to their physical, chemical, or other properties). Such software and hardware integration can create distinct risks, mainly when a software error directly triggers mechanical hazards[34] – an issue particularly characteristic of autonomous vehicles. From a frequency perspective, mechanical impacts of hardware are more likely to cause harm to life or health than software errors alone.

Another critical factor influencing the legal liability assessment is the mobility of autonomous public transport systems. Unlike static systems, mobility increases the radius within which harmful consequences may occur. Furthermore, their operation outside laboratory or industrial environments and deployment into real public spaces heighten risks due to the greater likelihood of unpredictable situations that autonomous public transport vehicles may encounter[35]. This risk is further exacerbated by the increased intensity of interactions with people, as human behaviour is also inherently unpredictable to a certain degree. This aspect is even more pronounced in autonomous public transport systems than in standard individual autonomous vehicles, as public transport inherently involves large numbers of passengers or goods.

The interconnection of various systems processing the data necessary for the operation of autonomous buses, trams or roboshuttles also significantly raises the risk of inaccurate data being transmitted within this shared network, potentially leading to incorrect situational assessments by these systems. The mutual interdependence of individual systems on the accuracy of data collection, evaluation, and processing represents a limiting factor for attributing such risks to specific entities involved in the operation of the overall system, primarily due to unclear causal relationships.

A specific, novel risk typical of autonomous systems arises from the fundamentally unpredictable behaviour of self-learning algorithms[36], i.e., the risk inherent in the autonomous nature of such public transport systems. Given the current state of scientific and technological knowledge, the unpredictability of autonomous systems’ behaviour makes assessing the nature, scope, and frequency of risk realisation for liability purposes relatively complex. It has not yet been conclusively demonstrated whether autonomous control systems are as reliable and safe as the currently used systems dominated by human factors or, conversely, whether autonomous systems are to be regarded as less or more secure[37].

This assessment of risks currently remains in the realm of expert estimations[38]. Nonetheless, with a certain degree of exaggeration, one might fully agree with Marc Andreessen, who stated in an interview with The New York Times, «People are so bad at driving cars that computers don’t have to be that good to be much better. Any time you stand in line at the D.M.V. and look around, you’re like, Oh, my God, I wish all these people were replaced by computer drivers»[39].

4. Liability for damage caused by autonomous public transport systems and potential liable persons

The “traditional” tort law theory operates with several concepts through which conclusions can be drawn regarding who may bear liability for damage caused and under what conditions. The following text outlines whether and to what extent these general principles of civil liability can also be applied to damage caused by autonomous public transport systems.

In general, two major categories of grounds can be identified for attributing liability for damage to a specific party:

1. The category of individuals associated with the source of danger encompasses persons who contribute to the creation, operation, or use of a particular source of danger and to whom the obligation to compensate for damage resulting from the realisation of risks associated with that source of danger is attributed[40], irrespective of fault or unlawful conduct.

Liable persons under this framework may include those whose abstract or concrete risks are tied to these systems, who derive benefits from them, or who possess the capability to control and manage them. Theoretically, such persons could include the system’s producer, developer, programmer, supplier, or operator. Additionally, there is the prospective question of whether liability could, in future, be attributed to the autonomous system itself[41].

2. The second category of persons involves subjects assessed based on whether they adhered to a required standard of care. If such a standard is breached and the person’s actions directly lead to damage, liability for damage caused can be attributed.

Balancing these various concepts should ultimately yield a set of responsible entities normatively obligated to bear the adverse consequences.

Unfortunately, traditional theories of liability attribution and risk distribution – such as the control of risk theory[42], the benefit from risk theory, or the increased risk theory[43] – encounter significant challenges when applied to autonomous systems. Autonomous systems are not programmed to perform specific activities; instead, they are designed to independently learn how to perform such activities, continuously generating their own code (program) independently of their original creator[44]. This self-learning process, influenced by internal and numerous external factors, significantly diminishes the ability of any party involved in creating, operating, or using these systems to control or manage them effectively.

This issue manifests in two dimensions:

a) based on the legislature’s legal and policy decisions, a normative risk allocation involves determining which entity will be held liable for damage caused by an autonomous system and under what conditions such liability will be attributed;

b) formal fulfilment of liability prerequisites concerns the practical ability to meet, and to demonstrate fulfilment of, the prerequisites required for establishing liability[45].

The unique characteristics of autonomous systems, particularly their capacity for independent and evolving decision-making, make it challenging to reconcile traditional liability frameworks with the reality of their application. These complexities necessitate re-evaluating how liability is assigned and how risks are distributed within the legal structures governing autonomous public transport systems.

4.1. Fault-based unlawful conduct as a legal basis for damage attribution

The fundamental basis for attributing liability for damages generally stems from the fault of the person who caused the harm, whether through negligence or intent. Negligence refers to behaviour where the individual, under the required standard of care appropriate for the specific case and circumstances, could have identified and prevented the harmful outcome[46].

In the context of autonomous control systems, the applicability of the fault principle is significantly limited. The higher the degree of autonomy of the system, the less feasible it becomes to identify a subject whose fault can be linked to a harmful outcome.

Depending on the level of autonomy in a public transport vehicle requiring some interaction from the driver, liability may be considered if the driver fails to adhere to the expected standard of behaviour mandated for operating such a system. Since public transport drivers are typically employees of a transport company, most legal systems would apply vicarious liability provisions, potentially attributing liability to the employer for harm caused by the employee. The specifics of this principal-agent liability may vary across jurisdictions, with the employer potentially being held jointly or solely liable alongside the employee.

The limitation of this fault-based framework stems from the rising degree of autonomy, which results in fewer responsibilities for the driver in managing non-standard situations or system errors. Consequently, the question of the standard of care is closely linked to the extent of system automation. As automation grows, users and other involved parties have greater expectations that the system will independently recognise and mitigate risks without external human intervention.

A breach of the standard of care, relative to the degree of autonomy in an autonomous public transport system, could manifest in real-life scenarios such as a driver of a highly autonomous vehicle failing to intervene when the system either does not react or reacts improperly, or ignoring system prompts to take control in an unusual situation. Such a lack of cooperation between the driver and the system could result in driver fault and liability for damage[47].

This principle applies even without explicit legal provisions outlining the rights and duties of autonomous system users. In many legal systems, the obligations of an autonomous system driver can be derived from the general statutory duty of care to act in a manner that prevents harm. Courts evaluate whether the expected standard of care was upheld in a given context.

A breach of the standard of care might occur, for instance, if a bus driver engaged in activities incompatible with promptly taking control upon a system prompt[48], significantly delayed their response or failed to assume control after being alerted by the autonomous system[49].

As the degree of autonomy increases, the role of the human factor diminishes, reducing the scope for claiming damages from the driver. This is because the system’s operation, including its decision-making and execution, is governed by computer algorithms. Except for specific circumstances, it can be affirmed that the qualitative and quantitative standard of care expected of drivers of autonomous systems is lower than for non-autonomous systems[50]. Consequently, the range of cases involving exclusive driver liability diminishes, with driver liability often reduced to contributory negligence, potentially shared with the injured party.

Given that defects in autonomous systems may increasingly originate from the system’s technical design, care responsibilities will likely shift to entities capable of addressing such obligations, such as producers and programmers[51].

The attribution of fault as a legal basis for liability may extend to producers of autonomous systems if they breach contractual or statutory obligations and cause harm through such breaches. For example, a producer may be liable for failing to recall an autonomous system upon discovering safety risks or for neglecting to address a known issue through software updates[52].

Technical standards provide a baseline for defining the obligations of producers, though these represent minimum safety benchmarks[53]. Additional standards are derived from scientific and technical advancements to ensure the level of protection aligns with the existing state of scientific knowledge and practical feasibility[54].

Producers of autonomous systems must protect users and third parties from risks arising from both proper and improper system use. Expected standards of care for producers include the safe development and production of the system, protection against unauthorised interference by users or third parties[55], adequate user instruction and information, fulfilling ongoing maintenance obligations (e.g., providing necessary software updates), and proactively responding to defects, such as recalling defective units.

If the autonomous driving system is a combination and cooperation of software and hardware, which autonomous public transport vehicles fulfil, it is questionable whether the hardware producer should also be responsible for the risks associated with the control system. Gless and Janal[56] address this issue by stating, rather obviously, that in principle, yes, but only if the error that has occurred stems from the producer’s responsibility and organisational sphere. The problematic element in this statement, however, is precisely the content of the responsibility and organisational sphere of the various actors involved in the process of developing, producing, marketing, operating, programming, evaluating, and subsequently controlling autonomous systems. In this context, it will be relevant, for example in terms of the timing of the occurrence of the error leading to the damage, to distinguish between errors which existed at the time of delivery of the autonomous system, for which the producer is liable, and errors which occurred only as a result of updates to the system, the damage from which can be attributed to the entity within whose sphere of competence the software update falls (usually the producer, but possibly a completely different entity).

A significantly limiting factor when considering the standard of care regarding autonomous systems is their very nature, which is based on a process of self-learning, allowing them to make their own decisions as a result of the algorithms available and acquired through learning. These are influenced not only by the technical solutions directly from the producer but also by the information they have acquired from the external environment in the process of “learning”. If these systems can learn certain behaviours and use them independently, they have the advantage that they do not need to be given strategies in advance to deal with specific problems, as not all potential issues can (and need to) be anticipated beforehand. However, this renders the system’s behaviour somewhat unpredictable[57]. This can result in situations where a faulty algorithm or another error in the autonomous vehicle control software[58], or an incorrect assessment of the situation based on learned behavioural patterns, leads to damage, as well as situations where the damage occurred as a result of a decision by the autonomous system (e.g., hitting an oncoming vehicle to avoid a pedestrian who unexpectedly entered the roadway), even though the control system did not exhibit any fault (on the contrary, it acted in accordance with its algorithm).

The question is how this impacts the standard of care itself, particularly with the producer. From an objective point of view, it is probably not reasonable to expect a reduction in the standard of care in such cases; on the contrary, it is a legitimate expectation that the producer should eliminate such erroneous decisions by the system to the extent that this is possible and bearable[59]. The ability and capability of the entity to minimise or eliminate such risks is a critical factor in determining the objective standard of care. The risks of this process are known to the producers or other actors involved and can even be handled (it is possible to check what the system has learned first and then, after verification, include it in the decision-making process). Nevertheless, it is impossible to eliminate these risks entirely, given the wide variety of situations that the system may encounter, the ongoing process of “learning”, and the different learning experiences of various systems[60].

In summary, fault as a basis for the attribution of liability can be applied to establish responsibility for damage caused in relation to entities involved in the operation of autonomous systems, provided there is a demonstrable breach of the duty of care. This duty of care must be observed by the driver, producer, or any other relevant party connected to the autonomous system, notwithstanding the system’s uncontrollability[61]. This principle applies in cases where the harmful outcome could have been averted through appropriate care[62].

4.2. Attribution of liability based on grounds other than fault (liability of the operator, producer, and developer of autonomous public transport systems)

Given the limitations of fault-based liability as a basis for attribution in the context of autonomous public transport systems in smart cities, liability not based on fault will represent a significant element of tort law protection. The distinguishing feature of such liability is its connection to legally approved actions – namely, the creation of abstract risk – the realisation of which leads to a harmful outcome[63]. In such cases, risk is quantified based on the extent and likelihood of damage, typically involving risks arising from operating a transport vehicle, introducing such vehicles to the market, or implementing specific technologies[64].

Strict liability, not based on fault, is generally imposed when there is a significant imbalance between the ability to prevent damage and the injured party’s capacity to avoid harm. In the context of public transport systems, the operator (e.g., transport companies or municipalities managing public transport within smart cities) can be considered a liable entity based on the regulatory framework for damage caused by the operation of transport vehicles. In virtually all jurisdictions, this constitutes strict, no-fault liability.

It is already evident that the operation of autonomous vehicles in smart cities entails a degree of risk (though the precise scope of this risk cannot yet be determined) and a certain level of uncontrollability. This characteristic continues to justify strict liability due to the specific risks inherent in their operation[65].

Liability in such cases is assigned to the operator, defined as the entity with the legal and factual ability to control the given transport vehicle, where such control serves a longer-term functional purpose. This includes the entity on whose behalf, at whose risk, and in whose interest the vehicle is operated – typically, an entity that predominantly profits from the operation, has a vested interest in the system, and bears the financial costs of its operation[66].

Such attribution of liability to an entity that may not have actively contributed to the harm arises from the perception of the vehicle as a closed system characterised by specific operational risks for which the operator bears strict liability. The operator cannot, in their defence, claim that the autonomous system’s behaviour was unpredictable or uncontrollable.

It is necessary to distinguish the operator – the person who operates the means of transport and has the legal and factual ability to dispose of it – from the person who, given that the system requires updates of its software component, assumes legal and technical responsibility for the functional capability of the control system[67]. In most cases this is the producer of the autonomous system, but not necessarily: the producer and the entity responsible for the functional capability of the control system may be different persons.

However, with fully autonomous control systems, such a clear assessment is unlikely to be possible, since the operator (the city or transport undertaking) will have only minimal control over the system’s behaviour. The resulting behaviour will be influenced by factors beyond the control of any legally recognised entity, whose contribution to the specific behaviour may vary and is consequently entirely debatable[68].

For this reason, the literature has questioned the effectiveness of continuing to attribute damage to the system operator as the entity with the legal and factual ability to control the vehicle. In particular, consideration has been given to a solution in which liability of the autonomous system itself would replace the concept of attributing damage to the operator[69]. The main argument is the nature of the autonomous system as a “subject” endowed with intelligence, capable of reasoning and making decisions on its own based on the processing and evaluation of the information available to it, which would accordingly be granted subjectivity in the form of the status of an “electronic person”[70]. However, as in the case of the operator’s liability, and given the nature of this “electronic person”, adopting such a solution would make it all the more necessary to link the liability of the autonomous system to some form of specific liability insurance. Yet, as the literature indicates, even for autonomous driving systems such a design offers no advantage over the current liability of the operator of the means of transport combined with compulsory liability insurance.

Even if an autonomous system were recognised as legally capable of wrongful conduct, transferring assets to that system would not make economic sense, because such assets could only be used to cover the damage caused or to provide insurance benefits. Moreover, such an entity would always have to be represented by a natural person when expressing its will (concluding the insurance contract, paying premiums, paying compensation, etc.)[71].

Moreover, imposing strict liability on the operator of a fully autonomous system does not always produce fair results. This is particularly the case if the means of transport falls outside the operator’s sphere of competence and control – for example, if another person takes over the means of transport without the operator’s knowledge or against the operator’s will.

In particular, this could involve various forms of unauthorised interference with the control system (e.g. hacking), which, given the nature of the system, cannot be completely ruled out. It is questionable whether such unauthorised interference can be attributed to the operator as a risk arising from the operation of public transport. We consider that it does not fall within the scope of that risk and that the operator could not be held liable; rather, it would be appropriate to impose strict liability on the unauthorised interferer. This, however, is a matter for national legislation.

Developing and disseminating technology (i.e., disseminating applicable knowledge as such) does not generally give rise to any liability[72]. The mere development of a technology is unlikely to trigger liability in itself, as such liability would be akin to liability for a scientific result. What could be considered for the future, in line with the risk-utility theory, is imposing liability on developers where they benefit economically from the system (which implies its use) and expose the public to its risks[73]. In many cases, the developer coincides with the producer, who is already subject to liability for defects in products manufactured and placed on the market under Council Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.

This almost forty-year-old regulation was not designed with the advent of autonomous technologies in mind and therefore shows significant limits when applied to damage caused by autonomous systems[74]. In December 2024, a new directive on defective products came into force, extending producer liability to defective software, including artificial intelligence systems. The challenges the new directive poses, specifically for autonomous driving systems, are discussed in the next section of the paper.

5. New horizons and upcoming trends in recent legislation to address the issues of liability for autonomous systems

Although the previous section showed that liability for damage can be sufficiently addressed by the existing scope of standard legislation (i.e. fault-based liability as well as strict liability), some states have decided to cope with this challenging issue by passing new regulations dealing with autonomous systems directly. In particular, Great Britain, Germany and Poland have passed new laws or amended existing ones in order to address these specific relations.

Great Britain passed brand-new legislation, the Automated and Electric Vehicles Act 2018 (the AEV Act)[75]. Apart from parts dealing mainly with technical and administrative issues such as charging and refuelling vehicles, the Act also addresses aspects of liability for damage in considerable detail. It divides strict, no-fault liability between the insurer (Part 1, sec. 2(1) of the AEV Act) and the owner of the vehicle (Part 1, sec. 2(2) of the AEV Act), while emphasising the insurer’s position. The insurer of the vehicle’s owner is liable for death or personal injury, or any damage to property, with certain exclusions such as damage to the vehicle itself, to goods carried in it, or to other property in the custody of the insured person or the person in charge of the vehicle (Part 1, sec. 2(3) of the AEV Act). The Act also addresses negligence in the form of unauthorised software alterations or failure to properly install safety-critical software and its updates, allowing the insurer’s liability to be excluded in these cases (Part 1, sec. 4 of the AEV Act).

The Act also provides for contributory negligence of the harmed person where the accident was to any extent caused by the injured party (Part 1, sec. 3(1) of the AEV Act). The liability of the insurer or owner towards the person in charge of the vehicle is fully excluded if the accident was wholly due to that person’s negligence in allowing the vehicle to begin driving itself when it was not appropriate to do so (Part 1, sec. 3(2) of the AEV Act). However, negligence on the part of the person in charge of the vehicle does not appear to affect liability for injuries or damage to property suffered by persons other than the person in charge of the vehicle.

Germany, on the other hand, has chosen a different legislative approach. Rather than passing brand-new legislation, it amended the Road Traffic Act (Straßenverkehrsgesetz – StVG) in 2017, allowing automated vehicles to travel on public roads. Although no specific liability provisions regarding autonomous vehicles were added, an important element of the amended StVG is the obligation to equip a vehicle with a system that records position and time whenever control of the vehicle passes from the system to the driver, whenever the system signals that the driver should take control, and whenever a technical fault occurs in the system (§ 63a StVG). A negligent failure to comply with the obligation to take control of the vehicle when the need is signalled may establish the liability of the driver in charge of the vehicle for the damage caused.

In Poland, a definition of an autonomous vehicle was added by an amendment to the Road Traffic Act (RTA) in order to define the conditions and principles for conducting research related to the testing of autonomous vehicles[76]. For the purposes of such testing, compulsory third-party liability insurance of the research organiser is prescribed, and among the organiser’s many obligations is the duty to equip every autonomous vehicle with a driver ready to take control of the vehicle whenever necessary to prevent any risk of damage to other parties[77].

It has already been pointed out that the strict liability of the producer forms a possible ground for liability for damage caused by autonomous vehicles. Given all the specifics of the functioning of autonomous vehicles described above, one of the crucial sources of risk of damage is a software failure of the vehicle. Under the previous Council Directive 85/374/EEC, it was unclear whether software fell within the definition of a “product”[78] for which the producer bears strict liability. This changes significantly under the new Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC (hereinafter the new Product Liability Directive), which is better suited to current technological and artificial-intelligence development. According to recital 6 of its preamble, no-fault liability for defective products should apply to all movables, including software, even when integrated into other movables or installed in immovables. Software is therefore considered a product under the new Product Liability Directive regardless of whether it is installed in an autonomous vehicle.

This means that, regardless of its integration, software is still considered a product separate from the vehicle itself. If the producer (manufacturer) of the software is a legal entity different from the producer of the vehicle, strict product liability is still imposed on the software manufacturer. Unlike the previous directive, the new regulation provides a degree of protection to small subcontractors supplying software and related components to large motor vehicle manufacturers by excluding their liability where the producer of the defective software component was, at the time that component was placed on the market, a microenterprise or a small enterprise (Article 3(2) of the Annex to Commission Recommendation 2003/361/EC) and the manufacturer that integrated the component into the product contractually agreed with the software producer to waive the right to claim damages.

A characteristic of software as a product is that the manufacturer usually retains control over it even after placing it on the market or putting it into service, in the form of system upgrades or updates performed by itself or via a third party (Article 4(5)(b) of the new Product Liability Directive). This aspect justifies the continuing liability of the software producer up to the point at which this control is lost; the manufacturer is thus liable even for defects of the software occurring later through defective upgrades. If the software is substantially modified, whether by an update or upgrade or through the continuous learning of the system itself, the substantially modified product is considered to have been made available on the market or put into service at the time the modification is actually made. However, if a person using the vehicle or exercising control over it fails to update the system and damage is caused by the failure to integrate the latest updates, the producer is not liable: where the owner of the vehicle is harmed, this constitutes contributory negligence, and otherwise the causal link between the defect of the product and the damage is interrupted by another person’s negligence. In such cases the owner of the vehicle or the person exercising control is liable on the basis of fault.

In conclusion, the current approach to liability, on both fault-based and no-fault principles, seems sufficient to regulate the new horizons of liability relations arising from damage caused by autonomous vehicles. An injured party has several options for claiming damages, and in practice it will not be rare for several obliged entities to be liable at the same time. The standard approach in such cases is the formation of a solidary (joint and several) obligation among all the responsible parties.

However, autonomous vehicles are specific in their self-learning and fully or semi-autonomous driving systems, which require little or no control by a driver present in the vehicle. In this regard, the new German obligation to equip a vehicle with a system recording position and time whenever control passes from the system to the driver seems a suitable way of (a) dividing liability between the driver and the producer or operator, and (b) building a system that significantly simplifies proving who was in charge of the vehicle at a given moment and is hence responsible for the damage.

The current liability system is sufficient with regard to damage caused by smart-city autonomous vehicles. However, given the possible harm to many lives at once or to property on a large scale, a well-designed insurance system, with clear limits on when and on what grounds injured parties may claim damages directly against the insurer of the operator, the owner of the vehicle or the producer of the software, may be an appropriate way of allocating risk among all the affected parties. The more potentially responsible entities there are, the more pressing the issue of a fair and equitable division of costs for possible harm and injury becomes.

6. Conclusion

The legal approach to resolving the liability aspects of autonomous technologies is at least as important as the development and improvement of the technology itself. Despite the appearance that existing liability rules and concepts could deal with liability for damage caused by autonomous control systems in their current form, and that means are already available for the fair attribution of damage and distribution of the risks associated with the operation and use of autonomous systems, this is not entirely true. The existing regulatory framework will not be fully able to cover damage caused by the new generation of autonomous systems, which will be equipped with adaptive capabilities and the ability to learn by themselves, and which will therefore inevitably display a certain degree of unpredictability of behaviour. It is precisely in these respects that, taking into account the latest regulatory trends at both the national level of individual states and the supranational level of EU regulation, compulsory insurance is of crucial importance, suitably complementing or even fully replacing the strict no-fault liability of operators, owners or producers of autonomous vehicles and of their control components, above all software.

  1. This article is a part of the implementation of the APVV project No. APVV-20-0171 “Concurrence of delicts and quasi-delicts in non-contractual relations and their overlap with contract and property law”.
  2. Slogan “Giving Cars the Power to See, Think, and Learn” by NVIDIA, a Tesla partner in California.
  3. See A. Taeihagh, H. S. M. Lim, Governing Autonomous Vehicles: Emerging Responses for Safety, Liability, Privacy, Cybersecurity, and Industry Risks, in Transport reviews, 39, 2018, https://ssrn.com/abstract=3211839.
  4. L. Collingwood, Privacy implications and liability issues of autonomous vehicles, in Information and Communications Technology Law, 1, 2017, pp. 32-45.
  5. M. A. Richter, M. Hagenmaier, O. Bandte, V. Parida, J. Wincent, Smart cities, urban mobility and autonomous vehicles: How different cities needs different sustainable investment strategies, in Technological Forecasting and Social Change, 184, 2022, p. 1.
  6. Last ride for RAC Intellibus as groundbreaking trial ends, (3rd of July 2023), available at https://southperth.wa.gov.au/about-us/news-and-publications/news-and-public-notices/news-detail/2023/07/03/last-ride-for-rac-intellibus-as-trial-ends.
  7. T. Andrejčák, Mercedes-Benz Future Bus: This bus rides alone. And it’s perfect at it, 20th of November 2017, https://auto.pravda.sk/magazin/clanok/448138-mercedes-benz-future-bus-tento-autobus-jazdi-sam-a-ide-mu-to-skvelo/
  8. J. Procházka, In China they have a hybrid – a train and a bus in one. It runs on drawn tracks, 7th of June 2017, https://www.techbox.sk/v-cine-maju-hybrid-vlak-autobus-v-jednom-jazdi-po-nakreslenych-kolajach.
  9. M. Biel, Germany tests first self-driving tram. A stroller gets in its way, 27th of September 2018, https://www.trend.sk/technologie/nemecku-otestovali-prvu-samojazdiacu-elektricku-cesty-jej-vtlacili-kocik.
  10. R. Mališka, The world’s first autonomous tram runs in Germany, 25th of September 2018, https://techpedia.ta3.com/veda-a-vyvoj/novinky/5923/v-nemecku-jazdila-prva-autonomna-elektricka-na-svete.
  11. P. Steigauf, Radical change in public transport: The end of bus drivers? Robo-buses will soon flood cities, 27th of November 2024, https://www.proficars.sk/novinky/radikalna-zmena-mhd-koniec-soferov-autobusov-robo-busy-vraj-coskoro-zaplavia-mesta/1577.
  12. Autonomous buses in public transport, what’s going on? An overview of projects, technology, challenges, 19th of December 2024, https://www.sustainable-bus.com/its/autonomous-bus-public-transport-driverless/.
  13. N. Lavars, Volvo’s first electric driverless bus swings into action in Singapore, 6th of March 2019, https://newatlas.com/volvo-first-electric-driverless-bus-singapore/58743/.
  14. Volvo 7900 Electric, 25th of October 2019, https://landtransportguru.net/volvo-7900-electric/.
  15. Autonomous buses in public transport, what’s going on? An overview of projects, technology, challenges,19th of December 2024, https://www.sustainable-bus.com/its/autonomous-bus-public-transport-driverless/.
  16. R. Mališka, The world’s first autonomous tram runs in Germany, 25th of September 2018, https://techpedia.ta3.com/veda-a-vyvoj/novinky/5923/v-nemecku-jazdila-prva-autonomna-elektricka-na-svete.
  17. CAVforth Bus wins first Self-driving Industry Vehicle of the Year award, 23rd of November 2023, https://www.smmt.co.uk/2023/11/cavforth-bus-wins-first-self-driving-industry-vehicle-of-the-year-award/.
  18. L. Dyson, The world’s largest-capacity autonomous bus service, 2nd of November 2023, https://pagely.traffictechnologytoday.com/news/autonomous-vehicles/feature-the-worlds-largest-passenger-capacity-autonomous-bus-service.html.
  19. Plan of autonomous mobility up to 2025 of the Ministry of Transporation of the Czech Republic dated 10th April of 2024, https://md.gov.cz/getattachment/Uzitecne-odkazy/Autonomni-mobilita/asdasd/CZ_Plan_Autonomni_mobility.pdf.aspx.
  20. Karsan’s e-Atak Autonomous, green light to enter regular traffic in Norway, 1st of March 2024, https://www.sustainable-bus.com/news/karsans-e-atak-autonomous-green-light-to-enter-regular-traffic-in-norway/.
  21. L. Čížová‚ The city tested the technology for autonomous public transport control, 30th of April, 2021, http://inba.sk/vismo/dokumenty2.asp?id_org=600185&id=4842&p1=8825.
  22. Sophisticated public transport management will help reduce emissions, 27th of April 2021, https://smartmobility.gov.sk/sofistikovane-riadenie-mhd-pomoze-znizovat-emisie/.
  23. Platform of cooperation ‚Intelligent Mobility of Slovakia‘, 27th of April 2021, https://smartmobility.gov.sk/o-nas/platforma-spoluprace-inteligentnej-mobility-slovenska/.
  24. P. Steigauf, Radical change in public transport: The end of bus drivers? Robo-buses will soon flood cities, 27th of November 2024, https://www.proficars.sk/novinky/radikalna-zmena-mhd-koniec-soferov-autobusov-robo-busy-vraj-coskoro-zaplavia-mesta/1577.
  25. S. Zhang, HK’s largest autonomous bus transportation system to launch next year, 4th of July 2024, https://www.chinadailyhk.com/hk/article/587275.
  26. G. Hornung, Rechtsfragen der Industrie 4.0. Datenhoheit – Verantwortlichkeit – rechtliche Grenzen der Vernetzung, Nomos, Baden-Baden, 2018, p. 30.
  27. For algorithmic decision-making systems see K. A. Zweig, T. D. Krafft, Fairness und Qualität algorithmischer Entscheidungen, in R. Mohabbat Kar, B. E. P. Thapa, P. Parycek (eds.), (Un)berechenbar? Algorithmen und Automatisierung in Staat und Gesellschaft, Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS, Kompetenzzentrum Öffentliche IT (ÖFIT), Berlin, 2018, pp. 204-227.
  28. For more on causality see C. Müller-Hengstenberg, S. Kirn, Kausalität und Verantwortung für Schäden, die durch autonome smarte Systeme verursacht werden. Eine Untersuchung der deliktischen Haftung für den Einsatz autonomer Software-agenten, in Computer und Recht, 10, 2018.
  29. Compare H. Zech, Zivilrechtliche Haftung für den Einsatz von Robotern – Zuweisung von Automatisierungs- und Autonomierisiken, in S. Gless, K. Seelmann (eds.), Intelligente Agenten und das Recht, Nomos Verlagsgesellschaft, Baden-Baden, 2016, p. 164.
  30. On the criminal aspects, see e.g. S. Beck, Selbstfahrende Kraftfahrzeuge – aktuelle Probleme der (strafrechtlichen) Fahrlässigkeitshaftung, in B. H. Oppermann, J. Stender-Vorwachs, Autonomes Fahren. Rechtsfolgen, Rechtsprobleme, technische Grundlagen, C. H. Beck, München, 2017, pp. 33-59.
  31. G. Spindler, Roboter, Automation, künstliche Intelligenz, selbst-steuernde Kfz – Braucht das Recht neue Haftungskategorien?, in Computer und Recht, 12, 2015, p. 766; also S. Horner, M. Kaulartz, Haftung 4.0. Verschiebung des Sorgfaltsmaßstabs bei Herstellung und Nutzung autonomer Systeme, in Computer und Recht, 1, 2016, p. 7.
  32. M. R. Calo, Open Robotics, in Maryland Law Review, 3, 2011, p. 125.
  33. G. A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control, Massachusetts Institute of Technology, Massachusetts, 2005, cit. by H. Zech, Zivilrechtliche Haftung für den Einsatz von Robotern – Zuweisung von Automatisierungs- und Autonomierisiken, in S. Gless, K. Seelmann (eds.), Intelligente Agenten und das Recht, Nomos Verlagsgesellschaft, Baden-Baden, 2016, p. 168.
  34. Compare H. Zech, Zivilrechtliche Haftung für den Einsatz von Robotern – Zuweisung von Automatisierungs- und Autonomierisiken, in S. Gless, K. Seelmann (eds.), Intelligente Agenten und das Recht, Nomos Verlagsgesellschaft, Baden-Baden, 2016, p. 168.
  35. Ibid., p. 169.
  36. Ibid., p. 176.
  37. See also T. M. Gasser, Grundlegende und spezielle Rechtsfragen für autonome Fahrzeuge, in M. Maurer, J. Ch. Gerdes, B. Lenz, H. Winner (eds.), Autonomes Fahren. Technische, rechtliche und gesellschaftliche Aspekte, Springer, Berlin, 2015, p. 552.
  38. A study by the National Highway Traffic Safety Administration (NHTSA) revealed that 94% of traffic crashes are due to human factors (National Highway Traffic Safety Administration, Federal Automated Vehicles Policy: Accelerating the Next Revolution in Roadway Safety 5, 2016, www.transportation.gov/AV), and it is autonomous vehicles (while obviously unable to eliminate all of these crashes) that can make a significant contribution to improving road safety, with expert studies reporting reductions in vehicle collisions of up to 80% (J. Albright et al., Automobile insurance in the era of autonomous vehicles. Survey results, 2015, https://home.kpmg.com/content/dam/kpmg/pdf/2016/05/kpmg-automobile-insurance-in-era-autonomous.pdf). This estimate foresees a massive replacement of the current driving automation levels 0-2 with the highly and fully automated levels 3-5. However, due to the gradual introduction of autonomously driven vehicles, this reduction in accidents will not occur at once; studies indicate that by 2035 it should be in the range of 7-15%. These figures refer to individual means of transport.
  39. A. Goldman, Bubble? What bubble?, in The New York Times Magazine, 7th of July, 2011, https://www.nytimes.com/2011/07/10/magazine/marc-andreessen-on-the-dot-com-bubble.html.
  40. For more on risk as a criterion for allocating damages, see S. Meder, Risiko als Kriterium der Schadensverteilung, in Juristenzeitung, 11, 1993.
  41. See T. M. Gasser, Grundlegende und spezielle Rechtsfragen für autonome Fahrzeuge, in M. Maurer, J. Ch. Gerdes, B. Lenz, H. Winner (eds.), Autonomes Fahren. Technische, rechtliche und gesellschaftliche Aspekte, Springer, Berlin, 2015; S. Gless, R. Janal, Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung, in Juristische Rundschau, 10, 2016; S. Horner, M. Kaulartz, Haftung 4.0. Verschiebung des Sorgfaltsmaßstabs bei Herstellung und Nutzung autonomer Systeme, in Computer und Recht, 1, 2016; H. Zech, Gefährdungshaftung und neue Technologien, in JuristenZeitung, 1, 2013.
  42. According to this theory, risks should be objectively attributed to whoever is able to control and manage them, given the fact that the risk originates in his or her sphere of competence, H. B. Schäfer, C. Ott, Lehrbuch der ökonomischen Analyse des Zivilrechts, 1 Aufl., Springer Berlin, Heidelberg, New York, 1986, p. 152. Usually, the person who is allowed to control and operate the system in certain circumstances is the user, operator or producer of the system in question. If a certain probability of damage is “attributable” predominantly to one party, it thus seems fair to attribute to that party the damage that has been realised within that probability, J. Esser, H. L. Weyers, Schuldrecht, Band II, Besonderer Teil, 7 Aufl., C. F. Müller Juristischer Verlag, Heidelberg, 1991, p. 537.
  43. Following the theory of increased risk, responsibility is imposed on those who have created such increased risk through their operations to motivate them to take the utmost care of all aspects of the systems’ operation and implement the necessary control and protection measures. In relation to autonomous control systems in general, it is the producer or operator who, given his position as a management, organisational and control mechanism, is best placed to deal with the hazards arising from the operation of the autonomous control system. It therefore seems fair to attribute to it those risks which originate in its sphere, from which it benefits and which it can control most effectively.
  44. R. Polčák, Liability of Artificial Intelligence and Information Formations without Legal Personality, in Bulletin of Advocacy, 11, 2018, p. 24.
  45. In this case, it is primarily a question of proving causality between the cause of the damage itself and the adverse consequence or the explicit or analogous subsumption of the functioning of autonomous systems under the already existing liability concepts of tort law. In relation to the definition of causation, factual causation itself is problematic, defined in tort law in terms of the causal rules of the conditio sine qua non formula in the continental system of law or the but-for test in the Anglo-American system of law. As Polcak argues, it is never possible to retroactively reconstruct the robot’s operating code and determine why the robot did or did not do something any more than it is logically possible to determine what factors were involved in the robot’s operating code, i.e., why, in effect, the robot programmed itself in a certain way, R. Polčák, Liability of Artificial Intelligence and Information Formations without Legal Personality, in Bulletin of Advocacy, 11, 2018, p. 24. For this reason, it is questionable whether, with this unclear chain of causation, an imputable link can be defined in the sense of the theory of adequate causation (C. Müller-Hengstenberg, S. Kirn, Kausalität und Verantwortung für Schäden, die durch autonome smarte Systeme verursacht werden. Eine Untersuchung der deliktischen Haftung für den Einsatz autonomer Software-agenten, in Computer und Recht, 10, 2018, p. 686) or on what basis the criteria and conditions of legal causation can be established in this relationship of unclear factual causation.
  46. S. Grundmann, § 276 Rn. 50, in Münchener Kommentar zum Bürgerlichen Gesetzbuch: BGB, 7. Aufl., C. H. Beck, München, 2016.
  47. Cf. M. Novotná, M. Jurčová, Liability for damage caused by autonomous and semi-autonomous vehicles under Slovak law, ŠafárikPress UPJŠ, Košice, 2018.
  48. For example, at vehicle autonomy level 3, the autonomous system is able to control the steering while monitoring the environment in which it is moving, but the driver is required to be able to respond appropriately to the system’s request to intervene in the control of the vehicle.
  49. T. M. Gasser (ed.), Rechtsfolgen zunehmender Fahrzeugautomatisierung, Berichte der Bundesanstalt für Straßenwesen, Fahrzeugtechnik, Heft F 83, Verlag für neue Wissenschaft, Bremerhaven, 2012, p. 13 ff.
  50. Also S. Horner, M. Kaulartz, Haftung 4.0. Verschiebung des Sorgfaltsmaßstabs bei Herstellung und Nutzung autonomer Systeme, in Computer und Recht, 1, 2016, p. 9.
  51. Ibid.
  52. M. Novotná, M. Jurčová, Liability for damage caused by autonomous and semi-autonomous vehicles under Slovak law, ŠafárikPress UPJŠ, Košice, 2018.
  53. Problematic in this context is not only the fact that there are no specific normatively captured technical standards and requirements for autonomous control systems, but also the fact that due to the speed of development of the IT sector, such standards may become rapidly outdated.
  54. It is clear that the obligation to maintain standards of science and technology affects those that have been tested in practice as the most effective and have therefore been generally accepted.
  55. In 2015, a pair of hackers hacked into the systems of a Jeep Cherokee in a controlled experiment (which was conducted in normal traffic) by remotely hacking a connected vehicle. The “attackers” were able to remotely control the air conditioning and other cabin features, disable the automatic transmission, and disable the brakes. They could even take control of the vehicle if reverse gear was engaged. For more, see A. Greenberg, Hackers remotely kill a jeep on the highway – with me in it, https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway.
  56. S. Gless, R. Janal, Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung, in Juristische Rundschau, 10, 2016, p. 568.
  57. G. Hornung, Rechtsfragen der Industrie 4.0. Datenhoheit – Verantwortlichkeit – rechtliche Grenzen der Vernetzung, Nomos, Baden-Baden, 2018, p. 31.
  58. In March 2018, a tragic incident occurred in the state of Arizona when a pedestrian died after a collision with an autonomously driven Uber vehicle (the first recorded case of a human being killed by a robot-driven vehicle in normal operation). In May 2018, Uber issued a statement saying that the probable cause of the collision involving the autonomous vehicle prototype was a problem in the software component tasked with deciding how the vehicle should react to objects it detects. While the vehicle’s sensors detected the presence of a pedestrian crossing the road with a bicycle, the software decided that the vehicle did not need to respond immediately. That decision resulted from an algorithm that requires Uber’s software, like that of other autonomous systems, to ignore false positives, i.e. objects in the vehicle’s path that do not pose a problem for the vehicle (e.g., a plastic bag in the air). A. Efrati, Uber Finds Deadly Accident Likely Caused By Software Set to Ignore Objects On Road, https://www.theinformation.com/articles/uber-finds-deadly-accident-likely-caused-by-software-set-to-ignore-objects-on-road?shared=56c9f0114b0bb781.
  59. T. Hey, Die außervertragliche Haftung des Herstellers autonomer Fahrzeuge bei Unfällen im Straßenverkehr, Springer Gabler, Wiesbaden, 2019, p. 63.
  60. S. Kirn, C. D. Müller-Hengstenberg, Rechtliche Risiken autonomer und vernetzter Systeme, De Gruyter Oldenbourg, 2016, p. 102 ff.
  61. The mere fact that the autonomous system as a whole, or some of its components, is not generally considered controllable does not and cannot lead to the exemption of the entity concerned (usually the producer in this context) from the obligation to compensate for the damage caused. On the contrary, it leads to a higher standard of care with regard to the safety obligations of development, production and subsequent monitoring.
  62. S. Horner, M. Kaulartz, Haftung 4.0. Verschiebung des Sorgfaltsmaßstabs bei Herstellung und Nutzung autonomer Systeme, in Computer und Recht, 1, 2016, p. 8.
  63. H. Zech, Gefährdungshaftung und neue Technologien, in JuristenZeitung, 1, 2013, p. 21.
  64. Ibid., p. 22.
  65. M. Novotná, Liability for damage caused by the operation of means of transport, in M. Števček, et al., Civil Code. Commentary, C. H. Beck, Prague, 2015.
  66. Ibid.
  67. For a more detailed definition of this person, see S. Gless, R. Janal, Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung, in Juristische Rundschau, 10, 2016, p. 562.
  68. U. Bose, The black box solution to autonomous liability, in Washington University Law Review, 5, 2015, p. 1325.
  69. See e.g. considerations of H. Zech, Zivilrechtliche Haftung für den Einsatz von Robotern – Zuweisung von Automatisierungs- und Autonomierisiken, in S. Gless, K. Seelmann (eds.), Intelligente Agenten und das Recht, Nomos Verlagsgesellschaft, Baden-Baden, 2016, p. 179.
  70. On the status and analysis of the conditions for granting legal personality to artificial intelligence, see J. Zibner, Acceptance of legal personality in the case of artificial intelligence, in Review of Law and Technology, 17, 2018; S. M. Solaiman, Legal personhood of robots, corporations, idols and chimpanzees: a quest for legitimacy, in Artificial Intelligence and Law, 2, 2017, pp. 155-179.
  71. S. Gless, R. Janal, Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung, in Juristische Rundschau, 10, 2016, p. 571.
  72. H. Zech, Gefährdungshaftung und neue Technologien, in JuristenZeitung, 1, 2013, p. 28.
  73. Ibid.
  74. The limits result in particular from the nature of the Directive as a pro-consumer regulation: it confers standing only on a victim who has suffered injury to life or health, or damage to property other than the defective product itself, where that property is of a type ordinarily intended for private use or consumption and was used by the victim mainly for that purpose. It follows that the enhanced producer liability under the Product Liability Directive does not apply to damage caused by autonomous systems used on public goods or municipal property, or to systems not intended for personal use; autonomous transport in smart cities is therefore not covered by the Directive.
  75. https://www.legislation.gov.uk/ukpga/2018/18/pdfs/ukpga_20180018_en.pdf.
  76. E. Jędrzejewska, Autonomous vehicles and the issue of liability for damage caused by the movement of such vehicle, in Journal of Modern Science, 2, 2023, Vol. 51, p. 635.
  77. Ibid., p. 636.
  78. Ľ. Sisák, Artificial intelligence and the Slovak law of obligations: non-conforming performance and non-contractual liability arising out of damage caused to another, in J. Klučka, L. Bakošová, Ľ. Sisák (eds.), Artificial intelligence from the perspective of law and ethics: contemporary issues, perspectives and challenges, Praha, Leges, 2021, pp. 156-157.

 

Marianna Novotná

Associate Professor of Civil Law at the University of Trnava, Slovakia

Veronika Zoričáková

Assistant Professor at the University of Trnava, Slovakia