In the 21st century, regulating AI usage is crucial for protecting citizens. The global landscape has seen a dichotomy in AI regulation, with Europe adopting a conservative and robust stance, and the US embracing a laissez-faire approach. Both approaches have benefits and pitfalls, especially for persons with disabilities. AI's potential to bridge societal gaps is undeniable, but it requires adequate regulation. This paper proposes a hybrid regulatory model, a “regulatory dalmatian approach”, to enhance the European AI landscape, particularly for persons with disabilities. This approach combines best practices from both the European and American approaches, allowing for innovation and experimentation while maintaining a regulatory landscape that protects and safeguards citizens’ rights. This approach would enable Europe to preserve its regulatory framework while integrating the most effective strategies from both approaches. Consequently, the digital citizenry would benefit without the regulatory burden of the European approach and the laissez-faire approach of the American stance.
1. Introductory remarks
Against the backdrop of a rapidly changing societal landscape[1], national and international policymakers have been confronted with a somewhat vintage[2], yet recently accepted phenomenon: Artificial Intelligence (AI)[3]. Given that there is no question regarding AI’s durability and permeability – both as a concept and as a tool – within every aspect of modern-day society, its regulation and implementation have been an all-encompassing, at times controversial, ongoing subject of debate. AI enthusiasts and Brave New World[4] technology skeptics alike have had to accept the unprecedented pace at which AI transforms society, as well as its ability to transcend geographic boundaries. Therefore, the necessity of regulatory and ethical checks becomes apparent. In this regard, the world has witnessed the formation of two very distinct schools of thought which differ in vision, means and outputs.
On the one hand stands the European human-centered and risk-based outlook on AI: an approach that captures AI’s efficiency potential whilst cautioning against its problematic or unethical ramifications[5]. On the other hand stands the American approach: a market-driven vision championing innovation, competition, and economic growth, though not bereft of shortcomings[6].
Although AI’s economic and efficiency potential has been discussed at great length[7], a new stream in academic literature has been gaining momentum; that is, the intersection between AI and fragility[8]. In fact, with the advent of the digital age, numerous categories of people face the risk of social exclusion due to social, economic, or systemic factors.
Hence, the purpose of this research is twofold. Firstly, a comparative approach to AI vision and regulation is discussed in order to extrapolate the underlying best practices. Secondly, the heart of the research focuses on AI’s transformative, yet occasionally shadowed, potential for social good. In essence, existing case studies extrapolated from the two aforementioned regulatory systems are compared and assessed in order to expand upon AI’s potential in ameliorating the quality of life of people with disabilities. By doing so, this research aims to position itself within the academic debate at the intersection between AI and fragility by elucidating identified best practices and possible ways forward within such a realm.
2. The European approach and the American approach: poles apart
The recent regulatory developments within both the Old Continent and the New one are a testament to the divergence in vision and, for the most part, outcome of the implementation of AI-based solutions and their subsequent role within society. In fact, the global arena is witnessing an AI regulatory tug-of-war, where, on the one hand, there is a strong push toward laissez-fairism and scant regulation to maximize innovation, and on the other, a strong pull for greater user protection via strict and ethical regulation. This difference in approach warrants mention, as it is of particular significance in its interface with the development of AI solutions for the benefit of persons with disabilities: on the one hand, Europe’s anthropocentric regulatory protectionism favors the development of inclusive AI solutions but deters capital investment; on the other hand, American regulatory laissez-fairism attracts investments, but such competitiveness has the potential to disincentivize the development of inclusive AI solutions.
However, prior to delving into the respective legal frameworks, it is of utmost importance to understand the context within which these two views coexist and operate. These differences are not so much attributable to Europe being founded on history and America being founded on philosophy, as famously stated by Margaret Thatcher[9], but rather to the viewpoints the two systems have developed in light of their respective histories and preferred modi operandi.
2.1. European Approach
Born out of economic convenience, the European Union (EU) has witnessed various structural developments over time, culminating in the signing of the Maastricht Treaty and thus signaling to the world a shared intention of extending the Union’s scope: from a purely economic alliance, the EU intended to become a political union based upon shared ethics and values, as elucidated within both the Treaty on European Union (TEU) and the Treaty on the Functioning of the European Union (TFEU)[10]. Hence, it logically follows that the European stance, and subsequent approach to AI and its regulation, ought to be in consonance with the EU’s human-centric motto of deriving unity from diversity whilst prioritizing core founding values such as human dignity, freedom and democracy[11].
As part of its political and economic mission to remain a competitive global player, the EU endorsed the importance of the Twin Transitions – green and digital – embedding AI within its broader European Digital Decade Policy Programme 2030[12]. Moreover, various regulatory packages introduced by the EU[13] lay the foundations for the Artificial Intelligence Act (AIA)[14], which not only represents the world’s first comprehensive legally binding framework for the development, market placing and use of AI systems, but also reaffirms the Union’s commitment to its legacy of values in its regulatory efforts.
Worthy of note is the fact that the scopes of the AI Act and the aforementioned regulatory packages are heavily intertwined, forming a robust regulatory net to which AI systems ought to conform. By doing so, the Union became a leader in AI regulation, which is, in and of itself, a regulatory best practice, as it not only sets a supranational ideological and regulatory standard, but also steers individual national standards within the EU toward harmonization, essentially creating a robust and compact AI regulatory ecosystem.
A second best practice extrapolated from the AI Act is the Union’s push toward human oversight of AI deployment, which serves to minimize potential risks and thus balance the promise of automation with the necessity for authentic human reasoning. The significance of establishing human oversight lies in its power to limit AI: by requiring human checks, the Union established boundaries to the autonomy of AI systems[15].
Nonetheless, the Union’s conservative-leaning stance toward AI regulation is not devoid of potential limitations. For instance, if on the one hand adopting a structured classification system of AI applications encourages safety and transparency, on the other its regulatory staticity, in the face of the dynamism that characterizes the very realm it is meant to regulate, may defeat its purpose, or at least require further futureproofing. This issue is best exemplified by the AI Act’s limitations vis-à-vis General-Purpose AI models (GPAI)[16], whose versatile nature and manifold applications render them unfit for a single risk category. To bridge such a gap, the Union is set to draft a General-Purpose AI Code of Practice[17].
Furthermore, given the AI Act’s stringent regulations placed at every stage along the AI value chain[18], the Union can inadvertently place a heavy regulatory burden on innovation on two main levels: (i) the international realm, whereby foreign companies seeking to operate within the EU market might find it troublesome to harmonize their practices to European standards, and (ii) European developers with limited financial maneuvering space, for whom, high compliance costs and excessive bureaucratization represent barriers to entry. Although the EU has envisaged the creation of regulatory sandboxes[19] to bridge this gap, the aforementioned stringent limitations have deleterious consequences within the Union; a region in need of investments to level the infrastructural and research playing field, fuel innovation and increase its competitiveness and attractiveness within the global AI realm[20].
2.2. American Approach
Rooted in liberty and freedom, the American approach to AI regulation is best exemplified by founding father of the United States of America (USA), James Madison’s cautionary words on excessive regulation: «The internal effects of a mutable policy are still more calamitous. It poisons the blessings of liberty itself. It will be of little avail to the people, that the laws are made by men of their own choice, if the laws be so voluminous that they cannot be read, or so incoherent that they cannot be understood: if they be repealed or revised before they are promulg[at]ed, or undergo such incessant changes, that no man who knows what the law is to-day, can guess what it will be to-morrow»[21]
Keeping faith with such a vision, the American stance is characterized by limited regulation to ensure a market-driven approach capable of fostering innovation and enhancing economic competitiveness[22]. Unlike the EU’s, the American approach is decentralized and sector-specific, with various agencies entrusted with overseeing AI applications within their respective domains[23]. Moreover, in the absence of an exhaustive and subject-specific means of regulating AI, coupled with Congress’ light legislative touch, agencies have been forced to apply older legislative tools to this emerging technology, ultimately leading to legislative fragmentation and allowing for increased judicial discretion[24].
In an attempt to harness AI’s benefits whilst tackling its challenges, former President Biden issued Executive Order 14110 (EO 14110)[25]. Such a directive aimed at shaping American AI governance in a responsible and safe manner. Nonetheless, EO 14110 was rescinded by current President Trump, who, via EO 14179[26], revoked various AI policies and directives acting as barriers to AI innovation, thus reaffirming America’s commitment to remaining at the forefront of AI innovation by pledging allegiance to its market-driven modus operandi.
The adoption of a liberal approach toward AI regulation is, by its very nature, a best practice innovation-wise, as fewer restrictions can foster innovation by subjecting developers to a lighter regulatory burden, thus, lowering compliance costs and increasing ultimate payoffs. This, in turn, increases investment opportunities within the USA, rendering it a fertile land for both the development of AI companies and the adoption of AI solutions within existing technology hubs.
Nonetheless, the American approach has by no means created an El Doradean AI landscape[27]. In fact, the lack of robust regulatory guidance poses risks on two fronts: (i) privacy concerns for users, and (ii) legal exposure for businesses. As to the former, unlike the human-centric regulatory philosophy shaping the European approach, the American approach does not guarantee robust federal privacy protection. Instead, privacy in the USA – with the exception of twenty US states[28] – is governed by a jigsaw-puzzle ecosystem of sectoral laws that can often fall short in guaranteeing adequate user protection[29]. Regarding the latter, inadequate or non-exhaustive regulatory guidelines can subject businesses offering AI solutions to significant legal exposure, which can, in turn, lead to burdensome court proceedings or costly settlements[30].
3. No one left beh(A)Ind?
A decade back, in an attempt to build upon the best practices of the Millennium Development Goals and to close the circle on such objectives, the United Nations announced the Sustainable Development Goals (SDGs) as part of its 2030 Agenda[31]. Among the pillar principles of such an agenda were ideals of universality, integration and respect for human rights, applied with the aim of «leaving no one behind»[32]. Within such a framework, the concept of disability is referenced multiple times[33], particularly in regard to quality education (SDG 4), decent work and economic growth (SDG 8), reduced inequalities (SDG 10), and sustainable cities and communities (SDG 11)[34].
Although the proliferation of accessible AI technology initially directed the global focus of attention toward its application in commercial or efficiency-enhancing solutions, AI’s potential in ameliorating the quality of life of people with disabilities has recently gained momentum. In fact, when AI’s power is harnessed and directed toward solutions with an expected and meaningful social impact, AI solutions can transcend their infamous Orwellian[35] attributes and become an integral force for inclusion and social good, not only by leveling the playing field for persons with disabilities, but also by acting as dignity- and autonomy-enhancing vehicles for social change[36].
3.1. Landscape for persons with disabilities
Edward Hopper’s 1942 masterpiece Nighthawks has withstood the test of time as one of the greatest depictions of loneliness: four people in a social environment united by physical proximity, yet alone[37]. Though such a depiction predates the various recent groundbreaking technological innovations, it can almost be seen as prophesying the exacerbation of that feeling of loneliness which can result from the ephemeral yet everlasting connectivity that characterizes the twenty-first century.
While technology and digitalization processes have pervaded virtually every possible realm and have altered the nature of our interface with society by adding a layer of hyper-connectivity, loneliness and social isolation have become rampant[38]. The aforementioned trend becomes even more worrisome when taking into account the abyssal divide present between the digitally proficient and the digitally excluded[39], such as the elderly or persons with disabilities.
For the former category of users the digital divide is less problematic, as it is typically characterized by unfamiliarity and skepticism toward the rapidly evolving digital realm and is thus amenable to remedy; for the latter, however, trouble arises on two main fronts. Firstly, given the vastness of the disability spectrum and the inherent layer of subjectivity that characterizes such a realm, identifying the precise needs of persons with disabilities – the sole means of developing targeted approaches and thus eliminating accessibility barriers – is a challenge. Secondly, in light of the liveliness that characterizes the digitization process and the emergence of new technologies, robust and dynamic technological capacity-building programs would be required to bridge the digital divide through the adoption of new technologies, in order to render technology an asset, not a liability, for persons with disabilities.
3.2. AI solutions for persons with disabilities in the context of the digital transition
As a result of the aforementioned digital transition, the world has witnessed, though to varying extents, the migration of services and procedures which once required some degree of physicality to the e-realm. From eHealth[40], which aims at improving healthcare with the aid of information and communication technologies (ICTs), to eGovernment[41], where the main objective is that of developing border-transcending digital public services, the digital transition is shaping society at its core. Within such a context, the development of AI solutions is set to enhance the already-running digitalization engine.
Whereas on the one hand the aforementioned digital migration of services is desirable in terms of efficiency, such a process, if not dealt with in an all-encompassing manner, might burn as many bridges as it creates, ultimately rendering digitalization a force of exclusion. This can be the case for persons with disabilities, who, by oftentimes being left at the margins of such processes, tend to face great barriers to entry.
Nonetheless, upon acknowledgement of the structural nature of such barriers to entry, particular attention has been devoted to the concept of digital accessibility, or e-accessibility, which refers to the ease with which digital products can be used by everyone, without discriminating on the basis of an individual’s abilities or disabilities[42]. Within the e-accessibility realm, the World Wide Web Consortium (W3C) developed the Web Content Accessibility Guidelines (WCAG) with the aim of providing a harmonized international standard for web content accessibility, specifically for the benefit of persons with disabilities[43]. The WCAG, in light of its innovatively inclusive stance and robust standard-setting capabilities, is, without a doubt, a best practice in terms of e-accessibility.
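To make the WCAG’s practical import concrete, part of an e-accessibility audit can be automated. The following minimal Python sketch (a purely hypothetical illustration, not tooling used by the W3C or by any auditing firm) flags `img` elements that lack an `alt` attribute, in the spirit of WCAG success criterion 1.1.1 on non-text content:

```python
from html.parser import HTMLParser


class AltTextAuditor(HTMLParser):
    """Collects <img> tags that carry no alt attribute at all.

    Hypothetical sketch of one WCAG-style automated check; real audits
    (and the user testing that complements them) cover far more ground.
    """

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Decorative images may legitimately use alt=""; only a
            # wholly absent alt attribute is flagged here.
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "<unknown>"))


def audit(html: str) -> list:
    """Return the src of every image in the markup lacking an alt attribute."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.missing_alt


page = '<img src="chart.png"><img src="logo.png" alt="Company logo">'
print(audit(page))  # → ['chart.png']
```

Automated checks of this kind can approximate only a fraction of the WCAG; conformance ultimately requires human evaluation, which is precisely why user testing by persons with disabilities remains indispensable.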
Similarly, with regard to accessibility best practices in favor of persons with disabilities, the Estonian model for the e-State is worthy of mention. With its inception in the ambitious Tiger Leap program[44], implemented at the dusk of the twentieth century, Estonia has since developed into a fully functioning digital e-State where the vast majority of public services are «just a click away»[45]. Estonia’s digital transition is particularly interesting as it has been capable of truly embracing the notion of “leaving no one behind”. By curating every aspect of the digital transition, Estonia has attended to the needs of persons with disabilities via various initiatives, such as piloting digital skills development training programs for a wide array of users, or establishing a representative, coherent and outcome-oriented National Accessibility Taskforce whose aim is that of auditing e-Services to identify shortcomings in e-accessibility[46]. This has allowed Estonia to become a pioneer in the digital transition, rendering it largely immune to the typical digital divide dynamics plaguing other countries at different stages of the digitalization process.
Throughout the course of the digital transition, Estonia gave rise to a second best practice: the development of a population with superb digital skills. In fact, with the dawn of AI systems, Estonian startups are capitalizing on their digital capabilities by integrating AI solutions into their business models[47]. Although most such applications target efficiency maximization within the corporate realm, there are a few noteworthy exceptions whose focus is on ameliorating the quality of life of persons with disabilities. The Estonian startup 7Sense is one such example.
Rooted in a strong belief in the transformative and empowering potential of technology, 7Sense has developed complex devices, like SuperBrain 1, capable of translating digital information into tactile signals. In doing so, 7Sense has been at the forefront of the new haptic revolution; a technology and AI crossover driven effort to reinvent sight via a remote sense of touch[48]. The development of such a system is a milestone in tech and AI applications for the benefit of persons with disabilities as it can be considered a first step in bridging the gap between reality as is, and how it is perceived or lived differently by persons with disabilities.
Following in the footsteps of the Estonian example, various European countries have been exploring AI’s potential in leveling the societal accessibility playing field for persons with disabilities. AccessiWay and In&Valid are two startups created with precisely this operational focus. Concentrating on AI’s potential in increasing e-accessibility, AccessiWay is revolutionary insofar as it provides auditing services to businesses and international organizations in order to determine: (i) the level of e-accessibility of a specific platform, and (ii) the required steps forward[49]. In doing so, AccessiWay embodies the ethos of democratic deliberation, as all of its audits are accompanied by user tests conducted by persons with disabilities for persons with disabilities.
Similarly, the startup In&Valid[50] aims at eliminating barriers to access for persons with disabilities. In fact, In&Valid focuses on harnessing AI’s management and operational capacities to foster the creation of a new ecosystem for the provision and delivery of services for the benefit of persons with disabilities and their respective caregivers. Within such an ecosystem, users will be able to identify employment opportunities and access public services, counseling services, and training programs[51].
From Echo Lab’s CASPER[52] captioning tool to Microsoft Seeing AI[53] via Synchron’s Brain-Computer Interface[54], the USA has also been a leader in terms of AI applications for the benefit of persons with disabilities. Such a large-scale adoption of applied AI-driven solutions enhancing both the autonomy and the dignity of persons with disabilities is a testament to the true potential of AI as a catalyst for positive social impact.
3.3. Balancing regulation and quality of life for persons with disabilities
Although the aforementioned initiatives are of paramount importance in shedding light on AI’s transformative capabilities in favor of the development of a digitally equitable society, such solutions are not bereft of potential risks. Two of these risks are of particular relevance, especially considering the target beneficiary audience: (i) AI’s ability to deceive[55]; and (ii) the algorithmic bias present within AI solutions[56].
The fact that AI, at its current stage of development, is capable of deceiving humans entrusted with its oversight is problematic on a multitude of levels. Firstly, this raises issues in terms of AI’s trustworthiness, particularly when it comes to AI applications tailored toward specific categories of users and crafted to serve the social good. Secondly, AI’s ability to deceive is problematic in regulatory terms, too. The European approach to AI regulation requires human oversight, especially when the AI system used is considered high-risk. Similarly, though the American approach lacks federal human oversight regulation, various sector-specific laws require such compliance[57]. Granted that most of the aforementioned case studies elucidating best practices in AI solutions vis-à-vis persons with disabilities would fall into the EU high-risk or American required compliance categories, AI’s ability to deceive places a hefty burden on human oversight.
When considering AI’s potential in respect of improving the quality of life of persons with disabilities, algorithmic biases[58] which may yield socially biased outcomes become a prime concern. Granted, algorithms are the pillars upon which the digital economy rests. As such, it is of utmost importance to ensure that such systems reflect principles of fairness, accountability, and transparency[59].
Nonetheless, it has come to light that such systems reflect societal long-standing structural inequalities[60]. From the lack of holistically inclusive designs to concrete exacerbations of existing discriminations stemming from the core of the algorithm’s training, AI-driven solutions have been targeted for undermining the principle of social justice by reinforcing structural inequalities[61]. Therefore, if on the one hand AI solutions show ample transformative potential in quality of life enhancement, on the other hand potential algorithmic biases present a halting force to be addressed in the pursuit of the aforementioned objectives.
It is precisely at this juncture that the Italian case gains relevance. Deeply ingrained within Italian law is the impossibility of withdrawing or detaching the human component from the decision-making process[62]. In this regard, the Italian stance appears harmonized with the anthropocentric vision that characterizes the Union, whereby an intrinsic de facto value is granted to human beings in light of their very essence as human beings[63]. Although such a concept is far from being a novelty, the discourse on the necessity of human liability within modern decision-making processes – especially when considering the AI realm – gains relevance. While the proposed motivation appears rooted in straightforward accountability concerns, such a rationale falls short vis-à-vis the aforementioned algorithmic biases, whereby the fallacy of human logic, particularly in terms of intrinsic biases, emerges.
Of these, perhaps the most emblematic example is portrayed by the judicial process. Article 111 of the Italian Constitution requires all judicial decisions to be motivated, implicitly requiring a human component in judicial procedures. However, the emergence of algorithmic biases might call into question this acclaimed human integrity, as algorithms, by their very nature, are the mere product of human rationale, including, alongside all its virtues, human biases. Hence, the natural questions that arise are: (i) in the era of technology, what is the value added of human beings in decision-making procedures?; (ii) how can the identified value added be catered toward serving the common good?; and (iii) where does the comfort stemming from human accountability derive from, and why is it not applicable to the artifacts of human beings (i.e., algorithms)?
In light of the above-stated structural concerns, a reassessment of AI driven solutions’ potential vis-à-vis the current troublesome scenarios that have arisen, is warranted as it is only through such an assessment that an equitable way forward can be crafted. Given the fact that AI solutions have pervaded the present, there is no question as to whether they will pervade the future. It is precisely for this reason that the adoption of such solutions ought to be approached with an all-encompassing layer of caution, particularly when it comes to its applicability and repercussions within the disability realm. In such regard, it is of utmost importance to recall the fact that the beneficiaries of AI-driven solutions are human; AI is the product of the brightness of humanity, for humanity. Hence, in order to implement AI solutions capable of enhancing the common good, the entire AI lifecycle – from development to regulation – will require the act of balancing promise with peril.
4. A potential way forward?
History has charted the development of nations via the gold standard[64]. Nowadays, society has witnessed the formation of a new standard, one based on information extrapolated from invaluable data; somewhat of an Orwellian prophecy whereby «big [Tech] is always watching»[65].
Therefore, in a society where transparency is not a de facto approach, and privacy becomes a privilege, the fright surrounding AI solutions becomes understandable. However, what is oftentimes overlooked is the fact that technology, per se, is never inherently good or bad. Technology, in a Machiavellian sense, is really a means to an end[66]. And, whether that end is benevolent or not, or whether the means can exhaustively justify the ends, is an entirely subjective endeavor.
Such an endeavor is heavily correlated with intrinsic societal values, as elucidated by the differing approaches to AI, from its inception to its regulation. Whereas the European stance on AI stems from Europe’s anthropocentric system of values, thus, establishing baseline standards at the benefit of the European people, the American stance is entirely different, prioritizing returns and dynamism, at times at the expense of users.
This dichotomy in results is particularly relevant when considering AI applications for the benefit of persons with disabilities. AI-driven solutions do possess the potential to alter the current reality of persons with disabilities for the better, through the enhancement of accessibility, independence and inclusion. However, in order to leverage the positive power of AI, structural fine-tuning is required at all stages of the process; a cradle-to-grave approach guiding AI development from its early stages of algorithmic training to the later regulatory juncture.
Of utmost importance throughout the entirety of the aforestated process, though, is an often-disregarded aspect: participation. For the development of durable and effective AI-driven solutions within the disability realm, it is not sufficient to rely solely on outsiders’ perspectives of what is needed. Persons with disabilities ought to be at the forefront of such a process, guiding targeted innovation according to explicit needs, hence rendering the ultimate beneficiaries structural pillars of the innovation process, not mere outcome claimants.
In this regard, the optimal way forward for the development of AI-driven solutions for the benefit of persons with disabilities relies, first and foremost, on a re-democratization of the process of innovation. Oftentimes, innovative solutions have ended up being shelved by persons with disabilities given unforeseen accessibility barriers on the users’ end. This has notably been the case with AI transcription technologies, which could have been a game-changing solution in enhancing the autonomy of persons with disabilities but are oftentimes disregarded as not being user-friendly. However, through the inclusion of the beneficiaries within the decision-making process – from design to regulation – stakeholders ensure the development of a useful and accessible solution that concretely bridges pinpointed gaps[67].
Moreover, a second crucial aspect to consider going forward is the perpetuation of inequalities caused by the lack of universal coverage in AI-driven solutions for persons with disabilities. Although innovation is desirable given its inclusive potential, when not accompanied by appropriate coverage efforts in the provision of assistive products, innovation becomes a source of perpetuated inequalities. Hence, innovation ought to be accompanied by adequate, stable, and predictable financial resources in order to bridge more gaps than it widens[68].
Lastly, there is no way forward in the development of AI-driven solutions of true benefit to persons with disabilities without robust regulation in the face of fragility. Ultimately, the two aforementioned regulatory stances, the European and the American, both give rise to numerous best practices worth capitalizing on as a viable path forward. American laissez-faire in matters of regulation is an ideal magnet for innovation, whereas the European conservative stance on AI allows for unparalleled user protection through its robust regulatory standards.
Hence, a regulatory dalmatian approach[69] could be a viable way forward for fostering innovation in a context of protection for the benefit of persons with disabilities. Following an attractive makeover of the European AI regulatory sandboxes, including speedier approval and onboarding processes, enhanced public-private cooperation and amplified cross-border compatibility, European innovators would benefit from taking advantage of the provisions elucidated in Article 57 of the AIA. In fact, by leveraging the power of regulatory sandboxes, European innovators could operate on the strengths of the European regulatory framework whilst marginally capitulating to the charms of the American approach. In so doing, startups and enterprises would be able to flirt with innovation in a flexible yet protected environment without being subjected to the stringent constraints – which oftentimes halt innovation, especially in the development of AI for social good – that characterize the broader European AI regulatory landscape.
In this regard, a potential way forward could be represented by a merger of the best of both worlds: American dynamism and European core-values protectionism. Such a regulatory dalmatian approach finds its novelty in terminology, not in practice. In fact, a preceding example of a European regulatory sandbox with American characteristics – though extrapolated from a disparate realm – is the FinTech Sandbox in Lithuania, which has placed Lithuania at the heart of FinTech innovation[70].
The importance of Lithuania’s FinTech Sandbox stems from its ability to leverage European regulatory safeguards to the advantage of innovation. This achievement, however, was made possible by uniting best practices from both the Old Continent and the New. For instance, Lithuania embraced a meritocracy of innovation – a forma mentis similar to that of the United States – by welcoming projects without strict geographical barriers to entry; a notable example is Revolut, which leveraged Lithuania’s FinTech Sandbox to scale its financial services throughout the Union. Moreover, Lithuania’s FinTech Sandbox can be considered a success story in light of a few other characteristics that render it the poster-child elucidation of the regulatory dalmatian approach, namely: (i) flexible regulatory oversight, whereby the Bank of Lithuania, taking advantage of the regulatory sandbox, is able to apply a looser regulatory framework than that upheld across the Union – here, a parallelism with American state-level FinTech sandboxes is due; (ii) fast-track licensing coupled with innovation-friendly policies which, again, resemble those of American FinTech-friendly states; and (iii) embracing, not fearing, emerging technologies, as the Lithuanian FinTech regulatory sandbox first-handedly encourages the development and testing of innovative solutions, even those which are, at times, frowned upon. This closely resembles the American vision of innovation, whereby cutting-edge developments are welcomed – though in a controlled regulatory environment – even at the risk of being disruptive[71].
This same approach – a “regulatory dalmatian approach” – could revolutionize the modern-day European AI landscape, bridging the gap between European goals of AI leadership and reality. Most importantly, such an approach would allow for truly inclusive AI innovation for the benefit of persons with disabilities for two main reasons: (i) user-centered co-creation and feedback, and (ii) risk reduction for developers. These controlled environments allow for the real-world testing of emerging technologies in the presence of fewer regulatory hurdles acting as barriers to innovation. For the development of AI solutions for the benefit of persons with disabilities, this means more than mere technical experimentation; it enables meaningful and durable user-centered co-creation. Given that regulatory sandboxes are oftentimes accompanied by temporary exemptions or flexible compliance pathways, developers are allowed to experiment without carrying extreme financial or regulatory burdens. This, in turn, fosters an environment in which it becomes convenient for developers to directly involve the beneficiaries, from the design process, through development, to deployment. Such an approach favors timely beneficiary feedback whilst supporting rapid iteration, thereby keeping at the heart of the process the true, explicitly stated needs of the beneficiaries in developing functional, accessible and tailor-made AI solutions for persons with disabilities.
In essence, a regulatory dalmatian approach would coat innovation in European standards, with sporadic specks of regulatory leeway as exemplified by the American approach. Ultimately, through the sporadic and calculated merger of the best practices extrapolated from the two aforementioned approaches, the European landscape could become an AI innovation hub in which dynamism is balanced with safety, ultimately devising competitive and accessible solutions for all.
5. Preliminary conclusion
As the global landscape continues to grapple with the transformative force that is AI, tensions between the desire for innovation and the necessity of regulation remain at the heart of the debate. Although the foregoing analysis of the two dichotomous approaches to AI regulation underscores various best practices, it also reveals that neither unfettered innovation nor inflexible regulation alone is fully capable of harnessing AI’s potential, particularly in terms of its transformative power for social good. Hence, the challenge lies in crafting an approach to AI development and regulation capable of balancing the need for groundbreaking technological advancement with the imperative of safeguarding fundamental rights and ensuring equitable access. Such a balance is of crucial importance particularly when considering AI-driven solutions deployed to ameliorate the quality of life of persons with disabilities; a demographic that has oftentimes remained at the periphery of inclusion yet stands to benefit immensely from AI’s potential.
Hence, moving forward, a hybridized regulatory approach merging the best practices of the European approach with those of the American one could serve as a regulatory model for fostering the development and adoption of AI solutions that are safe, competitive, and inclusive. This, coupled with the enhancement of cross-sector collaboration and the embedding of participatory design principles centered on the insights provided by persons with disabilities, would allow AI’s potential to be harnessed as a force of unity, equity, dignity, and autonomy; thus guaranteeing that, in a time when the trajectory of societal progress is dictated by technology, no one is left behind – or beh(A)Ind.
- This research aims to provide an initial comparative analysis of key identified differences in Artificial Intelligence governance between the United States and the European Union, whilst granting particular attention to how the two aforementioned frameworks impact both the development and deployment of AI solutions targeting the social good, in this case: AI solutions for the benefit of persons with disabilities. Hence, while the analysis aims to identify core foundational contrasts and potential areas of impact – always with a focus on the development of AI solutions for social good and not mere economic gains – it is intended as a point of departure for a broader research agenda focusing on the specificities underlying the intersection between inclusion and innovation. ↑
- The term “Artificial Intelligence” (AI) was coined by Professor John McCarthy in the mid-1950s to describe the seeming manifestation of intelligence exhibited by computers whilst performing tasks that would otherwise require human intelligence. See, for example, V. Rajaraman, John McCarthy – Father of Artificial Intelligence, in Resonance, 19, 2014, available at: https://doi.org/10.1007/s12045-014-0027-9. ↑
- Given that Artificial Intelligence (AI) is solely the theoretical foundation which allows for the functioning of AI systems, for the purpose of this research, the two terms are going to be utilized interchangeably. Granted the lack of consensus regarding a working definition of AI systems, the following definition (AI Act, Article 3) is going to be adopted for the purpose of this research: «‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments». For reference, see Regulation 2024/1689/EU of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). Whilst discussed at great length, for further reference on the subject of Artificial Intelligence, see, among others, (i) Digitalization: D.U. Galetta, Digitalizzazione e diritto ad una buona amministrazione (il procedimento amministrativo, fra diritto UE nuove tecnologie dell’informazione e della comunicazione), in R. Cavallo Perin, D.U. Galetta (a cura di), Il Diritto dell’Amministrazione Pubblica Digitale, II Ed., Giappichelli, Torino, 2025; S. D’Ancona, E. Furiosi, Digitalizzazione e Giustizia Amministrativa, in R. Cavallo Perin, D.U. Galetta (a cura di), Il Diritto dell’Amministrazione Pubblica Digitale, II Ed., Giappichelli, Torino, 2025; G. Armao, La dimensione digitale della coesione europea e l’emersione del criterio del “non nuocere alla coesione”, in CERIDAP, 2024, available at DOI: 10.13130/2723-9195/2024-4-74; F. 
Conte, La trasformazione digitale della pubblica amministrazione: il processo di transizione verso l’amministrazione algoritmica, in Federalismi.it, 2023, available at https://www.federalismi.it/nv14/articolo-documento.cfm?Artid=48765; I. Macrì, Dalle infrastrutture digitali delle amministrazioni ai cloud, il nuovo regolamento per la sicurezza dei dati e dei servizi pubblici, in Azienditalia, 2022, available at https://biblio.liuc.it/scripts/essper/schedaArticolo.asp?codice=11443195; (ii) Artificial Intelligence: F. Di Porto, M. Fontana, La regolazione di fronte alle sfide dell’intelligenza artificiale, in R. Cavallo Perin, D.U. Galetta (a cura di), Il Diritto dell’Amministrazione Pubblica Digitale, II Ed., Giappichelli, Torino, 2025; S. Orlando, Regole di immissione sul mercato e «pratiche di intelligenza artificiale» vietate nella proposta di Artificial Intelligence Act, in Persona e Mercato, 2022, available at https://iris.uniroma1.it/retrieve/c62b2bfa-004e-404f-808f-e0d29e25c3d1/Orlando_Regole_2022.pdf; S. Orlando, Gli emendamenti alla proposta di AI Act approvati dal Parlamento europeo il 14.6.2023, in Persona e Mercato, 2023, available at https://iris.uniroma1.it/handle/11573/1687965; S. Aceto di Capriglia, Intelligenza artificiale: una sfida globale tra rischi, prospettive e responsabilità. Le soluzioni assunte dai governi unionale, statunitense e sinico. Uno studio comparato, in Federalismi.it, 2024, available at https://www.federalismi.it/nv14/articolo-documento.cfm?Artid=50422&content=&content_author=; G. Crialesi, Verso un’intelligenza artificiale UE antropocentrica e affidabile che garantirà la sicurezza e i diritti di imprese e cittadini, in Pratica Fiscale e Professionale, 2024; G. Lo Sapio, L’Artificial Intelligence Act e la prova di resistenza per la legalità algoritmica, in Federalismi.it, 2024, available at https://www.federalismi.it/nv14/articolo-documento.cfm?Artid=50868; J. 
Himmelreich, Against “Democratizing AI”, in AI & Society, 2022, available at https://link.springer.com/article/10.1007/s00146-021-01357-z; (iii) Artificial Intelligence and Administrative Justice: R. Cavallo Perin, G. M. Racca, Intelligenza artificiale e responsabilità della pubblica amministrazione, in R. Cavallo Perin, D.U. Galetta (a cura di), Il Diritto dell’Amministrazione Pubblica Digitale, II Ed., Giappichelli, Torino, 2025; M. Ramajoli, Una giustizia amministrativa digitale?, Il Mulino, Bologna, 2023; G. Botto, Decisione algoritmica, discrezionalità e sindacato del giudice amministrativo, in Federalismi.it, 2024, available at https://www.astrid-online.it/static/upload/bott/botto.pdf; G. Carullo, L’Amministrazione Quale Piattaforma di Servizi Digitali, in CERIDAP, 2022; F. Costantino, Intelligenza artificiale e decisioni amministrative, in Rivista Italiana per le Scienze Giuridiche, 2017; I. M. Delgado, Automazione, intelligenza artificiale e pubblica amministrazione: vecchie categorie concettuali per nuovi problemi?, in Istituzioni del Federalismo, 2019, available at https://www.regione.emilia-romagna.it/idf/numeri/2019/3-2019/delgado.pdf; S. B. Grenci, Le applicazioni di Intelligenza artificiale a supporto dell’automazione del procedimento amministrativo, in Rivista Italiana di Informatica e Diritto, 2024, available at https://www.rivistaitalianadiinformaticaediritto.it/index.php/RIID/article/view/234; G. Pinotti, Amministrazione digitale algoritmica e garanzie procedimentali, in Labour & Law Issues, 2021, available at https://doi.org/10.6092/issn.2421-2695/13175. ↑
- Brave New World by Aldous Huxley presents a cautionary tale about the inherent dangers of unregulated technological progress. See A. Huxley, Brave New World, Harper Perennial Modern Classics, New York, 2006. ↑
- For reference, see D.U. Galetta, Decidere con l’IA: un Problema Comune a tutte le Aree della Scienza, in CERIDAP, 2024, available at DOI: 10.13130/2723-9195/2024-2-32; B. Cappiello, The EU and the AI Act. Was it Worthwhile to be the First?, in CERIDAP, 2024, available at DOI: 10.13130/2723-9195/2024-4-175; G. Barone, La regolamentazione dell’Intelligenza Artificiale: “è corsa agli armamenti”, in Diritto Penale e Processo, 8, 2024, available at: https://www.altalex.com/documents/2024/08/29/la-regolamentazione-intelligenza-artificiale-corsa-agli-armamenti; G. Lo Sapio, L’Intelligenza Artificiale Generativa nella Giustizia Amministrativa: Scenari, Rischi e Opportunità, in Giustizia Amministrativa, 2025, available at https://www.giustizia-amministrativa.it/documents/20142/74202881/AI+generativa+e+Giustizia+amministrativa+generativa.+3+febbraio+2025-def.pdf/6be8a0eb-6001-2c5d-ad23-da856c69fed1?t=1738918964906; C. Cancela-Outeda, The EU’s AI Act: A Framework for Collaborative Governance, in Internet of Things, 27, 2024, available at https://doi.org/10.1016/j.iot.2024.101291; I. Kusche, Possible Harms of Artificial Intelligence and the EU AI Act: Fundamental Rights and Risk, in Journal of Risk Research, 2024, available at https://www.tandfonline.com/doi/full/10.1080/13669877.2024.2350720?scroll=top&needAccess=true#abstract; F. Busch, J. N. Kather, C. Johner, M. Moser, D. Truhn, L. C. Adams, K. K. Bressem, Navigating the European Union Artificial Intelligence Act for Healthcare, in Digital Medicine, 7, 2024, available at https://www.nature.com/articles/s41746-024-01213-6#citeas; M. Ebers, Truly Risk-based Regulation of Artificial Intelligence How to Implement the EU’s AI Act, in European Journal of Risk Regulation, 2024, available at https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/truly-riskbased-regulation-of-artificial-intelligence-how-to-implement-the-eus-ai-act/E526C1D0D7368F9691082220609D60F4; G. 
Finocchiaro, The Regulation of Artificial Intelligence, in AI & Society, 39, 2023, available at https://link.springer.com/article/10.1007/s00146-023-01650-z. ↑
- For reference, see G. G. Cusenza, Cosa dicono le corti statunitensi sull’utilizzo dell’intelligenza artificiale?, in CERIDAP, 2024, available at DOI: 10.13130/2723-9195/2024-3-34; F. Pesapane, C. Volonté, M. Codari, F. Sardanelli, Artificial Intelligence as a Medical Device in Radiology: Ethical and Regulatory Issues in Europe and the United States, in Insights into Imaging, 9, 2018, available at https://link.springer.com/article/10.1007/s13244-018-0645-y; D. Almeida, K. Shmarko, E. Lomas, The Ethics of Facial Recognition Technologies, Surveillance, and Accountability in an Age of Artificial Intelligence: a Comparative Analysis of US, EU, and UK regulatory frameworks, in AI and Ethics, 2, 2021, available at https://link.springer.com/article/10.1007/S43681-021-00077-W; T. O. Agbadamasi, L. K. Opoku, T. K. Adukpo, N. Mensah, Navigating the Intersection of U.S. Regulatory Frameworks and Artificial Intelligence, in World Journal of Advanced Research and Reviews, 25, 2025, available at https://doi.org/10.30574/wjarr.2025.25.3.0814; M. Sloane, E. Wüllhorst, A Systematic Review of Regulatory Strategies and Transparency Mandates in AI Regulation in Europe, the United States, and Canada, in Data & Policy, 2025, available at https://www.cambridge.org/core/journals/data-and-policy/article/systematic-review-of-regulatory-strategies-and-transparency-mandates-in-ai-regulation-in-europe-the-united-states-and-canada/A1BE4A34845C2C9227382053ECD1938A; M. Geist, AI and International Regulation, in F. Martin-Bariteau, T. Scassa (eds.), Artificial Intelligence and the Law in Canada, LexisNexis, New York, 2021, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3734671; M. Luther, An Analysis of Privacy Rights in the United States in the Age of AI, in Harvard Undergraduate Law Review, 2023, available at https://hulr.org/spring-2024/b3vp15sbbt93g2ti8peehvsdof90lb; M. E. Kaminski, J. M. 
Urban, The Right to Contest AI, in Columbia Law Review, 121, 2021, available at https://www.columbialawreview.org/wp-content/uploads/2021/11/Kaminski-Urban-The_Right_to_Contest_AI.pdf. ↑
- For reference, see D.U. Galetta, G. Pinotti, Automation and Algorithmic Decision-Making Systems in the Italian Public Administration, in CERIDAP, 2023, available at DOI: 10.13130/2723-9195/2023-1-7; J. P. Schneider, F. Enderlein, Automated Decision-Making Systems in German Administrative Law, in CERIDAP, 2023, available at DOI: 10.13130/2723-9195/2023-1-102; J. G. Corvalán, E. M. Le Fevre Cervini, Prometea experience. Using AI to Optimize Public Institutions, in CERIDAP, 2020, available at DOI: 10.13130/2723-9195/2020-2-2; R. Ejjami, Public Administration 5.0: Enhancing Governance and Public Services with Smart Technologies, in International Journal for Multidisciplinary Research, 6, 2024, available at https://www.researchgate.net/profile/Rachid-Ejjami/publication/383117309_Public_Administration_50_Enhancing_Governance_and_Public_Services_with_Smart_Technologies/links/66bd0c098d0073559252459d/Public-Administration-50-Enhancing-Governance-and-Public-Services-with-Smart-Technologies.pdf. ↑
- For reference, see T. Blasi, L’Intelligenza artificiale nel Terzo Settore: così la tecnologia aiuta la progettazione sociale, in Vita, 2025, available at https://www.vita.it/idee/lintelligenza-artificiale-nel-terzo-settore-cosi-la-tecnologia-aiuta-la-progettazione-sociale/ (last accessed March 2025); M. Wald, AI Data-Driven Personalisation and Disability Inclusion, in Frontiers in Artificial Intelligence, 3, 2020, available at https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.571955/full; M. F. Almufareh, S. Kausar, M. Humayun, S. Tehsin, A Conceptual Model for Inclusive Technology: Advancing Disability Inclusion Through Artificial Intelligence, in Journal of Disability Research, 3, 2024, available at https://www.scienceopen.com/hosted-document?doi=10.57197/JDR-2023-0060; N. Tilmes, Disability, Fairness, and Algorithmic Bias in AI Recruitment, in Ethics and Information Technology, 24, 2022, available at https://link.springer.com/article/10.1007/s10676-022-09633-2. ↑
- Referenced in D. S. Broder, The Thatcher View America Must Lead, in The Washington Post, 1991, available at https://www.washingtonpost.com/archive/opinions/1991/03/13/the-thatcher-view-america-must-lead/2101c34e-50d5-48f5-ba83-3deba0ac0ee8/ (last accessed March 2025), yet official source remains unknown. ↑
- See Treaty on European Union (Maastricht Treaty), 1992, OJ C 191/1; Treaty on the Functioning of the European Union (TFEU), 2012, OJ C 326/47.For reference, see also Treaty Establishing the European Economic Community (Treaty of Rome), 1957, 298 UNTS 3; Treaty of Amsterdam amending the Treaty on European Union, the Treaties establishing the European Communities and certain related acts, 1997, OJ C340/1; Treaty of Nice amending the Treaty on European Union, the Treaties establishing the European Communities and certain related acts, 2001, OJ C80/1; Treaty of Lisbon amending the Treaty on European Union and the Treaty establishing the European Community, 2007, OJ C306/1. ↑
- Article 2 of the TEU (see n 9) proclaims that «The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail». Similarly, Article 14 of the European Convention on Human Rights (ECHR) elucidates the Union’s institutional set-up vis-à-vis the enjoyment of the enumerated liberties – such as: (i) the right to life (Article 2); (ii) the right to liberty and security (Article 5); (iii) the right to a fair trial (Article 6), and (iv) freedom of expression (Article 10), among others – within its previous articles by stating that «The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status». The same anthropocentric vision that is elucidated on paper finds its concrete application via numerous cases ruled upon by the European Court of Human Rights – whether it be taking an anthropocentric stance by expanding Article 3 (Prohibition of torture) of the ECHR and ruling excessive corporal punishment an outdated practice displaying degrading treatment, or reaffirming one’s right to express one’s identity, including religious identities (Article 9, Freedom of thought, conscience and religion). 
Naturally, the aforementioned list is for explicatory purposes alone and in no way represents an exhaustive list of all of the monumental cases brought before the European Court of Human Rights which display the EU’s anthropocentric stance. For reference, see European Convention on Human Rights (ECHR), 213 UNTS 221 (adopted 4 November 1950, entered into force 3 September 1953); European Court of Human Rights, judgment 25 April 1978, Application n. 5856/72, A/26, Case of Tyrer v. The United Kingdom; European Court of Human Rights, judgment 15 January 2013, Applications n. 48420/10, 59842/10, 51671/10, 36516/10, 159, Case of Eweida and Others v. The United Kingdom. ↑
- Decision 2022/2481 of the European Parliament and of the Council of 14 December 2022 establishing the Digital Decade Policy Programme 2030 (Text with EEA relevance) elucidates the Union’s path toward the digital transformation by stating that «[…] the digital transformation of the economy and society should encompass digital sovereignty in an open manner, respect for fundamental rights, the rule of law and democracy, inclusion, accessibility, equality, sustainability, resilience, security, improving quality of life, the availability of services and respect for citizens’ rights and aspirations». The aforementioned agenda is substantiated by the values enshrined within the European Declaration on Digital Rights and Principles for the Digital Decade 2023/C 23/01. Such a declaration is of paramount importance in bridging the gap in the upholding of values between the concrete realm and the e-realm by affirming that «[w]ith the acceleration of the digital transformation, the time has come for the EU to spell out how its values and fundamental rights applicable offline should be applied in the digital environment». Thus, «[t]he digital transformation should not entail the regression of rights. What is illegal offline, is illegal online». ↑
- See Regulation 2022/2065/EU of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance); Regulation 2022/1925/EU of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance); Regulation 2022/868/EU of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act) (Text with EEA relevance); Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). ↑
- The AI Act sets out a multi-level risk approach: (i) unacceptable risk AI applications (prohibited), (ii) high risk (require a conformity assessment), (iii) limited risk (prerequisite is transparency), (iv) minimal risk (minimally regulated through voluntary codes of conduct), categorizing AI applications accordingly. See Regulation 2024/1689/EU of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (n 2). ↑
- L. Enqvist, ‘Human Oversight’ in the EU Artificial Intelligence Act: What, When and by Whom?, in Law Innovation and Technology, 15, 2023, available at https://doi.org/10.1080/17579961.2023.2245683. ↑
- Article 3(63) of the AI Act defines GPAI models as «an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market». See T. Higgins, The EU AI Act: Concerns and Criticisms, in Clifford Chance, 2023, available at https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2023/04/the-eu-ai-act–concerns-and-criticism.html (last accessed March 2025). ↑
- See European Commission, General-Purpose AI Code of Practice, in Shaping Europe’s Digital Future, 2025, available at https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice (last accessed March 2025). ↑
- Clifford Chance, The EU AI Act: Overview of Key Rules and Requirements, in Clifford Chance LLP, 2024, available at https://www.cliffordchance.com/content/dam/cliffordchance/PDFDocuments/the-eu-ai-act-overview.pdf (last accessed March 2025). ↑
- Regulatory sandboxes are controlled environments where businesses can experiment with innovative products, ultimately allowing for the shaping of business and regulatory best practices. See T. Madiega, A. L. Van De Pol, Briefing: Artificial Intelligence Act and Regulatory Sandboxes, in European Parliamentary Research Service, 2022, available at https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf. ↑
- European Commission White Paper, How to Master Europe’s Digital Infrastructure Needs?, 2024, available at https://digital-strategy.ec.europa.eu/en/library/white-paper-how-master-europes-digital-infrastructure-needs; G. Smith, K. D. Stanley, K. Marcinek, P. Cormarie, S. Gunashekar, General-Purpose Artificial Intelligence (GPAI) Models and GPAI Models with Systemic Risk, in RAND, 2024, available at https://www.rand.org/pubs/research_reports/RRA3243-1.html (last accessed March 2025). ↑
- J. Madison, The Federalist No. 62 in A. Hamilton, J. Madison, J. Jay, The Federalist Papers, Clinton Rossiter ed, Signet Classics, 2003. ↑
- T. Devtyan, The U.S. Approach to AI Regulation: Federal Laws, Policies, and Strategies Explained, in SSRN, 2024, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4954290. ↑
- See Fair Credit Reporting Act, Pub L No 91-508, 84 Stat 1114, 1970; Family Educational Rights and Privacy Act, Pub L No 93-380, 88 Stat 571, 1974; Computer Fraud and Abuse Act, Pub L No 99-474, 100 Stat 1213, 1986; Electronic Communications Privacy Act, Pub L No 99-508, 100 Stat 1848, 1986; Health Insurance Portability and Accountability Act, Pub L No 104-191, 110 Stat 1936, 1996; Children’s Online Privacy Protection Act, Pub L No 105-277, 112 Stat 2681-728, 1998; Digital Millennium Copyright Act, Pub L No 105-304, 112 Stat 2860, 1998; Gramm-Leach-Bliley Act, Pub L No 106-102, 113 Stat 1338, 1999. ↑
- H. West, D. Hake, S. P. Ingis, Supreme Court’s Chevron Decision and Its Implications for AI Regulation, in Venable LLP, 2024, available at https://www.venable.com/insights/publications/2024/chevron-decision/supreme-courts-chevron-decision-and-its-implicat (last accessed March 2025). ↑
- Executive Order No 14110, 88 Fed Reg 75191, 30 October 2023. ↑
- Executive Order No 14179, 90 Fed Reg 8741, 23 January 2025. ↑
- The concept of “El Dorado” (literal translation “golden one”) dates back to Sixteenth-century Spanish colonial chronicles which birthed the dream of a lost city of gold located somewhere in South America. Recent archeological research, though, confirms that what was once thought to be a place was actually a person; a King bathed in gold, the “golden one”. Nonetheless, the hopes of finding a mysterious land of riches and perfection in South America rendered the concept of “El Dorado” an engrained myth within then-society, to the point of crafting expeditions with the purpose of pinpointing such a land. The idea of “El Dorado”, though slightly altered nowadays, has withstood the test of time. Hence, for the purpose of this research, “El Dorado” represents an ideal situation, the emblem of perfection. For reference, see J. Silver, The Myth of El Dorado, in History Workshop Journal, 34, 1992, available at https://doi.org/10.1093/hwj/34.1.1; C. Nicholl, The Creature in the Map: A Journey to El Dorado, University of Chicago Press, Chicago, 1995. ↑
- See California Consumer Privacy Act, 2018; California Privacy Rights Act, 2020; Virginia Consumer Data Protection Act, 2021; Colorado Privacy Act, 2021. ↑
- Bloomberg Industry Group, Inc., Which States Have Consumer Data Privacy Laws?, in Bloomberg Law, 2024, available at https://pro.bloomberglaw.com/insights/privacy/state-privacy-legislation-tracker/#map-of-state-privacy-laws (last accessed March 2025). ↑
- G. Wright, Amazon to Pay $25m Over Child Privacy Violations, in BBC News, 2023, available at https://www.bbc.com/news/technology-65772154 (last accessed March 2025); J. Stempel, Apple to Pay $95 Million to Settle Siri Privacy Lawsuit, in Reuters, 2025, available at https://www.reuters.com/legal/apple-pay-95-million-settle-siri-privacy-lawsuit-2025-01-02/ (last accessed March 2025). ↑
- See United Nations General Assembly Resolution 70/1, Transforming our world: the 2030 Agenda for Sustainable Development, 2015, UN Doc A/RES/70/1. ↑
- E. Dugarova, B. Slay, J. Papa, S. Marnie, Leaving No One Behind in Implementing the 2030 Agenda for Sustainable Development: Roma Inclusion in Europe, in United Nations Development Programme, 2017, available at https://www.undp.org/sites/g/files/zskgke326/files/migration/eurasia/LeavingNoOneBehindinthe2030Agenda_Roma-inclusion-in-Europe.pdf. ↑
- For the purpose of this research, the term “disability” is defined as «a limitation in a fundamental domain that arises from the interaction between a person’s intrinsic capacity, and environmental and personal factors», as elucidated in United Nations Department of Economic and Social Affairs, Disability and Development Report, Realizing the Sustainable Development Goals by, for and with persons with disabilities, in DESA, 2018, available at https://social.un.org/publications/UN-Flagship-Report-Disability-Final.pdf. ↑
- United Nations Department of Economic and Social Affairs, Sustainable Development Goals (SDGs) and Disability, in DESA, available at https://social.desa.un.org/issues/disability/sustainable-development-goals-sdgs-and-disability (last accessed March 2025). ↑
- Intended as relating to Orwell’s vision of society, as portrayed in his novel titled 1984. See G. Orwell, 1984, Penguin, London, 2025. ↑
- For reference, see C. Feijóo, Y. Kwon, J. M. Bauer, E. Bohlin, B. Howell, R. Jain, P. Potgieter, K. Vu, J. Whalley, J. Xia, Harnessing Artificial Intelligence (AI) to Increase Wellbeing for All: The Case for a New Technology Diplomacy, in Telecommunications Policy, 44, 2020, available at https://doi.org/10.1016/j.telpol.2020.101988; B. Enjolras, L. M. Salamon, K. H. Sivesind, A. Zimmer, The Third Sector As A Renewable Resource for Europe, Palgrave Macmillan, London, 2018. ↑
- Edward Hopper, Nighthawks, Art Institute of Chicago, 1942. ↑
- R. Nowland, E. A. Necka, J. T. Cacioppo, Loneliness and Social Internet Use: Pathways to Reconnection in a Digital World?, in Perspectives on Psychological Science, 13, 2018, available at https://journals.sagepub.com/doi/full/10.1177/1745691617713052. ↑
- S. Lythreatis, S. K. Singh, A. N. El-Kassar, The Digital Divide: A Review and Future Research Agenda, in Technological Forecasting and Social Change, 175, 2022, available at https://www.sciencedirect.com/science/article/pii/S0040162521007903. ↑
- European Commission, Digital Health and Care, in Public Health, available at https://health.ec.europa.eu/ehealth-digital-health-and-care/digital-health-and-care_en (last accessed March 2025). ↑
- European Commission, eGovernment and Digital Public Services, in Digital Strategy, 2024, available at https://digital-strategy.ec.europa.eu/en/policies/egovernment (last accessed March 2025). ↑
- Oxford Review, Digital Accessibility: Definition and Explanation, in DEI dictionary, available at https://oxford-review.com/the-oxford-review-dei-diversity-equity-and-inclusion-dictionary/digital-accessibility-definition-and-explanation/ (last accessed March 2025). ↑
- World Wide Web Consortium (W3C), Web Content Accessibility Guidelines (WCAG), in Web Accessibility Initiative, 2024, available at https://www.w3.org/WAI/standards-guidelines/wcag/#intro (last accessed March 2025). ↑
- See Education Estonia, Tiger Leap, in Education Estonia, available at https://www.educationestonia.org/tiger-leap/ (last accessed March 2025). ↑
- See e-Estonia, Digital Inclusion as a Fundamental Block in Building a Digital Society, in e-Estonia, 2023, available at https://e-estonia.com/digital-inclusion-as-a-fundamental-block-in-building-a-digital-society/ (last accessed March 2025). ↑
- European Disability Forum, Estonia Accessible DATA Project, in EDF, 2024, available at https://www.edf-feph.org/content/uploads/2024/10/estonia-accessible_DATA_project.pdf. ↑
- Invest Estonia, 10 Estonian Startups That Make the Most of AI, in Invest in Estonia, 2023, available at https://investinestonia.com/10-estonian-startups-that-make-the-most-of-ai/ (last accessed March 2025). ↑
- M. Siilivask, 7Sense Captures Deutsche Telekom and NVIDIA’s Attention with Groundbreaking Technology, in Trade with Estonia, 2024, available at https://tradewithestonia.com/7sense-captures-deutsche-telekom-and-nvidias-attention-with-groundbreaking-technology/ (last accessed March 2025). ↑
- See AccessiWay, Il Tuo Partner per l’Accessibilità Digitale, in AccessiWay, available at https://freeconsulting.accessiway.com (last accessed March 2025). ↑
- Editorial Staff, In&Valid la Startup che Usa l’IA per Supportare Disabili e Caregiver, in StartupBusiness, 2025, available at https://www.startupbusiness.it/invalid-la-startup-che-usa-lia-per-supportare-disabili-e-caregiver/144605/ (last accessed March 2025). ↑
- In&Valid, Da Ogni Limite Può Nascere Un’Opportunità, in Inevalid, available at https://inevalid.it/ (last accessed March 2025). ↑
- Echo Labs, AI Accessibility Tools that Save Schools Millions, in Echo Labs Blog, available at https://el.ai/ (last accessed March 2025). ↑
- Microsoft, Seeing AI, in Microsoft Garage, available at https://www.microsoft.com/en-us/garage/wall-of-fame/seeing-ai/ (last accessed March 2025). ↑
- Synchron, Neurotechnology to Address the Limitations of the Human Body, in Synchron, available at https://synchron.com/ (last accessed March 2025). ↑
- P. S. Park, S. Goldstein, A. O’Gara, M. Chen, D. Hendrycks, AI Deception: A Survey of Examples, Risks and Potential Solutions, in Patterns, 5, 2024, available at https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X (last accessed March 2025). ↑
- For reference, see G. Carullo, Large Language Models for Transparent and Intelligible AI-Assisted Public Decision-Making, in CERIDAP, 2023, available at https://doi.org/10.13130/2723-9195/2023-3-100; R. Walker, J. Dillard-Wright, Algorithmic Bias in Artificial Intelligence Is a Problem – And that Root Issue is Power, in Nursing Outlook, 71, 2023, available at https://www.sciencedirect.com/science/article/pii/S0029655423001288 (last accessed March 2025). ↑
- See (n 17). ↑
- The term ‘algorithmic bias’ describes systematic or recurring errors present within a computer system that can create unfair outcomes like privileging one arbitrary group of users over others. See Florida State University Libraries, Algorithm Bias, in FSU Library Guides, 2024, available at https://guides.lib.fsu.edu/algorithm#:~:text=Algorithm%20Bias%20%2D%20algorithmic%20bias%20describes,group%20of%20users%20over%20others (last accessed March 2025). ↑
- S. Akter, G. McCarthy, S. Sajib, K. Michael, Y. K. Dwivedi, J. D’Ambra, K. N. Shen, Algorithmic Bias in Data-Driven Innovation in the Age of AI, in International Journal of Information Management, 60, 2021, available at https://www.sciencedirect.com/science/article/pii/S0268401221000803#bib3 (last accessed March 2025). ↑
- For reference, see A. Lambrecht, C. Tucker, Can Big Data Protect a Firm from Competition?, in SSRN, 2015, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2705530 (last accessed March 2025); A. Pandey, A. Caliskan, Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy’s Price Discrimination Algorithms, in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 2021, available at https://dl.acm.org/doi/abs/10.1145/3461702.3462561 (last accessed March 2025). ↑
- Z. Chen, Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices, in Humanities and Social Sciences Communications, 10, 2023, available at https://www.nature.com/articles/s41599-023-02079-x (last accessed March 2025). See also Louis et al v SafeRent Solutions, LLC, No 1:22-cv-10800-AK (D Mass, 9 January 2023); Derek Mobley v Workday, Inc, No 23-cv-00770-RFL (ND Cal, 12 July 2024); R. Booth, Blind People Excluded From Benefits of AI, Says Charity, in The Guardian, 2024, available at https://www.theguardian.com/society/2024/dec/25/blind-people-excluded-from-benefits-of-ai-says-charity (last accessed March 2025). ↑
- See Costituzione della Repubblica Italiana – Article 111 of the Italian Constitution states that «all judicial decisions shall include a statement of reasons», implicitly alluding to the necessity of human discretion in the judicial process as a source of such reasons. In such regard, of monumental importance is the insight present in R. Cavallo Perin, I. Alberti, Atti e procedimenti amministrativi digitali, in R. Cavallo Perin, D.U. Galetta (a cura di), Il Diritto dell’Amministrazione Pubblica digitale, II Ed., Giappichelli, Torino, 2025, where Cavallo Perin and Alberti focus on administrative procedure to exemplify and reaffirm the impossibility of withdrawing the human component from the decision-making process by stating that «[t]he idea that the algorithm – beyond its legal nature – is ex ante conclusive to the procedure cannot be shared because a general act – a fortiori if normative – typically has a further moment of ascertainment of the facts that are relevant to each single case. [In such regard], the ultimate foundation of such lies within the idea that public administration retains an unavoidable constitutional function that is on par with the judicial and legislative function, even when completed via a machine that has learned from human work, albeit purified from errors and prejudices» (translated from Italian). ↑
- See (n 10). ↑
- The term “Gold Standard” refers to a monetary system in which a currency’s value is pegged to that of gold. See CFI Team, Gold Standard, in Corporate Finance Institute, available at https://corporatefinanceinstitute.com/resources/economics/gold-standard/ (last accessed March 2025). ↑
- See Orwell (n 28). ↑
- N. Machiavelli, The Prince, T. Parks (tr), Penguin Classics, London, 2003. ↑
- For reference, see M. Interlandi, Funzione amministrativa e diritti delle persone con disabilità, Editoriale Scientifica, Napoli, 2022; P. Smith, L. Smith, Artificial Intelligence and Disability: Too Much Promise, Yet Too Little Substance?, in AI and Ethics, 1, 2021, available at https://link.springer.com/article/10.1007/s43681-020-00004-5. ↑
- K. Tay-Teo, D. Bell, M. Jowett, Financing Options for the Provision of Assistive Products, in Assistive Technology, 33, 2021, available at https://www.tandfonline.com/doi/full/10.1080/10400435.2021.1974979#abstract (last accessed March 2025). ↑
- For the purpose of this research, “regulatory dalmatian approach” refers to a regulatory system that, at its core, is coated in European regulatory standards (the dalmatian’s white coat) but nonetheless presents specks of the looser American regulatory framework (the dalmatian’s black spots). ↑
- For reference, see R. Raudla, E. Juuse, V. Kuokštis, A. Cepilovs, V. Cipinys, M. Ylönen, To Sandbox or not to Sandbox? Diverging Strategies of Regulatory Responses to FinTech, in Regulation & Governance, 2024, available at https://doi.org/10.1111/rego.12630; R. Raudla, E. Juuse, V. Kuokštis, A. Cepilovs, J. W. Douglas, Regulatory Sandboxes and Innovation Hubs for FinTech: Experiences of the Baltic States, in European Journal of Law and Economics, 2024, available at https://link.springer.com/article/10.1007/s10657-024-09830-y#citeas; R. Ciukaj, M. Folwarski, FinTech Regulation and the Development of the FinTech Sector in the European Union, in Journal of Banking and Financial Economics, 1, 2023, available at https://www.ceeol.com/search/article-detail?id=1241313. ↑
- For reference, see E. Kvedaravičiūtė, Developing Fintech Sector in Lithuania: Regulatory Sandbox, in Financial Market Development Center, 2022, available at https://projects2014-2020.interregeurope.eu/fileadmin/user_upload/tx_tevprojects/library/file_1654156148.pdf; Invest Lithuania, The Fintech Landscape in Lithuania, Vilnius, 2022, available at https://investlithuania.com/wp-content/uploads/The-Fintech-Landscape-in-Lithuania-2021-2022.pdf; A. C. R. Martins, A Sandbox for the U.S. Financial System, in The Regulatory Review, 2021, available at https://www.theregreview.org/2021/08/19/rossi-martins-sandbox-for-us-financial-system/; C. Poncibò, The Laboratories of Competition Law: EU-US Perspectives on Regulatory Sandboxes, in Stanford Law School, available at https://law.stanford.edu/transatlantic-technology-law-forum/projects/the-laboratories-of-competition-law-eu-us-perspectives-on-regulatory-sandboxes/; C. M. Sharkey, K. M. K. Fodouop, AI and the Regulatory Paradigm Shift at the FDA, in Duke Law Journal Online, 72, 2022, available at https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1100&context=dlj_online. ↑