Il presente contributo esamina l’ambito di applicazione del quadro normativo dell’Unione europea in materia di intelligenza artificiale nei confronti di soggetti non europei. L’analisi combina un approccio giuridico-dottrinale con una prospettiva socio-giuridica. I principali atti normativi europei, insieme alla più recente giurisprudenza, sono analizzati mediante il metodo della “black-letter law”. Gli sviluppi che hanno condotto all’adozione di una normativa complessiva volta a limitare i rischi dell’IA sono altresì confrontati a livello globale. Vengono inoltre esaminate, in chiave socio-giuridica, recenti situazioni determinate da diversi attori sociali e politici, tra cui X di Elon Musk e l’amministrazione statunitense di Donald Trump, con particolare attenzione al possibile impatto della normativa europea su soggetti stranieri o su imprese stabilite all’estero.
La conclusione è nel senso che, nonostante l’indubbia necessità di regolamentare l’uso dell’intelligenza artificiale e di tutelare i consumatori europei, l’applicazione dei principi della legislazione europea a soggetti non europei può generare conflitti e costituire un ostacolo alla competitività del mercato tecnologico europeo. D’altro canto, consentire alle richieste provenienti dall’esterno di influenzare la legislazione europea rappresenterebbe una minaccia per la sovranità europea.
This paper examines the scope of the applicability of the European Union’s regulatory framework on artificial intelligence to non-European subjects. The analysis combines doctrinal legal analysis with a socio-legal approach. We assess core pieces of European legislation, along with recent case law, using black-letter methodology. We compare worldwide developments that have led to comprehensive legislation aimed at limiting the risks of AI. Recent situations created by several social and political actors, including Elon Musk’s X and Donald Trump’s U.S. administration, are dealt with in a socio-legal analysis, which focuses on the possible impact of European legislation on foreign nationals or foreign-residing companies. It is concluded that, despite the undoubted necessity to regulate the use of AI and to protect European consumers, the application of the principles of European legislation to non-European subjects may lead to conflicts and create obstacles to the competitiveness of the European technological market. On the other hand, allowing foreign demands to influence European legislation would pose a threat to European sovereignty.
1. Introduction
The European Union aims to lead the way worldwide in regulating artificial intelligence (AI). The purpose of such regulation is to avoid risks and to resolve in advance ethical issues such as the use of AI by public administrations to identify individuals remotely or on the basis of biometric data and to conduct real-time surveillance. The purpose of regulating AI is also to establish a competitive and fair digital market and to protect consumers therein. The aim of AI legislation is, moreover, to prevent the spread of disinformation and hate speech, since the use of AI, large language models, and chatbots can contribute to the radicalisation of certain social groups.
The EU’s regulation of AI has a significant effect on subjects residing outside the EU, especially non-EU high-tech companies entering the European market. The EU’s digital market comprises over 300 million consumers, and the use of these consumers’ data is one of the main concerns of legislators. With the global impact of AI technologies, international borders become less significant, and it is increasingly common for domestic laws to be applied extraterritorially[1].
Recent developments have shown that the EU’s aspiration to protect its consumers may come at the price of growing animosity between the EU and the U.S. While the EU aims to regulate all subjects trading in its territory equally, the U.S. administration has recently demonstrated that it will back giant U.S. companies that violate EU laws.
These developments raise questions about the EU’s sovereignty and capacity to legislate in its territory and about the possible fragility of the EU’s aspiration to act as a pioneering subject in regulating the use of AI and diminishing its possible risks.
2. The European regulatory framework, the race to AI legislation, and relevance to foreign subjects
The EU has developed a comprehensive piece of legislation to regulate the use of AI – the Artificial Intelligence Act[2] (“AIA” hereinafter). The regulation serves many purposes, including preventing the manipulation of vulnerable groups, social scoring, the biometric identification and categorisation of people, and the real-time identification of people in public spaces.
While the EU has introduced a regulatory regime for the use of AI, other jurisdictions are attempting to do the same. One of the driving forces behind such attempts to legislate is the increasing visibility of the legal and ethical dangers the technology carries. Alongside the ongoing «race to AI»[3], which is motivated by securing a competitive place on the global market, governments are also perceiving the risks of using AI and are at the same time taking part in a «race to AI regulation»[4]. Recent developments have shown, on the one hand, that there may be vast differences in preferences for the application of AI regulations, caused by economic, political, and cultural factors. On the other hand, recent events have also shown that regulatory frameworks in various jurisdictions have started to converge, potentially leading to the worldwide harmonisation of AI regulation.
The EU regulatory framework for the use of AI is not only significant as a role model for other jurisdictions. Under the concept of «extraterritoriality» in international law, countries are permitted to exercise their jurisdiction over foreign subjects in their territory, be they individuals or corporations[5].
Any regulation aimed at limiting the risks of using AI is based on introducing means to influence the behaviour of actors using AI, be they individuals or companies. Lessig defines four modalities, law (the regulatory modality) being one of them, the other three being social norms, the market, and the architectural design of technological applications[6]. The legal regulation of AI has several roles, the most prominent being a protective role (i.e. steering behaviour towards minimising adverse impacts) and an enabling role (i.e. stimulating beneficial innovation). There is no clear distinction between the two and they overlap[7]. The recent case studies discussed below show, however, that in the recently exacerbated conflict between the U.S. and the EU over regulating American tech companies on the European market, the EU’s efforts to protect its consumers through the emerging legislative framework may hinder economic growth and decrease the competitiveness of European-based high-tech companies.
While the regulation of AI in the EU has mostly happened through the introduction of new laws, it may also come about through amending or derogating from existing laws, or through introducing soft-law measures[8]. The EU aims to lead the way with AI regulation, as it successfully did with data protection legislation, but it is now clear that other jurisdictions aim at different outcomes and employ different legislative means. While EU regulation is mostly focused on outcomes, other approaches are centred around regulating processes, and some combine both aspects[9].
One of the difficulties facing any effort to adopt a coherent and universally applicable piece of legislation regulating AI is the fact that there is, to date, no single, universally accepted definition of AI. The lack of a definition complicates the work of governments aiming to adopt leading legislation. Many risks of AI may also arise with other technologies[10]. Various bodies, such as the European Commission’s High-Level Expert Group on AI, have attempted to define it, yet the scope of applications counted as falling under the umbrella term of AI constantly changes[11]. However, the lack of a universally accepted and legally binding definition has previously proven not to be an obstacle to the application of a sound regulatory framework. The EU’s GDPR[12] is an example of legislation that is neutral towards specific technologies, because it is focused on the aims to be achieved[13]. Similarly to the GDPR, AI regulation can affect domains as distant from each other as tax law, public procurement law, health law, tort law, privacy and data protection law, consumer protection law, and many other areas[14].
Apart from the EU, other legislators worldwide are attempting to regulate AI and to minimize its risks and adverse effects on consumers and markets. These include the U.S., Canada, China, and other countries. As a participant in the race to AI regulation, China has, similarly to the EU, put AI at the centre of its economic strategy, and it aims to create an international regulatory framework governing the use of AI[15]. It is at the same time clear that legislators worldwide are aware of the risks of AI, ranging from deepening inequalities to existential risks[16].
The EU and China serve as two examples of vastly different approaches to the extraterritoriality of AI laws. The EU shows an explicit territorial extension, whereas China is an example of vertical regulation with a narrower territorial scope of application[17]. The EU aims to be the global leader and the shaper of the international debate on regulating AI while maintaining European values, and it also aims to apply the AIA to global AI markets[18]. Chinese regulation, on the other hand, focuses on Chinese citizens but does not primarily aim at their protection: the regulation was adopted without significant societal debate, and there is therefore a risk that the resulting policies will not align with citizens’ interests[19].
In addition to the EU and China, there are other international standards for using and developing AI systems. One of them comes from the Council of Europe, which established its Framework Convention on AI in 2024 with the purpose of ensuring that AI activities comply with human rights protections[20]. There are also non-binding instruments, such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence[21] and the OECD’s AI Principles[22]. Some AI-regulating instruments also originate in North America, such as the Algorithmic Accountability Act proposed in the U.S. in 2019[23], followed by the Blueprint for an AI Bill of Rights in 2022[24]. Canada established the so-called Algorithmic Impact Assessment Tool[25] in 2023[26]. Additionally, China published the Beijing Artificial Intelligence Principles[27] in 2019[28].
The need for AI regulation also comes from the fact that new technologies are transforming governments’ work[29]. This poses dangers because of AI’s capacity to produce errors or bias, which is most problematic in areas where governments have traditionally exercised discretion, areas traditionally associated with society’s tolerance of the limited legal liability of governments and decision-making bodies[30].
Despite the ongoing global race to AI regulation, the EU AI Act is a pioneering piece of legislation in its coercive nature, going as far as banning some applications altogether on moral grounds. The risk-based approach can be viewed as grounded in constructivist and critical political economy[31]. It also has foundations in cultural political economy, where the EU Commission is seen as employing risk heuristics to enact the EU’s vision of a rights-oriented and at the same time globally competitive common market[32].
EU legal standards and policies such as the GDPR, the Digital Markets Act[33] (“DMA” hereinafter), and the Digital Services Act[34] (“DSA” hereinafter) influence international tech giants. These regulations aim at, and succeed in, preventing international tech companies from engaging in anticompetitive behaviour. On the other hand, such regulations lead to prolonged disputes and contribute to Europe’s falling behind in technological development. Because of data protection laws, European companies gain access to large language models much later than U.S. companies and companies in many other states. These delays pose obstacles to European innovation, and critics argue that data protection is not worth such delays[35].
3. Case studies
3.1. The Digital Services Act and recent sanctions to X
The acute need to regulate digital technologies, together with the problematic nature of the cross-border and transatlantic operation of such regulation, has recently become evident with the European Commission’s decision[36] to impose sanctions on Elon Musk’s X over violations of the Digital Services Act.
The DSA, as well as the general concept of the regulation of digital technologies and social media platforms, has recently been challenged, especially by the U.S. administration. Such concerns are driven by fears ranging from unnecessary economic hindrance to the limitation of freedom of expression. Preceding the AIA, the DSA is within these discussions often regarded as a central component of an emerging regulatory framework[37]. Even though the DSA is an important step in the European Union’s regulation of online services, as it addresses issues such as online disinformation and harmful content, its effectiveness is largely dependent on consistent and robust enforcement throughout the EU[38] and, as we have recently witnessed, even beyond the EU’s borders. With increasing attention paid to the DSA’s enforcement framework, it has become obvious that, with the digital services market reaching beyond the EU’s borders, we have so far seen only the tip of the iceberg as regards possible contested issues of cross-border application.
On 5 December 2025, the European Commission imposed a fine of €120 million on X for violations of the DSA. Specifically, the violations consisted, first, in a deceptive design that exposes users to scams; second, in a lack of transparency regarding X’s advertisement repository; and third, in the failure to provide researchers with access to public data[39].
On 6 December 2025, Elon Musk responded to the fine by suggesting the dissolution of the European Union[40]. On 7 December 2025, the Trump administration issued an official statement in support of Musk, and the United States imposed visa bans on five individuals, including Thierry Breton, who had played a leading role in the development and implementation of the DSA. The visa ban was rejected by the European Parliament[41].
The official position of the U.S. and the Trump administration can be seen as an appeal to the EU to allow U.S. companies to trade within the European market under the same conditions as would apply to them in the United States. Such an approach, however, would prevent the EU from applying its laws within its own territory and, as a result, from protecting its consumers. In short, granting exemptions to American companies would clash with the established principles of jurisdiction and regulatory sovereignty.
While the U.S. government has clearly expressed its view of the European regulation of digital technologies, perceiving EU regulations as disproportionate burdens, the EU maintains that its laws apply equally to all subjects. The visa bans preventing certain individuals from entering the U.S. indicate that the Trump administration is willing to defend the interests of American companies. The EU’s stance, on the other hand, indicates that enforcement measures are based on criteria such as company size and market impact, rather than a company’s country of origin.
While Musk’s X, as well as some media, including The Guardian, present the issue as a limitation of freedom of expression[42], the wording of the European Commission’s decision makes it clear that the DSA, and the EU’s policies overall, are meant to apply to all subjects without discrimination[43].
The recent case has exacerbated the debate about jurisdictional claims and discrimination. While the DSA is highly relevant to the United States, as the U.S. is home to many large and influential tech companies, and while the EU aims at, and succeeds in, shaping global legislative standards for technological regulation, it is more than clear that any limitations imposed on the free digital market can hinder economic growth in the EU and lower the EU’s competitiveness in comparison with other global high-tech markets.
While there have not yet been any cases of sanctions imposed on companies for violations of the AIA, it is obvious that the implementation of the AIA will lead to similar problems, as it, like the DSA, aims at regulating digital platforms and technologies, albeit addressing different aspects of online services. The DSA and the AIA together present complementary parts of the EU’s digital governance framework.
3.2. Grok and disinformation spread through system prompts
The European Commission’s sanctioning of X over DSA violations has not been the first occasion on which X has been regarded as violating EU legislation and global standards of freedom of expression and its justifiable limitations. Another case involving X, specifically xAI’s chatbot Grok, seen as transgressing European regulations, occurred in May 2025, although at that time the European Commission had no means to impose sanctions or fines.
The AIA prohibits the use of «subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person»[44]. It is also prohibited to «put into service an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, in a manner which would cause harm to such vulnerable persons»[45].
In May 2025, xAI’s chatbot Grok was observed by many users to bring up the issue of “white genocide” in South Africa; such mentions were unprompted and appeared alongside unrelated responses to other questions[46]. Following this event, X published Grok’s system prompt to increase transparency. This revealed more about Grok’s workings than the unprompted “white genocide” mentions alone: it emerged that Grok is programmed to «be extremely sceptical» in its responses and not to «defer to mainstream narratives», while being encouraged to «always seek alternative truth»[47].
Programming AI large language models in such a way, i.e. to spread disinformation, false narratives, or hate speech, clearly violates the wording of the AIA. Even though the AIA was not applicable in this case, the above issues of the cross-border applicability of EU standards on AI may soon become relevant to similar cases. The European Commission has identified disinformation as a threat to society[48]. The digital sphere enables actors to spread disinformation with precision, targeted at susceptible individuals and groups[49]. While disinformation is not a new phenomenon, online social media and large language models have accelerated its spread[50]. Because disinformation is exacerbated in social bubbles, people become increasingly disinformed through algorithms sharing information among like-minded individuals[51]. As a result, people are increasingly prone to half-truths and value judgements[52]. These processes can force people into rabbit holes, i.e. push them into progressively more misinformed states, resulting in radicalisation[53].
Until recently, the regulation of freedom of expression has been understood as guaranteeing an essential human right while imposing certain limits, and such a human-centred approach to free expression and to the criminalisation of hate speech would certainly have been applicable to foreign nationals in the EU’s territory. These concepts were, however, developed with naturally occurring language in mind. When it comes to large language models and chatbots, there is much less transparency about how discourses are produced[54], and legislative standards are not yet able to operate as efficiently as they do with naturally occurring language. At the same time, it has proven extremely difficult to regulate the production of speech by large language models, as they are ongoing processes rather than finalised products[55].
We are entering a post-human future while inheriting certain dangerous codes from our modern past[56]: the residues of modern racisms and sexisms have not yet been overcome and are being perpetuated through generative AI. There may be cultural variations among jurisdictions in how discourses of hate speech are criminalised, prosecuted, and punished[57], which makes the introduction of a universal regulatory model applicable across borders all the more problematic.
3.3. GDPR violations sanctions and relation to AI regulation
The AIA entered into force on 1 August 2024. It has a phased implementation schedule, and thus its most salient enforcement provisions will enter into application gradually, some as late as 2027. We have therefore not yet witnessed sanctions imposed for AIA violations comparable to the DSA penalties discussed above.
The case law of the Court of Justice of the EU related to the AIA so far only concerns interpretation and application issues[58]. In practice, some of the recent violations of the AIA provisions or principles have been dealt with by domestic courts in EU Member States under the regime of GDPR violations, as these constituted violations of data-processing rules by AI systems[59].
4. Conclusions
The EU has recently become entangled in tensions with the U.S. over several issues. With the ongoing peace negotiations and the rapid development of technologies offered by American companies on the European market, such tensions are being exacerbated. Late 2025 saw a major conflict between the Trump administration and the European Commission over a fine imposed on Elon Musk’s X for violations of the DSA. This incident did not, however, arrive out of the blue: it had been preceded by other cases of violations of European regulations by global technology companies, which raised questions about the applicability of European law to foreign subjects and allegations of inequality and discrimination.
The European Commission’s fine over DSA violations instigated a discussion about limitations on freedom of expression, framed by some media, such as The Guardian, as a «war for freedom of speech». The reaction of the Trump administration, including entry bans on five European individuals, centred on claims of the EU’s “censorship of online platforms”. It is therefore unsurprising that one of the five personae non gratae was Thierry Breton, a former European Commissioner and one of the leading figures who had overseen the EU’s regulation of digital technologies and of companies on the EU market.
Some large U.S. and other global tech firms, supported by other actors such as the U.S. government, have accused the EU of discriminating against them on the European market. The EU has, on the contrary, introduced legislation and rules for companies trading in digital technologies on its market precisely in order to avoid discrimination and to apply unified standards to all subjects within this realm.
Europe presents a digital market of more than 300 million consumers. Any decisions taken in regulating the use of digital technologies are therefore vital to foreign companies. Limiting the EU’s regulatory initiatives and giving in to U.S. pressure would weaken the autonomy the EU has so far exercised in regulating its market.
- J. Lee, Artificial intelligence and international law, Singapore, Springer, 2022. ↑
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). ↑
- N. A. Smuha, From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence, in Law, innovation and technology, 1, 2021. ↑
- Ibid. ↑
- R. Arnold, J. Cremades, Rule of Law, Technology and Environment: Contributions to the World Law Congress 2023, Cham, Springer Nature, 2025. ↑
- L. Lessig, The Law of the Horse: What Cyberlaw Might Teach, in Harvard Law Review, 1, 1999. ↑
- Smuha, op.cit. ↑
- W. Hoffmann-Riem, Artificial Intelligence as a Challenge for Law and Regulation, in T. Wischmeyer, T. Rademacher (eds), Regulating Artificial Intelligence, Springer, 2020. ↑
- P. Frantz, N. Instefjord, Regulatory Competition and Rules/Principles-Based Regulation, in Journal of Business Finance and Accounting, 1, 2018. ↑
- Smuha, op.cit. ↑
- M. Haenlein, A. Kaplan, A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence, in California Management Review, 1, 2019. ↑
- Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 concerning the protection of individuals with regard to the processing of personal data, as well as the free circulation of such data and which repeals Directive 95/46/EC (General Data Protection Regulation). ↑
- Smuha, op.cit. ↑
- Ibid. ↑
- E. Gibney, China wants to lead the world on AI regulation – will the plan work?, in Nature, 1, 2025. ↑
- Ibid. ↑
- Y. Wang, Do not go gentle into that good night: The European Union’s and China’s different approaches to the extraterritorial application of artificial intelligence laws and regulations, in The computer law and security report, 53, 2024. ↑
- E. Thelisson, H. Verma, Conformity Assessment under the EU AI Act General Approach, in AI and Ethics, 4, 2024. ↑
- H. Roberts, et al., Governing Artificial Intelligence in China and the European Union: Comparing Aims and Promoting Ethical Outcomes, in Information Society, 39, 2023. ↑
- Council of Europe. Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe Treaty Series No. 225 (opened for signature 5 September 2024; adopted 17 May 2024). Council of Europe, Strasbourg. ISBN 978-92-871-9621-7 (bilingual French/English edition). ↑
- United Nations Educational, Scientific and Cultural Organization (UNESCO). Recommendation on the Ethics of Artificial Intelligence. Adopted 23 Nov. 2021, UNESCO Digital Library, 2022. ↑
- Organisation for Economic Co-operation and Development (OECD). Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, adopted 22 May 2019 (revised 3 May 2024). ↑
- Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. (U.S. House of Representatives April 10, 2019), introduced by Rep. Yvette Clarke (D-NY), referred to the House Committee on Energy and Commerce. U.S. Congress, Library of Congress Congress.gov. ↑
- The White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Washington, DC: The White House, October 2022. ↑
- Treasury Board of Canada Secretariat. (2023). Algorithmic Impact Assessment Tool (AIA). Government of Canada. ↑
- R. Paul, European artificial intelligence ‘trusted throughout the world’: Risk‐based regulation and the fashioning of a competitive common AI market, in Regulation & governance, 4, 2024. ↑
- Beijing Academy of Artificial Intelligence, Peking University, Tsinghua University, Institute of Automation (Chinese Academy of Sciences), Institute of Computing Technology (Chinese Academy of Sciences), & AI Industry Innovation Strategy Alliance (AITISA). (2019, May 25). Beijing Artificial Intelligence Principles. Beijing AI Principles. Retrieved from linking-ai-principles.org. ↑
- Ibid. ↑
- C. Coglianese, D. Lehr, Transparency and algorithmic governance, in Administrative Law Review, 1, 2018. ↑
- D. F. Engstrom, A. Haim, Regulating Government AI and the Challenge of Sociotechnical Design, in Annual review of law and social science, 1, 2023. ↑
- R. Paul, op.cit. ↑
- J. Bareis, C. Katzenbach, Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics, in Science, Technology, & Human Values, 5, 2022. ↑
- Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), Official Journal of the European Union L 265, 12 October 2022, pp. 1–66. ↑
- Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act). ↑
- N. Kshetri, Navigating EU Regulations: Challenges for U.S. Technology Firms and the Rise of Europe’s Generative AI Ecosystem, in Computer, 10, 2024. ↑
- Commission Decision, C(2025) 8630 final, decision of 5.12.2025 pursuant to Articles 73(1), 73(3) and 74(1) of Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) Cases DSA.100101, DSA.100102 and DSA.100103 – X (formerly Twitter). ↑
- M. Fasel, Is the Digital Services Act Here to Protect Users? Platform Regulation and European Single Market Integration, in Nordic Journal of European Law, 4, 2024. ↑
- P. Mattioli, Navigating the Complexities of the DSA’s Enforcement Framework: Sincere Cooperation in Action?, in Utrecht Law Review, 1, 2023. ↑
- Commission Decision, C(2025) 8630 final, op.cit. ↑
- Elon Musk’s X account, 6 December, 2025, available from https://x.com/elonmusk/status/, date accessed 15 February 2026. ↑
- European Parliament, EP leaders reject the visa ban imposed on former Commissioner Breton, 22 January 2026, available from https://www.europarl.europa.eu/news/cs/press-room/20260122IPR32567/ep-leaders-reject-the-visa-ban-imposed-on-former-commissioner-breton, date accessed 15 February 2026. ↑
- H. Horton, Elon Musk says UK wants to suppress free speech as X faces possible ban, in The Guardian, 10 January 2026, available from https://www.theguardian.com/technology/2026/jan/10/elon-musk-uk-free-speech-x-ban-grok-ai, date accessed 15 February 2026. ↑
- Commission Decision, C(2025) 8630 final, op.cit. ↑
- AIA, op.cit., Article 5(1)(a). ↑
- AIA, op.cit., Article 5(1)(b). ↑
- H. Murphey, C. Criddle, Elon Musk’s AI chatbot shared ‘white genocide’ tropes on X, in Financial Times, 14 May 2025, available from https://www.ft.com/content/37416a0e-8f35-45af-9ace-2cf4c973daa5, date accessed 15 February 2026. ↑
- E. Steedman, For hours, chatbot Grok wanted to talk about a ‘white genocide’. It gave a window into the pitfalls of AI, in Australian Broadcasting Corporation, 24 May 2025, available from https://www.abc.net.au/news/2025-05-25/grok-ai-accuracy-doubts-after-white-genocide-claims-fixation/105325028, date accessed 15 February 2026. ↑
- T. Enarsson, Countering ‘lawful but awful’ disinformation online: EU-regulations targeting disinformation on major social media platforms, in Nordic Journal of European Law, 3, 2025. ↑
- E. De Blasio, D. Selva, Who Is Responsible for Disinformation? European Approaches to Social Platforms Accountability in the Post-Truth Era, in American Behavioral Scientist, 2021. ↑
- W. Mbioh, Beyond Echo Chambers and Rabbit Holes: Algorithmic Drifts and the Limits of the Online Safety Act, Digital Services Act, and AI Act, in Griffith Law Review, 3, 2024. ↑
- S. Steinert, M. J. Dennis, Emotions and Digital Well-Being: On Social Media’s Emotional Affordances, in Philosophy & Technology, 1, 2022. ↑
- C. Diaz Ruiz, T. Nilsson, Disinformation and Echo Chambers: How Disinformation Circulates on Social Media Through Identity-Driven Controversies, in Journal of Public Policy & Marketing, 1, 2023. ↑
- B. Lewis, Rabbit Hole: Creating the Concept of Algorithmic Radicalization, in J. Farkas, M. Maloney (eds), Digital Media Metaphors, Routledge, 2024. ↑
- M. Gillings, T. Kohn, G. Mautner, The rise of large language models: Challenges for Critical Discourse Studies, in Critical Discourse Studies, 2, 2024. ↑
- S. U. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, New York, New York University Press, 2018. ↑
- A. Adib-Moghaddam, Is Artificial Intelligence Racist? The Ethics of AI and the Future of Humanity, Bloomsbury, 2023. ↑
- E. Siapera, AI content moderation, racism and (de) coloniality, in International journal of Bullying Prevention, 1, 2022. ↑
- Case C‑806/24: A preliminary ruling request referred to the CJEU on 25 November 2024 by the Sofia District Court (Bulgaria). ↑
- Autoriteit Persoonsgegevens. (2024, May 16). Administrative decision imposing a €30 500 000 fine on Clearview AI Inc. for violations of the General Data Protection Regulation (GDPR) (Decision, Netherlands). Autoriteit Persoonsgegevens; Commission nationale de l’informatique et des libertés (CNIL). (2022, October 19). Sanction imposing a €20 000 000 fine on Clearview AI for GDPR violations. CNIL. ↑