Artificial intelligence (AI) is poised to permeate industries across the economy, offering unprecedented opportunities alongside significant risks. Effective governance requires coordinated cross-border efforts to build institutional expertise, dispel misconceptions, foster innovation, and align global safety priorities. Advocating structured dialogue and a bottom-up approach, Oscar Borgogno and Alessandra Perrazzelli present a proposal that aims to avoid institutional redundancy and legal unpredictability for individuals and firms.
One of the most pervasive misconceptions about artificial intelligence (AI) governance and regulation is the notion that they inherently stifle innovation. Critics often argue that new legal frameworks impede technological progress and economic growth. However, as Anu Bradford recently pointed out, attributing the existing technological gap between the United States and the European Union solely to the laxity of American laws and the stringency of European digital regulation is overly simplistic.
It is true that several structural features of the European Union have hindered the emergence of innovative European companies on the global stage, as Bank of Italy Governor Fabio Panetta highlighted in his concluding remarks accompanying the publication of the Bank of Italy’s 2023 Annual Report. These challenges include the well-known absence of a common fiscal policy and an integrated capital market, as well as the ‘middle technology trap’ described by Jean Tirole et al., whereby European business R&D is concentrated in mid-tech sectors that lack the growth and innovative potential of high-tech industries. A weaker culture of risk-taking, criticized by David S. Evans, further hampers progress. Conversely, as Andrenelli et al. note in an upcoming WTO-OECD working paper on cross-border data flows, well-designed and well-implemented regulations can create a framework that fosters trust in markets among consumers and companies, while ensuring safety standards and protecting human rights.
The direction of AI innovation is not predetermined; it is a choice that policymakers must make. Indeed, Daron Acemoglu and Simon Johnson argue that AI will do whatever we choose it to do. It is therefore crucial to steer AI innovation responsibly to mitigate potential harms and maximize benefits. Well-designed regulations can help prevent the misuse of AI, safeguard healthy competition, and ensure that AI technologies are developed in a manner that benefits society as a whole.
As a number of prominent AI researchers have highlighted in Science, experience across technology-driven domains, from international telecommunications to financial systems and nuclear energy, shows that society requires, and effectively uses, government oversight to target and address technology-related risks. Regulatory frameworks in the pharmaceutical industry ensure that new drugs are safe and effective, thereby building public trust and promoting further responsible innovation. AI is no different in this regard: it requires tailored supervisory mechanisms to oversee its potential to act autonomously, progress explosively, trigger adversarial threats, and cause irreversible damage.
Similar to the challenges faced in transnational data governance, where the Joint Statement Initiative on E-commerce at the WTO encountered significant setbacks regarding cross-border data flows, conflicting national priorities are hindering the creation of a cohesive global regulatory framework for AI. Some countries prioritize rapid AI development to gain a competitive edge and build a national industry that does not jeopardize their national security (as in the United States), while others focus on social control (as in China) or emphasize human rights and safety (as in the EU). This diversity of approaches highlights the urgent need for a common understanding among global enforcers and market supervisors of how to address AI-enabled risks, which, like all aspects of the digital economy, will inevitably have cross-border impacts.
The rapid pace of AI advancement highlights the urgency of acquiring such expertise at the supra-institutional level. Without a globally shared understanding of risks and of the supervisory strategies to tackle them, some countries might rely excessively on private AI developers’ self-policing. This could lead to the widespread deployment of unsafe AI systems, with negative externalities for the global public good.
At the same time, AI safety research is lagging behind the pace of AI development, and market authorities arguably lack the mechanisms and know-how needed to prevent misuse and reckless behaviour. The scale of the risks associated with AI means that governance initiatives must be proactive: supervisors and market authorities are called upon to anticipate the amplification of ongoing harms as well as new risks, and to prepare for the largest threats well before they materialize.
However, while agencies should be vigilant and proactive, they face budget constraints and must prioritize their core institutional tasks over exploratory endeavours. It is therefore impractical for each authority to tackle these challenges independently: such an approach would lead to legal unpredictability and redundancy, given AI’s widespread impact across the entire economy.
Against this backdrop, what is still needed is a shared monitoring scheme that brings supervisors and market authorities together to develop best practices and avoid acting in silos when addressing the cross-industry challenges of AI’s massive deployment. As the Governor of the Bank of Spain, Pablo Hernández de Cos, has also argued, the complexity of achieving high-level convergence and trust in AI supervision requires coordination and sustained information-sharing efforts.
For instance, the distribution of financial services is set to be reshaped by large-scale AI data analytics coupled with personal data sharing in Internet of Things (IoT) environments. This development intersects with the responsibilities of various market authorities: financial supervisors, given the potential implications for financial stability and the exploitation of retail consumers; intellectual property offices, as both the models and their training materials could be covered by copyright, patents, or sectoral IP protection; data-protection authorities, as personal data is involved; competition agencies, considering the potential for collusion and exclusionary behaviours by firms with market power; and telecommunication authorities, given the reliance on access to internet infrastructure.
These public bodies thus need to engage in continuous, structured dialogue to address the multifaceted challenges AI poses within their respective institutional mandates.
On a more pragmatic note, it is key that such a monitoring scheme be developed at both the domestic and international levels, paving the way for coordinated AI oversight among as many countries as possible. This is why we argue for a bottom-up approach to AI monitoring, starting at the domestic level.
First, by establishing robust national frameworks that ensure sustained dialogue and coordination between market supervisors, countries can develop institutional expertise and ensure a whole-of-government response, while laying the groundwork for international understanding. The UK Digital Regulation Cooperation Forum (DRCF) is a notable example of effective cooperation among regulatory and supervisory agencies at the national level. It brings together the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), the Office of Communications (Ofcom), and the Financial Conduct Authority (FCA) to foster coordinated regulatory efforts. Similar initiatives are underway in Australia (the Digital Platform Regulators Forum) and the Netherlands (the Digital Regulation Cooperation Platform). Such forums can serve as models for other nations looking to create a cohesive supervisory approach not only to AI specifically, but to any cross-industry risks peculiar to the digital economy.
Notably, the EU has already begun implementing a comprehensive approach with the world’s first comprehensive set of rules on AI safety, the AI Act, which came into force on August 1, 2024. Competent authorities responsible for supervising and enforcing EU financial services legislation have been designated, within their respective competences, to oversee the implementation of the AI Act unless member states decide otherwise. This includes market surveillance of AI systems provided or used by regulated and supervised financial institutions, as well as signalling to the European Central Bank any information identified during such surveillance that may be of interest for the ECB’s prudential supervisory tasks. Similarly, on May 22, 2024, EU regulators for consumer protection, competition, data protection, and audio-visual media issued a public statement committing to implement the Digital Markets Act and other sectoral regulations applicable to digital gatekeepers in a coherent and complementary manner. This is only a first step, but it demonstrates that such institutional alignment is achievable.
Second, by ensuring coordination and shared expertise within each jurisdiction, it becomes easier to strengthen meaningful dialogue at the transnational level, especially between supervisors of like-minded regions such as the United Kingdom and the EU. Even though the U.K. is pursuing a light-touch regulatory approach to AI safety, we believe there is room for mutual understanding and the exchange of supervisory best practices. Because these jurisdictions share similar values and regulatory standards in areas such as financial regulation, data protection, and competition, a constructive dialogue between supervisors and market authorities is within reach.
Third, the solution we are advancing would not be built in a vacuum. Existing international institutions, such as the UN Secretary-General’s AI Advisory Body and the OECD, are well placed to facilitate global dialogue on the risks posed by AI diffusion. These platforms, alongside efforts by specialized agencies such as the WIPO Conversation on Intellectual Property and Frontier Technologies and the AI for Good Global Summit organized by the International Telecommunication Union (ITU), offer essential infrastructure for structured international cooperation. Their expertise in IP policy and technical standards can be leveraged to develop guidelines for responsible AI innovation and deployment, and is already helping national policymakers worldwide deepen their understanding of AI’s evolutionary dynamics.
On a higher governmental level, the G7’s ongoing work to establish shared principles on AI governance is a promising endeavour to build some form of international coherence. However, as Aziz Huq has noted, agreeing on common AI regulations across major geopolitical blocs is so complex that a major international agreement on such a sensitive issue is unlikely any time soon.
In light of these complexities, the establishment of a cross-supervisory AI monitoring scheme is not only necessary but also achievable, ensuring that old and new regulations are enforced in a coherent and predictable way within and across borders. Through institutional coordination and a balanced approach to AI supervision, we can navigate the challenges and opportunities presented by AI innovation and ensure that this transformative technology benefits all of humanity.
This article represents the opinions of its authors, not necessarily those of the Bank of Italy.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.