By Tamar Vardiashvili
On September 5, 2024, the Council of Europe (CoE) opened for signature the first legally binding international treaty on AI governance, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, marking a pivotal moment in global technology regulation. This landmark framework establishes unprecedented legal standards for the development, deployment, and oversight of Artificial Intelligence (AI) across the public and private sectors. Notably, the initial signatories include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States of America, and the European Union.
The Treaty’s provisions aim to ensure that AI systems operate consistently with human rights, democracy, and the rule of law. It offers flexible compliance models to accommodate diverse national legal systems and establishes context-specific transparency and oversight requirements, including the identification of AI-generated content.
As emphasised by the CoE, the Treaty fills legal gaps resulting from rapid technological advances, potentially becoming a global standard-setting instrument for AI.
The Treaty’s global reach has sparked discussions about its role in shaping international AI cooperation and its implications for global power dynamics. As we delve deeper into the Treaty’s background and key innovations, we will explore how this new framework might reshape the landscape of global tech governance.
Background on AI Regulation
The path to this Treaty has been paved by several international AI governance initiatives. In 2019, OECD member countries adopted a set of AI ethics principles, and G20 leaders subsequently committed to principles drawn from that set. In November 2021, all of UNESCO’s member states adopted the Recommendation on the Ethics of Artificial Intelligence, designed to guide signatories in developing appropriate legal frameworks. In 2023, the G7 launched the Hiroshima AI Process to enhance cooperation in AI governance. Most recently, on August 1, 2024, the European Union’s Artificial Intelligence Act entered into force.
The CoE initiated its AI framework convention in 2019, leveraging its long-standing reputation for promoting human rights, democracy, and the rule of law. The convention was drafted by 46 member states with participation from observer states, the EU, and 11 non-member states including Australia, Argentina, and Peru. The process also engaged many stakeholders from civil society, academia, and industry, ensuring a comprehensive approach to AI governance.
Key Innovations in the Treaty
The convention brings novel approaches to cross-border AI regulation. It covers the entire lifecycle of AI systems, from development to decommissioning, while remaining essentially technology-neutral. Its originality stems from directly linking AI governance to human rights, democracy, and the rule of law.
A key feature of the Treaty is its risk management framework, outlined in Article 16, which requires states to implement measures for identifying, assessing, preventing, and mitigating risks posed by AI systems. While the article’s formulation is vague and leaves room for interpretation, it could nonetheless set new standards for proactive AI governance.
The Treaty also introduces accountability mechanisms, establishing obligations for transparency, oversight, and remedies (Articles 8, 14, and 26). These could significantly influence global regulation of AI accountability, particularly regarding the identification of AI-generated content.
Addressing the need for ‘safe innovation’ (Article 13), the Treaty encourages AI advancement without compromising human rights, democracy, and the rule of law. It also provides a framework for international cooperation (Article 25), recognising the complexities of AI regulation across different legal systems.
Dynamics in International Relations
Despite the call for international cooperation, the balance of power between AI-advanced nations and those still developing their capabilities remains ambiguous, as the Treaty does not directly address bridging the global AI divide. Nevertheless, adherence to the Treaty may well become a measure of a country’s commitment to responsible and ethical AI development.
Currently, the EU stands as a trailblazer in AI regulation, having adopted the EU AI Act and signed the CoE Framework Convention. While the two instruments overlap in some areas, they complement each other: the EU AI Act aims to harmonise rules for AI systems across the EU internal market, while the CoE convention focuses on AI systems’ compliance with human rights, democracy, and the rule of law.
On the other hand, against the backdrop of US-China rivalry and ongoing AI competition, the US became an early signatory, whereas China is notably absent from the Treaty. This adds another layer of complexity to the current state of AI governance. Arguably, a country’s stance on AI ethics and governance, as embodied in this Treaty, might become a new source of soft power in international relations.
On another note, the Convention is open for signature by CoE member states, the European Union, and the non-member states that participated in its development. Other non-member states may accede to the Convention by invitation once it has entered into force, subject to unanimous consent from the Parties following consultations by the Committee of Ministers of the Council of Europe (Articles 30 and 31). Europe’s ambition to lead the global legal regulation of AI may falter given the noticeable absence of some of the world’s most populous countries, such as India, China, and Brazil. Given the particular nature of the AI industry, if China, one of the pioneers of AI, does not bind itself to the Treaty, it could supply the world with AI systems that do not align with the standards the Framework Convention intends to set.
Challenges the Treaty Might Face
Effective implementation of the Treaty faces several hurdles. Reconciling diverse national AI strategies in a rapidly evolving technological environment will be challenging: if China were to become a signatory, for instance, its state-centric model would likely conflict with the US’s market-driven approach. Finding common ground among countries at different stages of AI development further complicates matters.
Establishing flexible yet effective enforcement mechanisms for AI regulation will be crucial, especially considering the challenges of monitoring across borders. Interestingly, AI systems themselves could potentially contribute to treaty enforcement and compliance monitoring processes.
Civil society has also pointed to possible loopholes in the Convention, including weak standards for fundamental rights impact assessments (FRIAs). For instance, according to the European Center for Not-for-Profit Law Stichting (ECNL), while the AI Act requires deployers of high-risk AI systems to list potential impacts on fundamental rights, there is no clear obligation to assess whether these impacts are acceptable or to prevent them where possible.
Another concern centres on the use of AI for national security purposes. The AI Act contains a blanket exemption for AI systems exclusively designed or used for national security purposes, irrespective of whether the developer is a public or private entity. This exemption effectively creates a regulatory loophole: governments could invoke national security claims to deploy AI systems that would otherwise be strictly prohibited. By granting such a broad exemption, the legislation permits the introduction of these systems without mandating technical safeguards or comprehensive fundamental rights protections, potentially enabling the proliferation of fundamentally problematic and harmful AI technologies.
Conclusion
The CoE Framework Convention sets a precedent for global technological governance and promotes mindful AI development and use. Its success depends on the pace of ratification and its ability to remain relevant amid rapid technological advancement. The Treaty lays a solid foundation for a new era of international cooperation, balancing national interests with global standards.
As we navigate this new landscape, key questions emerge: How will nations reconcile their AI ambitions with the treaty’s ethical framework? Can global cooperation in AI governance bridge the growing technological divide between nations? And perhaps most intriguingly, how will AI itself shape the future of its own regulation?
The answers to these questions will unfold in the coming years, as countries grapple with the challenges and opportunities presented by the treaty. One thing is certain: the global approach to AI development and governance is entering a new phase, one that will require unprecedented levels of international cooperation, ethical consideration, and adaptive policymaking.
About the Author
Tamar Vardiashvili is a practicing lawyer in Georgia. She holds a Bachelor of International Law (summa cum laude) and a Master of International Law (cum laude) from Tbilisi State University, with a semester spent at the University of Groningen.