Harnessing AI Risk Initiative


The convergence of a shocking acceleration of AI innovation and unregulated digital communications has brought us to what may be the most critical juncture in human history.

We can still turn the dawn of this new era into the greatest opportunity for humanity, but only if we come together globally like never before to govern its immense risks and opportunities.


The Harnessing AI Risk Initiative aims to catalyze the creation of a new global democratic organization that will ensure that AI will turn out to be humanity's greatest invention rather than its worst.

We are facilitating an open treaty-making process for AI based on the model of the intergovernmental constituent assembly, and on other time-tested and innovative democratic processes and technologies.

We believe such an approach is the most likely to result in a treaty-organization that will sustainably avert AI's immense risks to human safety and to the distribution of power and wealth, and realize AI's potential to usher in an era of unimagined well-being for all.

Unlike all other ongoing AI treaty-making initiatives, ours is based on history's most successful and most democratic treaty-making model: the intergovernmental constituent assembly, which began with two US states convening the Annapolis Convention in 1786 and culminated in the federal Constitution of the United States.

Together with a wide network of experts and NGOs, we are aggregating a critical mass of states - through a series of structured summits in Geneva - to design and jump-start a similar process by agreeing on the Mandate and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications.

The Assembly will be mandated to draft a treaty for a new organization to develop, regulate, and jointly exploit the most advanced safe AI technologies, while reliably banning unsafe ones.

The Assembly will be guided by the principle of subsidiarity, under which control rests at the most localised level possible, from global to state, community, and individual.

The Assembly will aim to maximise expertise, timeliness, and agility, while also emphasising participation, democratic processes, impartiality, and inclusivity, to ensure that the resulting treaty-organisation will be widely trusted to:

  • Encourage broad compliance with future bans and oversight

  • Enhance safety through diversity and transparency in setting standards

  • Ensure a fair and safe distribution of power and wealth

  • Mitigate destructive inter-state competition and global military instability

Given the inherently global nature of AI’s primary threats and opportunities, the mandate of the Assembly will include the following:

  • Setting global AI safety, security and privacy standards

  • Enforcing global bans for unsafe AI development and use

  • Developing world-leading or co-leading safe AI capabilities via a public-private $15+ billion Global Public Benefit AI Lab and supply chain

  • Developing globally-trusted governance-support systems

A sweeping AI treaty has been called for by hundreds of AI experts who signed an open call for an AI treaty, by the UN Secretary-General, and by OpenAI, and it has been explored in fine detail by Google DeepMind. Sam Altman even suggested last March that the US Constitutional Convention of 1787 was a "platonic ideal" treaty-making model for AI.

While admittedly ambitious, we are hopeful that even AI superpowers like the US and China will eventually support our Initiative, for several compelling reasons. First, preventing the proliferation of catastrophically dangerous AI will be much more challenging than preventing nuclear proliferation, requiring wide global compliance. Second, the enormous abundance that AI is almost certain to deliver, if its risks are properly addressed, reduces the incentive for hoarding and competition at all costs. Third, according to a recent survey, 77% of US voters support a comprehensive international treaty for AI. Lastly, how could the US oppose an initiative that replicates, globally and for AI, the democratic process that led to its own constitution?

A Better Treaty-Making Method

Regrettably, the prevailing approach to treaty-making - characterized by unanimous non-binding statements, and unstructured summits largely co-opted by a few powerful states - has proven to be both undemocratic and inefficient, as evidenced by the outcomes in areas like climate change and nuclear disarmament.

To address this, the Initiative will adopt, specifically for AI, what is widely considered the most successful and democratic model of intergovernmental treaty-making in history. This model began with two U.S. states calling a meeting of three additional states at the Annapolis Convention of 1786, which subsequently led to the adoption of a federal constitution at the U.S. Constitutional Convention of 1787 through a simple majority vote. This constitution required ratification by at least nine states, eventually receiving unanimous approval from all 13 states in 1789.

Given the significant disparities in AI capabilities, global power, and literacy rates - with over three billion people either illiterate or without internet access - the Open Transnational Constituent Assembly for AI and Digital Communications will apply a vote weighting based primarily on population size and GDP.

This mirrors the early United States and ancient Athenian democracy, where initially only one in eight adult residents was eligible to vote. The Assembly's mandate will nevertheless ensure that nearly all citizens in participating states achieve literacy and internet connectivity within a specified timeframe, progressively reducing the influence of GDP to zero.
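The exact weighting formula is not specified in the Initiative's materials. As a purely illustrative sketch under that caveat, one could model each state's vote weight as a convex combination of its population share and GDP share, with a hypothetical `gdp_factor` parameter that decays to zero as the literacy and connectivity targets described above are met:

```python
def vote_weight(population_share: float, gdp_share: float, gdp_factor: float) -> float:
    """Hypothetical vote weight for one participating state.

    population_share, gdp_share: the state's fraction of the total
    population and GDP of all participating states (each in [0, 1]).
    gdp_factor: the weight currently given to GDP; per the scheme
    sketched here, it would be progressively reduced to zero as
    literacy and internet-connectivity targets are met.
    """
    if not 0.0 <= gdp_factor <= 1.0:
        raise ValueError("gdp_factor must be in [0, 1]")
    return (1.0 - gdp_factor) * population_share + gdp_factor * gdp_share
```

For example, a state with 2% of the total population and 8% of total GDP would start with an elevated weight while `gdp_factor` is above zero, and its weight would converge to its 2% population share as `gdp_factor` reaches zero; the actual formula adopted by the Assembly may differ entirely.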

Twenty percent of the Assembly’s delegates will be elected directly by citizens of participating states through uniform electoral processes, while five percent will be selected by random sample.

The US and China, as global and AI superpowers, are welcome to join at any stage, yet the participation of each will be held in suspension until the other also joins. Early-joining states and superpowers will receive significant temporary economic and voting advantages.

Strategic Positioning

The Initiative seeks to fill the wide gaps in global representation and democratic participation left by the global AI governance and infrastructure initiatives of leading states, IGOs, and firms - including the US, China, the EU, the UN, and OpenAI's public-private "trillion AI plan" - and to become the platform for their convergence.

The Initiative aims to become the critical enabler of the UN Secretary-General's call for an "IAEA for AI." It aims to build a treaty-making vehicle with the global legitimacy and representativeness that is needed, and that his office, agencies, and boards lack - in line with his clarification that "only member states can create it, not the Secretariat of the United Nations." The Initiative will eventually constitute a caucus within the UN General Assembly and later seek the General Assembly's approval to become part of the UN system while retaining full governance autonomy.

As in 1946, when the US and the Soviet Union each proposed a new independent UN agency to manage all nuclear weapons stockpiles and all nuclear weapons and energy research - via the Baruch and Gromyko Plans, respectively - but failed to agree, we now have a second chance with AI. We can harness AI's risks and turn them into an unimagined blessing for humanity, setting a governance model for other dangerous technologies and global challenges.

Preliminary Designs and Scope of the new IGO

The Initiative is advancing a proof-of-concept proposal for the scope, functions, and character of a new intergovernmental organisation that matches the scale and nature of the challenge, with unique levels of detail and comprehensiveness and the support of dozens of advisors and experts. 

We group the required functions in three agencies of a single IGO, subject to a federal, neutral, participatory, democratic, resilient, transparent and decentralised governance structure with effective checks and balances: 

  • (1) An AI Safety Agency will set global safety standards and enforce a ban on all development, training, deployment and research of dangerous AI worldwide to sufficiently mitigate the risk of loss of control or severe abuse by irresponsible or malicious state or non-state entities.

  • (2) A Global Public Benefit AI Lab will be a $15+ billion, open, partly decentralised, democratically governed joint venture of states and suitable tech firms aimed at achieving and sustaining solid global leadership or co-leadership in human-controllable AI capability, technical alignment research and AI safety measures. 

    • It will accrue member states' capabilities and resources and distribute dividends and control to member states and directly to their citizens while stimulating and safeguarding private initiative for innovation and oversight. 

    • It will be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and firms. 

    • It will seek to achieve and sustain a resilient “mutual dependency” in its wider AI supply chain - vis-a-vis AI superpowers and other future consortia - through joint investments, diplomacy, trade relations and strategic industrial assets of participant states.

  • (3) An IT Security Agency will develop and certify radically more trustworthy and widely trusted AI governance-support systems, particularly for confidential and diplomatic communications, control subsystems for frontier AIs and other critical societal infrastructure, such as social media. 

Far from being a fixed blueprint, the proposal aims to fill a glaring gap in the availability of detailed and comprehensive proposals. It aims to stimulate the production of other similarly comprehensive proposals, fostering concrete, cogent, transparent, efficient, and timely negotiations among nations leading up to the Assembly, and to arrive soon at a single-text negotiation procedure based on majority and supermajority rule rather than unanimity.

Momentum and Roadmap

Through our collaborative efforts, we have onboarded over 32 world-class experts and advisors to the Association and the Initiative. Additionally, over 39 world-class experts and policymakers and 12 NGOs are participating in our upcoming Summit.

In March 2024, our organisation conducted high-level consultations with the United Nations missions in Geneva from four states. These meetings included three heads of mission - ambassadors - and specialists in artificial intelligence and digital technologies. These states, located in Africa and South America, collectively represent a population of 120 million, have a Gross Domestic Product (GDP) of $1.4 trillion, and manage sovereign wealth funds amounting to $130 billion. We are currently engaging with three additional delegations.

In early April 2024, we received formal correspondence expressing interest from the Ambassador to the United Nations in Geneva representing one of the largest regional intergovernmental organisations, encompassing dozens of member states. Since December, we have held extensive discussions with three of the top five AI laboratories regarding their participation in the Global Public Benefit AI Lab.

On April 23rd, 2024, we launched the Coalition for the Harnessing AI Risk Initiative. This launch led to the creation of an Open Call, an invitation extended to all individuals and organisations to participate, collaborate, and share their expertise.

We plan to host bilateral and multilateral meetings in Geneva in May and June with states, intergovernmental organisations (IGOs), and AI labs. These meetings will coincide with the United Nations International Telecommunication Union World Summit on the Information Society (UN ITU WSIS), scheduled for May 25-29, and the United Nations AI for Good Summit, set for June 10-13. Additionally, we will hold our Pre-Summit Virtual Conference on June 12th, culminating in the 1st Harnessing AI Risk Summit this November in Geneva.

Learning from History’s Greatest Treaty-Making Success

Nine years after the ratification of the U.S. Articles of Confederation in 1781, it became evident to many states that these measures were insufficient to adequately protect their economic interests and security. Therefore, in 1786, two states initiated the Annapolis Convention by inviting three others to participate. This meeting laid the foundation for the U.S. Constitutional Convention of 1787, which established a robust federation.

During the Constitutional Convention, state delegations reached a consensus through a simple majority vote on a draft of the U.S. Constitution. This constitution was set to be ratified if endorsed by the legislatures of at least nine of the thirteen states. In retrospect, this process marked a significant achievement, even though only one in eight adults had voting rights.

Given this historical context and the success of that approach, a similar strategy should be adopted at the global level for Artificial Intelligence (AI), a pivotal technology with far-reaching consequences for the economy, safety, security, and the very fabric of human existence. If we can initially bring together seven or more globally diverse states, it will pave the way to engage additional nations in a "Global Annapolis Convention for AI" and ensure its success.

Opportunities

Find below detailed opportunities to join, support, or partner with the Harnessing AI Risk Initiative and/or its 1st Harnessing AI Risk Summit this November 2024 in Geneva:

Overview PDF

For more information on the Initiative and the Lab, review our 90-page, weekly-updated Harnessing AI Risk Initiative Overview. It includes the text of this page at the top and can be easily navigated via a clickable table of contents: