Techno-multilateralism: The UN in the age of post-truth diplomacy

CIDOB Report 6
Publication date: 09/2020
Author: Carme Colomina, Research Fellow, CIDOB

A new hybridity of power is increasingly eroding multilateralism and impacting the work of the United Nations (UN). Technological acceleration has brought new global interdependencies but also new vulnerabilities. Societies and economies undergoing digital transformation face new digital divides, altered media landscapes, multiplying political and communication actors and proliferating information sources of doubtful traceability, as well as a deeper erosion of privacy and human rights. The heavy machinery of the United Nations architecture, governed by power balances forged in now-distant battles, struggles to adapt to the challenges and threats posed by the digital realm.

Imagine a growing collection of audio and video recordings depicting high-profile leaders, from Donald Trump to Vladimir Putin and Xi Jinping, saying things they never said. Imagine the political impact and public disbelief those fake speeches, generated with machine learning technology, would provoke in a world of uncertainties, power disruptions and fast technological change. It takes only a few hours of work, less than $10 in cloud computing resources and access to the wide archive of United Nations General Assembly speeches to fake a credible political speech using Artificial Intelligence (AI), as demonstrated by Global Pulse, the UN Secretary-General’s initiative on big data and AI for development, humanitarian action and peace.1 When we can no longer believe what we see, truth and trust become hard to discern and diplomacy is undermined as never before.
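To make concrete how low the technical barrier has become, the sketch below fine-tunes a small, publicly available language model on a plain-text corpus of General Assembly statements and then samples a "speech-like" continuation. It is a minimal illustration only: the model choice (GPT-2 via the Hugging Face transformers library), the file name un_speeches.txt and the training settings are assumptions for this example, not the setup used in the Global Pulse study.

```python
# Minimal sketch: fine-tune an off-the-shelf language model on a corpus of
# UN General Assembly statements and sample "speech-like" text from it.
# Model, file name and hyperparameters are illustrative assumptions.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "un_speeches.txt": a plain-text file of General Assembly statements (hypothetical path).
dataset = load_dataset("text", data_files={"train": "un_speeches.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="un-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample a continuation from a typical opening formula.
prompt = "Mr. President, distinguished delegates,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run on commodity cloud hardware, a pipeline of roughly this shape is what makes the "few hours and less than $10" estimate plausible.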

Lies have always been part of governments’ foreign policy toolkit. Historically, the UN has been exposed to strategic and deliberate manipulation, intentional disinformation and propagandistic speeches. Before the UN Security Council on February 5th, 2003, Colin Powell, US secretary of state under President George W. Bush, consciously deceived the world when he accused Saddam Hussein’s Iraq of possessing weapons of mass destruction. Claiming to be stating only “facts and conclusions based on solid intelligence”, Powell (2003) justified a war that was “illegal” and breached the UN Charter, according to former United Nations secretary-general Kofi Annan. What has changed since then?

We are immersed in a technological acceleration that is transforming the concept of power, the idea of threat and the scenarios of global confrontation. Statecraft must therefore adapt to an evolving landscape in which military capability is no longer the sole measure of strength. The United States and the European Union (EU) feel overwhelmed by Chinese technological development, and the new hegemonies of power are contested through more diffuse, hybrid threats and in more diverse settings. Latest-generation disinformation has more resources, greater capacity to penetrate public discourse and new avenues of political interference. It aggravates societal tensions and amplifies public polarisation. The perception of facts is now mediated by emotions, and the sense of what is or is not true seems to be a matter of free choice. The transformation of the public sphere we are witnessing is explained not only by the crisis of traditional media systems but also by the new algorithmic order that largely controls the selective predetermination of the information we see.

Information embodies a mental framework and implies values. It is logical, then, that the information space is under strain not only from power contests but also from clashing models. As the World Economic Forum’s Global Risks Report 2019 stated, “new technological capabilities have amplified existing tensions over values – for example, by weakening individual privacy or deepening polarization – while differences in values are shaping the pace and direction of technological advances in different countries”. And yet artificial intelligence can also be a powerful tool for international development. The World Bank, in collaboration with other global partners including the UN, is building a Famine Action Mechanism that relies on deep learning systems developed by Microsoft, Google and Amazon to detect when food crises risk becoming famines. UNICEF is drawing on MIT’s deep learning expertise to simulate images of major global cities “in ruin” to help promote empathy and connection with the suffering of those who have experienced bombing, loss and war. Companies are using AI technology in autonomous drones to deliver critical medical supplies, such as vaccines, to rural hospitals in Africa. These examples show the enormous potential for development and humanitarian aid, but the convergence of AI with other emerging technologies also creates unprecedented vulnerabilities and risks to global security (Pauwels, 2019). A major technological power competition is underway, and inequalities between tech-taking and tech-leading countries are growing. Digital acceleration widens digital divides and multiplies fundamental asymmetries.
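As a rough illustration of what "detecting when food crises risk becoming famines" can mean in practice, the sketch below trains a simple risk classifier on synthetic district-level indicators (food price changes, rainfall anomalies, conflict events, displacement). Everything in it is hypothetical, including the features and the data; the actual Famine Action Mechanism models are far more sophisticated and are not reproduced here.

```python
# Illustrative sketch of an early-warning classifier of the kind that underpins
# initiatives such as the Famine Action Mechanism. Features, data and threshold
# are synthetic and hypothetical; the real FAM models are not public.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Hypothetical district-month indicators: staple price change, rainfall anomaly,
# conflict events, displacement rate.
X = np.column_stack([
    rng.normal(0.05, 0.2, n),   # food price change
    rng.normal(0.0, 1.0, n),    # rainfall anomaly (z-score)
    rng.poisson(2, n),          # conflict events
    rng.normal(0.01, 0.05, n),  # displacement rate
])
# Synthetic label: escalation is more likely with price spikes, drought,
# conflict and displacement.
risk = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] + 4.0 * X[:, 3]
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The point of the sketch is only that such systems turn heterogeneous indicators into a probabilistic warning; the quality of the data and the choice of thresholds, not the algorithm, determine whether the warning is trustworthy.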

In this context, the pioneers of the digitalisation of public diplomacy have quickly been left behind by the new reality. Politics conducted through social media has become less about connectivity and image-building than about public showcasing and the disruption of the traditional dynamics of international politics. The idea of a post-truth era refers not only to the ability to penetrate public discourse with lies, but to the intentional distortion of the truth. Diplomatic engagement requires a minimum level of shared understanding and mutual openness. But international relations have so far failed to escape today’s emotion-driven reality, which pushes facts to the margins, while social media “Twiplomacy” breaks with the old cultural dynamics and tempos of foreign policy. In this post-truth era, weaponising information has become a tool with which to erode opposition in any kind of political system; almighty leaders of global powers, whether in the White House or the Alvorada Palace, are able to spread lies and disinformation from their Twitter accounts and feed their own people’s polarisation. As Laura Rosenberger points out, “the new great-power competition won’t necessarily take place on battlefields or in boardrooms; it will happen on smartphones, computers, and other connected devices and on the digital infrastructure that supports them” (Rosenberger, 2020). This information contest has created new democratic dilemmas.

“The near-future will see the rise of cognitive-emotional conflicts: long-term, tech-driven propaganda aimed at generating political and social disruptions, influencing perceptions, and spreading deception” (Pauwels, 2019: 16). How can multilateralism prevail in this age of post-truth diplomacy? What kind of governance can we foresee for a new reality in which “automated machine processes not only know our behaviour but also shape our behaviour at scale” (Zuboff, 2019: 8)? What role can the UN, whose reform is long overdue, play in this bipolar reality, torn between what Shoshana Zuboff has termed “surveillance capitalism”, as global technological platforms betray the early digital dream, and political regimes that use the uncertainties of the COVID-19 pandemic to step up techno-authoritarian social control? How can the multilateral system better understand and anticipate risks without being caught in this bipolarity?

Disinformation versus human rights

The rights to freedom of thought and opinion are critical to any democratic system. Disinformation therefore poses a human rights threat, because it can damage the right to free and fair elections, the right to non-discrimination and the right to protection of one’s honour and reputation from unlawful attacks. At the same time, the legal and political abuse of a vaguely defined fight against fake news has in some contexts and countries resulted in the suppression of freedom of expression or the persecution of political dissent.

In March 2017, a joint declaration by the UN Special Rapporteur on Freedom of Opinion and Expression, the Organization for Security and Co-operation in Europe (OSCE) Representative on Freedom of the Media, the Organization of American States (OAS) Special Rapporteur on Freedom of Expression and the African Commission on Human and Peoples’ Rights (ACHPR) Special Rapporteur on Freedom of Expression and Access to Information stressed that “the human right to impart information and ideas is not limited to ‘correct’ statements, that the right also protects information and ideas that may shock, offend and disturb”. They declared themselves alarmed “at instances in which public authorities denigrate, intimidate and threaten the media, including by stating that the media is ‘the opposition’ or is ‘lying’ and has a hidden political agenda”, and warned that “general prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression [...] and should be abolished”.

The fact that debates about regulating the online ecosystem have already reached the UN shows the political risks involved in establishing common standards to address the new challenges. So far, the concept of cybercrime has opened a new door to the repression of dissent and freedom of expression, something that various civil society organisations have denounced. In December 2019, a Russian-led, Chinese-backed resolution on cybercrime entitled “Countering the use of information and communications technologies for criminal purposes” was adopted by 79 votes to 60, with 33 abstentions, despite opposition from several major Western powers. Votes in favour were cast by countries such as Cambodia, North Korea, Burma, Venezuela, Algeria, Syria, Belarus and Kazakhstan. All EU member states, Canada, Australia and the United States voted against. Opponents of the text feared that the resolution would serve to erode freedom of expression online. One month before the vote, a group of NGOs and human rights associations sent a letter to the UN General Assembly warning that “the criminalization of the ordinary activities of the Internet by individuals and organizations through cybercrime law enforcement is a growing trend in many countries around the world”,2 and questioned the need for a specific convention for such cases. This political clash at UN headquarters in December 2019 embodied the collision of different models and values with digital reality.

For a majority of countries around the world, concerns about cybercrime have less to do with hacking attacks and identity theft than with the repression of political dissent. Hence, the 2019 resolution criticised the existing treaty, the Budapest Convention on Cybercrime, in order to advance the “fight against cybercrime” in ways that facilitate information control and the suppression of political dissidents. There is a tangible risk that authoritarian multilateralism could come to shape internet governance.

Cyber-insecurities?

Technology continues to play a profound role in shaping the global risks landscape. “Cyber-attacks” and “massive data fraud and theft” have for two consecutive years ranked among the top global risks listed by the World Economic Forum (WEF, 2019: 16), together with economic and political confrontations between major powers, the erosion of multilateral trading rules and agreements, loss of confidence in collective security alliances, populist and nativist agendas, and media echo chambers and “fake news”. New mechanisms of cooperation on data governance are urgently needed. The issue is not just the ongoing race over who owns the data, but also what use is made of it.

This need for multilateralism goes beyond states to reach the “unprecedented new species of power” (Zuboff, 2019: 352). Data concentration is empowering a limited number of global corporations. The proliferation of big technological actors and the cross-jurisdictional nature of internet activity make responding to the challenges of cyberspace at a national level impossible. However, “there is an increasingly urgent need to establish guidelines, both at national and international levels, to accompany the progressive deployment of augmentation technologies in civil and military contexts” (Pauwels, 2019: 21). But it will not be easy to do that in a scenario of duality and structural confrontation. Global incoherence and bipolar collision – embodied by the trade and technology war between the United States and China – are shaping international relations and are at the root of the divisions between the key UN members attempting to set some sort of regulation.

Global security and stability are increasingly dependent on digital security and stability, and the UN could be the space for debating values and norms in this field, setting standards and contributing to arbitration and dispute resolution. But the challenge relates not only to ensuring that any changes are addressed multilaterally; it also concerns whose agenda should be followed. “When democracies regulate content and increase control over the Internet’s architecture, they weaken democratic institutions” (Rosenberger, 2020). The open and free exchange of information to empower citizens to make informed decisions lies at the heart of any democratic system. As Laura Rosenberger (2020) puts it, “in democratic philosophy, information rests with citizens; in the autocratic vision, it rests with those in power”. Technological vulnerabilities can increase democratic deficits. The challenge is to build a new multilateral framework out of this architecture of control and to transcend competition in favour of cooperation.

References

Bullock, Joseph and Luengo-Oroz, Miguel. Automated Speech Generation from UN General Assembly Statements: Mapping Risks in AI Generated Texts. International Conference on Machine Learning AI for Social Good Workshop, Long Beach, United States, 2019. arXiv:1906.01946

Organization for Security and Co-operation in Europe. Joint declaration on freedom of expression and “fake news”, disinformation and propaganda. 3 March 2017 (online). [Accessed on 20.08.2020]: https://www.osce.org/fom/302796

Pauwels, Eleonore. The New Geopolitics of Converging Risks. The UN and Prevention in the Era of AI. United Nations University Centre for Policy Research, 29 April 2019 (online). [Accessed on 20.08.2020]: https://collections.unu.edu/eserv/UNU:7308/PauwelsAIGeopolitics.pdf

Powell, Colin L. Remarks to the United Nations Security Council. February 5, 2003, US Department of State Archive (online). [Accessed on 20.08.2020]: https://2001-2009.state.gov/secretary/former/powell/remarks/2003/17300.htm

Rosenberger, Laura. “Making Cyberspace Safe for Democracy. The New Landscape of Information Competition”. Foreign Affairs May/June 2020. See https://www.foreignaffairs.com/articles/china/2020-04-13/making-cyberspace-safe-democracy

Schwarz, Jon. “Lie After Lie: What Colin Powell Knew about Iraq 15 Years ago and What He Told the UN”. The Intercept, February 6, 2018 (online). [Accessed on 20.08.2020]: https://theintercept.com/2018/02/06/lie-after-lie-what-colin-powell-knew-about-iraq-fifteen-years-ago-and-what-he-told-the-un/

United Nations. The Age of Digital Interdependence. Report of the Secretary-General’s High-level Panel on Digital Cooperation, 10 June 2019 (online). [Accessed on 15.08.2020]: https://digitalcooperation.org/report

United Nations General Assembly. Countering the use of information and communications technologies for criminal purposes, A/74/401, 25 November 2019 (online). [Accessed on 15.08.2020]: https://www.undocs.org/A/74/401

World Economic Forum. The Global Risks Report 2019. WEF, 15 January 2019 (online). [Accessed on 15.08.2020]: https://www.weforum.org/reports/the-global-risks-report-2019

Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books, 2019.

Notes:

  1. See https://www.unglobalpulse.org/2019/06/new-study-by-global-pulse-highlights-risks-of-ai-generated-texts-creates-fake-un-speeches/
  2. See https://www.apc.org/en/pubs/open-letter-un-general-assembly-proposed-international-convention-cybercrime-poses-threat-human