Invitation Only Research Event
Chatham House | 10 St James's Square | London | SW1Y 4LE
Professor Robert Shiller, Sterling Professor of Economics, Yale University
Chair: Marianne Schneider-Petsinger, Research Fellow, US and the Americas Programme, Chatham House
The 2007-08 financial crisis wreaked havoc on the lives of millions of people across the globe and upended faith in the prevailing economic system; many countries are still recovering a decade on.
Drawing on extensive research from his new book, Narrative Economics: How Stories Go Viral and Drive Major Economic Events, Professor Shiller will use a rich array of historical examples and data to outline a new way to think about economic change and the narratives that shape it. He will address questions such as: have lessons been learned since the last financial crisis? Are the same dislocations likely to occur again? And what toolkits, if any, exist for anticipating the next financial crisis or recession?
Attendance at this event is by invitation only.
Yojiro Uchino was director of the defence budget at the Ministry of Finance in Japan from 2016 to 2019, working on budgets for the country's National Defense Program Guidelines and also its Mid-Term Defense Program.
During his fellowship, Yojiro will be undertaking research on the relationship between national security and fiscal positions, as well as the balance between free trade and national security.
Yojiro Uchino is based at Chatham House until July 2021, hosted by the Global Economy and Finance programme.
2015-16 | Director, Allowance Control and Mutual Assistance Insurance Division
2014-15 | Director, Inter-Division Affairs of Budget
2012-14 | Director, Government Shareholding Office (where he planned simultaneous IPO of Japan Post, the holding company, and its subsidiaries Japan Post Bank and Japan Post Insurance)
1997 | Admitted to the Bar in New York State Supreme Court
1996 | LLM, University of Michigan Law School
1992 | BA Law, University of Tokyo
17 June 2019
The nationalist urge to keep the world off your back extends to foreign finance.
It is nearly 30 years since Rudiger Dornbusch and Sebastian Edwards published a seminal book, The Macroeconomics of Populism. Their conclusion back then was that the economic policies of populist leaders were quintessentially irresponsible. These governments, blinded by an aim to address perceived social injustices, specialised in profligacy, unbothered by budget constraints or whether they might run out of foreign exchange.
Because of this disregard for basic economic logic, their policy experiments inevitably ended badly, with some combination of inflation, capital flight, recession and default. Salvador Allende’s Chile in the 1970s, or Alan García’s Peru in the 1980s, capture this story perfectly.
These days, the macroeconomics of populism looks different. Of course there are populist leaders out there whose policies follow, more or less, the playbook of the 1970s and 1980s. Donald Trump may prove to be one of those, with a late-cycle fiscal expansion that seemed to have no basis in economic reasoning; Recep Tayyip Erdogan, by some accounts, may be another.
But a much more interesting phenomenon is the apparent surge in populist leaders whose economic policies are remarkably disciplined.
Take Mexico’s president, Andrés Manuel López Obrador. When it comes to fiscal policy, it is odd indeed that this fiery critic of neoliberalism seems fully committed to austerity. His budget for 2019 targets a surplus before interest payments of 1 per cent of GDP, and on current plans he intends to increase that surplus next year to 1.3 per cent of GDP. He has upheld the autonomy of the central bank and, so far at least, his overall macroeconomic framework is anything but revolutionary.
Hungary’s prime minister Viktor Orban offers another example of conservative populism. Under his watch, budget deficits have been considerably lower than they had been previously, helping to push the stock of public debt down from 74 per cent of GDP in 2010, the year Orban took over, to 68 per cent last year.
This emphasis on the virtues of fiscal prudence is also visible in Poland, where Jaroslaw Kaczynski’s PiS has managed public finances with sufficient discipline in the past few years to push the debt/GDP ratio below 50 per cent last year, the first time this has happened since 2009.
The obvious question is: what has changed in the decades since Dornbusch and Edwards went into print?
One answer is that today’s populists tend to strive for national self-reliance, which encourages them to avoid building up any dependence on foreign capital. And since that goal is achieved by keeping a tight rein on macro policy, fiscal indiscipline is avoided in order to limit vulnerability to foreign influences.
Perhaps this is because the 'them', or the perceived enemy, for many of today’s populists tends to be outside the country rather than inside. Broadly speaking, it is the forces of globalisation — and global capital in particular — that are the problem for these leaders, and self-reliance is the only way to keep those forces at arm’s length. This helps to explain why, for example, Orban has been so keen to repay debt to Hungary’s external creditors. He has relied instead on selling bonds to Hungarian households to finance his deficits, even though the interest rates on those bonds are much higher than he would pay to foreign creditors. It also helps explain why the PiS in Poland has presided over a decline in foreign holdings of its domestic bonds. Foreign investors owned 40 per cent of Poland’s domestic government debt back in 2015, but only 26 per cent now.
In other words, among many of today’s populists there is a blurring of the distinction between populism and nationalism. And the nationalistic urge to keep the rest of the world off your back seems to dominate the populist urge to spend money. The perfect example of that instinct is Vladimir Putin: not necessarily a populist, but his administration has been emphatic about the need to keep public spending low and to build solid financial buffers. National self-reliance is an economic obsession for the Russian government, and provides a model for other countries that wish to insulate themselves from international finance.
One of the reasons why the macroeconomics of populism have changed in this way is the historical legacy of economic disaster. If you are a populist leader in a country where financial crisis is part of living memory — as it is in Mexico, Hungary and Russia, say — you might do well to err on the side of conservatism for fear of repeating the mistakes of your predecessors.
But another reason why populism looks different for countries like Poland, Hungary, Mexico and Russia has to do with mere luck. Hungary and Poland, in particular, enjoy the luck of geography: having been absorbed into the EU, they have received financial transfers from Brussels averaging some 3-4 per cent of GDP in the past few years, so that populism in these countries has been solidly underpinned by the terms of their EU membership. López Obrador is enjoying the inheritance of his predecessor’s sound macro policy, together with a buoyant US economy and low US interest rates. Russia has had the good fortune of oil exports to rely on.
The thing about luck is that it can run out. So maybe it’s not quite time yet to bury the old macroeconomics of populism. But for the time being, it seems true to say that many of today’s populists have an unexpectedly robust sense of economic discipline.
This article was originally published in the Financial Times.
1 May 2020
Although the pandemic means the Nuclear Non-Proliferation Treaty (NPT) Review Conference (RevCon) is postponed, the delay could be an opportunity to improve the health of the NPT regime.
Despite face-to-face diplomatic meetings being increasingly rare during the current disruption, COVID-19 will ultimately force a redefinition of national security and defence spending priorities, and this could create a better political climate at RevCon when it takes place in 2021.
With US presidential elections due in November and a gradual engagement growing between the EU and Iran, there could be a new context for more cooperation between states by 2021. Two key areas of focus over the coming months will be the arms control talks between the United States and Russia, and Iran’s compliance with the 2015 Joint Comprehensive Plan of Action (JCPOA), also known as the Iran Nuclear Deal.
It is too early to discern the medium- and longer-term consequences of COVID-19 for defence ministries, but a greater focus on societal resilience and reinvigorating economic productivity will likely undercut the rationale for expensive nuclear modernization.
Therefore, extending the current New START (Strategic Arms Reduction Treaty) would be the best, most practical option to give both Russia and the United States time to explore more ambitious multilateral arms control measures, while allowing their current focus to remain on the pandemic and economic relief.
But with the current treaty — which limits nuclear warheads, missiles, bombers, and launchers — due to expire in February 2021, the continuing distrust between the United States and Russia makes this extension hard to achieve, and a follow-on treaty even less likely.
Prospects for future bilateral negotiations are hindered by President Donald Trump’s vision for a trilateral arms control initiative involving both China and Russia. But China opposes this on the grounds that its nuclear arsenal is far smaller than that of the two others.
While there appears to be agreement that the nuclear arsenals of China, France, and the UK (the other NPT nuclear-weapon states) and those of the states outside the treaty (India, Pakistan, North Korea, and Israel) will all have to be taken into account going forward, a practical mechanism for doing so remains elusive.
If Joe Biden wins the US presidency he seems likely to pursue an extension of the New START treaty and could also prevent a withdrawal from the Open Skies treaty, the latest arms control agreement targeted by the Trump administration.
Under a Biden administration, the United States would also probably re-join the JCPOA, provided Tehran returned to strict compliance with the deal. Biden could even use the team that negotiated the Iran deal to advance the goal of denuclearization of the Korean peninsula.
For an NPT regime already confronted by a series of longstanding divergences, it is essential that Iran remains a signatory, especially as tensions between Iran and the United States have escalated recently — due to the assassination of Qassem Soleimani and the recent claim by Iran’s Revolutionary Guard Corps to have successfully placed the country’s first military satellite into orbit.
This announcement raised red flags among experts about whether Iran is developing intercontinental ballistic missiles due to the dual-use nature of space technology. The satellite launch — deeply troubling for Iran’s neighbours and the EU countries — may strengthen the US argument that it is a cover for the development of ballistic missiles capable of delivering nuclear weapons.
However, as with many other countries, Iran is struggling with a severe coronavirus crisis and will be pouring its scientific expertise and funds into that rather than other efforts — including the nuclear programme.
Those European countries supporting the trading mechanism INSTEX (Instrument in Support of Trade Exchanges) for sending humanitarian goods into Iran could use this crisis to encourage Iran to remain in compliance with the JCPOA and its NPT obligations.
France, Germany and the UK (the E3) have already successfully concluded the first transaction, which was to facilitate the export of medical goods from Europe to Iran. But the recent Iranian escalatory steps will most certainly place a strain on the preservation of this arrangement.
COVID-19 might have delayed Iran’s next breach of the 2015 nuclear agreement but Tehran will inevitably seek to strengthen its hand before any potential negotiations with the United States after the presidential elections.
As frosty US-Iranian relations — exacerbated by the coronavirus pandemic — prevent diplomatic negotiations, this constructive engagement between the E3 and Iran might prove instrumental in reviving the JCPOA and ensuring Iran stays committed to both nuclear non-proliferation and disarmament.
While countries focus their efforts on tackling the coronavirus pandemic, it is understandable that resources may be limited for other global challenges, such as the increasing risk of nuclear weapons use across several regions.
But the potential ramifications of the COVID-19 crisis for the NPT regime are profound. Ongoing tensions between the nuclear-armed states must not be ignored while the world’s focus is elsewhere, and the nuclear community should continue to work together to progress nuclear non-proliferation and disarmament, building bridges of cooperation and trust that can long outlast the pandemic.
21 April 2020
COVID-19 is proving to be a grave threat to humanity. But this is not a one-off: there will be future crises, and we can be better prepared to mitigate them.
A controversial debate during COVID-19 is the state of readiness within governments and health systems for a pandemic, with the lines of debate drawn on the issues of testing provision, personal protective equipment (PPE), and the speed of decision-making.
President Macron in a speech to the nation admitted French medical workers did not have enough PPE and that mistakes had been made: ‘Were we prepared for this crisis? We have to say that no, we weren’t, but we have to admit our errors … and we will learn from this’.
In reality, few governments were fully prepared. In years to come, all will ask: ‘How could we have been better prepared? What did we do wrong? And what can we learn?’ But after every crisis, governments ask these same questions.
Most countries have put in place national risk assessments and established processes and systems to monitor and stress-test crisis-preparedness. So why have some countries been seemingly better prepared?
Some have had more time and been able to watch the spread of the disease and learn from those countries that had it first. Others have taken their own routes, and there will be much to learn from comparing these different approaches in the longer run.
Governments in Asia have been strongly influenced by the experience of the SARS epidemic in 2002-03 and, in South Korea in particular, the MERS-CoV outbreak in 2015, which was the largest outside the Middle East. Several carried out preparatory work in terms of risk assessment, preparedness measures and resilience planning for a wide range of threats.
Case Study of Preparedness: South Korea
By 2007, South Korea had established the Division of Public Health Crisis Response within the Korea Centers for Disease Control and Prevention (KCDC) and, in 2016, the KCDC Center for Public Health Emergency Preparedness and Response established a round-the-clock Emergency Operations Center with rapid response teams. KCDC is responsible for the distribution of antiviral stockpiles to 16 cities and provinces that are required by law to hold and manage antiviral stockpiles.
And, at the international level, there are frameworks for preparedness for pandemics. The International Health Regulations (IHR) - adopted at the 2005 World Health Assembly and binding on member states - require countries to report certain disease outbreaks and public health events to the World Health Organization (WHO) and ‘prevent, protect against, control and provide a public health response to the international spread of disease in ways that are commensurate with and restricted to public health risks, and which avoid unnecessary interference with international traffic and trade’.
Under IHR, governments committed to a programme of building core capacities including coordination, surveillance, response and preparedness. The UN Sendai Framework for Disaster Risk Reduction highlights disaster preparedness for effective response as one of its main purposes, and these measures have already been incorporated into the Sustainable Development Goals (SDGs) and other Agenda 2030 initiatives. UN Secretary-General António Guterres has said COVID-19 ‘poses a significant threat to the maintenance of international peace and security’ and that ‘a signal of unity and resolve from the Council would count for a lot at this anxious time’.
Case Study of Preparedness: United States
The National Institutes of Health (NIH) and the Centers for Disease Control and Prevention (CDC) established PERRC – the Preparedness for Emergency Response Research Centers – as a requirement of the 2006 Pandemic and All-Hazards Preparedness Act, which required research to ‘improve federal, state, local, and tribal public health preparedness and response systems’. The 2006 Act created the post of Assistant Secretary for Preparedness and Response (ASPR) in the Department of Health and Human Services (HHS) and authorised the development and acquisition of medical countermeasures and a quadrennial National Health Security Strategy. It has since been superseded by the 2019 Pandemic and All-Hazards Preparedness and Advancing Innovation Act, which set in place a number of measures including the requirement for the US government to re-evaluate several important metrics of the Public Health Emergency Preparedness cooperative agreement and the Hospital Preparedness Program, and a requirement for a report on the states of preparedness and response in US healthcare facilities.
This pandemic looks set to continue to be a grave threat to humanity. But there will also be future pandemics – whether another type of coronavirus or a new influenza virus – and our species will be threatened again, we just don’t know when.
Other disasters will befall us too. We already see the impacts of climate change arriving on our doorsteps: floods, hurricanes, fires, crop failures and other manifestations of a warming, increasingly turbulent atmosphere, growing in both number and intensity. And we will continue to suffer major volcanic eruptions, earthquakes and tsunamis. All are high-impact, unknown-probability events.
Preparedness for an unknown future is expensive and requires a great deal of effort for events that may not happen within the preparers’ lifetimes. It is hard to imagine now, but people will forget this crisis, and revert to their imagined projections of the future where crises don’t occur, and progress follows progress. But history shows us otherwise.
Preparations for future crises always fall prey to financial cuts and austerity measures in lean times unless there is a mechanism to prevent that. Cost-benefit analyses will understandably tend to prioritise the urgent over the long-term. So governments should put in place legislation – or strengthen existing legislation – now to ensure their countries are as prepared as possible for whatever crisis is coming.
Such legislation would require governments to report back to parliament every year on the state of their national preparations, detailing measures such as:
In addition, further actions should be taken:
And at the international level:
COVID-19 has been referred to as the 9/11 of crisis preparedness and response. Just as that shocking terrorist attack shifted the world and created a series of measures to address terrorism, we now recognise our security frameworks need far more emphasis on being prepared and being resilient. Whatever has been done in the past, it is clear that was nowhere near enough and that has to change.
Case Study of Preparedness: The UK
The National Risk Register was first published in 2008 as part of the undertakings laid out in the National Security Strategy (the UK also published the Biological Security Strategy in July 2018). Now entitled the National Risk Register for Civil Emergencies, it has been updated regularly to analyse the risks of major emergencies that could affect the UK in the next five years and provide resilience advice and guidance.
The latest edition - produced in 2017 when the UK had a Minister for Government Resilience and Efficiency - placed the risk of a pandemic influenza in the ‘highly likely and most severe’ category. It stood out from all the other identified risks, whereas an emerging disease (such as COVID-19) was identified as ‘highly likely but with moderate impact’. However, much preparatory work for an influenza pandemic is the same as for COVID-19, particularly in prepositioning large stocks of PPE, readiness within large hospitals, and the creation of new hospitals and facilities.
One key issue is that the 2017 NHS Operating Framework for Managing the Response to Pandemic Influenza was dependent on pre-positioned ‘just in case’ stockpiles of PPE. But as it became clear the PPE stocks were not adequate for the pandemic, it was reported that recommendations about the stockpile by NERVTAG (the New and Emerging Respiratory Virus Threats Advisory Group, which advises the government on the threat posed by new and emerging respiratory viruses) had been subjected to an ‘economic assessment’ and decisions reversed on, for example, eye protection.
The UK chief medical officer Dame Sally Davies, when speaking at the World Health Organization about Operation Cygnus – a 2016 three-day exercise on a flu pandemic in the UK – reportedly said the UK was not ready for a severe flu attack and ‘a lot of things need improving’.
Aware of the significance of the situation, the UK Parliamentary Joint Committee on the National Security Strategy launched an inquiry in 2019 on ‘Biosecurity and human health: preparing for emerging infectious diseases and bioweapons’, which intended to coordinate a cross-government approach to biosecurity threats. But the inquiry had to postpone its oral hearings scheduled for late October 2019 and, because of the general election in December 2019, the committee was obliged to close the inquiry.
20 April 2020
Nuclear deterrence theory, with its roots in the Cold War era, may not account for all eventualities in the 21st century. Researchers at Chatham House have worked with eight experts to produce this collection of essays examining four contested themes in contemporary policymaking on deterrence.
Summary
2 April 2020
The current crisis is an opportunity for the UK government to show agility in how it deals with cyber threats and how it cooperates with the private sector in creating cyber resilience.
The World Health Organization, US Department of Health and Human Services, and hospitals in Spain, France and the Czech Republic have all suffered cyberattacks during the ongoing COVID-19 crisis.
In the Czech Republic, a successful attack targeted a hospital with one of the country’s biggest COVID-19 testing laboratories, forcing its entire IT network to shut down, urgent surgical operations to be rescheduled, and patients to be moved to nearby hospitals. The attack also delayed dozens of COVID-19 test results and disrupted the hospital’s data transfer and storage, affecting the healthcare the hospital could provide.
In the UK, the National Health Service (NHS) is already in crisis mode, focused on providing beds and ventilators to respond to one of the largest peacetime threats ever faced. But supporting the health sector goes beyond increasing human resources and equipment capacity.
Cybersecurity support, both at organizational and individual level, is critical so health professionals can carry on saving lives, safely and securely. Yet this support is currently missing and the health services may be ill-prepared to deal with the aftermath of potential cyberattacks.
When the NHS was hit by the WannaCry ransomware attack in 2017 – one of the largest cyberattacks the UK has witnessed to date – it caused massive disruption, with at least 80 of the 236 trusts across England affected and thousands of appointments and operations cancelled. Fortunately, a ‘kill-switch’ activated by a cybersecurity researcher quickly brought it to a halt.
But the UK’s National Cyber Security Centre (NCSC) has been warning for some time of a cyberattack targeting national critical infrastructure sectors, including the health sector. A similar attack, known as a category one (C1) attack, could cripple the UK with devastating consequences. It could happen, and we should be prepared.
Although the NHS has taken measures since WannaCry to improve cybersecurity, its enormous IT networks, legacy equipment and the overlap between operational and information technology (OT/IT) mean that mitigating current potential threats is beyond its ability.
And the threats have radically increased. More NHS staff with access to critical systems and patient health records are increasingly working remotely. The NHS has also extended its physical presence with new premises, such as the Nightingale hospital, potentially the largest temporary hospital in the world.
Radical change frequently means proper cybersecurity protocols are not put in place. Even existing cybersecurity processes had to be side-stepped because of the outbreak, such as the decision by NHS Digital to delay its annual cybersecurity audit until September. During this audit, health and care organizations submit data security and protection toolkits to regulators setting out their cybersecurity and cyber resilience levels.
The decision to delay was made to allow the NHS organizations to focus capacity on responding to COVID-19, but cybersecurity was highlighted as a high risk, and the importance of NHS and Social Care remaining resilient to cyberattacks was stressed.
The NHS is stretched to breaking point. Expecting it to be on top of its cybersecurity during these exceptionally challenging times is unrealistic, and could actually add to the existing risk.
Now is the time when new partnerships and support models should emerge to support the NHS and help build its resilience. Now is the time when innovative public-private partnerships on cybersecurity should be formed.
Similar to the economic package from the UK chancellor and innovative thinking on ventilator production, the government should oversee a scheme calling on the large cybersecurity capacity within the private sector to step in and assist the NHS. This support can be delivered in many different ways, but it must be mobilized swiftly.
The NCSC for instance has led the formation of the Cyber Security Information Sharing Partnership (CiSP)— a joint industry and UK government initiative to exchange cyber threat information confidentially in real time with the aim of reducing the impact of cyberattacks on UK businesses.
CiSP comprises organizations vetted by NCSC which go through a membership process before being able to join. These members could conduct cybersecurity assessment and penetration testing for NHS organizations, retrospectively assisting in implementing key security controls which may have been overlooked.
They can also help by making sure NHS remote access systems are fully patched and advising on sensible security systems and approved solutions. They can identify critical OT and legacy systems and advise on their security.
The NCSC should continue working with the NHS to enhance the provision of comprehensive public guidance on cyber defence and response to potential attacks. This would show they are on top of the situation, projecting confidence and reassurance.
It is often said in every crisis lies an opportunity. This is an opportunity for the UK government to show agility in how it deals with cyber threats and how it cooperates with the private sector in creating cyber resilience.
It is an opportunity to lead a much-needed cultural change showing cybersecurity should never be an afterthought.
1 April 2020
The COVID-19 pandemic has highlighted the potential of complex systems modelling for policymaking but it is crucial to also understand its limitations.
Complex systems models have played a significant role in informing and shaping the public health measures adopted by governments in the context of the COVID-19 pandemic. For instance, modelling carried out by a team at Imperial College London is widely reported to have driven the approach in the UK from a strategy of mitigation to one of suppression.
Complex systems modelling will increasingly feed into policymaking by predicting a range of potential correlations, results and outcomes based on a set of parameters, assumptions, data and pre-defined interactions. It is already instrumental in developing risk mitigation and resilience measures to address and prepare for existential crises such as pandemics, prospects of a nuclear war, as well as climate change.
In the end, model-driven approaches must stand up to the test of real-life data, and modelling for policymaking must take into account a number of caveats and limitations. Models are developed to help answer specific questions, and their predictions will depend on the hypotheses and definitions set by the modellers, which are subject to their individual and collective biases and assumptions. For instance, the models developed by Imperial College came with the caveat that they assumed a policy of social distancing for people over 70 would have a 75 per cent compliance rate. This assumption is based on the modellers’ own perceptions of demographics and society, and may not reflect all societal factors that could affect this compliance rate in real life, such as gender, age, ethnicity, genetic diversity, economic stability, as well as access to food, supplies and healthcare. This is why modelling benefits from a cognitively diverse team who bring a wide range of knowledge and understanding to the early creation of a model.
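To make this concrete, the sketch below is a purely illustrative toy model, not the Imperial College model: a minimal SIR-style simulation in Python in which a single hypothetical behavioural parameter, the compliance rate among over-70s, shifts the predicted epidemic peak. Every parameter value used here (transmission rate, recovery rate, population share of over-70s, effect of distancing) is an assumption chosen for illustration only.

```python
# Illustrative sketch only: a toy SIR-type model (not the Imperial College model)
# showing how one behavioural assumption - the compliance rate with social
# distancing among over-70s - changes the predicted epidemic peak.
# All parameter values below are hypothetical.

def run_sir(compliance, days=200, population=1_000_000, beta=0.3, gamma=0.1,
            over70_share=0.18, distancing_effect=0.6):
    """Return the peak number of simultaneous infections for a given compliance rate."""
    s, i, r = population - 1, 1, 0
    peak = i
    # Contact reduction: only the over-70s who comply reduce their contacts.
    effective_beta = beta * (1 - distancing_effect * over70_share * compliance)
    for _ in range(days):
        new_infections = effective_beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

for compliance in (0.50, 0.75, 0.95):
    print(f"compliance {compliance:.0%}: peak infections = {run_sir(compliance):,.0f}")
```

Running the sketch shows the predicted peak shifting as the compliance assumption moves from 50 to 95 per cent, which is precisely why such assumptions need to be made explicit to decision-makers.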
Machine learning, a branch of artificial intelligence (AI), has the potential to advance the capacity and accuracy of modelling techniques by identifying new patterns and interactions, and overcoming some of the limitations resulting from human assumptions and bias. Yet, increasing reliance on these techniques raises the issue of explainability. Policymakers need to be fully aware and understand the model, assumptions and input data behind any predictions and must be able to communicate this aspect of modelling in order to uphold democratic accountability and transparency in public decision-making.
In addition, models using machine learning techniques require extensive amounts of data, which must also be of high quality and as free from bias as possible to ensure accuracy and address the issues at stake. Although technology may be used in the process (i.e. automated extraction and processing of information with big data), data is ultimately created, collected, aggregated and analysed by and for human users. Datasets will reflect the individual and collective biases and assumptions of those creating, collecting, processing and analysing this data. Algorithmic bias is inevitable, and it is essential that policy- and decision-makers are fully aware of how reliable the systems are, as well as their potential social implications.
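To illustrate the point about skewed data in the simplest possible terms, the sketch below uses entirely hypothetical numbers rather than a real policy dataset or a real machine learning pipeline: a single pooled average ‘fitted’ to a sample that over-represents one group systematically mis-estimates the under-represented group.

```python
# Purely illustrative sketch (hypothetical data, not a real policy dataset):
# a trivial "model" fitted to a skewed sample systematically mis-estimates
# the group that is under-represented in the training data.

import random

rng = random.Random(42)

# True average daily contacts differ between two (hypothetical) groups.
TRUE_MEAN = {"urban": 12.0, "rural": 6.0}

def sample(group, n):
    return [rng.gauss(TRUE_MEAN[group], 2.0) for _ in range(n)]

# Training data over-represents the urban group (90% of records).
training = sample("urban", 900) + sample("rural", 100)
fitted_mean = sum(training) / len(training)   # the "model": one pooled average

for group in ("urban", "rural"):
    error = fitted_mean - TRUE_MEAN[group]
    print(f"{group:5s}: true mean {TRUE_MEAN[group]:4.1f}, "
          f"model estimate {fitted_mean:4.1f}, bias {error:+.1f}")
```

The same logic scales up: however sophisticated the learning algorithm, predictions inherit whatever imbalance sits in the data used to train it.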
Increasing use of emerging technologies for data- and evidence-based policymaking is taking place, paradoxically, in an era of growing mistrust towards expertise and experts, as infamously summed up by Michael Gove. Policymakers and subject-matter experts have faced increased public scrutiny of their findings and the resultant policies that they have been used to justify.
This distrust and scepticism within public discourse has only been fuelled by an ever-increasing availability of diffuse sources of information, not all of which are verifiable and robust. This has caused tension between experts, policymakers and the public, leading to conflict and uncertainty over what data and predictions can be trusted, and to what degree. This dynamic is exacerbated when considering that certain individuals may purposefully misappropriate, or simply misinterpret, data to support their argument or policies. Politicians are presently considered the least trusted professionals by the UK public, highlighting the importance of better and more effective communication between the scientific community, policymakers and the populations affected by policy decisions.
While measures can and should be built in to improve the transparency and robustness of scientific models in order to counteract these common criticisms, it is important to acknowledge that there are limitations to the steps that can be taken. This is particularly the case when dealing with predictions of future events, which inherently involve degrees of uncertainty that cannot be fully accounted for by human or machine. As a result, if not carefully considered and communicated, the increased use of complex modelling in policymaking holds the potential to undermine and obfuscate the policymaking process, which may contribute towards significant mistakes being made, increased uncertainty, lack of trust in the models and in the political process and further disaffection of citizens.
The potential contribution of complexity modelling to the work of policymakers is undeniable. However, it is imperative to appreciate the inner workings and limitations of these models, such as the biases that underpin their functioning and the uncertainties that they will not be fully capable of accounting for, in spite of their immense power. They must be tested against the data, again and again, as new information becomes available; otherwise there is a risk of scientific models becoming embroiled in partisan politicization and potentially weaponized for political purposes. It is therefore important not to consider these models as oracles, but instead as one of many contributions to the process of policymaking.
9 March 2020
How would the states of the Gulf Cooperation Council (GCC) respond to a serious cyber incident? This could be a global ransomware event, a critical infrastructure incident targeted at the energy sector, or an attack on government departments. This paper examines cyber resilience in the states of the GCC.
Summary
Research Event
Chatham House
Konstantinos Komaitis, Senior Director, Policy Development & Strategy, Internet Society
Gregory Asmolov, Leverhulme Early Career Fellow, Russia Institute, King’s College London
Further speakers to be announced.
Chair: Joyce Hakmeh, Senior Research Fellow, International Security Programme, Chatham House and Co-Editor of the Journal of Cyber Policy.
Several governments have been moving towards a stronger sovereignty narrative when it comes to the internet, with some trying to extend their physical borders into cyberspace. From attempts to create isolatable domestic internets to data localization laws and increased calls for sovereignty in the digital space, these approaches are raising concerns about the fate of the internet.
While the impact of these approaches varies and the motivations behind them are arguably different too, all governments have been working towards the pursuit of greater technological independence and in some instances greater control.
The panellists will discuss the impact that these approaches have on the internet. They will address whether the era of an 'open web' is drawing to an end and whether these territorialization efforts will lead to a fragmentation of the internet, making a 'splinternet' inevitable.
This event is being organized with the kind support of DXC Technology.
This event will be followed by a reception.
PLEASE NOTE THIS EVENT IS POSTPONED UNTIL FURTHER NOTICE.
This programme of work addresses the conundrum of nuclear weapons as a wicked problem in a complex adaptive system.
Understanding the complexity and wickedness of the situation allows analysts and strategic planners to approach these intractable issues in new and transformative ways – with a better chance of coping or succeeding, and of reducing the divisions between experts.
Using complexity theory, a complex adaptive system capturing the international system and its interaction with its environment can be represented through an interactive visualization tool that will aid thought processes and policy decision-making.
Until recently, analysts did not have the tools to be able to create models that could represent the complexity of the international system and the role that nuclear weapons play. Now that these tools are available, analysts should use them to enable decision-makers to gain insights into the range of possible outcomes from a set of possible actions.
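By way of illustration only, and not as a description of the programme's actual visualization tool, the Python sketch below shows how even a minimal Monte Carlo model can map a set of stylized actions onto a distribution of outcomes; every actor, action and probability in it is hypothetical.

```python
# Purely illustrative sketch, not the programme's visualization tool: a minimal
# Monte Carlo exploration of how a set of possible actions maps onto a range of
# outcomes in a stylized multi-actor system. All states, actions and
# probabilities are hypothetical.

import random
import statistics

ACTIONS = ["maintain arsenal", "modernize", "negotiate reductions"]

def simulate(action, n_states=9, steps=50, rng=None):
    """Return a crude 'tension index' after simulating interacting states."""
    rng = rng or random.Random()
    tension = 0.5  # start mid-scale between 0 (stable) and 1 (crisis)
    for _ in range(steps):
        for _ in range(n_states):
            if action == "negotiate reductions":
                drift = rng.uniform(-0.02, 0.01)   # pressure tends downwards
            elif action == "modernize":
                drift = rng.uniform(-0.01, 0.02)   # pressure tends upwards
            else:
                drift = rng.uniform(-0.015, 0.015) # roughly neutral
            tension = min(1.0, max(0.0, tension + drift))
    return tension

for action in ACTIONS:
    outcomes = [simulate(action, rng=random.Random(seed)) for seed in range(200)]
    print(f"{action:22s} median={statistics.median(outcomes):.2f} "
          f"range=({min(outcomes):.2f}, {max(outcomes):.2f})")
```

The value of such a model lies not in any single run but in the spread of outcomes it reveals for each action, which is the kind of insight an interactive visualization tool can put in front of decision-makers.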
This programme builds on work by Chatham House on cyber security and artificial intelligence (AI) in the nuclear/strategic realms.
In order to approach nuclear weapons as wicked problems in a complex adaptive system from different and sometimes competing perspectives, the programme of work involves the wider community of specialists, who agree neither on what constitutes the problems of nuclear weapons nor on what the desired solutions are.
Different theories of deterrence, restraint and disarmament are tested. The initiative is international and inclusive, paying attention to gender, age and other aspects of diversity, and the network of MacArthur Grantees is given the opportunity to participate in the research, including in the writing of research papers, so that the complexity modelling can be tested against a wide range of approaches and hypotheses.
In addition, a Senior Reference Group will work alongside the programme, challenging its outcome and findings, and evaluating and guiding the direction of the research.
This project is supported by the MacArthur Foundation.
The objective of the project is to understand Alliance obligations within the framework of nuclear non-proliferation and disarmament treaties.
It examines obligations under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and explores new approaches that NATO could adopt to reconcile strategic stability with nuclear disarmament policies, which could be introduced at the 2020 NPT Review Conference (RevCon).
The project facilitates the reconciliation of different positions in advance of the RevCon by providing a platform for stakeholders to communicate their respective positions and engage in constructive dialogue. Key research findings and contemporary analysis will be disseminated to officials and the policy community.
Through dialogue and research, the project aims to reduce polarisation in the nuclear field and consequently lay a foundation for increased collaboration during the discussions. It also provides a unique opportunity for NATO countries to explore specific new approaches, including in relation to identifying and analysing relevant geopolitical conditions for nuclear disarmament measures that will inform their inputs into the RevCon and related policy framework discussions.
This project is supported by the Carnegie Corporation of New York.
The aim of this project is to provide a space to explore creative/disruptive ideas in order to make headway on perspectives concerning deterrence. This will encourage ‘responsible disruption’ in the nuclear field.
Concerns about transatlantic security are high following the US 2018 Nuclear Posture Review and its interpretation of the Russian doctrine, the demise of the Intermediate Range Nuclear Forces Treaty (INF), the uncertainty surrounding the potential extension of the New Strategic Arms Reduction Treaty (New START), and Russian deployment of Avangard hypersonic, nuclear-capable missile systems.
Emerging technologies, especially quantum technologies, jeopardize the reliability of existing encryption measures. Some of the most sophisticated cyber attacks are already assisted by artificial intelligence. The possibility that nuclear weapons systems can be interfered with by these technologies, both during conflict and peacetime, without the knowledge of the possessor state, raises questions about the reliability and integrity of these systems, with implications for military decision-making and particularly for deterrence policy.
These issues and more indicate the changes in the security landscape that have a bearing on the future of nuclear deterrence.
This project is supported by the Hiroshima Prefecture and Government of Ireland.
This project aims to improve resilience in NATO’s nuclear weapons systems against cyber threats.
Cyber security is a vital part of the national and international strategic infrastructure and weapons systems. The increasing cyber capabilities of countries such as China, Russia and North Korea put the North Atlantic Treaty Organization’s (NATO’s) nuclear systems - capabilities that include nuclear command, control and communication, weapons systems and early warning systems - in danger.
There is an urgent need to study and address cyber challenges to nuclear assets within NATO and in key NATO countries. Greater awareness of the potential threats and vulnerabilities is key to improving preparedness and mitigating the risks of a cyber-attack on NATO nuclear weapons systems.
Chatham House produces research responding to the need for information on enhancing cybersecurity for command, control and communications. This project constitutes the second phase of work following Cyber Security of Nuclear Weapons Systems: Threats, Vulnerabilities and Consequences, a report published in January 2018 in partnership with the Stanley Foundation.
The project responds to the need both for more public information on cyber risks in NATO’s nuclear mission, and to provide policy-driven research to shape and inform nuclear policy within NATO member states and the Nuclear Planning Group.
This project is supported by the Ploughshares Fund and the Stanley Foundation.
Invitation Only Research Event
Muscat, Oman
The GCC states have invested significantly in cybersecurity and have made large strides in protecting governments, businesses and individuals from cyber threats, with the aim of delivering on their ambitious national strategies and future visions. However, several challenges to cybersecurity and cyber resilience in the region persist, putting those ambitious plans at risk.
These challenges include the uneven nature of cybersecurity protections, the incomplete implementation of cybersecurity strategies and regulations, and the issues around international cooperation. Such challenges mean that GCC states need to focus on the more difficult task of cyber resilience, in addition to the simpler initial stages of cybersecurity capacity-building, to ensure they harness the true potential of digital technologies and mitigate associated threats.
Set against this background, this workshop will explore opportunities and challenges to cyber resilience in the GCC focusing on four main pillars:
1. Cyber resilience: in concept and in practice
2. Building an effective cybersecurity capacity
3. The potential contribution of regional and international cooperation to cyber resilience
4. Deterrence and disruption: different approaches
This event will be held in collaboration with the Arab Regional Cybersecurity Centre (ARCC) and OMAN CERT.
PLEASE NOTE THIS EVENT IS POSTPONED UNTIL FURTHER NOTICE.
Invitation Only Research Event
Smart Peace brings together global expertise in conflict analysis and research, peacebuilding and mediation programming, and behavioural science and evaluation. Together, Smart Peace partners are developing integrated and adaptive peace initiatives, working with local partners to prevent and resolve complex and intractable conflicts in Central African Republic, Myanmar and northern Nigeria.
This roundtable is an opportunity for Smart Peace partners to share the Smart Peace concept, approach and objectives, and experiences of the first phases of programme implementation. Roundtable discussions among participants from policy, practice and research communities will inform future priorities and planning for Smart Peace learning, advocacy and communication.
Smart Peace partners include Conciliation Resources, Behavioural Insights Team, The Centre for Humanitarian Dialogue, Chatham House, ETH Zurich, International Crisis Group and The Asia Foundation.
Ana Alecsandru is a research assistant for the International Security programme, covering projects related to nuclear weapons policy and emerging technologies. She is also a PhD candidate at the University of Birmingham, awaiting her viva examination.
Her doctoral research examined the relationship between trust and verification in nuclear arms control negotiations between the United States and Russia.
Prior to joining Chatham House, she worked at the University of Birmingham on various projects concerning nuclear weapons policy while doing her PhD.
Ana completed an internship in the Arms Control, Disarmament, and WMD Non-Proliferation Centre at NATO HQ in Brussels in 2014. She was also a research intern at the United Nations Office for Disarmament Affairs (UNODA) in New York in 2016.
During her doctoral studies, she received full grants to participate in the 2017 IGCC’s Public Policy and Nuclear Threats Boot Camp hosted at UC San Diego and the 2017 Nuclear Safeguards and Non-Proliferation Training Course hosted by the European Commission’s Research Centre in Ispra.
Ana holds an MA in Security Studies and an MA in Research Methods from the University of Birmingham. She completed her BSc (hons) in International Relations at the University of Bath. For her doctoral research, she was awarded a studentship by the UK Economic and Social Research Council.
Invitation Only Research Event
Chatham House, London
In April 2018, the Commonwealth Heads of Government Meeting (CHOGM), held in London, saw the creation and adoption of the Commonwealth Cyber Declaration. The declaration outlines the framework for a concerted effort to advance cybersecurity practices to promote a safe and prosperous cyberspace for Commonwealth citizens, businesses and societies.
The conference will aim to provide an overview on the progress made on cybersecurity in the Commonwealth since the declaration was announced in 2018. In addition, it will examine future challenges and potential solutions going forward.
This conference is part of the International Security Programme's project on Implementing the Commonwealth Cybersecurity Agenda and will convene a range of senior Commonwealth representatives as well as a selection of civil society and industry stakeholders. This project aims to develop a pan-Commonwealth platform to take the Commonwealth Cyber Declaration forward by means of a holistic, inclusive and representative approach.
Please see below meeting summaries from previous events on Cybersecurity in the Commonwealth:
Attendance at this event is by invitation only.
Invitation Only Research Event
Chatham House
Dr Joseph Pilat, Los Alamos National Laboratory and Woodrow Wilson International Center for Scholars
Chair: Dr Patricia Lewis, Research Director, International Security Programme, Chatham House
In the late 1980s, with the 1995 decision on the future of the nuclear Non-Proliferation Treaty (NPT) looming, Joseph Pilat wrote an essay, 'A World without the NPT?', which was published in The International Nuclear Non-Proliferation Regime in the 1990s, edited by John Simpson (Cambridge, England: Cambridge University Press, 1986). In this piece, the speaker attempted to think through the effects that a limited extension, or no agreement on extension, would have on the treaty and regime, on nuclear non-proliferation, arms control and energy, and on the broader geopolitical landscape. The purpose was not prediction, but a cautionary tale about the value of the treaty.
Now, nearly 25 years after indefinite extension and 50 years after the NPT's entry into force, the treaty and the regime are facing serious challenges. In this roundtable meeting, the speaker will revisit the questions he addressed thirty years ago.
16 January 2020
Change was slow to come but progress has since been swift. Not only can a continuing focus on inclusivity benefit service people and the organization, it is also an essential element of a values-based foreign policy.
The new UK government will conduct a review of foreign, security and defence policy in 2020. If the UK decides to use values as a framework for foreign policy, this needs to be reflected in its armed forces. One area where this is essential is continuing to deepen inclusivity for LGBTIQ+ personnel, building on the progress made since the ban on their service was lifted in 2000.
I witnessed the ban first-hand as a young officer in the British Army in 1998. As the duty officer I visited soldiers being held in the regimental detention cells to check all was well. One day a corporal, who I knew, was there awaiting discharge from the army having been convicted of being gay. On the one hand, here was service law in action, which was officially protecting the army’s operational effectiveness and an authority not to be questioned at my level. On the other, here was an excellent soldier in a state of turmoil and public humiliation. How extreme this seems now.
On 12 January 2000 Tony Blair’s Labour government announced an immediate lifting of the ban for lesbian, gay and bisexual personnel (LGB) and introduced a new code of conduct for personal relationships. (LGB is the term used by the armed forces to describe those personnel who had been banned prior to 2000.) This followed a landmark ruling in a case taken to the European Court of Human Rights in 1999 by four LGB ex-service personnel – supported by Stonewall – who had been dismissed from service for their sexuality.
Up to that point the Ministry of Defence's long-held position had been that LGB personnel had a negative impact on the morale and cohesion of a unit and damaged operational effectiveness. Service personnel were automatically dismissed if it was discovered they were LGB, even though homosexuality had been decriminalized in England and Wales in 1967.
Proof that the armed forces had been lagging behind the rest of society was confirmed by the positive response to the change among service personnel, despite a handful of vocal political and military leaders who foresaw negative impacts. The noteworthy service of LGBTIQ+ people in Iraq and Afghanistan only served to debunk any residual myths.
Twenty years on, considerable progress has been made and my memories from 1998 now seem alien. This is a story to celebrate – however in the quest for greater inclusivity there is always room for improvement.
Defence Minister Johnny Mercer last week apologized following recent calls from campaign group Liberty for a fuller apology. In December 2019, the Ministry of Defence announced it was putting in place a scheme to return medals stripped from veterans upon their discharge.
The armed forces today have a range of inclusivity measures to improve workplace culture including assessments of workplace climate and diversity networks supported by champions drawn from senior leadership.
But assessing the actual lived experience for LGBTIQ+ people is challenging due to its subjectivity. This has not been helped by low participation in the 2015 initiative to encourage people to declare confidentially their sexual orientation, designed to facilitate more focused and relevant policies. As of 1 October 2019, only 20.3 per cent of regular service people had declared a sexual orientation.
A measure of positive progress is the annual Stonewall Workplace Equality Index, the definitive benchmarking tool for employers to measure their progress on LGBTIQ+ inclusion in the workplace; 2015 marked the first year in which all three services were placed in the top 100 employers in the UK, and in 2019 the Royal Navy, British Army and Royal Air Force were placed joint 15th, joint 51st and 68th respectively.
Nevertheless, LGBTIQ+ service people and those in other protected groups still face challenges. The 2019 Ministry of Defence review of inappropriate behaviour in the armed forces, the Wigston Report, concluded there is an unacceptable level of sexual harassment, bullying and discrimination. It found that 26 to 36 per cent of LGBTIQ+ service people have experienced negative comments or conduct at work because of their sexual orientation.
The Secretary of State for Defence accepted the report’s 36 recommendations on culture, incident reporting, training and a more effective complaints system. Pivotal to successful implementation will be a coherent strategy driven by fully engaged leaders.
Society is also expecting ever higher standards, particularly in public bodies. The armed forces emphasise their values and standards, including ‘respect for others’, as defining organisational characteristics; individuals are expected to live by them. Only in a genuinely inclusive environment can an individual thrive and operate confidently within a team.
The armed forces also recognize as a priority the need to connect to and reflect society more closely in order to attract and retain talent from across all of society. The armed forces’ active participation in UK Pride is helping to break down barriers in this area.
In a post-Brexit world, the UK’s values, support for human rights and reputation for fairness are distinctive strengths that can have an impact on the world stage and offer a framework for future policy. The armed forces must continue to push for and promote greater inclusivity in support of this. When operating overseas with less liberal regimes, this will be sensitive and require careful handling; however, it will be an overt manifestation of a broader policy and a way to communicate strong and consistent values over time.
The armed forces were damagingly behind the times 20 years ago. But good progress has been made since. Inclusion initiatives must continue to be pushed to bring benefits to the individual and the organization as well as demonstrate a values-based foreign policy.
Will Davies is the Army Chief of General Staff Research Fellow in the International Security programme. He was commissioned into the British Army in 1996 and has deployed to Bosnia, Kosovo, Iraq and Afghanistan in tank and reconnaissance units and latterly as an advisor.
He recently returned from the Kurdistan Region of Iraq as the UK’s advisor to the regional government’s Peshmerga reform programme. In 2015 he helped change defence policy to enable women to serve in combat roles including the infantry.
Will’s research focus at Chatham House is on armed forces’ overseas engagement.
2018-19 | Special Defence Advisor to Ministry of Peshmerga Affairs, Kurdistan Region of Iraq
2015-16 | Women in Ground Close Combat, Deputy Team Leader
2012-15 | Commanding Officer, 1st The Queen’s Dragoon Guards (recce regiment)
2008-14 | Three deployments to Helmand Province, Afghanistan with British Army
2005 | Masters in Defence Administration, Cranfield University
2003 | Deployment to Iraq with British Army
1996-99 | Deployments to Bosnia and Kosovo with British Army
1995 | MA(Edin) Geography, University of Edinburgh
7 January 2020
Targeting cultural property is rightly prohibited under the 1954 Hague Convention.
As tensions escalate in the Middle East, US President Donald Trump has threatened to strike targets in Iran should it seek to retaliate over the killing of Qassem Soleimani. According to the president’s tweet, these sites include those that are ‘important to Iran and Iranian culture’.
Defense Secretary Mark Esper was quick on Monday to rule out any such action and acknowledged that the US would ‘follow the laws of armed conflict’. But Trump has not since commented further on the matter.
Any move to target Iranian cultural heritage could constitute a breach of the international laws protecting cultural property. Attacks on cultural sites are deemed unlawful under two international conventions: the 1954 Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict, and the 1972 UNESCO Convention Concerning the Protection of the World Cultural and Natural Heritage (the World Heritage Convention).
Building on these, deliberate attacks on cultural heritage (when not militarily necessary) have been established as a war crime under the Rome Statute of the International Criminal Court, in recognition of the irreparable damage that the loss of cultural heritage can have locally, regionally and globally.
These conventions were established in the aftermath of the Second World War, in reaction to the legacy of the massive destruction of cultural property that took place, including the intense bombing of cities and the systematic plunder of artworks across Europe. The conventions recognize that damage to the cultural property of any people means ‘damage to the cultural heritage of all mankind’. The intention is to establish a norm whereby protecting culture and history, including cultural and historical property, is as important as safeguarding people.
Such historical sites are important not simply as a matter of buildings and statues, but rather for their symbolic significance in a people’s history and identity. Destroying cultural artefacts is a direct attack on the identity of the population that values them, erasing their memories and historical legacy. Following the heavy bombing of Dresden during the Second World War, one resident summed up the psychological impact of such destruction in observing that ‘you expect people to die, but you don’t expect the buildings to die’.
Targeting sites of cultural significance isn’t just an act of intimidation during conflict. It can also have a lasting effect far beyond the cessation of violence, hampering post-conflict reconciliation and reconstruction, where ruins or the absence of previously significant cultural monuments act as a lasting physical reminder of hostilities.
For example, during the Bosnian War in the 1990s, the Old Bridge in Mostar represented a symbol of centuries of shared cultural heritage and peaceful co-existence between the town’s Bosniak and Croat communities. The bridge’s destruction in 1993, at the height of the war, and the temporary cable bridge which took its place acted as a lasting reminder of the bitter hostilities, prompting its reconstruction a decade later as a mark of the reunification of the ethnically divided town.
More recently, the destruction of cultural property has become a hallmark of terrorist organizations, as in the Taliban’s demolition of the 1,700-year-old Buddhas of Bamiyan in 2001, which elicited international condemnation. Similarly, in Iraq in 2014, following ISIS’s seizure of the city of Mosul, the group set about systematically destroying a number of cultural sites, including the Great Mosque of al-Nuri with its leaning minaret, which had stood since 1172. And in Syria in 2015, ISIS attacked the archaeological sites of the ancient city of Palmyra with bulldozers and explosives.
Such violations go beyond destruction: they include the looting of archaeological sites and the trafficking of cultural objects to finance terrorist activities, practices that are also prohibited under the 1954 Hague Convention.
As a war crime, the destruction of cultural property has been successfully prosecuted in the International Criminal Court, which sentenced Ahmad Al-Faqi Al-Mahdi to nine years in jail in 2016 for his part in the destruction of the Timbuktu mausoleums in Mali. Mahdi led members of Al-Qaeda in the Islamic Maghreb to destroy mausoleums and monuments of cultural and religious importance in Timbuktu, irreversibly erasing what the chief prosecutor described as ‘the embodiment of Malian history captured in tangible form from an era long gone’.
Targeting cultural property is prohibited under customary international humanitarian law, not only by the Hague Convention. But the Convention sets out detailed regulations for the protection of such property, and some states have taken decades to give effect to them.
Although the UK was an original signatory to the 1954 Hague Convention, it did not ratify it until 2017, introducing into law the Cultural Property (Armed Conflicts) Act 2017, and setting up the Cultural Protection Fund to safeguard heritage of international importance threatened by conflict in countries across the Middle East and North Africa.
Ostensibly, the UK’s delay in ratifying the convention lay in concerns over the definition of key terms and adequate criminal sanctions, which were addressed in the Second Protocol in 1999. However, changing social attitudes towards the plunder of antiquities, and an alarming increase in the use of cultural destruction as a weapon of war by extremist groups to eliminate cultures that do not align with their own ideology, eventually compelled the UK to act.
In the US, it is notoriously difficult to get the necessary majority for the approval of any treaty in the Senate; for the Hague Convention, approval was achieved in 2008, following which the US ratified the Convention in 2009.
Destroying the buildings and monuments which form the common heritage of humanity is to wipe out the physical record of who we are. People are people within a place, and they draw meaning about who they are from their surroundings. Religious buildings, historical sites, works of art, monuments and historic artefacts all tell the story of who we are and how we got here. We have a responsibility to protect them.
2 December 2019
The fallout from disinformation and online manipulation strategies has alerted Western democracies to the novel, nuanced vulnerabilities of our information society. This paper outlines the implications of the adoption of AI by legacy media, as well as by new media, focusing on personalization.
Summary
14 November 2019
As internet governance issues emerge in the wake of innovations such as the Internet of Things and advanced artificial intelligence, there is an urgent need for the EU and US to establish a common, positive multi-stakeholder vision for regulating and governing the internet.
Research Event
Chatham House, London
Andrew Sullivan, President and CEO, Internet Society
Jennifer Cobbe, Research Associate, Department of Computer Science and Technology, University of Cambridge
Jesse Sowell, Assistant Professor, Department of International Affairs, Bush School of Government and Public Service, Texas A&M University
Chair: Emily Taylor, Associate Fellow, International Security, Chatham House, Editor, Journal of Cyber Policy
In recent years, there has been a growing debate around the influence of a few large internet technology companies on the internet’s infrastructure and over the popular applications and social media platforms that we use every day.
The internet, once widely viewed as a collective platform for limitless, permissionless innovation, competition and growth, is now increasingly viewed as a consolidated environment dominated by a few. Such market dominance threatens to undermine the internet’s fundamental benefits as a distributed network in which no single entity has control.
The panel examines the risks of consolidation throughout the internet’s technology stack, such as the impact on the complex supply chains that support applications, including cloud provision ‘as a service’.
It also explores the potential benefits: for example, when building out essential infrastructure to support faster and cheaper internet services in developing economies, consolidation can create economies of scale that bring the resource-intensive building blocks of the internet economy within the reach of new start-ups and innovators.
The panel provides an interdisciplinary perspective, exploring the relationship between consolidation and the evolution of the internet’s infrastructure, as well as unpacking its policy implications.
This event supports a special issue of the Journal of Cyber Policy, part of a collaboration between Chatham House and the Internet Society, which explores the impact of consolidation on the internet’s fundamental architecture.
3 October 2019
Disinformation, as the latest iteration of propaganda suitable for a digitally interconnected world, shows no signs of abating. This paper provides a holistic overview of the current state of play and outlines how EU and US cooperation can mitigate disinformation in the future.
Research Event
Chatham House, London
Rt Hon Baroness Neville-Jones DCMG, Minister of State for Security and Counter Terrorism (2010-11)
Jamie Condliffe, Editor, DealBook Newsletter and Writer, Bits Tech Newsletter, The New York Times
Jamie Saunders, Partner, Wychwood Partners LLP; Visiting Professor, University College London
Chair: Dr Patricia Lewis, Research Director, International Security Department, Chatham House
New technologies such as 5G, artificial intelligence, nanotechnology and robotics have become, now more than ever, intertwined with geopolitical, economic and trade interests. Leading powers are using new technology to exert power and influence and to shape geopolitics more generally.
The ongoing race between the US and China around 5G technology is a case in point. Amid these tensions, the impact on developing countries is not sufficiently addressed.
Arguably, the existing digital divide will widen, leading developing countries to the early, if not hasty, adoption of new technology for fear of lagging behind. This could create opportunities but will also pose risks.
This panel discusses how new technology is changing the geopolitical landscape. It also discusses the role that stakeholders, including governments, play in the creation of standards for new technologies, and what that means for their deployment in key markets, both technically and financially.
Finally, the panel looks at the issue from the perspective of developing countries, addressing the choices that have to be made in terms of affordability, development priorities and security concerns.
This event was organized with the kind support of DXC Technology.
With the number of violent conflicts increasing, there is a worldwide need to respond more effectively. Dialogue and mediation have proven effective in preventing and resolving conflicts, which are often complex, political and fast-changing.
But there is more to be done to understand how these approaches can adapt – responding quickly to changing politics and overcoming obstacles that block progress.
Smart Peace is a global initiative led by Conciliation Resources, which combines the varied expertise of different consortium partners to address the challenges of building peace – focusing on the Central African Republic, Nigeria and Myanmar.
This work combines peacebuilding techniques, conflict analysis, rigorous evaluation and behavioural insights. The resulting lessons will help communities, international organisations and governments to implement peace strategies with greater confidence.
This project is funded with UK aid from the UK government.
9 September 2019
Emily Taylor examines the controversy around the Chinese tech giant’s mobile broadband equipment and the different approaches taken by Western countries. As countries move towards the fifth generation of mobile broadband, 5G, the United States has been loudly calling out Huawei as a security threat. It has employed alarmist rhetoric and threatened to limit trade and intelligence sharing with close allies that use Huawei in their 5G infrastructure.
While some countries such as Australia have adopted a hard line against Huawei, others like the UK have been more circumspect, arguing that the risks of using the firm’s technology can be mitigated without forgoing the benefits.
So, who is right, and why have these close allies taken such different approaches?
Long-standing concerns relating to Huawei are plausible. There are credible allegations that it has benefitted from stolen intellectual property, and that it could not thrive without a close relationship with the Chinese state.
Huawei hotly denies allegations that users are at risk of its technology being used for state espionage, and says it would resist any order to share information with the Chinese government. But there are questions over whether it could really resist China’s stringent domestic legislation, which compels companies to share data with the government. And given China’s track record of using cyberattacks to conduct intellectual property theft, there may be added risks of embedding a Chinese provider into critical communications infrastructure.
In addition, China’s rise as a global technological superpower has been boosted by flows of financial capital – government subsidies, venture capital and private equity – that reveal murky boundaries between the state and the private sector for domestic darlings. Meanwhile, the Belt and Road Initiative has seen generous investment by China in technology infrastructure across Africa, South America and Asia.
There’s no such thing as a free lunch or a free network – as Sri Lanka discovered when China assumed shares in a strategic port in return for debt forgiveness; or Mexico when a 1% interest loan for its 4G network came on the condition that 80% of the funding was spent with Huawei.
Aside from intelligence and geopolitical concerns, the quality of Huawei’s products represents a significant cyber risk, one that has received less attention than it deserves.
On top of that, 5G will by itself significantly expand the threat landscape from a cybersecurity perspective. The network layer will be more intelligent and adaptable through the use of software and cloud services. The number of network antennae will increase by a factor of 20, and many will be poorly secured ‘things’; there is no need for a backdoor if you have any number of ‘bug doors’.
Finally, the US is threatening to limit intelligence sharing with its closest allies if they adopt Huawei. So why would any country even consider using Huawei in their 5G infrastructure?
The truth is that not every country is free to manoeuvre; 5G technology will sit on top of existing mobile infrastructure.
Australia and the US can afford to take a hard line: their national infrastructure has been largely Huawei-free since 2012. However, the Chinese firm is deeply embedded in other countries’ existing structures – for example, in the UK, Huawei has provided telecommunications infrastructure since 2005. Even if the UK decided tomorrow to ditch Huawei, it cannot just rip up existing 4G infrastructure. To do so would cost a fortune, risk years of delay in the adoption of 5G and limit competition in 5G provisioning.
As a result, the UK has adopted a pragmatic approach resulting from years of oversight and analysis of Huawei equipment, during which it has never found evidence of malicious Chinese state cyber activity through Huawei.
At the heart of this process is the Huawei Cyber Security Evaluation Centre, which was founded in 2010 as a confidence-building measure. Originally criticized for ‘effectively policing itself’, as it was run and staffed entirely by Huawei, the governance has now been strengthened, with the National Cyber Security Centre chairing its oversight board.
The board’s 2019 report makes grim reading, highlighting ‘serious and systematic defects in Huawei’s software engineering and cyber security competence’. But it does not accuse the company of serving as a platform for state-sponsored surveillance.
Similar evidence-based policy approaches are emerging in other countries, such as Norway and Italy. They offer flexibility for governments, for example by limiting access to some contract competitions through legitimate and transparent means, such as security reviews during procurement. These approaches also elevate security concerns (both national and cyber) to a primary consideration when awarding contracts – something that was not always done in the past, when price was the key driver.
The UK is also stressing the need to manage risk and increase vendor diversity in the ecosystem to avoid single points of failure. A further approach that is beginning to emerge is to draw a line between network ‘core’ and ‘periphery’ components, excluding some providers from the more sensitive ‘core’. The limited rollouts of 5G in the UK so far have adopted multi-provider strategies, and only one has reportedly not included Huawei kit.
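To make the ‘core’/‘periphery’ distinction and the vendor-diversity point more concrete, below is a minimal illustrative sketch in Python. The vendor names, risk designations and the core/periphery split are hypothetical assumptions for illustration only; they do not describe any real operator’s network or any official assessment methodology.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Component:
    name: str     # e.g. 'packet core' or 'RAN site'
    zone: str     # 'core' (sensitive) or 'periphery'
    vendor: str

HIGH_RISK_VENDORS = {"VendorX"}  # hypothetical risk designation, for illustration only

def review(components):
    findings = []
    # Rule 1: exclude designated high-risk vendors from the sensitive 'core'.
    for c in components:
        if c.zone == "core" and c.vendor in HIGH_RISK_VENDORS:
            findings.append(f"High-risk vendor {c.vendor} supplies core component '{c.name}'")
    # Rule 2: avoid single points of failure by requiring vendor diversity in each zone.
    for zone in ("core", "periphery"):
        vendors = Counter(c.vendor for c in components if c.zone == zone)
        if len(vendors) == 1:
            findings.append(f"Zone '{zone}' relies on a single vendor: {next(iter(vendors))}")
    return findings

network = [
    Component("packet core", "core", "VendorX"),
    Component("subscriber database", "core", "VendorX"),
    Component("RAN site 1", "periphery", "VendorA"),
    Component("RAN site 2", "periphery", "VendorB"),
]
for finding in review(network):
    print(finding)

In this toy review, a component supplied by a designated high-risk vendor in the sensitive ‘core’ is flagged, as is any zone that depends on a single supplier – the kind of single point of failure the UK approach seeks to avoid.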
Managing the risks to cyber security and national security will become more complex in a 5G environment. In global supply chains, bans based on the nationality of the provider offer little assurance. For countries that have already committed to Huawei, and that may not wish to be drawn into an outright trade war with China, these moderate approaches offer a potential way forward.
Invitation Only Research Event
Chatham House | 10 St James's Square | London | SW1Y 4LE
Beyza Unal, Senior Research Fellow, International Security Department, Chatham House
Patricia Lewis, Research Director, International Security Department, Chatham House
Strategic systems that depend on space-based assets, such as command, control and communication, early warning systems, weapons systems and weapons platforms, are essential for conducting successful NATO operations and missions. Given the increasing dependency on such systems, the alliance and key member states would therefore benefit from an in-depth analysis of possible mitigation and resilience measures.
This workshop is part of the International Security Department’s (ISD) project on space security and the vulnerability of strategic assets to cyberattacks, which includes a recently published report. This project aims to create resilience in NATO and key NATO member states, building the capacity of key policymakers and stakeholders to respond with effective policies and procedures. This workshop will focus on measures to mitigate the cyber vulnerabilities of NATO’s space-dependent strategic assets. Moreover, participants will discuss the type of resilience measures and mechanisms required.
Attendance at this event is by invitation only.
29 August 2019
As more space activities develop, there is an increasing requirement for comprehensive space situational awareness (SSA). This paper provides an overview of the current landscape in SSA and space traffic management as well as possible scenarios for EU–US cooperation in this area.
Summary
8 August 2019
The military importance of AI-connected brain–machine interfaces is growing. Steps must be taken to ensure human control at all times over these technologies. Technological progress in neurotechnology and its military use is proceeding apace. Brain-machine interfaces have been the subject of study since as early as the 1970s. By 2014, the UK’s Ministry of Defence was arguing that the development of artificial devices, such as artificial limbs, is ‘likely to see refinement of control to provide… new ways to connect the able-bodied to machines and computers.’ Today, brain-machine interface technology is being investigated around the world, including in Russia, China and South Korea.
Recent developments in the private sector are producing exciting new capabilities for people with disabilities and medical conditions. In early July, Elon Musk and Neuralink presented their ‘high-bandwidth’ brain-machine interface system, with small and flexible electrode threads packaged into a small device containing custom chips, to be implanted in the user’s brain for medical purposes.
In the military realm, in 2018, the United States’ Defense Advanced Research Projects Agency (DARPA) put out a call for proposals to investigate the potential of nonsurgical brain-machine interfaces to allow soldiers to ‘interact regularly and intuitively with artificially intelligent, semi-autonomous and autonomous systems in a manner currently not possible with conventional interfaces’. DARPA further highlighted the need for these interfaces to be bidirectional – where information is sent both from brain to machine (neural recording) and from machine to brain (neural stimulation) – which will eventually allow machines and humans to learn from each other.
This technology may provide soldiers and commanders with a superior level of sensory sensitivity and the ability to process a greater amount of data related to their environment at a faster pace, thus enhancing situational awareness. These capabilities will support military decision-making as well as targeting processes.
Neural recording will also enable the collection of a tremendous amount of data from operations, including visuals, real-time thought processes and emotions. These sets of data may be used for feedback and training (including for virtual wargaming and for machine learning training), as well as for investigatory purposes. Collected data will also feed into research that may help researchers understand and predict human intent from brain signals – a tremendous advantage from a military standpoint.
The flip side of these advancements is the responsibilities they will impose, the risks and vulnerabilities of the technology, and the legal and ethical considerations they raise.
The primary risk would be for users to lose control over the technology, especially in a military context; hence a fail-safe feature is critical for humans to maintain ultimate control over decision-making. Despite the potential benefits of symbiosis between humans and AI, users must have the unconditional ability to override these technologies should they believe it appropriate and necessary to do so.
This is important given the significance of human control over targeting, as well as strategic and operational decision-making. An integrated fail-safe in brain-machine interfaces may in fact allow for a greater degree of human control over critical, time-sensitive decision-making. In other words, in the event of an incoming missile alert, while the AI may suggest a specific course of action, users must be able to decide in a timely manner whether or not to execute it.
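As a purely illustrative sketch of such a fail-safe, the following Python fragment shows a human-in-the-loop gate in which an AI-generated suggestion is never executed without explicit, timely human approval, and in which both rejection and silence fall back to a safe default of taking no action. The scenario, names and timeout are assumptions made for illustration; they are not drawn from any real system.

from typing import Callable, Optional

def decide_and_act(ai_suggestion: str,
                   get_human_decision: Callable[[str, float], Optional[bool]],
                   timeout_s: float = 30.0) -> str:
    """Execute the AI suggestion only on explicit, timely human approval.

    get_human_decision returns True (approve), False (reject) or
    None (no answer within timeout_s)."""
    decision = get_human_decision(f"AI suggests: {ai_suggestion}", timeout_s)
    if decision is True:
        return "executed (human-approved)"
    # Rejection *and* silence both fall back to the safe default: do nothing.
    return "no action (fail-safe default)"

# Simulated operator responses, for demonstration only.
approve = lambda prompt, timeout: True
silent = lambda prompt, timeout: None

print(decide_and_act("response option A", approve))  # executed (human-approved)
print(decide_and_act("response option A", silent))   # no action (fail-safe default)

The design point is that the machine’s role stops at recommendation: execution requires a positive human act, and the absence of one is itself treated as a decision to stand down.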
Machines can learn from coded past experiences and decisions, but humans also use gut feelings to make life and death decisions. A gut feeling is a human characteristic that is not completely transferable, as it relies on both rational and emotional traits, and is part of the ‘second brain’ and the gut-brain axis, which is currently poorly understood. It is, however, risky to take decisions solely on gut feelings or solely on primary brain analysis. Receiving a comprehensive set of data via an AI-connected brain-machine interface may therefore help to verify and evaluate information in a timely manner and complement decision-making processes. However, these connections and interactions would have to be much better understood than they are at present.
Fail-safe features are necessary to ensure compliance with the law, including international humanitarian law and international human rights law. As a baseline, human control must be used to 1) define areas where technology may or may not be trusted and to what extent, and 2) ensure legal, political and ethical accountability, responsibility and explainability at all times. Legal and ethical considerations must be taken into account from as early as the design and conceptualizing stage of these technologies, and oversight must be ensured across the entirety of the manufacturing supply chain.
The second point raises the need to further explore and clarify whether existing national, regional and international legal, political and ethical frameworks are sufficient to cover the development and use of these technologies. For instance, there is value in assessing to what extent AI-connected brain-machine interfaces will affect the assessment of the mental element in war crimes and their human rights implications.
In addition, these technologies need to be highly secure and invulnerable to cyber hacks. Neural recording and neural stimulation will directly affect brain processes in humans; if an adversary has the ability to connect to a human brain, steps need to be taken to ensure that memory and personality cannot be damaged.
Military applications of technological progress in neurotechnology are inevitable, and their implications cannot be ignored. There is an urgent need for policymakers to understand fast-developing neurotechnical capabilities and to develop international standards and best practices – and, if necessary, new and dedicated legal instruments – to frame the use of these technologies.
Considering the opportunities that brain-machine interfaces may present in the realms of security and defence, inclusive, multi-stakeholder discussions and negotiations leading to the development of standards must include the following considerations:
9 August 2019
The use of AI in counterterrorism is not inherently wrong, and this paper suggests some necessary conditions for legitimate use of AI as part of a predictive approach to counterterrorism on the part of liberal democratic states.
Summary
Invitation Only Research Event
Addis Ababa, Ethiopia
This roundtable is part of a series under the project, 'Implementing the Commonwealth Cybersecurity Agenda', funded by the UK Foreign and Commonwealth Office (FCO). The roundtable aims to provide a multi-stakeholder, pan-Commonwealth platform to discuss how to implement the Commonwealth Cyber Declaration with a focus on its third pillar 'To promote stability in cyberspace through international cooperation'.
In particular, the roundtable focuses on points 3 and 4 of the third pillar which revolve around the commitment to promote frameworks for stability in cyberspace including the applicability of international law, agreed voluntary norms of responsible state behaviour and the development and implementation of confidence-building measures consistent with the 2015 report of the UNGGE.
The workshop also focuses on the commitment to advance discussions on how existing international law, including the Charter of the United Nations and applicable international humanitarian law, applies in cyberspace.
The roundtable addresses the issue of global cyber governance from a Commonwealth perspective and also includes a discussion of the way forward, the capacity needs of different Commonwealth countries and cooperation between members for better cyber governance.
Participants include UNGGE members from Commonwealth countries in addition to representatives to the UN Open-Ended Working Group from African countries as well as members from academia, civil society and industry.
Dorothy was the founding director general of the Ghana-India Kofi Annan Centre of Excellence in ICT, a position which she held for over a decade.
She works globally as a policy adviser, evaluator, project manager and organizational management consultant.
Over the course of her 30-year career in international development and technology she has held management positions with the UN and global management consulting firms on four continents.
As a strong advocate of the importance of building robust local innovation ecosystems based on open source technologies, she serves on the board and as a mentor to a number of start-ups and NGOs focused on women in tech.
With the increased use of armed drones in recent years, ethical and legal concerns have been raised in regard to civilian casualties, secrecy and lack of transparency and accountability for drone strikes.
This project brings together experts on the use of armed drones, including current and former military officials and representatives of academia, think-tanks and NGOs, to discuss and exchange perspectives based on their different experiences, with the aim of sharing knowledge, increasing understanding of these issues, and informing and providing input into the European debate. The experts explore the issues and controversies surrounding the use of drones outside formal armed conflict and study the broader policy implications in detail, particularly with regard to what this means for the UK and other European countries.
Building on the findings from the workshops, this project will hold a simulation exercise to stress test critical areas of concern around the use of armed drones that are relevant for the UK and other EU member states.
The discussions and the simulation exercise will provide opportunities for policy input on areas of mutual concern and feed into practical policy recommendations on the use of armed drones.
This project builds on previous work on armed drones by the International Security Department and is funded by the Open Society Foundations.
24 July 2019
Cyberattacks are increasingly challenging critical national infrastructure. This paper considers the security by design approach for civil nuclear power plants and analyses areas of risk and opportunities for the nuclear industry.
Summary
2 July 2019
‘Left-of-launch’ attacks that aim to disable enemy missile systems may increase the chance of them being used, not least because the systems are so vulnerable. After President Trump decided to halt a missile attack on Iran in response to the downing of a US drone, it was revealed that the US had conducted cyberattacks on Iranian weapons systems to prevent Iran launching missiles against US assets in the region.
This ‘left-of-launch’ strategy – pre-emptive action to prevent an adversary from launching missiles – has been part of the US missile defence strategy for some time now. President George W Bush asked the US military and intelligence community to infiltrate the supply chain of North Korean missiles, and it was claimed that the US hacked the North Korean ballistic missile programme, causing a failed ballistic missile test in 2012.
It was not clear then – or now – whether these ‘left-of-launch’ cyberattacks aimed at North Korea were successful as described or whether they were primarily a bluff. But that is somewhat irrelevant; belief in the possibility, and understanding of the potential impact, of such cyber capabilities undermine North Korean or Iranian confidence in their ability to launch their missiles. In times of conflict, loss of confidence in weapons systems may lead to escalation.
In other words, the adversary may be left with no option but to take the chance to use these missiles or to lose them in a conflict setting. ‘Left of launch’ is a dangerous game. If it is based on a bluff, the bluff could be called, leading to deterrence failure. If it is based on real action, then it could create an asymmetrical power struggle. If the attacker establishes false confidence in the power of a cyber weapon, then it might lead to false signalling and messaging.
This is the new normal. The cat-and-mouse game has to be taken seriously, not least because missile systems are so vulnerable.
There are several ways an offensive cyber operation against missile systems might work. These include exploiting missile designs, altering software or hardware, or creating clandestine pathways to the missile command and control systems.
Missile systems can also be attacked in space, by targeting space assets and their links to strategic systems.
Most missile systems rely, at least in part, on digital information that comes from or via space-based or space-dependent assets such as: communication satellites; satellites that provide position, navigation and timing (PNT) information (for example GPS or Galileo); weather satellites to help predict flight paths, accurate targeting and launch conditions; and remote imagery satellites to assist with information and intelligence for the planning and targeting.
Missile launches themselves depend on 1) the command and control systems of the missiles, 2) the way in which information is transmitted to the missile launch facilities and 3) the way in which information is transmitted to the missiles themselves in flight. All these aspects rely on space technology.
In addition, the ground stations that transmit and receive data to and from satellites are also vulnerable to cyberattack – either through their known and unknown internet connectivity or through malicious use of flash drives that contain a deliberate cyber infection.
Non-space-based communications systems that use cable and ground-to-air-to-ground masts are likewise under threat from cyberattacks that find their way in via internet connectivity, proximity interference or memory sticks. Human error in introducing connectivity via phones, laptops and external drives, and in clicking on malicious links in sophisticated phishing lures, is common in facilitating inadvertent connectivity and malware infection.
All of these can create a military capacity able to interfere with missile launches. Malware might have been sitting on the missile command and control system for months or even years, remaining inactive until a chosen time or until a trigger sets in motion a disruption either to the launch or to the flight path of the missile. The country whose missile either fails to launch or fails to reach its target may never know whether this was the result of a design flaw, a common malfunction or a deliberate cyberattack.
States with these capabilities must exercise caution: cyber offence manoeuvres may prevent the launch of missile attacks against US assets in the Middle East or the Pacific regions, but they may also interfere with US missile launches in the future. As has recently been revealed, even US cyber weapons targeting an adversary may blow back and inadvertently infect US systems. Nobody is invulnerable.
Yasmin Afina joined Chatham House as research assistant for the International Security programme in April 2019. She formerly worked for the United Nations Institute for Disarmament Research (UNIDIR)’s Security and Technology Programme, and the United Nations Office for Disarmament Affairs (UNODA).
Yasmin’s research at Chatham House covers projects related to nuclear weapons systems, strategic weapons systems, emerging technologies including cyber and artificial intelligence, and international law.
In her previous capacities, Yasmin’s research included international, regional and national cybersecurity policies, the international security implications of quantum computing, and algorithmic bias in autonomous technologies and law enforcement operations.
Yasmin holds an LL.M. from the Geneva Academy of International Humanitarian Law and Human Rights, an LL.B. from the University of Essex, and a French Bachelor of Laws and Postgraduate degree (Maîtrise) in International Law from the Université Toulouse I Capitole.
2018-19 | Programme assistant, security and technology, United Nations Institute for Disarmament Research (UNIDIR) |
2017-18 | Project assistant, emerging security issues, United Nations Institute for Disarmament Research (UNIDIR) |
2017 | Weapons of Mass Destruction Programme, United Nations Institute for Disarmament Research (UNIDIR) |
2017-18 | LL.M., Geneva Academy of International Humanitarian Law and Human Rights (CH) |
2016-17 | Maîtrise, Université Toulouse I Capitole (FR) |
2016 | Convention on Certain Conventional Weapons Implementation Support Unit, United Nations Office for Disarmament Affairs (UNODA) Geneva Branch |
2013-17 | LL.B., University of Essex (UK) |
2013-16 | Licence (Bachelor of Laws), Université Toulouse I Capitole (FR) |
2014 | Volunteer, World YWCA |
1 July 2019
Almost all modern military engagements rely on space-based assets, but cyber vulnerabilities can undermine confidence in the performance of strategic systems. This paper will evaluate the threats, vulnerabilities and consequences of cyber risks to strategic systems.
Summary
Peter Watkins became an associate fellow for Chatham House in June 2019. Before that, from 2014 to 2018, he was Director General (DG) in the UK Ministry of Defence (MoD) responsible for strategic defence policy, including key multilateral and bilateral relationships (such as NATO), nuclear, cyber, space and prosperity (latterly this post was known as the DG Strategy and International).
Previously he served as DG of the Defence Academy, Director of Operational Policy, Director responsible for the UK share of the multinational Typhoon combat aircraft programme and as Defence Counsellor in the UK Embassy in Berlin.
He is a frequent participant in conferences on defence and security in the UK and overseas.
He was awarded the CB (2019) and CBE (2004) for services to defence. He has an MA from Cambridge University.
2006-07 | Fellow, Weatherhead Center for International Affairs, Harvard University |
1993-94 | Senior course member, NATO Defense College |
12 June 2019
The rules governing human activity in space have been in place for only a few decades, and yet they are already out of date. They need to be built on and extended to reflect the dramatic and rapid changes in the use of space. The 1967 Outer Space Treaty (OST) is the main framework for space law. It recognizes the importance of the use and scientific exploration of outer space for the benefit and in the interests of all countries. It also prohibits claims of national sovereignty in space, including over the Moon and other celestial bodies.
The OST prohibits all weapons of mass destruction in space – in orbit or on other planets and moons – and does not allow the establishment of military infrastructure, manoeuvres or the testing of any type of weapon on planets or moons. As the treaty makes clear, outer space is for peaceful purposes only. Except, of course, it is not – nor has it ever been.
The very first satellite, Sputnik, was a military satellite which kicked off the Cold War space race between the US and the USSR. The militaries of many countries followed suit, and space is now used for military communication, signals intelligence, imaging, targeting, arms control verification and so on.
However, in keeping with international aspirations, space is also being used for all kinds of peaceful purposes such as environmental monitoring, broadcast communications, delivering the internet, weather prediction, navigation, scientific exploration and – very importantly – monitoring the ‘space weather’ (including the activity from the Sun).
There are several other international agreements on space, such as on the rescue of astronauts, the registration of satellites and liability for damage caused by space objects. There is also the Moon Treaty, which governs activities on the Moon and other moons, asteroids and planets.[i]
More recently, states at the UN Committee on the Peaceful Uses of Outer Space (COPUOS) in Vienna have agreed on guidelines to deal with the worrying situation of space debris which is cluttering up orbits and posing a danger to satellites, the space station and astronauts.
The problem the international community now faces is that the use of space is changing dramatically and rapidly. There are more satellites than ever – well over 1,000 – and more satellite owners: almost every country uses information generated from space. Increasingly, however, those owners are not countries, militaries or international organizations but the commercial sector. Very soon, the owners will even include individuals.
Small ‘mini-satellites’ or ‘cube-sats’ are poised to be deployed in space. These can act independently or in ‘swarms’, and are so small that they can piggy-back on the launch of other satellites, making them very cheap to put into orbit. This is changing the cost–benefit equation of satellite ownership and use. Developing countries are increasingly dependent on space for communications, the internet and information on, for example, weather systems, coastal activities and agriculture.
Another major development is the advent of asteroid mining. Asteroids contain a wide range of metals and minerals – some asteroids are more promising than others, and some are closer to Earth than others. Several companies have been set up and registered around the world to begin the exploitation of asteroids for precious metals (such as platinum) and compounds (such as rare-earth minerals).
Legally, however, this will be a murky venture. The current international treaty regime prohibits the ownership of a celestial body by a country – space is for all. But does international law prohibit the ownership or exploitation of a celestial body by a private company? The law has yet to be tested, but there are space lawyers who think that companies are exempt. Luxembourg and Australia are two countries that have already begun the registration of interest for space-mining companies.
As humanity becomes more dependent on information that is generated in or transmitted through space, the vulnerability to the manipulation of space data is increasing. The demands on the use of communications frequencies (the issue of spectrum availability and rights), managed by the International Telecommunication Union (ITU),[ii] need to be urgently addressed.
There are now constant cyberattacks in space and on the digital information on which our systems rely. For example, position, navigation and timing information such as from GPS or Galileo is not only vital for getting us safely from A to B, but also for fast-moving financial transactions that require accurate timing signals.
Almost all of our electronic systems depend on those timing signals for synchronization and basic functioning. Cyber hacks, digital spoofing and ‘fake’ information are now a real possibility. There is no rules-based order in place that is fit to deal with these types of attacks.
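As a minimal sketch of why spoofed timing data is detectable in principle – and why systems that simply trust it are exposed – the following Python fragment cross-checks GNSS-reported timestamps against a locally kept clock and flags divergences beyond a plausible bound. The threshold and the sample data are illustrative assumptions, not a real anti-spoofing algorithm.

def flag_suspect_timestamps(gnss_times, local_times, max_offset_s=0.5):
    """Return indices where GNSS time diverges from the local clock by more
    than max_offset_s, which could indicate spoofing, jamming or a fault."""
    suspects = []
    for i, (gnss, local) in enumerate(zip(gnss_times, local_times)):
        if abs(gnss - local) > max_offset_s:
            suspects.append(i)
    return suspects

# Example: the fourth sample jumps by two seconds relative to the local clock.
local = [0.0, 1.0, 2.0, 3.0, 4.0]
gnss = [0.0, 1.0, 2.0, 5.0, 4.0]
print(flag_suspect_timestamps(gnss, local))  # [3]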
Cyberweapons are only part of the problem. It is assumed that states, if they haven’t already done so, will be positioning ‘defensive’ space weaponry to protect their satellites. The protection may be intended to be against space debris – nets, grabber bars and harpoons, for example, are all being investigated.
All of these ideas, however, could be used as offensive weapons. Once one satellite operator decides to equip its assets with such devices, many others will follow. The weaponization of space is on the horizon.
There are no international rules or agreements to manage these developments. Attempts in Geneva to address the arms race in space have foundered, alongside the inability of the Conference on Disarmament to negotiate any instrument since 1996.
Attempts to develop rules of the road and codes of conduct, or even to begin negotiations to prohibit weapons in space, have failed again and again. There are no agreed rules to govern cyber activity. The Tallinn Manuals[iii] that address how international law is applicable to cyberwarfare also address the laws of armed conflict in space, but data spoofing and cyber hacking in space exist in far murkier legal frameworks.
The current system of international space law – which does not even allow for a regular review and consideration of the OST – is struggling to keep up. Space is the inheritance of humankind, yet the current generation of elders – as they have done with so many other parts of our global environment – have let things go and failed to shepherd in the much-needed system of rules to protect space for future generations.
It is not too late, but it will require international cooperation among the major space players: Russia, the US, China, India and Europe – hardly a promising line-up of collaborators in the current political climate.
Norms of behaviour and rules of the road need to be established for space before it becomes a 21st-century ‘wild west’ of technology and activity. Issues such as cleaning up space debris, the principle of non-interference, and how close satellites can manoeuvre to each other (proximity rules) need to be agreed as a set of international norms for space behaviour.
A cross-regional group of like-minded countries (for example Algeria, Canada, Chile, France, India, Kazakhstan, Malaysia, Nigeria, Sweden, the UAE and the UK) should link up with UN bodies, including the Office for Outer Space Affairs (UNOOSA), COPUOS and ITU, and key private-sector companies to kick-start a new process for a global code of conduct to establish norms and regulate behaviour in space.
The UN could be the host entity for this new approach – or it could be established in the way the Ottawa process for landmines was established, by a group of like-minded states with collective responsibility for, and collective hosting and funding of, the negotiations.
A new approach should also cover cybersecurity in space. The UN processes on space and cyber should intersect more to find ways to create synergies in their endeavours. And the problems ahead as regards spectrum management – particularly given the large number of small satellites and constellations that are to be launched in the near future – need urgent attention in ITU.
[i] All of these treaties and other documents can be found at UN Office for Outer Space Affairs (2002), United Nations Treaties and Principles on Outer Space, http://www.unoosa.org/pdf/publications/STSPACE11E.pdf.
[ii] ITU (undated), ‘ITU Radiocommunication Sector’, https://www.itu.int/en/ITU-R/Pages/default.aspx.
[iii] The NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), ‘Tallinn Manual 2.0’, https://ccdcoe.org/research/tallinn-manual/.
This essay was produced for the 2019 edition of Chatham House Expert Perspectives – our annual survey of risks and opportunities in global affairs – in which our researchers identify areas where the current sets of rules, institutions and mechanisms for peaceful international cooperation are falling short, and present ideas for reform and modernization.
12 June 2019
Competing governance visions are impairing efforts to regulate the digital space. To limit the spread of repressive models, policymakers in the West and elsewhere need to ensure the benefits of an open and well-run system are more widely communicated. The development of governance in a wide range of digital spheres – from cyberspace to internet infrastructure to emerging technologies such as artificial intelligence (AI) – is failing to match rapid advances in technical capabilities or the rise in security threats. This is leaving serious regulatory gaps, which means that instruments and mechanisms essential for protecting privacy and data, tackling cybercrime or establishing common ethical standards for AI, among many other imperatives, remain largely inadequate.
A starting point for effective policy formation is to recognize the essential complexity of the digital landscape, and the consequent importance of creating a ‘common language’ for multiple stakeholders (including under-represented actors such as smaller and/or developing countries, civil society and not-for-profit organizations).
The world’s evolving technological infrastructure is not a monolithic creation. In practice, it encompasses a highly diverse mix of elements – so-called ‘high-tech domains’,[1] hardware, systems, algorithms, protocols and standards – designed by a plethora of private companies, public bodies and non-profit organizations.[2] Varying cultural, economic and political assumptions have shaped where and which technologies have been deployed so far, and how they have been implemented.
Perhaps the most notable trend is the proliferation of techno-national regimes and private-sector policy initiatives, reflecting often-incompatible doctrines in respect of privacy, openness, inclusion and state control. Beyond governments, the interests and ambitions of prominent multinationals (notably the so-called ‘GAFAM’ tech giants in the West, and their ‘BATX’ counterparts in China)[3] are significant factors feeding into this debate.
Two particular case studies highlight the essential challenges that this evolving – and, in some respects, still largely unformed – policy landscape presents. The first relates to cyberspace. Since 1998, Russia has established itself as a strong voice in the cyberspace governance debate – calling for a better understanding, at the UN level, of ICT developments and their impact on international security.
The country’s efforts were a precursor to the establishment in 2004 of a series of UN Groups of Governmental Experts (GGEs), aimed at strengthening the security of global information and telecommunications systems. These groups initially succeeded in developing common rules, norms and principles around some key issues. For example, the 2013 GGE meeting recognized that international law applies to the digital space and that its enforcement is essential for a secure, peaceful and accessible ICT environment.
However, the GGE process stalled in 2017, primarily due to fundamental disagreements between countries on the right to self-defence and on the applicability of international humanitarian law to cyber conflicts. The breakdown in talks reflected, in particular, the divide between two principal techno-ideological blocs: one, led by the US, the EU and like-minded states, advocating a global and open approach to the digital space; the other, led mainly by Russia and China, emphasizing a sovereignty-and-control model.
The divide was arguably entrenched in December 2018, with the passage of two resolutions at the UN General Assembly. A resolution sponsored by Russia created a working group to identify new norms and look into establishing regular institutional dialogue.
At the same time, a US-sponsored resolution established a GGE tasked, in part, with identifying ways to promote compliance with existing cyber norms. Each resolution was in line with its respective promoter’s stance on cyberspace. While some observers considered these resolutions potentially complementary, others saw in them competing campaigns to cement a preferred model as the global norm. Outside the UN, there have also been dozens of multilateral and bilateral accords with similar objectives, led by diverse stakeholders.[4]
The second case study concerns AI. Emerging policy in this sector suffers from an absence of global standards and a proliferation of proposed regulatory models. The potential ability of AI to deliver unprecedented capabilities in so many areas of human activity – from automation and language applications to warfare – means that it has become an area of intense rivalry between governments seeking technical and ideological leadership of this field.
China has by far the most ambitious programme. In 2017, its government released a three-step strategy for achieving global dominance in AI by 2030. Beijing aims to create an AI industry worth about RMB 1 trillion ($150 billion)[5] and is pushing for greater use of AI in areas ranging from military applications to the development of smart cities. Elsewhere, the US administration has issued an executive order on ‘maintaining American leadership on AI’.
On the other side of the Atlantic, at least 15 European countries (including France, Germany and the UK) have set up national AI plans. Although these strategies are essential for the development of policy infrastructure, they are country-specific and offer little in terms of global coordination. Ominously, greater inclusion and cooperation are scarcely mentioned, and remain the least prioritized policy areas.[6]
Competing multilateral frameworks on AI have also emerged. In April 2019, the European Commission published its ethics guidelines for trustworthy AI. Ministers from Nordic countries[7] recently issued their own declaration on collaboration in ‘AI in the Nordic-Baltic region’. And leaders of the G7 have committed to the ‘Charlevoix Common Vision for the Future of Artificial Intelligence’, which includes 12 guiding principles to ensure ‘human-centric AI’.
More recently, OECD member countries adopted a set of joint recommendations on AI. While nations outside the OECD were welcomed into the coalition – with Argentina, Brazil and Colombia adhering to the OECD’s newly established principles – China, India and Russia have yet to join the discussion. Despite their global aspirations, these emerging groups remain largely G7-led or EU-centric, and again highlight the divide between parallel models.
No clear winner has emerged from among the competing visions for cyberspace and AI governance, nor indeed from the similar contests for doctrinal control in other digital domains. Concerns are rising that a so-called ‘splinternet’ may be inevitable – in which the internet fragments into separate open and closed spheres and cyber governance is similarly divided.
Each ideological camp is trying to build a critical mass of support by recruiting undecided states to its cause. Often referred to as ‘swing states’, the targets of these overtures are still in the process of developing their digital infrastructure and determining which regulatory and ethical frameworks they will apply. Yet the policy choices made by these countries could have a major influence on the direction of international digital governance in the future.
India offers a case in point. For now, the country seems to have chosen a versatile approach, engaging with actors on various sides of the policy debate, depending on the technology governance domain. On the one hand, its draft Personal Data Protection Bill mirrors principles in the EU’s General Data Protection Regulation (GDPR), suggesting a potential preference for the Western approach to data security.
However, in 2018, India was the leading country in terms of internet shutdowns, with over 100 reported incidents.[8] India has also chosen to collaborate outside the principal ideological blocs, as evidenced by an AI partnership it has entered into with the UAE. At the UN level, India has taken positions that support both blocs, although more often favouring the sovereignty-and-control approach.
Sovereign nations have asserted aspirations for technological dominance with little heed to the cross-border implications of their policies. This drift towards a digital infrastructure fragmented by national regulation has potentially far-reaching societal and political consequences – and implies an urgent need for coordinated rule-making at the international level.
The lack of standards and enforcement mechanisms has created instability and increased vulnerabilities in democratic systems. In recent years, liberal democracies have been targeted by malevolent intrusions in their election systems and media sectors, and their critical infrastructure has come under increased threat. If Western nations cannot align around, and enforce, a normative framework that seeks to preserve individual privacy, openness and accountability through regulation, a growing number of governments may be drawn towards repressive forms of governance.
To mitigate those risks, efforts to negotiate a rules-based international order for the digital space should keep several guiding principles in mind. One is the importance of developing joint standards, as well as the need for consistent messaging towards the emerging cohort of engaged ‘swing states’. Another is the need for persistence in ensuring that the political, civic and economic benefits associated with a more open and well-regulated digital sphere are made clear to governments and citizens everywhere.
Countries advocating an open, free and secure model should take the lead in embracing and promoting a common affirmative model – one that draws on human rights principles (such as the rights to freedom of opinion, freedom of expression and privacy) and expands their applications to the digital space.
Specific rules on cyberspace and technology use need to include pragmatic policy ideas and models of implementation. As this regulatory corpus develops, rules should be adapted to reflect informed consideration of economic and social priorities and attitudes, and to keep pace with what is possible technologically.[9]
[1] Including but not limited to AI and an associated group of digital technologies, such as the Internet of Things, big data, blockchain, quantum computing, advanced robotics, self-driving cars and other autonomous systems, additive manufacturing (i.e. 3D printing), social networks, the new generation of biotechnology, and genetic engineering.
[2] O’Hara, K. and Hall, W. (2018), Four Internets: The Geopolitics of Digital Governance, Centre for International Governance Innovation, CIGI Paper No. 206, https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance.
[3] GAFAM = Google, Amazon, Facebook, Apple and Microsoft; BATX = Baidu, Alibaba, Tencent and Xiaomi.
[4] Carnegie Endowment for International Peace (undated), ‘Cyber Norms Index’, https://carnegieendowment.org/publications/interactive/cybernorms (accessed 30 May 2019).
[5] Future of Life Institute (undated), ‘AI Policy – China’, https://futureoflife.org/ai-policy-china?cn-reloaded=1.
[6] Dutton, T. (2018), ‘Building an AI World: Report on National and Regional AI Strategies’, 6 December 2018, CIFAR, https://www.cifar.ca/cifarnews/2018/12/06/building-an-ai-world-report-on-national-and-regional-ai-strategies.
[7] Including Denmark, Estonia, Finland, the Faroe Islands, Iceland, Latvia, Lithuania, Norway, Sweden and the Åland Islands.
[8] Shahbaz, A. (2018), Freedom on the Net 2018: The Rise of Digital Authoritarianism, Freedom House, October 2018, https://freedomhouse.org/report/freedom-net/freedom-net-2018/rise-digital-authoritarianism.
[9] Google White Paper (2018), Perspectives on Issues in AI Governance, https://www.blog.google/outreach-initiatives/public-policy/engaging-policy-stakeholders-issues-ai-governance/.
This essay was produced for the 2019 edition of Chatham House Expert Perspectives – our annual survey of risks and opportunities in global affairs – in which our researchers identify areas where the current sets of rules, institutions and mechanisms for peaceful international cooperation are falling short, and present ideas for reform and modernization.
Research Event
Chatham House | 10 St James's Square | London | SW1Y 4LE
In recent years, cybercrime has evolved from a niche technological concern into a prominent global issue with substantial preventative and remedial costs for businesses and governments alike. Despite heavy investment in sophisticated cybersecurity measures and the adoption of several legal, organizational and capacity-building measures, cybercrime remains a major threat which is evolving on a daily basis. Today’s cybercrime is more aggressive, more complex, more organized and – importantly – more unpredictable than ever before.
The challenges posed by cybercrime are experienced acutely by countries undergoing digital transformations: as the level of connectivity rises, so too does the potential for online theft, fraud and abuse. Cybercrime is pervasive, but governments can work to limit its impact by building a resilient economy and robust institutions, and by appropriately equipping law enforcement and the justice system to navigate its novel challenges.
To advance the discourse surrounding these issues, this workshop will assess the current cyber threat landscape and how it is evolving. It will identify the main obstacles encountered by law enforcement, the judiciary and prosecutors in their fight against cybercrime. It will also compare national, regional and global approaches that countries can use to effectively curb cybercrime and tackle its emerging challenges.
8 May 2019
This paper sets out a roadmap for how organizations in the civil nuclear sector can explore their options and review their cyber risk exposure.
Christopher Painter is a globally recognized leader on cyber policy, cyber diplomacy, cybersecurity and combatting cybercrime.
He has been at the vanguard of cyber issues for over 27 years, first as a federal prosecutor handling some of the most high-profile cyber cases in the U.S., then as a senior official at the U.S. Department of Justice, the FBI, the White House National Security Council and, finally, as the world’s first cyber diplomat at the U.S. Department of State.
Among other things, Christopher currently serves as a commissioner on the Global Commission on the Stability of Cyberspace and chairs a working group on cyber capacity for the Global Forum on Cyber Expertise.
He is a frequent speaker on cyber issues, is regularly interviewed and quoted in the media, and has testified on numerous occasions before U.S. Congressional committees.
He has received a number of awards and honors including Japan’s Order of the Rising Sun, the RSA Security Conference Public Policy Award and the Attorney General’s Award for Exceptional Service.
He received his B.A. from Cornell University and J.D. from Stanford Law School.
2019 | William J. Perry Fellow, Center for Security and Cooperation, Stanford University |
2017 - present | Board member, Center for Internet Security |
2017 - present | Commissioner, Global Commission on the Stability of Cyberspace |
Invitation Only Research Event
Chatham House, London
With Brexit on the horizon, participants will also consider what impact this may have on future drone developments in Europe.
Attendance at this event is by invitation only.
Research Event
Chatham House | 10 St James's Square | London | SW1Y 4LE
Andrew Sullivan, President and CEO, Internet Society
Chair: Emily Taylor, Associate Fellow, International Security Department, Chatham House; Editor, Journal of Cyber Policy
Internet regulation is increasing around the world, creating positive obligations for internet providers and generating unintended negative consequences for the internet's infrastructure. In some respects, much of this regulatory activity is justifiable: governments are concerned about the increased risks that use of the internet brings to societies, and many have turned to regulation as their main means of addressing those concerns. The central challenge is that much of the current regulation is either ill-defined or unworkable.
On the one hand, several governments have established procedures for analyzing the impacts of new regulatory proposals before they are adopted. Far less attention, however, has been paid to analyzing regulations after adoption, and only a few governments have measures in place to evaluate the impacts of the procedures and practices that govern the regulatory process itself.
On the other hand, much of this regulation has unintended consequences for the internet itself, undermining many of its fundamental properties and challenging the integrity and resilience of its infrastructure.
This event will examine current practices in internet-related regulation and the challenges they raise. Panellists will discuss how governments can enforce regulations that achieve their intended purpose while protecting the internet's core infrastructure and properties, including its openness, interoperability and global reach.
Research Event
Brussels, Belgium
With continuing instability at Europe's borders, along with uncertainty over future US support for NATO, many European countries are increasing their defence budgets and their contributions to collective European strategic defence. In addition, with non-state armed groups creating instability and threatening civilian lives and livelihoods close to the EU's borders, various operations have been carried out in conflict theatres in the Middle East, North Africa and the Sahel, under the auspices of NATO, the UN or the EU, or by individual EU member states.
Although European military personnel have been deployed in many regions, countries have become more reluctant to put ‘boots on the ground’, and warfare has increasingly been conducted through remote means. This has drawn criticism of the limited transparency and accountability mechanisms at work in these operations, while some have questioned the military effectiveness of such tactics, or the capacity and willingness of states to ensure that targets are struck accurately and without harm to civilian populations.
Against this background, the EU has started allocating resources to military research and development projects with a focus on unmanned systems and related technologies. Under the auspices of the European Defence Fund such funding is set to increase, while potential bilateral programmes between some states have also been explored. Despite concerns raised by the European Parliament, the development of these policies and technologies has taken place without significant consideration of what the legal, ethical and military-strategic impact of these instruments might be.
This event will bring together a range of experts, policymakers and civil society organizations to discuss the technology horizon of European defence investments and policy developments around remote warfare. Participants will discuss the implications of the new European Defence Fund; legal, ethical and transparency issues in military research and development; and the position of the EU as a global actor.
This event is being organized in partnership with PAX Netherlands.
THIS EVENT IS NOW FULL AND REGISTRATION HAS CLOSED.