Security Innovation

Enhanced governance of emerging technologies needed to promote peace and stability

Col. (Ret.) David Shanahan

Daniel K. Inouye Asia-Pacific Center for Security Studies

Many technology experts predict the next decade will bring cascades of technologically enabled advances. Lowered barriers to new and innovative uses of old technologies are already offering unprecedented asymmetrical offset opportunities: they help regional partners, rivals and nonstate actors achieve security and development goals, and they allow some actors to pervert those same technologies for nefarious purposes.

Many national and multinational governance mechanisms and processes, conceived in an era when technology governance could be considered discretely within defined technology areas, cannot cope with the pace and cross-feeding nature of today’s technology environment. Mechanisms and processes must evolve toward greater coordination and collaboration across disparate fields to enable opportunities and to define and respond to the threats the emerging technology environment will inevitably pose.

Countless advances could play out in fields such as information, artificial intelligence (AI), energy, materials manufacturing, biotechnologies and advanced human health, as future forecasters detail in reports such as the “Paradox of Progress” issued by the Office of the Director of National Intelligence in the United States. Technologies will be increasingly fused and distinctions blurred between the physical, digital and biological spheres, as Klaus Schwab of the World Economic Forum describes in his book The Fourth Industrial Revolution.

The U.S. Defense Advanced Research Projects Agency (DARPA) is developing machines that apply information from new situations to become better and more reliable. [DARPA]

The allure of the world that these advances promise is richly described by such entrepreneur technologists as XPrize and Singularity University co-founder Peter Diamandis, co-author of the book Abundance: The Future Is Better Than You Think, and Tesla and SpaceX founder Elon Musk. The world they portray is a hope-filled utopia in which the technology-empowered capacity to fulfill human needs will reliably outpace problems and challenges.

The trouble with these rosy perspectives is that, for the organizations and people responsible for ensuring national and regional security, they offer little comfort or insight for engaging with the issues and problems the advances could create or exacerbate.

Challenges of disruption

Technology, from the wedge to the smartphone, has propelled and disrupted the sweep of human history. Novel today is the rapid pace from inception to pervasive impact that characterizes many elements of the current technological revolution.

The exponential growth of computational power per unit cost, as expressed by the 52-year-old Moore’s law, has been a key accelerant of the pace of innovation. Plotted over time, exponential growth looks nearly horizontal before the knee of the curve and increasingly vertical after it. The window for effective policy and governance interventions, therefore, closes once technologies achieve takeoff on the curve.
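The shape of that curve is easy to see numerically. The sketch below is illustrative only, assuming an idealized two-year doubling period; the function name and figures are hypothetical, not drawn from this article:

```python
# Illustrative sketch of Moore's-law-style growth, assuming an idealized
# two-year doubling period; the numbers are hypothetical, not measured.
def capability_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Factor by which capability grows after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Early on the curve looks almost flat; decades later it is nearly vertical.
decade = capability_multiple(10)        # 32x after 10 years
half_century = capability_multiple(50)  # about 33.5 million x after 50 years
```

The same doubling rule that produces a modest 32-fold gain in one decade produces a 33-million-fold gain over five, which is why intervention windows that feel comfortable early on the curve vanish abruptly later.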

Anticipating when, where and how technologies will alter the dynamics of economies, social structures and security is a difficult task, commensurate with the complexity of the underlying systems they affect. This inherent complexity stymies attempts to harness technologies to defined ends, suggesting that the best the world can hope to accomplish is to forge accepted rule sets that mark boundaries and define how players must interact.

Governance processes that enable the establishment of those parameters are under stress and, in many cases, inadequate to meet the demands that emerging technological advances will impose.

Technological development and deployment accelerate as equipment, techniques and procedures proliferate widely and combine to produce new discoveries. Yet many highly anticipated predictions — such as prevalent nuclear fusion energy, beam weapons or personal flying machines — have still not arrived, long after they were first promised. Some scoff at the notion of technology’s exponential pace because of these disappointments.

Other fields, though, have shown arguably more impactful advances that have outpaced even what the most optimistic experts initially predicted. Illustrative of the latter are exponential advances in biotechnologies, information and communications technologies (ICT) and AI. Taken together, such technologies promise to disruptively alter the near-term future and to pose an existential challenge to mankind’s role and purpose in the longer term.

A suspected North Korean drone that took pictures of a missile defense system is found in Inje, South Korea, in June 2017. [THE ASSOCIATED PRESS]

A dramatically advancing gene-manipulation technology, CRISPR, offers an example of the potential disruptive power of emergent converging technologies. Over the past five years, with the help of advances in computational power and genomics, this technology has opened the door to staggering breakthroughs in the human capacity to design and manage the basic building blocks of life. (Genomics is the study of an organism’s complete set of genetic material and how it is structured, functions and evolves.) The potential to use technologically available modifications to the human “germline” for disease prevention leads to a slippery slope with ill-defined and even less enforceable redlines.

For example, biohacking, or exploiting genetic material experimentally while disregarding ethical standards and even existing laws, has occurred worldwide, especially in the West. Lately, however, enthusiasm for biohacking is growing significantly in the Indo-Pacific, catalyzed by such organizations as the Singapore-based Biochin.Asia and Hong Kong-based Biohacking Asia. It’s concerning that ever-lowering entry barriers for dual-use technologies in the field of bioengineering allow unregulated and unmonitored amateur “biohackers” not only to manipulate but also to create life-forms, some beneficial and some potentially deadly. Little imagination is needed to predict the woeful impact of their use by bad actors.

Policy convergence

Another example of disruptive technologies converging to produce enormous impacts is the pairing of ICT and AI, which together are critical enablers that will influence nearly every industry. Increasingly capable AI algorithms coupled with big data promise to move the world from analytics that describe or predict a system’s behavior to analytics that can prescribe it. This capacity will be readily usable in benign and nefarious ways.

For example, Richard Thaler and Cass Sunstein, in their 2008 book Nudge: Improving Decisions about Health, Wealth and Happiness, describe how governments might steer citizens toward actions such as healthier or more environmentally friendly behavior by means of a “nudge”: promoting a preferred behavior through a choice architecture built on insights about biases and habits, instead of regulating or punishing an undesired behavior. Some, however, might construe such nudges as a contemporary form of paternalism, a caring government making sure that citizens do the things it considers right for society and national interests.

Already, new commerce and social systems using big data and AI reinforce pre-existing wants rather than enticing users to discover new ideas. They funnel users into increasingly customized silos in which they are already comfortable, echo chambers in many cases. That is troubling enough. What happens, though, when the confluence of AI and ICT enables bad or hidden actors to nudge citizens to take in only certain information feeds, to impugn correct information or to lend credibility to false information? That scenario is frightening, and it is playing out in national, regional and world affairs daily.
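The reinforcement dynamic behind such echo chambers can be sketched in a few lines. This is a toy simulation, hypothetical and not any real platform’s algorithm: each recommendation slightly boosts the weight of the topic shown, and the feed soon collapses toward a single topic.

```python
import random

# Toy model of preference-reinforcing recommendation (hypothetical; not any
# real platform's algorithm). Each shown topic gets its weight boosted, so
# the feed converges on whatever the user has already seen most.
def simulate_feed(rounds: int = 50, seed: int = 0) -> dict:
    rng = random.Random(seed)
    topics = {"politics": 1.0, "science": 1.0, "sports": 1.0}
    for _ in range(rounds):
        total = sum(topics.values())
        # Recommend in proportion to current weights...
        shown = rng.choices(list(topics), [w / total for w in topics.values()])[0]
        # ...then reinforce whatever was shown, closing the feedback loop.
        topics[shown] *= 1.5
    return topics

weights = simulate_feed()
dominant = max(weights, key=weights.get)
```

After 50 rounds one topic’s weight dwarfs the others: by the pigeonhole principle, some topic is shown at least 17 times, so the largest weight exceeds 1.5^17 (roughly 985), while an unlucky topic may barely grow at all. The silo forms without any actor intending it; a hidden actor who can tilt the initial weights inherits the same machinery.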

“Today, thanks to the internet and social media, the manipulation of our perception of the world is taking place on previously unimaginable scales of time, space and intentionality,” stated Rand Corp.’s Rand Waltzman in U.S. Senate testimony on April 27, 2017, titled “The Weaponization of Information.” “We have entered the age of mass customization of messaging, narrative and persuasion. We need a strategy to counter a constantly changing set of adversaries large and small.”

Royal Australian Navy analysts used an advanced side-scan sonar system aboard an autonomous underwater vehicle to discover in late 2017 an 800-ton submarine lost for a century off the coast of Papua New Guinea. [REUTERS]

The claims and counterclaims of manipulation in recent elections reveal the scope of the challenges. For example, existing AI software can create remarkably realistic images and video for disinformation campaigns, according to a recent study by University of Washington researchers. Unchecked, such developments in AI offer adversaries the ability to readily manipulate information for bad purposes.

“Deep learning” algorithmic models increasingly form the decision-making processes of autonomous systems being placed into automotive, financial, medical and military systems. These models, though, are even now programming themselves in ways the engineers who built them cannot explain. This dilemma will only increase as AI becomes more powerful and ubiquitous. As Tommi Jaakkola, a professor at the Massachusetts Institute of Technology who works on applications of machine learning, observed, “Whether it’s an investment decision, medical decision or military decision, you don’t want to just rely on a ‘black box’ method” in explaining how it was made.
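The opacity Jaakkola describes can be illustrated with a toy contrast (the weights and features below are hypothetical, not from any deployed system): a linear scorer exposes each feature’s contribution directly, while even a tiny two-layer network entangles every input through nonlinearities, so no single weight explains the outcome.

```python
import math

# Hypothetical loan-style features and weights, purely for illustration.
features = {"income": 0.8, "debt": 0.3, "history": 0.6}

# Transparent model: each feature's contribution to the score is auditable.
linear_weights = {"income": 0.5, "debt": -0.7, "history": 0.4}
contributions = {k: linear_weights[k] * v for k, v in features.items()}
linear_score = sum(contributions.values())  # debt alone contributes -0.21

# "Black box": hidden units mix all inputs before the output forms, so the
# score cannot be decomposed into per-feature explanations by inspection.
hidden_w = [[0.9, -1.2, 0.3], [-0.4, 0.8, 1.1]]
output_w = [1.5, -0.9]
x = list(features.values())
hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in hidden_w]
net_score = sum(w * h for w, h in zip(output_w, hidden))
```

In the linear case an auditor can point to the exact term that drove the decision; in the network case every input flows through every hidden unit, which is the heart of the “black box” problem even for a model this small.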

The scenarios anticipated in these examples, as well as in a host of other fields experiencing simultaneous technology advances, are daunting. The disruptive changes are so profound that, many argue, there has never been a time in human history of greater promise or potential peril. These scenarios challenge the international community to exploit these technologies’ opportunities while mitigating their many embedded risks. Decision-makers, however, are too often caught in traditional, linear (and nondisruptive) thinking or are too consumed by immediate concerns to think strategically about the forces of disruption and innovation shaping the future.

This has sparked urgent calls to increase the capacity to govern the technological onslaught through policy, processes and mechanisms, so that nations can feed, harness and guide the technological horses pulling the international community on the wild ride into tomorrow. Perhaps foremost among these clarion voices is Azhar Zia-ur-Rehman, whose recent book Technology Governance – Concepts and Practices serves as a leading reference for the concept of technology governance.

Many helpful steps are emerging. The Association of Southeast Asian Nations (ASEAN), among other regional organizations, has focused on technology governance in the cyber arena, and in 2016 it initiated a ministerial conference on cyber security.

Recent malware and ransomware attacks have highlighted the urgent need to develop such regional gatherings to mitigate threats and to foster confidence-building measures, regionally and internationally, among similarly challenged states. These forums, though vitally important, tend to examine technology-enabled challenges discretely, without accounting for the systemic challenges posed by their combined effect. A more holistic approach is needed.

Risk mitigation

So, what should be done to enable security policymakers to identify and mitigate the risks that the coming decade’s confluence of highly impactful and interconnected technological advances promises? The start of a to-do list should include the following:

Embed foresight and critical thinking: Make foresight and scenario thinking part of the organizational culture of security agencies. Make regular horizon scanning, including the effects of converging technologies, part of the strategy and planning process. For example, use tech-scanning “red teams” to spark opportunistic inquiry, question assumptions and identify risk.

Stimulate public dialogue and education: Raise the extent and quality of public discourse so that society is aware of where science and technology could take us and is capable of informed debate about what is desirable and acceptable, the safeguards that need to be in place, and the mechanisms required to prepare for change. Policy practitioners, technologists, educators and ethicists need to create new collaboration forums, or invigorate existing ones, to forge societal awareness, assurance and insistence that everything necessary will be done to ensure emergent technologies are controlled by people, for the benefit of people, rather than by unauditable algorithms or malevolent actors. In addition, courses in technology, ethics and civics could be added to primary education curricula.

Promote national, regional and international technology governance: Examine ways to shape the long-term security consequences of trends in international and regional technology governance and standards bodies — for example, the International Organization for Standardization, the International Telecommunication Union, the International Committee for Information Technology Standards, and the Pacific Area Standards Congress. Develop international collaborative bodies, within or supplementary to existing forums, chartered to frame risk and develop holistic strategies for ensuring that the impacts of emergent disruptive technologies such as AI, geoengineering, lethal autonomous weapons, synthetic biology and nanotechnology don’t grow into unmanageable national, regional or global security threats. Security professionals need to use all means at their disposal to identify emergent risks for national policymakers, as well as to participate actively in the increasingly diverse bodies and mechanisms necessary to mitigate them.

Develop the future workforce: Examine the impacts of accelerating technology, generational change and cultural evolution on the future workforce. Leverage innovative learning and private sector concepts to build new models for predicting and mitigating manual-labor and cognitive surpluses and shortages caused by displacement from automation. Failure to manage these disruptive transitions effectively will exacerbate inequality and foster populist backlash, increasing the likelihood of societal and interstate violence.

The next decades promise, with near certainty, a wild ride as security practitioners and policymakers try to keep pace with the governance requirements needed to guide development and mitigate the risks embedded in the pervasive effects of exploding technological growth. Governance mechanisms must be created to evolve and to adequately guide, without thwarting, emergent technologies’ potential to fulfill human needs. To paraphrase Georges Clemenceau on war and generals, emergent technologies’ uses and impacts are too important to leave to the technologists, especially technologists in silos. Security policymakers and practitioners, therefore, must proactively help forge and exploit the governance tools necessary to serve as reliable sentinels protecting the region and the world from future peril.