Artificial Intelligence

U.S. strategy seeks to promote an international environment that supports AI research and development


The United States leads the world in artificial intelligence (AI). The People’s Republic of China (PRC), despite its ambitions, has failed to capture the top spot, but the competition is intensifying.

Experts predict that AI, along with powering the world’s future economic base, will revolutionize the future battlespace by enabling machines to act without human supervision, process and interpret massive amounts of data, and enhance the command and control of warfare. With economic and military supremacy at stake, along with control of technologies that can be applied for social control, the U.S. has moved to counter the PRC threat and maintain its AI dominance.

“If you look at industry output, if you look at the leading academic institutions that are leading the way and advancing the state of the art and AI, they’re American industries and they’re American academics,” Lynne Parker, the White House coordinator on AI policy, told Politico, a U.S. political journalism website, in July 2019.

“We’re clearly producing the most impactful commercial products. And certainly, that’s not to say that the rest of the world isn’t waking up to the great opportunities of AI — but clearly, the United States is in the lead.”

Then-U.S. Energy Secretary Rick Perry speaks about the importance of U.S. leadership and public-private partnerships in artificial intelligence at an August 2019 forum. THE ASSOCIATED PRESS

A study released in August 2019 by the Center for Data Innovation (CDI), a global think tank based in Washington, D.C., confirmed U.S. leadership in AI in talent, research, development and hardware, among other parameters.

“Despite China’s bold AI initiative, the United States still leads in absolute terms; China comes in second, and the European Union lags further behind,” said the 106-page report, titled “Who is Winning the AI Race: China, the EU or the United States?”

The PRC’s State Council released a strategic plan in 2017 to rival the U.S. in AI by 2020 and assume world leadership in AI technology by 2030, although the plan didn’t specify what such leadership might look like. The CDI report found China trails the U.S. in many metrics despite the PRC’s plan to invest U.S. $150 billion in AI by 2030, part of Chinese President Xi Jinping’s Made in China 2025 push.

The U.S. “has the most AI start-ups, with its AI start-up ecosystem having received the most private equity and venture capital funding,” the CDI report said. The U.S. “leads in the development of both traditional semiconductors and the computer chips that power AI systems; while it produces fewer AI scholarly papers than the EU or China, it produces the highest-quality papers on average.” The PRC still lags behind on most metrics, especially on a per capita basis.

Moreover, the U.S. has the world’s top AI talent, although the EU has a larger overall pool of AI workers, the report found. The PRC lags behind both the U.S. and the EU in talent.

The PRC has collected more civilian data than the U.S. and EU, and its population is adopting AI more rapidly, according to the report. However, its policies such as civil-military integration will hinder its success in the global market because the PRC’s practices foster distrust in other societies, the report said.

Russia, under President Vladimir Putin, has also been investing in AI, especially for military purposes. However, its efforts trail the rest of the world because it has failed to establish the required culture of innovation, according to analysts. Putin openly asserted “whoever becomes the leader in this sphere will become the ruler of the world.” Ironically, many of Russia’s leading innovators have fled Russia for the U.S. and Europe, according to a June 2019 report by the website Defense One.

Villagers wait in line to have their photos taken for a facial data collection project with artificial intelligence applications in Jia, Henan province, China, in March 2019. REUTERS

U.S. AI Strategy

To protect the U.S. competitive advantage in AI technology, the U.S. Department of Defense (DOD) introduced its AI strategy in February 2019, in conjunction with the White House issuing an executive order creating the American AI Initiative, which calls for the administration to “devote the full resources of the federal government” to propel AI innovation. The White House also created a National Security Commission on AI that first convened in March 2019.

The U.S., along with its allies and partner nations, must adopt AI to dominate the future battlespace and ensure not only a free and open Indo-Pacific but also a free and open international order. The Pentagon’s budget for 2020 allocates U.S. $927 million for AI and about U.S. $3.7 billion for AI-driven unmanned and autonomous capabilities. Meanwhile, leading U.S. technology companies taken together have been investing tens of billions of dollars in AI in recent years; large technology firms invested roughly U.S. $20 billion to U.S. $30 billion in AI in 2016 alone, The Economist magazine reported.

“The success of our AI initiatives will rely upon robust relationships with internal and external partners. Interagency, industry, our allies and the academic community will all play a vital role in executing our AI strategy,” Dana Deasy, DOD’s chief information officer, said at the launch of the strategy.

“It’s hard to overstate the importance of operationalizing AI across the department, and to do so with the appropriate sense of urgency and alacrity,” added Lt. Gen. John N.T. Shanahan, director of the Joint Artificial Intelligence Center (JAIC), which began operation in June 2018 to drive AI capability across the DOD. “Everything we do in the JAIC will center on enhancing relationships with industry, academia, and with our allies and international partners.”

Shanahan previously led the Pentagon’s pathfinder intelligence project on AI and machine learning, known as Project Maven.

Daniel Castro, CDI’s director, called for expansion of the U.S. AI strategy to cover digital free trade, data collection practices and other related issues. 

Three unmanned aerial systems soar at Edwards Air Force Base, California. The U.S. Pentagon is developing craft that rely on artificial intelligence to collaborate in high-threat environments without human contact. STAFF SGT. RACHEL SIMONES/U.S. AIR FORCE

“If the administration wants its AI initiative to be transformative, it will need to do more than reprogram existing funds for AI research, skill development, and infrastructure development,” Castro, who was a lead author on the August 2019 report on AI competitiveness, told The Associated Press (AP). The CDI report also recommended that the DOD create a body of government and industry stakeholders to accelerate adoption of dual-use AI technologies by the military.

“By consolidating expertise, DOD can better prioritize projects, focus on solving scaling issues, and develop a culture of AI-driven innovation within DOD,” Castro told FORUM. “In addition, the U.S. government should consider joint funding initiatives with allies around the globe to foster research collaboration.”

Some analysts worry that the PRC’s edge over the U.S. and its allies and partners in data collection, due in large part to the PRC’s endeavors to amass data on its citizens for social control purposes, could carry over to the future battlespace. However, the utility of civilian data for critical military applications is likely limited, others contend.

“What I don’t want to see is a future where our potential adversaries have a fully AI-enabled force and we do not. It goes back to this question of time and decision cycles, and I don’t have the time luxury of hours or days to make decisions. It may be seconds and microseconds where AI can be used to our competitive advantage,” JAIC’s Shanahan said at a late August 2019 DOD briefing. “I doubt I will ever be entirely satisfied that we’re moving fast enough when it comes to DOD’s adoption of AI. My sense of urgency remains palpable.”

Ethics and Oversight

Many nations are concerned about how AI technologies might be used in the future, not only in battle, but also by governments and authoritarian regimes. The U.S. has pledged to deploy AI in keeping with American values, and the Pentagon is working with industry and academia to set ethical guidelines for AI applications, according to AP.

The U.S. is “using [AI] technology to help speed up the process but not supplant the command structure that is in place,” Todd Probert, an executive at Raytheon’s intelligence division, told the AP in February 2019. His firm is working with the Pentagon on various AI projects, including Project Maven, which is employing deep learning and other techniques to analyze video for actionable intelligence.

Many Western governments are working to ensure that humans remain in the command loop. However, some military experts are wary of where the technology may lead, given that emerging capabilities could exceed those of human cognition. Linked AI systems could then take the battlespace to a new level of automation.

“It seems likely humans will be increasingly both out of the loop and off the team in decision-making from tactical to strategic,” Wing Commander Keith Dear, a Royal Air Force intelligence officer, told The Economist in September 2019.

The Organization for Economic Co-operation and Development (OECD) adopted the first international standards on AI in May 2019 to guide how the technology will evolve and be employed. Forty-two nations, including the U.S., agreed to the OECD principles. In June 2019, the G20 adopted human-centered AI principles drawn from the OECD principles. Meanwhile, the PRC, through its National New Generation of Artificial Intelligence Governance Committee, released its own principles that are similar to those of the OECD. However, many experts worry that the way the PRC and some other countries interpret ethical issues in the science and technology communities often varies from international norms. The PRC, for example, has drawn criticism for numerous ethical breaches in its research and application of technologies, ranging from its widespread use of fraudulent data to ill-advised experimentation with genetic-editing capabilities in humans and with monkey-human hybrids.

Already, the PRC’s use of AI-enabled facial and voice recognition technologies to monitor its Uighur community, a majority Muslim population in the Xinjiang region, has drawn criticism for enabling the PRC to discriminate against the minority group by tracking members’ movements through the country, storing their profiles in separate databases and placing them in so-called re-education camps, The New York Times newspaper and other media organizations have reported.

Surveillance State

As the PRC’s economic growth has been slowing and signs of social unrest have been growing, the Chinese Communist Party has sought to tighten control over not only its 11 million Uighurs but also its general population. In 2016, for example, the PRC introduced its so-called Sharp Eyes project to increase video surveillance throughout the nation with the goal of “coverage across all regions, sharing across all networks, availability at all times, and controllability at all points by 2020.” At the time, roughly 176 million video surveillance cameras monitored China’s streets, buildings and public spaces, compared with 50 million in the U.S., according to global consultancy IHS Markit. Cameras were already covering “every block in Beijing,” according to the Los Angeles Times newspaper.

The PRC is also using AI applications to power its social credit scoring system, another tool to control its citizens, which is targeted to be fully operational in 2020. The system, already partially in place, uses a secret methodology to monitor people’s behavior, analyze the collected data and punish low scorers by restricting travel, access to luxury goods and other such perks.

Not only is the PRC’s use of AI technologies to control minority groups and its population at large troubling, but the PRC is also exporting such capabilities to other authoritarian regimes around the world.

Air traffic control engineers test high-definition cameras combined with artificial intelligence and machine-learning tools to improve landing capacity at London Heathrow Airport. AFP/GETTY IMAGES

The U.S. government, meanwhile, is working on a framework to regulate AI in the United States.

“We always want to use AI in a way that’s consistent with civil liberties and privacy and American values. So clearly, we don’t want to become a surveillance state like China,” Parker, the White House coordinator on AI policy, told Politico. “On the other hand, the opposite extreme is to over-regulate to the point where we can’t use it at all.”

Other factors could stall the PRC’s AI ambitions, such as the nation’s lack of contribution to the theories employed to create the tools on which the field is being built and the reluctance of Chinese companies to invest in basic research, analysts contend.

The PRC, for example, still trails the U.S. in AI hardware; U.S. companies manufacture most of the world’s AI-enabled semiconductor chips. The PRC also lacks “expertise in designing computing chips that can support advanced AI systems,” Zheng Nanning, director of the Institute of Artificial Intelligence and Robotics at Xi’an Jiaotong University, told the journal Nature in August 2019.

Moreover, the sheer scale of the PRC’s investment thus far has not translated into real results. “The downside to having a centralized focused approach is that you get very quickly to an end goal that may be the wrong goal. The advantage of the American innovation ecosystem is that we allow many good ideas to be explored in depth and we can see which ones are going to be fruitful,” Parker told Politico.

AI Next

The U.S. government’s Defense Advanced Research Projects Agency (DARPA) has historically succeeded by cooperating with the country’s leading researchers and innovation centers to produce game-changing capabilities, ranging from the internet to GPS and self-driving cars. DARPA is continuing to apply its winning research and development formula to AI. In 2018, DARPA unveiled its AI Next program to invest U.S. $2 billion in AI-related research over five years. The funding is to be strategically allocated to help usher in the next wave of AI and produce machines that understand and reason in context.

“With AI Next, we are making multiple research investments aimed at transforming computers from specialized tools to partners in problem-solving,” said DARPA Director Dr. Steven Walker. “Today, machines lack contextual reasoning capabilities, and their training must cover every eventuality, which is not only costly but ultimately impossible. We want to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”

“Within this U.S. $2 billion that we’re spending, it’s across a very wide range of projects — no two of which are alike — and so we’re placing a lot of strategic bets on technologies that may emerge in the future,” said John Everett, DARPA’s deputy director of the Information Innovation Office, according to Politico. “A lot of the money that’s going into the research in China seems to be going into pattern recognition. So, they will be able to do incrementally better pattern recognition by spending an enormous amount of money on it. But there’s a declining return to incremental expenditures.”

“In today’s world of fast-paced technological advancement, we must work to expeditiously create and transition projects from idea to practice,” Walker said.

Earlier generations of DARPA AI research endeavors are already bearing fruit. The agency, for example, has succeeded in developing Real-time Adversarial Intelligence and Decision-making (RAID) software that can predict the goals, movements and possible emotions of enemy forces five hours into the future. RAID applies an aspect of game theory to break large problems into smaller games, decreasing the amount of computational power needed to achieve a solution. In recent tests, the software outperformed human planners in terms of speed and accuracy and is close to being fielded for U.S. Army use.
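RAID’s actual decomposition method is not public, but the computational payoff of splitting a large game into smaller ones can be illustrated with a toy sketch. The example below is purely hypothetical (the payoff matrices and function names are invented for illustration): two independent zero-sum engagements are solved separately, and the result matches solving the much larger combined game, whose size grows multiplicatively with each added engagement.

```python
import itertools

def minimax_value(payoff):
    """Security value of a zero-sum matrix game: the row player
    maximizes the worst-case (minimum) payoff over column replies."""
    return max(min(row) for row in payoff)

# Two small, independent engagements (payoffs to one side); values invented.
game_a = [[3, 1], [0, 2]]
game_b = [[1, 4], [2, 0]]

# Decomposed approach: solve each 2x2 game on its own (4 + 4 cells).
decomposed = minimax_value(game_a) + minimax_value(game_b)

# Combined approach: build the product game where each side picks a joint
# action. The matrix is 4x4 (16 cells); with n engagements the cost grows
# multiplicatively, which is why decomposition saves computation.
combined = [
    [game_a[i][j] + game_b[k][l] for j, l in itertools.product(range(2), range(2))]
    for i, k in itertools.product(range(2), range(2))
]

# Because the engagements are independent, the values agree exactly.
assert minimax_value(combined) == decomposed
```

The saving comes from the additive structure: independent subgames can be valued separately and summed, avoiding the exponential blowup of reasoning over all joint actions at once.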

U.S. thought leaders remain optimistic that U.S. ingenuity and values will prevail in shaping how AI is adopted worldwide in the future.

“Assuming that the U.S. does in fact learn how to draw on all of its talent and embrace a national identity that reflects the world, American traditions of individualism, openness, rebellion, and humanism (notwithstanding a national infatuation with STEM [science, technology, engineering and math]) will offer the best chance of harnessing AI in the service of humanity rather than of private profits or public power. Or at least a better chance than China,” Dr. Anne-Marie Slaughter, president and chief executive officer of New America, a think tank dedicated to renewing America in the Digital Age, wrote in a March 2019 article published on the organization’s website.

AI Initiatives

FORUM talks with Daniel Castro, director of the Center for Data Innovation, about U.S. competitiveness

FORUM: What does the United States need to do to maintain its lead in artificial intelligence [AI]?

Castro: The U.S. needs a comprehensive national strategy on AI, which should include increased research and development [R&D] funding, targeted initiatives to increase skills in the workforce and educational pipeline, efforts to recruit and retain foreign AI workers, and a strategic effort to craft regulatory policies to enable more use of AI. On the regulatory side, this will especially require revamping policies allowing for the collection, use and sharing of data, both in the private sector and public sector. 

FORUM: What are your impressions of the U.S. artificial intelligence strategy?

Castro: This administration’s efforts are starting to gain steam. The executive order has kicked off a series of actions that will lead toward meaningful improvements in how the federal government lays the foundations for future work on AI. In addition, the recent [September 2019] White House AI summit was an important catalyzing event to spur enthusiasm and momentum across federal agencies to pursue their own AI efforts independently. As the administration makes it clear that there is top-level support for this work, more people in government will start making this a priority.


FORUM: How do you think the U.S. strategy could be expanded?

Castro: There is always more work to be done. One area is funding. It’s difficult, if not impossible, to get a clear sense of how much some other governments are pouring into AI R&D, yet it is substantial. Congress needs to prioritize additional funding to supplement what the private sector provides and the “business as usual” funding it already offers to computer science and related disciplines. There is also a need to better track this funding across different agencies, particularly in relation to the various AI R&D priorities that the administration has identified. A lot of R&D on AI will not come from the National Science Foundation, but from the Department of Energy, Health and Human Services, the Department of Transportation and other federal agencies. In addition, the U.S. government should consider joint funding initiatives with allies around the globe to foster research collaboration.

FORUM: How important is the Joint Artificial Intelligence Center (JAIC) to U.S. competitiveness in AI?

Castro: The JAIC will be especially important in the near term as a lack of talent in the Defense Department [DOD] constrains what defense agencies can do with AI. By consolidating expertise, DOD can better prioritize projects, focus on solving scaling issues, and develop a culture of AI-driven innovation within DOD.

FORUM: What is the bottom line of your recent report from the Center for Data Innovation?

Castro: The U.S. has an early lead in artificial intelligence, but China is gaining fast, and if the U.S. does not commit the resources necessary to compete, it will squander its early lead. Falling behind in AI could have seriously negative implications for the U.S. economy, national security, and overall global competitiveness. Many other countries are making AI a priority, and the U.S. needs to do so as well.

FORUM: Are most people aware that the U.S. still leads China in absolute terms? Were your findings surprising in any way?

Castro: China is further behind in some areas than I would have initially thought. It’s important to note, though, that this is a snapshot; while we are using the most recent data, some of it is still about a year out of date. And even over the past few years, we have seen China making significant strides forward, so the gap between the countries is narrowing quickly. Moreover, both the U.S. and China have a lot of focus on using AI in the military, and much of that work cannot be publicly compared.

But one of the most significant factors is that China leads in adoption of AI, especially around piloting the technology. And public polls show that Chinese citizens generally are optimistic about AI. So, China has the wind at its back as it pursues ever more ambitious AI initiatives. In contrast, the U.S. public is more pessimistic about the potential of AI, which likely limits the willingness of lawmakers to pursue the technology.

FORUM: How does Russia factor in this competition, especially in the realm of military technology?

Castro: Russia is a big player, too, although not at the same scale as China, the EU or the U.S. From a geopolitical standpoint, the bigger concern is what are the implications for the U.S. if Russia and China establish closer ties on AI. This is one reason why the U.S. needs to form close partnerships on AI with allies like Australia, Canada, France, Germany, Japan, South Korea and the U.K. [United Kingdom].

FORUM: What role will Europe play in AI?

Castro: Europe is a major force for AI research, and it is committing a lot of resources to this issue. From an economic standpoint, Europe right now is not as much of a threat to U.S. leadership in the field. But if European leaders revamp their AI strategies to focus more on commercialization and adoption, they could be a much stronger player.

FORUM: What is the significance of China leading in adoption of AI and data?

Castro: Adoption of AI is key for disrupting industries. The threat for most countries from China isn’t just that it leads in AI development, but that by being a lead adopter of AI, it will be more competitive in traded sectors like financial services or health services. China beats a lot of other countries on data partially because of its size, but also because of its policies. But there are still big opportunities for China to increase data sharing, particularly government data to the private sector. So again, this is an area where countries like the U.S. should not only look to improve domestically but also recognize that to compete with China they will need to form international partnerships with allies to ensure U.S. companies have access to global data sets.

FORUM: Do you think there is a need for an international ethics panel on AI?

Castro: Many of the ethical questions about AI are going to be highly context specific. That is to say that questions about AI ethics are going to be related to how AI is used in self-driving vehicles or credit-risk scoring or other specific scenarios. So, countries should not try to develop a single forum for debating AI ethics, but rather recognize that sectors where AI is having a transformative effect will need to consider the ethical implications of using the technology. But even in these sectors, the focus should likely not be on the ethics of AI itself, but on the ethics of the larger system or process in which it is used. Too often people get tunnel vision talking about AI ethics and ignore, or get distracted from, the broader ethical questions associated with a particular issue.

FORUM: What do you think of U.N. or other international efforts to establish a ban on so-called killer robots? Do you think humans should be maintained in the decision-making loop?

Castro: AI will be increasingly integrated into military and defense systems. Attempts to impose bans on the technology are likely misguided at this stage. Almost everyone agrees that no country should be unleashing unaccountable killer robots on the world. The harder question is deciding how much accountability is appropriate, under what conditions, and what the implications of those choices will be. Fortunately, there is still time to study this question and work toward developing global consensus and norms.