Human-Machine Military Teams Harness the Best of What Each Offers
Gone are the days of turning to science fiction to glimpse a future in which the artificial intelligence (AI) of machines rivals the brain power of mankind.
“Artificial intelligence is already here,” said Brig. Gen. Matthew Easley, director of the U.S. Army Futures Command’s AI Task Force. “There’s huge AI applications in your hand as you hold your smartphone. It’s both in your device and in all the systems that it’s connected to.”
Defined simply, AI represents any type of computing system used to augment decision-making — including the ability to make decisions on its own.
Researchers working to harness AI for military applications say armed forces remain years away from deploying machines with full decision-making capabilities on the battlefield. The future of multi-domain operations (also known as cross-domain operations) will instead consist of “centaur” teams, in which man and machine combine the best abilities of each for optimal performance.
“The fact that the best human chess players can no longer beat supercomputer chess players is old news. What’s less old news is that amateur chess players who have computers that you can buy at Best Buy are beating both grandmaster chess players and the best supercomputers in chess,” Broc Perkuchin, a retired U.S. Army colonel who held command and staff positions in engineering and logistical organizations in the Middle East, Asia and the United States, said during the Land Forces Pacific (LANPAC) Symposium and Exhibition held in May 2019 in Honolulu, Hawaii. “That’s because these teams, these human-machine teams, dubbed centaurs after the mythical creature that’s half man, half horse, bring the best of what humans have to offer — intuition, judgment and creativity — and they bring the best of what machines have to offer in terms of data processing speed and capacity.”
From the military’s perspective, these centaur teams must be integrated to seamlessly function as one body and one mind, said Perkuchin, who now works for Cougaar Software Inc. as vice president of government solutions and leads the company’s efforts to enhance the U.S. Department of Defense’s operational performance through application of the company’s multiagent systems AI technology.
“This isn’t about artificial intelligence replacing Soldiers, and this isn’t about Soldiers using artificial intelligence as tools,” Perkuchin said. “It’s about a physically and mentally integrated symbiotic relationship, where each brings the best that they have to offer to the fight. Ultimately, it’s about machines and humans helping each other think.”
Beyond research and development, proper implementation of AI into multi-domain operations requires a vetted process, infrastructure, network, policies and people, said Easley, who served as chairman of the AI and autonomous capabilities panel during LANPAC.
Part of the process, Easley said, requires deep learning, which involves joint training of Soldiers and AI. Take, for example, rifleman training. A Soldier would use a smart scope with technology similar to a smartphone that would collect data on Soldier performance to help predict accuracy and even identify the best shooters in a unit.
Deep learning also involves discovering how the machine works and teaching it how to learn as it collects data.
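The marksmanship-tracking idea Easley describes can be sketched in a few lines of code. Everything below is illustrative: the shot log, soldier names, and hit-rate ranking are invented for the example and do not reflect any actual Army system.

```python
from statistics import mean

# Hypothetical scope log: (soldier_id, distance_m, hit) records such as
# a smart scope might collect during rifle training.
shots = [
    ("alpha", 100, True), ("alpha", 200, False), ("alpha", 300, True),
    ("bravo", 100, True), ("bravo", 200, True), ("bravo", 300, True),
    ("charlie", 100, False), ("charlie", 200, True), ("charlie", 300, False),
]

def hit_rates(log):
    """Aggregate per-soldier accuracy from raw shot records."""
    by_soldier = {}
    for soldier, _dist, hit in log:
        by_soldier.setdefault(soldier, []).append(1.0 if hit else 0.0)
    return {s: mean(h) for s, h in by_soldier.items()}

def best_shooters(log, top_n=1):
    """Rank soldiers by observed hit rate, best first."""
    rates = hit_rates(log)
    return sorted(rates, key=rates.get, reverse=True)[:top_n]

print(hit_rates(shots))
print(best_shooters(shots))  # ['bravo']
```

A real system would add far richer features (range, wind, weapon state) and a predictive model rather than a simple average, but the pipeline shape — log, aggregate, rank — is the same.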
Developers still have to train machine models to identify help versus harm, fear and other emotions, as well as physical objects in a scenario. Experts warn that establishing that baseline knowledge for AI should remain objective, because data manipulation on any level and by any means presents challenges and the risk of injecting false or harmful information. Humans come with their own biases, and developers must be careful not to introduce those biases into AI components.
“Any technology — or, really, anything that we build — reflects the values, the norms, and, of course, the biases of its creators. We know that the people who build AI systems today are predominantly male, white and Asian, and a lot of the innovations come out of the United States,” said Douglas Yeung, a social psychologist at Rand Corp. whose specialty includes human behavior. “People have expressed concern that this could potentially introduce bias. It’s of concern because AI, by its very definition, can have broader impact. We should be asking, ‘What might be the unintended consequence of bias?’”
Companies have realized that they can’t train facial-recognition technology by mainly using photos of Caucasian men because that feeds a bias into the algorithms, explained Osonde A. Osoba, a Rand information scientist with a background in the design and optimization of machine learning algorithms.
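The training-data imbalance Osoba describes can be made concrete with a simple audit. The demographic labels, counts and the underrepresentation threshold below are invented for illustration; a real audit would run against an actual dataset’s annotations.

```python
from collections import Counter

# Hypothetical demographic labels for a face-recognition training set.
training_labels = (["caucasian_male"] * 800 + ["caucasian_female"] * 90
                   + ["asian_male"] * 60 + ["black_female"] * 50)

def balance_report(labels, tolerance=0.5):
    """For each group, report its share of the data and whether that share
    falls below `tolerance` times an even split across all groups."""
    counts = Counter(labels)
    even_share = 1.0 / len(counts)
    total = len(labels)
    return {group: (n / total, n / total < tolerance * even_share)
            for group, n in counts.items()}

for group, (share, flagged) in balance_report(training_labels).items():
    print(f"{group:18s} share={share:.2f} underrepresented={flagged}")
```

As Osoba notes, rebalancing the data is necessary but not sufficient: a balanced training set says nothing about whether the model’s errors are distributed fairly across groups once deployed.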
“But better training data alone won’t solve the underlying problem of making algorithms achieve fairness,” Osoba said. “Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan or the length of a prison sentence, AI will have to be made more transparent — and more accountable and respectful of society’s values and norms. Accountability begins with human oversight when AI is making sensitive decisions.”
Perkuchin agreed. “There’s no switch where we say, now we do everything with AI. Make sure the right verification techniques are in place,” he said. “There’s a difference between decision enablement and the actual decision. We’re far away from the autonomous decision whether to shoot or not to shoot or make a particular action.”
Accordingly, it’s important to design products that enhance a Soldier’s performance and not make his life more difficult, Easley said. “Provide only what’s necessary so that Soldiers can win and have decision advantage to operate at the highest level against the risks as they unfold,” Easley said.
Soldiers, not machines, should maintain final decision-making authority, he added. “You still need to apply [a] commander’s judgment. The laws of war don’t go away. Design systems that still allow for the human operator to make the decision.”
Three potential applications of AI at the operational level illustrate wide-ranging applications for the military: omnipresent and omniscient autonomous vehicles; big-data-driven modeling, simulation and wargaming; and focused intelligence collection and analysis, according to Zachary S. Davis, a senior fellow at the Center for Global Security Research at Lawrence Livermore National Laboratory and a research professor at the Naval Postgraduate School in Monterey, California. He expounds on them in a March 2019 report titled “Artificial Intelligence on the Battlefield: An Initial Survey of Potential Implications for Deterrence, Stability and Strategic Surprise.”
Exploiting the new generation of autonomous vehicles is a high priority for military application given the focus on navigation for a variety of unmanned land, sea and air systems, Davis contends. “Autonomous vehicles and robotics are poised to revolutionize warfare,” Davis wrote. “AI-informed navigation software supported by ubiquitous sensors enables unmanned vehicles to find their way through hostile terrain and may eventually make it possible for complex formations of various types of drones to operate in multiple domains with complementary armaments.”
Easley shared similar sentiments during LANPAC, when he said, “Let the robot do the dirty and dangerous work. Don’t put Soldiers at risk. Use drones or other hardware.”
Where big data and simulation are concerned, models have enabled scientists to confirm the reliability of nuclear stockpiles without nuclear testing, for example.
“Simulation and modeling [are] already a key part of the design process for nearly all major weapons systems, from jets and ships to spacecraft and precision-guided munitions,” Davis wrote. “Massive modeling and simulation will be necessary to design the all-encompassing multi-domain system of systems envisioned for battle management and complex missions such as designing, planning and managing systems for space situational awareness.”
For intelligence collection and analysis, machine learning will remain an important tool to all analysts who consider information from a combination of sources, locations and disciplines to understand the global security environment, Davis wrote. “Machine learning also makes it possible to combine open-source trade and financial data with multiple forms of intelligence to glean insights about illicit technology transfers, proliferation networks, and the efforts of proliferators to evade detection. These insights enable analysts to inform policy makers and support counterproliferation policy and actions.”
Insights gleaned from AI also have practical applications in the field, according to Perkuchin. AI can help locate the golden needle, predict when a platform will break and eliminate communication problems between armies that don’t speak the same language, he said.
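One application Perkuchin mentions, predicting when a platform will break, often reduces to trend extrapolation over sensor data. The vibration readings and failure threshold below are hypothetical; this is a minimal sketch of the idea, not a fielded predictive-maintenance model.

```python
# Hypothetical vibration readings (mm/s) sampled once per day from a
# vehicle drivetrain sensor; the failure threshold is an assumed limit.
readings = [2.0, 2.1, 2.3, 2.4, 2.6, 2.7, 2.9, 3.0]
FAILURE_THRESHOLD = 5.0

def days_until_threshold(values, threshold):
    """Fit a least-squares line to the readings and extrapolate forward
    to estimate how many days remain before the threshold is crossed."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # no upward trend, so no failure predicted
    intercept = y_mean - slope * x_mean
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - (n - 1))  # days remaining from the last sample

remaining = days_until_threshold(readings, FAILURE_THRESHOLD)
print(f"Predicted days until maintenance needed: {remaining:.1f}")
```

Production systems replace the straight-line fit with learned degradation models, but the payoff is the same one Perkuchin describes: schedule the repair before the platform fails in the field.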
“Most significantly, a broader application of artificial intelligence helps multi-domain operations commanders achieve convergence, which is a rapid and continuous integration of capabilities in all domains. That is a key to a centaur army that will best deploy AI.”
After all, Perkuchin concluded, “It’s a human-machine team for the next many years that will yield the most power. Elevate a Soldier, elevate a command environment.”