
Envisioning the Future Battlespace

Drone Killers 

In July 2019, the U.S. Marine Corps successfully used a new directed-energy weapon, the Light Marine Air Defense Integrated System (LMADIS), to take down an Iranian unmanned aerial vehicle that came within 1,000 meters of a U.S. Navy amphibious assault ship and failed to heed warnings. LMADIS is designed to blast radio signals that disrupt communications between a drone and its home base, but in this engagement it fried the drone’s circuits, according to media accounts.

“It’s not all that different from the drone zappers you can buy commercially,” Bryan Clark, former special assistant to the chief of naval operations, told Wired magazine. “It’s just higher power, and it operates on a wider frequency range. You can have so much power in a small frequency range or a little amount of power over a large frequency range.”

LMADIS consists of two specially outfitted Polaris MRZR all-terrain vehicles. The first functions as a command unit; the second is equipped with sensors and signal jammers. An operator interprets the sensor data the MRZR collects and can then decide to blast radio frequencies to sever communications between the drone and its base.

Two U.S. Air Force F-22 Raptor stealth jet fighters, with fifth-generation features, fly near Andersen Air Force Base near Agafo Gumas, Guam. REUTERS/U.S. AIR FORCE/MASTER SGT. KEVIN J. GRUENWALD

The U.S. military is testing other electronic warfare systems that can jam drones and cruise missiles. The U.S. Air Force, for example, in June 2019 tested its Tactical High Power Operational Responder, or THOR, a high-power microwave weapon that will eventually be capable of bringing down a swarm of drones with a single blast.

The Marines’ battle-proven LMADIS already offers advantages over previous capabilities. The radio weapon is less expensive than artillery and doesn’t require the precise targeting or optical sighting that laser weapons do, Clark said.

AI Warfighters

Artificial intelligence (AI) has outwitted chess grandmasters, military planners and even human pilots in simulated dogfights. 

“Already, an AI system can outperform an experienced military pilot in simulated air-to-air combat,” Kenneth Payne of King’s College London told The Economist magazine in August 2019.

The U.S. Defense Advanced Research Projects Agency (DARPA), however, wants to take AI to the next level in the cockpit by training warfighters to trust computers the way they trust other human beings. Through its Air Combat Evolution (ACE) program, DARPA wants to push U.S. pilots to trust AI with increasingly complicated fighter pilot operations. Through films such as Top Gun, “the media have kind of put dogfight up on this apex of human creativity and vision, but rather, in reality, a dogfight is a pretty simple problem to solve,” U.S. Air Force Lt. Col. Dan Javorsek, DARPA’s ACE program manager, said at a July 2019 conference, according to the FedScoop website. That’s why DARPA sees collaborative human-machine dogfighting as a good starting point for building trust.

“Being able to trust autonomy is critical as we move toward a future of warfare involving manned platforms fighting alongside unmanned systems,” Javorsek said in a DARPA release. “We envision a future in which AI handles the split-second maneuvering during within-visual-range dogfights, keeping pilots safer and more effective as they orchestrate large numbers of unmanned systems into a web of overwhelming combat effects.”

In this way, ACE will help the U.S. military train pilots to be battle managers and move away from mainly manned systems to a mix of manned and less-expensive unmanned systems that can be rapidly developed, fielded and upgraded to address evolving threats.

Dogfighting, although nonlinear in behavior, offers measurable objectives and outcomes within the limits of flight dynamics, making it well suited to advanced tactical automation. As in human combat training, fighter instructor pilots riding in the autonomous aircraft will closely monitor the AI’s expanding performance, helping tactics co-evolve with the technology.

“Only after human pilots are confident that AI algorithms are trustworthy in handling bounded, transparent and predictable behaviors will the serial engagement scenarios increase in difficulty and realism,” Javorsek said.

Forum Staff
