Deepfake audio falsely portrays Marcos as confrontational
FORUM Staff
Philippine President Ferdinand Marcos Jr. did not call for military action against “a particular foreign country” as depicted in a deepfake online post that manipulated his voice, his communications office said.
The forged audio surfaced in late April 2024 amid increasingly aggressive actions by the People’s Republic of China (PRC) in part of the South China Sea that Beijing claims as its territory despite an international tribunal’s 2016 ruling that the region is within the Philippines’ exclusive economic zone.
The aggression continued after the fake post, with Chinese coast guard ships attempting to deter Philippine vessels from patrolling Scarborough Shoal, The Associated Press (AP) reported. “China’s coast guard and maritime militia vessels harassed, blocked and rammed vessels of the Philippine coast guard and the Bureau of Fisheries and Aquatic Resources,” Philippine authorities said.
The Philippines and its Allies and Partners have condemned the Chinese maritime assaults.
A “foreign actor” likely is responsible for the deepfake posted on a video streaming platform, Marcos’ communications office said. “No such directive exists nor has been made,” the office reported, adding that the deepfake account was removed. The government is investigating the post.
The discredited video featured audio intended to portray Marcos as ordering “armed forces and special task groups” to respond appropriately should the PRC “attack” the Philippines, according to Rappler, a Manila-based news website.
A Philippine lawmaker suggested classifying deepfake technology that threatens national security as terrorism, the government’s Philippine News Agency reported. Lanao del Norte Rep. Mohamad Khalid Dimaporo said the Marcos deepfake was a “sabotage” of the president’s foreign policy, according to the news agency.
However, the Philippines’ Cybercrime Investigation and Coordinating Center said an individual, not a foreign government, likely was responsible for the deepfake post.
The phony post appeared near the beginning of Balikatan, the largest Philippines-United States military exercise, which in its 39th iteration in 2024 included Australian and French forces and 14 observer nations. Beijing claims the annual drills aggravate tensions and undermine regional stability, Reuters reported.
Manipulated audio and video are increasingly common on social media platforms. Rapid advances in generative artificial intelligence (AI) make such deepfakes harder to detect and, therefore, a favored disinformation tool.
The United Nations and many individual countries are considering how to regulate the technology. Japanese Prime Minister Fumio Kishida has proposed a regulatory framework, the AP reported in May 2024. “Generative AI has the potential to be a vital tool to further enrich the world,” Kishida said, but “we must also confront the dark side of AI, such as the risk of disinformation.”
In late 2022, pro-PRC bot accounts on Facebook and Twitter, now X, distributed avatars created by AI software, according to The New York Times. It was the first known instance of deepfake video technology being used to create fictitious people as part of a state-aligned disinformation campaign, the newspaper reported in February 2023.