Countering cognitive warfare: Lessons for the EU from Taiwan
WRITTEN BY JIA YIN CHEN AND LUC VAN DE GOOR
12 December 2025
Democracies around the world face a growing threat from cognitive warfare. Social media in particular is being exploited to spread disinformation meant to mislead the public, interfere in elections, destabilise democratic institutions, and generally undermine public confidence in governments.
The European Union and several European states have in recent years been the targets of election interference believed to emanate from Russia, through state agencies or their proxies. Asian democracies are facing similar challenges, particularly Taiwan. The Economist Intelligence Unit ranks Taiwan as the world’s 12th strongest democracy, while Reporters Without Borders ranks it 24th in terms of press freedom. Yet Taiwan has also topped the V-Dem Institute’s ranking of countries most targeted by foreign governments’ dissemination of false information for 11 consecutive years.
Taiwan’s democratic processes and values are under relentless attack from cognitive warfare — propaganda and disinformation meant to erode a population’s morale, social cohesion, and confidence in the state, much of it perpetrated through social media. But preserving Taiwan’s highly democratic and open society means striking a delicate balance: curbing the destructive influence of disinformation without resorting to censorship that would compromise freedom of speech.
A sharp rise in social media disinformation
Digital platforms, particularly social media, are now the main source of news for most of Taiwan’s population. In 2025, around 91 per cent of the Taiwanese population has an internet connection, and the number of active social media identities is equal to around 80 per cent of the total population — both levels far above the global averages.
Global social media platforms like Facebook, Instagram, TikTok, and YouTube are widely used in Taiwan, although two Taiwan-based Chinese-language platforms — PTT and Dcard — are also among the most popular. Taiwan is also one of the largest markets for the Japan-based LINE messaging platform. The use of social media to spread disinformation has, perhaps inevitably, become a key tactic in cognitive warfare against Taiwan.
According to analysis by Taiwan’s National Security Bureau (NSB), around 2.2 million instances of “contentious messaging” — false or misleading information attributed to the Chinese Communist Party’s (CCP) attempts to manipulate public opinion and deepen societal tensions — were identified in 2024, far surpassing the 1.3 million recorded in 2023. Most of these were disseminated via social media, with the use of online forums (including PTT) increasing by 664 per cent and the use of video-sharing platforms (including TikTok) rising by 151 per cent year on year.
The NSB also identified 28,216 “abnormal” social media accounts: accounts that, for example, concealed basic information about the account holder, had unusual friend lists, or echoed official CCP announcements. The vast majority of these were Facebook accounts, although the number on TikTok also increased sharply.
Disinformation accompanying Chinese military exercises
Particularly intensive disinformation campaigns on Taiwanese social media have often coincided with military exercises carried out by China’s People’s Liberation Army (PLA) in Taiwan’s vicinity. For example, when United States House Speaker Nancy Pelosi led a congressional delegation to Taiwan on 2–3 August 2022, China responded with a series of provocative military actions, including large-scale military exercises that encircled the island immediately after her departure.
On 8 August 2022, Taiwan’s Ministry of National Defense Political Warfare Bureau held a press conference urging the public to ignore surging disinformation. The Bureau claimed to have identified 272 separate contentious messages that had been massively replicated and disseminated on social media between 1 and 8 August. These messages were categorised as promoting “military reunification”, undermining the Taiwanese government’s credibility, or weakening military and civilian morale. Examples included claims that a PLA Su-35 combat aircraft had crossed the Taiwan Strait and intercepted four Taiwanese F-16s, and that a PLA naval vessel had approached Taiwan’s coastline. Both were apparently aimed at instilling the idea that the PLA was capable of invading Taiwan by force.
The combination of military exercises and disinformation may have had at least a temporary impact on Taiwanese public opinion. According to an opinion poll carried out on 8–9 August, 39 per cent of respondents believed an attack on Taiwan by the PLA was imminent, up from 27 per cent six months earlier. Similarly, the share of respondents who thought a PLA attack was unlikely or impossible fell from 63 to 53 per cent.
Two years later, on 14 October 2024, the PLA carried out another military exercise around Taiwan, dubbed “Joint Sword-2024B”. Once again, this was accompanied by a wave of social media disinformation. One false narrative appearing on the PTT online forum claimed that a liquefied natural gas tanker operated by Taiwan’s state energy company CPC Corporation had been blocked from entering the port of Kaohsiung in southern Taiwan by PLA Navy vessels during the exercise. Messages variously blamed this on the incompetence of the Ministry of National Defense and the Coast Guard Administration as well as on the ship’s captain being hungover.
A preliminary investigation by Taiwan’s Ministry of Justice Investigation Bureau published the following day identified this as cognitive warfare. It said the IP addresses of the accounts that initiated these messages were either routed through proxy servers or originated from hacked internet surveillance cameras. It also said similar tactics had been used by “foreign cyber actors” in recent years to hijack accounts on platforms like PTT and impersonate Taiwanese users.
In addition, CPC Corporation promptly issued a press release clarifying that the LNG carrier had not been obstructed by the PLA Navy. Other government agencies, including the Ministry of Economic Affairs and the NSB, also issued statements debunking this claim, and the Taiwanese media quickly relayed these clarifications to the public.
Disinformation during the 2024 presidential election
Social media disinformation was also widely deployed during Taiwan’s 2024 presidential election, even including the use of deepfake technology to fabricate news about presidential candidate Lai Ching-te. According to one report, between 10 and 16 January 2024 — from three days before to three days after voting took place — 188 YouTube channels uploaded a total of 429 videos and 173 TikTok accounts posted 469 videos mentioning vote rigging. These videos were viewed more than 32 million times.
One group of videos claiming electoral fraud appeared on TikTok on 13 January, most beginning with the same opening line, “I was shocked by something I just saw”, and replaying footage of the 2020 presidential election vote-counting process. They claimed that the footage showed electoral fraud and further implied that the 2024 vote would also be compromised. The uniformity of these videos, in both script and tone, strongly suggested a coordinated and carefully planned effort to push a specific narrative.
The independent Taiwan FactCheck Center (TFC) had already begun, one month prior to the election, to monitor and track various types of disinformation concerning electoral procedures appearing on social media. Drawing on experience of several votes since 2020, the TFC even released an updated version of its “script of electoral rumours” a month before the 2024 presidential election to warn people of what false narratives they might see.
A second wave of videos appeared after the voting. The most widely shared of these — titled “If this isn’t vote rigging, what is?” and viewed more than 2 million times — contained clips of vote counting filmed by onlookers, which it claimed showed incidents of electoral fraud. The TFC rapidly investigated the 28 separate incidents shown in this video and discovered that all the clips showed issues that had been dealt with appropriately or were otherwise misleading. For example, one of the clips purporting to show a “hidden compartment” for concealing phony ballot papers in a cardboard ballot box in fact only showed a stabilising flap in the bottom of the box. Electoral officials confirmed that the box had been checked both before and after the voting to ensure it was empty. The TFC published a further 35 fact-checking reports related to electoral disinformation during the election period alone.
On 17 January 2024, Taiwan’s Central Election Commission announced that cases of rumour dissemination would be referred to judicial authorities for investigation. Subsequently, eight of the ten most-viewed TikTok videos related to disinformation about electoral fraud were deleted.
Taiwan’s countermeasures against disinformation
According to an interview with Audrey Tang, who was Taiwan’s first Minister of Digital Affairs between 2020 and 2024, one principle of the government’s response to disinformation around the 2024 presidential election was to publish clarifications before a false narrative could become entrenched, ideally within 60 minutes. Taiwan has established a “whole-of-government” mechanism to combat disinformation. The NSB takes the lead, alerting government agencies to disinformation concerning their area of work. The agencies then issue clarifications to the public through the government’s Breaking News Clarification platform, and these are frequently reported by the news media.
The NSB referred over 3,900 contentious messages to government agencies in 2024. Furthermore, the NSB publishes two or three fact-checking reports each month on its website concerning disinformation it deems to threaten national security or disrupt social stability. These reports are produced in collaboration with third-party fact-checking organisations such as the TFC and MyGoPen.
However, since user traffic to such official channels is limited, the Taiwanese government collaborates with the LINE Fact Checker platform, which quickly relays clarifications to make this information more widely accessible. LINE users can also submit suspicious messages to the Fact Checker’s LINE account for verification. As of November 2025, the platform had received over 800,000 user reports covering more than 30,000 suspicious messages — not only texts but also suspect URLs, text-based images, and even voice messages.
Suppressing disinformation without censorship
A survey last year showed a rapid rise in the percentage of Taiwanese citizens who believe they have been exposed to disinformation: from 75 per cent in 2022 to 95 per cent in 2024. This not only reflects the increasing prevalence of disinformation but also suggests that the public has become more alert and capable of identifying it. The survey also found that the proportion of citizens using fact-checking mechanisms increased from 61 per cent in 2023 to 71 per cent in 2024 — indicating that fact-checking has become a key tool for Taiwanese society in countering disinformation.
Unlike the EU’s ban on Russia-linked media outlets, Taiwan’s approach makes only limited use of coercive measures such as prosecuting perpetrators of disinformation or ordering social media posts and videos containing disinformation to be taken down. Instead, it emphasises the use of fact-checking and official clarifications, in an effort to preserve freedom of speech while combatting disinformation.
Several European countries have also experienced Russian election interference through social media disinformation, as seen in the 2025 German federal election and the 2024 Romanian presidential election. Social media continues to serve as a channel for disinformation, despite the efforts of civil society, regulators, and the platforms themselves to expose such activity. Providing transparent and trustworthy channels of information can therefore help the public verify suspect claims, access accurate information, and become better equipped to counter disinformation.
The European Commission has also developed a range of policy measures to tackle online information manipulation and foreign interference. These focus on making online platforms responsible for proactively combatting the spread of false or misleading information, and have evolved over time from voluntary self-regulatory instruments into binding regulation. The 2018 EU Code of Practice on Disinformation set standards and commitments for fighting disinformation on online platforms. The Code was strengthened in 2022, with 34 signatories agreeing to increase the transparency and accountability of their platforms’ actions, and a Transparency Centre was established to provide information on the Code and the actions taken to implement it.
In August 2023, the Digital Services Act (DSA) became legally enforceable. The DSA requires online intermediaries and platforms to prevent illegal and harmful activities, including the spread of disinformation. It aims to ensure user safety, protect fundamental rights, and create a fair and open online environment. To strengthen enforcement, the 2022 Code of Practice was integrated into the DSA framework in February 2025 as a Code of Conduct on Disinformation, making it a benchmark for platforms’ compliance.
The EU has also adopted the Artificial Intelligence (AI) Act, the world’s first comprehensive legal framework on AI, which aims to address the risks posed by AI and position Europe as a global leader in AI regulation. Recognising the effects of disinformation on elections, the EU further enacted the Regulation on the transparency and targeting of political advertising (2024). This Regulation mandates that political adverts be clearly labelled and provide detailed information about their funding sources and any targeting techniques used. Most of its provisions apply as of 10 October 2025.
Comparing the EU and Taiwanese approaches reveals two distinct paths. The steps taken by Taiwan are concrete and directly geared towards societal resilience. The EU’s approach, by contrast, is more legalistic, centred on establishing regulatory frameworks. This has created friction in the transatlantic relationship: the US views the DSA’s content moderation policies as a form of censorship incompatible with America’s free speech tradition, which has strained the broader security relationship.
The examples from Taiwan and Europe show that countering disinformation and cognitive warfare involves balancing various dimensions, particularly in an increasingly connected world where the national and international levels are difficult to disentangle. Crucially, countering cognitive warfare is not just about the timely dissemination of factual counter-narratives. It must also build each citizen’s defences against disinformation, making them more sceptical of the information they receive and willing to actively verify or debunk it. This commitment helps keep public debate in Taiwan both free and grounded in facts.
DISCLAIMER: All views expressed are those of the writer and do not necessarily represent that of the 9DASHLINE.com platform.
Authors’ biographies
Jia Yin Chen is a former Guest Researcher in the SIPRI China and Asia Security Programme. His main research areas include cognitive warfare, grey zone conflicts, and psychological warfare.
Luc van de Goor is Senior Researcher and Director of Studies on Conflict, Peace and Security at the Stockholm International Peace Research Institute (SIPRI).

Image credit: Unsplash/Emma Ou.