[FactLink Talk: Sharing Taiwan's Case Study on Defending Against AI Disinformation at IRIS 2025 in Indonesia]
IRIS 2025: Advancing Information Resilience in the Age of Generative AI
On 21 August 2025, FactLink founder Summer Chen was invited by the Indonesian think tank Safer Internet Lab and the Center for Digital Society to present at IRIS 2025 on how Taiwan's government and civil society prepared for and defended against potential AI-generated disinformation during the 2024 presidential election.
Related links:
The IRIS 2025 symposium webpage is here
The IRIS 2025 conference livestream video is here
The report presented by FactLink founder Summer Chen at the symposium is here

IRIS 2025: Advancing Information Resilience in the Age of Generative AI
Safer Internet Lab (SAIL), in collaboration with the Center for Digital Society (CfDS) UGM, held a regional academic conference titled the Information Resilience and Integrity Symposium (IRIS) on 21 August 2025 at the Faculty of Social and Political Sciences (FISIPOL), Universitas Gadjah Mada (UGM).
The symposium was held in response to the growing risks posed by generative AI. While the technology brings opportunities, it also has the potential to amplify misinformation that threatens democracy, fuel online fraud that harms the digital economy, and influence geopolitics through Foreign Information Manipulation and Interference.
Carrying the theme “Generative AI and Information Resilience in the Asia-Pacific: Actions and Adaptions,” IRIS served as a platform where policymakers, the private sector, and academics came together to exchange ideas and propose sustainable actions to address emerging challenges.
Panel: The role of information in democratic resilience
Speakers: Summer Chen (FactLink Taiwan), Rahmat Fauzi (Pilkada.AI), Michelle Anindya (freelance journalist), and Dr. Abdul Gaffar Karim (Faculty of Social and Political Sciences, UGM)
Moderator: Titik Firawati (Faculty of Social and Political Sciences, UGM).
Through these sessions, IRIS aimed to raise public awareness of both the risks and opportunities of GenAI, foster cross-sector dialogue, and disseminate SAIL’s research as a reference for policy development.
Research Paper: Countering AI Disinformation: Lessons from Taiwan’s 2024 Election Defense Strategies, Summer Chen (2025)
The full research paper is here.
Summary: This study examines the emergence and impact of AI-generated and deepfake disinformation during Taiwan’s 2024 presidential and legislative elections. Although AI-generated content, deepfake video, and AI-cloned audio did not become dominant tactics, they revealed evolving strategies of information manipulation. Drawing on official records, interviews, and firsthand observations, the report analyzes responses from both government and civil society, including legal amendments, law enforcement actions, and AI literacy initiatives. Law enforcement faced significant challenges: limited AI verification tools, delayed responses from platforms, difficulty tracing the origins of manipulated content, and politically sensitive cases in which authentic material was falsely denied as deepfake. Civil society experimented with collaborative verification networks and public AI literacy education, achieving breakthroughs in public awareness while encountering resource and capacity constraints. As AI-driven disinformation continues to evolve, its growing influence underscores the urgent need for proactive policies, stronger collaboration between tech experts and media professionals, and greater investment in training and AI literacy to strengthen information resilience.



