The Hoover Institution's Project on Taiwan in the Indo-Pacific Region held The FIMI Challenge: Countering Foreign Information Manipulation and Interference in Taiwan and the United States on Monday, March 9, 2026, from 2:00 - 5:00 PM PT.

This event explored the practice of foreign information manipulation and interference (FIMI) in two democracies, Taiwan and the United States, and the responses of both countries' governments, private companies, and civil society organizations to this challenge. In Taiwan, the threat of influence and interference from the People's Republic of China (PRC) in its information ecosystem looms large. The Taiwan government has struggled to develop an effective response while balancing respect for civil liberties and freedom of speech: a tension manifested in the decision in December 2025 to ban the social media platform RedNote (Xiaohongshu). In the United States, efforts during the Biden administration to limit the spread of COVID misinformation online, some of it clearly tied to foreign influence campaigns originating in the PRC, led to a political backlash, and as a consequence some social media companies have taken a more passive approach to FIMI. At the same time, however, motivated by worries about PRC influence and the data security of American citizens, the US Congress passed a law requiring the Chinese company ByteDance to divest from its popular platform TikTok or face a government-imposed ban in the US market. With a compromise agreement now brokered by the Trump administration, TikTok remains available in the United States, but the underlying concerns about social media platforms as vectors for PRC influence remain.

This symposium brought together several experts from Taiwan and the United States who discussed the FIMI challenge, including the efforts of private social media companies, civil society organizations, and governments in both places.

- Okay, I'm going to go ahead and call us to order. I want to extend a welcome to Sergey, thanks for joining us, to our other panelists, and most importantly to our audience here in person and online, wherever you may be; via the magic of the internet, we can reach you all out there in the ether. I am Kharis Templeman. I'm a research fellow here at the Hoover Institution at Stanford, and I manage our Project on Taiwan in the Indo-Pacific Region. Our event today is organized in partnership with a think tank in Taiwan called DSET, the Research Institute for Democracy, Society, and Emerging Technology. They've done us a big favor by helping bring some of Taiwan's foremost experts on the topic of foreign information manipulation and interference, or FIMI. This event will explore the practice of FIMI in democracies in both Taiwan and the United States, and the responses of both countries' governments, private companies, and civil society organizations to the challenge of foreign interference within our democracies. In Taiwan, of course, the threat of influence and interference from the People's Republic of China looms large. Taiwan has a freewheeling information ecosystem; there are very few, or traditionally have been very few, controls over Taiwan's media and information space. It is well connected to the global information commons, and that has made it both a resilient pillar of Taiwan's liberal democracy and a potential liability in the face of concerted PRC efforts to influence discourse in Taiwan in a way that favors their own views and interests. The Taiwan government has struggled to develop an effective response while balancing respect for civil liberties and freedom of speech. This is a tension manifested in the decision last December by the Taiwan government to ban the social media platform RedNote. That was a kind of crossing of the Rubicon in Taiwan.
No social media platform prior to that had ever been restricted in Taiwan, so there is an important potential precedent behind that decision. Xiaohongshu is a PRC social media platform that had become quite popular in Taiwan, particularly among young people under the age of 18. The decision to ban it was based in part on concerns that it was influencing national identification in Taiwan and increasing connections between the people of Taiwan and the people of mainland China. But there was also a claim that the platform was facilitating fraudulent behavior, collecting data, spying on the individuals using it, and generally behaving in a way that was inimical to Taiwan's security and national interests. I raise this example because in the United States we have also had heated debates about what to do about social media platforms that originate in authoritarian countries. In the United States, there were efforts during the Biden administration, if you remember way back at the beginning of COVID, to limit the spread of COVID misinformation online, some of it clearly tied to foreign influence campaigns originating in the PRC. That led to a political backlash in the United States, and as a consequence some social media companies in the US have now taken a more passive approach to moderating, identifying, or trying to fight back against FIMI. At the same time, however, there has been an opposite trend in the United States Congress, which was worried about PRC influence and the data security of American citizens using platforms that originated in mainland China, and ultimately passed a law in April 2024 requiring the Chinese company ByteDance to divest from its popular platform TikTok or instead face a government-imposed ban in the US market.
That ban survived a challenge that went all the way to the Supreme Court, but then President Trump stepped in and negotiated a kind of compromise agreement under which TikTok remains available, as of today, in the United States. However, the underlying concerns about social media platforms as vectors for PRC influence in the US remain, and they remain, I think, insufficiently addressed in this country. There are several ironies to this series of events involving TikTok, not least of which is that Beijing has positioned itself in the debate over TikTok as a champion of free speech while playing down the fact that TikTok itself remains banned in mainland China. For a regime that invented censorship of the internet and maintains a tightly regulated firewall between its domestic and international information spheres, that is breathtakingly cynical, in my view. But in the United States and in Taiwan, we find ourselves in a bit of a dilemma, because it is hard to claim the moral high ground here, to be advocates of free speech, at the same time that platforms in both countries are being threatened with bans based on the content they show or on fairly vague national security grounds. I raise all of these issues to point to at least three big challenges, which I hope we can grapple with in this event today. The first is the challenge of ownership. Platforms based in authoritarian regimes pose a challenge to the free and open societies that we all hold dear. This is more than simply authoritarian propaganda; it also includes attempts to facilitate bottom-up, people-to-people interaction in a way that benefits the interests of the regimes that ultimately control these platforms. One concern about Xiaohongshu in Taiwan was the way it was reshaping national identity in Taiwan.
And in fact, among young people, junior high school children in particular, there was considerable evidence that interaction with mainland counterparts on this platform was shaping their views of cross-strait relations in a way that ultimately caused them to favor eventual political unification, or at least that is the accusation. Control of these platforms, in other words, means content is moderated to conform to a particular narrative or worldview that can come into conflict with the national interests of people in Taiwan or in the United States. And that has caused these platforms to be viewed through a national security lens rather than simply a free speech lens. So I hope we are able to grapple today with the tension between those two objectives. Second, this example highlights the challenge of transnational social media for state sovereignty and security. To take another example that I think will hit home in this room: Facebook is by far the most important political platform in Taiwan today. Almost all elected officials have Facebook accounts; they respond to the news of the day there, they post, typically daily, and receive a lot of feedback from their constituents on their Facebook pages. But Facebook, as you all know, is not a Taiwanese company. Meta, with its headquarters just a few miles up the road from here in Menlo Park, ultimately decides how political speech is moderated and regulated on its platform, even in foreign countries. It is up to Facebook, not the democratically elected government or the courts of Taiwan, to decide what constitutes politically acceptable and unacceptable speech in Taiwan. Other online platforms face the same sort of tension between trying to promote the use of their platform and moderating in a way that is sensitive to local political and social contexts.
Those include Instagram, Threads, X, and YouTube as well, which are incredibly important sources of information for publics in democratic regimes. But the ways they are moderated, and the ways they counter foreign efforts to influence this information space, vary widely. So I hope we'll talk a bit today about the practices of different platforms and their responses to foreign interference and manipulation. The third point is that this example points to the need for content-neutral principles for moderating social media platforms, and potentially for regulating, or maybe establishing best practices for, ownership structures. Who decides which platforms are good or bad? A lot rests on the answer to that question. Can we respond to being influenced by foreign autocracies only by copying their heavy-handed censorship methods? There is a longstanding assertion among many China watchers in this country that we should insist on reciprocity in the relationship with the PRC. Well, reciprocity unfortunately will not work here. We cannot respond to the PRC's ban on Facebook, or on X, or even on TikTok by turning around and banning those platforms in our own country. We have to find some other method to combat PRC efforts to shape the discourse space and to tackle all of these questions. I think I've put quite a bit on the table already. We are fortunate to have a group of experts from Taiwan and from the United States who have spent a significant part of their careers investigating and thinking about these issues. Over the course of this event, we will have two different sessions, the first featuring Jerry Yu from Doublethink Lab and Wei-Ping Lee from FactLink. Wei-Ping Lee is actually at the University of Maryland and joins us here today from Maryland; Jerry Yu is at Doublethink Lab in Taiwan and has made the trip from Taiwan for this. For our second panel, we will have Joha Lai from DSET, and then we have our own Graham Webster from Stanford University.
Let me just get my agenda here. They will speak on the FIMI challenge for democratic governments and, more generally, the challenge of information manipulation in this age of rapidly evolving information technology. All right. I should note this event is public and on the record; we will make a recording available on our website once we wrap up. The format for today: our speakers will introduce their topics, speaking for roughly 10 to 15 minutes each; then we'll turn it over to some comments from our discussants; and then we'll open it up to a broader round of discussion with our terrific audience here today. Okay. So without further ado, I will turn us over to Session One: Civil Society, Private Firms, and FIMI. Larry, you have the chair's prerogative at this point.

- Well, after the rich introduction that you provided, I don't think I need to say much more. It's a pretty sobering challenge, one which has technical dimensions and, frankly, democratic, ethical, and philosophical dimensions, which I think you laid out well. So I would rather give the people who traveled a long distance and have a lot to say the floor to put the issues on the table. First is Jerry Yu, a senior analyst at Doublethink Lab. Jerry, we visited your tremendous and cutting-edge think tank around the time of the January 2024 elections, right? It was a very inspiring visit. So thank you for coming here, and why don't you kick us off?

- Okay. Thanks, Kharis and Larry, for the introduction. I'm Jerry Yu, from Doublethink Lab. We are a think tank, an NGO, founded in 2019. We focus on research about PRC influence, especially in the online information space, so we have been tackling FIMI incidents for at least four years. Today I'm going to share some case studies from our last national election, because we had an election on January 13th, 2024. Ahead of it, we assembled a number of organizations and individuals, fact-checkers and fact-checking organizations, civil society organizations, academics, and cybersecurity researchers, and together we launched a project called the FIMI Monitoring Hub. Together we detected a number of cases during the elections, and I was asked to talk about cases on TikTok. So today I'm going to share some case studies on TikTok, but if you are interested in these topics, you can scan the QR code to see all the cases we found during that time, including cases on Facebook, YouTube, and X. Okay, here's a quick introduction to that project. From October 2023 to January 2024 we ran this project with around 30 people, including our analysts and interns and other individuals who work in this field, and we collected about 10,000 posts. It worked like an observation network: if you saw something interesting and suspicious, you could report it to us. These 10,000 data points were all manually collected by our partners, without our scraping tools; our scraping tools are a separate data source. The platforms include Facebook, YouTube, Instagram, Dcard and PTT, which are two online forums from Taiwan, plus X and other Chinese social media platforms. And as you can see here, some topics were trending during that time, especially criticism of our ruling party.
The Democratic Progressive Party, the DPP, drew the most criticism during late October. And as election day approached, you can see more criticism of our political figures, like sex scandals and other material meant to give them a bad reputation. So as election day came, those political figures were targeted by threat actors. Okay, the first case we call the Indian migrant workers case. On September 26th, 2023, the Indian newspaper Hindustan Times reported that Taiwan and India would sign an MOU to bring Indian migrant workers to Taiwan. At that time, the number was not known, but we observed some accounts claiming that exactly one hundred thousand Indians were going to come to Taiwan. You can see narratives claiming that India is a country of sexual assault, and that Taiwan was turning into a "rape island." These negative narratives circulated on different social media platforms. As you can see in the case here, I'll play the videos from TikTok. Okay, so this is influencer A. You can see they use the same script, right? These are two different influencers, and they ordinarily do not talk much about political issues; they usually post about styling, how to dress fashionably. But during that time, they talked about this issue. We don't have evidence that the PRC gave them directions to do this, but they obviously got orders, whether from PR firms or other, higher-level organizations, to push this criticism that the Taiwanese government was going to open the door for Indian migrant workers to come to Taiwan. Okay. Another case concerns election fraud rumors.
During every election in Taiwan you get this rumor that there is going to be election fraud, and in the last election you can see the same pattern happen again. Okay, so they do not use the same sentences, but they have exactly the same meaning. There is definitely a script, and they use the same material and the same background music, right? So some PR firm definitely contacted them and had them produce this kind of video claiming there would be election fraud. We cannot identify whether this really came from the PRC, but TikTok is a platform that is popular and owned by ByteDance, so we are really afraid that if they use these tactics every year, more and more people will be affected by this kind of narrative and come to distrust our democratic system, especially the voting system. There is also another case claiming that our president, Lai, has an illegitimate child. You can see the same thing: these influencers don't use short videos to show themselves but use clips claiming that President Lai has an illegitimate child. Again, they don't usually talk about politics; they usually post about dancing and dressing well. They are small influencers, but somehow at that time they all repeated this criticism of our leaders. Maybe they got orders to talk about it.
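The tell Jerry points to, different influencers delivering what is evidently the same script, lends itself to simple automated screening. Below is a minimal illustrative sketch in Python; the account names, transcripts, and the 0.8 threshold are hypothetical, not drawn from Doublethink Lab's actual pipeline.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching character runs between two transcripts."""
    return SequenceMatcher(None, a, b).ratio()

def flag_shared_scripts(transcripts: dict[str, str], threshold: float = 0.8):
    """Return pairs of accounts whose video transcripts are near-identical,
    a simple signal of one script distributed to multiple influencers."""
    flagged = []
    for (acct_a, text_a), (acct_b, text_b) in combinations(transcripts.items(), 2):
        score = similarity(text_a, text_b)
        if score >= threshold:
            flagged.append((acct_a, acct_b, round(score, 2)))
    return flagged

# Hypothetical transcripts: two accounts reading the same talking points,
# plus one unrelated lifestyle account.
videos = {
    "influencer_a": "the government will open the door to 100000 migrant workers",
    "influencer_b": "the government will open the door to 100000 migrant workers soon",
    "influencer_c": "today I want to show you three ways to style a blazer",
}
print(flag_shared_scripts(videos))
```

A real investigation would run speech-to-text on many more videos first; the point is only that near-identical wording across nominally unrelated accounts is machine-detectable.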

- Yeah.

- There was also a very big case during that time criticizing our former president, Tsai. It is an ebook called "The Secret History of Tsai Ing-wen." Here we have hard evidence that it did not come from Taiwan: we checked this ebook's source code, and in the directories where the images are saved you can see simplified Chinese characters. So it definitely did not come from Taiwan. There are also Weixin (WeChat) pictures in it, which again points away from Taiwan; the original author might come from China or Malaysia or another place where simplified Chinese characters are used. Also, a lot of AI-generated videos were created and spread across different platforms; any platform you can name, you can find those clips. You can see it is actually pretty good: the voice might sound a little bit robotic, but look at the gestures. The videos talk about how the president became a powerful woman in Taiwan, delivered really like news anchors, right? Like legitimate anchors. The same script, and in different languages. And how do we know it is not from Taiwan, and what kind of product they used? We actually used reverse image search to check.
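The source-code check described above, spotting simplified-only Chinese characters in an ebook's internal file paths, can be sketched in a few lines. This is a toy heuristic: the character list is a tiny hand-picked sample, and a real analysis would use a full simplified-to-traditional conversion table such as OpenCC's.

```python
import zipfile

# A tiny sample of characters used in simplified Chinese whose traditional
# forms differ (国/國, 图/圖, 时/時, ...). A real analysis would use a full
# conversion table such as the one shipped with OpenCC.
SIMPLIFIED_ONLY = set("国图时书这们电话说员长")

def simplified_hits(text: str) -> list[str]:
    """Return the simplified-only characters found in a string."""
    return [ch for ch in text if ch in SIMPLIFIED_ONLY]

def scan_epub(path: str) -> dict[str, list[str]]:
    """Scan the internal file names of an EPUB (a ZIP container) for
    simplified-only characters, a clue about the author's locale."""
    results = {}
    with zipfile.ZipFile(path) as book:
        for name in book.namelist():
            hits = simplified_hits(name)
            if hits:
                results[name] = hits
    return results

# The same heuristic applies to any string pulled from the ebook source,
# e.g. a hypothetical image directory name like "images/图片1.png":
print(simplified_hits("images/图片1.png"))  # ['图']
```

Matching hits do not prove origin by themselves, but combined with artifacts like embedded WeChat images they narrow down where a document was authored.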

- Actually, it came from a ByteDance product called CapCut, with a feature called Digital Man. You can write a prompt and give it to the tool; it will create the script for you and generate those AI anchor videos. This was on all the platforms, and you can also see AI-generated profile pictures on different platforms like X and TikTok. If you put those profile pictures together, you can see that the eyes are in the same position in every image. I'm not sure if the tools are better now, but at that time this was one way we could detect AI-generated profile pictures. We also wanted to better understand the landscape of TikTok users in Taiwan, and the differences between active and inactive users in how their views are shaped. So we did a survey at that time, with samples of active and inactive TikTok users, classified by their frequency of TikTok use per day or per week. First we asked some overall questions: do you like or dislike the current ruling party? It's common sense that if you are a DPP supporter, you likely strongly or slightly like the DPP, while KMT supporters and TPP supporters dislike our ruling party. But the interesting thing is that overall, across supporters of all political parties, active TikTok users tend to hold more negative views of the ruling party, the DPP, than those who do not actively use TikTok, and the difference between active and inactive TikTok users is statistically significant. We also asked whether Taiwanese society has a very serious problem of judicial injustice. You can see that KMT supporters and TPP supporters agree that Taiwan has severe problems of judicial injustice.
And if you look at those who somewhat agree or strongly agree with this statement, you can also see a significant difference between active and inactive TikTok users among KMT supporters and TPP supporters. That means that if you use TikTok a lot, you are more likely to strongly agree that Taiwan faces severe problems of judicial injustice. We also asked a key question: is the current government the primary producer of misinformation or disinformation in Taiwan? The same pattern holds: among KMT supporters and TPP supporters, if you use TikTok a lot, you agree more strongly that our government produces misinformation or disinformation. There are a bunch of questions you can check in these reports; it actually includes three reports. The first is qualitative research interviewing adolescent TikTok users about their political views and their experience of using TikTok. The second and third are the quantitative research from which I shared some questions: what are the differences in political attitudes between active and inactive TikTok users? You can check the QR code, because I don't have much time to go through these studies. So the conclusions: we know that threat actors collaborated with influencers to amplify messaging around specific wedge issues. They don't create the stories; these are all wedge issues that have already arisen in Taiwan. Threat actors have also begun using AI tools to generate content and profile pictures for large-scale dissemination. On perceptions of the ruling party, active TikTok users tend to hold more negative views of the ruling party than those who don't use TikTok, and the same goes for trust in the judicial system: TikTok users are more likely to distrust our judicial system.
And the same goes for who the producer of disinformation is: TikTok users are more likely to answer that it is our government. We have some ongoing projects regarding TikTok: we collect TikTok data and show it on our dashboard. We actually use four personas to collect this data: pan-DPP (our ruling party), pan-KMT, pan-TPP, and pan-PRC. That helps us diversify our data sources. We are collecting this data now to prepare for the local elections this year and for the future. We all know there are differences between Douyin, which is for domestic users in China, and TikTok, which is for users outside China, so we want to better understand their different strategies. There are some technical barriers, but I think at the end of this year or next year we will start to collect Douyin data as well. So that's my presentation; we'll come to any discussion later.
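The eye-position tell mentioned earlier, AI-generated avatars whose eyes sit at nearly identical pixel coordinates, follows from the fact that popular face generators were trained on aligned face datasets, while genuine profile photos are framed inconsistently. Below is a minimal sketch assuming eye coordinates have already been extracted by some face-landmark detector; the coordinates and the 3-pixel threshold are hypothetical.

```python
from statistics import pstdev

def eye_position_spread(eye_coords):
    """Given per-image (x, y) pixel positions of the left and right eye,
    return the largest standard deviation across images in any coordinate.
    GAN faces trained on aligned datasets tend to place the eyes at nearly
    identical pixel positions across images."""
    lx = [l[0] for l, _ in eye_coords]
    ly = [l[1] for l, _ in eye_coords]
    rx = [r[0] for _, r in eye_coords]
    ry = [r[1] for _, r in eye_coords]
    return max(pstdev(vals) for vals in (lx, ly, rx, ry))

def looks_gan_aligned(eye_coords, max_spread: float = 3.0) -> bool:
    """Flag a batch of avatars whose eyes barely move between images."""
    return eye_position_spread(eye_coords) <= max_spread

# Hypothetical coordinates: five avatars with almost identical eye placement.
suspicious = [((300, 400), (420, 400)),
              ((301, 401), (419, 399)),
              ((299, 400), (421, 401)),
              ((300, 399), (420, 400)),
              ((302, 400), (418, 400))]
print(looks_gan_aligned(suspicious))  # True for this batch
```

The check only works on a batch of avatars suspected of belonging to one network, and the threshold would need tuning against real landmark-detector output.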

- Very good. Before we move on, could you just go back to the slide with the QR code, so that those who want to navigate to it

- Yep.

- Can have a few seconds to do that.

- Yeah, yeah. This one is a portal for the TikTok research. And the other is the election report; there are a bunch of cases there, not just about TikTok but also on other platforms.

- All right, well, I think we should have both presentations first and then we'll have the discussion. So, Wei-Ping Lee, the research director at FactLink. You know, that's an institution I'm less familiar with; maybe in the course of your presentation you can say a word about it. But we welcome you, and we're really looking forward to your presentation.

- Thank you. My name is Wei-Ping Lee; I am the research director at FactLink. First of all, I want to thank the Hoover Institution for inviting me here to share our thoughts and observations about how Taiwan counters foreign information influence and manipulation. Before I dive into my presentation, I want to spend a little time introducing our organization. FactLink is a very new organization; we launched in 2025. We are dedicated to digital investigation of Taiwan's information environment, and the information environment of the Chinese-language world more broadly, and we also promote media literacy and AI literacy. My colleagues and I previously worked for the Taiwan FactCheck Center, which may be better known. We have been through a lot of events and incidents, such as the COVID-19 pandemic and several elections in Taiwan and the United States, because every time the United States has an election, there is also disinformation spreading in Taiwan. Based on our experience, I want to shed some light on the challenges and also some success stories in Taiwan's fight against disinformation. In my presentation I want to use three stories to illustrate my points. The first is a success story: the 2024 presidential election, which Jerry also mentioned. But things have since changed, and we now face an uphill battle. In 2025, in the Great Recall, we observed some alarming trends that constitute very severe challenges. And the third story is how we are going to address these challenges. So, the first one: the 2024 election. On the night President Lai was elected, Hou Yu-ih and Ko Wen-je, the two other candidates, conceded defeat very soon.
However, on that night, as Jerry mentioned, rumors about electoral fraud spread widely on social media, mostly on YouTube and TikTok, promoted by some Taiwanese influencers. It was difficult to pinpoint which malicious actors created this disinformation, but we did know that some Taiwanese influencers were promoting it. Thankfully, this disinformation was quickly deterred by Taiwan's civil society and government. Why could we do that? First of all, the voting and vote-counting processes are very transparent in Taiwan, they were witnessed at the polling sites, and there are very detailed records of the voting. So when the rumors broke out on social media, the government reacted very quickly and published those figures as strong evidence that the election was fair. Based on this evidence, fact-checkers could produce fact-checking reports very quickly to show the public that things were okay, that there was no fraud going on. We also had the mainstream media and some influencers helping to spread these messages to the public, convincing people that the election was fair. So we quickly and successfully deterred this false information from spreading. That is the story of 2024. However, things changed, especially over the past year. AI technology has advanced amazingly and brought us a lot of problems. I want to use what happened in the Great Recall to highlight those challenges. In 2025, last year, as many of you know, Taiwan held the Great Recall, in which campaigners sought to recall 31 KMT legislators and one TPP official, Hsinchu Mayor Ann Kao. I know a lot of people had very high expectations that these officials would be successfully recalled.
However, it turned out that none of them were recalled, and this result was a shock. There were lots of reasons explaining the failure of the Great Recall, and disinformation might be one of them, though I won't say it was a deciding factor. But I do want to highlight some very concerning trends we saw in this recall. First of all, we are now facing a propaganda problem rather than just a disinformation problem. What's the difference? Disinformation is a piece of information that is incorrect; it usually comes piece by piece, and fact-checkers can quickly identify the incorrect part and debunk it. Propaganda is more ambiguous: it mixes correct and incorrect information. In this recall we found a lot of propaganda generated by AI, as you can see on the right side of this screen; these are some examples. Here you see only still images, but in reality they were produced in the form of AI slop, spread in large volumes on Threads, TikTok, and YouTube. You can see that these images actually don't tell you much: they don't carry a lot of text. They just show you images and let you interpret the meaning yourself. They don't say a lot, but they impact audiences a lot. As you can see, they depict those who do not support the recall as beautiful women, or as righteous, good citizens defending the democratic process, and they depict the recall supporters as zombies, as people manipulated by the DPP government. They just show you these images, and then they spread very widely.
We also conducted research and found that these divisive images are actually quite popular and spread really widely. So this is one very concerning sign: this material was largely produced by AI in great volume every month and flooded the social media platforms. Another problem is that it is very hard for us to identify who the malicious actors are. We have found that some of these malicious accounts impersonate Taiwanese, and some of these impersonating accounts actually have ties with corporations based in China. But there are also accounts that look really Taiwanese, whose motivations we don't know, and that seem to have been coordinating with actors in China. This is very hard for fact-checkers, and for those of us doing information manipulation research, because we cannot just randomly point fingers at actors without valid evidence, and it is really hard to get that evidence. So this has become a very tricky situation and a great challenge for us. Okay, so what should we do? It's very hard, and we are still thinking about solutions to counter these new developments. What I propose here is that we can actually focus more on propaganda. Don't get me wrong: fact-checking is still really important, and fact-checkers are still much needed. But we need to look at this as a whole picture, and propaganda is definitely part of that picture. We also have to identify the entities, individuals, or sectors in the information process: who are our allies, and which actors should we pay more attention to? I think media and audiences are important factors that deserve more attention here, and they were largely ignored in past conversations.
I know that a lot of people are disappointed in the Taiwanese media, and some say they are amplifiers of misinformation and disinformation, which I agree with. But they are still very important, and I know there are still a lot of good journalists there. They need help, and maybe more resources, to enhance their skills, especially in the age of AI. We also need more empirical studies on audiences. This is where academia can step in. I think we need more studies on how algorithms affect audiences and how audiences respond to different propaganda. With this research we can devise better strategies against information manipulation. And platforms: platforms are definitely very important. Over the past years we have seen the retreat of these platforms from the front line against information manipulation, which is very depressing. We have seen AI disinformation and propaganda come as a tsunami, not just a flood, and we really need the platforms to invest more resources in this. I think at least one thing these platforms can do is make their data more transparent: for example, to identify who is behind these accounts, and who is using a VPN while trying to manipulate audiences. X, although not the best example, did something to reveal who is using a VPN and to expose accounts' real locations. I think this is important not only for us researchers, but also for average users, because when average users have this kind of tool and this kind of information, they can evaluate the information themselves; they have more tools to identify malicious actors. Okay.
And I also want to say that civil society in Taiwan is strong and resilient, but we need more collaboration among ourselves, and also with other countries that are suffering from information manipulation from China, for example Japan and the Philippines. This is what we do at FactLink: we try to be the link between different sectors of civil society, like academia, and also the communities. We also help journalists enhance their capabilities. And this is the last thing; I am not going to invest too much time in it, but just note that this is a product of collaboration between us and Doublethink Lab. We had a very good collaboration investigating Chinese propaganda against the Japanese prime minister, Takaichi, especially concerning her remark on Taiwan. We also worked with our friends in Japan, and we investigated how Japanese society reacted to this Chinese propaganda. If you are interested, you can scan the QR code here. I would be very appreciative if you read this report and give us feedback. And if you have any questions or feedback, please email me at this email address. Thank you so much for letting me share our observations and thoughts. Thank you.

- Thank you both so much for these excellent, very rich, informative, succinct, and disciplined presentations. They're really models for what we love to see around this table. Let me pose an initial question to each of you, which you can answer as you wish, and then people can raise their hands. There are very few tent cards, so just raise your hands and I'll ask you to introduce yourselves when I call on you. So there's a paradox in this. You're trying to rebut, debunk, and inoculate against these flows of disinformation, and propaganda, I think, is in a way a more coherent, organized, systematic effort rather than just loose, random flows of disinformation; there's a logic to it that's very coherent. You're trying to reach the society and say: no, this is false, here's what we found, this is why it's false, this is what we've revealed. But how do you get the people who are internalizing this stuff on TikTok to believe you, when they're deeply immersed in the world of the platforms that have led them down the rat hole of bad information? It's in many ways similar to any society, but here in the US, to people who are following websites, including TikTok, where they're getting bad information, where they're living in information bubbles; and these bubbles harden, and it's hard to penetrate them. So I'd like to ask each of you: what is your strategy to reach the people who most need to be reached and presented with truthful information, with the tools you mentioned? And to what extent is Taiwan's society acting proactively, in two senses?
First, and this is a term I learned from Doublethink Lab, to engage in prebunking: to anticipate what they're going to say and flood the information environment in advance of an election in particular, to warn that you're likely to get this information and this is why it would be wrong. Distributing information about the transparency and rigor of the electoral process in Taiwan would be one element of this, I guess. But also using the school system. All of the young people on TikTok are either in the school system, probably in secondary school (sadly, maybe even before secondary school), or they were in the public school system at one point and now maybe they're in their twenties or something. So how is the school system being mobilized here to alert young people? So we'll go in order: we'll start with Jerry, then come to Wei-ping, and then we'll move on. Go ahead.

- Yep. So it's actually a huge challenge that we have faced over the past year. This year at our lab, we have some projects to build a better strategic communication plan for our organization. We used to think of ourselves as a think tank, an organization that produces reports. But do ordinary people really read our 30-page reports? I think ordinary people won't read our reports, right? So this year we're going to change our strategy. We have our own social media accounts, but they're not, you know, popular. But we have friends, other NGOs in Taiwan, who have more impact on social media platforms. So now we can turn our reports into materials and pass them along, to let them amplify our reports. Okay, so that's one strategy. We'll also post content on our own social media, using friendly language and short articles or clips to talk about our research. And there's another idea; I would say it's ambitious, but we're going to try it: we're going to send materials to pan-blue and pan-white influencers. For example, Hui Han is a journalist who used to be thought of as pro-KMT, and now he sometimes criticizes the KMT a lot. So we're going to send our materials to him so he can say something about our report, and we can cross the echo chamber. We haven't done this yet, but we're going to: send those materials about our reports to those influencers, see if there are further collaborations, and leverage this kind of strategy to expand our audiences. Yeah, that's one thing we're going to try this year. Yeah,

- I think this is an excellent question, because on our way here Jerry and I were discussing how to reach the public. For us, we have done a lot of discussions with civic groups in Taiwan. Taiwan has a lot of what are called community colleges, and we have worked with community college teachers (some of them are even yoga teachers), and they talk about disinformation with their students. With this kind of going into real life and talking with real people, interpersonal communication, I think it still works a lot, and sometimes it works better than just posting our reports on a website, because we really talk to people and answer their concerns. So that's our strategy. And by the way, I have seen the same situation here in the States, because I also teach at the University of Maryland, and my students don't even believe that they could be manipulated by TikTok. So in class we watch examples together and talk about this issue face-to-face. I hope they buy what I say. Yeah.

- Oh, the situation is so depressing. All I can say is that just as we're learning from Ukraine how to shoot down Iranian drones, we're going to have to learn from Taiwan how to shoot down various forms of hostile-actor disinformation. Kharis, I'm so sorry; I was so inspired by these two presentations that I completely forgot you're our commentator. So you start, and then we'll go to Sergey and the others.

- Yeah, I'll make three quick points in the interest of time, since I've already spoken quite a bit. Actually, I want to ask three questions of both of you. The first: I wrote a piece on the Taiwan 2020 election for the Journal of Democracy, and my takeaway there was that Taiwan actually had pretty robust defenses against mis- and disinformation. Part of it was that Taiwanese are pretty sophisticated in general; they're used to this crap, they can identify it pretty easily, and the people who are taken in by it want to be taken in by it. They don't want to know the truth. They're already convinced that everything bad you can say about their least preferred party is true, and everything good you can say about their most preferred party is also true. And so the partisan divide in Taiwan, while not ideal, also creates a certain amount of grounding or anchoring of politics, and you're not going to have a rumor that dramatically swings an election in one direction or another. I'm wondering: in the last five years, with the emergence of new platforms, new technology, and so forth, has that changed? Should we be more worried than we were when Han was running for president in 2020, when there was a pretty credible, I think, PRC effort to influence the results of that election in his favor? The other question I have for you is about the positive and negative examples from Taiwan. On the positive side, you're both great examples of the robust civil society in Taiwan that is dedicated to identifying and pushing back against dis- and misinformation. And you were at Taiwan FactCheck Center before; I wonder if you could talk a little bit about the coordination with Facebook previously, what the status of that is now, and whether or not that's a good story.
Again, I'm trying to get at whether things have changed over the last five years. And then, on the negative side: Taiwan has, you know, a freewheeling media; it's notorious for having reporters who can say and report on anything, but it's also deeply unprofitable. All of the major media outlets are losing money, and journalists are under tremendous strain to publish stories. So I'm wondering if there are examples in the Taiwan media context that offer us hope for how this kind of unprofitable race to the bottom is being countered in some cases. Are there new journalistic outlets emerging that provide better guidance than the typical partisan media on sophisticated issues? So, yeah.

- So yeah, I guess I'm taking these questions. The first one is the coordination with Facebook, which is Meta. Taiwan FactCheck Center used to be a partner with Meta: the collaboration was that we fact-checked some posts for Meta, and Meta funded us with some money. That was the collaboration. And I know that Meta stopped funding fact-checking organizations here in the States, but as far as I know (I cannot speak for Taiwan FactCheck Center now), they still support Taiwan FactCheck Center and other fact-checking organizations in Taiwan, although I don't know if they have scaled down. That's a question many of you could investigate. But still, that's a change, and there is some worry we have already had about whether they will retreat further. And I think another thing is the investigation tool: Meta shut down CrowdTangle, a tool we used to investigate information manipulation, which was so important. Now we cannot access it, and the tool cannot be used anymore. They have a replacement tool, but it's not as useful as CrowdTangle, and data is always the problem we're facing right now in our fight against false information. So yeah, sorry about this depressing news again. But here is the positive one, about the media. I know that a lot of media in Taiwan are disappointing. I once interviewed an editor at a very famous TV station in Taiwan, and she told me: we need to survive first before we can enhance the quality of our news. She said that we are now in a dismal situation, and we are waiting for someone who can provide us more resources so that we can do better journalism. That is very heartbreaking to hear.
But I think funding and resources are the main challenges for these media organizations. And it's tricky, because people want to fund media organizations that already have good reporting, right? So this is such a difficult thing to solve. But we do see some news organizations that have had breakthroughs. I don't know if you are familiar with The Reporter, Baodaozhe, right? They have found a new business model with more donations from the public, and I think they're doing quite well; they just celebrated their 10th anniversary, right?

- This is the Taiwanese...

- It is a Taiwanese online news outlet, and they are very good at investigative journalism. And whenever there is an emergency, like the man who planted a bomb at the metro (you remember that year; it harmed a lot of passengers), in that kind of emergency situation they produce breaking news and deliver very trustworthy information to audiences. So I think this is an example we can celebrate, and a model we can look to for improving our media environment.

- Of course, you know, if you have a Jimmy Lai, who just believes in independent journalism and speaking truth to power and is willing to lose money to support it, then you have a precious situation. But of course we've seen what happened to Jimmy Lai in Hong Kong. And, if I can say so, we've seen what happened in the United States when a wealthy entrepreneur bought the Washington Post and basically gave every indication that he was willing to spend as much money as necessary to support quality, fearless journalism. "Democracy dies in darkness" was the slogan, and then he turned out the lights. I need say no more. Thank you. Sergey?

- Sure. So I will offer just a few points of reference here, starting with Larry's point about learning from Ukraine. I think the most robust empirical evidence we have about a government dealing with this kind of social media threat comes from Ukraine. There is an excellent paper by Yevgeniy Golovchenko, who is Ukrainian but works at the University of Copenhagen, called "Fighting Propaganda with Censorship." It is about the Ukrainian government banning the Russian social media network VKontakte, which is the equivalent of Facebook. They banned it in 2017. Now VKontakte is completely controlled by the Russian government, and only approved sources can post news there, but at that time there was quite a lot of independent news on it. The problem was that the FSB, the Russian political police, was requiring VKontakte to give personal information about Ukrainian users, particularly activists, to the security services, and then, of course, these people would be threatened. So what Golovchenko did was look at the effects in Crimea, which was already occupied, and in the parts of Ukraine still controlled by the Ukrainian government on the border with Crimea. The interesting effect was that everybody who accessed the internet through Ukrainian networks, including over the border in Russian-controlled Crimea, reduced their activity on VKontakte. Basically, the censorship did work. But the important thing to point out here is that it depends on having a robust supply of trustworthy alternatives, content-wise, technically, and commercially (these are also tools for advertisement, etc.), that can replace whatever is censored. So this is a sort of market equilibrium question.
And the last one I would make on that is that of course I, I I agree with Harris and many others that, you know, just emulating authoritarian tactics is not often a good strategy, but still, you know, when, when, for example the, the trade negotiations happen, I think that should be part of that because the idea that, you know, this foreign platforms we have access to, to US market, but, but US social media or AI companies will not have access to, to China or to Russia, to other countries that control their internet space. I think is like j just unjust from tr trade point of view and I think should be addressed as, as part of these trade negotiations. Second on, on Jerry's discussion about trust in government, people who own TikTok or off TikTok, et cetera. I think important to keep in mind that there was simply secular decline. I I, I see the difference, but, but I think secular decline of trust in government for all people who use social media and have access to wireless internet connection that is documented by grief and mounting and many other researchers by now. And I think it's, it is sort of a more, more universal kind of tone now to, to weigh in. I think I agree with Harry that, you know, robust defenses when, when we're talking about Taiwan, I think we, we see this robust defenses against disinformation and they come from accountable politicians who are responsible, who still have sense of shame and sort of refuse to take advantage of these opportunities. But talking about the algorithmic effects that, that you mentioned, I think it's important to point out that that latest research suggests that, you know, people online are not less exposed to alternative point of views than people offline, maybe even more so. But Larry's point to like people who you need to reach the tail ends, the tail ends really is the dangerous part. And there is good research on YouTube by George Tucker and co-authors on that. 
A small group of people can really go down the rabbit hole, become extreme, and be used by foreign powers, et cetera. A real problem. And lastly, let me say a good word about TikTok for once. So

- There was one to be said.

- Yes. So, speaking of data access: you mentioned the situation with Facebook data, for example, that the CrowdTangle program was ended. Now there is the Meta Content Library, as you mentioned, but I only just got data from them. I started my application in late September and just got access, and Stanford had to conclude a formal agreement with the University of Michigan to access a sliver of this data. It's not easy. We also applied for data from X through the European Union channels that were supposed to be established, and just got a one-line refusal saying that it doesn't fall under Article 40 of the European Digital Services Act. Thank you very much. TikTok, on the other hand, does provide data, both through the European Union channels and for researchers in the United States and other Anglosphere countries. So the pressure on them did work, at least to some extent.

- Is that API access, or data donation?

- TikTok is API access, okay, and the Meta Content Library is API as well, but again, it's very limited. And again, you first get UI access and then you separately apply for the API. It's technically very difficult, but it's theoretically possible.

- So, do either of you want to respond any further?

- No, I think those are excellent points. And actually, I was refused by Meta

- Oh,

- Accessing their library.

- Yeah. So if anyone wants to use the Meta Content Library and doesn't have access, I would say just don't use it, because if we're talking about Chinese-language data, it's not enough. We have a collaboration with Meta as well, so we can use their Content Library, but it turns out the data points are not good enough. So actually we don't use their data; we collect data ourselves. If you want to use the Meta Content Library, I would suggest you just find another way, because you need to use their workspace and you cannot download the metadata, so it's not worth it. You can just play in their playground. Yeah.

- Okay. So we have about 15 minutes left, a little less. Three people have asked to comment: Rowena He, David Fedor, and our speaker on the next panel. So let's take all three of those questions. Maybe the two of you could take notes along the way, if there are things you want to answer. And please speak up a little; I don't think there's amplification.

- I think the news of the sentence just came out last night, right? He had disappeared; he's the editor for the publisher Baqi in Taiwan, and he had disappeared for two years, and we just found out, I think less than 24 hours ago, that he got a three-year sentence, one of the key publishers in Taiwan. I think that is the kind of fear created by the CCP. And my question to the panelists, following up on Larry's comments: I think we are dealing with the CCP here, often, in a democracy. I often hear people telling me, "Hey Rowena, I know that you study truth and lies and historical truth, but there are so many different versions of truth." That idea, which they take for granted in a democracy, is the thinking that this is just about different interpretations of the truth. But in fact we are dealing with the CCP, not just in Taiwan but internationally, and they don't play the game by the rules; their goal is just to create lies instead of trying to find the truth. Inside China, very often, especially in the post-Mao era, there is this idea of a Western conspiracy to brainwash them with outside Western propaganda, instead of recognizing that they have been brainwashed by the CCP itself. So I myself have found it harder, when I'm teaching in the classroom, to convince the younger generation that what I'm teaching them is the truth, because they think that I am trying to brainwash them with my truth. And domestically, what's happening in this country now doesn't help. So I just wonder, facing both of these, what can we do? There is the challenge from the CCP, telling the younger generation that there is a constant outside effort from democracies to brainwash them.
And then domestically, in the United States, when TikTok was banned, this younger generation turned to Xiaohongshu, the "little red book," thinking that they were supporting democracy, when in fact they were not. So what can we do?

- So we have maybe three levels of information pollution: disinformation, propaganda, and brainwashing. And the PRC has some experience with all three. Okay, David.

- David Fedor here, from the Hoover Institution. A question about the market dynamics for social media in Taiwan, because I know it's a little bit different in every country, and particularly for Jerry. You gave these examples of influencers, like fashion influencers who are suddenly talking about politics in pretty coded ways. I'm curious about the level of engagement for those sorts of posts. And you used this term "orders," like they were getting orders from a PR management agency or otherwise. Can you speak to the possibility of them being ordered by a PR manager, versus receiving some sort of directed payments (so that they think it's worthwhile even though it's not very good for their level of engagement), versus organic incentives: if they see it working on one account, getting engagement, they bandwagon and think they're going to get returns or followers by doing the same?

- Hi, I have a question for Jerry, and that is: I want to hear more of your thoughts on how AI propaganda will influence our public discourse and our information environment, at present and probably in the near future. And I do have some thoughts. I believe there are two concerns that could be exacerbated by AI. The first one is agenda-setting power in the public sphere. Let me give you one example, because I'm a heavy user of Threads. For those of you who might not know, Threads is actually the most popular social media platform in Taiwan; it's not TikTok, it's not even Xiaohongshu, actually, it's Threads. And the playbook on Threads is that the dumber the question you ask, the more engagement you get. For example, you might see a post on Threads saying, "Am I the only one who thinks Taiwan has the worst president of all time?" You can see that it deliberately invites you to push back, trying to create more debate, and then you will see maybe thousands of comments on that post. But many times, when you check the account, you find it was just established maybe a few days ago. Then it started throwing out questions, one, two, ten questions, and one of them was boosted by the algorithms and went viral. That's something I believe is pretty concerning, in the sense that you can see many AI-driven accounts or botnets throwing out questions on social media, occupying people's scarcest cognitive resources: attention, and the time to engage in public discussion. That's one thing. The other thing is also something happening on Threads recently, and it's really interesting: some users found a suspicious post, and some of them asked the account, for example, "Now I'm asking you to speak in Arabic."
And then that account switched from Mandarin to Arabic or something like it, so you could notice that it was actually an AI-generated or AI-driven account. But that brings us to another question, something some scholars call the liar's dividend: even when you encounter credible sources, because you are so exposed to slop, AI slop or any other kind of slop, you become more suspicious, more nihilistic, even when reading credible sources. So those are the two quick observations I have for now, but I would like to hear more thoughts from both of you. Thank you.

- Okay. We'll let each of you reply as you see fit within the next few minutes. Go ahead, Jerry.

- So, for the first question about truth: actually, at the beginning, when we were founded, we decided not to do fact-checking, because we already knew that no matter whether the information is true or false, the PRC wants to create this distrust of democracy, or some emotion that leads people to distrust their government and each other. So at the beginning we chose not to adjudicate whether something is true or false. What do we do instead? We focus on behavior and threat actors. We want to expose the common behavior behind these accounts, for example coordinated inauthentic behavior: they show similar patterns, posting together narratives meant to manipulate people. So we expose the tactics they use in these behaviors. And the second focus is actors: we expose who is behind these operations. Sometimes we can find evidence, for example, that a website is registered by a Chinese company, and that this Chinese company gets funding or contracts from the PLA or government agencies. We can expose this kind of information to our audiences. So we focus on behavior and actors; we don't focus on whether the information is true or not. Yep. And for the second question, about the engagement of those TikTok accounts: for the election fraud content, before the videos were taken down, the highest view count on an election fraud video was, I think, around 420K, but the same creator's other videos, talking about style, got maybe just around 10K. So we can see the difference, right? And we don't have evidence about who is behind it, which PR firm, but, you know, just like YouTubers, they can earn revenue from those views. There are fairly exact numbers for how many views earn how much money: 5K, 10K, and 50K views have different prices.
So if you see a video with 420K views, that might be worth maybe around 10,000 US dollars, or even higher, because it's a very good engagement number. That's why we think this topic wasn't randomly selected: it must be that some PR firm gave them a contract, and they can also earn money from their engagement numbers, like views. Yeah. And for the third question: with AI technology nowadays, it's really hard to identify whether an account is AI-generated or not. Take some CIB tactics, which we used to detect by timing, for example thousands of accounts posting something within ten minutes. Now you can use prompts to make accounts say similar things without being exactly the same, and you can have a script post the content at random times through the day, so the posts don't all appear at once, and you avoid CIB detection, right? I hope PRC staff don't hear my opinion and, you know, upgrade their tactics. Yeah. Also, for AI-generated profile pictures: sometimes they steal the profile picture of a nobody, so when you use reverse image search you cannot find the same person, because that person doesn't really use social media. Maybe you just take a picture of a random person and use it as your profile picture; then it's really hard to detect with those methods. And as AI technology advances, it can customize the picture, the content, and the posting time, so for us it's getting harder and harder to detect those accounts. So we hope the platforms will help. X actually did something good: they exposed the location of accounts. It's a good way to see whether foreign actors are pretending to be Taiwanese, a good and simple form of transparency that lets researchers see whether it's a foreign account.
I think this kind of transparency is not difficult for the platforms to provide if they want to. Yeah. So it's an easy way platforms can help us identify those foreign, inauthentic accounts. Yeah.

- Okay, Wei-ping.

- Yeah, we

- Let you close.

- Yeah, I think we're running out of time, and Jerry had excellent answers for both of you. So I would just address the brainwashing question. I think at some point it's really hard to counter brainwashing. Take myself, for example: I grew up in the 1980s in Taiwan, and that was a time when the Kuomintang still tried to brainwash us. I was a brainwashed child; I hated the DPP so much, and I thought everything taught by the KMT was right. I was very naive. But I changed when I was in college. What changed me was that I experienced the first Taiwanese presidential election. I went to all the presidential campaign events, I listened to the songs and watched all those things, and I started to realize something was wrong in my education. So I started to do more research myself, trying to find the truth. So sometimes preaching, or just one-way information delivery, doesn't really work a lot. You have to find opportunities to let people experience things themselves, so they can change. Yeah.

- Well, this is why I hope the educational system will try, at least in part, to rise to the challenge. Because if we get a new generation of Taiwanese youth who heavily succumb to the false narratives and cynicism of PRC propaganda, and it requires experience to disabuse them of the propaganda, it could be too late by the time they're getting the experience of the PRC being in control of the island. So anyway, thank you so much. This has been a great session. We'll now break for 15 minutes and then come back at 3:45.

- Alright, well, thank you very much for your attention, and thank you for joining this session this afternoon. We are delighted to have two very significant presenters with us today. Kharis has done a great job of introducing them, so I won't belabor that. Our discussant today will be Sergey, whom we've already heard from in the first panel; he is thoroughly well-informed on the topic under consideration, and I look forward to a robust and interesting exchange. Today we're looking at two closely connected issues that in a real sense sit at the center of the strategic competition between the United States and China: China's expanding global data reach, and the race to lead in artificial intelligence. The first issue concerns what one of our presenters has called the authoritarian gaze: the ability of a political system to collect, integrate, and analyze vast quantities of data about societies, economies, and individuals. Over the past decade, Chinese technology firms, often operating within a system that blurs the line between commercial activity and state authority, have expanded across the global digital infrastructure: telecommunications networks, cloud platforms, smart devices, digital payments, and surveillance technologies among them. These systems generate enormous volumes of data, and in an authoritarian system, that data can become a strategic asset not only for economic development, but also for surveillance, political influence, and geopolitical leverage. The second issue is artificial intelligence, AI (now that we've all learned to spell it; it's only two letters, after all), which is rapidly becoming a general-purpose technology that will shape economic productivity, military capability, intelligence operations, and information ecosystems. Leadership in AI will depend on access to compute, to advanced semiconductors, to talent, and, critically, to high-quality data sets.
Both the United States and China understand this, and both are mobilizing public and private resources to secure advantages. These two dynamics, though I've addressed them separately, are deeply intertwined: data fuels AI, and AI amplifies the power of data. The countries that control the largest data ecosystems and the most advanced computing infrastructure will have significant strategic advantages. This raises a fundamental question for democracies: how can open societies capture the enormous benefits of artificial intelligence and global digital connectivity while preventing authoritarian systems from turning those same networks into tools of surveillance, coercion, and strategic influence? The answer will shape not only the future of technological competition but the resilience of democratic institutions in the digital age. Today we're going to explore that a little further, and I would invite our first presenter, Lai, as we heard, the deputy director of the democratic governance project at DSET, to address "The Authoritarian Gaze: China's Global Data Reach and the Systemic Risks to Democracy," which plays very nicely into those introductory comments that I made.

- Thanks again, and thanks to Hoover for having me here. I am the deputy director of democratic governance at DSET. For those who are less familiar with it, DSET is a Taiwanese government-funded research institute focused on technology policy issues, including economic security, energy resilience, national defense, and tech governance. My program focuses on two main topics: the first is AI governance, and the second is the resilience of the digital information environment, which includes topics like FIMI, data protection, and the cyber resilience of critical infrastructure. Usually I introduce my organization by saying that DSET is a relatively new organization in Taiwan, since it was launched in late 2023, but because FactLink is here today, I just deleted that sentence from my script during the break; thanks to their presence. So from here I want to introduce my program a bit more. You might wonder, if I am talking about the resilience of the information environment, why that relates to democratic governance. I believe it is because democratic information environments face a dual challenge: on the one hand, their integrity must be defended against foreign interference; on the other hand, any regulation could implicate citizens' digital rights and freedoms. It is that tension that makes the governance problem so hard, and any response must be justified by democratic principles. So today, echoing the title of this panel, I want to zoom in on one specific dimension: the risks posed by Chinese platforms. I'll return to my report in the last part of my presentation, but since Jerry and Wei-ping have already outlined the FIMI landscape in Taiwan and the responses from civil society,
I believe my job in this presentation is, first, to introduce how the Taiwan government is trying to respond to these challenges, and second, what structural obstacles the government faces for now. I believe Xiaohongshu would be a good case to start with. For those who might not know, in December 2025 the Taiwanese government banned Xiaohongshu, also called Red Note, for a year. The legal basis the government claimed was the Anti-Fraud Act, and the government argued that the platform had been linked to over 1,700 fraud cases since 2024. The company never established any local presence or legal office in Taiwan, and it didn't respond to the government's requests either. But not surprisingly, the backlash came immediately. Critics pointed out that Xiaohongshu is actually not the worst platform for fraud: as you can see from the numbers here, Facebook alone had over 12,000 fraud cases in a single month, compared to Xiaohongshu's total over the past two years. Also, the regulatory tool the government used, DNS blocking, was easily avoided, because users can just switch to a VPN and keep using Xiaohongshu as before. And banning an entire platform over a small fraction of fraudulent content looks, to many people, very much like government overreach. So I believe the lesson here, for the Taiwanese government and also for my organization, is that on one hand the government failed to communicate the real risk to the public, and on the other hand the tool it tried to use didn't match the actual challenge. It is that mismatch, that thread, I want to pull on today.
To put this in context, for people here who might want to know more: since 2018, Taiwan has developed a four-pronged approach to disinformation: identify, clarify, restrain, and punish. These four prongs link to four different strategies: media literacy, rapid debunking, limiting the spread of disinformation, and penalizing malicious actors. Today I will only focus on the latter two, because I believe they demand more regulatory intervention, and because we cannot rely only on the first two. The reason is that if you take a look at rapid debunking, I believe it relies on counter-speech theory, meaning using more speech to counter inaccurate information. At least in theory, it rests on some assumptions: that people have the ability and the willingness to prioritize truth, and that the information environment supports people in doing so. But we all know that is not always the case, right? And if you take a look at media literacy, the first thing is that it is a long-term effort, and the second is that we cannot rely only on it, because doing so would place all the burden on individuals to fend off state-level information warfare, and I don't think that would be a feasible or ideal strategy. So government intervention remains necessary; the real question is what kind, and whether those mechanisms are effective. And that brings me to the first challenge I have been seeing in Taiwan's context, an institutional one. To put it simply, Taiwan has focused heavily on criminal penalties, including raising sentences for deepfakes during elections. But criminal law has structural limits, for at least two reasons. The first, as you can imagine, is that court proceedings cannot keep pace with viral disinformation.
The second gap is that the state-aligned actors behind those campaigns may be operating outside Taiwan's jurisdiction. So those penalties can at most hit downstream spreaders; the upstream regime that orchestrates the manipulation remains untouched. So that first challenge, I think, is important in Taiwan's context. As for the second, you might think: why doesn't Taiwan just build better tools? I believe two forces make that very difficult. The first is that Taiwan's political parties are deeply divided along national-identity lines and over China policy, so any legislation targeting Chinese threats will face structural opposition, just as we've seen recently in the disputes over US arms procurement, right? The second problem is what I call the shadow of Taiwan's own authoritarian past. One good example: in 2022 the Taiwanese government proposed what I would describe as a Taiwanese version of the EU's DSA; in Taiwan's context we call it the Digital Intermediary Services Act. It aimed at platform accountability to counter disinformation and harmful content, but it was withdrawn within months, because opposition parties equated the bill's emergency content restrictions with state censorship, invoking Taiwan's martial-law past. When these two forces converge, I believe content and platform regulation becomes much harder to advance in Taiwan. And there is a third structural barrier: imagine you want to restrain the spread of FIMI or disinformation; you usually need cooperation from foreign platforms, right? But unlike the EU, with the scale of its market, Taiwan lacks the economic leverage to compel them to do so. That further makes policymakers worry that imposing EU-style regulation risks platforms leaving Taiwan altogether.
So only legislation with overwhelming public consensus will survive, just like the Anti-Fraud Act. And as I have already shown, China-focused regulation lacks that kind of consensus in Taiwan's context. The resulting mismatch is that the only tools the Taiwan government can use for now are not designed for the actual problem posed by Xiaohongshu, or Red Note. At this point, some of you might be thinking: how about the American approach? Can Taiwan adopt something similar to the TikTok law in the US? For those less familiar with the TikTok law, it basically required divestiture from Chinese ownership; otherwise the platform would face a ban. But this ownership-centric model, in my opinion, faces two barriers in Taiwan's context. The first is that it still directly targets China, and the bipartisan consensus that made it happen in the US Congress doesn't exist in Taiwan, just as I've already mentioned. The second is that the reason ownership restructuring works in the US is that ByteDance has a subsidiary in the US, right here in California. But in Taiwan's context, Taiwanese users access TikTok through a service provider registered in Singapore. Taiwan simply doesn't have enough leverage to demand that a Singaporean company separate itself from its Chinese parent just to maintain itself in Taiwan's market. But I do think Taiwan can learn something from the US experience: not from the TikTok law, but from the TikTok deal that followed that law.
Basically, the deal established a majority-American-owned joint venture, called TikTok USDS if I remember correctly, and this joint venture takes over US operations under defined safeguards. Those requirements and safeguards include data localization in Oracle's US cloud, a licensed algorithm retrained on US data, independent third-party audits, and content-moderation authority for the US entity. For me, this points to a more promising direction, because it helps us envision a new governance framework: shifting from ownership to operations as the regulatory target. And to be clear, I do know these measures are not without criticism. Many people still argue that we need comprehensive privacy legislation; we also know that the IP of the algorithm is still owned by ByteDance, based in China; and we still need more meaningful transparency under this framework. But I still think the operational direction is worth pursuing, at least for countries with less leverage for ownership restructuring, like Taiwan. The second mindset shift I want to highlight relates to the EU's FIMI framework, because the title of this panel, FIMI, is a concept developed by the European Union, and it actually puts more emphasis on manipulative behavior rather than content. This distinction matters for Taiwan in the sense that content-focused regulation would trigger partisan conflict, and also the shadow of the authoritarian past I just mentioned. But if you reframe the target from the falsity of the content toward behavioral patterns, like coordinated manipulation, data harvesting, or algorithmic amplification, we are talking less about content censorship.
We are talking more about a structural governance framework, and exactly this kind of reframing, I think, is really important and helpful for Taiwan in addressing the problem. Now, I know some of you might think that targeting operations and structure won't be enough to address the full scope of the threat, but I want to be honest: I genuinely don't believe there's a silver bullet here. Eliminating all risks from Chinese digital infrastructure is, I believe, neither technically feasible nor geopolitically realistic. Take the algorithm case I just mentioned: ByteDance still owns the underlying model of TikTok, even if it is willing to license it to the US entity. And I believe it is hard to imagine the PRC ever allowing ByteDance to disclose the entire underlying model to the US, because that would essentially mean China surrendering its digital sovereignty, and we all know that is simply not how competing digital powers like the US and China interact. So I think the important message here is that we need to recognize the limits, but still try our best to manage and mitigate risk within the balance of law and digital freedom. And the key, I believe, is a stronger evidence base: governments must be able to demonstrate what the risks are and why specific measures are justified. That is exactly where my own research comes in, to showcase the institutional risks with more solid evidence. In my report on the authoritarian gaze, I analyzed the privacy policies of ten Chinese AI services, from DeepSeek and Doubao to Manus. Though Meta already acquired Manus late last year,
so some of this is a bit outdated. But the message is that even for service providers registered in Singapore, I still found at least three consistent pathways through which user data flows back to China. As you can see from this slide, there are three pathways, and I can use Xiaohongshu as an example to elaborate. For those who might not know: after being banned in Taiwan, Xiaohongshu immediately transferred its Taiwanese users to Red Note. Red Note is the overseas version of Xiaohongshu, with a Singapore-registered service provider. But if you look at its privacy policy, it still allows user data to be processed and stored in mainland China and Hong Kong; data can still be shared with operators in Shanghai; and the policy includes a standard provision for disclosure to law enforcement and government agencies in China. So the key message is that even though the Singaporean registration changes the label, it doesn't change the essence, or the pathways through which the PRC can access user data overseas, especially data on users in democratic societies. And I believe this matters because it projects China's data governance framework overseas. It creates a one-way channel: data flows into China through commercial services, but Chinese laws prevent it from flowing out again. And once data reaches Chinese jurisdiction, the government has broad authority to access it through different laws, like the Cybersecurity Law, the Data Security Law, and intelligence-collection mandates, with virtually no meaningful judicial scrutiny or oversight, under the highly arbitrary concept of holistic national security proposed by Xi Jinping. And we have already seen some evidence, like the leaked GoLaxy documents.
And Doublethink Lab just released a fantastic report on this issue, looking at how data and AI models in China have been used to drive the kind of algorithm-driven manipulation we are discussing today. So I want to conclude here. I believe that what Taiwan needs for addressing the risks of Chinese platforms like TikTok or Xiaohongshu is not the Anti-Fraud Act. If Taiwan wants to govern effectively, it must build more robust data protection, cybersecurity, and cross-border data transfer regulations, along with review mechanisms and enforcement tools that address coordinated manipulation, data harvesting, and algorithmic amplification. And the starting point for that is a rigorous, evidence-based threat assessment that can support regulation without raising too-intense political conflict in Taiwan's context. That is basically my conclusion. Thanks for everybody's attention; I look forward to the conversation.
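The three data pathways just described can be caricatured as a policy-text scan. The toy sketch below does only naive keyword matching (the report's actual analysis is qualitative and far more careful); the category names, the match patterns, and the policy excerpt are all invented for illustration.

```python
# Toy scan for the three kinds of clauses discussed above:
# cross-border storage, affiliate sharing, and government disclosure.
PATTERNS = {
    "cross_border_storage": ["stored in mainland china", "stored in hong kong",
                             "processed in china"],
    "affiliate_sharing": ["shared with affiliates", "shared with operators"],
    "gov_disclosure": ["disclosure to law enforcement", "government agencies"],
}

def scan_policy(text):
    """Return which risk categories a privacy-policy text appears to trigger."""
    lowered = text.lower()
    return sorted(cat for cat, needles in PATTERNS.items()
                  if any(n in lowered for n in needles))

# Invented excerpt echoing the clauses described in the talk.
policy = ("Your data may be processed in China and shared with operators "
          "in Shanghai, including disclosure to law enforcement on request.")
print(scan_policy(policy))
# ['affiliate_sharing', 'cross_border_storage', 'gov_disclosure']
```

Even this crude pass illustrates the point made above: a Singapore-registered label does not change what the policy text itself permits.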

- Thank you. Well done, Yuha. That's particularly important, I think, and the idea that perfection is not our lot and is not likely to be achievable is something we all need to consider. We'll talk more about your mitigation and management techniques in a moment. Next, Graham, please give us your thoughts, particularly on the overall character of the tech competition and how this all plays into the concerns we have about the undue influence being exercised by the PRC over Taiwan's population.

- Absolutely. Well, thanks for having me here. I'm really glad to be sharing some thoughts and learning a lot, especially about the Taiwanese context specifically. I'll say a couple of words about what I do, because I don't work on Taiwan all of the time. What I do work on consistently is the development of Chinese technology regulation and the way that technology plays into US-China relations, and I do that across the sidewalk over there, at the Freeman Spogli Institute. The way I'm going to go about this is to begin by presenting some research I did with colleagues here last year, specifically about the question of open-weights AI models produced by Chinese labs (I'll talk a little more about what all that means). Then I'll move on to some discussion of what all this means for US-China competition, and what it means that Chinese models are so prominent, as you'll see in a moment, in the global open-models ecosystem. And then a few thoughts on how we should think about what I call the "so what" question, hopefully addressing a little bit the specific question today about foreign information manipulation. So, to begin with this paper: I want to put this up there. I didn't make a QR code like my brilliant co-presenters, but you can find it under "Beyond DeepSeek" on Google; I think we've still staked out that namespace. And I want to emphasize that this was work with four colleagues: Caroline Meinhardt, Sabina, Tatsu Hashimoto, and Chris Manning. Caroline and Sabina were working as researchers at the Stanford Institute for Human-Centered Artificial Intelligence; we teamed up at the DigiChina project, which I lead. Tatsu and Chris are both professors of computer science, in Chris's case also linguistics.
They're associated with HAI. So we were able to put together some China-focused research with some actual computer scientists as reality-check collaborators, people driving us to ask better questions, and we really tried to describe, a year after the DeepSeek moment that caught global attention, what the situation was in terms of what Chinese AI models might be able to do. I'm just going to go through a few things here; in terms of slides, I'm only showing the data-related material, and then I'll go into spoken remarks. This is data from the ATOM Project, a group that advocates for open models. We used another version of this data in our paper, but this is a more up-to-date version, so I've taken it from their site. One of their big findings is that, especially since DeepSeek came out with its big drop just over a year ago, but really pretty consistently for what counts as ages in the AI development timeline, a bit over a year, Chinese models have performed better than US models if you limit the comparison to those that are open weights, meaning they can be downloaded and installed anywhere on earth if you have the appropriate infrastructure and skills. Now, as one of our collaborators put it when we were deciding what to do with these data: all of the metrics are bad. If you ask what's the best model, what's the best performance metric, we all agree that all of them are bad, but you have to use something. Here the chart uses an aggregate of different metrics from a group called Artificial Analysis, just showing that for this pretty significant period of time, the very best available Chinese open model has been better than the very best available US open model. And that big shelf on the US line that rises up and then stays flat from the end of last year?
That was OpenAI's open-weights model, which they released effectively under pressure from DeepSeek having released their own; before that, the US status quo was the Llama models. The next bit of data to look at is a proxy for usage of these open models. Remember, this is keeping the closed ones out; it doesn't include ChatGPT and Claude in the way you might think. This data, again aggregated by the ATOM Project, originally comes from Hugging Face, not the only but certainly the most important global center for downloading open-weights AI models for various uses around the world. If you aggregate the cumulative downloads of all open models since this all began, you find that as of last summer, Chinese models have been downloaded more, and that's heavily on the back of the Alibaba Qwen series of models, which have extraordinarily high metrics. One way or the other, downloads don't tell you everything: it doesn't mean they're all in operation, and it doesn't tell you what people did with them; they might have experimented and then just hit delete. But the flow out to various systems is greater, so if you're racing just for raw diffusion, maybe the Chinese models are ahead. Here's a chart, a little less up to date, that we made using another of these aggregates of performance metrics, in this case the Epoch Capabilities Index, to demonstrate what it looks like if you include the closed models, the ones you can't download openly and have to use by going to a service provider through an API or a website. The little gray dots are the closed models, and the colorful polygons of various kinds are the open models with their various national origins. This, I think, is from before we got the OpenAI open-weights model.
But what you can take away from this is that, at least in this complicated, imperfect way of measuring the capabilities of large language models, the very best Chinese-source open-weights models, the purple triangles, are in the same ballpark as the very best of the closed models. They're behind, but if you draw some mental horizontal lines, they're just a few months behind the performance you get from the very best closed models. So I'm going to leave this discussion slide up here; you can go back to the regular display, I don't have anything further to say with slides. The reason I bring this all up is that in Silicon Valley, and in the United States at least, people often think of AI as these closed models that cost a certain amount of money to call the APIs from OpenAI or Anthropic or Google. (Meta has sort of wandered off the scene there, for a little while at least; we'll see.) But if you look globally, there are all sorts of questions people have: what can I do on my own infrastructure? What can I do if I want to control this and run it all in-house? And, significantly, what can I do that's cheaper? A lot of these open-weights models are cheaper to run, either on third-party infrastructure or, if you're the type of person who can run a fancy Mac mini or has a corporate server to run it on, you are for the most part not going to pay the same high prices for inference, for the use of the model, that you would pay to OpenAI or Anthropic. So, a few thoughts about the implications. People ask me all the time: what do you think about the US-China AI race or competition? Who's ahead? What does it mean to be ahead? "What does it mean to be ahead" is the much better question, in my opinion.
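The leaderboard logic behind the charts just described is easy to sketch: at each point in time, take the best score achieved so far among open-weights releases from each country and compare. The sketch below is purely illustrative; the dates, model names, and scores are invented, not the ATOM Project's or Artificial Analysis's actual numbers.

```python
# (release_month, origin, model_name, aggregate_score)
# Invented illustration data, not real benchmark results.
releases = [
    ("2025-01", "CN", "model-a", 60),
    ("2025-03", "US", "model-b", 55),
    ("2025-06", "CN", "model-c", 68),
    ("2025-08", "US", "model-d", 66),
]

def frontier_by_origin(releases, origin, as_of):
    """Best cumulative score for `origin` among releases up to `as_of`.
    Months sort correctly as YYYY-MM strings."""
    scores = [s for d, o, _, s in releases if o == origin and d <= as_of]
    return max(scores, default=None)

for month in ("2025-03", "2025-08"):
    cn = frontier_by_origin(releases, "CN", month)
    us = frontier_by_origin(releases, "US", month)
    leader = "CN" if cn >= us else "US"
    print(month, leader, "leads:", cn, "vs", us)
```

The "big shelf" effect described above corresponds to one high-scoring release lifting a country's frontier line, which then stays flat until its next release.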
I think we always have to disaggregate any concept of there being an AI race, because "AI" doesn't mean anything specific. In this case, I think we ought to ask what these edges mean for a few different things. For our purposes today, one thing we might be concerned about is which models are going to give people a better capability to mislead, hack, dox, or confuse people. And I think what we find, given that fairly close clustering between the very top frontier models from the US and the fast-following open models out of China, is that the ability to do these types of harmful things, deception, hacking, et cetera, is already highly diffused; the cat is out of the bag. That means that if you're thinking of the Chinese government as your adversarial actor, they have access to it. If you're worried about a group of people in a basement somewhere, they have access to it. Certainly the US government has it as well. So that's not really a competitive thing anymore, because something that's really good is pretty much available to everyone at this stage. Maybe the gap will grow if the question is a longer-term strategic one about the capability to innovate in frontier science, to achieve something that somebody recognizes as some kind of artificial general intelligence or superintelligence, all these terms that people throw around and have very different metrics for. I think it's totally unclear what all this means right now; it's just not known yet whether, for that type of breakthrough scientific moment or hypothesized comprehensive AI breakout, these open models are as close to the closed, so-called frontier as it might look. So we don't really know; the competition goes in all kinds of different directions.
One of the things we tried to get at in the analysis for the paper (and I should say, it's probably obvious, but I'm giving my opinions here; if you want to see the language we all agreed on, you should read the paper, as I'm not speaking for my co-authors) is what the prominence of Chinese models among all open models means for the world. One thing I think is clear is that some political censorship is transferred when you download a model that has been built to be compliant for running inside the PRC. Clearly, if you ask the model questions about what happened in 1989 in Beijing, you're going to get an answer that is allowed out of the Chinese regulatory system. In some cases this is completely irrelevant. If you're trying to build a model to do code assistance on something that's completely nonpolitical (there are hypotheses that the models might be inclined to write less secure code when the task is connected to something the PRC government doesn't like, but say you're outside of that political blast radius), it simply shouldn't matter, and you should be able to get a lot of utility out of these things around the world regardless. However, in some cases this is obviously very risky. In places like Taiwan, or other places with large Chinese diaspora communities, the political questions that are managed through control within China are relevant. So I think you have to ask yourself, in each case of applying these things: does it matter? And I think it does matter that there's not hot competition from US providers to put their own versions out. Then there's another "what does this prominence mean" question.
What's the possibility of these models being used to surreptitiously create cyber vulnerabilities as they're deployed around the world? This is basically theoretical at this point. There are a couple of ways people think about it. If you download an app-based solution, that's clearly software you're running; but if you're just using the model weights in your own environment, it seems a little tougher to hide back doors in there, although people theorize about that being possible. At the same time, many people won't install it on their own systems and will just go to the DeepSeek app, and while I haven't read their specific terms, as Joha was saying, if you look at the privacy policy you'll probably find that the data is going straight to China. The next question, which I think is really important, is dependency. If you're outside of China or the United States and you're using models from those places, the dependency question is quite clear if you're relying on Anthropic or OpenAI, because they can just shut you off; they can say you as a user are no longer eligible. But there may still be a dependency if you use open models and integrate with a broader ecosystem of software or cloud services designed around them. An example: Alibaba's top-performing Qwen model is released completely openly, but it takes some extra configuration to get it to operate at its very largest context window, meaning with the greatest amount of information it can keep in mind, so to speak, at a time. If you just use the Alibaba Cloud version they're offering, that extra capability is built in; they've pre-engineered it for you. They're trying to sell cloud services. Alibaba is the largest cloud provider in China, I think.
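The "extra configuration" mentioned here can be made concrete. For the Qwen open-weights checkpoints, the model cards describe enabling the largest context window by adding a YaRN rope-scaling entry to the model's config.json, since by default the checkpoint ships with a shorter native context. A sketch of what that entry looks like (exact key names and the scaling factor vary by model generation and serving stack, so treat this as illustrative rather than authoritative):

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

A factor of 4.0 over a 32,768-token native window targets roughly a 131,072-token context. Hosted versions, such as Alibaba Cloud's, pre-apply this kind of tuning for you, which is exactly the soft lock-in being described.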
And one way or the other, they're trying to make money somehow out of delivering this very expensive thing for free. So dependency and lock-in are relevant for all sorts of reasons: the ability for supply to be cut off, and data security risks if the Chinese government is one of the threat actors. Everyone who's a user needs to think through all of that, and it's reasonably complicated, so I think a lot of people aren't thinking it through. The way I tie this together is that the different risks and affordances of open models being out there — and the peculiar situation that the best ones very much are Chinese at the moment — mean you have to think about risks in a specific situation, and risk mitigation goes with that specific situation as well. I think right now we're underdeveloped globally in thinking about how that risk identification and mitigation can be handled. But I also think it's necessary whether the models are open or closed, US, Chinese, European — whoever's building them. If you're installing this in your workflow and your organization, it's a question of: how do we mitigate risk, how do we make sure things don't get out of hand — that systems empowered to take certain actions within the server environment don't try to escalate their access and do something else elsewhere, not necessarily maliciously, but causing trouble anyway. Another way to look at it in the risk framework is that you can view the party-state in China from different angles. One of them is as a user of widely available tools: if there's a Chinese government program to create propaganda, or to achieve some policy or scientific goal, yeah, they're probably going to be using these things.
And obviously — I don't need to tell this room — the Chinese government has widely documented foreign propaganda efforts, some of which are quite public and some of which are not, and are discovered by researchers such as our colleagues here. You can also think of the party-state as a proliferator of the capabilities, spreading them out. I think it's wrong to view DeepSeek and the other labs as having been told to do open-weights AI by the Chinese government. I haven't seen any evidence consistent with that, and there's a lot of evidence consistent with this being an organic evolution of the Chinese technical community. But the government could decide they don't want this proliferation, and right now they're for it, either incidentally or on purpose. Though, given that these models are proliferating from China, the Chinese government does affect the content in areas that are politically sensitive from its perspective. So that is an intentional thing they're doing, even if the foreign audience is probably not their primary focus — I think their primary focus is maintaining information control domestically. Okay, so here's the "so what" bit I promised: I'm encouraging us to ask questions that I don't have the answer to. I wasn't assigned to talk about foreign information manipulation — it isn't something I study professionally, although I read a lot of work about it — but I did want to try to figure out how to apply this to it, and I got stuck on a few things that I think are actually reasonably hard questions. One is: what is additional or different about this AI- and open-weights-enabled risk? Various targets of Chinese government influence campaigns already face certain risks that don't necessarily rely on large language models, as we've talked about today.
I mean, there are a lot of examples of things that predate the relatively advanced models we see now. And within that idea of what's additional or different: is it the AI capabilities — just the raw power of the thing? Is it the fact that the AI is open, or could the threat actor do this with a cloud account? If it's the Chinese government, certainly they could hire a domestic company to do it, but they would have trouble hiring Anthropic. And is it the fact that the AI is Chinese that creates the risk? If Europe has a producer — right now Europe's top open model is not among the best in the world, but suppose they have another one that is open — does it activate much of the same risk profile you see when the open models are Chinese in origin? So those are my questions that I don't have the answer to: what's additional and different in the information manipulation space, and is it the capabilities, the fact of openness, or the Chineseness that affects the specific risks? Where I come down on all of this is that we're not going to be able to answer these questions right away, but I do keep coming back to the idea that application-specific mitigations are going to be central — whether it's a foreign interference fear or a simple you-don't-want-your-system-to-malfunction kind of fear. People are going to have to figure out how to put walls up around this set of technologies. We're not very good at it yet, but a lot of people are working really hard, so I have some optimism about that. Okay, I'll stop there — there's so much more to discuss. Thank you very much.

- Graham, that was very insightful. All right, I'll turn it over to Sergey for his insights and conversation in a moment — I've got one question for each of you, in reverse order. Graham, if they can fence or alter or shape your inquiries in the geopolitical domain, why can't they do it in the technical domain? In other words, I've used DeepSeek. I asked it a question about optimal Taiwan policy. You won't believe what I got: something completely different from what you'd get from US sources — basically saying it's not a problem, Taiwan is a part of mainland China, end of question. But —

- Why do you think we wouldn't believe that you got that answer?

- But I guess my question is: why couldn't that extend to the technological? If they see the inquiries coming from offshore, or the downloads coming from offshore, could they shape you in a different direction, deny you that level of exposure — you know, what about your work's vulnerability and the like? You're talking about mitigation strategies that are application-specific, but it seems like "made in China" is kind of a warning sign: based on what you're saying, you're going to have to accept some of these challenges or differences. Am I interpreting that wrong?

- I think it's a good point, and the way I would refine it is this: if you are relying on a direct service provided by a Chinese firm — meaning they control not just the underlying model but the application around it through which you are interacting — then that does engage the risk of the service provider targeting individual users with different outputs, with interference of some kind. However, if you're downloading the model itself — the raw weights, meaning this enormous table of interlinked numbers in a matrix — and you run it on your own infrastructure, then any malicious behavior would have to be embedded in everyone's use of that model. Putting in that poison pill would have a deleterious effect for Chinese domestic users and other non-targeted people too. The little wrinkle is that there was a piece of research — I don't have all the details in mind right now, and I thought it was a little iffy, but I would be interested in more research — where some folks prompted some Chinese models to help build website code, saying it was for some cause — I don't remember which — that is against the preferences of the Chinese government, and they thought they observed the code coming out less secure than if the cause was, you know, a coffee shop or whatever.
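The "raw weights as an enormous table of numbers" point can be made concrete: local inference is nothing but matrix arithmetic on downloaded arrays, with no vendor code path through which individual users could be singled out. The following is a toy sketch with NumPy stand-ins, not any real model's weights or architecture.

```python
import numpy as np

# Downloaded "weights" are just numeric arrays; running them locally
# means doing the matrix arithmetic yourself. Any behavior, good or
# bad, must be baked into these numbers for every user identically.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # stand-ins for one layer's weight
b1 = np.zeros(16)               # matrices and bias vectors
W2 = rng.normal(size=(16, 4))
b2 = np.zeros(4)

def forward(x):
    """A toy two-layer forward pass: inference is only this arithmetic."""
    h = np.maximum(0, x @ W1 + b1)   # linear layer + ReLU
    return h @ W2 + b2

x = rng.normal(size=(1, 8))
y = forward(x)
print(y.shape)  # (1, 4)
```

The same arithmetic run twice on the same input gives the same output, which is why targeting a specific downstream user through the weights alone is hard: the "poison" would ship to everyone.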

- So they'd have access to it.

- Well, the question is: could the model have built into it a preference not to help things that are obviously critical of the Chinese government, for instance? That's a hypothetical possibility to me. There's an initial research finding that it might be happening, but it wasn't that solid in my reading. But yes, I don't think we can exclude that outcome. So my answer would be: be conscious of what assurance systems you need, based on your situation.

- So no matter how bad you think it is, it's worse than that, right?

- But if you're building a website for a coffee shop, it's probably irrelevant. Yeah, that's kind of my problem.

- Okay, Holly, great conversation. You talked about the Taiwan strategy, and then you listed some things related to the last two elements of that strategy, which implies that you might have a different idea of what their strategy ought to be. One of the ways I've always thought about risk, in a simplistic old fighter pilot's terminology, is what I call the four Ms. You have to measure the risk, and be confident in your ability to measure it accurately. You've got to minimize the risk in the process that you're a part of — in other words, take out what risk you can. Then you've got to manage the risk that remains; as you point out, and I think Graham would agree, you don't have the luxury of staying out of the fray. There's an old Navy saying that ships are safe in harbor, but that's not what ships are for — you're going to use AI, so you've got to enter into that somewhere. And the final M is mitigation, which I use in terms of what happens after the crisis: how do you deal with it? You used mitigation as part of, maybe, the minimization. But really the question is: how would you characterize the key elements of what a Taiwan strategy ought to contain or consider? I'm not asking for a detailed analysis — just two or three key points that you think are not quite yet correct, given the character of the threat as you've described it and as Graham has indicated.

- A quick answer: I'll try to look at this question from different layers. For example, the data layer. Taiwan has relatively weak cross-border data transfer regulations — even weaker than the US, and the US is itself relatively weak by global standards. Our data protection and data transfer regime by default allows data transfer from Taiwan to China, with only a few exceptions issued by some sectoral agencies. So that's one thing, the data layer. The other is the model layer, in the broad sense: models can generate content, and models and algorithms can recommend posts or information to you. When applying those models and recommendation algorithms, we need to design more safeguards to keep them from embedding censorship or even algorithm-driven propaganda. And then there's the content layer — or maybe better, the platform layer rather than the content layer. What I emphasized in my presentation is that we need to focus more on malicious behaviors rather than on content itself. That is something I don't think our regulations, or the mindset of our regulators, have really adopted: they are still highly focused on content, but as I already highlighted, the threat landscape in Taiwan has already shifted from content to manipulative behavior — you can see that from the 2024 election to the 2025 recall movement.
So if that has been changing in just a few years, our regulators and policymakers need to change their mindset, their strategy, and their regulations at the same time. Otherwise we're not going to address the problem in the correct way. So: different layers, and you need to think about each of them differently. That's what I can think of for now.

- That's a great point, and your emphasis on the manipulation — the manipulative behavior rather than the content — is an interesting perspective that I've not heard before. So, over to you for your insights, please.

- Sure. And I should say, like Graham, that I don't work on Taiwan all the time, but some of the things you talked about sounded very familiar to me. For instance, this attempt to use fraud legislation as a way to punish an unfavored platform: when the Russian government tries to ban Telegram, the stated reason is protecting retirees from phone scams. This supposedly big issue requires very heavy legislation and a lot of technical infrastructure around it, and of course it drives people to VPNs. Same thing — people can maybe survive without Facebook, but without Telegram it's much, much harder. That might in turn lead to what happened in China in some cases, where people started using VPNs to access entertainment, say on YouTube, and then switched to some political content as well. So it could backfire. Generally, I think this idea of mitigation is absolutely the correct approach, and the issues you identify have parallels elsewhere, including in this country: the use of an authoritarian past as a warning against heavy-handed approaches — people would mention McCarthyism and other past problems in US politics — and the political divide over China probably reflects general partisanship and political division in this country. And this switch, which I agree with, from content to behavior to algorithms is not a complete solution here. For example, in the US there was a big pushback against any anti-disinformation measures, and even against research — it targeted our colleagues at the cyber center here at Stanford — and one of the key issues was something called shadow banning. Precisely because it is "shadow," we don't even know the extent of it; some of it might have been imagined, but it was still the reason to pressure Meta — Facebook in particular — as well as Twitter,
now X, to stop any efforts in this area. Nor is market leverage a perfect solution here: look at the European Union versus Musk — the litigation is ongoing, and they pretty much refuse to cooperate. But this emphasizes something very important: market forces operate very strongly here, and they don't always operate to our disadvantage. Sometimes they operate to our advantage. In particular, when bad actors try to use all these technologies to interfere in elections and so on, they might suffer from their own market failures, if you will — such as the corruption that is prolific in this kind of area. When you mentioned an influencer who got 420,000 views on a piece of content: do you know all of that was organic? Or maybe there was some manipulation on the part of the influencer — showing engagement that high to get the $10,000 payout, then using 1,000 of that to pay for the engagement. You know, this was the Internet Research Agency in Russia, run by the late Yevgeny Prigozhin — he got very rich out of it, and the practical effectiveness of it was always questionable, even domestically.

- But he couldn't take it with him.

- Yes, absolutely. But some people are still doing it — you get a black budget; it's a lucrative business. And in his case he really couldn't take it all with him, because it was all in cash — trillions of rubles in cash over the years. Now, switching to the AI sphere: here I again want to emphasize the role of markets. I think the idea that government and market are competitors — two different strategies — is not how this space works. If government is strong and unaccountable, it can allow dangerous experimentation on the part of private companies, both in how data is generated, with no constraints on surveillance, and in how it is utilized, and allow wide adoption in businesses, government services, and so on — while still taking advantage of the results and preventing adverse effects on itself. Something like Anthropic's dispute with the Department of Defense is hard to imagine in many other countries, including China. I would also add that in terms of hardware, the inconsistent US policy helps them in some ways — this issue with the NVIDIA chips that were first barred from sale and are now allowed to be sold. Many people reasonably ask why these are sold to China when the United States doesn't have enough supply right now. But I still won't say it's such a bright future for the Chinese, either technically in general or in using these technologies to their advantage. The problems they encounter are old-fashioned ones, as with previous technologies. Look at human capital: they have strong schools, and on DeepSeek, good research has been done on that here.
It's based on domestic talent — and in this case not just technical but also entrepreneurial talent. It's not like the competition with the USSR; it's much stronger. But the undoing is still there, in the lack of the rule of law and the general protection of rights in the country. Recently there was a big resignation of technical staff at Alibaba, which produces the Qwen model. People here would certainly know more about the reason; I saw two different theories. One is that the Communist Party is taking more hands-on control. The other is market forces: Alibaba moving toward a Google-like model of using cloud as revenue generation, and so closing the model as a way to put a moat around their market share. In any event, it was a really big resignation of their best technical talent, and it could negatively affect their output. So I would say that general market forces and the imperfections of the political system will show up, at least in the medium to long term. On the last question I want to discuss — the one you posed about what is different here, and which aspect creates the risks — I think generally all of the above. The raw capability of these models is staggering, and the latest ones are especially impressive in that respect. But I think the key issue is that whole worldviews are encoded in these models. This goes far beyond even any intended manipulation.
The models are just Chinese, or American — and it would certainly help if there were European ones too, if they can get it together. And it's not surprising, in a way: if you Google a physical address or a phone number in the US, it will give you very detailed information about what you're looking for; for a physical address in Russia or China, Yandex or the Chinese search engines will do the same for you. It was always that way, and it's still true. Even though theoretically these models can translate all the information into some common latent representation that humans don't even understand, they still encode a lot of these worldviews. There was recently a very good presentation here, at the Text as Data conference, by George Tucker and co-authors on these very questions. They show that the models do encode whole worldviews, and I think that is the big challenge. On a more practical level, I do think some existing internet threats — including those used for intelligence purposes elsewhere — are amplified by this capability. Even basic stuff like DDoS attacks: it used to be that I could log into the Social Science Research Network, SSRN, without going through a very slow Cloudflare captcha. Who is DDoSing SSRN? It's crazy — though maybe it's actually an attempt to harvest the data, and if they're protecting themselves from that, they're probably too late. In any event, I think these capabilities amplify certain conventional threats that existed before, but much more strongly now. I will stop here and look forward to the discussion. Thanks — I appreciate it.

- Yal, would you care to respond to any of the comments from Sergey?

- Not for now, because I don't really hear any questions in those comments — just comments, right? So, yeah — nothing from me; over to Graham for now.

- Over to you. I'll just say one thing — there's lots to discuss. I don't know the answer on the Alibaba question, but I think it's probably not party control, probably something more corporate; that remains to be reported. One thing that I hypothesize is an additional risk now in this foreign interference space — not really having to do with the open models, although potentially powered by them — is that the OpenClaw moment has a lot of chaos potential. Not just producing misleading information or messing with discourse, but just flooding the zone with chaos: things getting hacked or going down, different kinds of inauthentic behavior on sites. That has always been hypothesized as part of what would be possible with these capabilities, but now I know a lot of people who would be able to do it, whereas before I would have needed to think about who has worked in government agencies where they've tried that type of thing. So I really do think the agentic angle, so to speak, adds modes of chaos that we haven't witnessed.

- Yeah, and to add to the list: this goes again far beyond intention. These agents, you know, have their own —

- Intentions. Well, this is a fascinating piece, because it gets back to the broader issue that Sergey raised: is this a Trojan horse of some stripe that you're now bringing into your system — not through a firewall, which I would argue we could never make high enough and thick enough that something isn't always going to get through, and that gets to my point about mitigation. Now you're introducing gigabytes of data and software and all the things wrapped up in these models. How do you ensure that it's clean, that it's not an infection? We used to get those warnings, remember — "are you sure you want to open this link?" Now we willingly bring in mechanisms from China that are two orders of magnitude larger. So how does one think about that in this digital age, against the overwhelming attack that could be enabled or impacted by that kind of action on our part? Sergey, how do you think about that?

- Well, with this open/closed question that was mentioned, I think the key issue is that it's coming from inside your computer. You willingly give it access to all your files, and then — these models are not deterministic — you don't know what they will do. In a way it's easy to prevent, but people willingly giving it this power is the real issue. I'll come back to what you started with — the download data that you showed. I understand all the challenges of working with the Hugging Face data, but more generally, again on market structure: will the use of these models sit on our own computers, each individual's, or will we get them as a service from companies, used for particular tasks? If it's the latter, it's easier to control; if it's inside our computers, it's impossible to control.

- I would just add to that: one of the things we found in surveying the open-weights model ecosystem from China is that the Chinese model makers, more than anyone else on earth, are focusing on efficiency in use of the model. Specifically, in the case you're talking about, they'll provide a somewhat-to-much-less-capable version that will run on my phone. I cannot truly run a version of Claude 4.6 on my phone — if they've built that, they haven't shared it with us — whereas I can do that with a DeepSeek or a Qwen model. So for people who have that type of application need, systemically, right now, what are they doing? They must use the Chinese options. There are some others, but you're going to at least experiment with these general models.
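One of the basic tricks behind the phone-sized variants being described is weight quantization: storing each weight in one byte instead of four, accepting a small reconstruction error. The sketch below shows naive symmetric int8 quantization — illustrative of the general technique, not the specific method any particular lab uses.

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

# A stand-in weight matrix, not real model weights.
w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # 4x smaller in memory
print(float(np.abs(w - dequantize(q, scale)).max()) <= scale)  # error within one step
```

Real phone deployments combine this with lower-bit schemes, distillation, and smaller architectures, but the size-versus-fidelity trade is the same.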

- Good point. I apologize for the lack of tent cards. I saw a hand raised over here, please.

- Okay, a quick comment about Alibaba's connection with propaganda. I think the most immediate one is that Jack Ma actually owns the South China Morning Post, and I've seen that in the past few months the South China Morning Post has been doing a lot of soft-power propaganda — for example, stories about the recruitment of younger scientists returning to China. They profiled these very handsome guys and nice women who are young, got their degrees, and are returning. So I think the Jack Ma connection — Alibaba, plus the fact that he owns the South China Morning Post — is an example of the China model, meaning state capitalism and authoritarianism: if you want to be financially and economically successful, then you are also doing something for the state. I think that's the best demonstration of the China model in finance, business, and propaganda.

- Thank you. Thank you. Here — please identify yourself, please.

- My name is Leo. I'm a research fellow at Hoover. It seems to me that our session has focused on seeing China as a threat, with the West — the US and Taiwan — playing defense. My question is about this offense-defense balance. It seems quite clear that China is not impenetrable; it's actually full of vulnerabilities. Look at the Great Firewall, for example: there have been research projects that actually unblocked it for parts of China by overwhelming the servers used to route internet traffic abroad. And other things: OpenClaw, for example, is obviously a security nightmare for anyone who downloads it, and last week the Xinjiang government announced that they're installing OpenClaw on their own computers and issued a 14-point policy guidance to promote its use, even subsidizing individuals and businesses. So now we have this strange phenomenon whereby, on one hand, the Chinese government is incredibly risk-seeking in promoting cutting-edge AI technologies developed outside China, while on the other hand still using Windows XP and Internet Explorer — and I'm not sure their security software is up to date with the latest AI tools. Sergey touched on how worldviews are encoded in the models: American models encoding American civilization, Chinese models encoding Chinese civilization. I would just add that on the surface they might look that way, but the veneer is actually quite thin. If you access the Chinese models through the DeepSeek app or the Qwen app, you get the Chinese versions. But if you download them and run them locally, they're actually overwhelmingly American: the fine-tuning is just a layer in the app rather than something at a very deep level within the model.
So what we actually have right now is American models that are deeply American, and Chinese models that look Chinese but are deeply American too, since they're trained on the same text — the global internet. This idea of America versus China at the model level sort of breaks down when we think of the internet as the civilization, the globally shared information system. So, given all that — given that China is full of vulnerabilities, given that China is obviously trying to attack abroad, in Taiwan and elsewhere, and funding and supporting autocratic regimes to conduct censorship and surveillance and all that — given that China is on the offensive, why are we assuming that the Western strategy must be completely defensive, rather than moving into the offensive space? One obvious argument for the offensive approach is that you establish some kind of mutually assured destruction, so that the other side would not launch into your space knowing that you would launch into theirs. And in this discourse I think Taiwan is rather well suited for that kind of role. The comparison of Taiwan to Israel and China to Iran would break down in places, and be quite unfortunate for our part of Asia, but to the extent the metaphor applies: Israel's existence is rather a miracle in the Middle East, and to some extent Taiwan's would be too, if they don't attempt things outside the box. So here are my two cents, part comment and part question, and I would love to hear what you all think.
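The questioner's claim — that app-level moderation is a wrapper that disappears when you run the downloaded weights yourself — can be sketched in a few lines. Everything below is a hypothetical stand-in: the function names, the blocked-topic list, and the outputs are illustrative, not any vendor's real API or behavior.

```python
# Toy illustration: a hosted app filters around the model, so the
# filter vanishes when the same weights are run locally.
BLOCKED_TOPICS = {"tiananmen"}  # hypothetical app-layer blocklist

def base_model(prompt):
    # Stand-in for raw local inference on downloaded weights.
    return f"answer about {prompt}"

def hosted_app(prompt):
    # Stand-in for a hosted app that screens prompts before the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "[refused]"
    return base_model(prompt)

print(hosted_app("Tiananmen 1989"))   # filtered at the app layer
print(base_model("Tiananmen 1989"))   # same weights, no app filter
```

Note this only captures app-layer screening; the earlier panel discussion pointed out that some compliance is also baked into the weights themselves via training, which a local download does not strip away.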

- Well, thank you. As an old STRATCOM guy, mutually assured destruction kind of warms my heart.

- It's like the good old days.

- There you go. There you go. Things were a lot simpler then. Any response from our two panelists to that intervention?

- Just one quick point: I couldn't agree with you more. I actually talked with a red-team hacker from Taiwan — who is based here right now — and he asked me, why are we not playing a "defend forward" strategy? I can frame it that way, but really it's more of an offensive strategy: we can attack their layers of vulnerabilities; we don't have to play defense all the time. So yes, I couldn't agree with you more. Okay.

- So many comments, so much to say. I guess the thing I'll leave with, on this idea of offense-defense and mutually assured destruction: there's also a whole menu of risks of accidental events that one side might attribute to the other — which, as a STRATCOM commander, is something you probably thought about quite a bit. Just because you intended a signal doesn't mean it's sent and received the way you intended, right? And in this case there could be a hell of a lot of risk of things not intended but perceived. From the perspective that it's in everyone's interest not to get into an accidental confrontation, I think there's actually a substantive common interest among the US, China, Taiwan, and other states in having some way to understand their mutual vulnerability as it emerges in this environment. As much as there's the adversarial side — and that's very much our focus today — this is coming for all of us in one way or another. Yeah.

- That's a great point, a great point. And you know, we're not gonna have time for it, obviously we're over time, but there are risk acceptances and risks embedded in these models themselves too. And whose are those? The creators'? The Chinese government's? They may not match perfectly with yours if you're using a model in an operational context where you're expected to assess and evaluate and deem whether risks are acceptable. My wife's doing a lot of research on this in the School of Engineering, so there's lots of uncertainty surrounding all of this, but I would argue the questions that have been asked are not rhetorical. They really are not. At some point we're gonna have to move toward answers to them in one way or another. Perfection is not our lot; it's not gonna be clear-cut. But as I opined at the beginning, I do think that to the extent we're able to define these answers, they are going to shape the future of our technological competition and also the resilience of democratic institutions, not just in Taiwan but around the world. So I thank you for your contributions today. It's been an incredible day, and I very much appreciate what you brought to the table. Thank you very much.


ABOUT THE SPEAKERS

Jerry Yu is a senior analyst at Doublethink Lab, where he specializes in conducting digital investigations and analyzing influence operations. Jerry has extensive experience observing influence operations during Taiwan's 2022 local and 2024 national elections, including training part-time analysts, managing data collection processes, and publishing reports based on the findings.

Drawing from his election observation experience, he has collaborated on cross-national projects, sharing his experience and knowledge with journalists, NGOs, and researchers across South, Southeast, and East Asia and the Pacific region. These collaborations aim to expose the techniques of influence operations used by threat actors and to share intelligence. During the Russia-Ukraine war, he tracked propaganda spread by the PRC and published the report 'Analysis: How Ukraine has been Nazified in the Chinese information space?'

Before joining Doublethink Lab, he was a research assistant at the Center for Survey Research at Academia Sinica, Taiwan, where he combined traditional social-scientific methods with computational approaches to analyze the process, dynamics, and effects of human communication behaviors, integrating user-log data with self-reported data from surveys and experiments.

Jerry graduated from the Graduate School of Criminology at National Taipei University and also spent a semester training in spatial crime analysis at Temple University in the United States. He is a co-producer and co-host of the "Jianghu 543" podcast, which provides insights into the lives of individuals in Taiwan's criminal justice system.

You-Hao Lai is a practicing lawyer currently pursuing his doctorate at The George Washington University Law School. He serves as Deputy Director of the Democratic Governance Program at DSET. His research explores legal and policy responses to the challenges posed by digital authoritarianism to cybersecurity and the free flow of information. He is actively engaged in various civil movements related to technology regulation and human rights protection. Before joining DSET, he worked at the Cogito Law Office, a prominent firm specializing in public interest litigation in Taiwan. Additionally, he served as a legal and policy advisor to the President of the Judicial Yuan, Taiwan's highest judicial organ. He holds LL.M. degrees from the National Taiwan University College of Law and Harvard Law School.

Wei-Ping Li earned her Ph.D. from the Philip Merrill College of Journalism at the University of Maryland. She serves as an adjunct lecturer at Merrill College and works as a researcher with UMD's Maryland Democracy Initiative. Li is also the research director at FactLink, a Taiwan-based organization dedicated to OSINT (open-source intelligence) investigations and enhancing digital literacy among Chinese-speaking communities.

Li's research focuses on the transnational dissemination of false information, conspiracy theories, propaganda, and content moderation policy. From 2024 to 2025, she was a postdoctoral researcher at UMD, collaborating with Dr. Sarah Oates and Dr. Naeemul Hassan on the "Disarming Disinformation" program, which is coordinated by the International Center for Journalists (ICFJ). Li was also a research fellow at the Taiwan FactCheck Center (TFC) from 2023 to 2025.

Before pursuing an academic career in journalism, she provided consulting services on digital human rights in Asia. She also previously worked as a journalist covering financial and legal topics in Taiwan for several years.

Li is a licensed lawyer in New York State. She earned LL.M. (Master of Laws) degrees from the University of Pennsylvania Law School and Soochow University (Taiwan), as well as a Master of Arts degree in journalism from National Chengchi University (Taiwan).

Graham Webster is a research scholar in the Program on Geopolitics, Technology, and Governance and editor-in-chief of the DigiChina Project at the Center for International Security and Cooperation at Stanford University. He researches, writes, and teaches on technology policy in China and US-China relations.

Before bringing DigiChina to Stanford in 2019, he was its cofounder and coordinating editor at New America, where he was a China digital economy fellow. From 2012 to 2017, Webster worked for Yale Law School as a senior fellow and lecturer responsible for the Paul Tsai China Center’s Track II dialogues between the United States and China and co-taught seminars on contemporary China and Chinese law and policy. While there, he was an affiliated fellow with the Yale Information Society Project, a visiting scholar at China Foreign Affairs University, and a Transatlantic Digital Debates fellow with New America and the Global Public Policy Institute in Berlin. He was previously an adjunct instructor teaching East Asian politics at New York University and a Beijing-based journalist writing on the Internet in China for CNET News. 

Webster holds a bachelor's in journalism and international studies from Northwestern University and a master's in East Asian studies from Harvard University. He took doctoral coursework in political science at the University of Washington and language training at Tsinghua University, Peking University, Stanford University, and Kanda University of International Studies.

Upcoming Events

Wednesday, April 1, 2026
In Science We Trust? Understanding Americans’ Confidence In Science, Scientists, And Scientific Institutions
The Hoover Institution's Center for Revitalizing American Institutions, in partnership with the Hoover Technology Policy Accelerator, invites you to… Hoover Institution, Stanford University
Friday, April 3, 2026
The Man Who Told the Truth: A Film Screening & Discussion Honoring Fang Lizhi
The Hoover Institution Program on the US, China, and the World invites you to The Man Who Told the Truth: A Film Screening & Discussion Honoring…
Monday, April 6, 2026
The Taiwan Relations Act At 47: Taiwan's Evolving Hedging Strategy Amidst Intensifying Global Competition
The Hoover Institution's Project on Taiwan in the Indo-Pacific Region invites you to The Taiwan Relations Act at 47: Taiwan's Evolving Hedging… Herbert Hoover Memorial Building, Room 160