The fifth session of Challenges and Safeguards against AI-Generated Disinformation will discuss Watermarking for AI-Generated Content with Xuandong Zhao and Sergey Sanovich on Wednesday, December 4th, 2024 at 4:00 pm in HHMB 160, Herbert Hoover Memorial Building.

Read the paper here

Watermarking for AI-Generated Content

ABOUT THE SPEAKERS

Xuandong Zhao is a Postdoctoral Researcher at UC Berkeley as part of RDI and BAIR, working with Prof. Dawn Song. He earned his PhD in Computer Science from UC Santa Barbara, where he was advised by Prof. Yu-Xiang Wang and Prof. Lei Li. His research lies at the intersection of Machine Learning, Natural Language Processing, and AI Safety, with a particular emphasis on Responsible Generative AI. Xuandong has published papers at top-tier machine learning and natural language processing conferences, including NeurIPS, ICML, ICLR, ACL, EMNLP, and NAACL. He is a recipient of the Chancellor's Fellowship from UCSB and received the AdvML Rising Star Award (2024). He has interned at Microsoft and Google and holds a B.S. in Computer Science from Zhejiang University (2019).

Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, he was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and continues his affiliation with its Center for Social Media and Politics. His research focuses on disinformation and social media platform governance; online censorship and propaganda by authoritarian regimes; and elections and partisanship in information autocracies. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as a lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports focused on protection from disinformation, including “Securing American Elections,” issued by the Stanford Cyber Policy Center at its launch.


ABOUT THE SERIES

Distinguishing between human- and AI-generated content is already an important enough problem in multiple domains, from social media moderation to education, that there is a quickly growing body of empirical research on AI detection and an equally quickly growing industry of commercial and noncommercial detection tools. But will current tools survive the next generation of LLMs, including open models and those designed specifically to bypass detection? What about the generation after that? The cutting-edge research and presentations from leading industry professionals in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.
