Attendance at this event is by invitation only.

The second session of Challenges and Safeguards against AI-Generated Disinformation, "Detecting Text Ghostwritten by Large Language Models," will feature Nicholas Tomlin and Sergey Sanovich on Wednesday, October 16, 2024, at 4:00 pm in the Annenberg Conference Room, Shultz Building.

Read the paper here

Detecting Text Ghostwritten by Large Language Models

ABOUT THE SPEAKERS

Nicholas Tomlin is a PhD student in Berkeley EECS, advised by Dan Klein. His research focuses on language models, reasoning, and multi-agent interaction. During his PhD, he co-developed Ghostbuster, a state-of-the-art system for detecting AI-generated text. His work has been supported by grants from the NSF and FAR AI and has received media coverage from Wired, Discover, and the BBC.

Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, he was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and continues his affiliation with its Center for Social Media and Politics. His research focuses on disinformation and social media platform governance; online censorship and propaganda by authoritarian regimes; and elections and partisanship in information autocracies. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as a lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports focused on protection from disinformation, including "Securing American Elections," issued by the Stanford Cyber Policy Center at its launch.


ABOUT THE SERIES

Distinguishing between human- and AI-generated content is already a pressing problem in multiple domains – from social media moderation to education – and has spawned a quickly growing body of empirical research on AI detection, along with an equally fast-growing industry of commercial and noncommercial detection tools. But will current tools survive the next generation of LLMs, including open models and models designed specifically to bypass detection? What about the generation after that? Cutting-edge research, as well as presentations from leading industry professionals, in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.
