Attendance at this event is by invitation only.

The second session of Challenges and Safeguards against AI-Generated Disinformation will discuss Detecting Text Ghostwritten by Large Language Models with Nicholas Tomlin and Sergey Sanovich on Wednesday, October 16, 2024, at 4:00 pm in the Annenberg Conference Room, Shultz Building.

Read the paper here: Detecting Text Ghostwritten by Large Language Models

ABOUT THE SPEAKERS

Nicholas Tomlin is a PhD student in EECS at UC Berkeley, advised by Dan Klein. His research focuses on language models, reasoning, and multi-agent interaction. During his PhD, he co-developed Ghostbuster, a state-of-the-art system for detecting AI-generated text. His work has been supported by grants from the NSF and FAR AI and has received media coverage from Wired, Discover, and the BBC.

Sergey Sanovich is a Hoover Fellow at the Hoover Institution. Before joining the Hoover Institution, he was a postdoctoral research associate at the Center for Information Technology Policy at Princeton University. Sanovich received his PhD in political science from New York University and continues his affiliation with its Center for Social Media and Politics. His research focuses on disinformation and social media platform governance; online censorship and propaganda by authoritarian regimes; and elections and partisanship in information autocracies. His work has been published in the American Political Science Review, Comparative Politics, Research & Politics, and Big Data, and as a lead chapter in an edited volume on disinformation from Oxford University Press. Sanovich has also contributed to several policy reports focusing on protection from disinformation, including “Securing American Elections,” issued by the Stanford Cyber Policy Center at its launch.


ABOUT THE SERIES

Distinguishing between human- and AI-generated content is already a pressing problem in multiple domains – from social media moderation to education – and it has produced a quickly growing body of empirical research on AI detection and an equally quickly growing industry of commercial and noncommercial applications. But will current tools survive the next generation of LLMs, including open models and those designed specifically to bypass detection? What about the generation after that? The cutting-edge research and presentations from leading industry professionals in this series will clarify the limits of detection in the medium and long term and help identify the optimal points and types of policy intervention. This series is organized by Sergey Sanovich.
