The Hoover Institution Center for Revitalizing American Institutions webinar series features speakers who are developing innovative ideas, conducting groundbreaking research, and taking important actions to improve trust and efficacy in American institutions. Speaker expertise and topics span governmental institutions, civic organizations and practice, and the role of public opinion and culture in shaping our democracy. The webinar series builds awareness about how we can individually and collectively revitalize American institutions to ensure our country’s democracy delivers on its promise.
The sixth session discussed How Foreign Speech Restrictions Affect American Free Expression with Jacob Mchangama and Eugene Volokh on Wednesday, April 30, 2025, from 10:00 - 11:00 am PT.
Much of our speech to each other uses technology created by companies that operate throughout the world such as Google, Meta (Facebook), X, Microsoft, Amazon, and Apple. Because these companies operate worldwide, they are potentially vulnerable to pressures from the countries in which they operate—if Google has assets or people in Germany or Turkey, then the German or Turkish government can force them to comply with German or Turkish law.
So long as countries try to regulate only what tech companies do within their own borders (e.g., what information Google shows to readers in Germany or Turkey), foreign restrictions end up having relatively little effect on what Americans can say to other Americans. But foreign countries are increasingly asking for worldwide restraints on things that are said on various multinational platforms (for instance, anything said anywhere about those countries' citizens or politicians), sharply risking undermining Americans' free speech rights.
WATCH THE WEBINAR
>> Eryn Tillman: Welcome, my name is Eryn Tillman, an Associate Director at the Hoover Institution, and we'd like to welcome you to today's webinar organized by the Hoover Institution Center for Revitalizing American Institutions, also known as RAI. Today's session will consist of brief opening remarks from our panelists and a facilitated discussion with our moderator, followed by a period where our panelists will respond to questions from audience members.
To submit a question, please use the Q&A feature located at the bottom of your Zoom screen. We will do our best to respond to as many questions as possible. A recording of this webinar will be available at hoover.org/rai within the next few days. RAI operates as the Hoover Institution's first-ever center and is a testament to one of our founding principles, Ideas Advancing Freedom.
The center was established to study the reasons behind the crisis in trust facing American institutions, analyze how they are operating in practice, and consider policy recommendations to rebuild trust and increase their effectiveness. RAI works with and supports Hoover Fellows as well as faculty, practitioners, and policymakers from across the country to pursue evidence-based reforms that impact trust and efficacy in a wide range of American institutions.
To date, our webinar series has covered topics related to transitions of the executive branch, trust in elections, how polling helps us understand what is on the minds of Americans, and the experiences of conservative students on American college campuses. Today we'll be exploring regulations and restrictions on speech in other countries that may impact Americans' right to free expression.
And with that, it gives me great pleasure to introduce today's moderator, Eugene Volokh, the Thomas M. Siebel Senior Fellow and an affiliate at the Center for Revitalizing American Institutions at the Hoover Institution. Eugene is one of our nation's leading legal scholars. During his 30-year tenure as a faculty member of the UCLA Law School, he taught First Amendment law, copyright law, criminal law, tort law, and firearms regulation policy.
He's a member of the American Law Institute and of the American Heritage Dictionary Usage Panel, and the founder and co-author of the Volokh Conspiracy, a leading legal blog. His work has been cited in more than 350 court opinions, including 10 Supreme Court cases, as well as in more than 5,000 academic articles.
He's also filed briefs, mostly amicus briefs, in more than 200 cases, and has argued in over 40 appellate cases in state and federal courts throughout the country. We're thrilled to have Eugene leading the Hoover Institution's program on Free Expression and grateful that he has agreed to lead today's conversation.
Eugene is joined by Jacob Mchangama, the founder and Executive Director of the Future of Free Speech. He is a research professor at Vanderbilt University and a senior fellow at the Foundation for Individual Rights and Expression (FIRE). He has commented extensively on free speech and human rights in outlets including the Washington Post, the Wall Street Journal, the Economist, Foreign Affairs, and Foreign Policy.
Jacob has published in academic and peer-reviewed journals including Human Rights Quarterly, Policy Review, and Amnesty International Strategic Studies. He is the producer and narrator of the podcast Clear and Present Danger: A History of Free Speech and the author of the critically acclaimed book Free Speech: A History from Socrates to Social Media, published by Basic Books in 2022.
Now I'll hand it off to Eugene and Jacob for today's conversation.
>> Eugene Volokh: Thank you very much, Eryn. Jacob, thanks so much for participating in this. It's a tremendously important topic, and there's no one better than you to address it. So actually, before we get into the substance, Jacob, tell us a little bit more about your background studying free speech law.
And in America we say First Amendment law, which, on the one hand, has many technical problems. There are statutes that protect free speech, common law rules that protect it or restrict it, and such. But the other problem is the First Amendment governs, what, about 4% of the world's population.
Free speech, on the other hand, is a universal principle treated differently in different places. So it's very important to study it on an international basis, and you've been doing that for a long time. So tell us a little bit about your background in this area. And just to make clear at the outset, Jacob is from Denmark, which I hear is a foreign country, one of the places the First Amendment doesn't run, and it's part of Europe, where naturally one wants to study more about the rules there.
He now is in America and knows a lot about American free speech law too, but he has been studying the free speech rules in a variety of countries, including outside of both Europe and America, for a long time. So tell us a little bit about your experience, both in Denmark and now in America, dealing with all these things.
>> Jacob Mchangama: Thank you so much, Eugene. It's an honor and a privilege to be part of this. You're right, I was born and raised in Denmark, in Copenhagen, Denmark. And I think for a very long time, well into my 20s, I did not think much about free speech. It was something that I took for granted.
It was like breathing the air. Denmark is a very secular liberal country and free speech was essentially a battle that was won, it wasn't really threatened by anyone. And then as some of you might remember, 20 years ago, a Danish newspaper published cartoons of the Prophet Muhammad which led to a sort of geopolitical crisis where suddenly Denmark became the epicenter of a conflict over the battle over the relationship between the values of free speech and religion.
And I think that particular conflict and sort of the global fallout from it, which I think we're still living with to this day, really set me down the rabbit hole of free speech, maybe radicalized me in a free speech direction, and I haven't been able to extract myself from the rabbit hole since then.
I'm a lawyer by training. I spent some years in corporate and commercial practice, but then became the director of legal affairs of a Danish think tank. I founded a think tank 10 or 11 years ago in Copenhagen focusing on these issues, but since 2020 I have been focusing on global freedom of expression almost exclusively.
And two years ago I set up the Future of Free Speech, an independent think tank at Vanderbilt University, where we study and advocate for what we call a resilient global culture of free speech. So we look both at laws that protect free speech and at norms that have to do with free speech, which I think is quite relevant to the topic that we're discussing today.
Because American free speech is not necessarily undermined legally by foreign laws or norms; it may be more the practical exercise of free speech by Americans that can be impacted by international and foreign developments.
>> Eugene Volokh: So that's very helpful, thanks very much. And it's a nice segue into the sort of the substantive question which I might frame a little bit like this.
So I think many Americans and maybe many people in other countries as well, think of the world as divided into sovereign nations. And the Danes have their own rules and maybe Europeans as a whole have their own rules. And we may agree or disagree and we may sometimes send our vice president over to complain about them to foreign countries.
But we understand the foreign countries, they have their own lawmaking, their own constitutions, we have our own. So we're just going to peacefully coexist with us having our own free speech rules that affect what Americans may say and what Americans will hear. And Europeans have their own, and South Africans have their own.
And the Chinese have very restrictive views of free speech, but that's okay, because that's between them and their people. But it's, of course, more complicated than that, and that's what you're gonna be telling us about. Tell us how it is that Americans' speech rights may be affected by foreign free speech rules, and perhaps even vice versa.
>> Jacob Mchangama: Yeah, so I mean, let's start in the 1990s. This, I think, was a time of free speech optimism. It was driven by a techno-utopian ideal of a free and open Internet. I think these ideals were sort of built into Section 230 of the Communications Decency Act. They were cemented by the Supreme Court's 1997 decision in Reno v. ACLU, which declared the Internet a unique and wholly new medium of worldwide human communication.
And there was an Internet Freedom Agenda, which was a bipartisan pillar of US foreign policy, where Washington and Silicon Valley worked hand in hand to promote ideals of free speech around the world. And for a while, foreign democratic governments and sort of oppressed populations around the world embraced the Internet Freedom Agenda as a harbinger of liberation and progress, culminating in the Arab Spring, which was powered by social media.
But that was then. Since then, we've had two polarizing presidential elections and a pandemic. We've had Brexit, we've had a refugee crisis in Europe. And Americans are now deeply divided over whether social media protects or undermines free speech and what, if anything, should be done about it. But outside the US, the mood has also shifted.
I think a lot of democracies have soured on the American model of Internet freedom, which they associate with Silicon Valley platforms enabling the viral spread of content that is often illegal under their own laws or corrosive to their values and democracies. This was actually something that was predicted by Tim Wu and Jack Goldsmith in their book from 2006, Who Controls the Internet?
Which at the time seemed a bit contrarian and against the zeitgeist of free speech optimism, but today looks depressingly prescient. And over the past decade, governments around the world have launched regulatory efforts to tame what is sometimes called the online wild West. And some of these efforts may have consequences for free speech in the US even if Americans still enjoy the strongest constitutional protection for free speech in the world, because it is increasingly foreign and global standards that these platforms employ.
So we actually saw an example of this just last week. Back in 2020, I think, Facebook, now Meta, created an Oversight Board which can issue binding rulings on content moderation appeals from users and also make policy recommendations to Meta. And one recent case decided last week involved a Facebook video showing a transgender woman being confronted for using the women's restroom at a US university.
And so a US woman filmed this video and she questioned this transgender woman's presence, saying she feels unsafe. And the caption of the video said, male student is using the women's bathroom, why is this tolerated? There were users who found that this video violated Meta's hateful conduct policies.
Meta disagreed. This was appealed to the Oversight Board, and the board sided with Meta. But it did not interpret Meta's content policies by using First Amendment analogies. Instead, it applied international human rights law, specifically the International Covenant on Civil and Political Rights, the ICCPR. And the ICCPR's Article 19, on the one hand, protects free speech while allowing for certain restrictions.
But its Article 20, Paragraph 2 goes further. It actually mandates the prohibition of certain types of hate speech. Which, of course, is very different from the US Supreme Court's Brandenburg v. Ohio test of incitement to imminent lawless action likely to produce such outcomes, under which the video, if you were to use an analogy, would have been protected.
Now, in this particular case, international human rights law supported free expression. So the American user's video was upheld. But in other cases, it has not. The Oversight Board has used human rights law to rule against anti-immigration speech. It has also found Holocaust denial to be unprotected using international human rights law.
It has also relied on decisions from the European Court of Human Rights. And the European Court of Human Rights offers no protection to hate speech and even permits restrictions on blasphemy. To be clear, of course, US platforms have a First Amendment right to set their own content rules.
Section 230 protects them broadly from legal liability when enforcing them. But I think there's good reason to think that international pressure is pushing American companies towards less speech-protective policies and stricter enforcement than they would otherwise have chosen. After all, these are global platforms. Their terms of service and content policies are generally universal; that's at least the ideal.
And so when most of your users are outside the United States, where speech laws are more restrictive, it makes business sense to align with the dominant regulatory climate. This is especially so when the alternative is facing huge fines, national bans, criminal investigations, or even the arrest of local employees.
And this is all stuff that has actually happened under laws like Germany's now-repealed NetzDG, India's IT rules, Brazil's judiciary-led campaign against fake news, and most prominently the European Union's Digital Services Act. At the Future of Free Speech, which I run, we've tried to come up with some data that can maybe give us a picture of the impact, even though it's difficult to make any causal claims.
But in 2023, we did a report where we looked at the development of the hate speech policies of major social media platforms, all of them American: Facebook, Instagram, X, YouTube, etc. And our findings show that platforms have significantly expanded their hate speech policies over time, both in content and in the range of protected characteristics.
So what began as relatively narrow bans on overt racist or hateful speech has grown to include harmful stereotypes, conspiracy theories, and so on. And since 2020, the average number of protected characteristics has more than doubled. And this creates a clear risk of over-censorship to avoid fines.
This is something that is borne out by reports that we've done looking at deleted comments on Facebook and YouTube in France, Germany, and Sweden, where we found that between 90 and 99% of the deleted comments were perfectly legal, and most of them were not even offensive, which is a subjective term.
Many of them were not even about controversial topics. So that suggests that laws like the NetzDG that was passed in Germany and the Digital Services Act incentivize platforms to remove legal content. And if they do so by changing their terms of service or their universal content policies on hate speech, that then also has an impact on users in the US.
And given that the practical exercise of free speech of most Americans is predominantly carried out on social media platforms, that obviously then has an effect. And I briefly mentioned the European Union's Digital Services Act, which might be the most ambitious and sweeping attempt to regulate the online ecosystem.
The DSA is hotly debated; some see it as a huge step towards progress, others see it as a censorship machine. It does include some positive developments like transparency, improved appeals processes, and stronger user protections. But I think it also presents a risk to freedom of expression, with its notice-and-action system and its obligations for very large online platforms and search engines to mitigate systemic risk.
And with the European Commission acting as a regulator. The European Union has been very clear in saying that its ambition is for the Digital Services Act to serve as a global model for online regulation. It was marketed as a global gold standard for platform regulation, which is a nod to the so-called Brussels effect, where the European Union, because of its huge market, its expertise in regulation, and its enforcement mechanisms, is able to essentially set global rules that are likely to be adopted by private companies and also emulated by other countries around the world.
And some of you might remember that a former commissioner, Thierry Breton, a very assertive Frenchman, when he was in charge of the DSA, sent a number of threatening letters to US platforms warning that their content moderation violated the DSA. The most infamous example was when he sent a letter to Elon Musk just before he was about to livestream a discussion with then presidential candidate Donald Trump, warning that it might potentially violate the Digital Services Act.
So that was obviously something that provoked a lot of Americans, saying, how dare a European bureaucrat try to have a say in how an American platform facilitates free speech directly relevant to an upcoming US election? I think even within Brussels, there was a realization that this was a step too far.
I also think that the current administration is openly hostile to the DSA. So the impact of the DSA may be less than under the Biden administration. But under the Biden administration, there were meetings between the White House and the European Commission where they sent out press statements aligning their approaches to fighting disinformation and so on.
And there was a hope in the EU that the US would voluntarily adopt parts of the DSA, like risk assessments and independent audits of social media platforms. So all this is to say that there has definitely been a real impact on US platforms, and thereby US users, when it comes to the DSA.
I think it's uncertain to what degree it will continue to have an impact. With the new administration in place, it's likely to be less direct, and there's likely to be less of an incentive for US platforms to follow the Brussels playbook. So those might be my initial comments.
>> Eugene Volokh: So, again, that's tremendously helpful. It's important to realize the First Amendment restricts the government; it doesn't restrict social media platforms. Free speech is more in danger from the government than from social media platforms. But at the same time, as you point out, for most people, practically speaking, what they can say to the public is chiefly dictated by the social media platforms, because they don't have other mechanisms for doing that.
So it's important not to ignore the influence that massive platforms have on free speech. And of course, you point out it's not just, well, is it government action or is it private company action? It's private company action often pressured by the need to comply with government orders, albeit perhaps from foreign governments.
>> Jacob Mchangama: And I think this is one of the gray areas, because of this concept of jawboning: when does government pressure reach such a level that you say, well, this is no longer just the government having a reasonable, strong interest in what goes on on platforms, saying, hey, we think what you posted is wrong or false or dangerous? I think it's not unreasonable for government to take an interest in what kind of information is being spread.
But where does that cross the line into where you say, well, if you don't remove it, it might have consequences for you, and where does it become state action? And I think some of these laws incentivize, or at least facilitate, untransparent jawboning: they create processes that are not very transparent and not well described, and that gives powerful regulators possibilities to jawbone platforms into addressing content that might not even be illegal. And of course, one of the developments that facilitates this is the centralization of social media, right? It's a very different online ecosystem that we inhabit now than if we go back, say, 15 years, when the blogosphere was dominant, right?
If we were back in the days of the blogosphere, a much more decentralized online system, you could have a blog with a million readers, but very few had that reach. And the government would very likely not take a strong interest in how that blog moderated its content and its user comments, because its impact on the entire online ecosystem was very limited.
That's a very different proposition when you have centralized platforms where billions of users essentially share and access information. And that can have a completely different systemic impact on the online ecosystem of information and ideas than a blog.
>> Eugene Volokh: Right, so one thing one might wanna think about here is recognizing that there are basically 200 sovereign countries, more or less, in the world, and asking what each can do to protect its right to control what happens within its borders while keeping that from spilling over outside its borders. Of course, some countries say, well, American free speech norms have been spilling over into our country for a long time. We need to stop that too.
But now, of course, Americans may be concerned about that as well. So one thing that I know sometimes happens is there's a court order that, let's say, requires the removal of defamatory material, but the defendant doesn't remove it. Maybe the defendant is outside the jurisdiction, or hard to find, or whatever else.
And then people send that order to Google and say, deindex those pages, remove them from Google search results. It could be for defamation, it could be under this right to be forgotten that European countries have developed, or under other kinds of restrictions. And my understanding, and I've studied this matter some, but maybe things have changed.
But my understanding has long been that Google tries to give effect to those orders, even when it's bound by those orders on a national basis. So if there's an order that comes in from a French court that says this material has to be removed, it removes it for people who are viewing it from France.
Or maybe, if the order is based on European law and purports to apply throughout the EU, it removes it for people from the EU. On the other hand, Americans can continue to access those things. So, for example, suppose somebody gets an order against my blog. My blog writes about various court cases.
In the process it mentions the names of people involved. Occasionally I get requests or even demands that I remove posts about people. I say, well, if it's accurate, or if it's an accurate report of a court decision, I'm not going to do it. But imagine that somebody who's a citizen of France gets an order along those lines in France. Google, my understanding, at least from all I've seen, is only going to make it effective for people who are apparently accessing it from France. Now, you could imagine some countries saying, no, that's not good enough. We want to protect our citizens against being defamed or having their private information disclosed all over the world.
So we're going to demand that you do it on an international basis, worldwide basis. And one way we'll make it stick is if you have assets in our country, we will just seize those assets or arrest your people until you act on what we're saying throughout the world.
But while I think that's happened at times, my sense is that's not the norm. So Google does indeed try to enforce these foreign orders in different ways depending on where the user is located. So one question is, is that your sense as well, that there is some degree of this kind of geolocation and geolimitation of these foreign orders in practice, at least by some big tech companies? Or whether, on the other hand, no, in fact courts are pressuring even Google to block search results throughout the world.
And then the next question is, if Google is indeed doing this, is it something that might be reasonable to demand of other social media, or, excuse me, Google is not quite a social media platform here, but other Internet companies? Saying, look, do what you can to try to make sure that foreign restrictions don't bleed over to American users, and perhaps vice versa.
>> Jacob Mchangama: Yeah, so an interesting example of this is that shortly after Russia's invasion of Ukraine, the European Union put a number of state-sponsored Russian media outlets on a sanctions list, which meant that their right to broadcast within the European Union was banned. But it also told Google and social media platforms that they had to de-index search results and remove content from these state-sponsored Russian media outlets on their platforms.
So this was something that Google implemented within the European Union but not in the US. So content that was part of this order did not affect American users, but it did affect European Union users' ability to access Russian state-sponsored media. It's still in place. And the sanctions list has been expanded since then.
And there are various other cases. I mean, sometimes, and I guess it differs from platform to platform, sometimes the platforms will fight these orders, looking at, for instance, international human rights norms or even national laws. And I guess, maybe, also they will look at, is this a democratic country or not?
And I'm sure they also take into account market and business interests. So I don't have a good overview of whether there's a consistent line in this. But you can go and look at the various platforms' transparency reports, and that's also a requirement under the Digital Services Act.
But the data is not always very useful because it doesn't give you all the details. So you're absolutely right that this geo-blocking is a feature. One of the problems with this can be if you say, well, we don't want to fight with the European Union over hate speech, for instance.
So what we will do is we will just adopt a hate speech policy which is more expansive, more restrictive, than what follows even under German law. Germany probably has the most restrictive hate speech law in Europe. So the response of YouTube or Facebook could just be to say, we don't want to have these running battles.
So instead we just say our hate speech policies are just much broader. And that is where we are, or at least were until very recently, when we saw that Facebook changed its hate speech policy. It's no longer called a hate speech policy, it's now called hateful conduct, and it is, I think, slightly less restrictive.
But that's the pernicious consequence of this: you're incentivized to just say, well, we don't want to risk not being in compliance with the Digital Services Act or national laws, because under the DSA you could risk fines of up to 6% of global turnover. Under NetzDG, this German law, I think fines were up to 50 million euros.
So if you're Google or Facebook, do you want to go to bat for neo-Nazis, or do you want to be good friends with the European Commission and the German government? You probably want to be good friends with the latter. And so you just adopt more speech-restrictive terms, and then you err on the side of removal.
That makes more sense. So that's one of the dangerous consequences of this, I think.
>> Eugene Volokh: Great, thanks very much. So there are some questions that have come in, and one of them is also a question I had, so I'm particularly pleased to ask this. So it looks like European governments are pressuring social media platforms into doing certain things that affect Americans.
One logical entity to step in and protect the rights of Americans would be the US government. And there are at least two ways it can do that. One is it could try to order the platforms not to comply with European law. That could put the platforms in a difficult position.
But maybe if we push harder than the Europeans, they'll go our way. Now, given the NetChoice decision from last year, it seems likely that, at least as to certain things, the platforms could say, hey, we have a First Amendment right to remove certain material, at least from the main feeds.
Maybe it's different as to whether they can remove it from users' own pages, but we have the right not to promote certain materials as part of our main feeds. And just because Europeans are pressuring us to do this doesn't take away our right, vis-a-vis the US government, to do this.
But maybe there's still some room for the US government to act. The other possibility, of course, is that the US government can try to pressure the Europeans and say, look, this is not acceptable to us, and we will use what leverage we have to say, look, you can't pressure US companies to do that.
At the very least, you have to accept it when US companies try to geolocate, try to make sure that they follow your orders only in your countries and not in our country. And maybe you shouldn't be pressuring them even in other ways, because that's contrary to our public policy.
You're trying to enforce your policies, we're trying to enforce our policies about protecting Americans' free speech rights. So let's see who's tougher, let's see who's got more weight to throw around in this kind of situation. Or, perhaps another way of putting it, let's see if we can work out an arrangement, a deal of some sort.
Is your sense that something like that is happening? Is there some agency within the federal government that is looking out for this? Or is it something that the government, either under the Biden administration or the Trump administration, has had little interest in?
>> Jacob Mchangama: I think that under the Biden administration, there was a time, especially after 2016, when there was a real paradigm shift in how the role of social media was viewed in democracies: they were no longer these positive forces for uninhibited and robust speech supporting democracies.
They were instead these malicious actors that spread disinformation and hate speech, undermining democracy. I think that especially among Democrats, there was a view that more needed to be done. And I think they were envious, maybe. I remember Hillary Clinton congratulating the European Union when the DSA was adopted.
On the other side, it's quite clear that Republicans have had different gripes: they have tended to say that these social media platforms censor conservatives. And I think it's quite clear, you saw it partly in JD Vance's speech in Munich, that the Trump administration thinks European governments are restricting free speech in ways that are, to quote Vance, shocking to American ears.
And they also view not only the free speech restrictions but, I think, the European Union's attempts to regulate US tech companies generally as a geopolitical move that they want to counter. I'm not sure how much free speech is the driving force of this, given what the current administration itself is doing on free speech internally within the US.
I don't think it has a particularly strong position from which to lecture the Europeans, given what has been going on in the past 100 days. But there have definitely been noises coming out of the administration that laws like the Digital Services Act are an attempt by the European Union to silence free speech within America.
But you're also right that in many countries the other complaint is that America is colonizing the rest of the world with its free speech norms, a sort of cultural imperialism that it's been imposing on everyone else. And I think there's some truth to that.
I think it's a particularly benign form of cultural imperialism that I wholeheartedly support. For the past 80 years or so, the US has had a record that is not perfect. But if you go back to FDR's 1941 Four Freedoms speech, he talks about these four freedoms that he sees as the basis of a new global world order.
And the very first one is freedom of speech for everyone, everywhere in the world. His widow, Eleanor Roosevelt, fought a very principled, laudable fight for speech-protective standards in international human rights conventions, fighting bitterly against Soviet attempts to include bans on hate speech and laws against disinformation. So, as I said, I think America has generally had free speech as an important part of its foreign policy.
And I think that up until 10 or 15 years ago, Western Europe and the US were aligned. The differences between the US and the European free speech traditions were relatively minor when you compared them to other parts of the world: yes, we might ban Holocaust denial, we might have some restrictions on free speech, but both agreed that you have the right to criticize the government, that you shouldn't have political prisoners, and so on. So the US and Europe were essentially fighting the same fight, with minor differences on where to draw the line. But those differences have become much more significant now, in the age of social media and the online world.
And so I think the US and Europe have drifted further apart on those standards and on how to regulate them.
>> Eugene Volokh: Makes sense, I think you're quite right on that. There are a couple more questions. One of them I think I know the answer to; let me offer it and then see your reaction. It may be a relatively simple thing to answer, but it's actually important to keep in mind.
So the question is: how can any country enforce its court orders in another country where it has no jurisdiction and no enforcement mechanisms? In effect, aren't the rulings in many countries merely an exercise in futility, given the cost of enforcing them? And if enforcement is possible, has the US agreed to enforcement of the Digital Services Act?
And my understanding is that the answer is chiefly that many of these large social media platforms have assets and employees in foreign countries. So yes, if Denmark decides to go after me because I posted something it disapproves of, it can't really do much except make it very dangerous for me to visit Denmark, maybe even the rest of Europe, but still not that much.
On the other hand, if Meta (Facebook) has an office in Denmark, or Google has an office in Denmark, then Denmark could say: unless you do what we want in America, we will seize your assets in Denmark or elsewhere in Europe where our writ does run, or maybe even arrest your employees.
Am I right in understanding that?
>> Jacob Mchangama: Yeah, no, and this has happened in India. Of X, Google, and Facebook, at least two have had their offices raided by police and employees detained or arrested. Many countries have what you might call hostage laws.
So you can only operate in that country if you have a physical presence there; you need to have a designated local officer of a kind. And this is obviously a way to exert pressure, which makes enforcement much more efficient, rather than just writing a letter to a headquarters in Palo Alto, where Mark Zuckerberg or whoever can just say, well, what are you going to do about it?
How many battalions do you have? So that's definitely something that has become a feature. And this is also why countries like Brazil, the largest democracy in Latin America and a significant market for social media companies, the European Union, a huge market, and India, of course, the largest democracy in the world, have much more pull.
My father is from the Comoro Islands off East Africa, a small island nation with maybe 500,000 inhabitants, one of the poorest countries in the world. If the Comoros were to adopt a Digital Services Act, I don't think Mark Zuckerberg or Elon Musk or anyone else would quake in their boots; they would not comply with anything coming out of it.
So obviously, the more geopolitical muscle you have as a country, the better your bargaining chips and the better your options for trying to enforce your own laws, even vis-à-vis the big global tech companies. And remember that Elon Musk, a self-professed free speech absolutist, for a while was defying Brazil, but he ended up caving, essentially giving in to the demands of the Brazilian judiciary.
So when you have these hostage laws, if you want to be a bit polemical, and when social media companies have significant assets in a country, that country's laws suddenly have much more bite.
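The geolocation approach mentioned earlier, where a platform complies with a takedown order only inside the ordering country while leaving the post visible everywhere else, can be sketched roughly as follows. This is a minimal illustration, not any platform's actual implementation; all names and data are hypothetical.

```python
# Per-country geoblocking sketch: each takedown order is scoped to the
# issuing country's ISO code, so the post stays up for viewers elsewhere.

# Hypothetical order store: post ID -> countries where the post is blocked.
takedown_orders: dict[str, set[str]] = {
    "post-123": {"DE"},        # a German court order blocks only in Germany
    "post-456": {"TR", "DE"},  # blocked in Turkey and Germany
}

def visible_to(post_id: str, viewer_country: str) -> bool:
    """Return True if a viewer geolocated to this country should see the post."""
    blocked_in = takedown_orders.get(post_id, set())
    return viewer_country not in blocked_in
```

The point of contention in the discussion is precisely whether governments will accept this country-scoped enforcement or instead demand that the entry be removed worldwide.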
>> Eugene Volokh: Right, well, so another question asks whether there may be some technical solutions to this.
Although the interesting question is what the market will think of these technical solutions. So let me read it: governments and organized interests seem likely to pursue their political goals by seeking speech suppression through centralized content moderation. Because, yeah, if you just jawbone, or threaten retaliation or arrest against the management of a company, then you can get quick results.
So the question is: are decentralized options possible? For example, Bluesky is based on protocols, not platforms; I believe Mastodon was framed much the same way. The model looks toward competition in content moderation rather than centralized moderation. What do you think about the prospects of the Bluesky model of social media?
>> Jacob Mchangama: Yeah, I think decentralization is an important step forward. But it's interesting: Bluesky, I think, recently complied with court orders from Turkey to block a number of accounts in Turkey. And of course, Turkey is a country with very speech-restrictive laws, where President Erdogan has led a very illiberal country, one which is now, I think, turning fully authoritarian, and where he has often used restrictions on online speech to cement his rule.
So that means even Bluesky is not immune to this development. But I do think a more decentralized ecosystem of social media is a way forward, and also one where maybe users have more control over what type of content they want to be confronted with. Because, for instance, when you look at the more speech-restrictive policies of US companies, it's not only because of pressure from foreign governments in Europe.
It's also that a lot of Americans disagree about where the limits should be drawn. Various interest groups in the US say: hey, we don't like anti-Semitic speech; we don't like Islamophobic speech; Facebook, you should remove this; Facebook, you should remove that. But if you provide users with more control over what type of content they want to be confronted with, maybe by allowing them to curate their own third-party-developed feeds, then you reduce the demand for centralized and more restrictive content moderation, and the supply will maybe also drop.
I think that's an interesting and not implausible theory. The central problem, I guess, is that most people use social media not for political purposes, not because they're heavily invested in debates and controversies like this, but because they wanna connect with family or share cat videos. And so for them, the ease of centralized platforms matters a lot.
You really need to be heavily invested to say, well, I want to give up the ease of having everything in front of me and opt into various decentralized models that require a lot more of the user. And that, unfortunately, I think is an impediment to this.
But of course, there are technological innovations, and I think a lot of people are working on developing the best of both worlds: decentralized social media platforms where it's also much easier for users to avail themselves of all the benefits.
But it is a huge ask of people who just use these platforms to connect with friends or family and who are not particularly interested in political speech.
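The user-controlled moderation idea just described, where people opt into third-party curation rather than accepting one central moderator's choices, can be sketched in miniature. This is a hypothetical toy model, not the design of Bluesky, Mastodon, or any real platform; all class and label names are invented for illustration.

```python
# Sketch of user-chosen moderation: third-party providers attach labels to
# posts, and each user decides which labels to hide from their own feed.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    labels: set[str] = field(default_factory=set)  # labels from third-party providers

@dataclass
class User:
    name: str
    hidden_labels: set[str] = field(default_factory=set)  # labels this user opts to hide

def curate_feed(posts: list[Post], user: User) -> list[Post]:
    """Filter the shared pool of posts by the labels *this* user chose to hide."""
    return [p for p in posts if not (p.labels & user.hidden_labels)]
```

Two users with different `hidden_labels` sets see different feeds from the same underlying posts, which is the sense in which moderation becomes a matter of competition and choice rather than central decree.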
>> Eugene Volokh: Great, thanks very much. So there's another question that is a little tangential to this, but like all tangents, it touches, right?
And I think it may be relevant. It's about Section 230. America has this broad protection, probably beyond what is required by the First Amendment, under Section 230, which basically says that social media platforms are, generally speaking, not legally responsible for material posted by their users that is defamatory or invasive of privacy.
And as a result, there's a lot more posted that is defamatory or invasive of privacy. In theory you can go after the people who posted it, but in practice that may be very difficult. My understanding is that this is not the norm in many other countries, and that in fact you could go after the platform if you can show that something is defamatory.
Although, of course, that may lead to over-chilling, as the platform tries to avoid litigation risk by proactively removing too much. What's your view of Section 230: does it make sense in the US, and is it something other countries should adopt as well?
>> Jacob Mchangama: Well, I'm just finishing revisions of the manuscript of a new book on the future of free speech, which I'm co-authoring with Jeff Kosseff, who wrote the treatise on Section 230. So, not surprisingly, I buy into Jeff's, I think, convincing argument for why Section 230 really has been essential to online free speech, and one of the reasons why the US has been a world leader, completely dwarfing Europe for instance, in innovation in platforms that depend on user-generated content.
That would be very difficult if you did not have those protections in place. Imagine trying to operate a Facebook without Section 230: you'd be heavily incentivized to remove much more content, and that would defeat the purpose of platforms that depend on user-generated content.
Now, both the E-Commerce Directive in the European Union and the Digital Services Act do shield platforms from intermediary liability to a certain extent. But the DSA basically says that if you have received a notification about illegal content, which could come from a national authority but could also come from a trusted flagger, then you should expeditiously remove it.
And if you do so, then you're not liable. But there is also no general monitoring obligation. So the DSA does not mean that Facebook or X has to monitor all content on its platform ex officio; only when it's been made aware of illegal content does it have an obligation to examine it and then remove it if it is illegal.
So there is a qualified protection, but not as extensive as Section 230. I think Section 230 is a better way forward than the alternatives. But I think it's important for free speech advocates to acknowledge that it also means a lot of ugly stuff is put out there that can have harms and costs.
I'm generally very skeptical of laws against disinformation. I also think that the threat from disinformation has been hyped into a sort of elite panic. But it's also true that disinformation can at times lead to very serious harms. It can be a serious problem for democracies; it can make collective action much more difficult if populations are cleaved into two polar-opposite blocs who just don't share any set of facts.
I happen to believe that when I weigh the harms against the benefits, I come down on the side where the benefits outweigh the harms. And when you think about legislating, what would the consequences be? Would a law make polarization go away?
I think that's quite naive; I think that's a cure worse than the disease. James Madison wrote beautifully about that in the Report of 1800, criticizing the Sedition Act. He recognizes that free speech comes with these harms, but that the effects of censorship, if you like, are much worse.
So that's where I come down on it.
>> Eugene Volokh: Right, well, very helpful. Thanks very much. Let me just close with one last question, which is also a question I have, but one that builds off a question asked by a member of the audience. It's basically: what are we gonna do about it?
The question is, are there active discussions about any proactive or affirmative approaches to push back against this pernicious threat? So you've identified, I think, a very serious problem. What's the solution?
>> Jacob Mchangama: Yeah, so let's take a look at the Digital Services Act. It would have been interesting if the Digital Services Act had focused on transparency and researcher access, for instance, rather than on notice-and-action and systemic risk.
That would have allowed us to say: okay, there is this presumption among some that social media platforms are awash in illegal content and disinformation, but is it true, and to what extent? So I think transparency is something that can help us have a more qualified discussion about these issues.
Also, as we discussed earlier, I think decentralization is part of the solution. Daphne Keller and Francis Fukuyama have written about the potential of middleware, where third parties come in and say: we can develop content moderation systems that users may opt into, giving them more options; and that could then be adopted by Facebook rather than Facebook being in charge of it.
This is another model that I think could be interesting. But essentially, we need to uphold a strong culture of free speech. We have to be really aware of the benefits; I think we tend to take the benefits of free speech for granted. Look at what's going on in Brazil.
I think that's quite frightening. Look at what's going on in India, and also, frankly, in Europe. Think about the fact that recently a journalist from a right-wing newspaper in Germany received a seven-month suspended prison sentence and was fined for a doctored meme of the German interior minister holding a sign saying "I hate free speech."
And this interior minister is someone who has actively reported several people to the police for violating speech restrictions. I think that is a dangerous development, and even more so the fact that authoritarian states around the world are obsessed with controlling the online sphere. This goes for China; it goes for Russia.
And I think the fact that these regimes are so obsessed with controlling the online sphere tells you something very significant: authoritarian states know full well the benefits of online free speech to a free and open society. And I don't think democracies or their populations should lose sight of that.
>> Eugene Volokh: That's an excellent closing for our conversation. Jacob, thank you so much for joining us. Eryn, thank you so much for introducing us and helping organize this. And thanks more generally to the Hoover Institution for putting all this together. So, Jacob, many thanks, and I much look forward to many further conversations.
>> Jacob Mchangama: Likewise.
>> Eryn Tillman: Thank you, Eugene; thank you, Jacob. What a great discussion. We really appreciate you, the audience, for your participation and good questions, and the events team for all your work to put this together. And I wanna let the audience know that this recording will be available on the Hoover event webpage in about three to four business days.
And our next webinar is only one week away. It will focus on what the academy can do to build strategic competence, with a particular eye on the importance of reinvigorating the study of history at the post-secondary level. Hoover Fellow Stephen Kotkin will moderate a conversation with Lt. Gen. H.R. McMaster next Wednesday, May 7th, from 10 to 11 am Pacific Time.
And you'll find in the chat a link to our RAI webinar series webpage. You can visit that to sign up for the next session, access recordings of previous webinars and subscribe to our RAI newsletter to receive updates on upcoming events. Have a wonderful rest of your day and thank you again for joining.
ABOUT THE SPEAKERS
Jacob Mchangama is the Founder and Executive Director of The Future of Free Speech. He is a research professor at Vanderbilt University and a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE). He has commented extensively on free speech and human rights in outlets including the Washington Post, the Wall Street Journal, The Economist, Foreign Affairs, and Foreign Policy. Jacob has published in academic and peer-reviewed journals, including Human Rights Quarterly, Policy Review, and Amnesty International's Strategic Studies. He is the producer and narrator of the podcast "Clear and Present Danger: A History of Free Speech" and the author of the critically acclaimed book Free Speech: A History From Socrates to Social Media, published by Basic Books in 2022.
Before coming to Hoover, Eugene Volokh spent 30 years as a professor at UCLA School of Law, where he taught First Amendment law, copyright law, criminal law, tort law, and firearms regulation policy. He is a member of the American Law Institute and of the American Heritage Dictionary Usage Panel, and the founder and coauthor of The Volokh Conspiracy, a leading legal blog. His work has been cited in more than 350 court opinions, including ten Supreme Court cases, as well as over 5,000 academic articles. He has also filed briefs (mostly amicus briefs) in over 200 cases and has argued in over 40 appellate cases in state and federal courts throughout the country.