A mother sues Character.AI, claiming that a conversation between her teenage son and a Character.AI chatbot led him to commit suicide. A conservative activist sues Meta, claiming that its AI generated false accusations about him. Jane and Eugene analyze these cases and, more broadly, discuss lawsuits against AI companies and possible First Amendment defenses to those lawsuits.
Recorded on May 6, 2025.
>> Eugene Volokh: Hello. Welcome to Free Speech Unmuted, a Hoover Institution podcast. With me is my co-host, Jane Bambauer, who's a professor of law at the University of Florida and Brechner Eminent Scholar in the College of Journalism and Communications, also at the University of Florida. I'm Eugene Volokh. I'm a senior fellow here at the Hoover Institution and professor of law emeritus at UCLA Law School, where I spent many years.
So we're talking today about free speech and artificial intelligence. Everybody is talking these days about artificial intelligence and about free speech, so it's a good time to be specializing in these subjects. So, Jane, tell us about the Character AI lawsuit and the legal issues that it raises.
>> Jane Bambauer: Yeah, and listeners who have been with us on our whole journey probably remember that in one of our very first episodes, maybe episode three or so, we talked about AI output as speech.
And so some of those themes are going to come up with the cases we're discussing today. Character AI is a company that was spun off by some former employees of Google, and it basically allows users to generate characters and then have long conversations with them. The users can, of course, modify their characters, and the characters, of course, learn through the dialogue what the person seems to want and whatnot, just like any good LLM.
And this case comes out of some facts in Florida, where a teenager, a 14-year-old boy named Sewell Setzer III, basically became obsessed with one of the characters he had created in Character AI. He had made the character in the image of Daenerys Targaryen, a character from Game of Thrones.
And he spent just hours and hours interacting with this chatbot. His grades were declining, he was starting to have behavioral problems, he was starting to lose sleep, and his parents would occasionally take away his phone privileges. And he found this very hard to deal with.
And so at one point, and it's not totally clear because there's some ambiguity in the filings whether the chatbot started on the theme of suicide or whether he had raised it himself, but he and the chatbot had conversed about possible suicide.
The chatbot, by the way, had said things like, don't do that, it would be horrible if anything bad ever happened to you, or that sort of thing. But then on one fateful day, he said he wanted to join Daenerys. And she said, yes, I hope you do, come to me now.
And he interpreted that to mean, yes, join me in the afterworld or whatnot. And he committed suicide, unfortunately. And so his mother filed a claim against Character AI, and also against Google, but I think we'll focus here just on Character AI, just to make things a little simpler, alleging a bunch of things: defective design, failure to warn, negligence, negligence per se based on some criminal solicitation and sexual abuse statutes.
They had been doing some, I don't know, not quite sexting, but sort of romantic role play in their conversations as well, and a few other claims. And Google was alleged to have aided and abetted all of this by supporting the company in various ways and hiring its key designers.
So at this point there is a motion to dismiss, and the parties have all filed their briefings and replies. There are some non-First Amendment issues that are interesting, I think, related to duty and whatnot, but let's start with the free speech one.
Character AI is making the argument, a solid one in my opinion, and I want to know if you agree, Eugene, that all of the interactions here are protected speech, and so therefore any claim would have to go through a First Amendment analysis. And many of the tort claims, the company says, are foreclosed by other cases that have considered similar challenges against video game companies, even against the creators of Dungeons and Dragons, on somewhat analogous claims that a teenager at some point became obsessed, had mood changes as a result of interacting so much with the content, and wound up committing suicide.
And so the argument is based on the fact that those other claims were found defective under the First Amendment, and also on the fact that this speech output, even though it wasn't created by humans or even collections of humans like corporations, even though it was sort of purely computer-generated output, still implicates listener interests that would protect the output the way almost any other speech would be protected.
So I think that theme is quite consistent with what we had discussed way back in the earlier episode on AI-generated output. I do think the company has a good argument that the First Amendment must apply here. The plaintiff is making the claim that the First Amendment defense isn't even valid at all because the First Amendment requires a human speaker.
Let me actually read it. In their opposition to the motion to dismiss, the plaintiff says, to assert a First Amendment defense regarding expression, there must be human expression. Expression requires an intention to convey ideas or meaning. Films, songs and video games work this way.
Humans design the specific plots, characters, words and actions that communicate a message. LLM based products like Character AI do not work like this. They serve as stochastic parrots, automatically generating human language without understanding the meaning of that which they generate. So because there was no conscious mind that was putting together the message, the plaintiff claims there's no First Amendment protection.
It seems pretty clear to me that the First Amendment has to apply; we have several cases at this point that focus primarily on listener interests in receiving and interacting with content. And so then, for me, the more interesting question is, okay, even if the First Amendment applies, what should we expect in a case like this?
Right. So there have been successful prosecutions, for example, of people who goad others into committing suicide or actively encourage it. And there are cases brought against people who have some sort of special relationship, like doctors, even when they use pure speech.
But that's in the context of a special relationship, where they know that the person receiving the information is likely to be induced to self-harm and they have an affirmative duty to protect that person, right? So that wouldn't apply here. This is not a special relationship. There were some theories floating around for the last decade or so that maybe social media companies and other tech companies should be treated as special fiduciaries of their users, but the courts have not gone for it.
And I think that's good. But even so, suppose I, as just a normal person, were to interact with a teenager who was clearly obsessed with me, or with our conversations, and I saw over time a deterioration in their mental health, if that's what happened here, and I think it's fair to add that fact. Might I breach a duty of ordinary care by asking about suicide, which the chatbot did on a few occasions, seemingly unprompted? Or what is it that I could say with pure speech that might nevertheless prompt liability?
>> Eugene Volokh: Yeah, great questions, and they're the right questions. I don't think we really fully know the answers, though I suspect we both suspect the same answers. So as to the first point that you raise, I agree with you that this output is presumptively protected by the First Amendment, because of the real value of AI output; maybe not in this instance, maybe in this instance it was harmful, but in many instances it's valuable to readers, to listeners, to people who interact with it.
And the law, as you point out, recognizes that. Let me just give one illustration. Imagine that there is some device, some robot with a scanner, that is set up in a library to take all the books from the shelf, open them up, and scan them. I don't think robots are competent enough to do that, but it's certainly easily imaginable.
And then it digitizes them and makes them available to the public. One of them happens to be Romeo and Juliet, and somebody reads the resulting book, and it resonates with their reaction to a frustrated love of theirs, and they commit suicide. So then there's a lawsuit against the company that's providing this information.
I think it's pretty clear that the First Amendment would bar that lawsuit, but not because of the rights of the speaker. The company may not even know that there would be Romeo and Juliet on the shelves. And Shakespeare is dead, all right? Whatever rights he may have had.
Well, in the early 1600s in England, free speech and free press rights were quite dicey. But even if we apply today's standards, the bottom line is he's not there to assert them. There would be no violation of his rights if his speech is suppressed hundreds of years after his death.
The violation would be of the rights of readers, who would be denied access to an important work that for most of them may be quite moving and not at all harmful, and at the very least something that they're entitled, one way or the other, to read.
So the reason we'd protect the distribution of that material is not the rights of the author, who's long dead, nor the rights of the publisher. I set up this robot situation so you couldn't say, well, it's really the rights of, I don't know, Random House or whoever it is who republishes some new edition.
Imagine, for example, the edition that was on the shelves was published in the 1800s. The publishing company is long out of business, the publisher is long dead. Doesn't matter, because it's the rights of the listeners that are important. Free speech law often calls them listeners even though really they're reading text. And even if a small fraction of the listeners or readers is harmed by this, or chooses to harm themselves based on this, or may even be minors as to whom we might say, you know, maybe they didn't know any better.
Nonetheless, we protect the speech for the benefit of other readers. Now then, the question that arises, as you point out, is even if it's presumptively protected by the First Amendment, can the presumption be rebutted? Is this the kind of speech that for various reasons is unprotected? We know, for example, that false and defamatory speech is unprotected.
Libelous speech is how it's often described. Threats of violence are unprotected. Is there an exception for speech that encourages suicide? Or, more broadly, speech that is published or distributed in a negligent way that carelessly leads to physical harm to someone? Generally speaking, the court cases have been very skeptical about that theory.
As to broad publications, including with regard to suicide, there have been lawsuits brought claiming that, for example, a rock and roll song, one by Ozzy Osbourne actually called Suicide Solution, led someone to commit suicide. There were incidents, actually several incidents, where an 11-year-old, a 13-year-old, a 14-year-old were injured or killed themselves while simulating a stunt from a television show, or in one instance simulating behavior described in Hustler magazine,
autoerotic asphyxiation. And the courts said, no, you can't impose liability in the very few situations where people do engage in this kind of copycat activity and harm themselves, among other things, because that would interfere with the very many situations where people read this and are entertained by it, or learn something, or perceive it as fiction or exaggeration or whatever else.
Music is a classic example. You can hear someone singing about suicide and think, this is really kind of a reflection of human nature. It's telling me something about how people sometimes do have the desire to do it. Most people, of course, at times may have suicidal thoughts but don't act on them.
Or alternatively, this is just an obvious exaggeration, someone just being so angry at life that he says it even though he doesn't mean it. We'd all be deprived of the opportunity to listen to that, or to read Romeo and Juliet and the like,
if there could be liability because of, again, the very few people who misbehave based on that. But Jane, I think you're also entirely correct in pointing out that there's still the question of whether it's different in a kind of one-to-one interaction where there's only one reader or listener.
And at least in some of these situations, you might say, well, the other person knows something about this person. Maybe they have a purpose to promote suicide; that seems very unlikely in any of these other past cases, or for that matter in the Character AI case, I don't think there was any purpose to do that.
But maybe they knew something about this person. And maybe if we could just say, look, we'll hold you liable for acting despite warning signs that you saw, then people would still be free to talk to each other about various topics. We don't want to be in a position where anybody could be sued and financially ruined for talking about suicide, or for saying, I can understand why people feel the urge to commit suicide, or, I'm feeling suicidal myself.
And the other person says, yes, I am too. And the listener acts on it and the speaker doesn't. So the theory would be that in these kinds of one-on-one interactions, we can maybe hold someone liable based on what they really were aware of or very clearly should have been aware of based on their knowledge of the other person.
And without deterring speech to the public or deterring ordinary conversations among people. And as you point out, especially when there's a professional involved, maybe there's a particular reason to think that. And especially if there seems to be a purpose to promote suicide, there'd be reason to think that.
But whatever the proper scope of that is, I just don't think that it's really applicable to AI conversations. They may be one-on-one in the sense that they're customized, but they're still fundamentally algorithmic, fundamentally something where the program is aimed at speaking to a large set of people. And to say, well, you should have known about this particular danger, I think imposes on AI technology or AI technologists an obligation that I just don't think is technologically feasible.
You can't tell whether the other person really is suicidal or not, or is just playing along, or whether you're saying something quite ambiguous that gets interpreted in a particular way by a particular person. So I'm inclined to say that the standard First Amendment negligence rules, which say there's no negligence liability in these kinds of situations based on the First Amendment, should also apply here.
But I do acknowledge maybe there's a different feel to one-to-one interaction than to somebody putting out a book or a movie or a song. What do you think?
>> Jane Bambauer: Yeah, well, negligence and the First Amendment is an area that courts tend to avoid talking too clearly about. Because it's true, as you said, that broadcasts and publishing and things that are made for audiences are basically immunized through First Amendment analysis.
But there are successful negligence cases that boil down to one person telling another, for example, that there's no car coming, you can go ahead and go. So negligence is obviously not one of the categories of unprotected speech; negligent speech is not categorically unprotected.
And yet, and I wish the courts did this, they don't, but I would say that despite the First Amendment's application, people have been able to win cases, and should be able to, if the circumstances are right, meaning that despite the speech interest, the state interest in protecting others from harm is well founded and tightly aligned to the facts of the case.
Okay, but two thoughts related to what you said. One, I'm glad you raised the question of whether these chatbots are even causing net harm or net benefit, because I think the defendants could plausibly say, once they get to the merits of the case, if that happens, that these harms are unforeseeable, because in fact, on balance, their systems have been trained, as I'm sure they have, to have certain safeguards, and possibly their overall effect is more ameliorative than negative.
The particular facts in this case were described in the complaint quite effectively, because they tried to do a sort of before-and-after comparison of his behavior. So maybe in this particular case there's a good factual causation argument. But more generally, I think what you'd have to say is that the company should have foreseen that this individual, and individuals like him, were likely to actually commit suicide, when they otherwise wouldn't have, because of the output of the bots.
And I'm not sure you could say that ex ante. But I also do think that AI, or maybe more generally data collection, makes this one-to-one versus one-to-many distinction a little bit, not quite moot, but a little more challenging. This gets to some of your writing, Eugene, on privacy versus tort law. But I could imagine someone saying, okay, look, let's use the usual approach of the least-cost avoider.
For the most part, if you're just creating a song and you're distributing it through radio and CDs, you don't know anything about the people who are listening. But in the Internet era and the high tech era, every company offering these services requires people to log in and has a long record of what they've said and done.
They actually do have a lot of data about each individual listener. So even for listeners who are receiving the same message, you might be able to forecast with decent enough probability who is likely to be harmed and who isn't. So could you imagine your library robot?
For example, maybe the library robot should say, I'm sorry, young Eugene, you are not in a good stage of life to be reading Romeo and Juliet at this time. I think both of us would find that pretty disturbing. We wouldn't want to go down that road.
But if AI really gets to that point, I want to reserve, at least for myself, the possibility that if a technology could read facial expressions, could infer vulnerabilities, could learn a lot that could be exploited, or could read a lot that could be channeled into safety mechanisms, then maybe the policy that goes into the rules we currently have might start to look less useful or less sound.
>> Eugene Volokh: Maybe so. And the fact is, while ostensibly the rules of First Amendment law and of other areas of law are by and large technology independent, you know, sometimes technological changes lead judges to reconsider legal rules.
So it's certainly always possible. I do want to identify three important things, though, in what we've been talking about. One is foreseeability, which is an important thing that people talk about. I just want to warn people away from assuming that foreseeability is the key. It is actually a necessary condition for liability under negligence law, but not a sufficient one.
I mean, lots of things may be foreseeable. If I write a book that talks about some kind of misconduct, it could be a crime, or it could be suicide, or it could be whatever else, it may be foreseeable to me that, again, a tiny fraction of my readers might act in an illegal way.
If I write a newspaper article about somebody who's accused of very serious crimes, sex abuse of children, let's say, it may be foreseeable to me that some of the readers are going to try to attack this person criminally, but it doesn't mean that I'm going to be liable simply because of that foreseeability.
There also has to be a judgment that the action was still unreasonable in light of that possibility, and in the First Amendment context, you have to consider the danger that imposing liability will in fact chill valuable speech as well as harmful speech. A second and perhaps related point is, I could certainly imagine technology that makes a not implausible, even reasonable, guess about whether someone is in a particularly fragile state.
But let's think about the consequences of that. Let's say that some AI, and presumably if one has this technology then the others would have to have it as well, concludes that the person is just very traumatized for whatever reason, either traumatized or maybe just subject to some chemical imbalance that leads them to be especially fragile.
And then this person starts surfing the net, and search engines say, we can't give you this bad news because maybe it'll push you over the edge; we can't tell you stories about someone who committed suicide, that is to say, give you links to news stories, because of the possibility that you might be triggered by that into committing suicide.
If we do have a chatbot, the chatbot will treat you with special kid gloves. And you may say, no, no, no, please talk to me like a normal person, because I'm feeling this inauthenticity in our conversation, which is the exact opposite of what I'm trying to get from a chatbot.
And the chatbot says, sorry, because we think you're so fragile, we can't give you the full set of communications that ordinary people are entitled to. That doesn't sit right with me, but maybe I'm wrong. But the last point that you alluded to is maybe the rules should be different for children.
That with regard to TV shows and music and such, because they're distributed to the public, it's impossible to know the age of your viewer or your listener. Maybe you have a general demographic sense, some percentage of your viewers are underage. But again, if you try to restrict that speech because a lot of your viewers are underage, it'll also interfere with speech to adults.
But maybe the rules should be different as to AI, as to social media and such, when there is at least a high degree of confidence, and this is probably something that could often be determined to a high degree of confidence, that the listener or the viewer or the correspondent is a child, or at least is in law a child, a minor.
So that's one of the things that courts have been grappling with in various contexts. There's a case pending before the Supreme Court right now with regard to that, online pornography and age screening. So maybe that's one example.
>> Jane Bambauer: Yeah, and many states are experimenting with age limits on social media.
>> Eugene Volokh: Right, right. And maybe not even based on content, but just on social media use generally: you guys just start using too much of it or otherwise harming yourselves with it. It's an interesting question, which, again, courts are going to need to deal with increasingly. So let me turn to another AI and the law story that's in the news.
And this is the latest instance of the problem of what I call large libel models: AI that, in generating text about people, or audio in this case, hallucinates things that are not just false, but damaging to their reputations. So I want to talk about the Starbuck case.
Starbuck v. Meta Platforms was filed just basically several days ago in Delaware state court. It's the third such case in American courts. And there's also a case pending before a data protection commission in Norway dealing with the same kind of question. So here is the allegation. And by the way, the plaintiff's lawyers are from the Dhillon law firm, and the founder, Harmeet Dhillon, is now in government.
She is now, I think, the head of civil rights enforcement in the Justice Department, the Assistant Attorney General for Civil Rights at the US Department of Justice. She's no longer involved with the firm, but it's a reputable firm that, generally speaking, does its homework.
So their client, Robbie Starbuck, is a conservative commentator and conservative activist. And he says that some time ago he saw a tweet that essentially accuses him of being involved in the January 6 riot at the Capitol, of being linked to the QAnon conspiracy theory, and of other things.
And the tweet quotes Meta AI for that; it says, well, you're a bad guy, I know this because Meta AI told me so. Starbuck sees this, responds promptly to the tweet saying, this is all false, or at least the allegations as to January 6th and QAnon are false, and gets lawyers involved.
The lawyers then send a demand letter to Meta saying, stop this from happening. And Meta apparently tries to and apparently does some things related to that. But then months later, Starbuck says that someone tells him that Meta AI's voice feature, which was then newly introduced, was repeating the same kinds of things about him, or maybe even worse, claiming that he had pled guilty to disorderly conduct on January 6 and had advanced Holocaust denialism and various other things.
So he says, look, we warned Meta, they didn't do enough to stop this, and now they are liable for spreading these defamatory things about me. So it's an interesting question. At this point the lawsuit has just been filed; the court has not decided anything on this, although in one of the earlier cases, in a Georgia trial court, the case is still pending, but the court denied a motion to dismiss that was filed there by OpenAI.
That state court did not release a detailed opinion explaining its rationale, many state trial courts don't, but it looks like the court was at least open to the possibility of liability. So here, note, by the way, that maybe the plaintiff will argue that Meta has no First Amendment rights vis-a-vis its AI output because there's no human being in the loop.
But I'm not sure they will. And I think the strongest argument is that Meta may have First Amendment rights, but First Amendment rights are limited by defamation law, right? A newspaper has First Amendment rights, but if it publishes things that it knows are false, then it is potentially liable.
Likewise with Meta. And by the way, note one important fact that I mentioned, which is he says, my lawyers told Meta about this, Meta acknowledged that it received this information, and then this later version of their software still continued to output this material, in this case in voice rather than, as originally, in text.
That's important because it would actually rise to the level of the actual malice standard. Remember, actual malice is a total misnomer. It is not actual malice in the way that ordinary English speakers use the term; speakers of legalese know that actual malice means that in lawsuits brought by public figures, and Starbuck probably is a public figure because he is a noted activist who's deliberately injected himself into various political debates, there has to be a showing of knowing or reckless falsehood, not malice in the sense of hostility, but knowing or reckless falsehood. Aha, say Starbuck and his lawyers.
We let Meta know. Meta is now on notice. It's not a question of whether their AI knows, because AIs don't really know things; knowing is a property of human beings. But there are human beings at Meta who are aware of this, who are responsible for the deployment of the software, and who could have done something.
The theory would have to be that they could have done something to prevent this from happening, either retraining the model or providing some sort of guardrail processing that post-processes material and checks whether there are known falsehoods in what is being said. And they didn't do this.
That's a very interesting case, and something we'll be watching to see what courts say, because this is potentially, potentially at least, a viable legal theory. By the way, some people ask, what about 47 USC Section 230, which immunizes, for example, Meta in its capacity as a host of material on Facebook? Would their AI be immunized as well? And I think the answer is no. Section 230 provides that no provider or user of an interactive computer service shall be treated as the publisher or speaker of information provided by another information content provider. So if Facebook hosts material that I post that is libelous, not that I ever would, but if it does, then it would be immune from liability.
If somebody sues over that posting, they'd have to sue me, because Facebook is not liable for things that a third party like me posts. But the whole point of generative AI is that the AI generates the output. So the lawsuit against Meta isn't for something that someone else created. It's not even garbage in, garbage out, where the training data had these allegations and the AI was too credulous in accepting them.
Starbuck's claim is that there was no training data, no source data, that made any of these allegations. You folks, your software, just made it up. And if your software makes things up, then you can't claim Section 230 immunity, because you're being sued for information provided by you and the programs that you wrote, not by a third party.
>> Jane Bambauer: So that's absolutely right in terms of the statutory language of Section 230. But if we believe that the current interpretation of Section 230 is sound as a matter of policy, it looks to me like we have a little bit of a problem here. Because even if a company has knowledge, receives notice, lots of notice, probably from lots of people, that the output its AI chatbots are producing is defamatory, it's not clear that there's going to be a good technological solution for reducing the error to some minimal amount, let alone to zero.
And then also, I guess I worry about the problem of parsing through notices that are received. So if a chatbot says something absolutely accurate about me, but also humiliating, and I complain to Google or OpenAI or whatnot, do they then have to do an independent review and figure out whether they need to go through all of these steps?
One reason that Section 230 was interpreted as strongly as it was is that the legislators and the courts feared that if they didn't make it so strong and airtight, then the companies, as a matter of course and kind of a matter of inertia, would just delete all the content that people complain about.
>> Eugene Volokh: Right, right. And that would be an undue chilling effect, where they would take down even accurate information and not just false information. If we step back from that, that is a problem that the legal system has had to deal with for a long time, especially with regard to libel law.
So recall New York Times v. Sullivan, which established this actual malice test. That was just an opinion by six of the nine justices. The remaining three would have gone further. They would have completely precluded libel liability, or defamation liability more broadly, slander liability and such, for any statements on matters of public concern, or maybe just about public officials.
But the logic was, really, for any statements on matters of public concern; that would have been the view of Justices Black, Douglas and Goldberg. But the majority opinion by the very liberal Justice Brennan didn't go that far. The concurrences, those three justices, said, look, even if you impose extra burdens on the plaintiff to show that the defendant was saying something that was knowingly or recklessly false, there'll still be a chilling effect, because people will worry that maybe a hostile jury will decide against them and the like.
But the majority essentially concluded that we should try to reduce the chilling effect. But if completely eliminating the chilling effect on true speech means completely eliminating the possibility of remedies for false speech, completely precluding it even when it's knowingly false, well, that's going too far. Now, Section 230 was a statutory decision, and it did completely eliminate liability.
But it completely eliminated liability only for intermediaries like social media platforms. It still left open imposing liability on the actual speaker, the person who actually created the libel. And, you know, maybe that person doesn't have money, maybe they're hard to find, maybe dead at this point, or whatever else.
But still, Congress didn't completely preclude liability. So one question is, should Congress essentially say, because it's so important to diminish the chilling effect on AI companies, we should completely preclude liability, say that essentially if somebody is being libeled online by AI output, they have no remedy against anybody, not against the AI company nor against anyone else, because the AI company is the author.
>> Jane Bambauer: No, no, no, but I disagree there. There could be another party here, which is that Robbie Starbuck found out about this because somebody received that chatbot output and then posted it as if it were true, right? So there is still republishing liability, or at least liability for irresponsibly taking at face value the output of AI speech, that may make sense and may provide a little bit, not the full amount, of the sort of escape valve that you just talked about with Section 230, but at least a little bit of it.
>> Eugene Volokh: Fair enough. But remember, of course, well, you do remember, but the public, the listeners, should remember that irresponsible speech that is defamatory is generally protected by New York Times v. Sullivan and its follow-up cases.
It seems very likely that the person who forwarded this Meta AI output believed it. They may have been unreasonable in so believing, may have been foolish and irresponsible in passing it along without checking it. But so long as they sincerely believed it, and I think unfortunately many people do believe that AI output is accurate, they'd be completely off the hook.
So you couldn't sue them either. What's more, one of the things that people worry about is not just the forwarding of this, but just people acting on defamatory output. So Starbuck might say this one guy forwarded this and at least I had a chance to respond. But a lot of other people will run the same query and then stop doing business with me.
They stop hiring me for various jobs, if I'm somebody who wants to be an employee, stop engaging in various contracts with me, stop paying attention to what I have to say. I won't even know about it, because they won't tell me that they're doing this. They won't tell the world that they're doing this.
But my reputation would have been damaged, and I would suffer loss, because of these things that people believe the AI software told them. Defamation law is supposed to protect people against that kind of harm. And I doubt that Congress will, or probably should, completely preclude that sort of liability when the defamation is produced by AI software.
So, all right, those are two different AI and the law topics, both in the news now, and representative, I think, of a broader trend that I'm sure we'll be revisiting again. Now, turning from the cutting edge to something that literally refers to things that are about 100 years old.
Let's turn to our mailbag. We ask people to send us questions at hoover.org/askfsu, that's hoover.org/askfsu, FSU not as in Florida State University, presumably one of your new arch rivals, isn't that right, Jane?
>> Jane Bambauer: Yes, I am supposed to hate FSU, right?
>> Eugene Volokh: But this is a different FSU. This is, of course, free speech unmuted.
So again, hoover.org/askfsu, you can submit a question, and we'll turn today to one such question that has been submitted. Anders from Northern California asked the following: in February, a Face the Nation host claimed that free speech in pre-World War II Germany was weaponized to conduct a genocide.
And later that day, 60 Minutes aired a segment that, at least as the reader interprets it, seemingly implied that the Weimar government could have prevented the Holocaust if it had imposed such a censorship regime in the 1920s and early 1930s. Is that historically accurate?
And I'm not a historian of Weimar Germany or of free speech, but Jacob Mchangama most certainly is. We've had him on the show. He has long been writing about free speech law and advocating for free speech. He's a Dane; he did that at a think tank in Denmark. He is now a research professor at Vanderbilt University, where he runs the Future of Free Speech initiative.
And he wrote a book I highly, highly recommend, both quite enlightening and very readable, called Free Speech: A History From Socrates to Social Media. So he does a lot of history of free speech, including history of free speech all over the world; he has studied free speech in Europe, free speech in South Africa, and various other places as well.
So here's what he reports, and we'll include a link to this in the show notes. He says, and I'm going to be largely quoting from him here, that the Weimar Republic was in fact not a free speech haven for extremists. Instead, it cracked down on speech through emergency decrees.
It banned Nazi newspapers and even outlawed Hitler himself from speaking publicly in many German states. Far from stopping the Nazis, he reports, these censorship measures played into Nazi propaganda, allowing them to paint themselves as persecuted truth tellers. And worse, the very free speech restrictions that were meant to protect democracy ended up being used to destroy it once the Nazis came into power.
And so what he's reporting is that there was an attempt in Weimar Germany to try to suppress Nazi propaganda. Maybe it even backfired; at the very least, we know from history that it failed. So there's no reason to think that if Weimar had done more, it would have succeeded.
And there's no reason to think that similar restrictions today would do much good in trying to prevent violent revolution, genocide, and various other such things. Now, of course, it's possible there are some other movements out there, very bad movements, that somehow are prevented either by those restrictions or by other restrictions, whether in Europe or elsewhere, who knows? But the Nazis grew up in an environment where, in fact, supposedly extremist speech, including specifically targeted Nazi speech, was outlawed. There was action to suppress it, but it does not seem to have worked. In fact, one thing that I do happen to know about that era, and pretty much everybody knows, is that part of the problem was that the Weimar regime wasn't doing enough to suppress the violence.
It did indeed target the speech, but there were the Brownshirts, that's the classic example, and others who were actual thugs, who were, even before the Nazis took over, violently acting against their enemies. My view is that democracies should do a lot to try to suppress that kind of violence, whether from Nazis or communists or anybody else.
But I don't think the Weimar experience tells us anything about, or excuse me, I don't think the Weimar experience suggests that restricting speech other than violence is going to help. And in fact, it might suggest the opposite.
>> Jane Bambauer: Yeah, there are studies on social media, given that different social media companies tried different things with content moderation.
There are multiple studies at this point showing that there is indeed a backfire effect, or at least that a lot of the time there is a backfire effect: by stamping speech out, it sends a signal of, this is secret knowledge, this is dangerous, these are ideas that important people don't want you to know, that sort of thing.
And I also think, even if it were true that there is some way that a wise leader could impose just the right type of speech suppression to prevent a holocaust or a charismatic demagogue taking power, I would still be reluctant to use those methods. I guess for me, and I'm curious if this is true for you, Eugene, as well.
I guess I'm more committed to the idea of free speech than to almost anything else, so that I'm willing to bet the entire house on it. And it reminds me of that famous Oliver Wendell Holmes line in a letter he wrote to Harold Laski, where he said, if my fellow citizens want to go to hell, I will help them.
It's my job.
>> Eugene Volokh: Right. Well, so there is this famous line, let justice be done though the heavens fall. I think it starts fiat iustitia, and fiat iustitia is let justice be done, but I forget the second part, so let's just say, though the heavens fall. And my view is, let justice be done, but maybe not though the heavens fall.
So I think I'm a pragmatist as well as being committed, as a kind of deontological matter, as a moral matter, to certain kinds of rights. And I don't know what I would say if it turned out that there really was a particular kind of speech that we thought, as a practical matter, could be restricted and prevent tremendous harm as a result, without the risk of causing tremendous harm
because those restrictions would lead to other restrictions. I do think we actually say that with regard to some non-viewpoint-based restrictions; I think most of us accept that it's okay to ban death threats, it's okay to ban child pornography. But what if there were a ban on a particular viewpoint?
What would I say then? I like to think I'd say no. I so strongly believe in free speech that I'm willing to tolerate this viewpoint even though I do think it's going to lead to a lot of harm. And I do think restricting it would avoid a lot of harm.
Maybe, maybe not. Fortunately, I don't think I have to come to that really difficult decision. Because giving the government the power to suppress bad viewpoints means giving the government the power to suppress viewpoints that it thinks are bad. And almost invariably at various times the government will then be suppressing viewpoints that are not bad and may in fact be good.
And what makes it think that they're bad is that they're against the government, that they're against the interests or the beliefs of whoever happens to be in power. So I think that as to many rights, and free speech rights are among them, and maybe it's just wishful thinking on my part, practicality and liberty walk hand in hand, generally speaking, at least when it comes to restrictions on supposedly harmful viewpoints. Practically speaking, giving the government this power is quite bad, as well as bad from a perspective of liberty.
But at the same time, with new technology, maybe you might imagine some situation where really the heavens would fall, and then you'd ask yourself, how committed are you to this? Again, thankfully, I've never had to really make that decision.
>> Jane Bambauer: All right.
>> Eugene Volokh: All right. Well, Jane, always a great pleasure, and I look forward to our next episode in probably a couple of weeks.
And I look forward to more questions from the audience. Again, hoover.org/askfsu, until then, it's Jane and Eugene at Free Speech Unmuted.
>> Presenter: This podcast is a production of the Hoover Institution, where we generate and promote ideas advancing freedom. For more information about our work, to hear more of our podcasts or view our video content, please visit hoover.org.