Is AI output generally protected by the First Amendment, even though AIs have no self to express (or so we think ...)? Can people sue if they are libeled by AIs, or if AIs give them false information that leads to physical harm? Jane and Eugene discuss this, and more.

>> Eugene Volokh: Welcome to Free Speech Unmuted, a Hoover Institution podcast on free speech. I'm Eugene Volokh, I'm a law professor at UCLA Law School. And I'm about to become a senior fellow at the Hoover Institution.

>> Jane Bambauer: And I'm Jane Bambauer, Brechner Eminent Scholar and professor of law at the University of Florida.

And Eugene, I see you're wearing a T-shirt today.

>> Eugene Volokh: That's right, it's my Robot Law T-shirt.

>> Jane Bambauer: Well, how appropriate?

>> Eugene Volokh: Yeah, that's where, in order to learn the three laws of robotics, robots have to go to law school. And the first year is the first law of robotics, second year, the second, third year, the third.

Except robots are smarter than us, so they figure it out more quickly.

>> Jane Bambauer: Yeah, so maybe we can shave it down to a few semesters. Well, but the first rule, do no harm, wait, what is the first rule again? Don't harm a human or something? So how's it doing on that score now that our robot overlords actually exist?

 

>> Eugene Volokh: Yeah, well, I'm thinking that so far, descriptively, AI hasn't done much harm. But it's early days yet, and I think a good bet is that any tool that humans create will always be used for harmful purposes. Partly, I suppose, because of the intentions of the humans, partly because of accidents, but it's pretty inevitable.

And right now, of course, what AI does is it mostly speaks, mostly communicates, mostly outputs images, mostly outputs text. Already people are plugging it into other processes. So, of course, the harms once AI can, for example, operate a self-driving car, let's say, just to pick a purely hypothetical science fiction scenario, those are gonna be governed by various other areas of the law.

But this is a podcast on free speech. So our question is, what do we do when AI causes harm by speaking?

>> David Bowman: Open the pod bay doors, HAL.

>> HAL 9000: I'm sorry, Dave, I'm afraid I can't do that.

>> David Bowman: What's the problem?

>> Eugene Volokh: That's what we're gonna deal with.

And the first question that I think we need to ask is, does the free speech clause apply at all to the output of AI? Now, even if it doesn't, there's still the question of what kinds of restrictions we might want to place on the output of AI. Maybe you could say there's no constitutional protection for AI output, but it's still a bad idea to overregulate it.

But if there is constitutional protection for AI output, then at least the purely speaking AIs are going to be free. What does it mean for an AI to be free? So that's, in a sense, the first question that we need to deal with. Jane, what do you think about that?

 

>> Jane Bambauer: Yeah, well, I find it kind of interesting that outside of free speech circles, at least, when I raise this issue, people tend to think that the First Amendment either obviously applies or obviously doesn't apply. So, maybe that shows that there's actually an interesting coverage question or a potentially interesting coverage question, although I think you and I largely agree on how it should come out here.

But the argument that maybe the First Amendment shouldn't apply is that free speech is all about speakers. And AIs, being computers that more or less autonomously generate output that we might call speech, still wouldn't be protected the way human speakers would be. I think that's more or less the argument that maybe it shouldn't apply. I don't buy that, for a number of reasons.

First of all, just purely descriptively, the Supreme Court, in numerous precedents, has found that there are First Amendment interests that rest not only on speakers, but on listener interests as well, the right to sort of receive ideas and freedom of thought. And so one way of looking at this is that the AI-generated output needs to be protected because it is communicative to people who are receiving it.

Another way of looking at this, which I prefer and which Justice Kagan wrote about back when she was a law professor, is to look at what the government's actually doing and why. So if the nature of the harm that the government is trying to reduce is a speech harm, meaning the problem is that an AI bot, say, might produce some speech that then causes the listener to either suffer emotional distress or go do something unwise.

Or in any case that causes a harm that has to flow through some sort of communicative process, then I'd say, okay, the First Amendment is present here, and so we should go ahead and do an analysis. How do you think about these things?

>> Eugene Volokh: Well, oddly enough, I agree with you entirely.

So I do think it's an interesting question. If you think of free speech as primarily about self-expression, there's a question whether there is self-expression going on here. It might not be the expression of the AI, at least unless we do conclude at some point that an AI is self-aware enough to be a person.

But let's bracket that for now. There may be self-expression by the AI company, which may have designed the AI program or trained it to output certain kinds of content. In fact, a lot of recent controversies have stemmed from evidence that AI companies are biasing outputs in various ways and producing things that fit a particular ideological agenda.

Well, that may be bad customer relations, but it may suggest that they are protected by the First Amendment. But even setting that aside, I totally agree that there are rights of listeners involved here. So just to give an example, imagine the government says, we're just really worried about communist propaganda, it's coming back.

So we're going to ban AIs in America from outputting communist propaganda, cuz after all, they have no rights, so who cares? Well, that would interfere with the rights of users if they want to seek communist propaganda or if they want to perhaps produce communist propaganda of their own.

And I mentioned communist propaganda because there's actually a case on the rights of listeners in communist propaganda, Lamont v Postmaster General. It's a 1965 case, as it happens, it's the first time the Supreme Court ever struck down a federal statute on First Amendment grounds. It took until 1965 to do that.

And that statute provided that communist propaganda sent from foreign countries, generally speaking, in part from foreign governments, could not be delivered to Americans unless the American recipient specifically says, I'd like to see it. And of course, a lot of people in the 50s and 60s wouldn't want to be on the list of people who said, I'd like to see communist propaganda.

Now, the Supreme Court didn't say, well, foreign senders, much less foreign governments, have First Amendment rights. It reserved that question, and that question is still not fully resolved, but it said-

>> Jane Bambauer: Hello TikTok ban.

>> Eugene Volokh: There you go. There you go. But it did say that this interferes with the rights of Americans as recipients, as listeners or readers.

And I think the same thing would apply here. What's more, AI is a tool for creating speech, kind of like a camera. And just like many courts have said that bans on photographing or video recording things in public places are unconstitutional, especially when people, for example, are recording police officers and the like.

So I think restricting AIs as creators, as things that people can use to create their own speech, would implicate and usually violate the First Amendment. And I just want to close by highlighting that I also entirely agree with you and with Justice Kagan in this respect: if the government is actually deliberately trying to block certain kinds of speech precisely because of its content,

that itself is strong evidence that there's a First Amendment violation here, whether we're thinking of the rights of speakers or the rights of listeners. So if the government says AI can't output racist speech or antigovernment speech or pro- or anti-Israel speech or whatever else, that certainly sounds like a government attempt to suppress a particular viewpoint in the marketplace of ideas.

 

>> Jane Bambauer: Yeah, okay, so two quick reactions. One is that you're right that things like cameras have been recognized as protected because they're sort of critical tools for creating speech. But I've been surprised, I mean, despite the fact that I've been writing about it for over a decade now, I'm still surprised at how kind of uneven the protection of things like the right to record still is.

And so I think that's one reason that there may be some wiggle room in the AI context too; even if courts don't come out with a totally different analysis from what we're suggesting, they may at least find some alternative route, given this new technology. I hope not, but it's surprising that a lot of times, even when the courts are protecting, say, video recording in public, they make their rules somewhat narrow.

They talk about how it's important to anticipate a broad audience. They sometimes emphasize the fact that the tool is being used to produce political speech, as opposed to just any other type of information. But another backdrop to all of this, I think, is the evolution on commercial speech, where there, too, the Supreme Court, in recognizing that things like advertising are constitutionally protected,

did so because of listener interests, not solely because the corporation has an interest as an autonomous speaker. So I think there's a lot there in terms of listener rights. Okay, but here's something that will help us get to the second topic I know we want to talk about.

One kind of practical argument that maybe AI should be treated differently goes as follows. Well, it's true that listeners, especially open minded listeners, deserve to have access to all manner of ideas and information. But in fact, we have kind of implicitly relied on the idea that humans won't necessarily produce bad information at the same rate that maybe a machine might.

So although it's true that people defame each other, maybe they do so in ways that are more predictable or less often or something like that than a machine would. And so having a kind of human requirement may be doing something very pragmatic. And if we abandon the idea that there needs to be a human speaker, perhaps that puts strain on the listener interests.

What do you make of that? Or maybe more specifically, what do you think courts are going to do when ChatGPT and Google's Bard, it's not called Bard anymore, what's it called again, Gemini?

>> Eugene Volokh: Gemini, I think.

>> Jane Bambauer: Gemini starts producing libelous or otherwise illegal information at great rates.

 

>> Eugene Volokh: So that's a great question. There are already at least two cases on AI and libel, what I call large libel models cases, being litigated in US courts. One is in federal district court in Maryland and another in state trial court in Georgia. In the Georgia one, which was a case against OpenAI, the judge actually denied OpenAI's motion to dismiss.

And it doesn't mean that the plaintiff has won yet, by any means, but-

>> Jane Bambauer: But here we go, it's being litigated.

>> Eugene Volokh: Right, it's being litigated; the judge seemed to think that there's at least some plausible legal basis for this kind of claim. So one thing to keep in mind is that even if AI output is protected by the First Amendment, it's probably protected no more than human output.

And there are limits on the First Amendment, libel law is one of those limits, right? So if we say that AI output is protected, well, we really mean it's presumptively protected. And there are some exceptions and one of the exceptions is for defamation. So, as we know, of course, social media platforms and some other online companies do get extra protection, not because of the First Amendment, but because of Section 230, part of the old Communications Decency Act.

And that provides that online speakers essentially are not responsible, I oversimplify here, but basically are not responsible legally for speech posted by other people. So you can't sue, or you can't successfully sue, Facebook for libelous material or other harmful material that's posted by a Facebook user. Even though Facebook provides the hosting, even though Facebook might amplify it, you can't, generally speaking, sue.

But that protection, I think, is not going to apply to generative AI companies, precisely because their programs are generative. When somebody is suing OpenAI for libel, that person is suing because of libel that's output by OpenAI's programs themselves. So the claim in that case, Walters v OpenAI, is that OpenAI just made up this stuff itself using its algorithm, for which OpenAI is responsible; that was OpenAI's ChatGPT product.

And in that situation, Section 230 doesn't apply because OpenAI is being sued for its own output, the output of its own programs, and not material supplied by a third party. So then the question is, how do you apply libel law rules to AI output? So, for example, as we may know, one really important element of libel law rules under the First Amendment is the so-called actual malice requirement.

Of course, actual malice doesn't actually mean malice; this is the lawyer's habit of using English words to mean something completely different, at least sometimes. So actual malice means knowing or reckless falsehood. So even a public official or public figure can recover in a libel case if he shows that the defendant knew the statements were false or knew they were likely false and didn't investigate further.

What does it mean to ask whether a computer program knows that a statement is false? Now, if it's a private figure suing, then, and again I oversimplify here, at least in some situations that private figure can prevail on a showing of negligence.

Well, what does it mean to say a program is negligent? It didn't act reasonably. Well, what's the standard? Didn't act like a reasonable computer program? Well, it turns out, I think, that these standards, even though at first we might think they're very strange when applied to AI programs, actually are applicable.

So one question might be, was OpenAI alerted that its program was outputting certain kinds of libelous material, not just in general, but specifically was outputting particular libelous statements? In the Walters case, the claim is that, in response to a request to analyze the complaint in some case, ChatGPT just made up allegations of embezzlement against the plaintiff, allegations that the complaint in that case said nothing whatsoever about.

So if OpenAI had been warned about this and didn't implement some sort of mechanism to block the continued output of that, which is in fact part of the allegations in the other case, the one in federal district court in Maryland, then I do think it sounds like knowing or reckless falsehood: knowledge not on the part of the program,

but on the part of the company, of the employees of the company. They were alerted that this output was being produced and they didn't do anything about it. Likewise, negligence would be careless design on the part of OpenAI. So if, for example, there are particular design decisions that foreseeably would lead to a good deal of libel and that OpenAI could easily have avoided,

then there could be liability, and that's kind of analogous to negligent product design, but adapted to the libel context. So I do think that these libel lawsuits might prevail. But there's another question that you asked: given the possible scale of the libel here, should there be more demanding standards imposed on OpenAI?

Maybe. I hesitate to say no, no, no, never, but I'm not sure the case has been proven so far. Among other things, much AI libel is in response to queries, generally speaking. So in both of these cases, the claim was that when somebody does a search, or not a search, excuse me,

provides a query to ChatGPT, or does a search in Bing, they get this kind of material. Well, that could be quite damaging to people, but it's usually output one at a time, one libelous statement at a time. And not always even completely predictably so; sometimes it might output something else, because there's random variation in AI output.

Whereas, for example, if the New York Times outputs something, that's output simultaneously to hundreds of thousands or millions of people. So on balance, it may not be the case that AI is more damaging by way of libeling people. Maybe it's just differently damaging, or similarly yet differently damaging.

 

>> Jane Bambauer: Yeah, I think I agree with that. And I am actually maybe more worried about the impact of liability than you are, notwithstanding my comment that it might happen at a greater scale or in a different way, as you put it, than ever before. I worry that, precisely because there's one concentrated choke point,

it may be irresistible for lawmakers to start imposing very high expectations of accuracy or error avoidance in a way that is not actually very reasonable. I mean, even take the example of a direct alert that ChatGPT has said something wrong about a specific person or in a specific way.

I'm not sure we want a legal system that incentivizes the company to set up bespoke guardrails where that sort of system could be exploited, so that people can falsely clean up their own records. Nor would we want it to overreact, so that other queries that would normally get pretty good information are kind of damaged.

And so I hope, if we do go down this route toward assessing software or AI models under something like a product liability rule, that we take really seriously the kinds of constraints that the courts have created over time within that domain, where the plaintiff needs to show there really was an alternative design that not only reduces the harm that the plaintiff is complaining about, but also doesn't exacerbate other problems, right?

 

>> Eugene Volokh: So I think that's absolutely right. Let's just step back a bit and look back 60 years ago now to New York Times v Sullivan, long before AI, but it dealt with some pretty similar issues, right? So the claim in New York Times v Sullivan was, look, libel law has too much of a chilling effect.

Libel law is supposed to, I oversimplify here, but basically, supposed to only punish false statements that damage people's reputation. But it also may deter people from saying true things because they're not sure whether it's true or false, or they do think it's true. Maybe they're confident it's true, but they're worried that a judge or a jury, or in criminal libel cases, a prosecutor is going to conclude that they're actually false.

So those are very serious risks. And six justices in New York Times v Sullivan said, well, because of that, we're gonna set up, for libel lawsuits by public officials, this so-called actual malice standard, again the knowing or reckless falsehood standard, which will diminish the chilling effect while still preserving libel law.

Now, three justices concurred in the judgment; they agreed that the libel decision needed to be vacated in that particular case, but they would have gone much further. They would have said that for lawsuits brought at least by public officials on matters of public concern, there should be a categorical prohibition on such libel lawsuits,

because even the actual malice, knowing or reckless falsehood, standard would still have too much of a chilling effect.

>> Jane Bambauer: Even at the time, there was an interest among some of the justices to create an immunity that would have looked kind of like Section 230 within First Amendment law for certain projects.

 

>> Eugene Volokh: Right, that's right. And maybe they were right, but the bottom line is they didn't prevail. The point is that you could completely eliminate the chilling effect of libel law on true, valuable speech by completely eliminating libel law. Or you could do a lot to protect reputation by returning libel law to a much more pro-plaintiff standard.

But then you'd have a lot more of a chilling effect on true speech and opinion and the like. So the law has come up with this sort of compromise solution, and I'm inclined to say it'll probably follow that here. But you may be right that people being too worried about AI, or maybe rightly being very worried about AI, might shift things to a more pro-plaintiff perspective.

Or they might be worried about the chilling effect and shift to a pro-defendant perspective. Now, Jane, I wanna ask you about something that I know you've been thinking about a lot and writing about. What about other harms, not just to reputation, but to life and limb? What about AI outputting information that's mistaken in a way that could cause people to accidentally injure themselves, maybe eat the wrong kind of mushroom?

 

>> Jane Bambauer: Well, yeah, so there has, in fact, been a case where AI-generated content suggested that a mushroom is nontoxic when it is actually harmful to ingest. And Eugene, I think you and I are tickled by this, because the facts match a famous case, Winters versus, I'm forgetting the defendant's name.

Do you remember the publisher's name?

>> Eugene Volokh: I think, Putnam.

>> Jane Bambauer: Putnam, where a mushroom encyclopedia wrongly listed a mushroom as nontoxic when in fact it was toxic, and a reader of the encyclopedia was harmed and sued the publishing company. And the courts basically said that under First Amendment principles, if not, strictly speaking, First Amendment law, the normal routes to negligence liability or product liability just don't apply to publishers.

And the rationale, I mean, I think it's time to think through the reasoning of that case and whether, even if it made sense at the time, it still makes sense today. But I think what was happening was that even though we could imagine a range of cases where we would allow someone to sue somebody else for saying something that was wrong and that totally, foreseeably led them to injure themselves.

We wouldn't want to impose that kind of risk on publishers, maybe not even on authors, because there's just too much potential liability out there on those terms. And so Winters makes me think that actually it's possible that, if we didn't have Section 230 and we had to think about what to do with the problem of large Internet companies or large technology companies that produce lots and lots of information all at once,

it may be that a compromise position is just too hard to work out, and that something that's much closer to an immunity may be the more rational response. On the other hand, much like your analysis of libel, I think it's also possible that what we'll see is that AI companies will face some liability risk for negligently saying or suggesting something, giving some advice, that foreseeably leads some people to either harm themselves or harm third parties.

And that would fit pretty nicely under sort of well-established expectations of duty, meaning that when courts, even putting aside the First Amendment, have to decide who is going to be held responsible for harming other people, they won't be willing to trace too far back up the chain of causation.

But they will ask, okay, well, when you took some action, even if it was verbal, even if it was speech, could you have known that someone who was paying attention, and maybe in a context where you really knew that they were likely to act on this piece of information, would hurt themselves?

And I can imagine a rule over time getting crafted for AI companies, even though courts have not wanted to impose such a rule on publishers or rap music producers who might inspire copycat crimes. Or any other number of speakers and publishers whose speech may have, in fact, inspired some harm.

 

>> Eugene Volokh: So, yeah, I think there's a lot of possibilities out there. We don't know quite how courts will jump on this. I do want to highlight one important possible distinction here. So, when we started talking about libel, we were talking about false statements of fact. And the Supreme Court has said false statements of fact generally lack constitutional value.

Sometimes they need to be protected to avoid a chilling effect. But still, on balance, they can be restricted in various ways. And I think the same thing is true if an AI program outputs something saying, this mushroom is safe to eat. Or to make it even more factual, all the medical authorities say this mushroom is safe, even though all of them say the exact opposite.

Now, there's a different kind of route to possible harm, which is expression of opinions: you ought to be doing certain things, go ahead and use cocaine, we know that it's dangerous, but it's so much fun. Or possibly, again, as you point out, copycat crimes, like a movie that's produced by AI glamorizing a certain kind of behavior in the minds of a small fraction of the viewers, and as a result, they go out there and commit crimes.

There, it seems to me, the First Amendment argument for protection is a lot stronger. There we're talking more about the territory of the incitement exception, which is deliberately very narrow. The Court has said that advocacy of illegal conduct can be punished, but only if it's intended to and likely to promote imminent illegal conduct.

And other than that, the possibility that something may inadvertently, or in some long-term scenario, cause some people to act badly, not because of a falsehood, but because it suggests that some behavior is good or glamorous or pleasant or whatever else, restricting that is presumptively unconstitutional, at least when ordinary humans produce the speech, and I think that will probably apply as well to AI.

So those are the two most important and most high-profile topics, or maybe not the most important, but the most high-profile ones. There are three others we just wanted to flag briefly, maybe for a future episode. So one has to do with this question: what about the power that AI companies have to mold public opinion?

Imagine that people start using, as we're already seeing them do, start using AI programs in order to answer various questions instead of search engines, and they end up trusting the output of those programs. And then they end up maybe voting based on the output of those programs or deciding on various kind of political topics more broadly based on the output of those programs.

And imagine there are only two or three of them out there. Should we be worried about it now? It used to be that this was a big concern back when there were these three major broadcasting networks that had tremendous influence, potentially. There were various regulatory schemes that were aimed at keeping individual companies from getting too much influence.

They were in many ways, I think misguided, in some ways perhaps counterproductive. In general, there's always only so much they could do, in part because for business reasons often there was only one newspaper, for example, in any one particular town. So at least on local issues there was really only one voice.

But there were attempts to try to prevent that, to try to maintain second newspapers and limit the amount of influence any particular company could have. So I think one interesting question is, will and should Congress be concerned about that? If it does look like people are getting all their information from, let's say, Google or from Microsoft and OpenAI, is that bad for democracy in certain ways?

And what, if anything, can Congress do about that? So that's one issue. A second issue is the copyright lawsuits against AI, and I wanted to ask you what you think about those. And a third issue is, should we be worried about AI getting too good at manipulating us, too good at kind of emotionally moving us in particular dimensions?

Obviously, good writers and good filmmakers have always been good at manipulating us.

>> Jane Bambauer: That's right.

>> Eugene Volokh: That's how we tell whether a filmmaker is good: by whether he or she can manipulate us effectively. But should something be done about that? So I just wanted to ask you to close by speaking to those two questions, the copyright question and the manipulation question.

 

>> Jane Bambauer: Yeah, so I think both of them probably deserve more time in future episodes, so hopefully we'll get to them. The copyright question is hard because I think there is some threat to the incentive to be creative in the first place if you know that anything you create can be copied by someone saying, hey, ChatGPT, write me an article in the style of Eugene Volokh on X topic, or something like that.

However, I'm less sympathetic to the copyright claims related to having copyrighted material used in the training data set in the first place. So maybe in the future we can sort of parse through whether there's some good claims and some bad ones, and what copyright law will have to do, if anything, to navigate this.

On the AI manipulation, for the most part I'm skeptical for the reasons that you suggest, that good writing has always been somewhat manipulative, or a good idea also often operates both on an emotional and an intellectual level. On the other hand, when I think about why drugs, especially recreational drugs, are allowed to be regulated, it's only partly because they are ingested and have some direct physiological or physiognomic interaction with our bodies.

It's also partly because our minds are in some sense taken over. If there's some slippage between AI and some effect that is that strong, maybe it will become an interesting question. I'm not sure we're there yet, but in any case, we'll have to save a lot of this for future discussion, especially because I think I disagree with you on the political question, whether there's gonna be too much power concentrated in these companies.

So let's definitely talk again.

>> Eugene Volokh: So these are great questions, and they give me an opportunity to provide an epigraph for the end of this episode. Can epigraphs go at the end? Well, why not? So this is from Rudyard Kipling, in a speech he gave in 1923, he says, I am by calling a dealer in words.

And words are, of course, the most powerful drug used by mankind.

>> Jane Bambauer: Perfect.

>> Eugene Volokh: Maybe by AI kind, they'll be even more powerful. And what should we say about that? In any case, great pleasure, as always, talking to you about these fascinating questions, Jane.

>> Jane Bambauer: Yep, and we'll see you all in the next episode.

 


ABOUT THE SPEAKERS

Eugene Volokh is a visiting fellow (soon to be senior fellow) at the Hoover Institution. For thirty years, he has been a professor at the University of California – Los Angeles School of Law, where he has taught First Amendment law, copyright law, criminal law, tort law, and firearms regulation policy. Volokh is the author of the textbooks The First Amendment and Related Statutes (7th ed., 2020) and Academic Legal Writing (5th ed., 2016), as well as more than one hundred law review articles. He is the founder and coauthor of The Volokh Conspiracy, a leading legal blog. Before coming to UCLA, Volokh clerked for Justice Sandra Day O’Connor on the US Supreme Court.

Jane Bambauer is the Brechner Eminent Scholar at the University of Florida's Levin College of Law and the College of Journalism and Communications. She teaches Torts, First Amendment, Media Law, Criminal Procedure, and Privacy Law. Bambauer’s research assesses the social costs and benefits of Big Data, AI, and predictive algorithms. Her work analyzes how the regulation of these new information technologies will affect free speech, privacy, law enforcement, health and safety, competitive markets, and government accountability. Bambauer’s research has been featured in over 20 scholarly publications, including the Stanford Law Review, the Michigan Law Review, the California Law Review, and the Journal of Empirical Legal Studies.

ABOUT THE SERIES

Hoover Institution Senior Fellow Eugene Volokh is the co-founder of The Volokh Conspiracy and one of the country's foremost experts on the First Amendment and the legal issues surrounding free speech. Jane Bambauer is a distinguished professor of law and journalism at the University of Florida. On Free Speech Unmuted, Volokh and Bambauer unpack and analyze the current issues and controversies concerning the First Amendment, censorship, the press, social media, and the proverbial town square. They explain in plain English the often confusing legalese around these issues, and they explain how the courts and government agencies interpret the Constitution and how new laws being written, passed, and decided will affect Americans' everyday lives.
