

Section 230 and How the Supreme Court Could Upend the Internet

On Tuesday (February 21), the U.S. Supreme Court will hear arguments in a case that could have enormous ramifications for the internet. Under consideration is Section 230, which has shielded online platforms from liability for the content that their users post.

Section 230 is under attack by both conservatives and liberals. BRINK asked Professor Mark MacCarthy, an expert on the law, for details of the case that the Supreme Court will examine.

MACCARTHY: It is called the Gonzalez case. The story is that relatives of people who were killed in a terrorist attack are claiming that the terrorists were encouraged to commit the attack by videos they saw on YouTube. The Anti-Terrorism Act says that providing material support for terrorism is actionable under that law. So the question is whether YouTube, because it provided access to that video, should be held liable under the anti-terrorism law. That’s the issue.

Is YouTube Liable?

Section 230 is a provision of law that’s been on the books since 1996 and says roughly that online companies, which include social media companies and video platforms, are not liable for the material that’s provided by their users. For years, that’s been interpreted pretty broadly, allowing social media companies and other online services to escape liability in a broad range of cases brought against them. In this case, the lower court said that Section 230 prevented the action from going forward under the Anti-Terrorism Act.

What makes this interesting is that the actual allegation is not so much that YouTube passively allowed the video to appear on its system, but that YouTube affirmatively promoted it and recommended it to its users. In effect, it was saying to its users, “Hey, I think you would like viewing this. It fits your interests and preferences and tastes. Take a look. It’s the kind of thing that I think you would find to be interesting and engaging.”

BRINK: This was by an automatic algorithm, I presume.

MACCARTHY: Right. What the court case involves is an allegation that the algorithm does have content, in effect a message from YouTube to the user that says, “You would like this.” Obviously, YouTube didn’t write an email or send a message to the user saying that, but that’s what the algorithm effectively does.

So the allegation is that YouTube crossed the line by not merely allowing the material to be posted, but by affirmatively recommending it to its users. That creates a very interesting legal decision for the court to make.

The Difference Between Posting and Recommending

Now, the fact that the Supreme Court took this case, as opposed to allowing the lower court decision to stand, suggests that they might want to do something that’s different from what the lower court said. That has raised the possibility in the minds of many people that the court is about to draw a distinction between merely allowing something to be posted and affirmatively recommending it.

BRINK: What do you think the Supreme Court will do?

MACCARTHY: It seems to me that they might try to carve out a line of some kind, to say, “Section 230 applies here, but not there.” That would mean that there would be a new distinction that companies and users and lawyers would have to take into account when they think about the applicability of Section 230. No one really knows quite what that new line would look like.

BRINK: This is a very heated cultural subject for both right and left. Do you think that there’s a way for the court to thread the needle?

MACCARTHY: I’m not sure that the court can do it. That’s my worry about the outcome. I see a substantial possibility of this kind of line drawing, and I don’t see how it’s going to please either right or left. The reason is that drawing a line between recommendations and just allowing material to be posted is really hard.

Everything Depends On How You Interact With the System

TikTok, for example, is nothing but recommendations. There’s no neutral posting of material. When you go on there, you’re immediately offered something that in the first instance is just random. But then after that, everything else depends on how you interact with the system. On top of that, there are things that the company affirmatively recommends. So if the court says that 230 doesn’t apply to recommendations, all of TikTok is suddenly affected. The same is true with Facebook and Twitter and YouTube.

They really don’t have much of a business unless they can use algorithms to either affirmatively push something out or to cut it back. If the new rule really is that you have to look carefully at everything that you promote in order to avoid liability, then the companies have to cut back on how they algorithmically amplify things because they simply cannot, as a practical matter, pre-screen everything that comes up. So the companies would really have to look carefully at whether the business model is sustainable.

The other thing to think about is who else besides social media would be affected. Well, all the streaming services make recommendations. All of the online shopping sites make recommendations, and so on.

Europe Has Taken a Different Approach

BRINK: Europe has taken a different tack. Is there anything that the U.S. can learn from that?

MACCARTHY: I think, in fact, that the EU has a much better handle on the problem. A couple of years after we passed Section 230, the European Union passed the Electronic Commerce Directive. It essentially put in place a kind of notice regime, a knowledge standard: if a company is aware of illegal material on its system and does nothing to fix it, then it might be liable for any problems that arise.

It’s not necessarily guilty of anything, but it can’t say, “The law doesn’t apply to me” if it knows about the illegality and does nothing. On the other hand, if a company has no knowledge of illegality, if no one has told it anything about the illegality, then it’s free of liability.

That screen of knowledge or notice is a solution to the problem of having to prescreen everything, which is impossible in large-scale online systems. Companies can wait until a problem comes to their attention through a notice. When it does, they can take action or not, as they see fit. But they lose their automatic immunity if they do nothing. That system has been in place in Europe since about 2000, and the update in the Digital Services Act will make it even more workable.

And then just recently, in the Digital Services Act, they amended the Electronic Commerce Directive and added a clear description of what kind of notice would actually count as knowledge under that standard. It has to be clear. It has to cite the relevant statute. There has to be a good faith assertion of illegality and so on. It’s a workable system that Europe has put in place.

A Decision Could Come in June

In the United States, a notice liability system can only be put in place by overturning an earlier court ruling in the Zeran case from 1997. But the Supreme Court cannot reach that earlier case in the current Gonzalez case because it is not at issue. So, it would be up to Congress to revise Section 230 to put a notice liability system in place.

It’s possible that the court could reach a decision in June. Congress isn’t going to do anything before that. They’re waiting for the court to act. But once the court acts, I do think that will create an opening for Congress to say, “Do we need to fix whatever the court did or can we leave it alone?”