
Meta Platforms Inc. has become a lightning rod for legal challenges in the United States, from the FTC's antitrust case to shareholder lawsuits alleging the company misled investors. Last week, eight complaints were filed against the company across the United States, including allegations that young people who used Instagram and Facebook died by suicide or developed eating disorders. (Facebook has not commented on the litigation and has denied the allegations in the FTC and shareholder complaints.)
The allegations echo the concerns of Facebook whistleblower Frances Haugen, whose leak of thousands of internal documents last year showed that Meta was aware of the psychological damage its algorithms caused users, such as Instagram worsening problems for one in three teenagers.
The lawsuits strike at the heart of Meta's harmful social impact, and they could help educate the public about the details. But they likely won't force a significant change at Facebook, because Section 230 of the Communications Decency Act of 1996 protects Facebook and other Internet companies from liability for much of what their users post. Unless U.S. law changes, and there are no signs of that happening anytime soon, Meta's attorneys can continue to rely on that defense.
But that will not be the case in Europe. Two new laws coming down the pipe promise to change the way Meta's algorithms display content to its 3 billion users: the UK's Online Safety Bill, which could come into force next year, and the European Union's Digital Services Act, likely to take effect in 2024. Both aim to prevent psychological harm from social platforms. They will force big internet companies to share information about their algorithms with regulators, who will assess how "risky" they are.
Mark Scott, Politico's chief technology correspondent, is a close follower of both laws. He joined me on Twitter Spaces last Wednesday to answer questions about how they will work and what their limitations are. An edited version of our discussion is below.
Parmy Olson: What are the main differences between upcoming UK and EU laws on online content?
Mark Scott: The EU law addresses legal but unpleasant content, such as trolling and disinformation, and tries to balance that with freedom of speech. Instead of banning [that content] altogether, the EU will ask platforms to check it, conduct internal risk assessments, and provide better access to data for external researchers.
The UK law will be 80% similar, with the same requirement for risk assessments of harmful content. But it goes a step further: Facebook, Twitter, and others will also have a "duty of care" to their users, obliging them to take action against harmful but legal material.
Parmy: So to be clear, the EU law won't require tech companies to take action against the harmful content itself?
Mark: Exactly. What they are requiring is for platforms to flag it; they won't require them to ban it altogether.
Parmy: Would you say the UK approach is more aggressive?
Mark: It requires more aggressive action from companies. [The UK] has also raised the possibility of criminal sentences for tech executives who don't follow these rules.
Parmy: What will risk assessments mean in practice? Will Facebook engineers have regular meetings to share their code with representatives of [UK communications regulator] Ofcom or EU officials?
Mark: They will have to show their homework to regulators and the world at large, so that journalists or civil society groups can also look and say, "A powerful, left-leaning politician in a European country is gaining massive traction. Why? What risk assessment has the company done to ensure that [the politician's] content isn't amplified out of proportion in a way that could harm democracy?" It's that kind of boring but important work that they're going to focus on.
Parmy: Who will do the audit?
Mark: Risk assessments will be done both internally and with independent auditors, such as the PricewaterhouseCoopers and Accentures of this world, or more specialized independent auditors who can say, "Facebook, this is your risk assessment and we approve it." That will then be overseen by regulators. The UK regulator Ofcom is hiring around 400 or 500 more people to do this heavy lifting.
Parmy: But what will social media companies actually do differently? They already publish regular "transparency reports" and have made efforts to clean up their platforms: YouTube has demonetized problematic influencers, and the QAnon conspiracy theory no longer appears in Facebook's news feed.
Will risk assessments lead tech companies to remove more problematic content as it emerges? Will they be faster at that? Or will they make radical changes to their recommendation engines?
Mark: You're right, companies have taken significant steps to eliminate the worst of the worst. But the problem is that we have to take the companies' word for it. When Frances Haugen made internal Facebook documents public, they showed things we never knew about the system before, such as algorithmic amplification of harmful material in certain countries. So both the UK and the EU want to codify some of these companies' existing practices, but also make them more public. Tell YouTube, "You say you're doing X, Y, and Z to prevent this material from spreading. Show me, don't tell me."
Parmy: So, essentially, what these laws will do is create more Frances Haugens, except that instead of whistleblowers, you have auditors who come in and obtain the same kind of information. Would Facebook, YouTube, and Twitter make the resulting changes globally, as they did with Europe's GDPR privacy rules, or only for European users?
Mark: I think companies will probably say they're doing this globally.
Parmy: You talked about technology platforms showing their homework with these risk assessments. Do you think they will honestly share what kinds of risks their algorithms could cause?
Mark: That's a very valid point. It will all depend on the power and expertise of regulators to enforce this, and it's going to involve a lot of trial and error. It took about four years to smooth out the potholes before Europe's GDPR privacy rules really took hold. I think as regulators better understand how these companies work internally, they will know better where to look. Initially, I don't think it will be very good.
Parmy: Which law will do a better job of enforcement?
Mark: The UK bill is going to be watered down between now and next year, when it's expected to come into force. That means the UK regulator will have these quasi-defined powers, and then the rug will be pulled out from under them for political reasons. The British have been very vague about how they are going to define "legal but harmful" [content that should be removed]. The British have also made exceptions for politicians, but as we have seen more recently in the United States, some politicians are the ones feeding some of the worst falsehoods to the public. So there are some big holes that need to be filled.
Parmy: What do these laws get right, and where do they go wrong?
Mark: The idea of focusing on risk assessments is, I think, the best way to do it. Where they have gone wrong is in the overly optimistic feeling that they can actually fix the problem. Disinformation and politically divisive material existed long before social media. The idea that some kind of bespoke social media law can be created to fix that problem, without fixing the underlying cultural and social problems that go back decades, if not centuries, is a bit short-sighted. I think [British and EU] politicians have been very quick and eager to say, "Look at us, we're fixing it," whereas I don't think they've been clear about what they're fixing and what outcome they're looking for.
Parmy: Is framing these laws around risk assessments a smart way to protect free speech, or is that a false promise?
Mark: I don't have a clear answer for you. But I think the approach of doing risk assessments and mitigating those risks as much as possible is the way to go. We're not going to get rid of this content, but at least we can be honest and say, "This is where we see the problems, and this is how we're going to fix them." The specificity is missing, which leaves a lot of gray areas where legal fights can continue, but I also think that will come in the next five years as legal cases are fought and we get a better idea of how exactly these rules will work.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”