It’s not that all good things must pass away; it’s that the conditions that made them good often evolve, obviating their need or even turning their results against their original benefits. Take the Communications Decency Act, specifically Section 230, as an example. Passed in 1996, the Act was seen as a way of encouraging Internet use by telling platform providers like message boards that they were not responsible for the postings of their members.
The act was eminently sensible back then but the nature of communications, exemplified by the rise of social media, has changed so drastically that some reexamination of the bifurcation of publisher and content creator is in order. That’s what we’ll do here.
An emerging battle line pits First Amendment rights, i.e. free speech, against alleged government overreach in any attempt to regulate what is said in public space, sort of like the effort to thwart common sense gun legislation, but I digress.
The Safe Tech Act, a bill announced by Democratic senators Mark Warner (D-VA), Mazie Hirono (D-HI), and Amy Klobuchar (D-MN), seeks to put up guardrails for what’s tolerable speech on the Internet. At least one technology-oriented publication, Wired, headlined its approval, “Finally, an Interesting Proposal for Section 230 Reform.” So what’s got everyone already choosing sides?
One camp says the changes amount to requiring platform owners to be responsible for the content posted by users and therefore liable to lawsuits for things like slander, which could have a chilling effect on Internet use in general and social media’s uptake in particular (one might wish for such a chill, but none seems on the horizon). The other side says that the steps embodied in the Safe Tech Act are reasonable and overdue.
It’s worth noting that Fox Corporation just got hit with a $2.7 billion suit over allegedly defamatory language used by such on-air personalities as Lou Dobbs in describing companies that made some of the voting machines used in the last election. It’s useful to keep in mind that being sued over slanderous speech is part of the give and take; the medium used is not at issue so much as the content. Interestingly, Fox was so excited about the suit that it promptly canceled Dobbs’ show, and no one is raising First Amendment concerns.
Who’s right? I am not a fan of the Solomonic trick of trying to cut the baby in half just to see who blinks, so let’s dig in.
Free Speech, Liberty and Freedom
First, let’s dispense with the free speech point because it’s a real canard, the French term for a false or baseless and often derogatory story. No one aiming to corral content on the Internet is attempting to curtail liberty.
Free Speech is an issue of Constitutional Law that was settled only at the beginning of the 20th century in a Supreme Court decision authored by Oliver Wendell Holmes, Jr., Schenck v. United States. In that 1919 ruling, Holmes stated that the limit of Free Speech is whether or not the content presents a “clear and present danger” to society. The oft-cited example is that you can’t yell “fire” in a crowded theater because it could cause a stampede and injure or possibly kill people. Schenck was later revisited in Brandenburg v. Ohio, which narrowed the standard to speech directed to inciting imminent lawless action.
Actually, you are at liberty to yell fire in a theater; just don’t expect to use the First Amendment as a defense, and that’s the key point. The difference between liberty and freedom is fundamental. Although the two words are used interchangeably, they have different meanings and applications. Liberty refers to not having an authority figure, like an absolute monarch and his minions, telling you what you are allowed to think, do and say. There’s no liberty in an autocracy, and most don’t brook much opposition: witness China, Russia, Cuba, Iran; the list is impressive.
You might say liberty is about the absence of something while freedom is about its presence. They are Yin and Yang but they are not the same.
The Enlightenment philosophers including Montesquieu, Locke, Voltaire, and Rousseau gave a lot of thought to the idea of liberty and its importance to modern society. Rousseau and others said that we are all born in a state of nature and that society corrupts us. Liberty is an attribute of that state of nature, and the corruption implied refers to becoming part of society. If you are born into an autocracy, your liberty is stripped away and you are corrupted into understanding what you are allowed to think.
But things are very different in a democratic society compared to an autocracy because a democracy is governed by a social contract. Technically, and I welcome debate here, there is no true liberty in a democracy, at least not in the way many people interpret it, but there is a social contract, agreed to by all, that accords freedom to everyone.
Freedom is the grant of rights by all to all and as you might imagine, my freedom ends where yours begins. It’s best explained by the notion that liberty exists in nature where we can exist as individuals but to live together in society, we all trade our absolute liberty for mutually assured freedoms. The alternative is living a life in Nature in constant fear of violent death; as Hobbes famously described it — in Nature the life of man is “…solitary, poor, nasty, brutish, and short.”
Succinctly put, we are each at liberty to rob a convenience store or shout “Fire!” anywhere but we do not have that freedom. Sure, we could accomplish those acts, but society’s laws would remind us in harsh terms that the freedom to knock over a store ends at the proprietor’s freedom to own and operate a business.
In retrospect, Holmes’s decision on free speech could not have been written differently. For the good of society, the right of free speech can be extensive, but it must end where it threatens the life, liberty (really freedom) and property of fellow citizens, i.e. where it presents society with a clear and present danger. That’s the social contract in operation.
Telling users of social media that they can’t do the equivalent of yelling “Fire!” isn’t a form of totalitarianism, it is a way of society declaring the limits of what’s permissible under the social contract.
Some opponents of the Safe Tech Act assert that the changes sought by its authors would require platform owners to police the content of their users because failing to do so might cause them to be liable for offensive content. But the same can be said of Fox News or any other content publisher. Although Fox is being sued, no one is predicting the end of broadcast news as we know it (though it may need to change as Fox knows it).
They say that’s unfair because the platform providers are not publishers: they take no position on the content, have no authorship of it, and the content developers are not their employees. Therefore, the Act makes the providers responsible when they should not have to be. But under the social contract, a government decision like the Safe Tech Act, should it become law, is made by the duly elected representatives of the people, and as such it is an accurate reflection of what the social contract allows.
Concerns about liberty are good points, or at least they were in 1996, when the Communications Decency Act had its salad days. Back in the 20th century, online communications technology meant message boards; the Internet was also munching salad then, and it made sense to promote its use. Free speech and message boards made a good match, and everybody got it.
Fast forward 25 years and the Internet is a mature, world-girding and indispensable part of life. It is how we get our news, entertainment and opinion. It is also increasingly (thanks Covid!) how we meet. As such, it seems entirely reasonable to accord our experiences on the Internet the same Freedom of Speech rights that we give other media.
That would be fine, but it would be a step back for platform providers unaccustomed to any form of regulation, however small.
To accord the Internet the same freedoms as other forms of communications would indeed be limiting. For instance, both radio and TV broadcasters that use parts of the electromagnetic spectrum to send their content to users must be licensed and adhere to standards set by government regulators such as the Federal Communications Commission to operate in the public interest.
You might think this unfair, but the electromagnetic spectrum belongs to the people as represented by the government under the social contract so the people set the standards for their use through their elected representatives.
Thus, nudity and cusswords, among other things (say, torturing puppies), are closely monitored, and if broadcasters get sloppy administering their airwaves on behalf of the people, they can lose their licenses. The big exception here has been Donald Trump, who as president said and did some remarkably crude things that the media had to cover as straight news. The late comedian George Carlin once noted with glee the seven words you cannot say on television and then upped the ante to over 200 words. One wonders what his reaction would be if he visited 21st century America.
What’s Changed Since 1996
Perhaps the greatest change from 1996, the year that the Communications Decency Act passed, is that algorithmically driven social media has come into existence. And it’s not simply social media’s existence but how it works that presents a challenge. Social’s apologists like to claim that the various services are independent platforms and that their owners and operators have no control over what happens on the platforms.
This was certainly true when social media emerged, but the rise of the advertising business model changed things. The advertising model was not part of social’s initial presentation. In fact, social media had no business model and, lacking one, i.e., a way to make money, businesspeople kept a safe distance. But social media is capable of collecting massive amounts of data that, with analysis, can be very useful in predicting human behavior. With that insight, the social media business model became an algorithmically based machine for garnering eyeballs for advertisers and making scads of money.
One of social’s many algorithms, which I’ll call birds-of-a-feather, brings together people with common interests. It doesn’t matter what those interests are either. Interests can include windsurfing, auto restoration, knitting, class reunions or even hate groups.
A recent article in the New York Times suggested that a great way to stop or at least reduce the spread of hate speech on Facebook would be to eliminate automatically recommending hate groups operating on the platform. The article states, in part,
In 2016, according to a Wall Street Journal report, Facebook’s research found that two-thirds of people who joined extremist groups did so at Facebook’s recommendation. Automated group recommendations was one of the ways that the QAnon conspiracy theory spread…
Although the research dates from 2016, it remains valid today. A casual search on the words “Facebook recommends hate groups” brings in over 55 million hits in less than a second, and although many are from several years ago, this only suggests an ongoing problem. Search on other social media names and you’ll get similar results. Some of the newer social media companies actively court the designation.
Partners in Hate
It’s too strong to suggest that any social media company has consciously partnered with hate groups, but that’s not even the point. The algorithmic oversight and operation of social media platforms has fundamentally changed their natures, making the de facto case. They’ve gone from middle-of-the-road platforms with free-for-all give and take to distinct camps of interest groups, some of which spew misinformation and hate and serve as organizing platforms for questionable or even illegal activities. All of this is supported by algorithms focused on the bottom line at the expense of the public good.
Social media has lost its pristine, lily-white aura of being an unalloyed good and an honest broker of ideas; it has crossed a line that violates the social contract. Wittingly or unwittingly, it is now algorithmically responsible for stimulating dangerous content. That is the function of a publisher.
What to Do
No responsible person wants to see the end of social media because it has become a part of the fabric of global life and because banning it in one place would only cause it to spring up elsewhere.
Historically, when a disruptive innovation became part of the fabric of life, governments worked to trim its excesses through regulation. We’ve seen this in all of the modern utilities like electricity and telecommunications for example, which have become regulated industries at the interface of public and private partnerships.
Other products and services have been regulated at the state level for the most part and they include most of the professions such as medicine and law, but also barbers, beauticians, plumbers and electricians are all regulated.
Even if you are a doctor who has passed a licensing exam and submitted evidence of educational attainment, you couldn’t simply claim to be a neurosurgeon. If you tried to hang out a shingle saying you were one, you wouldn’t get any business because you’d still need to prove yourself to the college overseeing neurosurgeons, and that means demonstrating you’ve successfully served a residency in the specialty.
If you are a plumber and need to hook up a residence to a water main, you won’t get a permit to dig up the street and do the work unless you can also show a master plumbing license in your name. In case after case the results are consistent: professions police themselves, which is akin to what’s feared by opponents of regulating Internet activity.
As social media and the Internet mature, you should expect some regulation of this kind. The Safe Tech Act is only the first of what is likely to be many efforts to get social media, and more generally the Internet, to exist as solid citizens under the social contract.
A Short Checklist
Fixing the problems surfaced by social media’s popularity isn’t hard. Here’s a short checklist of what’s needed.
1. Education, certification and licensure. You can mess with the plumbing or wiring in your home without a permit in most states, but you can’t do that for others for money without a license. Yet anyone with an ax to grind can set up a social media site or group and address the world with it. Addressing the world should require a modicum of training and licensure to ensure that users with those ambitions first do no harm. If you simply want to keep up with friends as a hobbyist, you wouldn’t need to address the world, so your amateur use should be bounded.
2. Use your name. Your girlfriends might call you Tarzan, but that’s not an acceptable name online. If you want to be a presence online, you really need to own your content, and there’s no better way to do this than signing your name to your ideas, just as John Hancock did with the Declaration of Independence. He was willing to die for his beliefs, and it’s not too much to ask that you simply own up to yours.
3. Separate church and state. It’s understandable that a platform provider might not want to be responsible for others’ content. Unfortunately, some social media companies want to be both platforms and content creators who engage other content creators with their algorithms. A moment may be coming when it makes sense for social media companies to split with one side of the house focusing on platform and the other acting as a customer buying data to analyze it and to sell advertising.
It goes without saying that we need to stop making it easy for haters to find each other and then flock. These steps won’t eliminate the problem of hate speech entirely, but they will raise the ante and the bar. They’ll make people think twice about their posts and make it difficult for them to spread misinformation.
Just as Justice Holmes wrote a decision that protected free speech as well as the social contract, we need wise decisions now that do much the same. The intellectual tools are available if the intestinal fortitude is.