Today, at any given time, Twitter is canceling someone. Yet though we discuss the word “canceling” constantly, there is no agreed-upon definition of the term, and its very newness means it is often applied, and misapplied, inconsistently. What is cancel culture? On one side of the argument are those who believe it is simply the silencing of anyone who says or does things that offend the “woke police.” On the other side, it is believed to be the act of holding someone accountable for harmful words, actions, and beliefs. In truth, canceling is neither of these things. Nor have we actually managed to “cancel” those who trend on Twitter for a few hours over something they said or did in the past. The vast majority of those called out for an offensive take they shared in the past are still bringing in income, still have a platform, and still have an audience.
So, what is the point of canceling? Simply put, there is none. Canceling serves no purpose except, perhaps, to bring light to a larger issue. When it comes out that a popular persona has been tweeting anti-Asian sentiment, it’s an opportunity to show how deep anti-Asian sentiment runs throughout the U.S. and beyond, and it’s a chance for those who have experienced such sentiment to find support from others who now have an example of it right before their eyes.
But when those tweets are inevitably removed, either by a social media platform or by the author’s own volition, is that censorship? And doesn’t that mean we’re violating our own Bill of Rights?
Although the First Amendment is frequently cited as being violated by cancel culture, the reality is that we constantly misunderstand this clause. The First Amendment protects us from censorship by the government. Private organizations, by contrast, are free to decide whom they work with or promote at any time. In fact, Section 230 of the Communications Decency Act, another law commonly misread, allows platforms like Twitter to set boundaries on acceptable discourse in the internet age.
Although we’ve heard a great deal about the first part of Section 230 over the past few years, as many have argued that these platforms operate as publishers when they moderate content, that argument misapplies the full law. Internet companies are indeed not liable for what third parties share on their websites, but the second part of the law goes further, protecting “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
When a tweet is slapped with a warning label as false information, many cry censorship and even cite Section 230 as the reason it’s illegal, when in fact Section 230 is the reason it’s permitted. Though the Communications Decency Act was put into effect 25 years ago, lawmakers anticipated that the internet age would keep advancing and, with it, the “obscene” or “harassing” language and behavior carried out through it.
For this reason, social media platforms, like any private companies, are allowed to operate by their own internal rules governing how their products may be used. We may create the content that fills these platforms, but the companies must still monitor it. As a writer, if I were to send an article to a magazine, I would expect edits, suggestions, and fact-checking to occur, since the final copy will appear in their product. I expect the same of any private company, which operates by its own guidelines.
This is not to say, though, that every form of censorship on private platforms is necessarily the best course of action for upholding these guidelines. Consider, for instance, the organization of the January 6, 2021, insurrection at the Capitol. Though reports indicate it was planned in part because of Facebook’s failure to catch far-right extremism, I would be remiss not to point out that users on the message board 8kun discussed the insurrection openly, treating it as a foregone conclusion, with some even going so far as to state their intent to kill. The difference is that, while we all know Facebook, few who did not already agree with far-right ideology knew the message board existed.
While I do believe that Facebook needs better mechanisms to recognize calls for violence on its platform, I also believe that outright banning and removing these extremists does not actually eradicate them from the online world. It merely pushes them further into the shadows. They get better at hiding, migrating to websites unknown to most, where they are ultimately free to plan potentially injurious events without moderation.
It’s well known that we exist in our own bubbles online. The more we ban those who appear dangerous (rather than attempting to better moderate their content), the less transparency we have into those who may want to cause harm. Furthermore, these extremists will seek out a “safe space” where they can share their harmful ideas in echo chambers, which only radicalizes them further. Ostracizing these groups fuels their desire to be heard somewhere they won’t be questioned; hence the existence of sites like 8kun, which are quiet until they aren’t.
In the wake of a pandemic in which some individuals believe that mask mandates are a means for one side of the political divide to control the other side’s minds, I fear we may have gone too far to turn back. The radicalization has already occurred and continues every day. How do we reintroduce those who have been outcast and radicalized to mainstream online spaces? Has so much damage already been done that a real debate grounded in real facts is no longer possible? Simply blaming one side or the other for our rampant political division misses the point.
Though the term “censorship” is thrown around loosely and laws are cited incorrectly all the time, the reality is that those doing the misusing have a point. Telling people they are too outdated or, more severely, too dangerous to exist in online spaces will never make them go away, nor will it give them any reason to listen when anyone tries to have a discussion grounded in the facts as we know them. Censorship, to some extent, is a necessary reality. Baseless claims deserve fact-checking; in fact, all claims do. But the way we currently handle censorship online risks continuing a frightening trend in which we are ill-prepared to combat the spread of such claims in spaces where we have no transparency.
Jacqueline Gualtieri is an essayist and fiction writer based out of California. Her writing can be seen in publications like HuffPost and Food & Wine, as well as literary journals like a Murder of Storytellers and Please See Me.