“We’re flagging problematic posts for Facebook that spread disinformation.”
-White House Press Secretary
Disinformation these days is everywhere. Social media in particular has become akin to a freeway where everyone is stuck in traffic, incoherently yelling at nobody in particular out of frustration while leaning on their horns. Oh, and the turn signals are working overtime, but it’s impossible to tell whether the drivers actually plan on following the rules of the road. In the distance, there are signs, but they’re obscured by all the smoke. Good luck getting off.
So, in light of all the disinformation being circulated around a dangerous pandemic, it seems reasonable to want to try to address it, right?
In a press conference, White House spokesperson Jen Psaki outlined some of the ways in which the Biden administration is working to address this issue. One of these strategies is to flag problematic posts for Facebook, as well as to keep a list of accounts and individuals who are actively spreading disinformation.
Seems noble enough, right? At least that’s what many people on social media applauding this move seem to think. The same people who believe that getting a vaccine should not be a choice and that trading some of our civil rights for the common good is just part of living in a society.
But not so fast.
There are plenty of people sounding the alarm too.
Freedom of speech is at the center of this debate. Sure, the White House isn’t legally mandating that certain content be removed from social media; it is merely suggesting it. But when the entire government of the United States pressures a company, it’s naïve to believe that the pressure to comply isn’t tremendous. We’re talking about the most powerful of institutions here. One with direct access to the IRS, if needed. It’s a massive step in the direction of the government regulating what is and isn’t allowed to be posted, under the banner of public safety.
After all, White House communications director Kate Bedingfield has already suggested that the government is considering changing Section 230 so that companies become liable for publishing anti-vaccine misinformation. In an interview with MSNBC, she said: “we’re reviewing that, and certainly they should be held accountable.” This isn’t the first time this has been suggested by Biden or his administration, either.
And why should that be such a leap if, as Biden has alleged, Facebook and other social networks are literally “killing people” by allowing false information about vaccines on their platforms?
To this, Facebook issued a response via NBC News:
"We will not be distracted by accusations which aren't supported by the facts. The fact is that more than 2 billion people have viewed authoritative information about COVID-19 and vaccines on Facebook, which is more than any other place on the internet. More than 3.3 million Americans have also used our vaccine finder tool to find out where and how to get a vaccine. The facts show that Facebook is helping save lives. Period."
At this point, it’s important to note that the term ‘disinformation’ is the one being used by entities like the White House, and not ‘misinformation.’ There’s an important distinction that speaks to intent. To spread ‘disinformation’ means to share false, misleading information with a specific motive. Misinformation, however, is to spread wrong information unintentionally. The two get conflated, but the distinction is crucial.
You see, ‘misinformation’ is covered by the First Amendment, regardless of its validity.
And yet, perceived misinformation is currently under attack too.
Social media companies, however, are not bound by the First Amendment, so they can do as they wish. And they have been. On the one hand, they take no responsibility for the words published on their platforms, since they deny that they are editorializing; and yet, as many are familiar, they take down some content but not other content, in a seemingly arbitrary way.
In fact, I’ve personally reported several instances of “hate speech” on Twitter recently. The platform investigated my reports and confirmed that this was indeed a breach of its “hate speech rules.” The content and the users remained.
But, if the content being taken down is false, regardless of intent or who does it, what’s so bad about that? Isn’t it protecting people?
Here’s where the water gets very murky. How do we decide what is ‘false’ and what is not? Who gets to decide and on what basis? What’s the transparency mechanism in place? Who has the monopoly on ‘truth’? Does the government get the final say?
We’re already living in a time when numerous scientists have voiced concerns that dissent on some very important subjects, including public health, has been effectively shut down. Some scientists have lost their careers, but many more have simply remained silent.
There have been many contradictions by authorities throughout the pandemic, on things ranging from mask use to the suppression of claims about the origins of the virus, such as the lab leak hypothesis. Things that were unacceptable to talk about just a month ago are now being discussed in the mainstream. There’s also plenty of disagreement. Canada claims that mixing and matching vaccines is perfectly safe (coincidentally, it is experiencing shortages of Pfizer at the moment), whereas the FDA does not recommend it. The Canadian government also promoted new research claiming that it’s better to wait longer between the first and second shots of the vaccine (again, having had shortages of vaccines), whereas in the U.S. individuals were getting their second dose within 2-3 weeks. Once its vaccine supply grew, Canada shortened the gap from four months to approximately two. There was plenty of debate and shifting information regarding the AstraZeneca vaccine as well. And the list goes on and on.
Now, I’m not arguing that we should forget about disinformation or misinformation and leave it be. I’m not arguing that it’s not harmful. In fact, it’s something I’ve given much thought to, and I have wondered about ways to address it without introducing a new problem. And we should keep working on ways to address it, particularly innovative technological solutions, ideally ones built around decentralization.
But I’m convinced that censorship isn’t the way to go.
Which brings us back to the censorship debate. Under the present laws, it’s true that technically what platforms like YouTube, Twitter, and Facebook are doing is not a true violation of the First Amendment. But, effectively, it is, given the massive reach that these platforms have.
I would argue that they have grown to resemble more of a utility company than anything else, and should be regulated in a similar manner. Just as we have ways to deal with monopolies to address a flaw in the system of capitalism, we should consider placing similar safeguards on what has become the dominant information highway of our day.
Social media platforms can’t continue to claim to have no responsibility and yet take on the task of deciding what information we should and shouldn’t see.
And at the end of the day, we should be very concerned about who’s making such decisions for us, whether the White House or a corporation.
(Oh, and if you feel comfortable with Biden’s administration making such decisions for you because you trust him fully, consider how you’d feel if the other guy were in power. Would you feel the same then?)
What do YOU think? Share your thoughts below.