Jacquelyn Mason: I always like to start off by saying that each and every one of us is susceptible to spreading misinformation, and we’ve all believed false content at some time, right? Misinformation is false content, but the person sharing doesn’t realize that the content is false or misleading.
I can give a few examples of this. Many posts shared earlier in the pandemic revolved around certain groups’ susceptibility to COVID-19, but those were shared in order to help oneself or one’s loved ones. Pre-vaccine distribution, we saw pages and groups on social media that were overloaded with false information about cures and remedies, distrust, and misinformation about the vaccine—which can also be disinfo, but I’ll get to that a bit later.
Often, we share things in order to help rather than to hurt. There was a plethora of misinformation surrounding the voting process—particularly where voters can register, deadlines for voting by mail, et cetera. And all of these can also be examples of misinformation.
Now, disinformation, on the other hand, is content that is intentionally false and designed to cause harm within communities. It’s spread to make money, for political gain, or simply to cause chaos. The biggest example that I can give of disinformation currently is the Big Lie—that the 2020 election was somehow stolen. This proliferated most notably in groups on Facebook, but also in all aspects of the social web. Some popular posts, for example, pushed the idea that poll workers were discarding votes—and these were usually poll workers of color, who faced harassment afterwards, with targeted campaigns against them—or that certain election officials were in some kind of cahoots in order to steal the election. This resulted in real-world violence that left social media and went out into the streets. We saw this on January 6, and we are still dealing with the repercussions of this to this day, as witnessed in the ongoing testimonies at the January 6 committee hearings.
So, one very important thing to note is that often, when disinformation is shared, it turns into misinformation. You have groups legitimately believing the narratives that have been spun by many bad actors and taking those into their own worlds. For instance, a beloved family member saying something false at a holiday party does not mean they intended to cause that harm—but someone else did. So, we’ve all heard of the term “fake news” and how it’s since become a no-no to describe what we see as mis[information] or disinformation. But disinformation has become a buzzword in itself. Many are now asking, “Is disinformation just code for news or things we don’t like, or things we don’t agree with?” We’ve been struggling with these terms for years. And the reason we continue to struggle with them is because it’s about more than specific pieces of granular information; it’s about our entire information ecosystem, which is polluted.
To understand the current information ecosystem, we need to break it down into three elements: the different types of content that are being created and shared, the motivations of those who create this content, and the way this content is being disseminated—most notably on social media, but we also have TV, Fox News, and the radio. So, thinking about these definitions of mis- and disinformation, but then zooming in a bit: mis- and disinformation have disproportionate effects on communities of color. Before I go into racialized disinformation, I’d like to note that the media often frames communities of color as more likely to be targeted by, or even more susceptible to, misinformation. This is an extreme falsehood. And in addition to that, those in Black, Latinx, and Indigenous communities have legitimate reasons to be skeptical of information, given the legacy of historical traumas we have faced in our communities related to vaccines and civic participation.
This article originally appeared in the Nonprofit Quarterly.