After more than a decade of dominating the social media landscape, Big Tech platforms like Facebook, Instagram, and Twitter are in flux. Meta—the parent company of Facebook and Instagram—has come under fire in recent years over its lax policies on news content, data privacy, and misinformation. Long-time Twitter users are concerned about hate and harassment following Elon Musk’s recent acquisition of the company, as some accounts previously banned for violating the platform’s content policies have been reinstated.
Some users are migrating to new platforms, like Post, or to decentralized, open-source services, like Mastodon. Others, particularly members of marginalized networks such as Black Twitter and Disability Twitter, are choosing to remain on the platform, where they’ve developed strong community ties that they can’t easily replicate elsewhere.
For nonprofits, this moment presents an opportunity for a social media strategy “do-over.” Interrogating how legacy social media platforms and their harmful content policies impact nonprofit communications, organizing, and fundraising can lead us to prioritize equity and safety when moving to new platforms—or choosing to stay on existing ones.
How Big Tech Harms Social Justice and Equity
For many nonprofits, social media platforms are vital communication tools with which to organize people and provide them with services and information. These platforms have become essential to people from marginalized communities as they build political power, deliver mutual aid, and make a living. Black users have long been at the center of digital culture as trendsetting social media influencers, content creators, and key drivers of online racial justice activism, as seen in hashtag movements such as #BlackLivesMatter and #SayHerName.
However, the companies that operate these platforms have no interest in maintaining them as the public utilities they have become. For one, social media companies often treat inclusive design features, like accessibility for visually and hearing-impaired users, as afterthoughts, and the development of such features is at the mercy of hiring and layoff trends: shortly after Elon Musk’s acquisition of Twitter, the company’s entire accessibility team was laid off.
For another, Big Tech social media platforms often celebrate so-called “free speech” and a “marketplace of ideas” while developing content moderation policies and algorithms that monetize hate speech, amplify misinformation, and endanger BIPOC and LGBTQ+ users. Created with presumptions of neutrality and “colorblindness,” such policies and algorithms ultimately protect primarily White, cis, straight male users—people most similar to the leaders, funders, and lobbyists of Silicon Valley companies. This has led to situations where BIPOC and/or LGBTQ+ users are disproportionately affected by policies meant to address “bad” behaviors.
Marginalized users and organizations have been impacted in multiple ways. For one, social media platforms often fail to remove hateful speech directed at these users. Due to biased algorithms and data, posts by BIPOC and LGBTQ+ users are frequently flagged as “inappropriate” or removed without warning, even if the posted content doesn’t actually violate the platform’s policies. Moderators and reviewers often lack an understanding of diverse cultural terms and practices and end up removing posts by BIPOC and LGBTQ+ users while leaving up hate speech and images targeting them. Recently, in the wake of restrictive state and national legislation targeting individuals seeking abortion access, multiple reproductive justice organizations have called attention to the algorithmic content suppression and personal-data surveillance practices of Meta and Google.
In addition to dealing with the inequitable application of content-moderation policies, women, BIPOC, and LGBTQ+ users have faced organized right-wing campaigns to have them removed from the platforms. Until a 2020 overhaul of its algorithms, Meta’s content policies for Facebook and Instagram made no distinction between hate content targeting marginalized groups and comments criticizing racist White people. A 2017 study conducted by Amnesty International showed that Black women are 84 percent more likely than White women to be mentioned in abusive tweets. A January 2021 report published by Pew Research showed that 54 percent of Black online participants and 47 percent of Hispanic participants said they had been targeted for harassment due to race or ethnicity, compared with 17 percent of White users. Black feminists and researchers have worked to call attention to Alt-Right online mobilization and racist content-moderation policies through campaigns like #YourSlipIsShowing, yet Big Tech companies and organizations continue to ignore and undermine these efforts.
Additionally, biased content-moderation algorithms can incorrectly identify the gender of queer and trans users, leading to misgendering (referring to someone using the wrong pronouns) or deadnaming (referring to trans, queer, or nonbinary people by names they no longer use). At the time of this writing, only TikTok and Twitter have implemented policies to protect trans people from being deadnamed, and the practice is often still allowed even with such policies in place. For example, in 2022, actor Elliot Page’s deadname was allowed to trend on Twitter for 45 minutes before being removed. In recent years, trans, nonbinary, BIPOC, and plus-sized users have reported being deplatformed (having access to their accounts revoked) or having their content suppressed by Instagram for being “sexually suggestive,” even when that content was nonsexual or featured fully clothed people.
These barriers and disruptions to visibility, access, privacy, and safety impact people’s online experiences—and have offline consequences. Highly organized racist, anti-Semitic, and transphobic trolling campaigns on social media by the Alt-Right, including racial slurs, hate speech, and abusive language and behavior, are part of larger efforts to push the Right’s ideas into mainstream acceptance. Such issues make it difficult for nonprofit communicators to ensure that digital campaigns and other content created on these platforms are truly inclusive, accessible, and safe.
What Nonprofits Can Do Now
While free-speech advocacy groups have long pushed for Big Tech accountability from a digital rights and privacy perspective, organizations across movements have been at the forefront of a values-centered approach to tech policy. For example, Digital Defense Fund provides digital security and privacy training to abortion-access organizations, clinics, and workers and develops software solutions designed to protect the privacy of clients and volunteers. Racial justice organization Color of Change’s Black Tech Agenda provides policy recommendations that address algorithmic discrimination, net neutrality, and antitrust regulation. These and other organizations are advocating for a just and liberatory tech future by understanding and addressing how anti-Blackness, transphobia, and other oppressive systems shape tech policy.
Some nonprofit organizations have chosen to halt their advertising or posting activity on Twitter and Facebook in response to the tech companies’ harmful policies. In 2020, tech nonprofit NTEN announced that they were pulling their advertising from Facebook due to its data-collection and content-moderation policies. The Boston-based Barr Foundation recently published a blog post detailing their decision to discontinue activity on Twitter, explaining that Twitter’s revised content policies and rising hate speech were incongruent with the foundation’s commitment to racial equity.
As the actions of the Digital Defense Fund, Color of Change, and NTEN demonstrate, nonprofit organizations cannot wait for Big Tech to prioritize privacy, safety, equity, and access—we have the agency to both advocate and make our own choices to center these values. This means evaluating our social media objectives beyond content creation and campaigns and considering issues such as harassment, data privacy, surveillance, and accessibility. This takes time, intention, and cooperation from all staff levels and backgrounds at an organization, not just junior communications and tech staffers. It also requires humility—we are all learning at the speed of change—and the courage to look beyond capitalism’s tendency to prioritize content virality and reach. Large audience and engagement numbers are not the only metric of social media success.
Lean into your organization’s values and consider how they show up in your social media strategies and policies. Here are a few ways to do this:
- If your organization’s internal social media content policies and procedures haven’t been documented, document them now. For example:
- Does your organization provide direct services via Big Tech social media platforms?
- Do you have internal information-privacy procedures to protect personal data from being shared via direct message?
- Are there policies in place for flagging, reporting, and blocking abusive and harassing content?
- Do you have policies for how your organization shares and responds to real-time sensitive and traumatic content, like police violence?
- If you’re planning to move to self-moderated communities like Mastodon or create your own server, have you developed policies and identified staff/volunteers who will moderate content, websites, and IPs?
These policies should be informed by your organization’s social media practitioners as well as the input of your senior staff and board: don’t let social media become an afterthought in executive-level communications strategies. Identify senior staff who understand or are willing to learn about social media policies and systems, and who can make critical decisions to safeguard the privacy and safety of direct service recipients, supporters, and staff.
- Understand how bias and power dynamics operate in staffing technology and social media positions. Social media content creator positions are disproportionately held by young and BIPOC women, and racialized and gendered biases often shape expectations of these groups’ work. For example:
- Are the employees who handle social media community management for your organization properly compensated for the level of public-facing work and response obligations they carry?
- Are they resourced and protected during cases of harassment?
- Are they provided with support, such as mental health resources, if they regularly moderate hate speech and abusive comments or view unfiltered, traumatic content online?
- Center language justice in your content strategies. Language justice is the fair and equitable use of language in all aspects of society. Consider the languages spoken by the audiences and communities you work with and the language needs of marginalized and underserved communities. Translate your social media content into multiple languages and partner with language-justice organizations to ensure that your content is inclusive of all. Avoid ableist language and, where appropriate, use non-gendered terms. Incorporate plain language principles and avoid using jargon or complex words and phrases.
- Prioritize accessibility and disability justice. Ensure that your social media content is accessible to disabled and neurodivergent users. Use alt text—a short description of an image that is read aloud by screen readers—for all social media and website images. Use closed captioning for your videos to provide access to people who are deaf or hard of hearing. Make your content mobile-compatible for low-income and rural users who may only have access to a smartphone or tablet.
- Look into digital security, privacy, and safety training and education from a nonprofit/movement perspective for all staff—at all levels. Organizations like the aforementioned Digital Defense Fund and Social Movement Technologies are good go-to resources.
- Consider the ethics of how and where you invest your social media marketing dollars and time. Does the benefit to your organization of investing ad dollars in these platforms outweigh the potential harm to the populations you serve or represent? This is not an easy decision to make, especially if you’ve spent years developing a presence and community on legacy social media platforms. It may require input from your organization’s social media followers. Consider a survey or focus group to find out where and how your followers spend their online time—and what needs and concerns they have when interacting online.
The process of developing a values-based social media strategy is an iterative one because the industry’s technology and audiences never stop evolving. Sometimes things can go wrong even when your social media staff have done everything right. Maybe a well-followed troll has decided to attack everyone who retweets from a particular account, or a change in content policy or admin settings has altered how your posts are shared. Nonprofit executive staff should be prepared to make informed decisions around social media policies with the goals of safety and inclusion for all audiences and staff.
A Values-Centered Future for Social Media
Nonprofit communications and tech professionals must decide whether to stay with legacy social media platforms, leave for emerging platforms, or attempt to balance both. Given recent changes to social media platform policies, this is exactly the right time to look closely at how an organization’s social media participation impacts vulnerable and marginalized users and staff.
Inclusive digital strategy and digital equity are important to me personally. As an “extremely online” nonprofit communications professional, I’ve experienced the power that social media holds for spotlighting inequality and connecting people with the communities they need to thrive. But as a Black woman speaking out about racial justice online, I’ve also experienced online harassment and abuse, and as someone with a visual impairment, I’ve experienced the barriers created by inaccessible social media platforms.
Let’s think of social media as more than just outreach to younger audiences or an outlet for campaigns and look closely at social media methods, policies, and messaging. Let’s approach these platforms for what they truly are: essential resources for a diverse audience of users. People and communities lead the way in creating inclusive, safe online spaces that allow every person, regardless of their background or abilities, to participate. Nonprofit communicators can work in and across their organizations to reimagine a future where such spaces are the norm. There’s no better time to get started.
This article originally appeared in the Nonprofit Quarterly.