Back in May, I received news that Facebook had unpublished my church’s page, meaning it was no longer publicly accessible but instead only viewable by admins like myself. Although we didn’t use our page as much as some churches and organizations, it was still one of our channels for live-streaming our Sunday morning services — which was critical for maintaining a sense of community in the midst of the ongoing pandemic.
I began researching the reason for Facebook’s decision, as well as any potential fixes, and immediately ran up against Facebook’s labyrinthine and conflicting information.
For starters, it was impossible to find out why our page had been unpublished other than a vague message that it had violated Facebook’s community standards. Our page, however, contained nothing untoward like misleading information, hate speech, or sexual content. Indeed, according to Facebook’s own “Page Quality” tool, our page had “no restrictions or violations.” Aside from live stream reminders, its content consisted primarily of news about upcoming events mixed with the occasional photo or video from past events like our Easter service.
Eventually, the best I could figure was that somebody had reported our page, but even then, there was no actual evidence of this. I understood why Facebook wouldn’t tell me who reported the page (that would be a major privacy violation), but even just an indication that such a report had been filed would’ve been helpful.
Trying to fix the page and get it re-published proved just as frustrating as trying to find out why it had been unpublished in the first place — and just as pointless. Facebook has a form for reporting issues with Facebook pages, but it seems geared primarily toward technical issues. There’s also a button for appealing the page’s unpublished status. And finally, Facebook’s own help documentation links to yet another form, though that one’s for requesting help with disabled accounts, not pages.
Not that any of this mattered, though. Submitting the forms anyway often resulted in errors, while clicking the “appeal” button never produced any indication that anything was actually being done.
My research into a fix ultimately pointed me toward a nuclear option: deleting all of the page’s posts, photos, videos, etc., and starting over. But even this proved pointless. While I was able to export the page’s content, I couldn’t actually delete anything because — you guessed it — the page was unpublished. In other words, the thing that I needed to do to fix our page couldn’t be done until our page was fixed.
At this point, I just gave up. (Though I did take some solace in the fact that I’m not the only one who’s been confused and frustrated by this situation.) I checked the page several more times to see if somehow, miraculously, Facebook had reversed their decision, but to no avail. Then I stopped checking altogether until late last month, when I found that Facebook had finally done the inevitable: they had deleted the page.
So now my church has a new Facebook page — again, mainly to serve as a channel for live-streaming our Sunday morning services — and I have a renewed skepticism and determination to be less reliant on Facebook for anything important.
All frustrations aside, it was interesting to reflect on the above experience in light of the current debate surrounding “deplatforming,” i.e., when technology companies like Facebook, Twitter, and YouTube ban, suspend, or otherwise penalize users who’ve been deemed harmful, misleading, or in violation of the companies’ policies.
Recent examples include Twitter suspending Marjorie Taylor Greene and YouTube suspending Rand Paul after they both posted COVID-related falsehoods and misinformation. But the most famous instance of deplatforming occurred after the January 6 insurrection at the U.S. Capitol. Following concerns and criticism that Donald Trump might use their platforms to incite further violence, Twitter permanently banned the former president; Facebook and Instagram suspended his accounts indefinitely; and ecommerce platforms like Shopify and PayPal stopped selling Trump merchandise.
For some, deplatforming is clear evidence that “Big Tech” companies have run amok and are using their power to unfairly target and muzzle voices, and in particular, “right-wing” and “conservative” voices.
On the one hand, this argument’s rather hard to swallow. For starters, it’s ironic when people who tout the merits of the free market and limited government suddenly want to tell private companies how to run their business. Furthermore, it’s rich to claim that Greene, Paul, and Trump are somehow being silenced when they can appear on Fox News, Newsmax, or OANN at the drop of a hat and speak directly to millions. (And never mind claims that Facebook et al. are infringing on their right to free speech. It’s almost like some people have never read the First Amendment.)
On the other hand, “Big Tech” companies can’t really lay claim to the moral high ground. Their inconsistently applied policies make their decisions seem arbitrary while their own questionable behavior (e.g., Facebook’s history of privacy violations) makes it hard to trust that their decisions are well-grounded, reasonable, or fair — and this, regardless of your own political persuasions.
Put another way, it’s possible to cheer when someone gets banned or suspended for spreading and promoting lies and harmful messages and still feel disconcerted by a) the manner in which any such deplatforming occurs and b) the lack of transparency and consistency surrounding the deplatforming.
Returning to my earlier experience: was Facebook trying to silence my church? Were we being persecuted by Mark Zuckerberg? Of course not.
Does that mean I’m OK with what Facebook did (or didn’t do), or that I didn’t find the experience frustrating? Again, of course not. What’s more, this experience offered some insight into the frustration that’s driving at least the more good-faith criticisms of “Big Tech” and deplatforming that come from the Right. I personally have no problem with Greene, Paul, and Trump experiencing the consequences of their actions, but I can understand why it seems like such a big deal to people. Where I differ is in my firm belief that it doesn’t have to be a big deal, not really.
Ultimately, this experience, frustrating as it was, served as yet another reminder that you simply can’t trust Facebook as a communication medium. Not to serve as your primary medium, anyway. That’s not to say that Facebook doesn’t have any value or that it’s being actively dishonest (well, not entirely — see those earlier privacy violations), but rather, that there’s no real way to hold it accountable short of government regulation (which is problematic in its own way).
It’s tempting to focus on “Big Tech” because, well, it’s Big Tech. Facebook, Twitter, and others are undoubtedly powerful forces shaping our culture, for better or worse, and it’s foolish to try and ignore that influence. In the process, though, it’s easy to think that they’re the only viable options out there. As I’ve been saying for years, you should always have your own website at your own domain, and that should be your primary channel for communicating with your audience (or congregation, as the case may be).
Use Facebook, Instagram, Twitter, YouTube, et al., but don’t rely on them. They’ll all fail you eventually.
We do ourselves a disservice when we focus inordinately on “Big Tech” and throw up our hands helplessly as if we’re stuck with them. Those companies certainly have a lot of power (too much power, arguably), but that’s because we give it to them through our attention and our clicks. It can be different, however, if we want it to be.
After Trump was deplatformed, there were plenty — myself included — who said that if he still wanted to get his message out, then the solution was simple: start a blog. And for a while, Trump did have an honest-to-goodness blog titled “From the Desk of Donald J. Trump” on which he’d post, well, precisely the sort of stuff you might expect from Trump. A month later, though, he shut it down — because it wasn’t getting enough traffic. (Members of his staff claimed the blog was just a temporary solution while Trump’s organization developed its own social media platform — which still hasn’t happened.)
Evaluating your communications strategy is a necessity if you’re a public figure hoping to reach supporters. You want to make sure your efforts aren’t in vain — that you’re getting your message out to those who should hear it, that you’re not wasting time and resources, and that you’re actually where the conversations are. Therefore, it’s easy to see why politicians embrace and even come to rely on platforms like Facebook, Twitter, and Instagram.
I think Trump’s decision to shut down his underperforming blog had as much to do with his own ego as any communications strategy. Regardless, you can’t claim you’re being persecuted, silenced, and denied a platform when you can, in fact, create your own platform. A platform that you control. Which is precisely what a personal website or blog is: a place where you can share your message directly with your readership, with fewer worries of being suspended or banned. If that’s what you care about, anyway.
Unfortunately, getting suspended or banned actually seems like the goal these days, at least in some corners of American “conservatism” — an easy tactic for generating controversy and ginning up outrage in the base.
If you’re a “conservative” figure who gets yourself suspended by “Big Tech,” well, then you’ve got “proof” that “Big Tech” is out to get conservatives, and thus needs to be regulated or shut down by brave folks such as yourself. Never mind the fact that you got penalized for posting content that was misleading, offensive, and/or harmful. It got you banned, which was the point. And that outrage? It’ll come in awfully handy when it’s time to fundraise or run for re-election.
But this sort of bad-faith acting (or maybe we should just call it grifting?), and its accompanying outrage and controversy, obscures very real issues and questions pertaining to the role and effect of technology in our everyday lives:
- To what extent does a never-ending stream of information affect our decision-making process?
- To what extent does social media affect our emotional and spiritual well-being?
- How do we identify and deal with abusive, harmful, and misleading content and behavior, and those who spread it?
- How does the constant presence of technology affect our children, and how do we teach our children to use technology in wise and responsible ways?
- How do we teach ourselves to use technology in wise and responsible ways?
- How do we counter the toxicity and negativity that characterizes so much of our online discourse?
- How do we hold technology companies accountable for the incredible power they have without unduly penalizing them or preventing them from growing, innovating, and prospering?
These are questions that we should all be asking and thinking about, regardless of political party or persuasion.
There’s a discourse about technology that needs to happen in this country, of which deplatforming is certainly a part. But the fixation on deplatforming, and the anger and vitriol that comes with it, adds nothing to the discourse, nor does it help us to become better consumers and stewards of increasingly pervasive technology. Instead, it fans the flames, and in the process, we grow more insular, tribal, and balkanized — and less capable of using technology in ways that bless and benefit the broader society.
Updated 9/7/2021 to include a note about Facebook’s “Page Quality” tool.