
Who Will Fix Facebook?

In its effort to clamp down on fake news, Russian trolls and Nazis, the social media giant has also started banning innocent people, proving again it can’t be trusted to regulate itself


James Reader tried to do everything right. No fake news, no sloppiness, no spam. The 54-year-old teamster and San Diego resident with a progressive bent had a history of activism, but itched to get more involved. So a few years ago he tinkered with a blog called the Everlasting GOP Stoppers, and it did well enough to persuade some friends and investors to take a bigger step.

“We got together and became Reverb Press,” he recalls. “I didn’t start it for the money. I did it because I care about my country.”

In 2014, he launched Reverb, a site that shared news from a pro-Democratic stance but also, Reader says, took great care to be correct and factual. The independent watchdog site mediabiasfactcheck.com declared it strongly slanted left but rated it “high for factual reporting, as all news is sourced to credible media outlets.”

The site took off, especially during the 2015-16 election season. “We had 30 writers contributing, four full-time editors and an IT worker,” Reader says. “At our peak, we had 4 million to 5 million unique visitors a month.”

Through Facebook and social media, Reader estimates, as many as 13 million people a week were seeing Reverb stories. Much of the content was aggregated or had titles like “36 Scariest Quotes From the 2015 GOP Presidential Debates.” But Reverb also did original reporting, like a first-person account of Catholic Church abuse in New Jersey that was picked up by mainstream outlets.

Like most independent publishers, he relied heavily on a Facebook page to drive traffic and used Facebook tools to help boost his readership. “We were pouring between $2,000 and $6,000 a month into Facebook, to grow the page,” Reader says. “We tried to do everything they suggested.”

Publishers like Reader jumped to it every time Facebook sent hints about changes to its algorithm. When it emphasized video, he moved to develop video content. Reader viewed Facebook as an essential tool for independent media. “Small blogs cannot exist without Facebook,” he says. “At the same time, it was really small blogs that helped Facebook explode in the first place.”

But Reader began noticing a problem. Starting with the 2016 election, he would post articles that would end up in right-wing Facebook groups, whose followers would pelt his material with negative comments. He also suspected they were mass-reporting his stories to Facebook as spam.

Ironically, Reader, whose site regularly covered Russia-gate stories, suspected his business was being impacted by everyone from Republican operatives to MAGA-hat wearers and Russian trolls anxious to dent his pro-Democratic content. “It could have been Russians,” he says. “It could have been domestic groups. But it really seemed to be some kind of manipulation.”

Reader saw drops in traffic. Soon, ad sales declined and he couldn’t afford to invest in Facebook’s boosting tools anymore, and even when he did, they weren’t working in the same way. “It was like crack-dealing,” he says. “The first hits are free, but pretty soon you have to spend more and more just to keep from losing ground.”

He went to Facebook to complain, but had a difficult time finding a human being at the company to discuss his problems. Many sources contacted for this story describe a similar Kafka’s Castle-type experience of dealing with Facebook. After months of no response, Reader finally reached an acquaintance at Facebook and was told the best he could do was fill out another form. “The guy says to me, ‘It’s about scale, bro,’” he recalls. In other words, in a Facebook ecosystem with more than 2 billion users, if you’re too small, you don’t matter enough for individual attention.

After all this, on October 11th this year, Reader was hit with a shock. “I was driving home in San Diego when people started to call with bad news,” he says. They said Reverb had been taken offline. He got home and clicked on his computer:

“Facebook Purged Over 800 Accounts and Pages for Pushing Political Spam,” a Washington Post headline read.

The story described an ongoing effort against “coordinated inauthentic behavior” and specifically named just a few sites, including Reverb, that were being removed. The Facebook announcement mentioned “timing ahead of the U.S. midterm elections,” implying that the deletions had been undertaken to preserve the integrity of American democracy from people like James Reader.

Reader wasn’t alone. He was one of hundreds of small publishers to get the ax in Facebook’s October 11th sweep, which quickly became known as “the Purge” in alternative-media circles. After more minor sweeps of ostensibly fake foreign accounts over the summer, the October 11th deletions represented something new: the removal of demonstrably real American media figures with significant followings. Another round of such sites would be removed in the days before the midterms, this time without an announcement. Many of these sites would also be removed from other platforms like Twitter virtually simultaneously.

“All this happens on the same day?” Reader asks. “There’s no way it’s not connected.”

The sites were all over the map politically. Some, like the Trump-supporting Nation in Distress, had claimed Obama would declare martial law if Trump won in 2016. Others, like Reverb and Blue State Daily, were straight-up, Democrat-talking-point sites that ripped Trump and cheered the blues.

Many others, like the L.A.-based Free Thought Project and Anti-Media, were anti-war, focused on police brutality or drug laws, and dismissive of establishment politics in general. Targeting the latter sites to prevent election meddling seemed odd, since they were openly uninterested in elections. “If anything, we try to get people to think beyond the two parties,” says Jason Bassler, a 37-year-old activist who runs the Free Thought Project.

James Reader sits at his home in San Diego, CA on Friday, November 2, 2018. Reader, the publisher of online news site Reverb Press, found his page unpublished by Facebook in October, but he’s never been told why. Photograph by Sandy Huffaker for Rolling Stone

Reader tried to access his sites. The Facebook page for Reverb had been unpublished. Same for his old Everlasting GOP Stoppers blog. Even a newer page of his called America Against Trump, with 225,000 followers, was unpublished. “Everything I’d worked for all those years was dead,” he says.

Reader seethed about being lumped in with Russian election meddlers. But somehow worse was Facebook’s public description of his site as being among “largely domestic actors using clickbait headlines and other spam tactics to drive users to websites where they could target them with ads.”

This grated, since he felt that Facebook’s programs were themselves designed to make sure that news audiences stayed in-house to consume Facebook advertising.

“This is all about money,” Reader says. “It’s a giant company trying to monopolize all behavior on the Internet. Anything that can happen, they only want it to happen on Facebook.”

AFTER DONALD TRUMP was elected in 2016, Facebook, and Silicon Valley in general, faced a lot of heat. There was understandable panic that fake news, be it the work of Russian ad farms, false stories spread about Barack Obama by Macedonian trolls, or insane conspiracy theories about Hillary Clinton and “Pizzagate,” was having a destructive impact, responsible for everything from Brexit to the election of our Mad Hatter president.

Everyone from journalism professors to sociologists to former Facebook employees blamed the social network for rises in conspiracism, Russian meddling and hate speech. “News feed optimizes engagement,” said former Facebook designer Bobby Goodlatte. “Bullshit is highly engaging.”

Politicians began calling for increased regulation, but Facebook scoffed at the idea that it was responsible for Trump, or anything else. Moreover, at least publicly, the firm had always been resistant to sifting out more than porn, threats and beheading videos. Its leaders insisted they were about “bringing people together,” not editing content. “We are a tech company, not a media company,” CEO Mark Zuckerberg said in 2016, after visiting with the pope.

Facebook’s touchy-feely vibing about togetherness and “friends” was probably part true, part thin veil for a voracious business plan: get as many humans herded in-site as possible, so they can have truckloads of ads shoved through their eyeballs. Restricting speech was a problem because it meant restricting speakers, which meant restricting cash flow.

To keep regulatory wolves at bay, Facebook had one thing to bargain with: its own unused political might. By 2017, 45 percent of Americans were getting news from Facebook, making it by far the largest social media news source in the country. A handful of executives could now offer governments (including our own) a devil’s bargain: increased control over information flow in exchange for free rein to do their booming eyeball-selling business.

We could have responded to the fake-news problem in a hundred different ways. We could have used European-style laws to go after Silicon Valley’s rapacious data-collection schemes that incentivize clickbait and hyper-partisanship. We could have used anti-trust laws to tackle monopolistic companies that wield too much electoral influence. We could have recognized de facto mega-distributors as public utilities, making algorithms for things like Google searches and Facebook news feeds transparent, allowing legitimate media outlets to know how they’re being regulated, and why.

Instead, this story may be turning into one of the oldest narratives in politics: the misuse of a public emergency to suspend civil rights and concentrate power. One recurring theme of the fake-news controversy has been a willingness of those in power to use the influence of platforms like Facebook, rather than curtail or correct them. Accused of being an irresponsible steward of information, Facebook is now being asked to exercise potentially vast and opaque new powers.

The accumulation of all these scandals has taken a toll on the company. A recent Pew survey found that 44 percent of users between ages 18 and 29 deleted Facebook from their phones in the past year.

Now there’s this. You thought you didn’t like Facebook before? Wait until you see it in its new role as Big Brother.

THE IRONY IS, Facebook’s business model once rested on partisanship, divisiveness and clickbait. One of the many reasons Trump won, as former Facebook product manager Antonio García Martínez described in Wired, was the campaign’s expert use of Facebook’s ads auction, which rewarded ad developers for efficiently stoking lizard-brain responses. The company, García Martínez wrote, “uses a complex model that considers both the dollar value of each bid as well as how good a piece of clickbait ... the corresponding ad is.”

A canny marketer, García Martínez wrote, could “goose” purchasing power if Facebook’s estimation of its “clickbaitness” was high. The Trump campaign’s superior grip on this dynamic allowed it to buy choice ad space at bargain prices, while the reverse was true for Clinton.
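The dynamic García Martínez describes, in which the price of reach depends on predicted engagement as well as money, is easy to see in a toy model. The sketch below assumes, purely for illustration, that an ad’s effective rank is its dollar bid multiplied by a predicted engagement score; Facebook’s real auction model is far more complex and has never been published.

# Toy sketch of an engagement-weighted ad auction (illustrative only; the
# weighting formula here is an assumption, not Facebook's actual model).

def rank_ads(ads):
    """Order ads by effective bid: dollar bid weighted by predicted engagement."""
    return sorted(ads, key=lambda ad: ad["bid"] * ad["engagement"], reverse=True)

campaigns = [
    {"name": "high-budget, bland creative", "bid": 10.00, "engagement": 0.01},
    {"name": "low-budget, outrage-bait creative", "bid": 2.00, "engagement": 0.08},
]

for ad in rank_ads(campaigns):
    print(f'{ad["name"]}: effective bid = {ad["bid"] * ad["engagement"]:.2f}')

# The $2 outrage-bait ad outranks the $10 bland one (0.16 vs. 0.10), which is
# how an engagement-weighted auction can reward incendiary content with
# cheaper, better-placed distribution.

In a model like this, the cheapest path to reach is whatever spikes the engagement estimate, which is the “goosing” of purchasing power described above.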

In other words, the same company that rewarded the red-meatiest content and hyperpartisan drivel that political lunatics like alleged MAGA Bomber Cesar Sayoc devoured was now publicly denouncing sites like Reverb Press for ... clickbait.

Reader wondered why his site had been chosen. He admits to using multiple backup profiles, which is a technical violation, but he insists this would have previously earned a slap on the wrist. Several of the other deleted sites were right-wing or libertarian (although Facebook hasn’t released a full list of the purged sites). Reader wondered if Facebook, as it reportedly did after a 2016 Gizmodo piece claimed the company suppressed conservative news, was attempting to overcompensate by targeting a blue-leaning operation.

Tiffany Willis Clark, whose page for her site Liberal America was taken down on November 2nd, is similarly baffled as to why. A self-described “Christian left” publisher from Texas who pushes a Democratic line, she says Liberal America, with its 750,000 followers, is a “lifestyle site” about “raising conscious kids who are aware of the suffering of others.” She insists she’s never engaged in any banned Facebook behaviors and is careful to source everything to reputable news organizations. An example of her content is a listicle, “87 Things Only Poor Kids Know and Conservatives Couldn’t Care Less About,” that contains lines like “We go to the doctor when we’re sick, but mom doesn’t.”

Clark created the site for political and spiritual reasons, and believes she has helped reach people with her down-to-earth approach. “I’ve had people tell me they’ve switched parties because of us,” Clark says. “We didn’t do this for the money. That was a happy accident.”

She was surprised to see traffic take off after launching in 2013, and began investing in the site as a business. Clark estimates that she has spent $150,000 on Facebook boosting tools since 2014. “I basically put my life savings into this, and it’s gone,” she says. Like many of the people contacted for this story, she regrets having built a business around an Internet platform with a constantly shifting set of standards.

“Facebook seems to be redefining its mission minute to minute,” she says. “They started with fake news, moved to Alex Jones, and now it seems to be anything that’s not mainstream media.”

The belief that the recent deletions represent the start of a campaign against alternative media in general has been stoked by the fact that, in its efforts to police fake news, Facebook recently began working with a comical cross section of shadowy officialdom: meeting with the Foreign Influence Task Force at the FBI and the Department of Homeland Security; partnering with the Atlantic Council, a NATO-connected organization featuring at least six former CIA heads on its board; and working with a pair of nonprofits associated with the major political parties, the National Democratic Institute and the International Republican Institute.

“It’s a blatant attack on independent media in advance of the election,” says Sean Conners of Blue State Daily.

“This is a real thing,” says Bobby Rodrigo, a member of the Georgia Society of Professional Journalists and an admin to more than a hundred social media accounts for independent media and charity sites. “Lots of people I know have been affected. And not enough reporters are paying attention.”

NEWS FLASH: There’s always been weird shit on the Internet. Not long ago, that’s even what a lot of us liked about the medium. Everything was on the Net, from goat sex to “Thirteen Bizarre Stipulations in Wills” to all the evidence you needed if you wanted to prove Sasquatch is real. None of this was ever regulated in any serious way, in keeping with a historically very permissive attitude toward speech.

We’ve traditionally tolerated fakes (the 1938 radio broadcast of The War of the Worlds reportedly scared one in 12 listeners into believing Earth had been invaded by Mars) and conspiracy kooks like the LaRouchians. In modern history, we’ve mostly relied upon libel laws, market forces and occasional interventions from the Federal Communications Commission to regulate speech.

Obviously, no one has a constitutional right to a Facebook page or a Twitter account. As ACLU lawyer Ben Wizner points out, there’s no First Amendment issue here. “To the extent First Amendment rights figure in at all, they’re enjoyed by the companies, who get to decide what does and does not go on their platforms,” he says. But the fact that removals are probably legal does not mean they’re not worrisome. If a handful of companies are making coordinated decisions about content, especially in conjunction with official or quasi-official bodies, this has far-reaching implications for the press.

Eric Goldman of the Santa Clara University School of Law calls the problem “soft censorship,” adding, “We’re seeing removal of content that isn’t illegal but the government doesn’t like. It’s a backdoor form of censorship.”

Once viewed as a revolutionary tool for democratization and personal empowerment, the Internet always had awesome potential as a lever for social control, as we’ve already seen overseas.

When it comes to Internet companies working with governments, there are two main dangers.

In the first, a repressive government uses an Internet platform to accelerate human-rights abuses. The worst example of this is in Myanmar, where the U.N. recently concluded Facebook may have been key in helping incite government-sponsored genocide against that nation’s Rohingya Muslim minority.

The campaign against the Rohingya led to mass murder, arson and rape, leaving thousands dead and driving some 700,000 people to flee abroad. The attackers were egged on by Myanmar officials and descended upon Rohingya settlements in a murderous rage.

A series of posts on Facebook in the Buddhist-majority country called Muslim minorities maggots, dogs and rapists, and said things like, “We must fight them the way Hitler did the Jews.” Facebook at the time had only a handful of Burmese speakers on staff reviewing this content, and the U.N. concluded that the platform had “turned into a beast.”

Facebook has since deleted accounts of Myanmar military figures accused of inciting violence, citing the same offense it applied to the likes of James Reader: “coordinated inauthentic behavior.”

The flip side of too little engagement is an intimate relationship between foreign governments and the companies involved in speech regulation.

In March this year, for instance, after the company had unknowingly helped spread a campaign of murder, rape and arson in Myanmar, Facebook unpublished the popular Palestinian news site SAFA, which had 1.3 million followers.

SAFA had something like official status, an online answer to the Palestinian Authority’s WAFA news agency. (SAFA has been reported to be sympathetic to Hamas, which the publication denies.) Its operators say they also weren’t given any reason for the removal. “They didn’t even send us a message,” says Anas Malek, SAFA’s social media coordinator. “We were shocked.”

The yanking of SAFA took place just ahead of a much-publicized protest in the region: the March 30th March of the Great Return, in which Gaza Strip residents were to try to return to their home villages in Israel; it resulted in six months of violent conflict. Malek and his colleagues felt certain SAFA’s removal from Facebook was timed to the march. “This is a direct targeting of an effective Palestinian social media voice at a very critical time,” he says.

Israel has one of the most openly cooperative relationships with Facebook: The Justice Ministry in 2016 boasted that Facebook had fulfilled “95 percent” of its requests to delete content. The ministry even proposed a “Facebook bill” that would give the government power to remove content from Internet platforms under the broad umbrella of “incitement.” Although it ultimately failed, an informal arrangement already exists, as became clear this October.

That month, Israel’s National Cyber Directorate announced that Facebook was removing “thousands” of accounts ahead of municipal elections. Jordana Cutler, Facebook’s head of policy in Israel and a former adviser to Prime Minister Benjamin Netanyahu, said the company was merely following suggestions. “We receive requests from the government but are not committed to them,” Cutler said.

This template should worry Americans. The First Amendment prevents the government from ordering platforms to take down content. But as is clear in places like Israel, sometimes a suggestion is more than just a suggestion. “If they say they’re ‘not obligated,’ that should come with an asterisk,” says Goldman.

The most troubling example of private-public cooperation is probably the relationship between Google and China. The company whose motto was once “Don’t Be Evil” is reportedly going ahead with plans for a censor-friendly “Dragonfly” search engine. The site could eliminate search terms like “human rights” and “Nobel prize” for more than a billion people.

The lack of press interest here is remarkable. Had an American company on the scale of Google helped the Soviets develop a censorship tool, the story would have dominated the press, but it has barely made headlines in the States.

Somewhere between the Myanmar and Israel models is the experience of Germany, which last year passed a broad Network Enforcement Act (NetzDG) requiring deletion of illegal content that violates German law against incitement to crime, hatred or the use of banned political symbols. Facebook tried to keep up with the NetzDG by hiring thousands to work in “deletion centers” in Essen and Berlin. But this year a German court ruled Facebook cannot take down content that is not illegal, which some believe may force the company to allow things like nude pictures. “This will get really interesting,” is how one European tech-policy researcher put it.

If content removal is messy in Germany, which has clear and coherent laws against certain kinds of speech, how would such an effort play out in America, which has a far more permissive legal tradition?

We would soon find out.

Just more than a year ago, on October 31st, a subcommittee of U.S. senators held a hearing to question representatives of Google, Facebook and Twitter. The subject was “Extremist Content and Russian Disinformation Online: Working With Tech to Find Solutions.” The grilling took place during the peak of public outrage about fake news. Facebook had just announced it would be turning over about 3,000 ads created by a Russian “Internet Research Agency.”

For the hearing, the tech firms sent lawyers to take abuse. The two chief counsels present (Colin Stretch of Facebook and Sean Edgett of Twitter), plus Richard Salgado, law enforcement director at Google, looked pained throughout, as though awaiting colonoscopies.

Although the ostensible purpose of the event was to ask the platforms to help prevent foreign interference in elections, it soon became clear that Senate partisans were bent on pushing pet concerns.

Republican Chuck Grassley, for instance, pointed to ads targeting Baltimore, Cleveland and Ferguson, Missouri, which he said “spread stories about abuse of black Americans by law enforcement. These ads are clearly intended to worsen racial tensions.”

Hawaii Sen. Mazie Hirono insisted that the Russian ads had affected the election and asked the Silicon Valley reps to come up with a “mission statement” to “prevent the fomenting of discord.”

When Stretch tried to offer a hedging answer about Facebook’s mission being the promotion of community (translation: “We already have a good enough mission”), Hirono cut him off and reminded him of a word he had used earlier. “Authenticity,” she said. “I kind of like that as a mission statement.”

Even if one stipulates every concern about foreign meddling is true, Hirono was playing with fire. Tightening oversight to clamp down on illegal foreign propaganda is one thing. Asking the world’s most powerful media companies to create vague new missions in search of “authenticity” and the prevention of “discord” is something else.

So how would the Senate make Facebook bend the knee? We got a clue in July, when Sen. Mark Warner released a white paper waving a regulatory leash at Silicon Valley. Warner proposed legislation requiring “first-party consent for data collection,” which would cut back on the unwanted use of personal data. This was a gun to the head of the industry, given that most of the platforms depend on the insatiable collection of such data for advertising sales.

The companies by then had already made dramatic changes. Google made tweaks to its normal, non-Chinese search engine in April 2017. Dubbed “Project Owl,” the changes were designed to prevent fake news (Holocaust-denial sites were cited as an example) from scoring too high in search results.

Although the campaign against fake news has often been described as necessary to combat far-right disinformation, hate speech and, often, Trump’s own false statements, some of the first sites to feel the sting of the new search environment seemed to be of the opposite persuasion. And this is where it becomes easy to wonder about the good faith of American efforts to rein in the Internet.

After Google revised its search tool in 2017, a range of alternative news operations, from the Intercept to Common Dreams to Amy Goodman’s Democracy Now!, began experiencing precipitous drops in traffic.

One of the first was the World Socialist Web Site (WSWS). According to reporter Andre Damon, the site ran tests to see how it fared under the new Google search. It found that under the old search, WSWS stories popped up very high. A few months later, they were nowhere to be found. “If you entered ‘social inequality,’ we were the number-two story in April 2017,” says Damon. “By August, we were out of the top 100 for the same search.”

Damon and others at WSWS, using data from the marketing-analytics company SEMRush and Google Webmaster, ran tests on a dozen other anti-war, progressive-leaning sites. They found their own search traffic had dropped 67 percent, and estimated Alternet was down 63 percent and Wikileaks down 30 percent. Every site they measured was down at least 19 percent. “Google pioneered this,” says Damon. (Google stressed that rankings shift with any algorithmic update, and the company says it does not single out sites by name.)

Facebook had also already made dramatic changes to its algorithm, and it wasn’t just left-wing sites that were feeling the crunch. Kevin Roose of The New York Times recently featured a Pennsylvania-based right-wing site called Mad World News whose founders, like Reader, had spent enormous sums on Facebook tools to build an audience: a staggering half-million dollars, they claimed. But starting in 2017, the site’s traffic dropped from 20 million views a month to almost nothing, especially after Facebook implemented its “Trusted Sources” algorithm, which de-emphasized commercial sites in favor of more-familiar “local” content.

“Have some integrity, give the money back” is what the Mad World founders told Roose.

But soon, mere algorithmic changes wouldn’t be enough, and the age of outright bans began. On May 17th, Facebook announced it would be working with the Atlantic Council.

Often described by critics as the unofficial lobby group of NATO, the council is a bipartisan rogues’ gallery of senior military leaders, neocons and ex-spies. Former heads of the CIA on its board include Michael Hayden, R. James Woolsey, Leon Panetta and Michael Morell, who was in line to be Hillary Clinton’s CIA chief.

The council is backed financially by weapons-makers like Raytheon, energy titans like Exxon-Mobil and banks like JPMorgan Chase. It also accepts funds from multiple foreign countries, some of them with less-than-sterling reputations for human rights and, notably, press freedoms.

One of its biggest foreign donors is the United Arab Emirates, which this year fell nine spots, from 119th to 128th place, out of the 180 countries listed in the World Press Freedom Index.

When Rolling Stone asked the Atlantic Council about the apparent contradiction of advising Facebook on press practices while being funded by numerous speech-squelching foreign governments, it replied that donors must agree in writing to strict terms. The statement reads:

“[The] Atlantic Council is accepting the contribution on condition that the Atlantic Council retains intellectual independence and control over any content funded in whole or in part by the contribution.”

Around the same time the partnership was announced, Facebook made a donation to the Atlantic Council between $500,000 and $999,000, placing it among the biggest donors to the think tank.

The social media behemoth could easily have funded its own team of ex-spooks and media experts for the fake-news project. But Facebook employees have whispered to reporters that the council was brought in so that Facebook could “outsource many of the most sensitive political decisions.” In other words, Facebook wanted someone else to take the political hit for removing pages.

(Facebook did not respond to a question about having outsourced sensitive political decisions, but it said it chose the Atlantic Council because the council has “uniquely qualified experts on the issue of foreign interference.”)

Facebook announced its first round of deletions on July 31st, a day after Warner’s white paper was made public. In this first incident, Facebook unpublished 32 sites for “inauthentic behavior.” The accounts looked like someone’s idea of a parody of agitprop. One, Black Elevation, showed the famous photo of Huey Newton in a chair, holding a spear. Significantly, one event page, announcing a counterprotest to an upcoming Unite the Right 2 neo-Nazi march, turned out to be run by a real grassroots protest group called the Shut It Down DC Coalition. These people were peeved to be described as “inauthentic” in the news.

“This is a real protest in Washington, D.C.,” said spokeswoman Michelle Styczynski. “It is not George Soros. It is not Russia. It is just us.”

But the news headlines did not read “Facebook Removes Some Clearly Bogus Memes and One Real Domestic Protest Page.” Instead, the headlines were all gravitas: “Facebook Pulls Fake Accounts That Mimicked Russian Tactics,” wrote The Wall Street Journal; “Facebook Grapples With a Maturing Adversary in Election Meddling” was the unironic New York Times headline.

About a week later, on August 6th, one of the biggest jackasses in American public life was quieted. Four major tech firms (Apple, YouTube, Facebook and Spotify) decided to either completely or partially remove Infowars conspiracy lunatic Alex Jones. Twitter would soon follow suit.

Jones was infamous for, among other things, claiming the child victims of the Sandy Hook shooting were fakes, and his ongoing trolling of grieving Sandy Hook parents is one of the most revolting episodes in modern media. Jones is a favorite of Trump, who once gave Infowars a White House press pass.

The axing of Jones by the tech platforms was cheered by almost everyone in the mainstream press in “Ding-dong! The witch is dead” fashion.

“Finally,” exhaled Slate. “It’s about time,” said Media Matters. Even the right-wing Weekly Standard saluted the move, saying, “There’s no reason for conservatives to be defending this guy.”

Few observers raised an eyebrow at the implications of the Jones episode. The objections were more about the “how?” than the “who?”

“Nobody complains about Alex Jones [being removed], which you can understand,” says David Chavern of the News Media Alliance. “But what rule did he violate? How does what he did compare to what other people saying similar things did? Nobody really knows.”

“I hate Alex Jones, I hate Infowars,” says the Georgia-based alternative journalist Rodrigo. “But we all saw what was coming.”

Reverb’s James Reader was one of the voices cheering the demise of Jones. Now conservatives are gloating over Reader’s removal from Facebook. “I have to take my lumps on that,” he says. “I still contend we don’t make incitements to violence or any of the bad things Jones does. But I should have been paying attention to the larger story. We all should have.”

AFTER THE REMOVAL of Jones, media and tech-industry types alike wondered about the “what next?” question. What about people who didn’t incite hate or commit libel but were merely someone’s idea of “misleading” or “divisive”?

The Atlantic Council in September put out a paper insisting media producers had a “duty of care” to not “carry the virus” of misinformation. Noting bitterly “the democratization of technology has given individuals capabilities on par with corporations,” the council warned that even domestic content that lacked “context” or “undermines beliefs” could threaten “sovereignty.”

Healing could accelerate, the council argued, by pressuring the market “gatekeepers” to better “filter the quality” of content. “This does not need to be government driven,” it wrote. “Indeed it is better if it is not.”

What does it look like when corporate “gatekeepers” try to “filter” social malcontents? Bassler of the Free Thought Project already had a pretty good idea. Bassler is controversial. On the one hand, he’s one of the most extensive recorders of law-enforcement misbehavior in America. His sites are essentially a giant archive of police-brutality videos. But he has a clear fringe streak. Sift through Free Thought headlines and you’ll find stories about everything from chemtrails to studies that question the efficacy of vaccines.

Overall, the Free Thought Project is a bit like a more politicized, Internet-era version of In Search Of: a mix of real news and the conspiratorial. It aims to fill clear gaps in mainstream-media coverage but also dabbles in themes that would make the Columbia Journalism Review cringe.

Like Reader, Bassler says he tried to comply with every Facebook request over the years, because his business depended on it. “I’m not interested in just building a circle jerk of people who agree with me,” says Bassler. “I’m trying to make a difference, so I need Facebook. That’s where the normies are, you know? That’s where you reach people.”

After 2016, Facebook made reaching the “normies” harder for smaller producers. Long before it brought in partners like the Atlantic Council and the International Republican Institute, Facebook invited mainstream-media partners to help fact-check sites. Those included the Associated Press, PolitiFact, FactCheck.org, Snopes and even The Weekly Standard.

Jason Bassler’s more radical page was also shut down with no explanation. Photo credit: Birch Studio Photography

Bassler did not do well in this process. Four Free Thought Project stories came up factually wanting under review. That caused traffic to plummet over the past two years, under a new Facebook policy that algorithmically demotes “false news.” The Free Thought Project may not be ProPublica, but Bassler is no Alex Jones. In two cases, his “false” ratings were later overturned by PolitiFact and AP. But his business still took the hit.

The panel-review system poses serious issues. There’s the obvious problem of established media outlets possibly being offered money by Facebook (reportedly as much as $100,000 annually) to directly reduce the business of smaller competitors.

A story by the Columbia Journalism Review about this process quoted unnamed checkers who professed to be unsure of how Facebook was picking sites for review. Some wondered why mainstream-media stories, like those from Fox or MSNBC, were being filtered out of the process. Others wondered why Facebook wasn’t fact-checking paid content.

Conspiracy theories aren’t always wrong, and people who have a conspiratorial bent are for this reason often the first to see real problems. Some important early reporting about the 2008 financial crisis, for instance, came from Zero Hedge, a site now routinely dismissed as conspiratorial.

If the question of whether reporting of this type is or is not legit is left up to panels of corporate media, who are often the targets of criticism from such sites, then even legitimate journalism that “undermines beliefs” will soon become rare. Especially when one considers that “reputable” media is often itself an actor in larger political deceptions (the Iraq-WMD episode being the most famous recent example of how terrible and lasting the consequences of disinformation can be), there’s tremendous danger in removing sites willing to play that challenging role.

Bassler’s Free Thought Project was eventually removed on October 11th. We can’t make any assumptions about why. But the opacity of the sifting process makes it hard not to wonder if such sites were chosen for something other than legitimate reasons.

“Unless they make their methodology transparent, we can’t give them the benefit of the doubt,” says Chavern. “Eventually, ‘Trust us’ isn’t going to be good enough.”

THE NEW ERA of “content regulation” has been a mixed bag. Along with bans of neo-Nazi Daily Stormer content from sites like Google, we’ve seen removals of content like a picture of two women kissing or the banning of Arab-language atheist pages in Muslim countries. Venezuela-based left-wing sites like TeleSUR and VenezuelaAnalysis.com have been suspended or deleted from Facebook, feminist cartoonists have seen content removed in India, and videos of self-immolating Tibetan monks have been found to have violated Facebook “community standards.”

Meanwhile, in smaller incidents, libertarians like Daniel MacAdams of the Ron Paul Institute, progressive organizations like Occupy London and controversial writers such as Australian Caitlin Johnstone, among numerous others, have all been suspended from Twitter and other platforms.

Many of these cases involved suspensions triggered by user complaints, another potential problem area. Since the scale of Internet operations is so vast (billions of pieces of content a day are introduced on platforms like Facebook), companies will always be forced to rely on users to flag problems. As the motives for bans expand, we’ll see more and more people trying to mass-report their online foes into suspensions or bans. Rolling Stone found examples on both the left and the right. For Wizner of the ACLU, this feels key. “If you’re going to have billions of users,” he says, “it’s always going to be Whac-A-Mole. You can’t do it to scale.”

Whatever the democratic cure for what ails us, what we’re doing now is surely the opposite of it. We’ve empowered a small cadre of ex-spooks, tech executives, Senate advisers, autocratic foreign donors and mainstream-media panels to create an unaccountable system of star-chamber content reviews, which unsurprisingly seem so far to have mostly targeted their harshest critics.

“What government doesn’t want to control what news you see?” says Goldman, the law professor.

This is power that would tempt the best and most honest politicians. We’ve already proved that we’re capable of electing the worst and least-honest politicians imaginable. Is this a tool we want such people to have?

On his run to the White House, Donald Trump mined public anxiety and defamed our democracy, but that was just a prelude to selling authoritarianism. On some level, he understood that people make bad decisions when they’re afraid. And he’s succeeded in his short reign in bringing everyone down to his level of nonthinking.

This secretive campaign against fake news may not be Trump’s idea. But it’s a Trump-like idea, something we would never contemplate in a less-frenzied era. We’re scared. We’re not thinking. And this could go wrong in so many ways. For some, it has already.

“It’s Reverb Press today,” says Reader. “It could be you tomorrow.”
