Jonathan Albright wants you to know that misinformation still exists on Facebook — it’s just not as obvious as it used to be.
Albright, director of the Digital Forensics Initiative at the Tow Center for Digital Journalism, spent months digging into the analytics of Facebook posts, political ads and private groups to determine how the platform was influencing the election. The result is a three-part analysis of misinformation on the social media platform, which he published days before the United States midterms earlier this month.
Three months and 1,000 screenshots later, he found that, while the technology company has made strides in limiting the spread of misinformation over the past couple of years, there’s still plenty of fakery on the platform.
“I think the midterms especially have shown that they’re starting to clean up the surface-level activity — the obvious bots, the trolls — but if you go down one level into Facebook, you see that the problems are still there,” Albright told Poynter in an interview. “In some ways, they’re better, but in a lot of cases they’re actually worse.”
His findings are striking. Hundreds of different political groups with thousands of members produce conspiracy theories and misinformation, which then spread to more public parts of the platform. Moderators encourage others to screenshot false memes and photos so that they can avoid Facebook’s automated detection systems. And the company has been inconsistent in its enforcement of rules against accounts that violate its Community Standards, Albright found.
Another small finding: One of the pages Facebook took down last month had more engagement over the past five years than The New York Times, The Washington Post and Breitbart combined.
“I had all that data and I had this panic moment where I realized that I had to get it out before it became irrelevant or a lot less relevant, so I just went on a weeklong writing quest and tried to go back and do copy edits and make it passable,” he said of his analysis. “I’m trying to go back now and straighten up and connect some datasets to some of the observations I made … just to show how big of a problem this was.”
I caught up with Albright to talk about his three-part report on Facebook, how he investigates suspect pages and how journalists and fact-checkers can adapt to more insidious forms of misinformation. This Q-and-A has been edited and shortened for clarity.
You’ve been doing this work at the Tow Center for a while now. How did you get to this point?
I’ve been working a lot with journalists, I suppose. I’m in the mix of stories that happen and news that starts to kind of break, and then I get a chance to go and sometimes help or validate certain types of information or analytics or metrics about something. So I’m kind of, in many ways, a first responder for a lot of these big stories that have broken in the past couple of years.
It’s interesting that you see yourself as something like a first responder for these kinds of stories. What’s your process for investigating social media profiles, groups and content?
Every single incident or event or story, breaking or developing, tends to be different. Other than checking what most people would check anyway, I don’t know if I have a specific workflow. If there’s anything, it’s that I tend to grab everything that I can. But I don’t think that’s unusual for journalists in general; they’re always trying to capture things because I think we realize that, more and more, things go away. As soon as you find out about that Facebook page that’s now been removed, you’ll wish that you had somehow tried to save it.
But at the same time, the web is starting to fall by the wayside in terms of archival services, so it’s becoming more challenging. If there’s one thing in my workflow, it’s that I don’t take things for granted while they’re still available. I’ll try to get any insight that I can from whatever remains on a website or ad, and that includes analytics.
From the beginning, my focus hasn’t really been on content. As much as content is a problem and as much as they’ll remove pages, I think the story last year with the Facebook analytics helped provide some motivation for people to consider something like analytics as much as they would content. We can have all those posts from the trolls, but they don’t really tell us anything about the reach or impact in terms of politics. So there’s a lot more data than just the content of stories: there’s metadata, there’s analytics, and I think all of those things are really important for reporters.
What struck me the most about your research is the extent to which bad actors have moved to private Facebook groups in order to avoid detection. Tell me a little bit how you figured out about that trend and perhaps a bit about the admins behind those groups.
The admins question is a great lead-in to that, because a lot of these groups have no admin. One of the first things I started finding out was that the groups didn’t have any admins, and I just don’t see how that should be an option on the platform. There are essentially open groups with no responsibility and no moderators, and that’s worse than Reddit.
I’ve seen evidence of the groups for quite a while, for the past year or so at least. No one had ever brought up groups in the context of 2016, even though groups were a really big component. People were so focused on the Facebook pages that I think groups were completely overshadowed. Some of the same groups from back in 2016 still exist and are just prolific in sharing. They have 10,000, 20,000 members. Some of them have up to 50,000 members.
I think what drove it home was when I was looking at the caravan and trying to trace the frenzy back to its original posts, going back to the spring. I started realizing that on Twitter, you can really dig back and find most of the posts. But on Facebook, I started to find more evidence that it wasn’t just the open posts — it was actually a lot of the posts in the groups that had initiated the controversy, or helped push it and get it going on Facebook. When I started to see that, I looked back at some of the groups and I realized how much of that kind of thing was going on. It all clicked, basically.
I was also struck by the level of engagement some of the inauthentic accounts Facebook removed had racked up over the years. What did you think of that finding?
It doesn’t make any sense, right? It’s not logical and it’s not explainable. I was looking back at articles (from 2015) a few days ago, and it was unbelievable to me to see how big the focus was. There were academic studies entirely focused, to a fault, on the number of likes or retweets that something got. To look back and read the headlines on some of these stories is just unbelievable. It’s crazy how much people trusted, almost word-for-word, the reported engagement on a Facebook page between 2012 and 2015. It was the peak of engagement metrics.
Only later did we start to reconsider these relatively arbitrary and emotionally loaded numbers and indicators, once it became clear that a lot of them were simply being inflated by networks of spam and bots.
Have you talked to Facebook about some of the things you’ve found?
No. They’re not very fond of me, I would imagine. I’m not saying everyone there is evil or anything, but I don’t think their PR team is necessarily fond of me.
The conversations I’ve had with people who are not on their PR team have been great. I’ve talked to people who are product managers on things like News Feed, and it’s totally different. But those are people who are not responsible for what’s happening, and they don’t have a whole lot of control over how to fix it.
What do you think about how this beat has changed more broadly? What do you think are the most important stories or unanswered questions going forward?
I think it’s evolved, and I don’t think that’s a bad thing. I think journalists tend to still be reactive. For example, when Facebook releases something and there’s a frenzy of reporting on it, at what point do you need to say, “OK, let’s not act as Facebook’s public relations outlet”? At what point should you take your reporting resources and your team and report on this in the context of larger issues?
To this day, there have been more and more news releases on takedowns and the removal of fake accounts, but each one seems to trigger a frenzy of reporting activity that essentially reproduces it — at least initially. I feel there’s an amplification of the message that they want to get out. There’s a lot of expertise and professional resources that you’re wasting by responding immediately to the comms team of some tech company, and you’re kind of stuck. I feel like the better model might be more in-depth takes.
Given your findings, what are some other important takeaways for journalists and fact-checkers? How should they retool their processes or coverage?
That’s an existential question. The first thing they could do is simply help not expose more people to it. Over and over again, and still to this day, when I go and look at a piece of content that’s getting too much attention, something that’s fake or false or highly controversial, and not in a good way, I’ll pull the analytics on it, and Media Matters is still the one that amplifies most of the outrage. There will just be 15 to 100 Media Matters links to it.
I think that journalists need to understand a little bit more about the technology, in the sense of how these referrals work and how they’re handing over a lot of their credibility through their news domain … they’re not really careful with how they attribute things. That would be the No. 1 best way to at least initially deal with it: don’t expose more people to it, and don’t create more outrage about something.