Facebook and Twitter step up their game
For nearly three years, Facebook has been working with fact-checking organizations to limit the spread of false content on the platform.
That partnership, in which (Poynter-owned) PolitiFact participates, has changed a lot. (Disclosure: Being a signatory of the IFCN code of principles is a necessary condition for joining the project.)
And now, it’s changing again.
This week during a press call, Facebook CEO Mark Zuckerberg announced a slew of updates to the company’s anti-misinformation efforts going into the 2020 U.S. election. Among them: displaying fact checks more clearly on both Facebook and Instagram.
Before, posts rated as false by Facebook’s fact-checking partners, of which there are more than 55 around the world, would have the fact check appended below the post. Now, a warning label will be superimposed on debunked photos and videos to make clearer that they’re false. Facebook will continue to notify pages that spread falsehoods, as well as users who share them.
Also of note: Fact checks will now also be applied to posts on Instagram, which fact-checkers started partnering with in May. When users try to share a debunked post in private messages, they’ll receive a warning message with the relevant fact checks.
Finally, Facebook announced last week that it’s hosting its first-ever summit with fact-checking partners at the company’s headquarters in Menlo Park, California. During that meeting, which runs Nov. 5-6, more than 100 fact-checkers and journalists will gather to discuss the program.
Those are promising changes to what has become one of Facebook’s most visible attempts to counter misinformation. And it’s not just Facebook that has changed its approach to false content recently.
Twitter, a platform that has done little to limit the spread of misinformation since 2016, took a baby step this week in announcing that it’s working on a new policy to combat misinformation. The company said it is using a feedback period to gather ideas from users on how best to combat “synthetic and manipulated media,” which Twitter defined as “media that’s been significantly altered or created in a way that changes the original meaning/purpose, or makes it seem like certain events took place that didn’t actually happen.”
The end result of that policy is not yet clear. In the past, Twitter has had success removing bots and fake accounts, but that’s very different from limiting the spread of false information.
Many questions remain about the efficacy of Facebook’s partnership with fact-checkers. But for now, it’s clear that both platforms are being more deliberate about their approach to misinformation in 2020 than in 2016.
. . . technology
Speaking of Facebook, artificial intelligence is getting “shockingly better very quickly” at finding misinformation, the company’s CTO Mike Schroepfer said at The Wall Street Journal’s Tech Live conference this week. Watch here.
China is beginning to “flex its muscles in the information space,” according to experts at the Atlantic Council’s 2019 Global Forum on Strategic Communications Wednesday. Here’s the council’s report.
Federal Computer Week explored the question of whether requiring social media companies to verify anonymous users could help reduce misinformation. Rep. Joe Neguse (D-Colo.) brought up the idea at a hearing this week on election security.
. . . politics
ISIS militants have been posting short propaganda videos to TikTok. An article in The Wall Street Journal Monday said that the app’s Chinese parent company has hired thousands of content moderators.
During the Canadian national elections, Reddit played “an outsized role” in spreading unsubstantiated allegations that Prime Minister Justin Trudeau was involved in a nonexistent sex scandal, the National Observer reported this week. (Trudeau was re-elected but will lead a minority government.)
Politico media writer Jack Shafer wrote that President Donald Trump is such a prolific liar that the media should start handling it differently. “As a petulant but devoted reader of the press, Trump would notice a headline reading ‘President Trump Said Something True Yesterday,’ and maybe tamp down on the lying,” Shafer wrote.
. . . the future of news
Dozens of new “news” websites have appeared in Michigan, but they are essentially publishers of conservative content. The Lansing State Journal reported that the company behind the websites, Metric Media, says on its “about” page that it plans to eventually launch thousands of such sites nationwide. Washington Post columnist Anne Applebaum noted on Twitter that this is a tactic used by the far right in Europe.
The Duke Reporters’ Lab added 21 fact-checkers to its database of reporting projects that regularly debunk political misinformation and viral hoaxes. The database now lists 210 active fact-checkers in 68 countries. Facebook’s Third Party Fact-Checking Program and collaborative election projects are driving the rise.
In Bangladesh, four people died in riots after 20,000 people took to the streets to protest when hackers posing as a young Hindu man posted a Facebook message insulting the Prophet Muhammad, according to RFI (in French). Reuters reported that all the hackers had been detained.
President Donald Trump’s monologue at his cabinet meeting this week ran 71 minutes and covered a wide range of topics, which meant plenty of room for false claims. The Washington Post called it “part news conference, part stream-of-consciousness bragging and all about Trump.”
CNN found at least 21 claims to check (it has been updating its story), the Post did 15, and PolitiFact dissected nine. Factcheck.org had five reporters contributing to its comprehensive report. Some of the claims checked overlapped, some did not. Some could be checked quickly because they’ve been made before, like The Post’s check of Trump’s claim that a whistleblower gave a false account of Trump’s call with Ukrainian President Volodymyr Zelensky.
What we liked: Covering Trump as a beat reporter isn’t easy because of the frequency of his falsehoods. Reporters want to hold him accountable for what he says, but that would take time away from their ability to produce news and analysis. So in cases like this, fact-checkers carry a large load, taking the pressure off their White House colleagues so that readers see that falsehoods are debunked as they occur.
First Draft has updated its guide to understanding information disorder.
A new study found that people are more likely to fall for false headlines if they spend less time thinking about them.
Agence France-Presse wrote about how the communications blackout in Kashmir is fueling a misinformation battle between India and Pakistan.
Until Nov. 10, Maldita.es is offering a daily mini-podcast about the Spanish election process. The format is designed to reach people who have no more than five minutes to learn what’s false online.
A state senator from North Dakota apologized for posting a long-debunked photo that purports to show U.S. Rep. Ilhan Omar (D-Minn.) holding a weapon at an al Qaeda training camp. But he won’t apologize to Omar, the Associated Press reported.
People in the fact-checking universe will want to keep an eye on MediaWell. Launched by the non-profit Social Science Research Council, it promises “to track and distill the latest research on disinformation, online politics, election interference, and emerging collisions between media and democracy.”
There’s a new student-run fact-checking operation at the University of Hong Kong. It’s called Annie Lab.
In Nigeria, WhatsApp is the medium of choice for older people sharing misinformation.
In the United States, the Federal Trade Commission is cracking down on companies using deceptive marketing tactics, such as selling likes and posting fake customer reviews.
The Florida Pro Chapter of the Society of Professional Journalists is attempting to trademark the term “fake news” to take some of its power away from those who use it as a pejorative for the press.