This article was originally published by The Institute for Future Media and Journalism and is republished here with permission.
Like industries everywhere, media organizations around the world are trying hard to adapt to the shockwave created by the spread of COVID-19. Confined journalists are rearranging their homes into makeshift news desks, and newsrooms are doubling down on efforts to combat disinformation while keeping their audiences informed with timely, quality news. They also strive to give readers enough context to understand, and sometimes challenge, their governments’ responses to the disease.
To tackle this new set of unprecedented challenges, technological innovation has come in handy for media outlets that were able to leverage it in time. Here are three technological innovations news organizations should keep a close eye on during and even after the crisis.
Automated news
At the beginning of March, as the disease was tightening its grip on European countries, the Swedish daily newspaper Aftonbladet partnered with the startup United Robots to assemble an automated system that helps reporters monitor 21 regional health authorities.
Aftonbladet journalists can access automated stories through a dedicated Slack channel and tweak them before publishing the final copy on a live feed dedicated to the coronavirus crisis. Journalists at Helsingin Sanomat, in Finland, work the same way, editing the automated stories they receive through the newspaper’s own bot.
Likewise, the agency RADAR, which acts as an automated newswire for media clients across the United Kingdom, publishes daily updates on the spread of the virus for 150 areas. On March 12, a RADAR editor tweeted that the agency had generated 149 automated stories on COVID-19 within just an hour of the numbers being released.
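At their core, systems like these fill fixed editorial templates with figures pulled from a structured data feed. Here is a minimal sketch of that technique; the function name, fields, and wording are hypothetical, not taken from United Robots’ or RADAR’s actual software:

```python
# Minimal sketch of template-based story generation, the technique behind
# automated newswires. All names and phrasing here are illustrative.

def generate_story(region: str, new_cases: int, total_cases: int) -> str:
    """Fill a fixed editorial template with figures from a data feed."""
    trend = "rose" if new_cases > 0 else "remained unchanged"
    return (
        f"COVID-19 cases in {region} {trend} by {new_cases} in the latest "
        f"update, bringing the total to {total_cases}, according to the "
        f"regional health authority."
    )

print(generate_story("Uppsala", 12, 140))
```

A human editor then reviews the generated copy — which is exactly the workflow the Slack-channel setup above supports.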
That said, even when the data comes from a reliable source, every input should be questioned critically and every automated story checked before publication. Failure to do so could lead to the kind of embarrassing situation the Los Angeles Times experienced in 2017: after records were updated in a geological database, the newspaper’s automated earthquake-alert software, Quakebot, warned readers about an earthquake that had actually taken place … 92 years ago.
Advanced data visualizations
In the second week of March, The Washington Post published a data visualization that was so popular and impactful that the news organization decided to translate it into 13 other languages. According to Paul Farhi, a Washington Post journalist, this article, which was developed by Harry Stevens, may even be the newspaper’s most-read online piece.
Stevens’ visualization features four simulations corresponding to four potential responses to a viral disease: a free-for-all, an attempted quarantine, moderate distancing, and extensive distancing. To illustrate the effectiveness of each scenario, Stevens programmed 200 dots to bounce around a frame. One of them starts out infected and spreads the disease, which is transmitted whenever two dots come into contact.
Eventually, all the dots recover, but the visualization demonstrates the effectiveness of extensive distancing in any attempt to “flatten the curve” — in other words, to keep the number of patients at any one time as low as possible.
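The mechanics described above can be reconstructed in a few dozen lines. The following is a toy version of that kind of simulation — bouncing dots, contact transmission, recovery after a fixed time, with “distancing” modeled as a fraction of dots standing still. All parameters are illustrative guesses, not the Post’s actual values:

```python
import random

# Toy contact-transmission simulation in the spirit of the bouncing-dot
# piece. Parameters below are illustrative, not the original's.
WIDTH = HEIGHT = 100.0
CONTACT_DIST = 2.0     # how close two dots must be to transmit
RECOVERY_STEPS = 60    # steps before an infected dot recovers

class Dot:
    def __init__(self, rng: random.Random, mobile: bool = True):
        self.x, self.y = rng.uniform(0, WIDTH), rng.uniform(0, HEIGHT)
        speed = 1.0 if mobile else 0.0   # "distancing" dots stay put
        self.vx = rng.uniform(-speed, speed)
        self.vy = rng.uniform(-speed, speed)
        self.state = "susceptible"       # susceptible -> infected -> recovered
        self.infected_for = 0

    def step(self):
        self.x += self.vx
        self.y += self.vy
        if not 0.0 <= self.x <= WIDTH:   # bounce off the walls
            self.vx = -self.vx
        if not 0.0 <= self.y <= HEIGHT:
            self.vy = -self.vy

def simulate(n_dots=200, distancing=0.0, steps=500, seed=42) -> int:
    """Run the simulation; return how many dots were ever infected."""
    rng = random.Random(seed)
    dots = [Dot(rng, mobile=rng.random() >= distancing) for _ in range(n_dots)]
    dots[0].state = "infected"           # patient zero
    for _ in range(steps):
        for d in dots:
            d.step()
            if d.state == "infected":
                d.infected_for += 1
                if d.infected_for >= RECOVERY_STEPS:
                    d.state = "recovered"
        # transmit on contact between infected and susceptible dots
        for a in dots:
            if a.state != "infected":
                continue
            for b in dots:
                if (b.state == "susceptible"
                        and abs(a.x - b.x) <= CONTACT_DIST
                        and abs(a.y - b.y) <= CONTACT_DIST):
                    b.state = "infected"
    return sum(d.state != "susceptible" for d in dots)
```

Running `simulate(distancing=0.0)` against `simulate(distancing=0.75)` shows the intuition the Post animated: the fewer dots in motion, the slower and smaller the outbreak.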
In another ambitious data visualization, published by The New York Times on March 22, the comings and goings of millions of Chinese residents were displayed through a compelling scroll-down narrative, from the onset of the epidemic in a seafood market in central China to the point where it turned into a global pandemic and reached the United States.
To produce “How the Virus Got Out,” a team of journalists and designers compiled data released by three telecom and internet providers in China to map out mobile phone movements during that period. They set that information side by side with estimates of the number of coronavirus carriers, along with air traffic data.
No matter how advanced a chart may be, what matters most is that it drives the story rather than serving as a mere illustration, a point stressed by data visualization expert Alberto Cairo. On that note, The Financial Times has been remarkably successful in using a low-tech log scale to give a global outlook on the spread of the pandemic.
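The reason a log scale works so well here is mathematical: exponential growth with a constant doubling time appears as a straight line on a log axis, so different countries’ trajectories can be compared at a glance by slope alone. A quick sketch with synthetic figures (not the FT’s data):

```python
import math

# Exponential growth with a fixed doubling time; figures are synthetic.
def cases(day: int, doubling_time: float, start: float = 100.0) -> float:
    """Case count that doubles every `doubling_time` days."""
    return start * 2 ** (day / doubling_time)

# On a log axis, the day-to-day increments are constant -- a straight line.
log_counts = [math.log10(cases(d, doubling_time=3.0)) for d in range(10)]
slopes = [round(b - a, 9) for a, b in zip(log_counts, log_counts[1:])]
```

Every element of `slopes` is the same value, log10(2) divided by the doubling time — which is why a steeper line on the FT chart immediately reads as faster doubling.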
Evan Peck, assistant professor in computer science at Bucknell University, warned of a few caveats associated with visualizing the disease. Among them, the uncertainty around hard figures when it comes to the number of people being infected by COVID-19 (not everyone is getting tested), and the risk that any data visualization could end up being quickly out of date because of the virus’ rapid evolution.
Many of us have the habit of…
get data –> visualize it –> move on
But in a context like this, there is an individual responsibility to either keep the data updated or don’t share it.
— EvanMPeck (@EvanMPeck) March 8, 2020
Collaborative fact-checking
Confronted with a tremendous surge in misinformation in the wake of COVID-19, news organizations and individual fact-checkers are teaming up to debunk false claims on a massive scale.
Regrouped within the Trusted News Initiative, the BBC, Agence France-Presse, Reuters, The Financial Times, The Wall Street Journal, The Hindu, and CBC/Radio-Canada are collaborating with Facebook, Google, Microsoft and Twitter as well as with the European Broadcasting Union, First Draft and the Reuters Institute for the Study of Journalism to set up a shared alert system on “harmful coronavirus disinformation.”
Also, the International Fact-Checking Network at the Poynter Institute kickstarted the #CoronaVirusFacts Alliance, which brings together more than 100 fact-checkers in 70 countries to update a database of debunked false information about the disease.
While many fact-checkers are grappling with the continuous flow of disinformation distributed online, advanced computing techniques may come in handy to distinguish fact from falsehood. At the University of Waterloo in Canada, for instance, a team of researchers is achieving promising results using deep learning algorithms to compare claims made in posts or stories with information found in related material.
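The underlying idea — score a claim by how strongly related coverage supports it — can be illustrated without any deep learning at all. The sketch below is a heavily simplified stand-in using bag-of-words cosine similarity, purely for intuition; it is not the Waterloo team’s actual model, and every name in it is hypothetical:

```python
import math
from collections import Counter

# Simplified claim-support scoring: a claim echoed by trusted reporting
# scores high; an outlier claim scores low. Illustrative only.

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def support_score(claim: str, trusted_corpus: list) -> float:
    """Highest similarity between a claim and any trusted article."""
    cv = vectorize(claim)
    return max((cosine(cv, vectorize(doc)) for doc in trusted_corpus),
               default=0.0)
```

A deep learning system replaces the word-count vectors with learned sentence embeddings, which is what lets it match claims that are paraphrased rather than repeated verbatim.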
Furthermore, the Reporters’ Lab at Duke University is developing Squash, a program able to fact-check slightly delayed live video of speeches and debates, presenting its conclusions in an informative box at the bottom of the screen.
As news organizations debate whether to broadcast President Donald Trump’s live press conferences on the virus, which contain claims that are regularly debunked in media outlets’ fact-checking sections, this middle ground may offer an acceptable option.
Samuel Danzon-Chambaud is a Ph.D. researcher on the JOLT project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 765140.