October 25, 2017

A mountain of evidence exists to provide clues about what happened when Stephen Paddock fired on a country music festival in Las Vegas earlier this month. 

Traditional journalism tends to favor explanations from official sources over information gleaned from raw evidence. But in this case, the official timeline of events, roughly 10 minutes of panic, has changed several times as police have provided new information.

To augment the shifting storyline provided by law enforcement and shed light on unanswered questions, the New York Times combined available evidence to create an independent timeline of a shooting that killed 58 people and wounded hundreds more. The video, built with eyewitness footage, police and fire scanner audio, police bodycam footage and other known facts, provides what it says is “perhaps the most complete picture to date of what happened.”

Using a technique called investigative video reporting, or video forensics — pioneered by human rights organizations like Forensic Architecture, Human Rights Watch and Amnesty International — the Times video team reconstructed Paddock’s 10-minute rampage by identifying all 12 bursts of gunfire and placing them on a timeline. 

Social intelligence and news agency Storyful was among the first to apply these verification practices to journalism, but “for general readers or a general audience, it’s kind of new to them to see this stuff at the New York Times,” said Malachy Browne, a senior story producer at the Times who previously worked at Storyful. 

Browne says he approaches this process by considering all information — videos from social media, police body cameras and dispatch audio in this case — as the “raw ingredient of journalism.”

“Being able to harness any and all information through traditional and new digital methods and then parsing that information through forensic analysis provides you with a really powerful storytelling and investigative reporting toolkit,” he said. 

Here’s how Browne and the video unit at the New York Times gathered, parsed and published evidence from Las Vegas in one coherent timeline. 

Browne and his team began piecing together a timeline by collecting eyewitness video from the Associated Press and Storyful. They searched through social media for undiscovered clips, some of which appeared as police allowed concertgoers to reclaim property they had left at the festival. They found about 40 videos in total. 

Where possible, the team reached out to video uploaders for the original files because they tend to contain more information, as many social media sites strip out metadata when users upload photos or videos.

Bodycam video provided by the Las Vegas Metropolitan Police Department, and police and fire scanner audio from Broadcastify, a tool that sources public safety streams, rounded out the raw information. 

The team began building a scaffolding by ordering the videos based on where each one was filmed: from different parts of the festival venue, on public streets or at the Mandalay Bay hotel, where the gunman was based. 

“That was fairly easy for the most part,” Browne said, because the videos were filmed in a fairly small area with plenty of recognizable landmarks. 

Clues in the videos, such as street lights and the Luxor Obelisk in the distance, combined with information about the city from Google Street View, became valuable ways of verifying authenticity and location. The team also examined raw files for geolocation data, looked at satellite imagery from before and after the shooting and even consulted a moon calculator to determine near-exact locations.

Browne used a numbering system (1.0-based names for festival grounds, 2.0 for Las Vegas Boulevard, etc.) to organize the videos on a spreadsheet. He began scribbling down a rough analysis of the different bursts of gunfire. Patterns emerged. Knowing the location of the hotel in relation to where the videos were filmed helped to analyze firearm sounds.

Using Adobe Audition and Premiere, video journalist Barbara Marcolini lined up every single burst of gunfire using the audio waveforms from the videos. Looking at the “signature” each burst provided, the team identified 12 distinct bursts of fire outward from the hotel. 

Using these bursts as guides, they were able to line up all of the videos down to less than a second. 
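The core idea of that alignment step can be sketched in code. Cross-correlating the audio waveforms of two clips finds the time offset at which they best match; the peak of the correlation marks the lag between recordings of the same sound. This is a simplified illustration of waveform matching, not the Times team's actual Audition/Premiere workflow, and it uses synthetic impulse "bursts" in place of real audio:

```python
import numpy as np

def align_offset(ref: np.ndarray, clip: np.ndarray, sample_rate: int) -> float:
    """Estimate how many seconds `clip` lags behind `ref`.

    Cross-correlates two mono waveforms sampled at the same rate and
    returns the lag (in seconds) at which they best match. A positive
    value means the same sound occurs later in `clip` than in `ref`.
    """
    corr = np.correlate(clip, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag / sample_rate

# Synthetic example: the same gunfire "burst" captured by two cameras
# whose recordings started at different moments.
sr = 1000  # samples per second
ref = np.zeros(2000)
clip = np.zeros(2000)
ref[500] = 1.0   # burst at 0.5 s in the reference clip
clip[800] = 1.0  # same burst at 0.8 s in the second clip
print(align_offset(ref, clip, sr))  # 0.3
```

With real footage, each clip's audio would be decoded to a mono waveform first; the recovered offsets are what let every video be placed on one shared timeline.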

Browne laid them out on a Premiere timeline, giving the team the complete event from start to finish. They had at least three videos from various angles for every burst of fire, which they began to contextualize further with scanner audio and previously reported information. 

“In doing that, because you hear different sound patterns in the gunfire … it raises questions about what’s going on and why the gunman was behaving in this particular way,” Browne said.

For instance, this video, taken from a vehicle right below the gunman, contained both loud bursts being fired outward from the hotel and a dull burst that appeared to be an outlier. 

“I couldn’t understand what that was,” Browne said. “It wasn’t picked up by cameras that were recording each of the other bursts out in the festival.” 

For analysis, the team sent the video to C.J. Chivers, an investigative reporter for the Times and Marine veteran who wrote a book about the history of the AK-47, and Thomas Gibbons-Neff, a Times staff writer who is also a Marine veteran. The two determined that the absence of bullet cracks, paired with the audible sound of the gun firing, indicated that the gunman was shooting indoors at the time, possibly spraying bullets down a hallway at a security guard and the building's engineer.

With the videos all lined up in relation to each other, the team had to determine a start and end point based on actual time. 

Clocks appear in some of the footage (see an example at the 20-second mark of the Times video), which helped to set a timeframe, but Browne also used an app called Investigator to view EXIF data for the video start times. These methods can be fallible, but the team found that six of the time references pointed to a starting time within three seconds of one another.
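Those start-time checks rest on metadata embedded in the files themselves. As a rough illustration (not the app the Times used), an MP4's recorded creation time lives in its "mvhd" box as seconds since 1904, and the common case can be read with a few lines of Python; real files have variants and edge cases this sketch ignores:

```python
import struct
from datetime import datetime, timedelta, timezone

# MP4 timestamps count seconds from the QuickTime/ISO epoch, 1904-01-01 UTC.
MP4_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)

def mp4_creation_time(data: bytes):
    """Return the creation time stored in an MP4's 'mvhd' box, or None.

    A minimal sketch: it scans for the first 'mvhd' marker and reads the
    creation-time field that follows the version/flags bytes. Version 0
    stores a 32-bit timestamp; version 1 stores a 64-bit one.
    """
    idx = data.find(b"mvhd")
    if idx == -1:
        return None
    version = data[idx + 4]  # 1 version byte, then 3 flag bytes
    if version == 0:
        (secs,) = struct.unpack(">I", data[idx + 8 : idx + 12])
    else:
        (secs,) = struct.unpack(">Q", data[idx + 8 : idx + 16])
    return MP4_EPOCH + timedelta(seconds=secs)
```

This also shows why reaching out for original files matters: a clip re-encoded by a social platform may carry the upload's timestamp, or none at all, rather than the moment the camera started recording.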

Based on knowledge gleaned from Jon Huang and his work on a related piece about Paddock's modified weapons, Browne and David Botti, a contract video reporter with the Times and a Marine veteran, returned to the audio files to isolate the cracks of bullets. They were able to establish a count of shots fired that served as a basis for further analysis.

For instance, the first burst contained 59 or 60 shots, even though Paddock was using 100-round magazines (the next three bursts contained about 90 shots). 

“We don’t know what explains that and it’s up to the police to answer,” Browne said.

The video team leaned on the graphics department to augment the raw footage with room layouts, Google Earth tours of the area and other visuals. The national team provided further insights from their own sources within the police and other people familiar with the case. 

“There are many things we still don’t know about,” Browne said. 

It’s unclear whether Paddock fired his first shots at the security officer, or if the officer interrupted his first burst of fire (which may explain the shorter first burst). We don’t know why Paddock fired several single rounds of fire at the start, or even what he was aiming at. We don’t know what two pops heard at the end signify. 

“We expect a lot more to come out. CCTV will be released from within the hotel, more police bodycam footage, and we’ll get a much fuller picture in time,” Browne said. “We’re confident in the evidence that we’re presenting but there are still many questions to answer.”

Browne and I didn’t talk much about this, but viewing traumatic footage can have a significant impact on a journalist’s mental health. Resources like the Dart Center, the Carter Center and Poynter’s course on journalism and trauma are valuable for dealing with the aftermath of interacting with these materials.

Though investigative video reporting is fairly new to journalism, First Draft News has a wealth of case studies, blog posts and training available for reporters who would like to learn more.

Browne said he hopes more journalists learn how to take in the raw information provided by eyewitness video and other primary sources to provide clarity around big stories. 

“We’re leveraging as much possible information as we can, stripping it down and then building it back up to see what makes sense and what patterns there are to draw a complete picture of an event,” he said.

Learn more about journalism tools with Try This! — Tools for Journalism. Try This! is powered by Google News Lab. It is also supported by the American Press Institute and the John S. and James L. Knight Foundation.

Ren LaForme is the Managing Editor of Poynter.org. He was previously Poynter's digital tools reporter, chronicling tools and technology for journalists, and a producer for…
