What journalists need to know about digital video editing

Digital camcorders, DSLRs and digital audio recorders have revolutionized video production. It’s now possible to get higher quality footage for less money than ever before. But, advances in hardware don’t tell the whole story. Equally important have been improvements in video software — the tools used to edit, process and publish video.

At the center of this software ecosystem is the digital editing program. This is the software that helps transform footage into stories. It’s the tool that structures disparate clips into logical sequences. And it’s the best way to polish footage and pull together many assets — video, images, voice overs, on-location audio, titles, animations and more.

Why should journalists learn about editing video? After all, video editing is about technology and production techniques. There is a technical side to video editing, but there’s also an opportunity to extend storytelling deeper into the production process. Many of the decisions made in the editing phase have a big impact on stories. Pacing, structure and sequencing are just a few of the factors that go into it. Several tools have gained prominence over the years:
  • Avid Media Composer has long been a top choice for professional video editors, and it’s found in most TV and film production houses.
  • Final Cut Pro is Apple’s flagship video editing program, and it’s widely used in newspaper and online newsrooms with editing stations.
  • Meanwhile, Adobe has been hard at work advancing Premiere Pro, a cross-platform editing tool that’s quickly gaining ground on Final Cut.

Add to the mix a slew of other desktop options, numerous editors for mobile devices, and even a few Web-based editors, and it’s clear there’s no shortage of choices in how to edit video.

Fortunately, regardless of the particular platform you find yourself working with, a core set of concepts, elements and processes appears in most video editing programs.

If you understand — conceptually — how these pieces work together to provide extensive control over how video projects are assembled, learning how to implement one feature or another is a relatively straightforward task.

Non-linear & nondestructive: Video editing freedom

Two concepts underpin digital video editing.

First, video editing software is non-linear. This is the ability to jump from any place in a sequence to any other place, forward or backward. Along the way, it’s possible to cut and insert footage, changing the order of the shots and scenes in a story ad nauseam.

With linear editing, edits are made sequentially. It’s impractical to go backward and redo an edit once it’s made, and it’s challenging to preview how things are progressing until all edits are complete.

The ability to move fluidly from one point in an edit to another provides incredible flexibility to the editor. It makes for a more nimble workflow, one in which fewer compromises have to be made in how a story is structured.

Second, and equally important, video editing software is nondestructive. This means changes when editing are reversible. This applies to many kinds of changes but, most importantly, when we cut raw video into smaller, more focused segments.

Cutting down video is a process of refinement. Excess is trimmed away, beginning with wide-swath cuts, then more precise cuts as things progress. But what if too much has been taken away? No problem. Nondestructive editing means any footage cut can be restored. Just like non-linear editing, nondestructive editing means freedom and flexibility.
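To make the idea concrete, here’s a rough sketch in Python of how a nondestructive editor thinks about a clip. The file name and durations are invented for illustration; the point is that a “cut” only moves markers that reference the untouched source file.

```python
from dataclasses import dataclass

@dataclass
class SourceMedia:
    path: str
    duration: float  # seconds of raw footage; the editor never modifies this file

@dataclass
class Clip:
    source: SourceMedia
    in_point: float   # where this clip starts within the source
    out_point: float  # where this clip ends within the source

    def trim(self, new_in, new_out):
        # "Cutting" only moves the markers; the source footage is untouched.
        self.in_point, self.out_point = new_in, new_out

    def restore(self):
        # Because the edit is nondestructive, the full take is always recoverable.
        self.in_point, self.out_point = 0.0, self.source.duration

raw = SourceMedia("interview_take1.mov", 120.0)
clip = Clip(raw, 0.0, 120.0)
clip.trim(10.0, 35.0)   # a tight 25-second cut
clip.restore()          # every frame comes back; nothing was deleted
```

Real editors store far more (speed changes, filters, audio levels), but they all share this basic move: edits are instructions layered over the source media, never changes to it.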

Non-linear, nondestructive editing has been a mainstay in broadcast newsrooms for several decades now. My Poynter colleague Al Tompkins sums up the impact it’s had on producing video this way:

Non-linear allowed us to re-edit or change stories with a click of a mouse. Once the story was edited, it could be uploaded to a server for nearly instant playback. Many users could access the video at once. Since the editing was all digital, generation after generation, dub after dub, the quality was the same as the first. Multi-channel audio editing was a breeze, and it was just as easy to add transitions and effects.

With linear, tape-based editing, footage didn’t need to be ingested or rendered, so it saved journalists precious time on deadline. But Tompkins points to some steep drawbacks:

If, after we finished editing a story, a producer decided it was too long and needed to be cut down, it would require time consuming re-editing to shorten or change the piece. And once a story was edited, somebody would have to run the tape down to the video playback department. Every day the newsroom looked like that famous scene from Broadcast News where some poor soul would have to sprint down stairs to make the deadline.

Linear editing made techniques like slow-motion and dissolve transitions much more difficult. And every generation of editing would decrease the video quality.

Now, with low-cost digital editing software widely available, we all can benefit from the power of non-linear and nondestructive tools. Let’s take a look at the essential elements and steps involved in digital video editing.

Essential elements in video editing software

With this bedrock accounted for, it’s worth reviewing specific elements common to just about every video editing program.

Most video editors consist of four regions. They go by different names, depending on the particular program, but, conceptually, they serve the same purposes.

First, we have an area where files are imported and organized. In Premiere, this is the Project area. In older versions of Final Cut, it’s called the Browser, and in Final Cut Pro X it’s the Event Library. When clips are imported into an editor, they show up here. And folders — often called bins — can be created to organize our files. All kinds of media — videos, photos, audio — can be captured and organized in this area.

The “Event Library” in Final Cut Pro X.

Next, there’s a region where media contained in the browser can be previewed. This can be thought of as a built-in media player. In Premiere, it’s the Source. In Final Cut, it’s called the Viewer.

The “Source” in Adobe Premiere Pro.

Below the viewer is an important area called the timeline. This is where video projects are really assembled.

The timeline occupies two dimensions. Left to right represents, naturally, time. Elements placed to the right occur later in time than those to the left. When a clip is dragged from the browser, or project, onto the timeline, its width represents its length. Longer clips are wider, extending further to the right of the timeline.

The second dimension of the timeline represents visual depth. Elements placed higher on the timeline appear above those placed lower. This is achieved through the use of tracks; each step up or down is a different track. Complex projects sometimes use many tracks, and some tracks are designed to hold video content while others hold audio.
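Those two dimensions can be sketched as a simple data structure. This is a conceptual Python model, not how any particular editor is implemented; the track names and clip lengths are made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TimelineClip:
    name: str
    start: float     # horizontal position: seconds from the start of the sequence
    duration: float  # horizontal width: a longer clip extends further right

@dataclass
class Track:
    kind: str                      # "video" or "audio"
    clips: list = field(default_factory=list)

# Vertical order: tracks higher in this list appear above the ones below them.
timeline = [
    Track("video", [TimelineClip("title", 0.0, 4.0)]),        # V2: sits on top
    Track("video", [TimelineClip("b-roll", 0.0, 12.0)]),      # V1: underneath
    Track("audio", [TimelineClip("voice-over", 0.0, 12.0)]),  # A1
]

def sequence_end(tracks):
    # The project's length is wherever the right-most clip ends.
    return max(c.start + c.duration for t in tracks for c in t.clips)
```

Here the title clip on the upper video track is drawn over the b-roll beneath it for its first four seconds, and the whole sequence runs twelve seconds, the end of the right-most clip.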

The “Timeline” in Final Cut Pro 7.

One final timeline-related element worth noting is the playhead. This is a visual marker that denotes the current position of playback within the timeline. When a project is previewed, the playhead sweeps across the timeline, progressing to the right and marking the passage of time.

This takes us to the final major area — the output space. It’s called the Canvas in Final Cut and the Program in Premiere. Like the Preview area, this is a video player. Unlike the preview, though, it doesn’t show just one clip, but rather the fully-edited, sequenced content from the timeline. This is the view that reveals what a project’s going to look like when it’s exported.

Common steps in the video editing process

Video editing is a creative act. Still, most editing involves working through a well-established, predictable set of steps. The first step is the importing and ingesting phase.

In general, we talk about ingesting tape and importing files. More and more video is file-based so, most likely, importing is what’s happening in this step. “Importing” is a little misleading, as files aren’t actually embedded within the editor. Instead, a link is made between the video project and the file being imported. This means it’s important to be careful when moving or removing imported video files. When this happens, the editing software will lose track of them, and links to the media will need to be reestablished.
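A quick sketch shows why moving files breaks a project. This Python example is illustrative only: it creates one stand-in file on disk, and the second path is deliberately bogus to simulate a clip that was moved after import.

```python
import os
import tempfile

# Create one real file to stand in for an imported clip.
real_clip = tempfile.NamedTemporaryFile(suffix=".mov", delete=False).name

# The "project" stores only references (paths) to media, not the media itself.
project_media = {
    "interview": real_clip,              # link is intact
    "broll": "/missing/city_broll.mov",  # file was moved: link is broken
}

def find_offline_media(media):
    # If a file has been moved or deleted since import, its link is broken
    # and the editor will prompt you to relink the media.
    return [name for name, path in media.items() if not os.path.exists(path)]

offline = find_offline_media(project_media)  # → ["broll"]
os.remove(real_clip)  # clean up the stand-in file
```

This is essentially what an editor does when it reports media as “offline” and asks you to relink it.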

After importing, it’s time to make basic, rough edits to footage. This may entail chopping several long clips into shorter ones, creating more narrowly-defined “in” and “out” points (the beginnings and ends of clips), and deleting imported clips that don’t serve the project.

Sequencing comes next. This involves dragging clips into the timeline where an order can be established.

There are many ways to create a video sequence, but one of the most popular ways is to match video against audio. This method assumes we have a decent audio track that video can be synced to.

Trim editing is often the next step. This involves making minor changes to clips, sometimes in isolation (a “slip edit,” for example, which shifts a clip’s in and out points together, changing which footage is shown without changing the clip’s duration), but often alongside adjacent clips (a “roll edit,” for example, which changes, in equal proportion, one clip’s out point and another’s in point).
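The arithmetic behind these two trims is simple enough to write out. This is a hypothetical sketch in Python, with the point values invented for illustration:

```python
def slip(clip_in, clip_out, offset):
    # Slip edit: shift the in and out points together within the source,
    # so the clip shows different footage but keeps the same duration.
    return clip_in + offset, clip_out + offset

def roll(a_out, b_in, offset):
    # Roll edit: move the cut point between two adjacent clips. Clip A's
    # out point and clip B's in point change by the same amount, so the
    # overall sequence length stays the same.
    return a_out + offset, b_in + offset

slip(10.0, 25.0, 2.0)   # → (12.0, 27.0): same 15-second clip, later footage
roll(25.0, 25.0, -1.5)  # → (23.5, 23.5): the cut now happens 1.5 seconds earlier
```

In both cases, nothing elsewhere on the timeline moves, which is exactly what makes these trims safe to use late in an edit.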

With the structure set, it’s time to work through some additional post-production steps. These involve adding transitions between clips and various kinds of video filters, which change the visual quality of one or more clips. When and how filters and transitions are applied can have a significant impact on the tone and texture of a piece.

Titles are often added around this time. These include various kinds of on-screen text — the “lower thirds” that appear when interviewees are on screen, title screens introducing videos or sections, and credit rolls at the end.

One of the final steps involves correcting and grading color. Put simply, grading involves enhancing color and correcting involves fixing color imperfections.

Working with color entails getting skin tones to look natural, making sure colors match across shots and ensuring the overall color is “balanced,” which involves making sure blacks are truly black, whites are truly white, and so on.

The final step is to export the video, which involves selecting a codec and container. Codecs are used to compress video, making otherwise large files suitable for downloading and streaming. And containers package up video and audio streams and, often, additional “metadata,” while also putting a familiar extension (for example, .mov, .mp4) on the resulting file.
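The codec/container split shows up clearly in command-line tools like ffmpeg, a free encoder. Here’s a hedged sketch in Python that just builds the export command; the input and output file names are invented, and this assumes ffmpeg is installed if you were to actually run it.

```python
codec_video = "libx264"        # H.264: a widely supported web-delivery codec
codec_audio = "aac"            # AAC: a common companion audio codec
output_file = "web_version.mp4"  # the .mp4 extension names the container

export_cmd = [
    "ffmpeg",
    "-i", "final_cut.mov",  # input: the edited master (hypothetical file)
    "-c:v", codec_video,    # compress the video stream with the chosen codec
    "-c:a", codec_audio,    # compress the audio stream
    output_file,            # package both streams into an MP4 container
]

# To run it for real: subprocess.run(export_cmd)
```

Notice that the codecs and the container are chosen independently: the same H.264 video could just as easily be packaged in a .mov container instead.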

Editing brings form to video stories

Editing video is really about structuring stories. It’s about establishing a beginning, middle and end, deciding how scenes will transition into each other, establishing a rhythm, and building momentum.

Knowing how to trim a clip or sequence a series of shots is important in all forms of video storytelling. In video journalism, these techniques can help us advance stories and enhance their journalistic purpose.
