Casey Frechette

Casey Frechette is a visiting assistant professor in the journalism and media studies department at the University of South Florida St. Petersburg, a Web strategist and consultant with USFSP's Nelson Poynter Memorial Library, and an adjunct with The Poynter Institute. Casey teaches and writes about digital media and researches the role of technology in learning. Before joining USF St. Petersburg, Casey was an interactive learning producer with Poynter’s News University, a leading online journalism and media training program. At Poynter, Casey worked with faculty and industry leaders to design and build custom training experiences for a community of 200,000 (and growing) online learners. Casey has over a decade of Web development and e-learning experience, specializing in PHP, MySQL and jQuery. He produced multimedia lessons for Navajo students at the University of New Mexico’s Technology and Education Center and DJed at KSEL in Portales, New Mexico. Casey has a master’s in media arts and computer science and a doctorate in organizational learning and instructional technology. His dissertation looks at the effects of animated characters in Web-based learning environments, and his research has appeared in peer-reviewed publications. His current research projects involve investigations into how online wisdom communities form and develop.


Technology in the hands

An introduction to newsroom programming technologies

Newsrooms around the country use code to expand their reporting, create alternative storytelling formats and engage audiences in new ways. Opportunities to enhance newsgathering and publishing with programming skills are significant and growing. So too are the calls to teach journalism students coding alongside writing, editing and reporting.

Many journalism schools recognize the value of technology training in their courses, but they face roadblocks when adding programming to their instruction. One fundamental challenge concerns what tools to focus on and when to teach them.

Journalists looking to improve their technical skill sets face similar issues. There’s no shortage of ways to learn code, but it may not be clear where to begin or how technologies fit together to make code-infused journalism possible.

To help address these issues, here’s a look at the most popular programming languages (and related tools) used by some of the newsrooms at the forefront of journalism and coding.

I’ve grouped the list into several thematic areas. These may lend themselves to modules in, say, a data journalism class. Or they may point to courses unto themselves.

Frontend technologies
Any news app, interactive infographic or other tool delivered on the Web is bound to make use of two foundational technologies: HTML and CSS. Combined with actual content, these tools give us everything we need to publish basic stories and graphics across devices. And they’re necessary in complex applications that require more technology.

  • HTML (Hypertext Markup Language). When the Web was invented, HTML was a founding technology. It remains an essential tool for publishing on the Web. It’s a means to add structure and semantic meaning to content. This makes it possible for browsers and other software (from search engines to screen readers) to make sense of what we’ve published. Though not strictly a programming language, it’s hard to avoid HTML when developing simple stories or sophisticated applications. And more sophisticated tools, from JavaScript to Ruby on Rails, integrate closely with HTML.
  • CSS (Cascading Style Sheets). Like HTML, CSS is a foundational technology with no substitutes. HTML gives content structure, and CSS designs it. Typography, color and layout are some of the presentation options CSS lets us define. Just as HTML has been revitalized with the emergence of a new, richer standard, CSS has benefited from ongoing development. It’s the primary tool designers use to implement responsive Web designs, making it a key ingredient when thinking about building content that works across devices.

JavaScript
JavaScript is another ubiquitous technology. Unlike HTML and CSS, it’s a full-fledged programming language with many sophisticated capabilities. On news sites, JavaScript usually adds a layer of interactivity to projects, making it possible for users to interact with complex interfaces. Many related technologies make JavaScript a valuable point of focus when learning to program for journalism.

  • jQuery. jQuery is one of the most popular JavaScript libraries. In programming, a library is a collection of prewritten code that solves common problems, accelerating the development process. In this case, jQuery makes JavaScript easier and faster to write and standardizes inconsistencies across browsers. jQuery focuses on making Web pages dynamic by manipulating the DOM, or document object model. This makes it possible to take part of a Web page and change it on the fly, for example, in response to a user action. This basic premise — changing part of a document after a user interacts — is the foundation of much of the interactivity seen on the Web today.
  • CoffeeScript. CoffeeScript is a different way to write JavaScript. It maps very closely to the JavaScript language, but it standardizes and simplifies some of the more tricky syntax. In the end, CoffeeScript compiles into JavaScript code, so it’s really about streamlining workflows. NPR uses this tool in some of its projects.
  • JSON. JavaScript Object Notation is a standard for formatting and transmitting data from one application to another. JSON makes it possible to represent data in a way that’s both highly structured (good for computers) and easy to read (good for humans). With the proliferation of APIs, or application programming interfaces, that allow systems to exchange information with one another, JSON has become a vital part of many news applications.
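As a minimal sketch of why JSON suits both computers and humans, here’s how Python’s standard json module parses and re-serializes a small document (the headline and vote counts here are invented for illustration):

```python
import json

# A JSON document is structured for machines but still readable by people.
payload = '{"headline": "Council passes budget", "votes": {"yes": 5, "no": 2}}'

story = json.loads(payload)      # parse JSON text into Python objects
total = story["votes"]["yes"] + story["votes"]["no"]

round_trip = json.dumps(story)   # serialize back to JSON text for transmission
```

The same payload could just as easily be produced by a government API and consumed by a news app’s front end.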

Data stores
Data stores are technologies used to archive information, usually in a highly structured way. That usually means many individual records, each with the same parts. Imagine a list that shows the names of organizations, for example, along with their locations, emails and phone numbers. Having a strong structure makes it possible to retrieve information in predictable ways. You could gather a list of all phone numbers for an organization, for instance, or all its other contact information.
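The idea of many records sharing one structure can be sketched in a few lines of Python (the organization names and numbers here are invented for illustration):

```python
# Each record has the same fields, like a row in a database table.
organizations = [
    {"name": "City Desk Weekly", "city": "St. Petersburg", "phone": "555-0101"},
    {"name": "Harbor Times", "city": "Tampa", "phone": "555-0102"},
]

# Because every record shares the same structure, retrieval is predictable:
phone_numbers = [org["phone"] for org in organizations]
```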

What data stores do news organizations use?

  • CSV. Comma-separated value documents are a kind of plain-text data store. These files are easy to create and transmit, but they aren’t terribly well-suited for large or complex datasets. Sometimes, though, sources (government websites, for example) provide data only in this format, so it’s a necessary starting point.
  • Spreadsheets. Plain old spreadsheets can be surprisingly effective tools for capturing data and play an important role in data journalism.
  • MySQL. As an open-source relational database engine, MySQL integrates with some of the most popular content management systems, from WordPress to Drupal. In the case of WordPress, MySQL is the only database supported.
  • PostgreSQL. Postgres is another open-source, relational database management system. Its features and performance match MySQL closely. The Chicago Tribune uses Postgres for some of its projects.
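To make the CSV point concrete, here’s a short sketch of reading a small comma-separated dataset with Python’s standard csv module (the department names and numbers are invented):

```python
import csv
import io

# Suppose a government site exports contact data as comma-separated values.
raw = "name,phone\nParks Dept,555-0199\nWater Dept,555-0142\n"

# DictReader maps each row to the column names in the header line.
rows = list(csv.DictReader(io.StringIO(raw)))
phones = {row["name"]: row["phone"] for row in rows}
```

In practice you’d pass an open file to DictReader instead of an in-memory string; the parsing works the same way.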

Server-side programming languages
In the realm of newsroom coding, three languages have gained traction: Python, Ruby and PHP. Each is an object-oriented language and a good tool for making complex Web applications. Object-oriented programming emphasizes the use of classes, a kind of templating system. Classes allow code to be compartmentalized and reused, leading to speedier writing and easier maintenance, necessities when programming in the newsroom on deadline. These languages also share open-source licensing models. This makes it possible to deploy software without the need to secure rights or pay licensing fees.
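A minimal Python sketch of the class-as-template idea described above (the Story class and its fields are hypothetical, not from any newsroom codebase):

```python
class Story:
    """A class acts as a template: every Story has the same parts."""

    def __init__(self, headline, byline):
        self.headline = headline
        self.byline = byline

    def slug(self):
        # Reusable behavior lives alongside the data it operates on.
        return self.headline.lower().replace(" ", "-")

# Many objects can be stamped out from one class and reused across projects.
lead = Story("Council passes budget", "C. Frechette")
```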

Unlike some of the other technologies reviewed here, different server-side languages tend to be adopted by different development shops. In part, that’s because each language has its own quirks and strengths, along with a unique set of complementary tools. Sticking with one language streamlines development workflows.

  • Python. The Chicago Tribune, NPR and others use Python to power dynamic projects. Invented in the 1990s, Python has earned a reputation as a language that combines power and ease-of-use. It’s well-suited for a variety of tasks (on and off the Web), and it integrates easily with other technologies. Google uses Python for many of its projects.
  • Ruby. ProPublica uses Ruby for some of its projects. Ruby is, in many ways, comparable to Python, though some consider it more difficult to learn. For some problems, though, Ruby can provide more elegant solutions, and its block functionality is an often-cited advantage. Many websites run on Ruby — perhaps most notably Twitter, although the social network has relied increasingly on Java to power its infrastructure.
  • PHP. Many of the biggest websites run on PHP, including Wikipedia and Facebook. PHP is also readily available — it comes preinstalled on most Web hosting accounts, making it one of the most accessible server-side languages. PHP integrates with other popular tools, including databases like MySQL and SQL Server and content management systems like Drupal and Joomla.

Server-side frameworks
Programmers use frameworks to make server-side languages easier to use and better-suited to the problems they need to solve. Different frameworks extend different languages, and some languages benefit from a group of frameworks, each with its own strengths. Here’s what’s popular in newsrooms.

  • Django. The Chicago Tribune and The New York Times are among the news organizations that use Django to deploy Web applications with Python. Among the technologies covered here, Django is unique in that it emerged from a newsroom. This makes it a great option for building dynamic news websites. Django uses a model-view-controller, or MVC, approach to building Web applications. MVC applications separate the ways data are stored (the model), displayed (the view) and manipulated (the controller) into logical subparts that are easy to mix and match.
  • Ruby on Rails. ProPublica uses Rails to streamline Ruby programming. Like Python, Ruby is a general-purpose language. It’s useful for solving all kinds of problems. The Rails framework makes Ruby particularly well-suited for Web development. Rails also employs an MVC approach to programming.
  • WordPress. Though not strictly a framework, WordPress (the “.org” version of the software — not the hosted tool that makes blogging a breeze) can be used to make PHP programming easier and more productive. As a content management system, WordPress offers an extensive API for creating and managing pages, blog posts and other kinds of content. Yuri Victor of The Washington Post has talked about why that organization uses WordPress.
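The MVC separation that Django and Rails share can be sketched, very loosely, in plain Python. This is an illustration of the pattern only, not how either framework actually structures its code:

```python
# Model: how the data are stored (a list stands in for a database).
articles = [{"id": 1, "title": "Budget passes"}]

def find_article(article_id):
    # Look up one record by its id, the way a model query would.
    return next(a for a in articles if a["id"] == article_id)

# View: how the data are displayed.
def render(article):
    return "<h1>{}</h1>".format(article["title"])

# Controller: answers a request by tying model and view together.
def show(article_id):
    return render(find_article(article_id))
```

A real framework adds URL routing, templating and database access on top, but the division of labor is the same.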

Native mobile technologies
Some news organizations offer native mobile apps. These call on platform-specific technologies beyond the tools listed above. Objective-C is the programming language for iOS, the operating system that powers iPads and iPhones. Android devices make extensive use of the Java programming language. It’s also possible to build mobile apps using Web tools, and even so-called “hybrid” apps that are built with Web technologies but deployed as native apps. (I’ve written about native and Web apps and what journalists need to know about the difference.)

Learning more
One of the best ways to learn about how newsrooms use code to enhance their journalism is to hear first-hand from newsroom developers. Fortunately, NPR, the Chicago Tribune, ProPublica and The New York Times all blog about their efforts to innovate digital journalism.

Along with learning how these teams build their projects, you can review the fruits of their efforts on GitHub, a website programmers (and others) use to store, share and maintain their work. ProPublica, NPR and The Chicago Tribune each maintain code repositories. You can use these to see the code behind some of their projects.

Related: How journalists can learn to code — and why it’s important | What journalists need to know about the power of code


15 things journalists (and everyone) need to know about digital security

In these days of NSA snooping, SEA hacking, corporate espionage and cyber fraud, everyone should have digital security top of mind.

If one of your accounts is compromised, you (and your employer) can lose credibility, financial security can be jeopardized and reputations put at risk. And when you’re handling sensitive information from sources, contacts and clients, livelihoods — sometimes even lives — are on the line.

Many organized, well-funded groups — competitors, criminals and governments — have a vested interest in getting at your data. As digital technologies become more pervasive, protecting the security of our information will only become more important.

Here’s the problem: It’s all too easy to get lax with security, and being safe often means sacrificing some convenience. Keeping data secure takes ongoing vigilance. It’s also not always clear where vulnerabilities lie and when your data — or your identity — might be in peril.

Fortunately, there are some basic tenets you can follow and best practices you can adopt to stay safe online.

1. Most of the Internet is not, by default, secure.

Most of the protocols that make up the Internet — including HTTP (the Web), FTP (file transfers) and SMTP (email) — aren’t secure. That means data transmitted with these technologies are open for potentially anyone to see. This is, in one sense, what makes the Internet and the Web so great: open access to knowledge. But, in the case of personal or confidential information, openness does more harm than good.

The problem with online communication is the false sense of privacy we have when we, say, send an email to a friend or log in to a website. Though all we see is the end recipient with whom we’re communicating, our message is actually passing “in the clear” through any number of other computers before reaching its destination. In principle, anyone with access to those computers can monitor the communications that pass through them. We think we’re sending a sealed envelope, but we’re really mailing a postcard.

2. Encryption solves many problems.

Fortunately, some of the most common Internet protocols have secure alternatives. These options provide the same functionality, but also encrypt data before it’s transmitted and decrypt it once it’s received.

HTTPS is the most important secure protocol. This is the technology that makes it possible to transmit credit-card numbers and other sensitive data on the Web. Once confined to e-commerce, HTTPS is quickly becoming at least an option — if not a mandate — anytime you need to log in to an account. Encryption can slow down connections, but things are quickly improving on this front. If an HTTPS connection is available for a given site, you can always request it by typing https:// rather than http:// in your browser’s address bar. One good way to ensure you’re using HTTPS whenever possible is the HTTPS Everywhere browser plugin.
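The “request HTTPS when it’s available” habit amounts to a simple URL rewrite. Here’s a toy Python sketch of the idea; real tools like HTTPS Everywhere consult per-site rulesets rather than rewriting blindly, since not every server supports HTTPS:

```python
def prefer_https(url):
    """Rewrite a plain-HTTP URL to request the encrypted version instead.

    A deliberately naive illustration: a production tool would check
    whether the site actually serves HTTPS before switching.
    """
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url
```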

If you’re running a website that involves logins or users contributing any kind of sensitive information, HTTPS won’t be turned on by default — you must purchase and install an SSL certificate to secure your site. This certificate uses a third party to verify your identity as a website host. This gives visitors a chance to see not only that their communications to you are encrypted, but also that you are who you claim to be.

3. Weak passwords always compromise security.

Many security breaches begin with weak passwords. A weak password is one that’s easy to guess, either by social engineering or a brute-force attack in which many thousands of possible combinations are tried repeatedly. Studies have shown the most common passwords are also among the weakest.

Even if we don’t fortify our passwords, websites can implement certain measures to limit the effectiveness of brute-force attacks, such as limiting the number of incorrect attempts allowed in a given time period. But these measures aren’t under our control, and the possibility that someone could guess the password remains. The best approach is to start with a strong password.

Fortunately, there are clear guidelines for strengthening a password. Length is one factor — eight or more characters is ideal. Including a variety of letters, numbers and symbols, along with a mix of upper and lowercase characters, helps a lot. Avoiding correctly spelled (or commonly misspelled) words is important.

One of the best tips I’ve come across is to think not about passwords but about passphrases. With a phrase, you create a strong but relatively easy-to-remember password by stringing together four or five words, interspersing numbers, using “creative” spelling and randomly capitalizing some letters.
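To see why passphrases work, it helps to count the possibilities. A rough Python sketch follows; the helper names and the four-word, 2,048-word-dictionary assumptions are mine for illustration, not drawn from any particular tool:

```python
import math
import random

def make_passphrase(words, rng, n=4):
    """Build a passphrase by stringing together n random words.

    Hypothetical helper: a real tool would draw from a large dictionary
    and add numbers, symbols and mixed case, as described above.
    """
    return "-".join(rng.choice(words) for _ in range(n))

def entropy_bits(pool_size, picks):
    # Each independent pick from a pool of pool_size adds log2(pool_size) bits.
    return picks * math.log2(pool_size)

# Four words from a 2,048-word dictionary already yield 44 bits of entropy,
# before any creative spelling or capitalization is added.
strength = entropy_bits(2048, 4)
```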

4. No password is more important than the one for your email.

Email is a skeleton key. Someone who gets unauthorized access to your email will be quickly able to access any number of other accounts. That’s because most sites allow for password resets by clicking email-based confirmation links.

Often, these confirmation-link emails can be generated by providing the email address itself, so a would-be intruder doesn’t even need to know your account usernames to reset your passwords. All of this adds up to the need to use strong passwords first and foremost on your email accounts.

5. Using different email accounts for different purposes improves security.

Sometimes the best way to become more secure is to minimize the damage of a breach. This can be achieved by using one email account for most public communications and another email account that’s kept private for more-sensitive communications. By limiting who knows about your private email, you can reduce its vulnerability. And if your “public” address is compromised, the damage is contained.

It’s also common to use a disposable or “spam” email account when you need an email address to confirm a registration but don’t otherwise want to give up any personal information. These services — Mailinator is one popular option — make it possible to create an email address on the fly and receive messages at the address without logging in. Messages are automatically deleted after a few hours.

6. For the best security, use a password manager and memorize just one “super password.”

Strong passwords are essential to digital security. Using different passwords for different accounts is even better. But combine these approaches and you have a recipe for a lot of headaches. Who can remember lots of different complicated passwords? And who wouldn’t be tempted to fall back on simple passwords, or to reuse one strong password everywhere, thus weakening their security?

An alternative to these approaches: Use a password manager. This tool automatically generates very strong passwords. It then encrypts those passwords, along with information about the sites they belong to, preventing access unless a master password is supplied.

This one password — which must be strong and memorized — unlocks the vault and provides access to all the credentials stored therein. KeePass is one good password-management option. It’s free, open source (see below) and cross-platform.
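Under the hood, a password manager’s vault typically rests on key derivation: the master password is stretched into an encryption key that unlocks everything else. A minimal sketch using Python’s standard library; the parameters are illustrative, not KeePass’s actual settings:

```python
import hashlib

def derive_key(master_password, salt, rounds=200_000):
    """Derive an encryption key from a master password with PBKDF2.

    The stored credentials are encrypted with a key that only the
    (strong, memorized) master password can reproduce. The many rounds
    deliberately slow down brute-force guessing.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, rounds)

vault_key = derive_key("correct-horse-Battery7-staple", b"per-vault-salt")
```

The same password and salt always yield the same key; a different password yields a completely different one, so there’s nothing useful to steal without the master password.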

7. All things being equal, open source is more secure.

Open-source tools and platforms have a well-deserved reputation for being secure. Paradoxically, source code that’s open is more secure, for the simple reason that anyone can know exactly what the software does, how it manages data and where potential vulnerabilities might lie. Closed-source, proprietary software, on the other hand, is a black box.

Potential vulnerabilities are hard to know and potentially significant security or privacy compromises are hidden. For open-source projects with many contributors, there’s the added benefit of lots of people working to fill security holes as they’re discovered.

8. Open-source software is great, but must be kept up-to-date.

Open-source software can benefit from quick updates when security exploits are identified, but in most cases you don’t get those benefits automatically. That makes it essential to install updates as they’re released. Most software will tag updates with security implications as “critical” ones.

Until those updates are applied, the site remains vulnerable. Update procedures vary from one platform to another and, in some cases, it’s advisable to back up data before running an update.

9. Storing and communicating data necessarily compromise security.

Encryption is a great tool. Good passwords can go a long way toward keeping our data safe. But once you decide something needs to be digitized and (especially) transmitted to someone else, you create the possibility for a breach. For these reasons, it’s important to consider whether something has to be digitized in the first place. Would it be possible, for example, to meet someone face-to-face instead?

10. Security breaches can happen in the moment, or months or even years later.

Digital communications, while fleeting on one hand, are also permanent. Once you publish something on the Web, it’s best to treat the communication as more or less indelible.

It’s true that messages come and go, never to be seen again, but much of what you put online is stored in one form or another. Even if the initial transmission isn’t compromised, you’re counting on whoever’s storing your information to take appropriate measures to protect it, especially when it comes to encryption.

11. Anonymization can solve certain security concerns.

Encryption isn’t the only way to improve security (and privacy). Anonymization — a process by which your actions aren’t necessarily encrypted but can’t be traced back to you — is another tool in your arsenal.

Anonymization involves technologies such as proxy servers and VPNs. Tor, one popular anonymization tool, uses a combination of encryption and relays to obfuscate data and send it on a roundabout path before it reaches its destination. This makes the communication both anonymous and secure. Web-based services such as Anonymouse.org make it possible to use a proxy server without needing to install any software.

12. Open WiFi networks can be a problem.

In general, HTTPS is a big security boost, even for communications over insecure wireless networks. But risks still remain. On an unencrypted WiFi network, anyone connected can view anyone else’s traffic. Information encrypted over HTTPS won’t be visible, but some websites implement HTTPS incompletely, protecting login pages (and thus usernames and passwords) but not other details.

Unfortunately, in some cases it’s possible to compromise security with another piece of information — a session cookie. This cookie is a unique identifier that tells a website who you are and “proves” you have authorized access.

If someone else figures out what your cookie is, they can “hijack” your session. In other words, they’re suddenly logged in as if they were you. And even without hijacking a session, an intruder could eavesdrop on private communications if you’re not using HTTPS or the site you’re on isn’t implementing it completely.

13. Multi-factor authentication improves security.

This is a fancy way of saying security gets better when you have to prove yourself two or more ways to gain access to a restricted system. One well-known implementation of multi-factor authentication is Google’s two-step verification process.

This demands that you supply not only a valid password but also a valid verification code — one that’s transmitted to a phone number provided during the initial registration process. Security gets a big boost with two-step verification because a valid username and password combination is no longer enough — you also have to be in physical possession of your phone.
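Verification codes of this kind are usually one-time passwords computed from a secret shared at setup. Here’s a compact sketch of the counter-based variant (HOTP, defined in RFC 4226) using only Python’s standard library; time-based codes like Google’s work the same way, with the counter derived from the current 30-second interval:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """Compute a counter-based one-time password per RFC 4226."""
    # HMAC-SHA1 over the counter, encoded as 8 big-endian bytes.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both your phone and the server hold the secret and agree on the counter, the server can check a code without it ever being reusable.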

14. Protecting your devices from unauthorized physical access is essential.

All these efforts to secure your online activities may be for naught if your computer isn’t physically secure. If you stay signed in to accounts in your browsers and apps, is your device itself password-protected? If the answer is no, anyone who picks it up needs very little time to access sensitive data or even lock you out of your own accounts.

All the major operating systems provide a means of password-protecting access, and these are well worth looking into. It can be annoying to supply a password every time you wake your device up, but good security means forgoing some convenience.

15. Encrypted email and OTR chat provide the best security for ongoing sensitive communications.

Unfortunately, the benefits of HTTPS encryption don’t extend to communications that unfold online but off the Web — such as email and instant messaging. And sending an email from an encrypted Web page doesn’t mean the message itself will be encrypted — only that your connection to the remote server is secure.

This encryption is important — it means your username and password are protected — but it doesn’t protect your correspondence once it leaves the server and travels to (and arrives at) its destination.

When the content of your messages is sensitive, switching to secure channels such as PGP-encrypted email and OTR (off-the-record) messaging is a good idea.

Unfortunately, these options can be more cumbersome to set up than some of the other techniques reviewed here, and both parties need to take steps to secure the communication. When you need to correspond with someone in a way that’s truly private, though, it’s well worth the extra effort to establish a secure line of communication.


How learning to program can make you a better writer

Perhaps you’ve considered learning how to program. The benefits are enticing. You could create complex visualizations, process reams of public data from your local municipality or even create a Carlos Danger Name Generator.

As reporting and storytelling continue to converge with technology, the case for journalists learning to program gets stronger. For example, it might help in better understanding what’s possible in the digital realm, whether it’s contributing to a data visualization, interactive narrative or a new form of digital storytelling. Picking up some programming jargon can improve communication on multidisciplinary teams, and learning specific tools and processes can help reporters, visual journalists and others do more in their newsrooms.

On the other hand, some argue that pushing journalists to program erodes the value of specialization and puts unfair demands on folks who already are juggling many responsibilities. And the point gets made that involving more people with coding leads to more bad code. It’s easy to learn just enough to be dangerous, but becoming adept at producing stable, secure code takes time.

I suspect there’s merit to each of these points. Certainly, learning to program isn’t for everyone — and perhaps shouldn’t be a priority for most. But some stand to gain. If you’re going to learn to program, the key is to get the most value not only out of the new skills you learn, but also from the learning process itself. That’s why it’s worthwhile to focus on how learning to program might enhance your existing repertoire, especially your writing.

The link between programming and writing

If the idea that learning to code can make you a better writer — especially a better journalistic writer — seems far-fetched, consider this: Programming is, in the end, a kind of communication. True, programming often means relaying information to computers, but that really means we’re communicating with someone via a system they’ve already built.

For programmers to be successful in this endeavor, their code needs to meet several criteria. It needs to be concise, precise and descriptive. It needs good organization and sequencing, even when there are many possible ways to bring structure. And it needs to be free of grammatical errors — even problems with style.

Sound familiar? On closer look, good code and good writing have more in common than you might think. That’s why programming — whether it’s building large-scale systems or hacking on little side projects — provides new ways to practice the same skills that contribute to writing a news brief or feature story.

Learning to code can help you:

1. Become a better self-editor. Small discrepancies in code can mean the difference between a happy website and a crashed server, with one misplaced comma potentially toppling a site. With stakes this high, it’s no surprise programmers are strong self-editors. They’re used to double- and triple-checking their work for problems and continuously testing their code. And being vigilant isn’t just about averting immediate calamity. Some of the most pernicious programming bugaboos arise when there’s a hidden problem in code not discovered until weeks or months later. Poorly editing code today means software bugs tomorrow.

2. Organize your thoughts. Programming takes strong organizational skills. A complete program can be thought of as a set of subparts — systems or routines, to use the technical vernacular. The quality of code is as much about how these components fit together as it is about any one line. This means programmers spend a lot of time thinking about how their scripts are organized — where their programs begin, how they progress and, yes, even how they end. They may not use inverted pyramids or nut grafs, but they do employ templates to impose structure and write more efficiently.

3. Write precisely. There’s little room for ambiguity when programming. Computers need instructions, and aren’t terribly good at interpreting context or intuiting our underlying meaning. Vague, indirect, approximate code isn’t going to work. Programming successfully means expressing exactly what we want to do, in no uncertain terms.

4. Write concisely. Programmers like to write the least amount of code possible. There’s a good reason to be terse: More code means more potential points of failure. When it comes to software design, the best systems are the ones just complicated enough to fulfill their purpose, but no more. Lots of code makes maintenance a challenge and can often result in slower performance. Writing short code can be a challenge — just like writing short — but there are big benefits to doing so.
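A small Python illustration of the same behavior written two ways; both work, but the concise version states its intent once and leaves fewer places for a bug to hide:

```python
# Verbose: more lines, more moving parts, more potential points of failure.
def evens_verbose(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n)
    return result

# Concise: the same behavior, expressed directly as a list comprehension.
def evens(numbers):
    return [n for n in numbers if n % 2 == 0]
```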

5. Explain complex ideas simply. Programming isn’t just about giving computers instructions. It also involves providing fellow programmers with a road map they can use to decode what’s been written. These guidelines take the form of comments — short bursts of explanatory text sprinkled between arrays, objects and integers. Good comments take time to craft. They should provide relevant context without repeating what’s evident from reading the code itself.

6. Follow a style guide. Every programming language has its own subgroups that work together to build software. If you’re a PHP programmer, the Drupal and WordPress communities are two places you may find yourself roaming (these content management systems are both based on the PHP server-side scripting language).

To make collaboration easier, these groups have developed their own style guides to promote consistency in how code is written. If you want to make software that gets adopted by the community — and has a shot at being used — it’s important to know and follow these guidelines. Code will run just fine even if it’s not in tune with a community’s standards. But coding style guides ensure the kind of uniformity that makes it easier for everyone else to understand what’s been written.

To code or not to code

Should coding become a part of your repertoire? It depends. What’s your employer looking for? Are you interested in doing more digital work? Do you need to code to bring your next project idea to life?

Here’s another way to look at it: Some of the same skills that make you a good writer can come in handy when you’re hacking code. At a time when you’re expected to wear more and more hats, it’s important to find the connection points between seemingly disparate skills. The challenge is in seeing those points of contact. Perhaps, though, writing a personal narrative and a web-scraper in Perl aren’t as disconnected as they might seem.

Coding is like writing in one other way. There isn’t a magic moment when you “become” a programmer. The act of programming makes you a programmer, just as the act of writing makes you a writer. Once you get started, it’s all about improving. That’s a process that takes time — years, even — with likely setbacks along the way. But the best writers and programmers have something else in common: They never stop learning.


What journalists need to know about responsive design: tips, takeaways & best practices

Phones and tablets have created new ways for audiences to reach our work, but they’ve also made it much harder to design a website that works for all readers. A site that looks great on a laptop might be illegible on a phone, while a sleek design on a tablet might look simplistic on a desktop monitor.

To make sure everyone has a good experience, we might be tempted to build different sites — one for phones, another for tablets, and a third for laptop and desktop users.

That might have been a workable solution when there were just a few mobile-device sizes to account for, but what about the current media landscape with oversized phones, shrunken tablets and everything in between? Creating different sites for each possible configuration is a daunting prospect, especially when new form factors seem to pop up every day.

This is where responsive design comes in. It’s a simple solution to a big problem — a way to account for different devices without requiring different sites. Instead, responsive design extends a core Web principle — the separation of design from content and structure — to give us a way to make a site appear differently depending on the size of the device it’s accessed on.

The promise of responsive design

Responsive design benefits how a site is built and maintained. Because a responsive site is still just one site (with several faces), only one codebase and one publishing process are needed. Content doesn’t need to be replicated to another system — manually or otherwise — to make it accessible to a second set of users. And design changes can be made sitewide or for a specific device size, providing lots of flexibility in maintaining a site over time. 

Responsive design offers a second advantage. Since it emphasizes reusing visual elements and retaining content and functionality on different versions of a site, it encourages a consistent experience across devices. A reader who starts a story on her phone and finishes it on her tablet will get a fluid experience that feels like browsing the same site, but in ways that cater to the screen being used.

USA Today has implemented responsive techniques to develop desktop and tablet versions of its site, but its phone presence calls on a separate mobile site — with differences not just in design but in content.

News organizations and other content publishers are in a particularly good position to use responsive design, at least when it comes to their articles and news stories. Things get a bit trickier with multimedia or interactivity, as we’ll see later.

But the main obstacle to enjoying a news story on a phone is one of design: Is the font size appropriate? Are there margins or padding separating content from the edge of the screen? Has the number of columns in the layout been greatly reduced? Are navigation options easy to see and tap? Responsive design doesn’t address these issues in one fell swoop, but it does give us a chance to make sure we have an answer lined up, while concentrating the bulk of our site-building efforts into one product.

How CSS and media queries factor in

Under the hood, responsive design involves a few different Web technologies working in concert, but the most important is CSS and, specifically, a tool called a media query. 

Remember, CSS — cascading style sheets — is the technology used to design a website. CSS can be used to change typography and color, and is also the tool designers use to change the layout of a site, including the placement and width of elements. So it’s not surprising that it’s at the heart of responsive design.

CSS media queries are clever ways to change the styles that take effect depending on the device used to access the content. In other words, media queries can make CSS conditional on whether a visitor’s using a smartphone, tablet or laptop.

All kinds of properties, such as color depth and aspect ratio, can be “queried.” But the most important feature to consider when implementing a responsive design (indeed, often the only feature considered) is width. Based on this one property, we can decide what version of a design makes the most sense to serve. The interesting thing about the width property is that it refers not to the size of the device but to the width of the browser window. That means the mobile version of a responsive site can be previewed on a laptop or desktop simply by shrinking the browser’s width.

The Toronto Standard spans its navigation across two columns in the phone version of its design and makes the links larger (and easier to tap).

In plain English, a typical media query looks something like this:

If the width of the device accessing this site is 480 pixels or less, load all the styles that follow.

Another media query in the same stylesheet might look like this:

If the width of the device accessing this site is greater than 480 pixels but no more than 960 pixels, load all the styles that follow.

Here’s what the actual code for the first example looks like:

@media screen and (max-width: 480px) {
  /* styles for viewports 480 pixels wide or narrower go here */
}

The key to a media query is to define width boundaries and then load up styles when a device meets the parameters. Boundaries can be defined one way (less than 480 pixels, for example) or two (between 480 and 960 pixels).
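
The second, two-sided example can be written by chaining conditions — a sketch, with the lower bound nudged to 481px so the two queries don’t overlap:

```css
/* Applies only when the viewport is between 481px and 960px wide */
@media screen and (min-width: 481px) and (max-width: 960px) {
  /* styles for this middle range go here */
}
```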

A boundary is also known as a breakpoint — a condition under which the styles on a site will change. If a site has one breakpoint, it sports two designs: one on either side of the break.

The power of media queries — and thus responsive design — lies in their flexibility. Once a breakpoint has been defined, any valid CSS can follow. This can result in big changes in how a site appears, even though the markup doesn’t change and a large portion of the CSS stays the same.

The Boston Globe design moves from three columns, then two and finally one as the width decreases.

The simplest way to think about a responsive website is that it gets narrower as the viewport — the width of the Web browser — shrinks. Though it’s true the overall width will vary with a responsive site, other factors might also change:

  • The position of elements. A site with three columns on a laptop may switch to two columns on a tablet. The content in the third column, rather than disappearing, will reposition itself below the remaining two columns.
  • The width of elements. Columns may become narrower, and images and videos may shrink.
  • Font sizes. A headline might appear in a smaller font size, or even a different typeface.
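
The first of these changes can be sketched with a float-based layout (the class names are hypothetical):

```css
/* Wide screens: three columns side by side */
.column { float: left; width: 33.33%; }

/* Tablets: two columns across, with the third dropping below them */
@media screen and (max-width: 768px) {
  .column       { width: 50%; }
  .column.third { float: none; clear: both; width: 100%; }
}
```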

These differences — and more — are possible because designers can control the full range of properties accounted for in the CSS specification when they employ responsive design.

Making things fluid

In the early days of the Web, fluid designs were prevalent. These layouts would fill the browser window, whether it was full-screen on a huge monitor or narrow on a tiny one. As designs became more sophisticated, fluid layouts fell out of favor: the differences between a fluid layout at its widest and narrowest possible configurations were simply too great to create predictable results.

By limiting just how much the width of a site changes before it “resets” to a new configuration, responsive design has created a renaissance for fluid layouts. It’s now possible to have the best of both worlds: layouts that expand or contract to fill the precise dimensions of the screens they’re viewed on while offering structures fundamentally suited to the general screen size.

Best of all, fluid layouts are an option when employing responsive design, but not a requirement. To make a layout fluid, designers define the width of the elements on a page in terms of percentages rather than absolute values (such as pixels).
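
For example, a hypothetical two-column layout becomes fluid when its widths are declared as percentages:

```css
/* Fixed: the layout is always 960px wide, regardless of the screen */
.main    { width: 640px; }
.sidebar { width: 320px; }

/* Fluid: the same columns stretch or shrink with the browser window */
.main    { width: 66.66%; }
.sidebar { width: 33.33%; }
```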

Quartz incorporates a navigation link in the upper-right corner with a dropdown menu, a common design pattern for phone layouts.

Best practices to keep in mind

Responsive design has gained enough traction that we’re starting to see best practices emerge. Here are some points to consider if you’re looking to adopt a responsive design for your publication.

  • Use commonly accepted breakpoints. There’s no need to guess at what dimensions are best to use. Here’s a commonly used chart to get you started:
    • 320px and lower – portrait phones
    • 321px to 480px – landscape phones
    • 481px to 768px – portrait tablets
    • 769px to 940px – landscape tablets
    • 941px to 1200px – laptop/small desktop
    • 1200px and higher – large desktop/TV


  • Don’t design everything at once. A responsive design with six breakpoints is quite ambitious. It’s better to start more modestly — you can always design for additional breakpoints later. As a starting point, you might focus on just desktop (940px and higher), tablet (480px to 940px) and phone (480px and lower) layouts.
  • Start big. Designing the largest version of a site or page first is usually best because it’s easiest. Difficult design choices must be made within the constraints of tablet and phone screen sizes.
  • Don’t throw anything out. As you move to narrower designs, it’s tempting to discard some of the elements that don’t fit. Avoid this temptation. One of the goals with responsive design is to keep the mobile Web a first-class experience, not a watered-down version of your “real” site.
  • Focus on a single column for phones. Single-column layouts should dominate designs for phones. Switching to two columns is possible on occasion, but most of your content will need to flow linearly from top to bottom. That makes ordering especially important since browsing on a phone will likely require lots of swiping to get to the bottom of the page. 
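
That modest three-layout starting point maps to a stylesheet skeleton along these lines:

```css
/* Phone layouts: 480px and narrower */
@media screen and (max-width: 480px) {
}

/* Tablet layouts: between 481px and 940px */
@media screen and (min-width: 481px) and (max-width: 940px) {
}

/* Desktop layouts: 941px and wider */
@media screen and (min-width: 941px) {
}
```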

For more on these and related best practices, take a look at this terrific post by Tito Bottitta, one of the folks who worked on The Boston Globe’s responsive site.

Downsides to consider

Responsive design is a good way to deal with the increasingly varied devices audiences use to reach content. But it’s not without its costs.

Most notably, responsive design doesn’t automatically tailor a site to different devices — it merely offers the opportunity to do so. That means an investment must be made both in designing and deploying each site version: a site with two breakpoints must be designed three times, for example. Of course, many design elements can — and should — recur from one version to the next. Typography, color, iconography and other design pillars should largely be consistent. But grids, hierarchies and clickable areas will likely change.

Performance is another consideration. Some responsive designs supplement CSS with JavaScript, a related Web technology. JavaScript can add advanced effects, but it can also slow things down, especially on older devices. Likewise, responsive design won’t fix underlying problems with a site’s markup. If the code’s bloated and slow, responsive design won’t speed it up.

Content isn’t the only consideration when approaching a responsive design. Different layouts mean different ad inventories — the banner that fits perfectly on your widescreen layout will get squished or clipped on a phone.

Site analytics are likely to get more complicated with a responsive design, too.

It’s also important to remember that responsive design is a set of techniques for implementing a mobile-friendly Web strategy. It won’t help answer questions about whether you should also invest in one or more apps or whether those apps should be native or Web-based.

Newsweek makes extensive use of fluid techniques, expanding images to fill even the widest monitors.

Making sure responsive design is right for you

A decision to pursue responsive design means you’re taking your mobile Web presence seriously. That may seem like a no-brainer, but mobile isn’t just about the Web, and you may still have a sizeable audience that’s just interested in getting your content on desktops or laptops.

Since responsive design will require an investment, here are some things to make sure before you get too far into it: 

  • You want to reach an audience across devices with your Web presence. Responsive design is all about the Web. If you’re pursuing a different strategy — one that hinges on native apps, for example — responsive design should take a back seat.
  • You want to deliver the same content across platforms. Responsive design is great for customizing the presentation of your site. But it’s not the right tool if you need to deliver unique content or functionality.
  • You’re starting from scratch, or your existing infrastructure benefits from “clean” markup. It can be challenging to make responsive design work on a big legacy site, especially when there isn’t a clear-cut separation between the structure and design of the site.
  • You’re ready to invest more in design. Responsive design makes design more important, and requires a bigger investment to make that design work. (Though there’s a potentially bigger payoff.)

Taking the next steps

Like all parts of the Web, the technologies undergirding responsive design are in flux. Even as the CSS3 specification gains traction, proposals are under review for its successor, CSS4.

Some of the issues now under review include how complex structures such as tables and forms are handled, and how the resolution of images and video can be adjusted depending on the device.

In the meantime, the techniques needed to bring a polished responsive design to fruition are well developed, especially for article-based content. And one of the best ways to get started with responsive design is by using a framework — a collection of pre-built code that takes a lot of the heavy lifting out of the equation, helping you stay focused on your content and how to best present it.


What journalists need to know about the difference between Web apps and native apps

Facebook’s recent unveiling of Home, a software suite for Android phones (and soon tablets), offered more evidence that apps rule the mobile world.

Just a few years ago, usage of apps lagged Web browsing within that world. But we now spend more than 80 percent of our mobile time with apps, according to Flurry Analytics, comScore and NetMarketShare data.

That means news publishers need to prioritize app development when crafting their mobile strategies, as Tom Rosenstiel noted in a recent Poynter.org article summarizing comScore research. But when it comes to developing those apps, publishers have at least two options:

1. Native apps run alongside the browser. They’re built with tools specific to the device’s platform (usually Android or iOS), give a publisher prominent placement on a user’s home screen, and benefit from a raft of sophisticated features.

2. Web apps run within the browser. They’re built with a collection of advanced Web technologies — but, like native apps, emphasize utility over content. Though lacking the power of their native counterparts, Web apps can be equally capable for users and may be a more cost-effective alternative for publishers.

Both kinds of apps provide ways to help news consumers solve problems. But they offer different paths to those solutions, both in terms of the resources needed to create them and the channels available for distributing them.

Web apps

A Web app runs in the browser. Technically it’s a Web page, but in practice it looks and works like an app. It’s designed to allow users to accomplish something.

Web apps rely on the same technological tools that power the rest of the Web, and make use of some of the newest and most-powerful capabilities of those tools. HTML5 represents and structures content in a Web app, CSS3 provides a design for the app, and JavaScript adds functionality and can serve as a bridge to the device’s hardware.

Combining these tools allows for exciting possibilities, including geolocation, multi-touch, video and audio, device orientation detection, and offline storage. Not long ago, these capabilities were exclusive to native apps. But now the Web is catching up.
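
A common pattern in Web apps is to detect these capabilities before using them, so the app can degrade gracefully on older browsers. A minimal sketch — the helper name is hypothetical, but checking for the feature itself (rather than sniffing the browser) is the standard approach:

```javascript
// Reports which HTML5-era capabilities a given window object supports.
// Passing the window in (instead of using a global) keeps the helper testable.
function detectFeatures(win) {
  return {
    // Geolocation API: present on the navigator object when supported
    geolocation: !!(win.navigator && win.navigator.geolocation),
    // Offline storage via the Web Storage API
    offlineStorage: !!win.localStorage,
    // Device orientation events
    orientation: typeof win.DeviceOrientationEvent !== "undefined",
    // Touch input
    touch: "ontouchstart" in win
  };
}
```

In a browser you would call `detectFeatures(window)` and branch on the result — for example, falling back to a manually entered location when `geolocation` is false.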

Web advantages

Web apps offer three key advantages:

1. Because the Web’s technologies and underlying standards are open, changing the tools that underpin it can be slow and messy. The end result, though, is a platform that works consistently regardless of what device is used. That lets publishers build something once and know it will work on many devices, and means only one product has to be maintained and updated.

2. Many newsrooms have Web developers on staff, but not app builders. Web apps give such newsrooms a chance to capitalize on their developers’ existing skills – HTML, JavaScript and more – to build engaging, feature-based experiences.

3. Web apps are easy to integrate with content elsewhere on the Web, including other Web apps, sites and the APIs provided by various Web-based services.

Native apps

Native apps are built with a mix of platform-specific technologies. The Android and iOS ecosystems command the lion’s share of the marketplace – about 90 percent of all mobile devices (phone and tablet) run one of these two platforms. Windows Phone is a distant third with about 3 percent market share, and a hodgepodge of other platforms round things out.

Most app developers are concerned only with Android and iOS, which simplifies things somewhat for publishers. Unfortunately, these platforms run on completely different technologies.

Android programmers mainly build apps with Java, making occasional use of Python. Underlying code libraries – the building blocks of the Android platform – rely on a combination of C and C++. Each is a separate language and, in the mobile world, all are specific to the Android platform.

iOS developers, on the other hand, use the Objective-C programming language, the Cocoa Touch framework and Xcode, a collection of programming tools.

Given the range of technologies involved in development and the clear division between the Android and iOS camps, native apps present a workflow challenge for publishers: While it may be possible to design an app once, the full development cycle has to be completed at least twice, and even then 10 percent of potential mobile users are left out.

Developing native apps in sequence is a common workaround, but publishers must decide which platform comes first. Android is the clear frontrunner with about 52 percent of the overall mobile market, but iOS users spend much more money on paid apps. So publishers must decide what goal to prioritize: a larger audience or more revenue.

Native advantages

Five key advantages set native apps apart from their Web counterparts:

1. They can deliver a better user experience. Web apps always include elements from the browser, such as the address bar and other related tools. On small screens, that’s precious space that could be devoted to app-specific controls. Certain user interactions, such as swiping a page to move or change content, can also be more fluid and consistent in a native app.

2. They integrate more closely with the device’s hardware. This may change as Web technologies progress and become more capable, but for now native apps have the upper hand. Accessing the likes of Bluetooth, USB, telephony and GPS remains challenging — if not impossible — with Web technologies, while other hardware (including cameras and videos) can only be accessed in a limited way.

3. Native apps allow for close integration with the operating system and other apps. This presents interesting possibilities in which one app can “talk” to another, exchanging information or working in tandem to perform a task for the user.

4. Web apps stop running when the browser closes, but native apps can run continuously, even when they’re not active. This allows user alerts and notifications.

5. Native apps have the potential to run faster than Web apps. This is especially true for graphics-intensive apps such as games.

Marketplace integration

Distribution is a critical distinction between native and Web apps. Web apps are accessed through a browser, and users often find them while surfing a mobile site or searching. But native apps must be accessed through a platform-specific store, typically Apple’s App Store or Google Play. Native apps can be free, but if paid, Apple and Google take a 30 percent cut of the sale price.

It’s possible to charge for a Web app, though systems for doing so need to be established. Users who do buy Web apps tend to do so through a subscription model rather than paying a one-time download fee.

Hybrid approaches

Native and Web apps represent two clear-cut choices for publishers, but variations and blended versions of these models do exist.

Publishers can marry the advantages of native app technology with the Web’s workflow benefits by developing a Web app in a native wrapper. That means building a standard Web app, then inserting it into a shell that allows it to function like a native app.

While this doesn’t enhance the capabilities or speed of the app, it does break it out of the browser, resulting in a dedicated icon on the user’s home screen, and makes it a candidate for purchase via an app store.

PhoneGap is one of the most popular ways to put a native wrapper around a Web app. It provides a means of creating wrappers for several platforms at once, and largely automates the process. Once wrapped in native code, the Web app is ready for distribution through platform-specific stores.

If you’re committed to developing with HTML but still want to harness the power and speed of a native app, several platforms provide hope. Titanium Studio is one option: Rather than putting a Web app in a native wrapper, Titanium Studio takes source code written with Web technologies and translates it into actual native code. It can take HTML and JavaScript, for example, and convert it to the corresponding Objective-C code. MoSync is another option for converting HTML into native code.

Summary

Native and Web technologies continue to advance, though the Web presently enjoys a faster rate of development. That’s led to a shrinking gap in the capabilities of Web and native apps and renewed interest in the promise of Web technologies. Some people, including usability researcher Jakob Nielsen, have predicted that Web apps will become the clear favorite before long.

Still, native technologies are also advancing and offer compelling advantages for publishers that have adequate resources. And in some cases they can deliver functions that are still beyond Web-based approaches.

In the end, publishers need to be clear about what they want to accomplish with their app before deciding on Web tools or going native. If an app’s features don’t demand the extra capabilities or speed of a native app, a Web app may be the best bet. It will work on almost every mobile device, use development skills that may already exist in the newsroom, and offer a wider range of distribution options.


What journalists need to know about Web design

Fifty milliseconds. That’s how quickly visitors can form strong, long-lasting impressions about your news or information website. But they aren’t sizing up the quality of your content or the sophistication of your code. They’re making nearly instantaneous, mostly subconscious judgments about how your work has been designed.

Those assessments can lead to very conscious — and consequential — conclusions about the merits of your page, product or platform. Bad graphic design can damage perceptions about your credibility. It can make your content harder to understand and render your work less appealing.

The visual Web

The Web is a visual medium. It didn’t start that way, back when HTML truly was all about marking up text. Over the years, though, the options for shaping the appearance of a Web page have grown more plentiful and sophisticated.

Now, of course, Web producers have a wide range of design tools at their disposal. Color, typography, imagery, positioning and many more design elements can be tuned to exacting detail. Emerging technologies like CSS3 and HTML5 make it easy to implement these visual ideas.

In the right hands, an array of design choices can produce impressive results. Misapplied, they can create a visual cacophony.

Thinking about design

To create a strong visual expression of your work, nothing beats working with a top-notch designer. Sometimes, though, you need to figure things out on your own, whether you’re bootstrapping your business or freelancing a multimedia story.

And, it’s always helpful to know the language digital artisans use to think about their craft, whether it’s floats and functions or points and pixels.

Good design skills may seem innate, even mystical. But the best designers are well-versed in a core set of widely-applicable principles. They’ve internalized the techniques prescribed by these ideas, applying them methodically and appropriately.

Fortunately for the rest of us, good Web design builds on the same principles that underlie design in general. These are tenets you can study and apply. Many are rooted in psychology and perception — the way we attach meaning to color, search for patterns, crave balance, identify outliers and make sense of the world.

Three principles of Web design

Graphic design isn’t about making something “look pretty”; it’s about making it more easily understood. Good design is about communication. On news sites, it’s about helping readers identify the newest content, differentiate blogs from news reports, and spot the biggest story of the day. It’s about helping readers scan through lots of content to find the most important stories and the items that interest them most.

With these goals in mind, here are three principles of Web design that should guide your efforts:

1. Favor the simple over the complex. Whether you call it minimalist or simply a clean design, striving for simplicity is one of the best ways to ensure good results. A simple design is easier to implement, and it’s easier to interpret.

Simplicity is about limiting your options. Instead of using colors haphazardly, pick a scheme of just a few colors that work well together, and stick to them. Instead of five font families, pick two. In designing any particular element, start with the most basic implementation and see if it’s enough. Work from there.

2. Be consistent. Let’s say you come up with a certain design treatment for the headlines on your site that uses a particular font, color and size. Use that approach consistently across your product, varying it only with specific intent. Consistency is especially important from page to page or screen to screen. Make sure the design conventions you establish carry through your work. If your home page uses one color scheme, section landing pages should too, unless you’re specifically looking to brand those areas with color.

3. Express your voice. Every design choice you make tells your readers something about your product, your company, yourself.

Seven ways to design better Web content

Let’s take a look at some tactics we can use to develop a good design with the principles above as a foundation.

1. Use a grid. In Web design, a grid is an invisible set of equal-width columns along which the elements of a page are aligned. The gutters, or spaces between the columns, are also equal. Most grids utilize 12, 16 or 24 columns, and this transparent skeleton provides structure and alignment for a design. Grids appear in print design, too, and they’re a great way to help guide viewers as they scan through the contents of a page, whether it’s in print or on the screen. By designing content that spans multiple columns, designers can exercise lots of flexibility within seemingly rigid constraints.
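
The idea can be sketched in CSS — a simplified 12-column grid with hypothetical class names, where padding supplies the gutters:

```css
/* Each column floats left and carries half a gutter on each side */
.column  { float: left; padding: 0 10px; }

/* Widths are expressed as spans of the 12 underlying columns */
.span-4  { width: 33.33%; }  /* spans 4 of 12 columns */
.span-8  { width: 66.66%; }  /* spans 8 of 12 columns */
.span-12 { width: 100%; }    /* spans the full row */
```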

Technology site The Verge adheres to a grid in its layout. The main structure rests along a three-column design, though certain elements break out of that mold. The use of a grid is evident from the way content lines up. In this example, the “Chromebook pixel review” headline lines up perfectly under the “Google on a non-profit budget” headline above it.

The Verge aligns its content to a grid.

2. Repeat elements. Developing a design element — and then repeating it — is a great way to establish continuity and organization.

On a news site, repetition can be used to group similar kinds of content. The Christian Science Monitor, for example, uses different treatments for blog and news story entries on its homepage. Blog entries get smaller thumbnails and kickers. News items get bigger headlines and leads. The treatment for a given kind of content, though, is repeated every time it appears.

The Christian Science Monitor repeats many design elements, including the entries in these blog and news story lists.

3. Use white space. Sometimes, leaving space in a design is just as important as filling it with something. This white space helps establish the relationship between elements, directing viewers’ attention. Generous use of white space — elements of a design that aren’t, well, designed — is one of the best ways to pursue simplicity. NPR makes extensive use of white space in its design, especially around key elements like headlines. All this room helps direct visitors to the most important content on the page.

4. Establish a hierarchy. By varying the size, color and positioning of elements, designers can establish a hierarchy for a section or page. This helps readers prioritize what they’re seeing, providing a kind of roadmap they can use to skim content.

The Boston Globe makes extensive use of hierarchy to establish importance. In this example, the first story on this homepage list gets priority not just with its position, but with a larger headline font.

The Boston Globe gives prominence to its most important “latest news entry” by enlarging its headline.

5. Use texture and depth. These are ways to make a design more interesting. They often help reinforce a voice or brand.

In Web design, texture most often appears in the background, for example, behind the content in a footer or header section. It involves variation in the colors or shades of color used and can create the impression that something is polished, rustic, crumpled, etc. Depth creates the illusion that some elements on a page are stacked above or below others. It can be created with drop shadows and by varying the opacity of elements.
The Las Vegas Sun creates a subtle textured effect in its footer with the graphic of a sun. Drawn from the paper’s logo, the effect adds visual interest to the footer and reinforces the Sun’s brand.
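
As a small illustration (the selectors and image file are hypothetical), texture and depth might be created like this:

```css
/* Texture: a repeating background image layered over a base color */
.footer { background: #e8e4da url("paper-texture.png") repeat; }

/* Depth: a drop shadow lifts the card above the page,
   while reduced opacity pushes inactive items back */
.card     { box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3); }
.inactive { opacity: 0.5; }
```
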

6. Convey meaning with color. To maximize usability, color shouldn’t be used as the sole means to communicate meaning, but it’s an effective reinforcement. Adapting a tradition from its print edition, USA Today makes extensive use of color to “tag” its content: blue for news, purple for entertainment, gray for opinion and so on.

In this headline grid, color appears behind the tag and fills the squares when they’re hovered over.

USA Today reinforces its content categories with color. Blue, for example, always means “news.”

7. Establish importance with contrast. Establishing a pattern — then breaking it with something that stands out — draws visitors’ attention to a certain element on a page. Contrast can be created in many ways – through color, typography, size, shape and more. The Schenectady, NY-based Daily Gazette uses red in its otherwise blue-toned color scheme to punctuate timestamps on its most recently published stories.

Form and Function

Web design is a rich topic, and I’ve only scratched the surface here. You might go deeper with self-directed training from Poynter’s NewsU on specific topics like typography, user interface design and color in news design.

Just remember: At its best, Web design isn’t about putting a “skin” on a finished concept. “Form and function should be one, joined in a spiritual union,” Frank Lloyd Wright said. In terms of creating on the Web, design should be considered alongside content, even developed in tandem with it.

And, for news sites and apps, design serves a special purpose: We know many consumers like to skim, and well-designed content is one of the best ways to make content scanning easy.


What journalists need to know about interviewing for video

Interviews are a cornerstone of video storytelling because they provide emotion, content and structure, especially in documentary-style stories with little or no narration. Good interviews make for good videos.

Fortunately, most of what you’ve learned about interviewing applies to video. Open-ended questions produce revealing answers. Good follow-up questions create deeper insights. Long and double-barreled questions confuse subjects, or give them an easy out. And good listening can lead to answers with more detail and depth.

As in print, the video interview is a key reporting tool. But it’s also an essential part of the presentation. Footage of subjects discussing their lives, work and expertise is the engine that drives a video story forward.

That’s why it’s important to consider a range of factors when interviewing for video. Good questions aren’t enough, no matter how compelling the answers.

The success of your stories will hinge largely on the quality of video and audio you capture for your interviews.

Thinking in stages

Video interviews are easier to tackle when they’re approached in phases.

  • First, think about the prep work needed to make the interview work. This will entail a combination of upfront research and reporting, notes on the questions you want to ask, and logistics planning. Will you conduct the interview indoors and/or out? What time of day? What equipment will you need?
  • Next, once on location, set up the interview. You’ll need to determine where you and your subject will be positioned, and then set up equipment based on that choice. Consider the backdrop, lighting sources and potentially problematic background noise.
  • Lastly, start recording and conduct the interview.

Here are some tips for navigating through these steps.

Prep work: What could go wrong?

When preparing for an interview, brainstorm things that could go wrong. What could happen in the middle of the shoot? Who might walk behind or, worse, in front of the subject? Might the lighting change in the middle of the interview? Could something about the sound change?

Most of all, can anything be done to minimize the chances of a mishap? You can’t prepare for or control every possible hitch that comes your way. But you can take some steps to safeguard against some of the biggest threats to the quality of your interviews.

Prepping for an interview also entails gathering equipment. Plan to bring backups whenever possible, especially for accessories like batteries and memory cards. While you’re at it, make sure the batteries you think are charged really are, and double check how much space you have on your recording media. It’s important to know how long you’ll be able to record before entering the field.

Setup: Interview audio

Audio is easy to overlook but crucial in most video productions. And the most important audio you’ll capture is for your interviews. You want your interview clips to sound good; if audiences struggle to hear your interviewees, your story’s toast.

A few basic steps will set the stage for good results. First, use a lavalier mic, shotgun mic or portable recorder. The key here is positioning the microphone close to the subject. Second, monitor your volume levels. That means listening to what you’re recording with a good pair of headphones (not earbuds) and, usually, tracking a visual indicator of levels on your recording device.

You’re looking for a happy medium. If a signal’s too weak, lots of background noise will be audible when you raise your levels in post-production. If it’s too strong, it will “peak,” creating an unpleasant distortion that’s difficult to fix. For digital recording, -12 dB or a little under is the best level to aim for.
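As a rough check on those numbers: a digital signal’s level in dBFS is just 20 × log10 of its peak relative to full scale, so a signal peaking at about a quarter of full scale sits right around -12 dB. Here’s an illustrative Python sketch (not part of any recording device’s software):

```python
import math

def peak_dbfs(samples):
    """Return the peak level of normalized samples (-1.0..1.0) in dBFS.
    0 dBFS is the loudest level digital audio can represent."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # silence
    return 20 * math.log10(peak)

# One second of a 440 Hz tone at 44.1 kHz, peaking at 25% of full scale.
samples = [0.25 * math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
level = peak_dbfs(samples)
print(round(level, 1))  # roughly -12.0
```

The takeaway: the “happy medium” leaves plenty of headroom, since amplitude drops off logarithmically rather than linearly.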

A good practice when you’re checking levels is to start recording. It’s always better to record sooner than later. During the soundcheck, you can ask subjects to pronounce and spell their first and last name. That’ll come in handy when you find yourself needing to reference them in your narration. Remember that even common names can have unusual pronunciations.

Setup: Shot composition

Shots can be composed in many ways. For interviews, there’s a tried-and-true formula that’s best to stick to, especially when starting out. It involves six factors:

  • Use medium shots. The standard interview shot puts a little bit of headroom above the subject and extends down to the shirt pocket. Closer and wider shots can be effective, but they may not work in every scenario. The closer you go, the more intimate the shot becomes. You’re taking your audience from arm’s length to within inches of your subjects. Wide shots create distance, but can also help establish context for where the interview’s taking place.
  • Follow the rule of thirds, a photographic principle that applies to video, too.
    This shot follows the rule of thirds. The subject’s eyes are positioned along the top horizontal line. His body lines up with the right vertical line. (PBS Arts)

    In a nutshell, the rule of thirds tells us to divide any frame into nine segments via equally-spaced horizontal and vertical lines. The regions that appear along these lines, and especially at the junction points between them, carry the most visual potency. Applying this rule to a video interview, you want to position the eye lines of your subjects along the top horizontal line of the frame. The subject’s face should rest along either the left or right vertical line, but not in the center.

  • Pay attention to the background. What’s happening behind your subject? Does it add to your shot or detract?
    In this shot, depth of field is used to keep focus on the subject, despite a very busy (and interesting) background. The background is blurred, but we still have a sense of place. (Kornhaber Brown)

    It’s important to guard against several pitfalls here.
    If your subject is positioned in front of a wall, make sure there’s space. Too little space can create a cramped, imprisoned feeling.

    You also want to make sure there’s not too much happening in the background. Too much action — people walking, cars zooming — can distract viewers from the subject. One workaround to this problem involves another photographic technique — shallow depth of field. In any shot, the depth of field is a measurement of how much of the shot is in focus. When the depth is shallow, just a few feet (or less) is in focus. The result? Your subject is crisp and in focus, but the background is blurred.

  • Pay close attention to lighting. Viewers want to be able to see who’s talking.
    This interview shot, a close-up, benefits from the effective use of natural lighting. The window is positioned in front of the subject and to the side, creating a sense of dimensionality. (Pat Shannahan)

    Shadows can become big distractions, and too little light can have a big impact on the overall image quality. Remember this foundational idea: Always position the key light (the brightest light source) in front of the subject, favoring one side slightly over the other.

    If you brought a light with you, you’ll want to position it about 45 degrees off your subject’s line of sight. If you’re relying on available light, think about how you can use windows as your key lights. Put the window behind you and shoot toward the subject.

  • Use a tripod (or another device) to stabilize your shot. Hand-held shots have a place in video storytelling, but they don’t tend to work well for interviews. If you’re working on your own, interviewing while holding a camera is a challenging feat best reserved for quick-hit conversations. Most other times, a tripod will help you establish your composition. Don’t have a tripod? Get creative. A shelf or stack of books on a desk can work in a pinch.
  • Angle the subject slightly away from the camera. In most cases, subjects should be facing the interviewer just off camera. Subjects positioned along the left vertical line should be angled toward their left. Facing the subject, the interviewer should be positioned to the right of the camera. (When subjects are on the right vertical line, move to the other side of the camera and have subjects look to their right.)
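The rule-of-thirds framing in the checklist above is easy to express numerically. This is a hypothetical helper (the function name and frame size are my own, for illustration) that returns the third lines and the four junction points for a frame:

```python
def rule_of_thirds(width, height):
    """Return the third lines and their four junction ("power") points
    for a frame of the given pixel dimensions."""
    verticals = [width // 3, 2 * width // 3]      # left and right vertical lines
    horizontals = [height // 3, 2 * height // 3]  # top and bottom horizontal lines
    junctions = [(x, y) for x in verticals for y in horizontals]
    return verticals, horizontals, junctions

# For a 1080p frame, a subject's eye line sits near y = 360 (the top line),
# with the face along x = 640 (left line) or x = 1280 (right line).
v, h, points = rule_of_thirds(1920, 1080)
print(v, h)  # [640, 1280] [360, 720]
```

Placing the subject’s eyes at one of the top two junction points, rather than dead center, is the practical upshot of the rule.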

During the Interview: Focus and listen

In video, how an interview unfolds is just as important as the content of the interview. Audiences benefit from all the nonverbal cues that are lost when you transpose interviews to the written word. But you want to be careful not to direct subjects’ actions. Interviewees need space to be themselves and share their stories. Nonetheless, you can push things in the right direction. Here’s how.

  • Set the tone. The enthusiasm you project will rub off on your subject. Think about the pacing and energy you want to convey.
  • Interview people in their environments. Interviewing people where they live, work or play helps them feel more comfortable in front of the camera. It also makes for a more interesting environment and creates more opportunities for good b-roll, or supplemental footage — something that’s always worth capturing before or after the interview.
  • Be careful about making noise. Usually, interview responses are presented in isolation, without the interviewer’s question. This makes it easier to weave together different interview clips and tie things together with narration recorded toward the end of the production process. The challenge is to make sure your audio doesn’t get mingled with the interviewee’s. Talking while they’re in the middle of a response, starting a new question while they’re wrapping up, even murmuring the natural “hmm hmm” can complicate the editing process. That’s why it’s important to give subjects “space” when they’re responding. Don’t be afraid to pause before asking your next question: Those extra couple of seconds can prove invaluable in the editing bay. And, who knows, maybe your subject will add to the response, revealing an insight you might otherwise have missed.

Here’s a tip: When recording an interview, put your audio on a separate track. This will isolate your sound, making it easier to filter your voice from the presentation. An alternative is to not mic yourself at all, and that can work, but you limit your options later if it turns out you want your voice in the production.

Conducting a good video interview can be challenging, especially for the solo video producer. If you’re working on your own, you have to wear two hats: producer and reporter. By planning ahead, then focusing on audio and shot composition, you can ensure that the presentation and substance of your interviews complement each other.


What journalists need to know about digital video editing

Digital camcorders, DSLRs and digital audio recorders have revolutionized video production. It’s now possible to get higher quality footage for less money than ever before. But, advances in hardware don’t tell the whole story. Equally important have been improvements in video software — the tools used to edit, process and publish video.

At the center of this software ecosystem is the digital editing program. This is the software that helps transform footage into stories. It’s the tool that structures disparate clips into logical sequences. And it’s the best way to polish footage and pull together many assets — video, images, voice overs, on-location audio, titles, animations and more.

Why should journalists learn about editing video? After all, video editing is about technology and production techniques. There is a technical side to video editing, but there’s also an opportunity to extend storytelling deeper into the production process. Many of the decisions made in the editing phase have a big impact on stories. Pacing, structure and sequencing are just a few of the factors that go into it. Several tools have gained prominence over the years:
  • Avid Media Composer has long been a top choice for professional video editors, and it’s found in most TV and film production houses.
  • Final Cut Pro is Apple’s flagship video editing program, and it’s widely used in newspaper and online newsrooms with editing stations.
  • Meanwhile, Adobe has been hard at work advancing Premiere Pro, a cross-platform editing tool that’s quickly gaining ground on Final Cut.

Add to the mix a slew of other desktop options, numerous editors for mobile devices, and even a few Web-based editors, and it’s clear there’s no shortage of choices in how to edit video.

Fortunately, regardless of the particular platform you find yourself working with, a core set of concepts, elements and processes appears in most video editing programs.

If you understand — conceptually — how these pieces work together to provide extensive control over how video projects are assembled, learning how to implement one feature or another is a relatively straightforward task.

Non-linear & nondestructive: Video editing freedom

Two concepts underpin digital video editing.

First, video editing software is non-linear. This is the ability to jump from any place in a sequence to any other place, forward or backward. Along the way, it’s possible to cut and insert footage, changing the order of the shots and scenes in a story ad nauseam.

With linear editing, edits are made sequentially. It’s impractical to go backward and redo an edit once it’s made, and it’s challenging to preview how things are progressing until all edits are complete.

The ability to move fluidly from one point in an edit to another provides incredible flexibility to the editor. It makes for a more nimble workflow, one in which fewer compromises have to be made in how a story is structured.

Second, and equally important, video editing software is nondestructive. This means changes when editing are reversible. This applies to many kinds of changes but, most importantly, when we cut raw video into smaller, more focused segments.

Cutting down video is a process of refinement. Excess is trimmed away, beginning with wide-swath cuts, then more precise cuts as things progress. But what if too much has been taken away? No problem. Nondestructive editing means any footage cut can be restored. Just like non-linear editing, nondestructive editing means freedom and flexibility.

Non-linear, nondestructive editing has been a mainstay in broadcast newsrooms for several decades now. My Poynter colleague Al Tompkins sums up the impact it’s had on producing video this way:

Non-linear allowed us to re-edit or change stories with a click of a mouse. Once the story was edited, it could be uploaded to a server for nearly instant playback. Many users could access the video at once. Since the editing was all digital, generation after generation, dub after dub was the same quality as the first. Multi-channel audio editing is a breeze, and it was just as easy to add transitions and effects.

Linear, tape-based editing didn’t need to be ingested or rendered, so it saved journalists precious time when on deadline. But Tompkins points to some steep drawbacks:

If, after we finished editing a story, a producer decided it was too long and needed to be cut down, it would require time consuming re-editing to shorten or change the piece. And once a story was edited, somebody would have to run the tape down to the video playback department. Every day the newsroom looked like that famous scene from Broadcast News where some poor soul would have to sprint down stairs to make the deadline.

Linear editing made techniques like slow-motion and dissolve transitions much more difficult. And every generation of editing would decrease the video quality.

Now, with low-cost digital editing software widely available, we all can benefit from the power of non-linear and nondestructive tools. Let’s take a look at the essential elements and steps involved in digital video editing.

Essential elements in video editing software

With this bedrock accounted for, it’s worth reviewing specific elements common to just about every video editing program.

Most video editors are composed of four regions. They go by different names, depending on the particular program, but, conceptually, they serve the same purposes.

First, we have an area where files are imported and organized. In Premiere, this is the Project area. In older versions of Final Cut, it’s called the Browser, and in Final Cut Pro X it’s the Event Library. When clips are imported into an editor, they show up here. And folders — often called bins — can be created to organize our files. All kinds of media — videos, photos, audio — can be captured and organized in this area.

The “Event Library” in Final Cut Pro X.

Next, there’s a region where media contained in the browser can be previewed. This can be thought of as a built-in media player. In Premiere, it’s the Source. In Final Cut, it’s called the Viewer.

The “Source” in Adobe Premiere Pro.

Below the viewer is an important area called the timeline. This is where video projects are really assembled.

The timeline occupies two dimensions. Left to right represents, naturally, time. Elements placed to the right occur later in time than those to the left. When a clip is dragged from the browser, or project, onto the timeline, its width represents its length. Longer clips are wider, extending further to the right of the timeline.

The second dimension of the timeline represents visual depth. Elements placed higher on the timeline appear above those placed lower. This is achieved through the use of tracks; each step up or down is a different track. Complex projects sometimes use many tracks, and some tracks are designed to hold video content while others hold audio.
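The two-dimensional timeline model can be sketched as a simple data structure: left-to-right is time, and visual stacking is handled with tracks. The class, track and clip names below are my own, for illustration; real editors are far more elaborate:

```python
class Clip:
    def __init__(self, name, start, duration):
        self.name = name
        self.start = start        # seconds from the start of the sequence
        self.duration = duration  # seconds; maps to the clip's width on the timeline

    @property
    def end(self):
        return self.start + self.duration

class Timeline:
    def __init__(self):
        self.tracks = {}          # track name -> list of clips

    def add(self, track, clip):
        self.tracks.setdefault(track, []).append(clip)

    def duration(self):
        """The sequence ends when the last clip on any track ends."""
        clips = [c for track in self.tracks.values() for c in track]
        return max((c.end for c in clips), default=0)

timeline = Timeline()
timeline.add("V1", Clip("interview", 0, 20))
timeline.add("V2", Clip("b-roll", 5, 8))   # a higher track appears over V1
timeline.add("A1", Clip("nat-sound", 0, 25))
print(timeline.duration())  # 25
```

Notice how the b-roll clip on V2 overlaps the interview on V1 in time while sitting above it in depth, which is exactly the cutaway pattern described above.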

The “Timeline” in Final Cut Pro 7.

One final timeline-related element worth noting is the playhead. This is a visual marker that denotes the current position of playback within the timeline. When a project is previewed, the playhead sweeps across the timeline, progressing to the right and marking the passage of time.

This takes us to the final major area — the output space. It’s called the Canvas in Final Cut and the Program in Premiere. Like the Preview area, this is a video player. Unlike the preview, though, it doesn’t show just one clip, but rather the fully-edited, sequenced content from the timeline. This is the view that reveals what a project’s going to look like when it’s exported.

Common steps in the video editing process

Video editing is a creative act. Still, most editing involves working through a well-established, predictable set of steps. The first step is the importing and ingesting phase.

In general, we talk about ingesting tape and importing files. More and more video is file-based so, most likely, importing is what’s happening in this step. “Importing” is a little misleading, as files aren’t actually embedded within the editor. Instead, a link is made between the video project and the file being imported. This means it’s important to be careful when moving or removing imported video files. When this happens, the editing software will lose track of them, and links to the media will need to be reestablished.

After importing, it’s time to make basic, rough edits to footage. This may entail chopping several long clips into shorter ones, creating more narrowly-defined “in” and “out” points (the beginnings and ends of clips), and deleting imported clips that don’t serve the project.

Sequencing comes next. This involves dragging clips into the timeline where an order can be established.

There are many ways to create a video sequence, but one of the most popular ways is to match video against audio. This method assumes we have a decent audio track that video can be synced to.

Trim editing is often the next step. This involves making minor changes to clips, sometimes in isolation (a “slip edit,” for example, which shifts a clip’s in and out points together without changing its duration or position), but often alongside adjacent clips (a “roll edit,” for example, which involves changing, in equal proportion, one clip’s out point and another’s in point).

With the structure set, it’s time to work through some additional post-production steps. These involve adding transitions between clips and various kinds of video filters, which change the visual quality of one or more clips. When and how filters and transitions are applied can have a significant impact on the tone and texture of a piece.

Titles are often added around this time. These include various kinds of on-screen text — the “lower thirds” that appear when interviewees are on screen, title screens introducing videos or sections, and credit rolls at the end.

One of the final steps involves correcting and grading color. Put simply, grading involves enhancing color and correcting involves fixing color imperfections.

Working with color entails getting skin tones looking natural, making sure colors match across shots and ensuring the overall color is “balanced,” which involves making sure blacks are truly black, whites are truly white, and so on.
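One classic, crude approach to that kind of balancing is the “gray world” assumption: scale each channel so the frame averages out to neutral gray, which pulls a warm or cool cast toward neutral. This Python sketch (the pixel values are invented for illustration, and real grading tools are far more sophisticated) shows the idea on a list of RGB tuples:

```python
def gray_world_balance(pixels):
    """Crude white balance: scale each channel so the average color of the
    frame is neutral gray, removing an overall color cast."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # per-channel mean
    gray = sum(avg) / 3                                      # target neutral level
    gains = [gray / a if a else 1.0 for a in avg]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]

# A warm (orange-cast) two-pixel "frame" is pulled toward neutral:
warm = [(200, 150, 100), (180, 130, 90)]
balanced = gray_world_balance(warm)
print(balanced)  # [(149, 152, 149), (134, 132, 134)]
```

After balancing, the red, green and blue values in each pixel sit close together, which is what “whites are truly white” means in practice.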

The final step is to export the video, which involves selecting a codec and container. Codecs are used to compress video, making otherwise large files suitable for downloading and streaming. And containers package up video and audio streams and, often, additional “metadata,” while also putting a familiar extension (for example, .mov, .mp4) on the resulting file.

Editing brings form to video stories

Editing video is really about structuring stories. It’s about establishing a beginning, middle and end, deciding how scenes will transition into each other, establishing a rhythm, and building momentum.

Knowing how to trim a clip or sequence a series of shots is important in all forms of video storytelling. In video journalism, these techniques can help us advance stories and enhance their journalistic purpose.


How wireframing can help journalists plan & communicate ideas

Among the technology-based skills worth journalists’ consideration, wireframing merits a closer look.

Wireframes are rudimentary visual depictions of ideas. They can be created with specialized software or nothing more than a pen and the back of a napkin. Web pages, mobile app screens and information graphics are all suitable wireframe subjects.

Despite their visual nature, though, wireframes aren’t about design, at least not completely. They don’t typically convey information about color or typography. And they don’t specify how gradients, textures, shadows and other effects should be implemented.

Instead, wireframes express how elements should be positioned relative to one another. They convey the importance of elements, oftentimes establishing a sense of hierarchy. Usually, wireframes convey basic information about white space and the proportions of a page or screen. They’re more about visual communication than design.

In practice, wireframes usually take the form of a set of boxes and other simple shapes, each representing a region in a larger layout. Wireframes convey what goes where and, in the case of interactive projects, such as mobile news apps, wireframes provide an initial indication of how users might interact with the software. (You can see some examples of wireframes here.)

How wireframes can help you

Wireframes are an effective bridge between content and structure on one hand and design on the other. A reporter could create a wireframe to express a visual idea to a designer who then creates an interactive infographic. An entrepreneur could make one to show a freelancer how she wants her new community blog to look.

Wireframes help us test ideas by simulating, in a very crude way, how a final product might work. These previews can provide valuable insight into what works and needs improvement before we make a significant investment of time building something. And, they help us communicate our intentions to teammates, facilitating feedback at the earliest stages of a project.

In a recent email interview, I had a chance to ask journalist and Web developer Andrea Jezovit how she uses wireframing in her work. Jezovit is a content editor on the Creative Solutions team at MSN UK and a former News21 fellow. She studied user experience and interaction design at City University London.

Casey Frechette: What kinds of projects do you wireframe? Can you give some specific examples of times you’ve wireframed news products?

Andrea Jezovit

Andrea Jezovit: Interactives, websites, blogs, Web apps. I use wireframes a lot for different projects in my current role, but not much of it’s news-related — lots of website layouts for commercial campaigns, social media apps and interactive tools like quizzes … I even wireframed a game.

News products I’ve created wireframes for have included my interactive infographics produced as part of Berkeley News21 last summer. They were especially tricky infographics to design, as I was trying to combine a clear narrative with interactive elements that could be explored. Doing lots of wireframing really helped.

I also created wireframes to help me figure out the layout and navigation of a couple of (now defunct) journalism blogs, and I created really detailed wireframes for this interactive ‘deals calendar’ produced for an MSN UK campaign, which an outside developer built for us — not quite a news tool at the moment but it can be re-skinned for use as an events calendar, which is sort of newsy.

One of Jezovit’s wireframes. You can see more examples on her personal website.

What suggestions do you have for journalists who don’t have backgrounds in visual communication but are interested in wireframing?

Jezovit: I’d say you don’t need to have a strong background in visual communication to be good at wireframing! Wireframing is a different skill from visual design — it’s more about thinking about the structure of a site or an interface and making sure all the important information and functionality is in there, while (most importantly) making sure that it will make sense to the user.

I think most journalists and editors are probably used to organizing information and creating experiences to meet readers’ needs, for example through coming up with a good story structure or coming up with sections for a magazine — so I think we’re natural wireframers once we get started.

Even if you’re clueless about design, all you need to do is sketch out a rough wireframe showing what information you want to display most prominently, what other information the user should be able to access and what sections you want to include in the navigation, etc. Then the designer can work with you to improve everything, and offer her or his own suggestions.

What tools do you use when you wireframe? Do you tend to use pen and paper or software?

Jezovit: I use both. I usually use pen and paper when sketching out a rough initial concept; it’s a quick way of getting your ideas down, and often it’s enough for a designer to work from if it’s for something simple like the layout of a website.

If I’m working on something more complicated, though, like an interactive with lots of different screens, I’ll do another set of more detailed wireframes using software like Visio or OmniGraffle, or sometimes a web-based tool like Cacoo. That way, the wireframes turn out cleaner and easier for the developer who will be building the project to understand, and also these tools have built-in shapes and icons and copy/paste functionality that makes it easier to wireframe lots of pages quickly.

How has wireframing helped your journalism?

Jezovit: It’s helped me learn that when you’re trying to get information across, simplicity is best. It’s also allowed me to tell stories better (at least I hope) when it comes to my interactive infographics, because through wireframing I’ve taken the time to put myself in the user’s shoes, think about what they’ll see when they’re interacting with the infographic, and make sure they’re getting the best possible experience that will make them want to explore the information and learn something. Not sure if my projects have always been successful at this in the end, but doing lots of wireframes/interface designs has at least made me hyper-conscious about exactly what people who view my infographics are getting out of them.

Are there any other tips or techniques you’d recommend to journalists interested in wireframing?

Jezovit: I’d say wireframing isn’t the first step to designing a good website or interactive. Before you start on a wireframe, it’s good to think about who your user is and what goals they’re going to have when they visit your site or look at your interactive. (E.g., are they just going to be skimming for an interesting story, or digging deep for a specific piece of information?) Think about the whole user journey they’ll be experiencing as they travel around your site, and sketch that out. Then sketch out a wireframe that takes this into account. Also:

  • Be sure to get others’ opinions and feedback when you’re wireframing. This is really helpful, as it can be really hard to come up with a good interface design, and a design that you think works might end up being confusing for others. If this gets caught at the wireframing stage, you can fix it before it gets built. Definitely be sure to involve any designers or developers you’re working with, as they’ll be able to provide good feedback and will know what’s feasible.
  • If you’re working on something really complicated and you want to make absolutely sure it’s right before any building starts, you can create detailed wireframes and then test them on users using paper prototyping, which is a technique some UX professionals use. There are lots of fun videos on YouTube that show how this can be done.
  • Learning how to code and build things yourself really helps. Once I started building things, I became so much more aware of all the detail that’s required when planning a website or interactive.
  • Look for examples of great sites, blogs, apps and interactives to use as inspiration when you’re planning your project. If you see a great piece of functionality on another site (for example, a really well-designed, easy-to-use navigation system), you might be able to borrow from it. (I borrowed a few interface design ideas from some really well-done interactives by the New York Times when I was sketching out one of my interactive infographics.)

Taking the next step

If you’re feeling inspired to try a wireframe or two on one of your next projects, there are plenty of resources you can turn to — whether you’re looking for tools, tips or information about the wireframing process.

A final word of advice when wireframing is to work quickly and with an openness to revision. Wireframes should help bring focus to what you want to build, but that doesn’t mean the focus can’t change.


How journalists can improve video stories with shot sequences

Good video stories need strong individual shots. Great video stories present those shots in a sequence that complements the parts and creates a much greater whole.

Shooting and editing effective sequences are essential video storytelling skills. Shot sequences can enhance cohesion, help communicate more information in less time and create an overall sense of purpose.

In video storytelling, a sequence is simply a series of shots that works together to show an action unfolding. Shot sequences are ubiquitous — most shots in most stories are part of a larger sequence. That’s because they’re a foundational storytelling tool in a medium that’s not only visual but also depicts the passage of time.

Benefits of shot sequences

Shot sequences offer three main benefits:

Shot sequences promote continuity. When audiences see a disparate collection of images that don’t seem to fit together, they often experience a sense of disorientation. They’re pushed away from, rather than pulled in to, the story. Sequences are the remedy. A good shot sequence creates a seamless progression. Everything seems to build as the sequence unfolds. When it ends, you’re ready for the next sequence to begin. This clarifies what you’re watching. And it creates an impression that something continuous is unfolding before you. Sequencing is so important that it’s the bedrock of an entire school of filmmaking — continuity editing — that influences not only video stories but just about every Hollywood movie released in the past 100 years.

Shot sequences compress time. A good shot sequence conveys the full meaning of an action or event without requiring real-time observation. That means you can express more ideas in less time, with fewer extraneous details.

Shot sequences add professional polish. A few simple steps can make amateur video footage a little more professional. You can take steadier shots (possibly by employing a tripod). You can minimize zooms, pans and other camera movements. And, perhaps most of all, you can shoot in sequences. A good shot sequence conveys purpose and direction. This sense of intention immediately bolsters the professionalism of a piece.

Key ingredients in sequences

Shooting sequences starts with identifying specific actions – discrete events that unfold visually and can be captured by a camera. The key to spotting actions is to get specific. Rather than “cooking dinner,” think “dicing a potato.” Rather than “delivering mail,” think “putting a particular letter in a particular mailbox.”

The most challenging part of identifying actions is figuring out what’s going to happen in advance so you’re ready to record when the moment comes. Video storyteller Colin Mulvany calls this “anticipating the action.” This is a skill that can seem intuitive, but it often emerges from thoughtful planning upfront.

Variety is another key consideration. Good sequences result from a diverse mix of angles, distances from the subject and compositions (how subjects are positioned in a shot). It’s especially important to use variety in back-to-back shots.

Together, specificity, anticipation and variety lead to strong sequences.

Sequence patterns

No two shot sequences are exactly the same. But most sequences can be grouped into just a few different formats. Journalism educator Andrew Lih calls these patterns.

The great thing about a pattern is its flexibility. You learn a pattern once, then focus on all the different ways you can employ it.

Several sequencing patterns have become popular in video storytelling:

The two-shot sequence. Even two shots can create a sequence when they capture the same subject from different angles. Imagine a wide shot — one taken at some distance — of a person sitting on a park bench, followed by a much closer shot that reveals details of the person’s face and shows she’s reading a magazine. Or, you could start with a close-up of a subject’s face and proceed to a close-up of his hands. Two-shot sequences are simply back-to-back shots of the same thing from different angles and/or distances.

The three-shot sequence. Three-shot sequences usually employ a combination of wide (long), medium and close-up shots to depict the same subject from three distances. Often, different angles are used for each shot. A good three-shot sequence to practice is to start wide (at the greatest distance) and move progressively closer to the subject.

The following three-shot sequence from a Time video on extreme couponers shows this popular technique in action.

The first shot in this sequence is the widest. It establishes the place and highlights the subject. (Jacob Templin/Time)

The second shot in the sequence shows the subject reaching for a bottle of salad dressing. The camera has moved closer, and the angle has changed. (Jacob Templin/Time)
The third (and final) shot in the sequence is the closest. The camera has moved in on the shelf, where the subject is about to take a bottle, and the angle has again shifted. (Jacob Templin/Time)

The five-shot sequence. This sequence, popularized by video journalist Michael Rosenblum, also relies on wide, medium and close-up shots, while introducing the idea of perspective. In a five-shot sequence, the first shot is a close-up of a subject’s hands — a pianist, for example, tickling the ivories. The next shot is a close-up of the subject’s face. For the third shot, move back from the action and capture a medium shot of the subject. Next, move to an “over-the-shoulder” shot. Standing just behind the subject, shoot downward toward the action — hands on the keyboard, for example — showing what’s happening from the subject’s point of view.

For the final shot, think of the most creative composition possible. You might use an unusual angle, shooting from the ground or high above the subject’s head, or you might move far away and capture an extreme wide-angle shot. You could capture the pianist from the other end of the room or stage, for example.

Once these patterns are mastered, you can mix and match them in lots of creative ways to create more complex sequences. Multiple sequences make scenes. And long-form video stories — even feature-length films — are ultimately made from lots of short shot sequences arranged back-to-back to build complex, multi-part scenes.

Sequencing pitfalls

There are a few common problems when shooting sequences. When recording live events, there’s often only one chance to capture a shot, requiring quick decision-making to get a sequence right.

For video journalists, this can lead to ethical concerns. In the interest of achieving continuity, it can be tempting to control how a scene unfolds. In an extreme case, this might involve telling a subject exactly what to do and then recording that moment as if it unfolded naturally. This kind of staging is a clear-cut ethical breach, but it can emerge in more subtle ways, too. What if a subject were asked to repeat something that she had just done, so a desired shot could be captured? Or, asked to slow down, so the videographer has more time to set up for each shot?

In breaking news situations, when something’s happening in real time and you have just one opportunity to capture a moment, shooting in sequence becomes the most difficult. And the temptation to “direct” a subject is most acute.

Fortunately, there are a few strategies journalists can employ to avoid intervening in a scene in any way. (I’ll cover these in just a bit.)

Another common pitfall comes from the challenge of ensuring continuity. When sequencing shots, the amount of variety matters: too much variation can disorient the viewer, and too little can result in jump cuts — two shots so similar that the subject appears to move, or jump, unnaturally between them.

A few guidelines can help on both counts. To ensure enough variation, a good rule of thumb is to change the camera angle by at least 30 degrees between back-to-back shots.

To ensure there isn’t too much variation, it’s helpful to picture an imaginary line that runs through a subject from left to right and to keep the camera on one side of that line. This is known as the “line of action.” It’s also called the 180-degree rule.
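For journalists who also code, the two guidelines above are concrete enough to express in a few lines. The following is a minimal sketch — a hypothetical helper, not something from the article or any real library — that checks a planned shot list for back-to-back angle changes under 30 degrees (jump-cut risk) and for camera positions that cross the 180-degree line of action:

```python
def check_sequence(shots, min_angle=30.0):
    """Each shot is a dict with 'angle': the camera's bearing around the
    subject in degrees (0-360). The line of action is assumed to run
    along the 0-180 axis, so staying on one side means every bearing
    falls in the same half-circle. Returns a list of warnings."""
    warnings = []
    for prev, curr in zip(shots, shots[1:]):
        # Smallest rotation between the two camera bearings.
        diff = abs(curr["angle"] - prev["angle"]) % 360
        diff = min(diff, 360 - diff)
        if diff < min_angle:
            warnings.append(
                f"Possible jump cut: only {diff:.0f} degrees between shots")
        # Crossing from one half-circle to the other breaks the 180-degree rule.
        if (prev["angle"] % 360 < 180) != (curr["angle"] % 360 < 180):
            warnings.append("Crossed the 180-degree line of action")
    return warnings

# A three-shot plan: the first cut barely changes angle, the second
# cut swings the camera to the far side of the subject.
plan = [{"angle": 10}, {"angle": 15}, {"angle": 200}]
for warning in check_sequence(plan):
    print(warning)
```

This is only a planning aid, of course — on location, the judgment calls described above still belong to the videographer, not a script.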

A final pitfall involves neglecting the importance of reactions. This usually manifests as a lack of shots of subjects’ faces. Actions are important, but reactions are often even more interesting — and informative.

Shot sequence tips

Like all parts of video storytelling, shot sequences take practice. Over time, shooting effective sequences becomes an intuitive process. But a few techniques can help you get there faster:

Take some time to observe the subject and space. As you observe, think about shot possibilities — angles, distances, compositions. Consider from where you could shoot and how the background would change accordingly. Note how and where the subject’s moving and plan a few ways you might track that movement.

Shoot for the edit. In general, “shooting for the edit” is a key video storytelling practice. This means making editing decisions while you’re shooting — especially by capturing footage you think there’s a good chance you’ll use, in roughly the order you plan to use it. In terms of sequences, this means considering the order of your shots from the beginning and making final sequencing decisions later in the editing process, when you can always opt to drop some of your original shots.

Favor close-ups. Closer shots provide more detail, and detail is what makes a video story interesting. As a bonus, close-ups tend to be easier to sequence, especially when alternating between close-ups of different subjects. About half your shots should be close-ups.

The following shots, from a New York Times video on Chinese novelist Murong Xuecun, show how to use close-ups effectively.

A medium shot opens this sequence. We see the subject working at a computer. (Jonah Kessel/The New York Times)
The next shot is a close-up of the subject’s hands typing. Although the shot crosses the line of action, it provides good detail, and the absence of a second character minimizes any viewer disorientation. (Jonah Kessel/The New York Times)
Another medium shot is next in the sequence. This medium shot has slightly tighter framing, and the close-up second shot prevents what would be an awkward transition (and jump cut) between shots one and three. (Jonah Kessel/The New York Times)

The sequence ends with another close-up and another detail. (Jonah Kessel/The New York Times)

Use different starting shots to create different effects. Start with a wide shot to establish location and a sense of place. Start with a close-up to provide a specific detail, while leaving some questions in viewers’ minds about exactly what’s happening. These questions can create anticipation and build momentum for your story. To better anticipate action, keep asking yourself two questions: What’s happening now and what’s going to happen next?

Finally, look for repeating actions. It’s easier to build a shot sequence around something that repeats or a process that takes some time to complete. Always keep an eye out for these opportunities on location.

Correction: An earlier version of this story incorrectly stated that the second shot featured in the New York Times video doesn’t cross the line of action.
