A Guide to Closed Captioning for Web, Mobile, and Connected TV

Captioning is coming to Internet video. Legislation goes into effect in the US during 2012 and 2013 that mandates closed captioning on certain categories of online content – see Zencoder's post for details on the legislation. But even apart from this legislation, closed captioning is a good thing for accessibility and usability, and is yet another milestone as Internet video marches towards maturity.

Unfortunately, closed captioning is not a single technology or “feature” of video that can be “turned on”. There are a number of formats, standards, and approaches, ranging from good to bad to ugly. Closed captioning is kind of a mess, just like the rest of digital video, and is especially challenging for multiscreen publishers.

How Closed Captions Work
The first thing to understand is how closed captions are delivered, stored, and read. There are two main approaches today:

  • Embedded within a video: CEA-608, CEA-708, DVB-T, DVB-S, WST. These caption formats are written directly in a video file, either as a data track or embedded into the video stream itself. Broadcast television uses this approach, as does iOS.

  • Stored as a separate file: DFXP, SAMI, SMPTE-TT, TTML, EBU-TT (XML), WebVTT, SRT (text), SCC, EBU-STL (binary). These formats pass caption information to a player alongside a video, rather than embedding it in the video itself. This approach is usually used for browser-based video playback (Flash, HTML5).

What about subtitles? Are they the same thing as closed captions? It turns out that there are three main differences:
  • Goals: Closed captions are an accessibility feature, making video available to the hard of hearing, and may include cues about who is speaking or about what sounds are happening: e.g. “There is a knock at the door”. Subtitles are an internationalization feature, making video available to people who don’t understand the spoken language. In other words, you would use captions to watch a video on mute, and you would use subtitles to watch a video in a language that you don’t understand. (Note that this terminological distinction holds in North America, but much of the world does not distinguish between closed captions and subtitles.)

  • Storage: Historically, captions have been embedded within video, and subtitles have been stored externally. This makes sense conceptually, because captions should always be provided along with a video; 100% accessibility for the hard of hearing is mandated by legislation. Subtitles, by contrast, are only sometimes needed: a German-language video broadcast in Germany doesn't need German subtitles, but the same video broadcast in France would.

  • Playback: Since captions are passed along with the video and interpreted/displayed by a TV or other consumer device, viewers can turn them on and off at any time using the TV itself, but rarely have options for selecting a language. When subtitles are added to a broadcast for translation purposes, they are generally hard subtitles and so cannot be disabled. When viewing DVD/Blu-ray/VOD video, however, the playback device controls whether subtitles are displayed, and in which language.

Formats and Standards
There are dozens of formats and standards for closed captioning and subtitles. Here is a rundown of the most important ones for Internet video:
  • CEA-608 (also called Line 21) captions are the NTSC standard, used by analog television in the United States and Canada. Line 21 captions are encoded directly into a hidden area of the video stream by broadcast playout devices. If you’ve ever seen white bars and dots at the top of a program, that’s Line 21 captioning.

  • An SCC file contains captions in the Scenarist Closed Caption format. The file pairs SMPTE timecodes with the corresponding caption data, encoded as hexadecimal byte pairs representing CEA-608 data.
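
    Example (a minimal sketch of the SCC layout; the header line is standard, but the hex words below are illustrative placeholders rather than a decodable caption):

    Scenarist_SCC V1.0

    00:00:00:22	9420 9420 94ae 94ae 9470 9470 d4f2 ef6c ef6c ef6c 942f 942f
    00:00:05:12	942c 942c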

  • CEA-708 is the standard for closed captioning for ATSC digital television (DTV) streams in the United States and Canada. There is currently no standard file format for storing CEA-708 captions apart from a video stream.

  • TTML stands for Timed Text Markup Language. TTML describes the synchronization of text and other media such as audio or video. See the W3C TTML Recommendation for more.

    Example:
    <tt xml:lang="" xmlns="http://www.w3.org/ns/ttml">
      <head>
        <styling xmlns:tts="http://www.w3.org/ns/ttml#styling">
          <style xml:id="s1" tts:color="white" />
        </styling>
      </head>
      <body>
        <div>
          <p xml:id="subtitle1" begin="0.76s" end="3.45s">Trololololo</p>
          <p xml:id="subtitle2" begin="5.0s" end="10.0s">lalala</p>
          <p xml:id="subtitle3" begin="10.0s" end="16.0s">Oh-hahaha-ho</p>
        </div>
      </body>
    </tt>

  • DFXP is a profile of TTML defined by W3C. DFXP files contain TTML that defines when and how to display caption data. DFXP stands for Distribution Format Exchange Profile. DFXP and TTML are often used synonymously.

  • SMPTE-TT (Society of Motion Picture and Television Engineers – Timed Text) is an extension of the DFXP profile that adds three extensions for features found in other captioning formats but not in DFXP: #data, #image, and #information. See the SMPTE-TT standard for more.

    SMPTE-TT is also the FCC Safe Harbor format – if a video content producer provides captions in this format to a distributor, they have satisfied their obligation to provide captions in an accessible format. However, video content producers and distributors are free to agree upon a different format.

  • SAMI (Synchronized Accessible Media Interchange) is based on HTML and was developed by Microsoft for products such as Microsoft Encarta Encyclopedia and Windows Media Player. SAMI is supported by a number of desktop video players.
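
    Example (a minimal SAMI sketch; the class name is arbitrary and Start times are in milliseconds):

    <SAMI>
      <HEAD>
        <STYLE TYPE="text/css">
          <!--
          .ENUSCC { Name: English; lang: en-US; }
          -->
        </STYLE>
      </HEAD>
      <BODY>
        <SYNC Start=760>
          <P Class=ENUSCC>Trololololo
        </SYNC>
        <SYNC Start=5000>
          <P Class=ENUSCC>lalala
        </SYNC>
      </BODY>
    </SAMI>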

  • EBU-STL is a binary format defined by the European Broadcasting Union, stored in separate .STL files. See the EBU-STL specification for more.

  • EBU-TT is a newer format supported by the EBU, based on TTML. EBU-TT is a strict subset of TTML, which means that EBU-TT documents are valid TTML documents, but some TTML documents are not valid EBU-TT documents because they include features not supported by EBU-TT. See the EBU-TT specification for more.

  • SRT is a format created by SubRip, a Windows-based open source tool for extracting captions or subtitles from a video. SRT is widely supported by desktop video players.
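
    Example (the same cues as the TTML and WebVTT examples, in SRT’s numbered-cue syntax; note the comma before the milliseconds):

    1
    00:00:00,760 --> 00:00:03,450
    Trololololo

    2
    00:00:05,000 --> 00:00:10,000
    lalala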

  • WebVTT is a text format that is similar to SRT. The Web Hypertext Application Technology Working Group (WHATWG) has proposed WebVTT as the standard for HTML5 video closed captioning.

    Example:

    WEBVTT

    00:00.760 --> 00:03.450
    <v Eduard Khil>Trololololo

    00:05.000 --> 00:10.000
    lalala

    00:10.000 --> 00:16.000
    Oh-hahaha-ho


  • Hard subtitles (hardsubs) are, by definition, not closed captioning. Hard subtitles are overlaid text encoded into the video itself, so they cannot be turned on or off, unlike closed captions or soft subtitles. Soft subtitles or closed captions are generally preferred whenever possible, but hard subtitles can be useful when targeting a device or player that does not support closed captioning.
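
    Example (one way to burn subtitles into the picture with ffmpeg; assumes an ffmpeg build with libass support):

    ffmpeg -i video.mp4 -vf subtitles=captions.srt video-hardsub.mp4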

Captioning for Every Device
Which formats are used by which devices and players?
  • Flash video players can be written to parse external caption files. For example, JW Player supports captions in SRT and DFXP format.

  • HTML5 captions are not yet widely supported by browsers, but that will change over time. There are two competing standards: TTML, proposed by W3C, and WebVTT, proposed by WHATWG. At the moment, Chrome has limited support for WebVTT; Safari, Firefox, and Opera are all working on WebVTT support; and Internet Explorer 10 supports both WebVTT and TTML.

    Example:
    <video width="1280" height="720" controls>
      <source src="video.mp4" type="video/mp4" />
      <source src="video.webm" type="video/webm" />
      <track src="captions.vtt" kind="captions" srclang="en" label="English" />
    </video>

    Until browsers support a format natively, an HTML5 player framework like Video.js can support captions through JavaScript, by parsing an external file. (Video.js currently supports WebVTT captions.)

  • iOS takes a different approach: it uses CEA-608 captions embedded via a modified version of the CEA-708/ATSC legacy encoding. This means that, unlike with Flash and HTML5, captions must be added at the time of transcoding. Zencoder can add captions to HTTP Live Streaming videos for iOS.
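
    Example (a sketch of a Zencoder API job request for captioned HLS output; treat parameter names such as caption_url and the segmented output type as assumptions to verify against the current Zencoder API documentation):

    {
      "input": "s3://example-bucket/video.mp4",
      "outputs": [
        {
          "type": "segmented",
          "caption_url": "https://example.com/captions.ttml"
        }
      ]
    }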

  • Android video player support is still fragmented and problematic. Caption support depends on the OS version and the player used. Flash playback on Android should support TTML, though very little information is available.

  • Some other mobile devices have no support for closed captions at all, and hard subtitles may be the only option.

  • Roku supports captions through external SRT files.

  • Some other connected TV platforms do not support closed captioning yet, but they will soon enough. Every TV, console, cable box, and Blu-ray player on the market today wants to stream Internet content, and over the next year and a half, closed captioning will become a requirement. So Sony, Samsung, Vizio, Google TV, et al. will eventually make caption support part of their application development frameworks. Unfortunately, it isn’t yet clear which formats will be used. Most likely, different platforms will continue to support a variety of incompatible formats for many years to come.

Closed Captioning for Internet Video: 2012 Edition
The landscape for closed captioning will change and mature over time, but as of 2012, here are the most common requirements for supporting closed captioning on common devices:
  • A web player (Flash, HTML5, or both) with player-side controls for enabling and disabling closed captioning.

  • An external file with caption data, probably using a format like WebVTT, TTML, or SRT. More than one file may be required – e.g. SRT for Roku and WebVTT for HTML5.

  • A transcoder that supports embedded closed captions for HTTP Live Streaming for iPad/iPhone delivery, like Zencoder. Zencoder can accept caption information in a variety of formats, including TTML, so publishers could use a single TTML file for both web playback and as input to Zencoder for iOS video.

Beyond that, things get difficult. Other input formats may be required for other devices, and hard subtitles are probably necessary for 100% compatibility across legacy devices.

Source: Zencoder