Media & Entertainment Archives - 3Play Media
https://www.3playmedia.com/blog/tag/media-entertainment/

Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA)
https://www.3playmedia.com/blog/closed-captioning-vs-subtitles/
Fri, 11 Apr 2025

The post Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA) appeared first on 3Play Media.


  • Captioning

Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA)


Watch the Webinar: How the EAA Impacts Global Business


Captions and subtitles are important timed text solutions that make video content accessible to all audiences. But over the last several years, the two have become clouded with questions and confusion, with the top concern being “What’s the difference between captions and subtitles?”

Many experts have weighed in, attaching labels to “captions” and “subtitles” in order to give each a singular, yet narrow, definition. Some of these definitions may be correct, but they’re often only partially so. Why?

Captions and subtitles are a lot more complex than most people realize. While they may seem interchangeable, understanding the differences between captions and subtitles is not only crucial for selecting the most appropriate option to enhance viewer experience and reach, but it also carries significant weight when addressing legal and accessibility requirements. For organizations and content creators serving the European market, this understanding is paramount for ensuring compliance with the European Accessibility Act (EAA).

In this blog, we’re diving head-first into the captions vs. subtitles debate. We’ll define timed text, captions, and subtitles; review the various types of captions and subtitles; and explore why they’ve become such a source of confusion in recent years.

What is timed text?


Timed text is a text-based file that includes timing information.

In the accessibility space, timed text files are usually intended to pair the transcription of dialogue and/or sound to media. The timing information allows the text to be synchronized to specific time codes of media. Both captions and subtitles are forms of timed text.
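As a concrete sketch, consider the SubRip (.srt) format, one common timed text file type (the helper functions below are illustrative, not part of any particular tool). Each cue pairs start and end time codes with the text to display:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT time code: HH:MM:SS,mmm."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

def make_cue(index: int, start: float, end: float, text: str) -> str:
    """Build one SubRip cue: a sequence number, the time range the
    text is synchronized to, then the caption or subtitle text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(make_cue(1, 1.0, 3.5, "[wind howling]"))
```

The time codes are what make the file “timed” text: a player displays the cue text only between the start and end times.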

What are captions?

Captions were introduced to accommodate D/deaf and hard of hearing television viewers in the early 1970s. Eventually, captions became a mandated requirement for broadcast television in the United States.

Captions provide a textual transcript of a video’s dialogue, sound effects, and music. Captions are designed for use by D/deaf and hard of hearing audiences, but have gained popularity with all audiences.

Screenshot of man and woman talking. Closed caption reads "These are captions."

Standard closed captioning style: white text on a black box.

Captions appear as white text over a black box by default, but can sometimes be customized by viewers, depending on where media is being viewed.  Placement varies, but is often centered at the bottom of the screen for readability. When graphics or text appear in the lower third of the video, captions are typically placed at the top of the screen.

608 Captions

608 closed captions (also known as CEA-608, EIA-608, or Line 21 captions) were the standard captioning type for analog television transmission. 608 captions cannot be customized by viewers, though they are compatible with digital television.

708 Captions

708 closed captions (also known as CEA-708/EIA-708/CTA-708 captions) are the newer standard captioning type for digital television. 708 captions are customizable by viewers, but are not compatible with analog television.

Styles
Captions have a few main display styles: pop-on, roll-up, and paint-on. Pop-on is used for recorded content. Roll-up is used for live programming. Paint-on is rarer to find in modern captioning workflows, but may occasionally be used in certain types of programming.

What are subtitles?

Subtitles were introduced in the 1930s, when silent film transitioned to “talkies” (film with spoken audio), to accommodate foreign audiences who didn’t understand the language used in a film.

Subtitles provide a textual translation of a video’s dialogue. Traditionally, subtitles assume the viewer can hear the audio but cannot understand the language. The exception to this is subtitles for the D/deaf and hard of hearing, which assume the viewer cannot hear the audio or understand the language.

Screenshot of man and woman talking. White subtitle reads "These are subtitles."

Common subtitle style: white text with black dropshadow, no background.

Screenshot of man and woman talking. White on semi-transparent black box subtitle reads "These are subtitles."

Subtitles mimicking the appearance of closed captions.

Subtitles can appear in a variety of styles, but often appear as white or yellow text outlined in black, or with a black dropshadow. It is also common for subtitles to mimic the appearance of captions. Placement varies, but is often centered at the bottom of the screen for readability and ease in translation. When graphics or text appear in the lower third of the video, subtitles are typically placed just above the graphic/text. Subtitles can sometimes be customized by viewers, depending on where media is being viewed.

non-SDH

Subtitles not intended for the D/deaf and hard of hearing (non-SDH) are traditionally referred to as just “subtitles.” Non-SDH subtitles are designed for viewers who can hear the dialogue and non-dialogue information but cannot understand the language. The only transcribed element of non-SDH subtitles is dialogue, though on-screen graphics or words may also be transcribed when time allows for the translation of these elements.

SDH

Subtitles for the D/deaf and hard of hearing (SDH) assume the end user cannot hear the dialogue and include important non-dialogue information such as sound effects, music, and speaker identification.

SDH were originally designed for viewers who cannot understand the language, but are increasingly used in place of captions on some video platforms and services.

Forced Narrative

Forced narrative (FN) subtitles, also known as forced subtitles, are overlaid text that clarifies pertinent information meant to be understood by the viewer: dialogue in another language, burned-in text graphics, and other information that is not otherwise explained or easily understood.

Open vs. Closed
Both captions and subtitles can be open or closed.

On and off toggle buttons

Open: The captions or subtitles are permanently visible or burned onto the video. The viewer cannot turn them off.

Closed: Captions and subtitles are not visible unless they are turned on. The viewer can toggle the captions or subtitles on and off at their leisure.

Why Do Caption and Subtitle Choices Matter for European Accessibility Act (EAA) Compliance?

 

Learn how 3Play can support you in becoming EAA compliant

 

Why are captions sometimes called subtitles and vice versa?

Captions and subtitles are infamous for being confused with one another, and there are a few reasons for this. Let’s take a quick look at how global differences in terminology and the increased usage of SDH have been adding chaos to the CC vs. subs discourse.

Global Terminology Differences

Globe with location pins in various places. Words "CC" and "SUB" appear next to pins, depending on location.

Outside of the United States and Canada (for example: the UK, Ireland, and most other countries), video subtitling and captioning are usually considered one and the same. In other words, the term “video subtitling” does not distinguish between subtitles used for foreign language translation and captioning used to aid D/deaf and hard of hearing audiences.

The globalization of video content across corporate, education, and entertainment industries has greatly impacted how viewers use the terms “captions” and “subtitles”. It can be hard for viewers to understand the difference between the two when different entities label their accessible timed text files based on regional preferences. 

SDH = CC…for some

Because of the aforementioned globalization of video content, closed captions and subtitles for the D/deaf and hard of hearing are now commonly mistaken for one another. It’s easy to see why: they both serve D/deaf and hard of hearing audiences and often look alike.

But SDH and captions are different. SDH were initially designed to accommodate D/deaf and hard of hearing audiences who could not understand the language. But over the past few years, SDH have been used in place of captions on platforms where traditional captions are not supported. Sometimes the platform will refer to SDH as “SDH”; other times, they may be called “CC”. There are even cases where they could be called both, e.g. “CC/SDH”.

Captions vs. Subtitles

Because of the many nuances involved in defining captions and subtitles, it’s hard to compare both in general terms. To get to the heart of the individual differences between them, it’s important to break captions and subtitles down into their individual types.

| Feature | 608 captions | 708 captions | SDH | non-SDH | FN |
|---|---|---|---|---|---|
| Text transcribed | All | All | All | Dialogue only | Only pertinent dialogue & information not easily understood by viewer |
| Timed text synced to video | ✓ | ✓ | ✓ | ✓ | ✓ |
| Audience assumption | D/deaf and hard of hearing | D/deaf and hard of hearing | D/deaf and hard of hearing | Hearing | Hearing |
| Can be turned on/off | ✓ | ✓ | ✓ | ✓ | |
| In source language | ✓ | ✓ | Sometimes | | Sometimes |
| Speaker identification | ✓ | ✓ | ✓ | | |
| Music & sound effects | ✓ | ✓ | ✓ | | |
| Signs & graphics transcribed | | | | Sometimes | ✓ |
| Translation options | Limited | Limited | ✓ | ✓ | ✓ |
| Appearance | White text on black box; 32 characters per line | White text on black box; 32 characters per line | Varies; 42 characters per line | Varies; 42 characters per line | Varies; 42 characters per line |
| Placement | Varies; usually centered at bottom, moving to top for lower-third graphics | Varies; usually centered at bottom, moving to top for lower-third graphics | Varies; usually centered at bottom, moving to top or just above lower-third graphics | Varies; usually centered at bottom, moving to just above lower-third graphics | Varies; usually centered at bottom, moving to just above lower-third graphics |
| User customization (when available) | | ✓ | ✓ | ✓ | ✓ |

There’s a lot of nuance missing from the captions vs. subtitles discourse, and the complexities of each won’t go away anytime soon. In the broadest sense, each serves a different purpose with a common goal:

  • Captions provide an accessible way for viewers who cannot hear audio to watch video.
  • Subtitles provide an accessible way for speakers of any language to watch video.

Video accessibility is the string that ties captions and subtitles together, but there are ways to move beyond generalization of these accessibility solutions. The question of “what’s the difference between captions vs. subtitles?” is one that will always require us to break it down further. By comparing and contrasting the individual types of captions and subtitles, we can begin to grasp the differences between the two a lot more easily. 

 


 

This blog post was originally published by Sofia Leiva on August 14, 2016, and was updated on June 22, 2021 by Kelly Mahoney. It has since been updated again for comprehensiveness, clarity, and accuracy.



Closed Caption Styling & Formatting Best Practices You Need to Know
https://www.3playmedia.com/blog/closed-caption-styling-formatting-best-practices-you-need-to-know/
Fri, 03 Nov 2023

The post Closed Caption Styling & Formatting Best Practices You Need to Know appeared first on 3Play Media.


  • Captioning

Closed Caption Styling & Formatting Best Practices You Need to Know


Captioning Best Practices for Media & Entertainment [Free eBook]


Closed caption styling is an important element of video production that significantly impacts video quality and accessibility. 

Traditionally, caption styling best practices were determined by television networks, streaming services, and captioning professionals based on feedback from D/deaf and hard of hearing communities. Guidelines from such entities as the Described and Captioned Media Program (DCMP), the Federal Communications Commission (FCC), and the World Wide Web Consortium (W3C) also played a key role in the development of best practices.

With the increase in video content and development of new captioning solutions over the past several years, caption styling has been unlocked for all video creators. This has come with an explosion in creative methods and DIY captioning. Unfortunately, creativity can sometimes come at the expense of accessibility, leading folks right back to conventional caption styling rules.

So how can you curate a captioning style that fits your video and brand while simultaneously maximizing the accessibility of your content?

In this blog, we will explore the best practices for closed caption styling and formatting. We’ll show you all of the styling elements you’ll need to consider, weigh the pros and cons of using different styles, learn why consistency is critical in any caption style, and provide tips for compiling your own captioning style guide to best support your brand’s content.

Caption Styling Elements to Consider

Whether you’re styling your own recorded captions or subtitles using YouTube or Premiere, or you’re in the process of creating your brand’s recorded captioning style guide, you will most likely be thinking about captions in pop-on format. Pop-on format is the most common captioning type for prerecorded video content, and it’s the only format available for subtitles. It allows for the greatest amount of customization in offline captions and subtitles.

Speaker Identification

Dashes: This is a simple way to identify new speakers. Use a dash followed by a space to indicate when a different speaker is talking.

Woman in workout gear holds a kettlebell. A closed caption with white text on a black background reads "- Hold this pose."

Name/title: This method identifies new speakers by name or title and can be helpful for viewers who want to know which character is speaking. Using names or generic titles to identify speakers can be done in several ways.

Four identical images of a woman in workout gear holding a kettlebell. A closed caption with white text on a black background sits on each image to demonstrate different speaker IDs. The first reads "JANE: Hold this pose." The second reads "Jane: Hold this pose." The third reads: "(Jane) Hold this pose." "The fourth reads [JANE] Hold this pose."

 

Speaker-oriented placement: This identification style uses manual horizontal caption placement to follow each speaker around the screen. Dashes and names may be used in addition to this placement, or speakers may have no textual identification at all unless they are off-screen. This style can be useful for viewers who struggle with center-placed identification, but others may find it distracting and hard to follow.

Two women sit side by side on a sofa with beverages. A closed caption with white text on a black background, positioned to the far left reads "- I really loved the movie!"

Overall, the use of speaker-oriented placement has been moving out of favor due to its incompatibility with many internet-based streaming platforms and video players. 

Placement

Bottom-center only: This style is compatible with almost every television and online video player. It is often the default on some web players, and is sometimes the only placement option for certain web caption file types. Despite its compatibility, bottom-center placement can obscure lower-third video graphics if they are present.

A person checks their watch. A closed caption with white text on a black background, in the bottom center reads: "- My ride is late."

Bottom-center, moving for lower thirds: This style is standard for many television and streaming networks, and many captioning vendors adhere to this placement by default. Captions stay in the bottom, center portion of the screen and are placed on the top of the screen when lower-third graphics are present.

A person wearing scrubs and a stethoscope listens to a golden retriever's heartbeat. A pink lower third graphic in the bottom right corner reads "Dr. Jay, Veterinarian." At the top, center of the screen is a closed caption with white text on a black background reading "- Today we're doing a lot of check-ups."

 

Speaker-oriented: As mentioned in the previous section, this style of placement is becoming less common because of its incompatibility with some web video players. This style can also be distracting and difficult for some viewers to follow.

Two women sit side by side on a sofa with beverages. A closed caption with white text on a black background, positioned to the far right reads "- The acting could have been better."

Narration and Off-Screen Speech

Italics: Italics are commonly used to differentiate voice-over narration and off-screen speech. They are sometimes used in tandem with speaker IDs.

An empty room of a house. A closed caption in white text on a black background is formatted in italics and reads "- We want to take a bold approach to this room."

Descriptors: Name descriptors may be used in addition to italics to indicate off-screen speech or narration. They are sometimes used without italics, as the means for indicating off-screen speech.

Two images of the same empty room of a house. Top image: A closed caption in white text on a black background on top uses italics and a name followed by a colon to identify the narrator. It reads "narrator: We want to take a bold approach to this room." Bottom image: A closed caption in white text on a black background on top uses no italics and uppercase text followed by a colon identify the narrator. It reads "NARRATOR: We want to take a bold approach to this room."
Two images of the same empty room of a house. Top image: A closed caption in white text on a black background on top uses no italics and parentheses to identify the narrator. It reads "(narrator) We want to take a bold approach to this room." Bottom image: A closed caption in white text on a black background on top uses no italics, uppercase text, and brackets to identify the narrator. It reads "[NARRATOR] We want to take a bold approach to this room."

Sound Effects, Music, and Other Non-Speech Information

Brackets: This style uses brackets to enclose sound effects or music descriptors. Brackets usually surround words in lowercase, without spaces. Sometimes, sound effects may be in uppercase or include additional spaces/italics as well.

Four images of the same set of trees blowing in the wind. Each image has a closed caption in white text on a black background located in the bottom center of the image. Each image uses brackets to indicate a "wind howling" sound effect. Top left contains brackets with no spacing: [wind howling]. Top right contains brackets with no spacing in uppercase: [WIND HOWLING]. Bottom left contains brackets with spaces: [ wind howling ]. Bottom right contains brackets with spaces in uppercase: [ WIND HOWLING ]

Parentheses: This style is used almost exactly like the brackets style, but with parentheses to indicate sound effects instead.

Four images of the same set of trees blowing in the wind. Each image has a closed caption in white text on a black background located in the bottom center of the image. Each image uses parentheses to indicate a "wind howling" sound effect. Top left contains parentheses with no spacing: (wind howling). Top right contains parentheses with no spacing in uppercase: (WIND HOWLING). Bottom left contains parentheses with spaces: ( wind howling ). Bottom right contains parentheses with spaces in uppercase: ( WIND HOWLING )

Detailed descriptors: Highly detailed descriptors have gained traction with many hearing caption users due to their creativity and entertainment value. These can be a fun way to help immerse viewers in a program. However, it’s important to note that these can also confuse other viewers, particularly when advanced vocabulary is used in the descriptor.

Trees blowing in the wind with a closed caption in white text on a black background located in the bottom center of the image that reads in brackets: [treacherous Aeolian howling]
Captioning Sound Effects
If you’re creating captions yourself, adding non-speech elements is just as important as ensuring all dialogue is transcribed. When describing sound effects or music, choose words that best describe the sound rather than the action making the sound. For example, [wind whooshing] or [wind howling] gives a better idea of the sound wind makes than simply writing [wind blowing].

Font, color, and character limits

Font: Sans Serif fonts with medium thickness are preferable for captions. Simpler Serif fonts can be used, but Serif fonts tend to be less readable for viewers in general. Overly thin or bold fonts can also pose readability issues. The more decorative a font is, the harder it may be for viewers to read.

Five examples of closed captions with white text on a black background. Each uses a different font. Caption one displays in a non-Serif font and reads: "This is a Sans Serif font." Caption two displays in a Serif font and reads: "This is a Serif font." Caption three displays in a bold non-Serif font and reads: "This is an extra bold Sans Serif font." Caption four displays in a thin non-Serif font and reads: "This is an extra thin Sans Serif font." Caption five displays in a decorative Serif font and reads: "This is a decorative Serif font." Captions one and two are the easiest to read.

Color: Closed captions are typically displayed as white text on an opaque or semi-transparent black box. Subtitles are often styled in white text with a black outline or black drop shadow. These tend to be the most readable colors for viewers, but open captions and open subtitles can be styled in other colors. Choosing different colors can be a creative way to extend branding, but caution should be used to ensure appropriate contrast is provided. 

Six examples of captions. Each uses different colors. Caption one displays as white text on a black background: "This is a standard caption." Caption two displays as white text on a semi-transparent background and reads: "This has a semi-transparent background." Caption three displays as white text with a black outline and reads: "This has a black outline." Caption four displays as white text with a black dropshadow and reads: "This has a black dropshadow." Caption five displays as yellow text with a black dropshadow and reads: "This is yellow with a black dropshadow." Caption six displays as yellow text on a semi-transparent background and reads: "This is yellow on a semi-transparent background."

Character Limits: Closed captions have a line limit of 32 characters per line by default. Subtitles can have varying line limits, but are often capped at 42 characters per line to best support readability.
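If you generate your own caption files, a simple way to enforce these limits is to wrap cue text before writing it out. This is only a sketch (real captioning tools also weigh natural line-break points and reading speed), using the 32- and 42-character defaults from above:

```python
import textwrap

def wrap_cue(text: str, limit: int = 32, max_lines: int = 2) -> list[str]:
    """Wrap cue text to a per-line character limit (32 is the closed
    caption default; use 42 for subtitles). If the text needs more
    than max_lines lines, it should be split into multiple cues."""
    lines = textwrap.wrap(text, width=limit)
    if len(lines) > max_lines:
        raise ValueError("too long for one cue; split across time codes")
    return lines

print(wrap_cue("These are captions styled for broadcast."))
# → ['These are captions styled for', 'broadcast.']
```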

Profanity and Censorship

Bleeping: When bleeps are used to censor audio, the profanity is typically reflected as [bleep], (bleep), or [BLEEP] within the captions.

Dropped Audio: When audio is entirely dropped or silenced, the profanity is usually reflected as […] or (…) within the captions. 

Partial Censorship: When words are partially censored in the audio, or if producers wish to indicate the word being used in the captions, profanity can be transcribed using the first and/or second letter of the word followed by asterisks or dashes, such as sh– or sh**. Note that dashes are preferable due to asterisks’ display incompatibility with certain caption file types and players/televisions.
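For consistency across a program, partial censorship can be applied programmatically. Below is a hypothetical helper using dashes, per the compatibility note above:

```python
def censor_word(word: str, keep: int = 2) -> str:
    """Keep the first `keep` letters of a profane word and mask the
    rest with dashes, which display more reliably than asterisks
    across caption file types and players."""
    keep = min(keep, len(word))
    return word[:keep] + "-" * (len(word) - keep)

print(censor_word("dang"))  # → da--
```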

 

Can captions be customized by users?
Yes, captions can sometimes be customized by users. 

On television, 608 captions cannot be customized by viewers, but digital 708 captions do support user customization, with choices for font, color, size, and background.

Some streaming platforms and online video players, such as YouTube, also support customization options to varying degrees.

 

 
 

Consistency in Caption Styling is Key

There is no blanket guideline for caption or subtitle styling. This can be great for creativity, but less so for accessibility. That’s where consistency comes in.

Consistency in Broadcast and Streaming

Video accessibility requirements from the FCC and WCAG, for example, are broad enough to allow for different caption styling options. However, it’s important to remember that content going to broadcast networks and streaming services, such as Netflix or Amazon, may need to meet particular styling guidelines. This helps each individual platform or network create greater consistency for captions and subtitles within its library of programming.

When applicable, network or streaming style guides should always be consulted and followed before defaulting to any other style. Some captioning vendors, like 3Play Media, are familiar with and well-versed in handling these specs, but always ensure they have the most updated style guides to review prior to caption creation.

If your content is being distributed to a network or platform without any specifications beyond following FCC guidelines, your captioning vendor will typically default to their house style. A caption vendor’s house style should integrate key compliance requirements and major recommendations from organizations like DCMP.

Consistency in Non-Entertainment Video Content

For video producers, organizations, and individuals with recorded video content not geared toward entertainment–including corporate training videos, brand videos, educational videos, event recordings, and more–ensuring a consistent caption style can help optimize both accessibility and branding. But how can you do this? Where do you start?

To create greater consistency across video content, it can be useful to review other style guides, talk to a captioning vendor about their house style, and watch captioned videos across different players and platforms. In fact, many captioning vendors, networks, and streaming services have designed their caption style specs with guidance and suggestions from disability communities and organizations over the years.

However, even the standard best practices can become outdated or may no longer best meet the needs of D/deaf and hard of hearing communities. That’s why it’s incredibly important to research the current preferences of these communities in order to gain a holistic view of caption styling priorities from the people who rely on them. 

Keep in mind that every individual will have their own preferences and reasoning behind their choice in caption styling. Because one cannot speak for the entirety of caption users, these preferences may not always be within the general best practices for captioning, but should still be considered when crafting your own caption style. 

Building a Captioning Style for Your Brand

When creating a captioning or subtitling style guide for your brand, remember that accessibility must be placed before aesthetics. Using your brand’s font and colors may support a consistent brand experience, but they can also be illegible to caption users if a font is too fanciful or colors don’t have enough contrast. Overly detailed sound and music descriptions may be entertaining and provide hearing caption users with a memorable brand experience, but they can also be distracting and confusing to others who need them to understand your video. Plus, it’s important to remember that not all captioning customizations display the same way across web platforms and televisions unless they are permanently burned in.

So with all of these caveats, how can you create a consistent and accessible captioning experience that supports your brand and complements your video content?

Choose Your Basic Style Requirements

Closed captions, unlike open captions or open subtitles, are not permanently burned into the video. Therefore, style elements like font, size, and color do not need to be considered at this stage.

Stick to determining the basics of closed caption styling elements. How should speakers be identified? How do you want sound effects and music formatted? How should off-screen speech be indicated?

Once you figure out the basics, document your preferences so that they can be followed by your captioning vendor.

Choose Advanced Captioning Style Elements

After creating your basic preferences, you may begin selecting advanced captioning style elements if you will be creating or adding permanently burned-in open captions or open subtitles for your video content.

Take your own brand and preferences into account here, but make adjustments for accessibility as you do so. If your brand font is Sans Serif with medium thickness, it will likely be readable in captions. If it’s Serif, decorative, very thin, or overly bold, there may be readability issues.

When determining caption or subtitling color, consider utilizing a color contrast checker to ensure captions have enough contrast to support readability. For subtitles, consider how the use of outlines, drop shadows, and semi-transparent elements can improve contrast.

Put Your Captioning Style Guide to Use

Now it’s time to test your style elements together. How do they look in your video content? What do your viewers and caption users think? Do your caption styling preferences support captioning best practices?

After successful testing, you can go live with your new captioning style. Provide a copy of your style guide or requirements to your caption vendor, and review your files–ideally in the final video platform or player–to confirm the finalized caption display is accessible and to ensure overall consistency and compatibility.

 

 


 

This blog was originally published by Kelsey Brannan on November 1, 2016, as “Guest Post from PremiereGal: Trends in Captioning Style & Formatting” and has since been updated for comprehensiveness, clarity, and accuracy.



The post Closed Caption Styling & Formatting Best Practices You Need to Know appeared first on 3Play Media.

Canada’s Online Streaming Act: Everything We Know About Bill C-11 So Far https://www.3playmedia.com/blog/canadas-online-streaming-act-everything-we-know-about-bill-c-11-so-far/ Fri, 30 Jun 2023 18:09:40 +0000

The post Canada’s Online Streaming Act: Everything We Know About Bill C-11 So Far appeared first on 3Play Media.


  • Legislation & Compliance

Canada’s Online Streaming Act: Everything We Know About Bill C-11 So Far


Navigating Broadcast Accessibility in Canada [Free Webinar]


As the landscape for media consumption continues to evolve from traditional broadcasting to online streaming, governments around the world have been working to make relevant updates to their existing legislation to address the challenges and opportunities that streaming presents.

In Canada, these updates have taken the form of Bill C-11, also known as the Online Streaming Act. Bill C-11 was passed in Spring of 2023 and is among the first pieces of legislation shaping the future of streaming media, but it has not come without controversy.

In this blog, we will dive into everything we know so far about Canada’s Bill C-11, the Online Streaming Act: what it is, key takeaways, and why the bill has garnered criticism and pushback from certain entities, both domestically and internationally.

What is Canada’s Online Streaming Act?

The Online Streaming Act, or Bill C-11, was passed by the Canadian Senate in February 2023 and received Royal Assent in April of 2023. This bill amends Canada’s Broadcasting Act to include internet video and digital media. It marks the first substantial reform to the Broadcasting Act since 1991.

The Act strives to regulate online streaming services operating within Canada by establishing a fair and competitive environment for streaming platforms. Bill C-11 also aims to simultaneously prioritize accessibility, promote Canadian content and cultural diversity, and increase the power of the Canadian Radio-Television and Telecommunications Commission (CRTC).

Key Takeaways from Bill C-11

The Online Streaming Act has several areas of focus with an emphasis on elevating Canadian stories and creators to “give Canadians more opportunities to see themselves in what they watch and hear, under a new framework that better reflects our country today.”

Prioritizing Accessibility

Staying on trend with other North American pushes for more robust accessibility services on streaming platforms, Bill C-11 emphasizes the need for inclusive and accessible internet content. The bill mandates that streaming platforms provide features such as closed captioning and audio description to better support diverse and disabled communities, with a focus on options for English, French, and Indigenous languages. The bill also suggests imposing monetary penalties “for violations of certain provisions of that Act or of the Accessible Canada Act.”

Fostering Canadian Content & Cultural Diversity

One of the fundamental goals of Bill C-11 is to preserve and promote Canadian content and cultural diversity in the online streaming space. The legislation aims to better serve all Canadians by requiring use of Canadian talent and content, as well as improving the discoverability of such content on streaming platforms. The bill additionally stipulates that the CRTC “meaningfully engage” with minority and Indigenous communities to encourage the creation, availability, and discoverability of programming by those groups and communities.

Increasing the Power of the CRTC to Regulate Streaming Platforms

The bill grants the Canadian Radio-Television and Telecommunications Commission (CRTC) increased power to: 

  • Regulate broadcasters and streaming services in Canada
  • Provide flexible, fair, and clear directives that contribute to the creation, production, and distribution of Canadian content
  • Impose conditions upon broadcasters and streaming platforms to uphold Canadian broadcasting policies
  • Instate financial penalties for violation of parts of the Act

By giving the CRTC greater authority, the Government of Canada hopes to ensure the entity has the “proper tools to put in place a modern and flexible regulatory framework for broadcasting.” The Government of Canada notes that the CRTC’s policies will only apply to platforms that stream in Canada and will not extend to users, but some critics remain skeptical. Read about the proposed CRTC policy directions.

What is the CRTC?
The CRTC is Canada’s “administrative tribunal that regulates and supervises broadcasting and telecommunications in the public interest.” The entity maintains oversight over 2,000 broadcasters, radio stations, telecommunications carriers, and more. The CRTC frequently hosts public hearings, discussions, and forums to gather Canadian citizens’ feedback and views.

Learn key insights about Canada’s broadcast accessibility landscape with 3Play Media Canada’s Melina Nathanail

Pushback on the Online Streaming Act From Some Platforms and Content Creators

Bill C-11 has been met with pushback from some streaming platforms, content creators, and politicians, who argue that the Online Streaming Act’s regulations could pose a threat to freedom of choice, platform algorithms, and more. 

The CRTC denies that the bill will stifle creators or censor content on the internet. The commission additionally published a “Myths and Facts” website to help mitigate these concerns.

The Future of Canada’s Online Streaming Act

In May 2023, the CRTC announced plans for a public consultation process in order for streaming platforms, broadcasters, media and entertainment professionals, and Canadian citizens to share ideas on how broadcasters and platforms should support Canadian broadcasting initiatives.

The consultation process is split into three phases over 2023 and 2024, with a goal of implementing final policy decisions in late 2024. The CRTC maintains that “every step will include open and public consultations.”

  • Phase 1 was launched in Spring of 2023 and kicked off development of a framework for how the Online Streaming Act will be implemented.
  • Phase 2 is expected to launch in Fall of 2023 and will dig deeper into the specifics of expectations and requirements for broadcasters and streaming platforms.
  • Phase 3 is targeted to begin in late 2024 and will focus on implementation of new regulations and policies.

Between public consultations and development of policies, it is expected to be at least a year before the actual scope and impact of the Online Streaming Act are known.

Canada’s Online Streaming Act could mark a big step towards the country’s modernization of broadcast legislation, but the full ramifications of Bill C-11 are still unclear at this time. The bill aims to promote accessibility, foster the creation and inclusion of Canadian content on broadcast and streaming platforms, and empower the CRTC to regulate these mediums more effectively. Yet, growing criticism of the bill’s scope and wording–from algorithms to “CanCon” to the increased power of the CRTC–is giving pause to a number of stakeholders across industries.

With the passing of Bill C-11, the Online Streaming Act, the Canadian government is signaling to platforms that regulatory frameworks are needed to address the challenges of the digital era while preserving Canadian culture and including all communities. The next year will be critical for streaming platforms and broadcasters to monitor and plan ahead for the latest Canadian updates and regulations. Keep up with the latest on Canada’s Online Streaming Act and review the CRTC’s full regulatory plan here.

Navigating Broadcast Accessibility in Canada. Watch the Webinar.

This blog post is written for educational and general information purposes only, and does not constitute specific legal advice. This blog should not be used as a substitute for competent legal advice from a licensed professional attorney in your state.



The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them https://www.3playmedia.com/blog/the-ultimate-guide-to-subtitles-different-types-how-they-work-and-when-to-use-them/ Thu, 22 Jun 2023 19:30:41 +0000

The post The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them appeared first on 3Play Media.


  • Subtitling

The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them


Video subtitling is instrumental in reaching global audiences, but can be a complex and nuanced media accessibility solution. Add captions to the equation, and it can become even more confusing for producers and creators of video content.

We know it’s easy to get bogged down with the different types of subtitles. That’s why we’re excited to debut our new eBook, The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them.

In this eBook, we compiled a comprehensive overview of the different types of subtitles based on the knowledge and experience of 3Play’s tenured subtitling experts. 

The Ultimate Guide to Subtitles covers the top subtitling solutions used across industries, including subtitles for the D/deaf and hard of hearing (SDH), standard subtitles for hearing viewers (non-SDH), and forced narrative (FN) subtitles. Read on for a closer look into our extensive guide to all things subtitling.

Everything You Need to Know About Different Types of Subtitles 🌎

Discover the Different Types of Subtitles and How They Work

Learn all there is to know about subtitles in general. We provide an overview of their history, how they work, what they can look like, and how they’re encoded. Then, dig deeper and discover how SDH, non-SDH, and FN subtitles are defined.

Understand How Subtitling Types Compare

As mentioned above, subtitling is a nuanced solution and it can be difficult to wade through the different types without additional context. Explore in detail how each subtitling type compares to one another and how they stack up to captions. We even discuss why subtitles and captions have become so entangled in recent years and how you can better determine which media accessibility service you really need for your video.

Learn the Best Subtitling Type for Your Video

Each subtitling type has differing use cases and audience assumptions. In The Ultimate Guide to Subtitles, we cover the top use cases for SDH, non-SDH, and FN subtitles using examples that span across industries to help you find the best subtitling type for your video.

Resources

Gain access to a curated list of key 3Play Media resources for you to reference as you make accessibility part of your content production process.

The Ultimate Guide to Subtitles offers an in-depth exploration of the different types of subtitles, their functionality, and how they compare to captions. Using this knowledge and helpful use case examples, you will be able to select the perfect subtitling solution for your media based on your viewers’ dynamic needs, no matter where they’re located in the world.

The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them. Download the eBook



Demystifying Caption Encoder Workflows https://www.3playmedia.com/blog/demystifying-caption-encoder-workflows/ Tue, 16 May 2023 16:51:44 +0000

The post Demystifying Caption Encoder Workflows appeared first on 3Play Media.


  • Captioning

Demystifying Caption Encoder Workflows


The Complete Guide to Caption Encoders [Free eBook]


With such a wide variety of caption encoder workflows available, determining whether to use a physical or virtual encoder can be a complicated process to navigate.

Perhaps you’re making a decision about your encoding method. Or maybe you’re simply trying to figure out whether you even need an encoder at all. Either way, it’s important to have all of the information before you begin, which is why we decided to demystify caption encoding and all of its associated workflows in this blog.

A general understanding of caption encoder workflows can help you best determine how and when encoding is necessary for your media. Read on to discover a high-level overview of caption encoding, a breakdown of specific live and recorded caption encoding workflows, and our detailed resources on each aspect of encoding.

Caption Encoding 101


Sometimes sidecar files, such as SRT or VTT, are not acceptable for a platform or television. In these cases, encoding may be necessary to transmit captions. Caption encoding is the process of embedding captions into a video stream. 

A caption encoder itself is the piece of equipment or software that a television network or video platform uses to pair the captions with the video and audio stream. Encoders convert captions into data that can be decoded by individual televisions or video players.
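For contrast with encoding, sidecar files are plain text that the player reads alongside the video. A minimal sketch of the difference between the two most common sidecar formats, assuming well-formed input (real SRT files warrant fuller parsing):

```python
import re

def srt_to_vtt(srt: str) -> str:
    """Convert an SRT sidecar to WebVTT: swap the comma decimal
    separators in timestamps for dots and prepend the required
    WEBVTT header line."""
    body = re.sub(r"(\d{2}:\d{2}:\d{2}),(\d{3})", r"\1.\2", srt)
    return "WEBVTT\n\n" + body

cue = "1\n00:00:01,000 --> 00:00:03,500\nHello, world.\n"
print(srt_to_vtt(cue))
```

When a sidecar like this can't be delivered to the destination platform, that's where the encoding workflows below come in.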

A broad range of caption encoder workflows exist for both live and recorded captions. But first, let’s take a look at how caption encoding works in general.

Traditional Caption Encoding


Traditional caption encoder workflows involve the use of physical encoder equipment or software. In general terms, there are three types of encoder connections: telco (analog/modem), telnet (digital/IP), and iCap (only if the encoder is manufactured by EEG). The typical encoder workflow usually goes like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions to the video on a specific data transmission line known as line 21–this is the data that televisions are mandated to decode captions from. 

There are two main standards for the encoding and decoding of closed captioning data via encoders. These standards were developed based on Federal Communications Commission (FCC) regulations: CEA-608 and CTA-708. Learn more about the differences between 608 and 708 captions and how they can impact captioning workflows.
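As one concrete detail of the CEA-608 standard, each caption data byte carries 7 bits of data plus an odd-parity bit that decoders use to detect transmission errors. A minimal sketch:

```python
def with_odd_parity(byte: int) -> int:
    """Set bit 7 of a 7-bit CEA-608 data byte so the resulting
    8-bit byte has odd parity (an odd count of set bits)."""
    assert 0 <= byte < 0x80, "608 data bytes are 7-bit"
    ones = bin(byte).count("1")
    return byte if ones % 2 == 1 else byte | 0x80

# 'A' (0x41) has two set bits, so the parity bit is added: 0xC1.
print(hex(with_odd_parity(ord("A"))))
```

If a decoder receives a byte with even parity, it knows the byte was corrupted in transmission and can discard it rather than display garbage.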

Virtual Encoding


Virtual encoding options have expanded in recent years and are popular for web-based platforms or players. Virtual encoders function similarly to physical encoders without the physical box and connection. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. 

Virtual encoders are useful for events that are streamed online, where the virtual encoder will add the captioning data and re-route the video stream to the desired platform.

Web-based platforms don’t usually follow the same data transmission methods as traditional broadcast television, so virtual and alternative encoding options are often used instead. 

Live Caption Encoding Workflows

Live caption encoding allows broadcasters to simultaneously receive and encode captions, allowing them to be displayed alongside a television program or video in real time. 

Live Caption Encoding Methods

The three main physical live caption encoding workflows involve the use of telco, telnet, or iCap encoders.

Telco Encoders

A telco encoder is based on analog technology and connects over telephone lines. A separate audio line is needed for the captioner to hear the dialog that needs captioning.

Telnet Encoders

A telnet encoder uses an IP and port number to receive the caption data. Similar to a telco encoder, a separate audio line is needed to hear the dialog that needs captioning. 

iCap Encoders

iCap encoders are caption encoders manufactured by EEG. They include iCap software for improved functionality, such as sending audio to the captioner. They can also be set up as IP connections if desired. 
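In spirit, a telnet-style connection is simply a TCP socket opened to the encoder's IP and port. The sketch below is illustrative only: the host, port, and line framing are assumptions, and each encoder model defines its own command protocol.

```python
import socket

def frame_caption_line(text: str) -> bytes:
    """Frame one line of caption text for transmission.
    (Assumed framing: ASCII text plus a carriage return;
    real encoder protocols vary by manufacturer.)"""
    return text.encode("ascii", errors="replace") + b"\r"

def send_caption(host: str, port: int, text: str) -> None:
    """Open the IP connection a telnet encoder listens on
    and transmit a framed caption line."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame_caption_line(text))

# Hypothetical usage (host and port are placeholders):
# send_caption("encoder.example.com", 23, ">> CAPTIONING PROVIDED BY")
```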

Explore each of these live encoding methods in greater detail in The Complete Guide to Caption Encoders.

Live Virtual Caption Encoding

In March 2023, 3Play Media introduced an exciting new live virtual caption encoding solution, which eliminates the need for additional live captioning hardware. 3Play’s virtual encoding solution delivers high-accuracy and low-latency captions to platforms, while streamlining live captioning workflows from listening through delivery. Learn more about 3Play’s exciting virtual encoder developments.

Looking for an audio described version of this video? We’ve got you covered!

Everything You Need to Know About Caption Encoders


This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use.

Get your free eBook

Other Live Virtual Encoding Options & Alternatives

Aside from 3Play’s Live Virtual Caption Encoding solution, additional virtual encoding options, such as iCap Falcon (by EEG), are available for live captioning purposes.

A growing number of alternative options to encoding have arisen in recent years due to the evolution of broadcast, streaming, and other technological advances. For instance, captions are sometimes included as a separate entity on applications that have built caption functionality directly into their players, such as Zoom and YouTube.

Sidecar files and video player integrations remain popular options for many users due to their ease of use. Integrations in particular help take the guesswork out of whether a video requires encoding by simplifying captioning workflows. 3Play Media offers numerous integrations and partnerships with top video platforms such as Brightcove, Wistia, and YouTube.

Recorded Caption Encoding Workflows

In certain cases, it is necessary to embed recorded captions in the video itself rather than use a separate track. This is done using caption encoders.

Recorded caption encoding ensures that your closed captions will be viewable if you don’t have a video platform, if you want an offline option, or if you need captioned videos for kiosks and social media.

Closed & Open Caption Encoding


Closed captions are usually output on a separate track as a sidecar file and added to a player to be played in sync with the video. In this case, the captions can be turned on or off, usually by pressing the “CC” button on the video player.

Open captions, on the other hand, are encoded via video embedding. This encoding workflow permanently burns captions into the video, meaning that they are always showing and cannot be toggled off.

Open captions eliminate rendering inconsistencies across different video players and allow publishers to control the exact size and style of the captions. Open captions also make it easier to create DVDs and other physical media. Open captioned video files can be imported into any NLE or DVD authoring software.

Because open captions are part of a video itself, they are supported by all video players and devices. Discover more about recorded caption encoding workflows.
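As an illustration of how burn-in is commonly done in practice, here is a sketch that builds an ffmpeg invocation using its libass-backed `subtitles` filter (one widely used open-source tool, not a description of any specific vendor's workflow; the filenames are placeholders):

```python
def burn_in_cmd(video: str, subs: str, output: str) -> list[str]:
    """Build an ffmpeg command that renders a sidecar subtitle file
    permanently into the video frames, copying the audio untouched."""
    return [
        "ffmpeg",
        "-i", video,                 # source video
        "-vf", f"subtitles={subs}",  # burn the sidecar into each frame
        "-c:a", "copy",              # leave the audio stream as-is
        output,
    ]

print(" ".join(burn_in_cmd("talk.mp4", "talk.srt", "talk_open_captions.mp4")))
```

Because the text is rasterized into the frames, the output plays with captions everywhere, at the cost of losing the ability to toggle them off.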

Subtitle Encoding


Subtitles, while closely related to captions, differ in their encoding processes.

Subtitles are often encoded as bitmap images, which tend to be a lot more compatible with newer digital media methods. HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with subtitles. The same goes for some streaming services and OTT platforms. SDH or other subtitling formats may be used on these platforms due to their inability to support traditional Line 21 broadcast closed captions.

Review the differences between closed captions and subtitles for the D/deaf and hard of hearing (SDH).

The Complete Guide to Caption Encoders


To determine the encoding needs of your next video project, it’s crucial to ask some key questions to gain a full understanding of the numerous types of caption encoders and transmission methods available. 

In 3Play Media’s The Complete Guide to Caption Encoders, we break it all down for you. This free eBook:

  • Defines caption encoding
  • Helps you determine whether you need an encoder
  • Explains the different types of encoders and encoder alternatives

Encoders can seem daunting, but they’re an important part of making both live and recorded captions fully accessible to viewers. By learning the basics of caption encoder workflows, you can take the next step towards making your media accessible in the most efficient way possible.

The Complete Guide To Caption Encoders: Get Your Free Guide


SDH vs. CC: What’s the Difference? https://www.3playmedia.com/blog/whats-the-difference-subtitles-for-the-deaf-and-hard-of-hearing-sdh-v-closed-captions/ Mon, 06 Mar 2023 05:00:00 +0000

The post SDH vs. CC: What’s the Difference? appeared first on 3Play Media.


  • Captioning

SDH vs. CC: What’s the Difference?


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them [Free Ebook]


When it comes to media accessibility, one of the most common questions from television viewers revolves around the differences between subtitles and closed captions. But between the rise of streaming content and the global use of the term “subtitles” versus “captions,” the answer has become complicated.

As the lines between subtitles and captions continue to blur, perhaps none has become more confusing than the difference between subtitles for the d/Deaf and hard of hearing (SDH) and closed captions (CC). 

The issue of SDH vs. CC has been compounded by the availability of both options on certain streaming platforms. Adding further confusion, there are also the matters of:

  • Mixed usage of terminology 
  • Different interpretations of what makes a timed text file SDH or CC
  • General misinformation on the purpose and function of SDH vs. CC files

This widespread confusion is precisely why we’ve decided to tackle SDH vs. CC in this blog. We’ll review the key differences between subtitles and closed captions, closely examine SDH subtitles, cover each of their respective roles and use cases, and explain why some streaming services are moving towards offering both options to viewers.


Looking for a described version of this video? We’ve got you covered!

Defining Subtitles and Captions

Before fully understanding the difference between SDH and closed captions, it is helpful to first understand the basic differences between subtitles and captions.


How are they alike?

Both subtitles and captions are timed text files synchronized to media content, allowing the text to be viewed at the same time the words are being spoken. Captions and subtitles can be open or closed.

How are they different?

In the United States and Canada, subtitles are intended for hearing viewers who do not understand the language. Traditionally, subtitles show the spoken content but not the sound effects or other audio elements. The term is often used to refer to translations (think: subtitles for a foreign film). In places like the UK, the term “subtitles” is used to describe both subtitles and captions.

Closed captions are designed for d/Deaf and hard-of-hearing audiences. They communicate all audio information, including sound effects, speaker IDs, and non-speech elements. They originated in the 1970s and are required by law for most video programming in the United States and Canada.

What are Subtitles for the d/Deaf and Hard of Hearing (SDH)?

It’s important to note that there are a few different types of subtitles. The most frequently used types are known as: SDH, non-SDH, and forced narrative (FN).

SDH stands for subtitles for the d/Deaf and hard of hearing. These subtitles assume the end user cannot hear the dialogue and include important non-dialogue information such as sound effects, music, and speaker identification. In the United States and Canada, SDH traditionally assumes the end user cannot hear the audio, whereas traditional subtitles (also referred to as non-SDH) assume the viewer can hear the audio but doesn’t know the spoken language.

SDH often emulates closed captions on media that does not support closed captions, such as digital connections like HDMI or OTT platforms. In recent years, many streaming platforms, like Netflix, have been unable to support standard broadcast Line 21 closed captions. This has led to a demand for English SDH subtitles styled similarly to FCC-compliant closed captions instead. 

SDH can also be translated into foreign languages to make content accessible to d/Deaf and hard-of-hearing audiences who speak other languages.

Translation
Translation is often cited as a major difference between subtitles and captions. But can’t captions also be in other languages?

 

Yes! It’s common in the United States and Canada to find closed caption offerings in Spanish and French, along with other languages. The FCC even requires Spanish CC for all Spanish television programming in the US. There are limitations with translated closed captions, however. Because of CC’s line limits and lack of extensive international character support outside of Western languages, SDH subtitles are preferred to get the most accurate translations for d/Deaf and hard of hearing viewers across languages.

 

3Play Media Explains… SDH vs. CC – Watch the Video 👀

 

A Deep Dive into SDH vs. CC

SDH subtitles and closed captions are closely related, and there’s often confusion between the two. One of the main reasons? Preferred jargon.

The term “closed captions” has dominated the vernacular for nearly half a century in North America. The term “subtitles” has encapsulated any timed text format in the UK and other parts of the globe. 

But in recent years, rapid developments in streaming content and the globalization of media have shaken up the popular nomenclature across the world. This has left viewers and users of these accessibility services scratching their heads and wondering how SDH and CC are different.

Appearance


SDH subtitles styled to closely resemble closed captions: white text with a semi-transparent black background.

SDH subtitles have a lot of flexibility in terms of appearance. They can be customized by professional captioners to look exactly like closed captions, or styled to match a customer’s request or platform’s specifications. 


SDH subtitles styled to a standard subtitling appearance: white text, black outline, no background.

SDH subtitles’ appearance can sometimes be determined by a video player or platform, which sets the appearance independently of the original captioner. Occasionally, SDH can also be customized by the end user, but this varies based on the player or platform’s customization options.


Default closed captioning style: white text on an opaque black background.

By default, closed captions are displayed as white text on a black box, with placement that is customized on the captioner’s end. This has changed over the years with the introduction of digital television and 708 captioning standards, which allows for user customization.

User Customizations
When customization options are available to users, they can choose from a variety of font, sizing, and color options for SDH or CC. Customization options vary depending on the television, video player, or OTT platform capabilities.

Placement

SDH subtitles and closed captions are both capable of supporting placement. Viewers often find SDH and CC are placed in the bottom center, with movement to the top to avoid lower thirds. Some styles of CC may include horizontal placement to indicate speaker changes.

SDH can theoretically be placed anywhere on the screen if they are burned-in. As a best practice, SDH are typically centered for readability and ease in the translation process. 

Caption placement is usually implemented by a captioner and cannot be adjusted by the user unless the captions are formatted to 708 standards. According to FCC rules, captions must be positioned in such a way to avoid covering important lower third graphics.

Ultimately, SDH and CC positioning is dictated by the file type being used, or by the requested formatting specs from a platform or television network. 

Why are SDH and CC often centered?
Many streaming platforms and networks are moving towards center placement for both SDH and CC files for readability. It’s still common to encounter CC positioning to indicate speakers, but current trends point to left-justified, center-aligned SDH and CC.

Streaming services that follow this trend include Netflix and Amazon.

Encoding

The move from analog television to high-definition (HD) media over the last 20 years had major implications for the encoding of closed captions and subtitles.

Standard 608 closed captions are transmitted via Line 21 as a stream of commands, control codes, and text. 708 closed captions are transmitted via MPEG-2 video streams in MPEG user data.

Subtitles, on the other hand, are often encoded as bitmap images – a series of tiny dots or pixels. And this method of transmission is a lot more compatible with newer digital media methods.

HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with SDH subtitles. The same goes for some streaming services and OTT platforms. SDH formats are increasingly used on these platforms due to their inability to support traditional Line 21 broadcast closed captions. That being said, some classic captioning formats, like SCC, have proven to be versatile across television and digital formats.

SDH vs. CC: At a Glance

| Feature | SDH | Closed Captions |
| --- | --- | --- |
| Timed text synced to video | Yes | Yes |
| Can be turned on/off | Yes | Yes |
| In source language | Yes | Yes |
| Speaker identification | Yes | Yes |
| Sound effects | Yes | Yes |
| Translation options | Yes | Limited |
| Text appearance | Varies; often white text on a black or semi-transparent background to mimic captions | Usually white text on a black background |
| On-screen placement | Varies; typically centered at the bottom, with movement to the top for lower third graphics | Varies |
| Encoding | Supported through HDMI | Not supported through HDMI |

 

Why Do Streaming Platforms Sometimes Include Both SDH and CC?

While many streaming and OTT platforms only offer one timed text option for viewers to use, some have started offering both SDH and CC options when available.

Apple TV+ is one such platform, offering a wide array of accessibility choices for viewers on select programming. Depending on the program chosen, a viewer could find themselves choosing between CC and SDH. So why offer this?

Person thinking with text in a thought bubble: "English CC, English SDH, English non-SDH."

The answer can differ depending on the platform, but by offering both options, viewers can choose the format they prefer. In situations where no distinction is made between CC and SDH, the files could be considered one and the same. 

When both options are available to select, it’s often likely that the captions originate from a true CC file and are formatted to match that style; whereas the SDH could be a simpler timed transcript in the source language that was intentionally designed for translation into other languages. The difference between the two isn’t always clear when both are offered on a platform, but usually comes down to how each is displayed.

 
 
 

Closed captions and subtitles for the d/Deaf and hard of hearing are like siblings: closely related, with similar mannerisms, yet each has their own unique traits and appearance.

Like many media accessibility services, CC and SDH are nuanced and tricky to definitively declare as being one specific solution designed for one specific purpose. In the greater scheme of timed text files, either solution offered by a television network or streaming platform will provide an accessible experience for viewers.

Neither CC nor SDH will ever fit neatly into one box, and it’s possible that defining them may only get more complicated as digital video evolves. But one thing remains certain for CC and SDH: they will always serve the d/Deaf and hard of hearing community first and foremost.


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them. Download the eBook 

This blog was originally published by Lily Bond on May 21, 2014 as “How Subtitles for the Deaf and Hard-of-Hearing (SDH) Differ From Closed Captions.” This blog was updated on August 24, 2021 by Elisa Lewis and has since been updated again for comprehensiveness, clarity, and accuracy.


About the author

The post SDH vs. CC: What’s the Difference? appeared first on 3Play Media.

]]>
What are Forced Subtitles? https://www.3playmedia.com/blog/what-are-forced-narrative-subtitles/ Tue, 14 Feb 2023 14:52:13 +0000 https://www.3playmedia.com/blog/what-are-forced-narrative-subtitles/ • Download the [FREE] Checklist: Dubbing We previously covered SDH subtitles, non-SDH subtitles, and when they’re used; the difference between SDH subtitles and closed captions; and how subtitles vary from closed captions in general. That leaves us a common yet important subtitle type that most viewers never actually have to toggle on: forced narrative subtitles....

The post What are Forced Subtitles? appeared first on 3Play Media.

]]>

  • Subtitling

What are Forced Subtitles?


Download the [FREE] Checklist: Dubbing


We previously covered SDH subtitles, non-SDH subtitles, and when they’re used; the difference between SDH subtitles and closed captions; and how subtitles vary from closed captions in general. That leaves us a common yet important subtitle type that most viewers never actually have to toggle on: forced narrative subtitles.

A number of subtitling types exist in the world of video translation and localization services. The most commonly used subtitles include: 

  • Subtitles for the Deaf and Hard of Hearing (SDH)
  • non-Subtitles for the Deaf and Hard of Hearing (non-SDH)
  • Forced Narrative (FN) 

Forced narrative subtitles are crucial to supporting audience comprehension in a number of programs, regardless of the genre. So why is that? In this blog, we’ll explore what forced narrative subtitles are, what they look like, and when to use them.

What are Forced Subtitles, and What Purpose Do They Serve? 

Forced narrative (FN) subtitles, sometimes referred to as forced subtitles, are used to clarify pertinent information meant to be understood by the viewer. FN subtitles are overlaid text used to clarify dialogue, burned-in texted graphics, and other information that is not otherwise explained or easily understood by the viewer. Forced narrative subtitles are typically used in video translation and localization workflows to ensure any viewer can understand critical textual elements displayed on screen.

Forced narrative subtitles broaden the viewing experience across a wide range of countries, languages, and devices. FN subtitles are delivered as separate timed text files; therefore, they are not burned into the video. 
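
To make "separate timed-text files" concrete, here is a minimal illustrative sketch (our own helper, not a 3Play deliverable spec): a forced narrative cue is an ordinary timed-text cue, here built in SRT form to translate a short line of foreign dialogue.

```python
# Build a single SRT-style cue: index, "start --> end" time line, text.
def srt_cue(index: int, start: str, end: str, text: str) -> str:
    return f"{index}\n{start} --> {end}\n{text}\n"

# A forced cue translating a brief German greeting for English viewers.
print(srt_cue(1, "00:01:12,000", "00:01:15,000", "Good afternoon"))
```

A forced narrative track is typically just a sparse file of cues like this one, covering only the moments that need clarification.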

How are Forced Narrative Subtitles Different from Traditional Full Subtitles?
Forced subtitles clarify only the necessary information that would not be understood by the audience. The subtitles are “forced” because a viewer will not have to toggle the subtitles on to see them.
 

A full subtitle file translates the entirety of a program’s content but must be toggled on by the viewer. It may or may not contain forced narrative content, depending on the viewing platform and other factors, such as timing. In other words, information a forced narrative file would carry, like the translation of a sign or other on-screen text that is normally not translated in full subtitle files, may be dropped if dialogue occurs at the same time the text is displayed; dialogue translation takes precedence over forced narrative elements in these cases.

What Do Forced Subtitles Look Like?

Many OTT providers will not display forced subtitles unless the Subtitles/CC settings are set to “off.” That being said, some platforms, like Netflix, incorporate forced narrative content into full subtitling and closed caption files.

When forced narrative subtitles are displayed on their own, their appearance can mirror that of typical subtitling or closed captioning files. And much like subtitles and captions, the visual appearance of FN subtitles varies depending on the platform, player, television, or other viewing device.

 

Adding dubbing or voice-over to your video? This checklist covers everything you need to consider 💬

How Are Forced Subtitles Used?

Forced narrative subtitles are commonly used in several scenarios. Let’s explore these different use cases for FN subtitles to better understand what they are and how they work.

Sporadic Foreign Language

Although a film may be in one source language, occasionally certain characters will use a phrase or short segment of a different language. 

Person speaking on the phone. A speech bubble above the person reads "Guten tag." Below, a forced narrative subtitle reads "Good afternoon."

One scenario might be a German character living in the United States who makes a phone call to a family member where they speak in German. If the information during this scene is important to the plot and overall understanding of the movie or show, FN subtitles will be used to translate the conversation.

 

Translation of Labels

Sometimes burned-in text graphics are used to enhance the viewing experience. Oftentimes, these are labels for locations, names, or dates. Since they are burned into the video in the original language, FN subtitles can be used to translate these into another language for viewers.

Silhouette of Boston, Massachusetts with Chinese characters written above it. Below, a forced narrative subtitle reads "Boston, Massachusetts."

This image showcases an example of a film containing a location label in the original language at the top. When shown in the United States, English FN subtitles would be used to translate the city name for English-speaking viewers to understand.

 

 

 

Other Forms of Communication

Forced narrative subtitles are helpful when other forms of communication are showcased in a video, such as sign language, or fictional languages such as Dothraki in Game of Thrones or the Elvish dialects in The Lord of the Rings.

Person using sign language with blackboard with a sketch of a tree behind them. Below, a forced narrative subtitle reads "Today we're learning about trees."

For example, if a character communicates in sign language, forced narrative subtitles would be used to clarify the meaning for viewers who aren’t familiar with the language. This example shows forced narrative subtitles below a teacher communicating via sign language.

 

 

Transcribed Dialogue

Sometimes forced narrative subtitles are used for transcribed dialogue in the same language. This is done to assist audience members when audio is inaudible or distorted.

Police cruiser chasing a car with explosions behind them. Below, a forced narrative subtitle reads "We're in pursuit!"

It may be hard to hear dialogue in an action movie with a lot of background noise, or in a documentary with poor audio quality. In either of these cases, FN subtitles could be used to clarify dialogue for the viewer.

 

 

Forced Narrative Subtitling with 3Play Media

Did you know 3Play Media creates forced narrative subtitles for video content?

Our experienced translation and subtitling team creates forced narrative subtitles for video content across networks and major OTT platforms. View our plans, and get in touch with 3Play Media to get started!

Not sure if you need forced narrative subtitles?

We’re here to help. Our team is filled with experienced localization professionals who have created countless SDH, non-SDH, and FN subtitling files for a variety of networks and streaming platforms. Reach out to begin scoping your project, and we’ll help determine if forced narrative subtitling is right for your content.

Dubbing Checklist: Get your free checklist

This blog was originally published by Elisa Lewis on December 8, 2017, as “What Are Forced Narrative Subtitles?” and has since been updated for comprehensiveness, clarity, and accuracy.


About the author

The post What are Forced Subtitles? appeared first on 3Play Media.

]]>
Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals https://www.3playmedia.com/blog/why-advocates-are-calling-out-closed-captions-at-movie-theaters-and-festivals/ Tue, 07 Feb 2023 20:36:13 +0000 https://www.3playmedia.com/blog/why-advocates-are-calling-out-closed-captions-at-movie-theaters-and-festivals/ • Download the [FREE] Checklist: Caption Reformatting Open captioning is back in the forefront of accessibility advocates’ minds after Sundance Film Festival’s 2023 dramatic jurors Marlee Matlin, Jeremy O. Harris, and Eliza Hittman walked out of a film screening after Matlin’s closed captioning device malfunctioned and no other captioning alternatives were available to her and...

The post Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals appeared first on 3Play Media.

]]>

  • Captioning

Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals


Download the [FREE] Checklist: Caption Reformatting


Open captioning is back in the forefront of accessibility advocates’ minds after Sundance Film Festival’s 2023 dramatic jurors Marlee Matlin, Jeremy O. Harris, and Eliza Hittman walked out of a film screening after Matlin’s closed captioning device malfunctioned and no other captioning alternatives were available to her and other d/Deaf and hard of hearing audience members.

Before this incident at Sundance, the issue of closed captioning at movie theaters and festivals had long been debated by filmmakers and viewers alike. Many in the d/Deaf and hard of hearing communities have called for film screenings to include permanent, burned-in open captions. The current closed captioning solution for film screenings relies on captioning devices, which are often plagued with technological and user experience issues.

But what exactly are open captions, and why are accessibility advocates passionate about adding them to films screened at movie theaters and festivals? 

In this blog, we’ll discuss the current state of closed captions at movie theaters and festivals; why accessibility advocates are calling on the media and entertainment industry to move toward open captioning for films; and discuss artistic, cost, and audience loss concerns many filmmakers have about adding open captions to movies.

The State of Closed Captions at Movie Theaters and Film Festivals

Cinema entrance

ADA Requirements for Movie Theaters

Movie theaters are required to provide and maintain closed captioning and audio description equipment for digital films that are produced with accessibility features, according to a Final Rule revising the Americans with Disabilities Act (ADA) Title III. 

Additionally, theaters are required to provide notice to the public about the availability of accessibility features and ensure that staff is available to assist patrons with equipment.

How Movie Theater Closed Captioning Devices Work

The National Association of the Deaf (NAD) states that the two types of captioning equipment available in theaters are Sony Entertainment Access Glasses and CaptiView:

Sony Entertainment Access Glasses

Captions are transmitted to a wearable wireless receiver device, which viewers wear while watching a film. Captions appear overlaid on the screen through the lenses.

CaptiView 

A small display with a flexible arm is attached to the arm of the seat or cupholder. Captions are transmitted to the device and appear on the display screen.

The Closed Captioning User Experience at Theaters and Festivals

Many accessibility advocates and people who use closed captioning find the user experience of captioning devices in their current state difficult. In the last year alone, multiple disabled people who use captioning devices have lamented the poor user experience of the current technology, including filmmaker Alison O’Daniel and advocate Shari Eberts.

To get further insight into the captioning issues at movie theaters and film festivals, we chatted with Matt Lauterbach, a filmmaker and accessibility advocate who founded All Senses Go and serves as ReelAbilities Film Festival Co-Director in Chicago. 

Lauterbach said that “a lot of what’s happening is intentions that aren’t yet matched by an understanding of what’s involved” when it comes to accessibility at film festivals and movie theaters. He noted that filmmakers generally want to reach a universal audience and be accessible to all but are facing technological and procedural constraints to get to the point where films are truly accessible. In the meantime, closed captions remain a way for filmmakers and theaters to provide a compliant solution without taking a “visible stand” on the issue.

“Open captions are a visual stand [for inclusion].” – Matt Lauterbach

Lauterbach works with many filmmakers and caption users who support the use of open captions over closed captioning devices in movie theaters and film festivals. He explained that captioning technology can be cognitively draining, straining on the eyes, and even cause users to miss content in screenings due to the need to look back and forth from a device to the screen. “It’s a tough user experience,” he said.

Besides the user experience, Lauterbach also noted some basic technological functions in captioning devices that are prone to disrupt users. 

“The device needs to be set to the proper theater,” he said. “You might get a caption device set to theater 7, and it’s set to theater 6. You then need to bring it back to get it fixed [during the movie]. That’s tough.” 

On top of incorrect theater settings, dead batteries and uncharged devices are a common issue, not to mention theater and festival staff who aren’t trained on how to use or troubleshoot captioning devices.

Why Accessibility Advocates Want Open Captions

Accessibility symbol

When it comes to captioning at movie theaters and film festivals, many accessibility advocates and disabled users have aligned on adding open captioning to all screenings. Open captions, similar to burned-in SDH subtitles, provide a permanently accessible way to view dialogue and sound effects on screen. Advocates prefer open captions over closed captions for film screenings due to their more inclusive user experience.

What are Open Captions and How Do They Work?
Open captions are permanently burned into a video so that the viewer cannot turn them off. Because open captions are part of a video, they are supported by all video players and devices. Open captions eliminate rendering inconsistencies across different video players and devices.
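
For illustration only: outside the theatrical DCP world, one common way to produce burned-in captions is ffmpeg’s `subtitles` filter (this assumes an ffmpeg build with libass; file names here are hypothetical). The sketch below only assembles the command rather than running it:

```python
# Assemble an ffmpeg invocation that burns an SRT file into the picture
# via the "subtitles" video filter (requires libass support in ffmpeg).
def burn_in_command(video: str, subs: str, output: str) -> list[str]:
    return ["ffmpeg", "-i", video, "-vf", f"subtitles={subs}", output]

print(burn_in_command("film.mp4", "film.srt", "film_oc.mp4"))
# ['ffmpeg', '-i', 'film.mp4', '-vf', 'subtitles=film.srt', 'film_oc.mp4']
```

Because the text is rendered into the video frames themselves, the result plays back identically on every player and device.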

According to Variety, many international film festivals, including Cannes and Venice, already include open captions or subtitles in multiple languages on the screen, and Sundance’s 2023 dramatic jury “repeatedly expressed concerns to both Sundance and filmmakers that movies playing at this year’s festival should come with open captions.”

Open captioning for movies has become more mainstream in the last few years, with some theaters and filmmakers adopting the practice to make films more inclusive for d/Deaf and hard of hearing viewers.

Do you need to update your existing caption files? 👀

Filmmakers’ Concerns About Open Captions

Filmmakers holding a camera, clapboard, and boom microphone.

The enormous progress being made with accessible film experiences at movie theaters and festivals has not come without pushback. Some filmmakers and viewers find open captions to be too costly or distracting. Even Lauterbach admits that there are “legitimate artistic concerns” when it comes to open captions on films. 

“[It] depends on what you as a venue want to value. Film festivals are less profit-motivated and often have inclusive missions. To really practice what they are preaching, I think open captions are one of the strongest symbols you can send.” – Matt Lauterbach

Some creators, particularly disabled filmmakers, strongly believe in the benefits of open captioning and make it part of their art rather than an obligatory element. For example, filmmaker Alison O’Daniel’s 2023 Sundance debut, The Tuba Thieves, includes open captions specifically crafted to be part of the art itself. Additionally, the use of certain types of SDH subtitles can support numerous customizations so that filmmakers and producers can curate the look and feel of the subtitles to align with a film’s other artistic elements.

For filmmakers with open captioning concerns, the issue is less about intentional exclusion and rather one about production costs and viewer experience. But are these concerns legitimate?

Cost

The issue of cost for the creation of an open-captioned print of a film is often cited by filmmakers as a barrier to offering open captions. Regarding the most recent incident at Sundance, several filmmakers reportedly brought up concerns about costs and time associated with the creation of an open-captioned film print, in addition to fears that burned-in captions could negatively impact a film’s asking prices for distribution.

In response, Lauterbach said that the Digital Cinema Package (DCP), a collection of files that includes caption formats used at film festivals and theaters, can actually be formatted as both closed and open captions without a need for additional quality control or much of a difference in overall cost. When a captioner creates a DCP caption file, it’s a matter of toggling settings on and off via the DCP.

If a filmmaker is not utilizing DCP specs, it can be a different matter in terms of time and cost. For example, a festival or theater may require different exports, which can add complexity to the open captioning or SDH subtitling process. Still, if a film is closed captioned, it can easily be reformatted to an open-captioned or SDH-subtitled version, regardless of export.

Many accessibility advocates say that the cost of not including a major group of people is greater than the cost of adding open captions or subtitles to film screenings because of the enormous segment of consumers being excluded. The U.S. d/Deaf, hard of hearing, and hearing loss communities consist of over 30 million people. Plus, millions of non-native English speakers, neurodivergent audiences, and viewers who prefer media with captions turned on make up additional viewing groups who have helped fuel the unprecedented usage of captions in recent years.

Audience Loss

Another commonly cited issue around open captioning surrounds the loss of audience over having permanent captions or subtitles on the screen. Lauterbach did not want to dismiss these concerns but noted that having open captions does not guarantee audiences will have a bad viewing experience.

“You may find that you gained audiences. Captions [are often] compared to curb cuts–many people benefit from it, even if your hearing is crystal clear.” – Matt Lauterbach

A recent Preply study in the U.S. found that only 22% of viewers find subtitles more distracting than helpful, from which it can be inferred that over three-quarters of potential viewers don’t find subtitles distracting. The study also found that:

  • 74% of viewers say subtitles help them comprehend the plot.
  • 68% say subtitles help hold their attention on the screen.
  • 55% say they often have to rewind after missing things said when they don’t use subtitles.

Lauterbach added that while he is not disabled, he is a dedicated caption user due to captions helping reinforce characters’ names, clarifying dialogue, and bringing to light other elements you can miss during a viewing.

Making Movies More Accessible

Film screen with dramatic imagery surrounded by audience seats

As the news cycle moves beyond the renewed calls for open captioning at movie theaters and film festivals, the question remains: How can venues and creators ensure films are inclusive and accessible to all? 

At 3Play Media, accessibility is always on our minds. We want to help filmmakers learn about the benefits and limitations of closed and open captioning so they can make an informed decision about what kind of service is best for them.

3Play has a robust offering of closed captioning, open captioning, and SDH subtitling services designed to give cinematic content creators peace of mind when it comes to films screened at movie theaters, festivals, streaming platforms, or broadcast television. Whether you are submitting a film and require Simple DCP specifications or you want a curated, customized experience for your film’s SDH subtitles, 3Play will help you build accessibility into the process for a future-proof solution that is inclusive to all audiences.

Do your captions and subtitles need a refresh? Our Caption Reformatting Checklist can help! Free download.


About the author

The post Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals appeared first on 3Play Media.

]]>
Why Reformatting is the Best Way to Edit Existing Captions and Subtitles https://www.3playmedia.com/blog/why-reformatting-is-the-best-way-to-edit-existing-captions-and-subtitles/ Mon, 09 Jan 2023 14:00:42 +0000 https://www.3playmedia.com/blog/why-reformatting-is-the-best-way-to-edit-existing-captions-and-subtitles/ • Download the [FREE] Checklist: Caption Reformatting Have you ever watched a rerun of your favorite television show with the original captions on and noticed that they don’t seem entirely correct? Closed captions could be delayed, covering graphics, paraphrasing dialogue…or maybe all of the above. Now, you may wonder how these captions slipped through the...

The post Why Reformatting is the Best Way to Edit Existing Captions and Subtitles appeared first on 3Play Media.

]]>

  • Captioning

Why Reformatting is the Best Way to Edit Existing Captions and Subtitles


Download the [FREE] Checklist: Caption Reformatting


Have you ever watched a rerun of your favorite television show with the original captions on and noticed that they don’t seem entirely correct? Closed captions could be delayed, covering graphics, paraphrasing dialogue…or maybe all of the above. Now, you may wonder how these captions slipped through the cracks of quality control (QC), but you could be surprised to learn that the captions likely didn’t make it through the QC process at all. Why? Because the captions were never reformatted to the video content they’re paired to.

Instead, an existing caption file (usually created for an older or original version of the video) was paired with an edited video–in this case, one edited for a rerun on another network or streaming service–when it should have been professionally reformatted. Reformatting is the best way to edit existing closed captions or subtitles and truly ensure their accuracy and compliance.

Reformats are an extremely important captioning and subtitling service, yet are seldom discussed when it comes to media accessibility. Caption/subtitle reformats are a crucial step in the editing of existing captioning and subtitling files for content that’s been adjusted in any way–even for seemingly minor things such as the removal of commercial breaks. Videos with these kinds of changes usually involve an update to the original caption or subtitle file through reformatting.

What is caption or subtitle reformatting?

Reformats update a caption or subtitle file when a video has been changed or edited in some way that makes it different from the original video. Captions and subtitles need to match the video they are being paired to, and if the video is different from the one that the caption/subtitle file originated from, the captions/subtitles are going to be incorrect.

Put simply, if you edit your video to an updated version, it will probably impact the captions or subtitles.

The issues that make a caption/subtitle file eligible for reformatting can range from very minor, barely noticeable changes to egregious misalignments in dialogue and/or timing. In rare cases, a reformat may not be necessary, but this is only if the caption/subtitle file is not affected by the video changes.

It is important to note that reformats aren’t usually meant for revising simple spelling or grammatical mistakes, nor are they meant for a caption timed a few frames behind dialogue. Think of reformats as editing caption and subtitle files on a larger scale as compared to singular revisions of files. Sometimes the two services collide, but reformats generally take a bit more time, depending on the scope of changes required.

When do you need a reformat?

Reformatting is necessary when there are changes made to the content of a video. While it primarily affects broadcast and streaming captions/subtitles on television and OTT streaming platforms, reformats are suggested and often necessary to have accurate captions and subtitles on any updated video.

Can I do a reformat manually?
In the case of very, very minor changes such as a short word change or spelling/grammar correction, yes, you can manually edit a caption file. At 3Play, we only recommend manual revisions to caption files if:

A caption or subtitle file with significant changes to timing, transcription, or format should be handed off to professional captioners with experience in reformatting to ensure fully updated, compliant files.

 

How does reformatting work?

Reformats are completed by professional captioners who edit the caption or subtitle file alongside the updated video content until both are in sync and the content between the video and the caption/subtitle file match. Reformats are usually done within professional captioning software due to its ability to import a variety of file types and videos, allowing captioners to make the most efficient edits as possible.

The time it takes to reformat a file varies based on the changes required, but on average, most customers can expect a reformat to be completed in approximately half the time it takes to originate a caption or subtitle file. The larger or more numerous the changes are, the longer the reformatting process can take.

 

Do you need to update your existing caption files? 👀

 

Why do I need a reformat to edit my existing captions or subtitles?

There are many reasons why you may need a caption/subtitle reformat, but sometimes it can be difficult to know if you truly need one. So let’s review some of the top scenarios in which a reformat would be required for your content.

Re-timing

A person touching the hands of a clock.

Making any sort of timing adjustments, whether it’s cutting material or adding material, necessitates a reformat. When you add or remove space to a video, even if it’s just for commercials and contains no dialogue, you still need to account for that updated timing in the caption or subtitle file so that it can be offset accordingly and properly synchronized.
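
The core of a simple re-timing pass can be sketched like this (an illustration, not 3Play’s tooling); a real reformat applies a different offset to each segment of the program between edit points:

```python
# Shift a caption cue's start/end times (in seconds) by a fixed offset.
def shift_cue(start_s: float, end_s: float, offset_s: float) -> tuple[float, float]:
    return (start_s + offset_s, end_s + offset_s)

# Removing a 2-minute commercial break pulls all later cues 120 s earlier.
cues = [(612.0, 615.5), (700.25, 704.0)]
print([shift_cue(s, e, -120.0) for s, e in cues])
# [(492.0, 495.5), (580.25, 584.0)]
```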

Transcription adjustments 

Hands on a keyboard

Changes to voice-over, dialogue, sound effects, and music need to be reflected in an updated caption or subtitling file. For example, swapping out music is fairly common on re-aired content for licensing reasons; if you change music–especially to a song with different lyrics or an entirely different mood–you need to ensure the caption/subtitle file captures this change in the transcription and timing.

Changes to graphics

Compilation of hands with graphics and textual imagery

The addition or removal of graphics, burned-in subtitles, or credits means that captions/subtitles must be manually adjusted by a captioner so that they don’t cover them. Because placement is an FCC requirement, it’s particularly important that a file is properly reformatted to accommodate these changes.

Profanity & censorship updates

Speech bubbles with abstract exclamations

Profanity and censorship guidelines can vary based on air time or distribution to other networks and platforms. It’s critical to ensure that the captions/subtitles match the audio when it comes to profanity, whether it is bleeped, dropped, or uncensored; some broadcasters and networks can face penalties for mismatches.

Video frame rate conversions

A person placing parts of a video together

Video frame rate conversions always require a reformat to adjust timing changes in the caption/subtitle file. Sometimes all that changes in a video during editing is the frame rate itself; however, this is a crucial adjustment to make to the caption file, as significant timing drift can occur when a file’s frame rate does not match the video’s. Frame rate changes can happen for a number of reasons but are most common when prepping a video for online streaming or international distribution.
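
A back-of-the-envelope illustration of that drift: a cue addressed by frame number lands at a different wall-clock time once the frame rate changes, and the error grows over the length of the program.

```python
# Convert a frame number to playback time at a given frame rate.
def frames_to_seconds(frame: int, fps: float) -> float:
    return frame / fps

frame = 43_200  # 30 minutes of material at 24 fps
print(frames_to_seconds(frame, 24.0))  # 1800.0 s
print(frames_to_seconds(frame, 25.0))  # 1728.0 s -- the cue now fires 72 s early
```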

Outdated captions and subtitles

Wavy analog television bars and tone

It is uncommon to come across captions and subtitles that haven’t been updated to the FCC’s captioning quality standards, but it does occasionally happen. These caption and subtitle files were typically created prior to 2013 and may include older styles of formatting and paraphrasing. If captions and subtitles are verbatim, synchronized, and appropriately moved for graphics, these can still be acceptable for broadcast; otherwise, they need to be updated. Note that it is recommended to reformat outdated files, even if they meet current FCC standards, to achieve optimal readability and accessibility for viewers.

Reformatting with 3Play Media


Did you know that 3Play Media provides reformatting services for existing caption and subtitle files?

Our experienced team of captioners can quickly and easily reformat any files you have in need of adjustment. Simply talk to our sales reps or account managers about our reformat add-on options to get started.

Not sure if you need a reformat?

We’re here to help. Our team is filled with experienced captioning professionals who have reformatted hundreds (even thousands!) of hours of caption/subtitle files for updated video content. Get in touch with us to begin scoping your project, and we can determine if reformatting is right for you.

Just want a quick fix?

Try our Caption & Subtitle Editor to quickly make spelling and other small adjustments to captions and translations.

 

Do your captions and subtitles need a refresh? Our Caption Reformatting Checklist can help! Free download.



The post Why Reformatting is the Best Way to Edit Existing Captions and Subtitles appeared first on 3Play Media.

]]>
What is an EEG Caption Encoder? https://www.3playmedia.com/blog/what-is-an-eeg-caption-encoder/ Tue, 03 Jan 2023 19:51:24 +0000 https://www.3playmedia.com/blog/what-is-an-eeg-caption-encoder/ The Complete Guide to Caption Encoders [Free eBook] Throughout the past few decades, caption encoders have allowed televisions to receive closed captioning transmissions, and they remain widely used for many broadcast and streaming workflows today. There are several types of encoder technology available to help simplify caption delivery of your broadcast and streaming content; in...

The post What is an EEG Caption Encoder? appeared first on 3Play Media.

]]>

  • Captioning

What is an EEG Caption Encoder?


The Complete Guide to Caption Encoders [Free eBook]


Throughout the past few decades, caption encoders have allowed televisions to receive closed captioning transmissions, and they remain widely used in many broadcast and streaming workflows today. There are several types of encoder technology available to simplify caption delivery for your broadcast and streaming content; in this blog, we will highlight EEG caption encoders like iCap and give an overview of what encoding workflows look like.

What is a caption encoder?

Encoders let a broadcaster receive and encode captions simultaneously, allowing them to be displayed alongside a television program or video in real time.

Modern encoder technology took a major step forward in 1993, when Federal Communications Commission (FCC) rules began requiring TVs to include a built-in decoder for caption signals, allowing viewers to turn captions on or off on their televisions.

Closed vs. Open Captions
“Closed captions” means a viewer is able to toggle on/off the captions, whereas “open captions” are always on.

What is an EEG encoder?

An EEG encoder refers to a captioning encoder manufactured by EEG, such as iCap and iCap Falcon.

iCap encoders

These EEG caption encoders run iCap software for added functionality, such as sending program audio to the captioner, but can also be set up to use standard IP connections if desired.

iCap-enabled encoders are manufactured by EEG, and with their direction, you can set up the encoder to feed both audio and video to the captioner, making it easier to monitor and caption effectively. 

The video and audio are converted to a data stream in the iCap cloud, which is accessible via an Access Code. Captions are routed through the cloud and into the encoder, where they are married to the stream and readied for broadcast.

iCap encoders can be bought or rented for any type of event or broadcast. They are compatible with a number of broadcast networks, cable channels, OTT platforms, educational institutions, and more.

iCap Access Codes
iCap Access Codes typically look something like this:

Access Code: TV2021

iCap Falcon

iCap Falcon is a virtual encoder offered by EEG. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. iCap Falcon functions similarly to a normal EEG encoder, but is hosted within the iCap cloud.

In general, virtual encoders like iCap Falcon are useful for events that are streamed online or singular events that don’t necessitate the purchase of permanent equipment. These encoders add closed captioning data and reroute the video stream to the desired platform such as YouTube, Facebook, or Vimeo. 

iCap Falcon Compatibility
iCap Falcon is compatible with a variety of streaming video platforms including Facebook, YouTube, Twitch, and more.

What does a closed captioning encoding workflow look like?


Most closed captioning encoder workflows function like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions to the video on a specific data transmission line called line 21, the line from which televisions are mandated to decode captions.
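To make the flow concrete, here's a rough Python sketch of the pairing step; the data structures are invented for illustration and are not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class CaptionPacket:
    timecode: str   # SMPTE timecode the caption should display at
    text: str       # caption text for this cue

def encode_line21(video_frames, caption_feed):
    """Pair each caption packet with the video frame whose timecode
    matches, mimicking how an encoder embeds 608 data on line 21."""
    cues = {pkt.timecode: pkt.text for pkt in caption_feed}
    return [
        {"frame": tc, "line21": cues.get(tc)}  # None = no caption data this frame
        for tc in video_frames
    ]

frames = ["01:00:00:01", "01:00:00:02", "01:00:00:03"]
feed = [CaptionPacket("01:00:00:02", ">> Welcome back.")]
stream = encode_line21(frames, feed)
print(stream[1]["line21"])  # >> Welcome back.
```

The real encoder works on a continuous signal rather than a list, but the core job is the same: merge the caption feed into the video so a downstream decoder can pull it back out.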

The Complete Guide to Caption Encoders


This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use – to help you create accessible and engaging video content.

Download the eBook for Free

How to know if you need a caption encoder

Not sure if you need a caption encoder? Here’s a rundown of situations that require one:

  • Your program is going straight to broadcast or cable.
  • You’re streaming your live program on Facebook or YouTube.
  • Your video platform requires live captions to be embedded in the stream as 608/708 data.
  • You want viewers who do not have a video player to be able to turn on captions.
  • You want an offline captioning option.
  • You’re captioning video for kiosks and mobile devices.
  • You’re captioning video on social media platforms like Twitter or Instagram.
  • You’re creating a self-contained captioned video that can be distributed as a single asset.

Caption encoding with 3Play Media

When you need caption encoding, 3Play Media has you covered. Simply upload your video file for captioning and transcription processing. If you already have a transcript, you can use the automated transcript alignment service. Once your file has been captioned, you can order the caption encoding service and choose the appropriate encoding profile. Upon completion, you will receive an email notification and be able to download an M4V video with encoded captions.

The video will work with any player or device that supports M4V videos, including QuickTime, iPad, iPhone, iPod, iTunes, JW Player, and Flowplayer. Because the captions are soft-encoded in the video, users will be able to turn them on or off using the video player controls.

The source video that you upload can be in almost any web format that doesn’t use a proprietary codec. When ordering caption encoding, you will have the option to select an encoding profile to optimize video playback for a certain device.

For example, the iPhone5 profile transcodes your video for a target width of 1136 pixels, a frame rate of 30 frames per second, and a bit rate of 3 Mb/sec. You can also use your original source video as long as the video encoding is H.264 and the audio is AAC. The closed captions track will be added to the video and put in an M4V container.
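3Play's encoding pipeline is proprietary, but the general technique of soft-encoding a toggleable caption track into an MP4/M4V container can be reproduced with the open-source ffmpeg tool. This sketch only builds (not runs) such a command; the file names are placeholders:

```python
# Hypothetical sketch: build an ffmpeg command that copies the video and
# audio streams untouched and adds captions as a soft (toggleable)
# mov_text subtitle track inside an M4V container.
def build_soft_encode_cmd(video_in, captions_in, video_out):
    """Return the ffmpeg argument list for soft-encoding a caption file."""
    return [
        "ffmpeg",
        "-i", video_in,           # H.264/AAC source video
        "-i", captions_in,        # sidecar caption file (e.g. SRT)
        "-c:v", "copy",           # no re-encode of video
        "-c:a", "copy",           # no re-encode of audio
        "-c:s", "mov_text",       # MP4/M4V-compatible caption codec
        "-metadata:s:s:0", "language=eng",
        video_out,
    ]

cmd = build_soft_encode_cmd("source.mp4", "captions.srt", "captioned.m4v")
print(" ".join(cmd))
```

Because the captions land in their own stream rather than being burned into the pixels, any player that understands the container can offer the on/off toggle.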

Download a demo video with encoded closed captions – you’ll need to play it in a QuickTime or VLC player and make sure to enable the captions (subtitles). Please note that some versions of Windows Media Player do not support caption-encoded videos.

Note: For social media videos, you’ll need to upload your video in a format supported by the social platform (for example, Twitter takes MP4 videos). Then, order caption encoding > source with open captions.

 

The Complete Guide to Caption Encoders. Get Your Free Guide.




The post What is an EEG Caption Encoder? appeared first on 3Play Media.

]]>
How to Elevate Your Broadcast’s Live Captioning Quality https://www.3playmedia.com/blog/how-to-elevate-your-broadcasts-live-captioning-quality/ Tue, 08 Nov 2022 22:26:39 +0000 https://www.3playmedia.com/blog/how-to-elevate-your-broadcasts-live-captioning-quality/ FCC Requirements for Closed Captioning of Online Video: Are You Compliant? [Free White Paper] Television is a 24/7 industry. Networks are always broadcasting something, and much of what they’re airing is happening live, in real time. Live captioning is a critical component of these live broadcasts, but can sometimes be a source of frustration for...

The post How to Elevate Your Broadcast’s Live Captioning Quality appeared first on 3Play Media.

]]>

  • Live Captioning

How to Elevate Your Broadcast’s Live Captioning Quality


FCC Requirements for Closed Captioning of Online Video: Are You Compliant? [Free White Paper]


Television is a 24/7 industry. Networks are always broadcasting something, and much of what they’re airing is happening live, in real time. Live captioning is a critical component of these live broadcasts, but can sometimes be a source of frustration for viewers due to style inconsistencies, latency issues, and inaccuracies in transcription. Many have simply accepted that this is just the nature of live captioning. But we’ll let you in on a secret: there are some easy ways to elevate your live captioning quality from acceptable to all-star. We’ll show you how in this blog.

Set Your Live Captioner Up for Success with Prep Materials

One of the easiest and most effective ways to improve your live captioning quality is to provide prep materials to your live captioning vendor ahead of the broadcast. The pacing of live broadcasts means that live captioners must make split-second judgment calls and are ultimately driven to get audio transcribed as quickly as possible in a live environment. While live captioners can correct a word if they mistranscribe, that can be hard to do when there’s no context or materials available ahead of time. That’s where prep materials come in.

When productions provide helpful information such as proper name spellings, key terms, and wordlists, live captioners are able to reference and contextualize this information during the captioning process by creating dictionaries and glossaries of key terminology and names to have on hand as they caption. This ensures more accurate spellings and transcription of your broadcast content, automatically improving your live captioning quality.

So how do these dictionaries/glossaries work? Powerful software used by live captioners (who may be using either a stenography machine or voice writing methods to transcribe closed captions) allows them to maintain active control over the accuracy and formatting of the words they’re creating. 

For instance, say a captioner is scheduled to caption a live baseball game. They’re going to create a robust glossary of baseball vocabulary, ranging from the basic—bat, base, outfield—to the specific—dinger, WHIP, fungo. This dictionary will also include a list of personnel involved with the game: rosters of both teams, the broadcasting crew, umpires, and information about the stadium and city it’s located in.  
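Conceptually, such a glossary behaves like a lookup table of verified spellings. Here's a toy Python sketch; real stenography and voice-writing software uses far richer stroke-to-text dictionaries, and the terms and names below are hypothetical examples:

```python
# Illustrative only: a tiny glossary of verified spellings a captioner
# might prepare before a broadcast. Keys are lowercase lookup terms.
event_glossary = {
    "whip": "WHIP",            # walks + hits per inning pitched
    "mookie": "Mookie Betts",  # hypothetical roster name, verified pre-game
    "fungo": "fungo",
}

def apply_glossary(words, glossary):
    """Replace recognized terms with their verified spellings."""
    return [glossary.get(w.lower(), w) for w in words]

print(apply_glossary(["Mookie", "hits", "a", "dinger"], event_glossary))
```

The point of prep materials is simply to get entries like these into the captioner's dictionary before air, so the split-second output is already spelled correctly.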

A Strong Network Connection = Strong Live Captions

In live programming, issues with antennas and cabling can corrupt data streams and the transmission of captions. 608 and 708 closed captioning data is decoded to make captions appear overlaid on a video stream. These data streams are typically stable when transmitted over a strong network connection, giving you live captions that appear as the captioner intended.

However, poor weather, weak transmission signals, internet quality, and satellite issues can all affect these captioning streams. The result can be missing words, strange characters, caption placement changes, or even changes to the color of the captions.

Some of these factors, like the weather, can't be helped. But some, like poor internet quality, can be improved. Aim to use the most stable, highest-quality connection possible when working with your live captioning vendor, and test with them pre-broadcast.

While testing, a live captioner should use their captioning software to connect to the client’s encoder or virtual captioning session and send test captions. At this point, they should also confirm with you that captions are being properly received and that what they are hearing is correct, be it the sound of the preceding program, tone, or silence. This testing is designed to provide a clean, constant link between the captioner and broadcaster, allowing them to caption with as low latency as possible. 

In the cases where a live stream drops anyway, 3Play offers Stream Reconnect Wait Time to give you peace of mind that your captioning service will pick back up without unnecessary delay.

The Clearer the Speech, the Clearer the Captions

Most producers and captioners of live broadcasts know that audio quality can be a mixed bag. But if you’re looking to make accessibility part of the live broadcasting process from the start and get the highest quality, compliant live captions, consider implementing a few of these tips to get clearer audio during your live broadcasts. (Note that not all audio tips are going to be possible for every broadcast due to the nature of different live events, such as sporting games.)

Aim for High Quality Audio

Whenever on-air talent is speaking, they should be using a high-quality microphone. Ensure speakers are enunciating and speaking as steadily as possible into the mic line to get clear speech. If no direct mic line is present, make it a best practice to have speakers repeat important information and questions in case captioners or viewers don’t hear it the first time.

Avoid Unnecessary Background Noise

Loud cheering, applause, and overlapping chatter are unavoidable in some programs, but when possible, avoid making on-air talent and speakers compete with loud background noise by dampening sound inside studio settings.

One Speaker at a Time

Overlapping chatter is a top reason why captions may be transcribed incorrectly. When possible, ask on-air talent and speakers to speak one at a time and avoid talking over one another.

 What does the FCC say about the captioning of live and online video? ➡ 

Highly Trained Live Professional Captioners are Key

Using highly trained live professional captioners is essential for live broadcasts. Live automatic captions on their own will not be sufficient for television and OTT streaming, so it’s imperative to ensure humans are part of your live captioning process. But even if you have a live captioner doing the work, how do you know if they’re the right fit for your broadcast?

When vetting a live captioning vendor, check whether their live captioners are professionally trained in proven methods like stenography or voice writing. 3Play Media's live professional captioners are a team of in-house staff and contractors with robust training and qualifications that enable you to get your broadcast programming live captioned at a high level of quality without hassle.

Where does ASR fit into this?
ASR and auto captioning solutions aren’t usually recommended for broadcast television and high-visibility events on their own, but they can serve as a helpful reference for a live captioner who is editing ASR as they go, a backup option when a connection drops, or a way to make content accessible when professional captioners are not an option. If you have a resilient failover captioning solution, ASR can ensure captions keep going and your broadcast remains accessible while you and your captioning vendor troubleshoot any drop issues.

Be Mindful of FCC Guidelines

While no official rules currently set a numeric accuracy standard for live captioning, Federal Communications Commission (FCC) regulations and other legal precedents still compel live broadcast programming to be captioned at a high level of accuracy.

The FCC lists best practices for live captioning of televised video for both vendors and captioners, and while they do allow for some leniency in quality compared to recorded captioning quality, they suggest live captioners aim to “caption as accurately, synchronously, completely, and appropriately placed as possible, given the nature of the programming.”

3Play’s live professional captioning accuracy rates typically range between 95% to 98% or higher for live broadcasts, with a focus on comprehensibility. We also provide future-proof paths to upgrading live captions and transcripts to 99% accuracy and FCC compliance for recorded broadcasts or re-air of programming after your live broadcast has ended.

Elevating Your Live Captions

Some of the tips listed above aren’t always going to be possible due to the nature of broadcast television and the fast-paced, ever-evolving media and entertainment industry. But they’re good starting points for producers and networks who strive to build inclusivity and accessibility into the production process from the start.

It may initially require a little extra effort, but when you set high standards for the live captioning of your broadcast programming, you’re creating a better, more inclusive experience for all of your viewers.

 

FCC Rules for Closed Captioning of Online Video: Are You Compliant? Read the guide.




The post How to Elevate Your Broadcast’s Live Captioning Quality appeared first on 3Play Media.

]]>
What is an SCC File? https://www.3playmedia.com/blog/what-is-an-scc-file/ Thu, 27 Oct 2022 13:00:14 +0000 https://www.3playmedia.com/blog/what-is-an-scc-file/ Closed Captioning Best Practices for Media & Entertainment [Free eBook] For years, Scenarist Closed Caption (SCC) files have been one of the most commonly used closed captioning files on broadcast television. These closed captioning files hold CEA-608 captioning data in 29.97 drop (DF) and non-drop (NDF) frame rates, and were originally designed for usage with...

The post What is an SCC File? appeared first on 3Play Media.

]]>

  • Captioning

What is an SCC File?


Closed Captioning Best Practices for Media & Entertainment [Free eBook]


For years, Scenarist Closed Caption (SCC) files have been one of the most commonly used closed captioning files on broadcast television. These closed captioning files hold CEA-608 captioning data in 29.97 drop (DF) and non-drop (NDF) frame rates, and were originally designed for usage with analog television, VHS, and DVDs.

Between the rising popularity of streaming and web video content over the past decade, CEA-708 closed captioning requirements, and the end of NTSC broadcast transmissions in the United States, it would be easy to write off SCC files as an outdated relic of broadcast's past or to dismiss their future in a digital landscape.

But this hasn’t been the case for SCCs. In fact, SCC closed captioning files have adapted to video content outside of traditional broadcast, like web and streaming, becoming an incredibly versatile closed caption file type used across platforms and industries. Below, we’ll take a look at the evolution and capabilities of SCC files and find out just what makes these caption files so special.

A Brief History of SCC Files

SCC files were originally developed by Sonic and hold 608 captioning information by design, meaning the files are transmitted via Line 21, a hidden data stream containing closed captions and V-chip data. The 608 closed captioning format was created for analog television but remains in use alongside 708 data today. SCC files use hexadecimal values to encode captioning information, which is deciphered by closed captioning decoders.
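To see what that hexadecimal encoding means in practice, here's a minimal Python sketch that decodes printable CEA-608 character pairs. It ignores control codes entirely and treats all printable codes as plain ASCII, though a few (for example, 0x2A, which is 'á' in 608) actually differ, so a real decoder has more work to do:

```python
# Minimal sketch: decode printable CEA-608 character pairs from SCC hex
# words. A real decoder must also handle control codes, parity errors,
# and the few codes (e.g. 0x2A = 'á') that differ from plain ASCII.
def decode_608_word(hex_word):
    """Decode one 16-bit SCC word (e.g. 'c845') into up to two characters."""
    chars = []
    for byte in (int(hex_word[:2], 16), int(hex_word[2:], 16)):
        code = byte & 0x7F             # strip the odd-parity bit
        if 0x20 <= code <= 0x7E:       # printable basic-range codes only
            chars.append(chr(code))
    return "".join(chars)

# 'c845 4c4c 4f80' spells HELLO ('80' is null padding with parity)
print("".join(decode_608_word(w) for w in "c845 4c4c 4f80".split()))
```

The odd-parity bit on each byte is why the same letter can appear as two different hex values in a raw SCC file.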

Broad Support for Stylistic Elements

SCC files contain the baseline text and timing information necessary for all closed caption files, but also support caption styling information such as positioning, italics, and music notes – all of which are valuable tools for expert captioners to visually convey audio elements, such as dialogue spoken off-screen and lyrics being sung. These stylistic captioning elements are particularly important when it comes to compliance with FCC requirements.

As with other CEA-608 formats, SCC files support a maximum of 32 characters per line for closed captions, and the stylistic elements of the captions must be implemented on the captioner’s end; they cannot be adjusted by the viewer unless the file has been converted to a 708 format.

 

SCC Style Element: Supported?
Positioning: Yes
Italics: Yes
Music notes: Yes
Special characters: Latin-language characters supported
Line limits: 32 characters per line
Number of caption lines: Up to 4 per caption
Viewer can make adjustments: Only if file is up-converted to support 708 formats
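The line and row limits in the table above amount to a simple wrapping rule, sketched here in Python (a real captioner also balances line breaks for readability and sense, which this ignores):

```python
import textwrap

def wrap_608_caption(text, max_chars=32, max_lines=4):
    """Wrap caption text to CEA-608 limits: 32 characters per line,
    up to 4 lines per caption. Overflow must move to the next cue."""
    lines = textwrap.wrap(text, width=max_chars)
    return lines[:max_lines], lines[max_lines:]  # (this cue, overflow)

cue, overflow = wrap_608_caption(
    "Captions and subtitles are important timed text solutions "
    "that make video content accessible to all audiences."
)
for line in cue:
    print(line)
```

Anything past four lines has to be carried into a later cue, which is one reason caption timing and text length are edited together rather than separately.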

 

 Everything you need to know about closed captioning for the media & entertainment industry 🎬 

Versatility

SCC files are popular due to their flexibility and ability to conform to traditional broadcast video needs, modern video content platforms, web players, video editing software, and more.

How SCCs Support 708 Closed Captioning Data

SCC files only store CEA-608 caption data. These files are “up-converted” to include 708 data where appropriate when they are run through a decoder or used to embed caption tracks into a master video file. Common “up-conversion” workflows can include exporting into 708-supported formats, such as MacCaption Closed Caption (MCC) files.

Some broadcasters and streaming services will automatically “up-convert” SCC files as part of the post-production process; some may opt for an additional file type for delivery. If you’re unsure of whether your SCC file will end up supporting 708 closed captioning, reach out to your network or platform contact to clarify deliverable file types to ensure you are in compliance with their requirements.

How SCCs are used in Video Editing Software

SCC files are compatible with most video editing software, like Adobe Premiere Pro and Final Cut Pro, making them a favorite of post-production professionals and expert captioners, who use the SCC for importing, exporting, and/or conversion to other caption file formats. If you're looking for a caption file format that will work with your software, chances are an SCC will meet your needs.

How SCCs are used in Streaming & Web Videos

Some of the most relevant and important mediums that SCCs have adapted to in recent years are web, OTT, and streaming platforms. SCC files are supported by multiple web video players like YouTube (it's their preferred caption file type!) and Vimeo. SCCs are also increasingly used and accepted on a variety of popular OTT platforms such as Amazon Prime Video, PBS, and Warner Bros. Discovery (which counts HBO Max among its streaming services and networks).

So how do SCCs work on non-broadcast players and platforms such as these? They function as sidecar files. While most standard streaming players are incapable of reading caption data that's been embedded into a video file, an SCC file can work as a sidecar text file if the player is able to read and interpret its data, much as it would for an SRT or WEBVTT file.

The Decoding Process

Unlike a WEBVTT or SRT file, SCC files cannot be edited directly unless you have professional closed captioning software. This is because SCC files require decoding, no matter the destination of the video. If you were to open an SCC file in a text editor, you'd find raw data in the form of numbers and letters, arranged to be interpreted by caption decoders, like so:

A screenshot of an SCC file opened in a text editor. Raw data in the form of numbers and letters arranged in hexadecimal values.

The timecodes of SCC files are in SMPTE format and, as previously mentioned, are always in either 29.97 DF or NDF frame rates. The frame rate is indicated within the SCC file by a colon before the frame field for NDF timecodes and a semicolon for DF timecodes. The example above has a starting timecode of 01:00:00:03, meaning the file is timed to a 29.97 NDF frame rate. If the timecode were formatted as 01:00:00;03, that would make it a 29.97 DF frame rate.

When SCC files are exported for a frame rate that differs from 29.97 DF/NDF (at 23.98 or 25 frames per second, for example), the timecodes are supported via specialized timecode math that broadcast captioning software is designed to calculate, ensuring the caption file remains synchronized to the video.
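As a simplified illustration of that timecode math, here's a Python sketch that reads the colon/semicolon convention and converts a 29.97 fps SMPTE timecode to a total frame count (real broadcast software handles many more edge cases):

```python
def smpte_to_frames(tc):
    """Convert a 29.97 fps SMPTE timecode to a total frame count.
    A semicolon before the frame field marks drop-frame (DF); a colon
    marks non-drop-frame (NDF), matching SCC conventions."""
    drop = ";" in tc
    hh, mm, ss, ff = (int(p) for p in tc.replace(";", ":").split(":"))
    total = ((hh * 60 + mm) * 60 + ss) * 30 + ff
    if drop:
        minutes = hh * 60 + mm
        # DF skips 2 frame numbers each minute, except every 10th minute
        total -= 2 * (minutes - minutes // 10)
    return total

print(smpte_to_frames("00:01:00;02"))  # 1800 (frames ;00 and ;01 were dropped)
print(smpte_to_frames("00:01:00:02"))  # 1802 (NDF counts straight through)
```

Drop-frame numbering exists because 29.97 fps runs slightly slower than a true 30 fps clock; skipping those frame numbers keeps the displayed timecode aligned with wall-clock time.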

While SCC files can technically be edited in a manual text editor, it is inadvisable to do so unless you are a closed captioning professional.

SCCs: the optimal caption file type?

The consistency and versatility of SCC files make them one of the top choices of closed captioners, broadcast networks, streaming platforms, and anyone looking for a caption file that will work across a variety of video destinations. Though SCC files lack some advanced formatting features and character support, they are likely to meet many of your broadcast and/or web video accessibility needs and help you remain compliant.

Always check with your delivery contacts about exactly which files your network or platform supports before making the big decision about which closed captioning file(s) to go with. Not sure where to start? Our team can help you determine what’s right for you so that you can create accessible, searchable, and engaging videos for your audience.

 

Closed captioning best practices for media and entertainment. Read the guide.




The post What is an SCC File? appeared first on 3Play Media.

]]>