Captioning Archives - 3Play Media
https://www.3playmedia.com/blog/tag/captioning/

Captioning and Transcription for Higher Education
https://www.3playmedia.com/blog/captioning-transcription-higher-education/ – Wed, 21 May 2025


  • Captioning

Captioning and Transcription for Higher Education


Strategizing Accessibility in Higher Education [Webinar]


There are many benefits to offering captions for online video in higher education institutions. Closed captioning in higher education makes videos more accessible to students who are deaf or hard of hearing. By prioritizing video accessibility, colleges and universities can ensure that more students have equal access to educational content and media.

Importantly, providing accessible video content is not just a best practice—it is a legal obligation. Under various legislation, colleges and universities are required to ensure effective communication with individuals with disabilities.

While captions are primarily intended to make videos accessible to people with disabilities, they can also benefit all students. One study revealed that 80% of people who use captions are not deaf or hard of hearing – they find that captions improve their engagement, focus, and comprehension.

Another study by the University of South Florida St. Petersburg (USFSP) explored the impact of captions and transcripts on student learning. The results shed light on the value of captions in the classroom and showed that accessible video could have a positive impact on students’ performance.

What’s Important for Captioning in Higher Education?

Caption Accuracy

Inaccurate captions are frustrating for anyone, but for students they are particularly detrimental to learning and performance. Many students rely on captions to assist them in their studies, especially those who are:

  • D/deaf or hard of hearing
  • English language learners or non-native English speakers
  • Individuals with learning disabilities

Accurate captions are a necessity for higher education institutions because students must have access to accurate learning materials, including educational videos.

Notably, in 2019, the courts affirmed that caption accuracy is critical to accessibility, as seen in the decisions in the NAD v. Harvard and NAD v. MIT accessibility suits.

Timeliness

Captions must be made available simultaneously with the video content to ensure that all students have equal access to instructional materials. This is especially critical in educational environments where videos are used as part of core instruction, assignments, or assessments.

When captions are delayed, students who are deaf or hard of hearing, or who rely on captions for comprehension, may fall behind or miss essential information. This creates a situation of unequal access, which can not only disadvantage the student academically but may also place the institution at risk of noncompliance with federal accessibility laws.

Billing Flexibility

Universities often have many different departments and may even have additional campuses aside from the main campus. Higher education institutions require flexible billing options to bill each department or campus separately and to provide specific administrators access to billing information. A smooth billing process helps to make the entire captioning process painless, efficient, and sustainable.

Legal Compliance and Accessibility Standards

Higher education institutions are legally obligated to ensure that all students, including those with disabilities, have equal access to academic content and services. This includes captioning and transcription for video and audio materials, which are considered essential components of accessible communication.

Americans with Disabilities Act (ADA)

The ADA is a foundational civil rights law that prohibits discrimination based on disability. Two key sections apply to colleges and universities:

  • Title II applies to public institutions (such as state colleges and universities), requiring them to provide equal access to all programs, services, and activities. This includes ensuring that digital content is accessible through accurate captioning and transcription.
  • Title III applies to private institutions, mandating that they remove barriers to access and provide auxiliary aids and services, including captioning, to ensure effective communication with students with disabilities.

Click here for information on the rapidly approaching ADA compliance deadlines.

The Rehabilitation Act

Two key provisions of the Rehabilitation Act of 1973 are especially relevant to higher education institutions:

  • Section 504: Requires institutions receiving federal funding to provide equal access to students with disabilities through academic adjustments and auxiliary aids, such as captions and transcripts.
  • Section 508: Mandates that electronic and information technology used by federally funded institutions be accessible, following standards like the Web Content Accessibility Guidelines (WCAG).

Common Challenges in Captioning for Higher Education

Restricted Budgets

State schools have set funding for academic programs and departments, whether it be from private donations or state and federal funding. This requires state schools to operate within a limited budget, which is one of their most significant barriers to captioning. They will look for a captioning solution that allows them to stay within budget while still maintaining a 99% accuracy rate of their content.

Workflow and Compatibility


While the process for captioning in higher education varies from college to college, there are often several steps a professor must go through to get a video captioned on time. Sending a captioning request may take a lot of back and forth. Having a solution that helps a college streamline the captioning process will ensure that videos are captioned when students need them.

There are many options for lecture capture systems and video platforms, and schools will use whichever platform fits their unique needs. To ensure their transcription and captioning processes are seamless and efficient, schools will look for captions that are compatible with their lecture capture systems and video platforms.

Complex Content

Higher education institutions offer multiple areas of study and hundreds of degrees and certificates with different focuses. For reference, the University of Wisconsin-Madison offers over 600 undergraduate majors and certificates. With large amounts of high-level content in varying subjects, it’s a challenge for schools to ensure their content is transcribed accurately.

How Captions & Transcripts Impact Students’ Performance

What Vendor Features Are Important for Higher Education?

Guaranteed Accuracy

3Play Media’s closed captions and transcripts comply with federal accessibility laws. Our captions provide a measured accuracy rate of 99.6%, and we guarantee at least 99% accuracy, even in cases of poor audio quality, multiple speakers, difficult content, and accents.

Competitive Pricing


Our advanced technology enables our competitive prices, while our quality assurance measures keep caption quality top-notch. We also offer flexible, project-level billing for higher education organizations that need to bill multiple departments or campuses separately, or to give specific administrators access to separate billing information.

Skilled Transcript Editors

3Play Media always provides accurate transcripts for a broad range of complex content. We have a staff of thousands of skilled transcript editors who can edit content from topics in which they are knowledgeable. We also allow customers to upload wordlists with correct spellings, punctuation, and capitalization for difficult words and subject-specific terms.

Video Platform Integrations

Integrations with lecture capture systems and online video management platforms allow for a more streamlined captioning process. 3Play offers integrations with all major video players, including Kaltura, Panopto, Mediasite, Echo360, and YouTube. Our integrations will automatically post your captions back to your video, giving you more time to focus on other projects.

User-friendly Account System

Our Account System is easy for customers to use, and you can rest assured that captioning won’t be a complicated endeavor. Each account can support multiple users, departments, and permissions. Account admins can control user access to any of the core account functions like invoices & billing, uploading, editing, publishing control, and user management. On top of that, we have a fabulous support team to help you along the way.

Higher Education Institutions that Use 3Play Media

A logo splash of schools that use 3Play Media

Download Free Report: How Closed Captions & Transcripts Impact Student Learning: A Report By The University Of South Florida St. Petersburg


This blog post is written for educational and general information purposes only, and does not constitute specific legal advice. This blog should not be used as a substitute for competent legal advice from a licensed professional attorney in your state.

This blog was originally published on April 27, 2020 by Jaclyn Leduc and has since been updated by Abby Alepa and Noah Pearson for accuracy, clarity, and freshness.


About the author

The post Captioning and Transcription for Higher Education appeared first on 3Play Media.

How to Create an SRT File
https://www.3playmedia.com/blog/create-srt-file/ – Wed, 16 Apr 2025


  • Captioning

How to Create an SRT File


Create your own SRT files [Free Template]


An SRT (.srt) file is one of the most common file formats for subtitling and/or captioning. ‘SRT’ stands for ‘SubRip Subtitle’ file. This format originated from the DVD-ripping software by the same name. SubRip would “rip” (or extract) subtitles and timings from live video, recorded video, and, of course, DVDs. Today, this format is widely supported by most media players and video software, and you can even create SRT files yourself.

SRT files offer a straightforward way to add captions to your videos. However, getting started can feel overwhelming. As industry leaders in captioning solutions, we’ve created a comprehensive guide to give you the lowdown on everything you need to know about SRT files – what they are, how to create them (on Mac and Windows), and why you should use them.

 

  FREE Template: Create an SRT File 📲

 

What is an SRT file?

As we mentioned, SRT files are derived from the SubRip software. This software extracted subtitles and their timing information from video content as a text file. Today, creating an SRT text file is easy to do without needing special software, and we’ll show you how! But first, it’s helpful to understand how SRT files are formatted and the components they’re made up of.

The Anatomy of an SRT File

There are many types of caption formats, but SRT files are very simple. This makes them easy for people to read and even edit using a basic text editor. Each caption frame within an SRT file follows the same structure.

This simple structure allows web players to synchronize the text with the video playback accurately. While some advanced formatting like italics or positioning might be supported by certain video players, the core strength of SRT lies in its universal compatibility and readability.
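Putting that structure together, a single caption frame (with hypothetical timing and text, for illustration) looks like this in a plain text editor:

```
1
00:00:01,500 --> 00:00:04,000
Welcome to our video on captioning!
```

The first line is the sequential caption number, the second line holds the start and end timecodes, and the line(s) that follow contain the caption text; a blank line closes the frame.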

 

 

Timecodes in SRT files follow this format: hours:minutes:seconds,milliseconds. The milliseconds are always written with three digits. The start and end timecodes for each subtitle are separated by a double-dash arrow (written as: -->). After the timecodes and the subtitle text, you need to add a blank line to signal the start of the next subtitle. When you save your SRT file, make sure to use the .srt extension.
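If you would rather not pad digits by hand, the timecode format is easy to generate programmatically. Here is a minimal Python sketch (the function name and sample times are our own, for illustration):

```python
def srt_timestamp(total_seconds: float) -> str:
    """Format a time in seconds as an SRT timecode: hours:minutes:seconds,milliseconds."""
    millis = round(total_seconds * 1000)
    hours, millis = divmod(millis, 3_600_000)   # 3,600,000 ms per hour
    minutes, millis = divmod(millis, 60_000)    # 60,000 ms per minute
    seconds, millis = divmod(millis, 1_000)     # 1,000 ms per second
    # Milliseconds are padded to three digits; hours, minutes, seconds to two.
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{millis:03d}"

# A full timecode line pairs a start and an end stamp with the --> separator:
print(f"{srt_timestamp(1.5)} --> {srt_timestamp(4.0)}")
# 00:00:01,500 --> 00:00:04,000
```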

Example of the timecode format, pointing out key components: caption text, sequential numbers, a double-dash arrow separating beginning and end timecodes, and a blank line separating captions

 

 

Why are SRT files so popular?

SRT files are widely used because they provide the following benefits:

  • Wide Compatibility: SRT files work seamlessly with a vast range of media players, video hosting platforms, lecture capture software, and video editing tools.
  • Human-Readable: Their plain text format makes them easy to understand, edit, and troubleshoot.
  • Language Support: SRT files can accommodate characters from almost any language.
  • Versatility: They can be used for both closed captions (including sound descriptions and other non-speech elements) and subtitles (primarily dialogue).

3Play Media includes seamless SRT captioning integrations with many popular platforms used for online video, including Facebook, YouTube, and Wistia.

 

  FREE Template: Create an SRT File 📲

 

How to create SRT files:

The first step in creating an SRT file is to create the transcript for your video – depending on the operating system you’re using, the instructions may vary. Don’t worry, we’ve got you covered:

For Mac users

  1. Open a new file in TextEdit
  2. To begin, type the number 1 to indicate the beginning of the first caption sequence. To move on, press enter 
  3. Enter the beginning and ending timecode, using the following format: hours:minutes:seconds,milliseconds --> hours:minutes:seconds,milliseconds
  4. When you’re finished, press enter
  5. In the next line, begin typing your captions. It is best practice to limit captions to 32 characters, with 2 lines per caption – this ensures viewers aren’t forced to read too much too quickly, and that captions don’t take up too much space on the screen. Additionally ensure your captions comply with legal guidelines.*
  6. After the last line of text in the sequence, press enter twice. Always leave a blank line to indicate a new caption sequence
  7. After the blank line, type the number 2 to indicate the beginning of the second caption sequence and type your captions following SRT format. 
  8. Repeat these steps until you have a completed transcript!
  9. To save your file as an .srt, click Format > Make Plain Text, or use the keyboard shortcut Shift + Command + T
  10. Then click File > Save. Under Save As, type the name of your file. Then, change the file extension from .txt to .srt 
  11. Uncheck Hide Extension on the bottom left-hand side of the menu, as well as If no extension is provided, use “.txt”
  12. Click Save. Congratulations – you are now ready to upload your captions!

Screenshot highlighting steps 9, 10, and 11 of creating an SRT file

 

For Windows users

  1. Open a new file in Notepad
  2. To begin, type the number 1 to indicate the beginning of the first caption sequence. To move on, press enter 
  3. Enter the beginning and ending timecode, using the following format: hours:minutes:seconds,milliseconds --> hours:minutes:seconds,milliseconds
  4. When you’re finished, press enter
  5. In the next line, begin typing your captions. Best practices recommend limiting captions to 32 characters, with 2 lines per caption – this ensures viewers aren’t forced to read too much too quickly, and that captions don’t take up too much space on the screen. Additionally ensure your captions comply with legal guidelines.*
  6. After the last line of text in the sequence, press enter twice. Always leave a blank line to indicate a new caption sequence
  7. After the blank line, type the number 2 to indicate the beginning of the second caption sequence and type your captions following SRT format. 
  8. Repeat these steps until you have a completed transcript! 
  9. Then click File > Save. Under File Name, type the name of your file and include .srt at the end
  10. Under Save as type select All Files
  11. Click Save, and congratulations! You are now ready to upload your captions.

Screenshot showing the steps for creating an SRT file
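The manual steps above can also be scripted. As a sketch (the file name and cue data here are hypothetical), the following Python writes each sequence number, timecode line, caption text, and blank line in SRT order:

```python
# Hypothetical cues: (start in seconds, end in seconds, caption text).
cues = [
    (0.0, 2.5, "Welcome to the course."),
    (2.5, 6.0, "Today we'll cover closed captioning."),
]

def to_timestamp(seconds: float) -> str:
    """Render seconds as an SRT timecode (hours:minutes:seconds,milliseconds)."""
    millis = round(seconds * 1000)
    h, millis = divmod(millis, 3_600_000)
    m, millis = divmod(millis, 60_000)
    s, millis = divmod(millis, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{millis:03d}"

with open("captions.srt", "w", encoding="utf-8") as f:
    for index, (start, end, text) in enumerate(cues, start=1):
        f.write(f"{index}\n")                                        # sequence number
        f.write(f"{to_timestamp(start)} --> {to_timestamp(end)}\n")  # timecode line
        f.write(f"{text}\n\n")                                       # caption text + blank line
```

The resulting file is plain text saved with the .srt extension, so you can still open and tweak it in Notepad or TextEdit afterward.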

 

How to upload SRT files

The process of uploading your newly created SRT file may vary depending on which media player, lecture capture software, or video recording software you choose to upload your video to – that’s why we’ve written how-to guides for just about every platform you can think of, including YouTube, Canvas, and Zoom.

 

Read the Guide: How to Create SRT Files 💬

 

*For more information on legal requirements and closed captioning guidelines, refer to our white papers.

 

DIY SRT Creation vs. professional captioning

SRT file creation is an easy (and free) way to caption your own videos independently. However, those taking the DIY route should be aware that creating captions also means creating timecodes, which typically makes the captioning process more time-consuming. 

To avoid the requirement of setting your own timecodes, YouTube’s captioning tool is one alternative that automatically syncs captions with what is being spoken in the video. Using this tool, users can select a video from their YouTube account, manually add captions to that file, and the corresponding timecodes will automatically populate. This effectively eliminates the need to manually enter timecodes (unlike in SRT file creation) and can save DIY captioners some time. 

The length of time it takes to caption a video can vary, but largely depends on the length of the video itself, the captioner’s level of experience, and video quality. Typically, it could take an experienced transcriptionist 5-10 times a video’s length to transcribe captions – this means a five-minute video could take anywhere from 25 to 50 minutes to complete! If you’re creating your own captions and timecodes using an SRT file, it may take longer. 
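As a quick sanity check of that arithmetic (the video length and multipliers are just the figures quoted above):

```python
video_minutes = 5
low_estimate = video_minutes * 5    # experienced transcriptionist, best case
high_estimate = video_minutes * 10  # slower pace or tougher audio
print(f"A {video_minutes}-minute video may take {low_estimate}-{high_estimate} minutes to transcribe.")
# A 5-minute video may take 25-50 minutes to transcribe.
```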

There are numerous benefits to captioning your videos, so don’t let the time it takes to create captions prevent you from adding them to your video! Captioned video content has the ability to improve your SEO rankings and serve your content to new audiences – including viewers who are deaf or hard of hearing, those who know English as a second language, and even those who simply prefer using captions. 

Creating your own captions can be a cost-saver, but if you’re planning on captioning many videos or lengthy videos, consider hiring a captioning service. A full-service captioning solution ensures all of your captions are legally compliant and avoids the need to consider timecode creation in the captioning process. 

A good captioning service will take care of all the logistics for you. That’s why 3Play Media guarantees turnaround based on your schedule, and a 99.6% average accuracy rate. Before selecting a vendor, it’s important to research who exactly will be captioning your videos as well as how the captioning and transcription process works, to better understand their rates.


Think you’re ready to start writing SRT captions? Get started today ⤵

How to Create Your Own SRT File. Get the Template

This post was originally published on March 8, 2017 by Sofia Enamorado & has since been updated for accuracy, freshness, and clarity.


About the author

Related Posts

The post How to Create an SRT File appeared first on 3Play Media.

]]>
Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA)
https://www.3playmedia.com/blog/closed-captioning-vs-subtitles/ – Fri, 11 Apr 2025


  • Captioning

Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA)


Watch the Webinar: How the EAA Impacts Global Business


Captions and subtitles are important timed text solutions that make video content accessible to all audiences. But over the last several years, the two have become clouded with questions and confusion, with the top concern being “What’s the difference between captions and subtitles?”

Many experts have weighed in, slapping labels on “captions” and “subtitles” in order to give each a singular, yet narrow definition. Some of these definitions may be correct, but they’re often only partially so. Why? 

Captions and subtitles are a lot more complex than most people realize. While they may seem interchangeable, understanding the differences between captions and subtitles is not only crucial for selecting the most appropriate option to enhance viewer experience and reach, but it also carries significant weight when addressing legal and accessibility requirements. For organizations and content creators serving the European market, this understanding is paramount for ensuring compliance with the European Accessibility Act (EAA).

In this blog, we’re diving head-first into the captions vs. subtitles debate. We’ll define timed text, captions, and subtitles; review the various types of captions and subtitles; and explore why they’ve become such a source of confusion in recent years.

What is timed text?

A timed text file is a text-based file that includes timing information. 

In the accessibility space, timed text files are usually intended to pair the transcription of dialogue and/or sound to media. The timing information allows the text to be synchronized to specific time codes of media. Both captions and subtitles are forms of timed text.

What are captions?

Captions were introduced to accommodate D/deaf and hard of hearing television viewers in the early 1970s. Eventually, captions became a mandated requirement for broadcast television in the United States.

Captions provide a textual transcript of a video’s dialogue, sound effects, and music. Captions are designed for use by D/deaf and hard of hearing audiences, but have gained popularity with all audiences.

Screenshot of man and woman talking. Closed caption reads "These are captions."

Standard closed captioning style: white text on a black box.

Captions appear as white text over a black box by default, but can sometimes be customized by viewers, depending on where media is being viewed.  Placement varies, but is often centered at the bottom of the screen for readability. When graphics or text appear in the lower third of the video, captions are typically placed at the top of the screen.

608 Captions

608 closed captions (also known as CEA-608, EIA-608, or Line 21 captions) were the standard captioning type for analog television transmission. 608 captions cannot be customized by viewers, though they are compatible with digital television.

708 Captions

708 closed captions (also known as CEA-708/EIA-708/CTA-708 captions) are the newer standard captioning type for digital television. 708 captions are customizable by viewers, but are not compatible with analog television.

Styles
Captions have a few main display styles: pop-on, roll-up, and paint-on. Pop-on is used for recorded content. Roll-up is used for live programming. Paint-on is rarer to find in modern captioning workflows, but may occasionally be used in certain types of programming.

What are subtitles?

Subtitles were introduced in the 1930s, when silent film transitioned to “talkies,” or film with spoken audio, in order to accommodate foreign audiences who didn’t understand the language used in a film. 

Subtitles provide a textual translation of a video’s dialogue. Traditionally, subtitles assume the viewer can hear the audio but cannot understand the language. The exception to this is subtitles for the D/deaf and hard of hearing, which assume the viewer cannot hear the audio or understand the language.

Screenshot of man and woman talking. White subtitle reads "These are subtitles."

Common subtitle style: white text with black dropshadow, no background.

Screenshot of man and woman talking. White on semi-transparent black box subtitle reads "These are subtitles."

Subtitles mimicking the appearance of closed captions.

Subtitles can appear in a variety of styles, but often appear as white or yellow text outlined in black, or with a black dropshadow. It is also common for subtitles to mimic the appearance of captions. Placement varies, but is often centered at the bottom of the screen for readability and ease in translation. When graphics or text appear in the lower third of the video, subtitles are typically placed just above the graphic/text. Subtitles can sometimes be customized by viewers, depending on where media is being viewed.

non-SDH

Non-SDH subtitles (non-SDH) are what is traditionally referred to as just “subtitles.” Non-SDH are designed for viewers who can hear the dialogue and non-dialogue audio but cannot understand the language. The only transcribed element of non-SDH is dialogue. On-screen graphics or words may also be transcribed, when time allows for the translation of these elements.

SDH

Subtitles for the D/deaf and hard of hearing (SDH) assume the end user cannot hear the dialogue and include important non-dialogue information such as sound effects, music, and speaker identification.

SDH were originally designed for viewers who cannot understand the language, but are increasingly used in place of captions on some video platforms and services.

Forced Narrative

Forced narrative (FN) subtitles, also known as forced subtitles, clarify pertinent information meant to be understood by the viewer. FN subtitles are overlaid text used to clarify dialogue, burned-in text graphics, and other information that is not otherwise explained or easily understood by the viewer. 

Open vs. Closed
Both captions and subtitles can be open or closed.


Open: The captions or subtitles are permanently visible or burned onto the video. The viewer cannot turn them off.

Closed: Captions and subtitles are not visible unless they are turned on. The viewer can toggle the captions or subtitles on and off at their leisure.

Why Do Caption and Subtitle Choices Matter for European Accessibility Act (EAA) Compliance?

 

Learn how 3Play can support you in becoming EAA compliant

 

Why are captions sometimes called subtitles and vice versa?

Captions and subtitles are infamous for being confused with one another, and there are a few reasons for this. Let’s take a quick look at how global differences in terminology and the increased usage of SDH have been adding chaos to the CC vs. subs discourse.

Global Terminology Differences


Outside of the United States and Canada (for example: the UK, Ireland, and most other countries), video subtitling and captioning are usually considered one and the same. In other words, the use of the term “video subtitling” does not distinguish between subtitles used for foreign language translation, and captioning used to aid the D/deaf and hard-of-hearing audiences.

The globalization of video content across corporate, education, and entertainment industries has greatly impacted how viewers use the terms “captions” and “subtitles”. It can be hard for viewers to understand the difference between the two when different entities label their accessible timed text files based on regional preferences. 

SDH = CC…for some

Because of the aforementioned globalization of video content, closed captions and subtitles for the D/deaf and hard of hearing are now commonly mistaken for one another. It’s easy to see why: they both serve D/deaf and hard of hearing audiences and often look alike.

But SDH and captions are different. SDH were initially designed to accommodate D/deaf and hard of hearing audiences who could not understand the language. But over the past few years, SDH have been used in place of captions on platforms where traditional captions are not supported. Sometimes the platform will refer to SDH as “SDH”; other times, they may be called “CC”. There are even cases where they could be called both, e.g. “CC/SDH”.

Captions vs. Subtitles

Because of the many nuances involved in defining captions and subtitles, it’s hard to compare both in general terms. To get to the heart of the individual differences between them, it’s important to break captions and subtitles down into their individual types.

| Feature | 608 Captions | 708 Captions | SDH | non-SDH | FN |
| --- | --- | --- | --- | --- | --- |
| Text transcribed | All | All | All | Dialogue only | Only pertinent dialogue & information not easily understood by viewer |
| Timed text synced to video | Yes | Yes | Yes | Yes | Yes |
| Audience assumption | D/deaf and hard of hearing | D/deaf and hard of hearing | D/deaf and hard of hearing | Hearing | Hearing |
| Can be turned on/off | When closed | When closed | When closed | When closed | No (forced on) |
| In source language | Yes | Yes | Sometimes | No | Sometimes |
| Speaker identification | Yes | Yes | Yes | No | No |
| Music & sound effects | Yes | Yes | Yes | No | No |
| Signs & graphics transcribed | No | No | No | Sometimes | Yes |
| Translation options | Limited | Limited | Yes | Yes | Yes |
| Appearance | White text on black box; 32 characters per line | White text on black box; 32 characters per line | Varies; 42 characters per line | Varies; 42 characters per line | Varies; 42 characters per line |
| Placement | Varies; usually centered at bottom, moving to top for lower third graphics | Varies; usually centered at bottom, moving to top for lower third graphics | Varies; usually centered at bottom, moving to top or just above lower third graphics | Varies; usually centered at bottom, moving to just above lower third graphics | Varies; usually centered at bottom, moving to just above lower third graphics |
| User customization | No | Yes | Varies by platform | Varies by platform | Varies by platform |

There’s a lot of nuance missing from the captions vs. subtitles discourse, and the complexities of each won’t go away anytime soon. In the broadest sense, each serves a different purpose with a common goal:

  • Captions provide an accessible way for viewers who cannot hear audio to watch video.
  • Subtitles provide an accessible way for speakers of any language to watch video.

Video accessibility is the string that ties captions and subtitles together, but there are ways to move beyond generalization of these accessibility solutions. The question of “what’s the difference between captions vs. subtitles?” is one that will always require us to break it down further. By comparing and contrasting the individual types of captions and subtitles, we can begin to grasp the differences between the two a lot more easily. 

 


 

This blog post was originally published by Sofia Leiva on August 14, 2016, and was updated on June 22, 2021 by Kelly Mahoney. It has since been updated again for comprehensiveness, clarity, and accuracy.


About the author

The post Closed Captioning vs. Subtitles: What’s the Difference and Why it Matters for Accessibility (Including EAA) appeared first on 3Play Media.

]]>
Studies Find Captions Can Improve Focus on Video Content https://www.3playmedia.com/blog/studies-find-captions-improve-engagement/ Tue, 14 May 2024 20:53:21 +0000 https://www.3playmedia.com/blog/studies-find-captions-improve-engagement/ Captions are well-known as an accommodation for the d/Deaf and hard-of-hearing, but the benefits go beyond accessibility – several studies have proven that captions can improve focus, engagement, and comprehension of online video content.  Research findings from media agencies and universities alike indicate that captions help viewers to stay focused and better absorb information. Plus,...

The post Studies Find Captions Can Improve Focus on Video Content appeared first on 3Play Media.

]]>

  • Captioning

Studies Find Captions Can Improve Focus on Video Content

Captions are well-known as an accommodation for the d/Deaf and hard-of-hearing, but the benefits go beyond accessibility – several studies have proven that captions can improve focus, engagement, and comprehension of online video content. 

Research findings from media agencies and universities alike indicate that captions help viewers stay focused and better absorb information. Plus, captioned videos support brand awareness and recall.

Let’s dig into the top takeaways from five industry studies to learn just how captions can create a better user experience for everyone.

 

Read more industry studies on the power of captions 📚
 

Captions proven to improve focus in classrooms

Not only do captions affect the way an audience watches video, but they also affect the way the audience interacts with it. In classroom settings, researchers have discovered that captions have a positive impact on student engagement with video-based course materials.

The accessibility committee at the University of South Florida St. Petersburg (USFSP) conducted a report on student usage and attitudes toward captions and interactive transcripts in online courses. The results demonstrate the power of captions and their capacity to improve student performance.

Here are the highlights: 

  • 42% of students use closed captions to improve focus on course material.
  • 38% of students use interactive transcripts to boost information retention.
  • Test scores increased by 3% for students who used closed captions.
  • Test scores increased by 8% for students who used interactive transcripts.

Additionally, 29% of students reported using caption/transcript materials as a study guide. In this way, captions/transcripts can be used by students and instructors alike to efficiently create derivative materials for test prep, course review, and more.

 

Read the full report from USFSP 📑
 

Students simply prefer to use captions

To learn more about how and why students use closed captions and transcripts, 3Play teamed up with the Oregon State University eCampus to perform a study.

This study provides insight into the use of closed captions for on-campus classes across the country. Fifteen colleges and universities participated, and a total of 2,124 students responded to the survey. Demographically, there was a relatively even mixture of freshmen, sophomores, juniors, seniors, and graduate students.

Findings revealed that because captions improve engagement for everyone, students with and without disabilities were using captions for a variety of reasons – the most common being the potential to improve focus. 

The study’s top takeaways include:

  • 71% of students who use captions do not have hearing difficulties. 
  • 75% of students indicated that they use captions as a learning aid.
  • 52% said that captions specifically helped them with comprehension.
  • 20% said that captions keep them more engaged with the material.

 

Read the full report from OSU eCampus 📄
 

Social media views boosted by captions

Facebook conducted an internal user behavior study which found that captions have the potential to boost video view time by 12% on average.

A&W Canada, a client in the study, reported a 25% increase in watch time on captioned videos. This kind of growth is no small feat, especially considering the endless supply of video content available on social media.

Another key finding revealed that 80% of Facebook users react negatively to video ads auto-playing with the sound on – but 41% of videos are incomprehensible without sound. Captions are one great way to deliver on the user experience your audience is looking for.

 

[FREE] Beginner’s Guide to Accessible Social Media Videos
 

Discovery Digital Networks reaps the benefits

Discovery Digital Networks (DDN) included closed captions on a segment of its YouTube videos and wanted to quantify the return on investment before rolling out captions across its entire video catalog. Using 3Play services, DDN conducted a controlled study on the impact of adding captions to YouTube videos.

Here’s what they found:

  • Views on captioned videos saw an overall increase of 7.32%.
  • View count was most dramatically impacted within the first 14 days of adding captions, where DDN saw a 13.48% increase.

These findings were substantial, and proved to Discovery Digital Networks that captions have the power to improve engagement as well as view count.

 

Read the full Discovery Digital Networks study 📊
 

Brands use captions to improve video-based KPIs

Verizon and Publicis Media conducted a study on the relationship between videos, sound, and captions. This study highlights user preferences and behavior and supports the theory that captions play a significant role in the video-viewing experience.

Turns out, the majority of consumers prefer to watch video with the sound off – in fact, 92% of mobile users and 83% of desktop users report viewing video this way. This viewing behavior causes rightful concern among brand marketers that their audience is missing out on the content they’ve worked hard to provide. 

That’s where captions come in. When captions are included, viewers can still watch, comprehend, and engage with your video content regardless of whether audio is playing. 

In this way, brands can use captions to deliver the soundless and unobtrusive experience their audience wants while simultaneously supporting their own video-based success metrics and KPIs.


The facts don’t lie – the benefits of captioning go beyond accessibility. Captions improve focus in classroom settings, encourage viewers to stay engaged, and boost overall video performance.

 

Download the report: How captions and transcripts impact student learning



The post Studies Find Captions Can Improve Focus on Video Content appeared first on 3Play Media.

]]>
Closed Caption Styling & Formatting Best Practices You Need to Know https://www.3playmedia.com/blog/closed-caption-styling-formatting-best-practices-you-need-to-know/ Fri, 03 Nov 2023 21:03:15 +0000 https://www.3playmedia.com/blog/closed-caption-styling-formatting-best-practices-you-need-to-know/ • Captioning Best Practices for Media & Entertainment [Free eBook] Closed caption styling is an important element of video production that significantly impacts video quality and accessibility.  Traditionally, caption styling best practices were determined by television networks, streaming services, and captioning professionals based on feedback from D/deaf and hard of hearing communities. Guidelines from such...

The post Closed Caption Styling & Formatting Best Practices You Need to Know appeared first on 3Play Media.

]]>

  • Captioning

Closed Caption Styling & Formatting Best Practices You Need to Know


Captioning Best Practices for Media & Entertainment [Free eBook]


Closed caption styling is an important element of video production that significantly impacts video quality and accessibility. 

Traditionally, caption styling best practices were determined by television networks, streaming services, and captioning professionals based on feedback from D/deaf and hard of hearing communities. Guidelines from such entities as the Described and Captioned Media Program (DCMP), the Federal Communications Commission (FCC), and the World Wide Web Consortium (W3C) also played a key role in the development of best practices.

With the increase in video content and development of new captioning solutions over the past several years, caption styling has been unlocked for all video creators. This has come with an explosion in creative methods and DIY captioning. Unfortunately, creativity can sometimes come at the expense of accessibility, leading folks right back to conventional caption styling rules.

So how can you curate a captioning style that fits your video and brand while simultaneously maximizing the accessibility of your content?

In this blog, we will explore the best practices for closed caption styling and formatting. We’ll show you all of the styling elements you’ll need to consider, weigh the pros and cons of using different styles, learn why consistency is critical in any caption style, and provide tips for compiling your own captioning style guide to best support your brand’s content.

Caption Styling Elements to Consider

Whether you’re styling your own recorded captions or subtitles using YouTube or Premiere, or you’re in the process of creating your brand’s recorded captioning style guide, you will most likely be thinking about captions in pop-on format. Pop-on format is the most common captioning type for prerecorded video content, and it’s the only format available for subtitles. It allows for the greatest amount of customization in offline captions and subtitles.

Speaker Identification

Dashes: This is a simple way to identify new speakers. Use a dash followed by a space to indicate when a different speaker is talking.

Woman in workout gear holds a kettlebell. A closed caption with white text on a black background reads "- Hold this pose."

Name/title: This method identifies new speakers by name or title and can be helpful for viewers who want to know which character is speaking. Using names or generic titles to identify speakers can be done in several ways.

Four identical images of a woman in workout gear holding a kettlebell. A closed caption with white text on a black background sits on each image to demonstrate different speaker IDs. The first reads "JANE: Hold this pose." The second reads "Jane: Hold this pose." The third reads: "(Jane) Hold this pose." "The fourth reads [JANE] Hold this pose."

 

Speaker-oriented placement: This identification style uses manual horizontal caption placement to follow each speaker around the screen. Dashes and names may be used in addition to this style, or speakers may have no identification at all unless they are off-screen. This style can be useful for those who struggle with center-placed identification, but others may find it distracting and hard to follow.

Two women sit side by side on a sofa with beverages. A closed caption with white text on a black background, positioned to the far left reads "- I really loved the movie!"

Overall, the use of speaker-oriented placement has been moving out of favor due to its incompatibility with many internet-based streaming platforms and video players. 

Placement

Bottom-center only: This style is compatible with almost every television and online video player. It is the default on many web players, and it is sometimes the only placement option for certain web caption file types. Despite its compatibility, bottom-center placement can obscure lower-third video graphics if they are present.

A person checks their watch. A closed caption with white text on a black background, in the bottom center reads: "- My ride is late."

Bottom-center, moving for lower thirds: This style is standard for many television and streaming networks, and many captioning vendors adhere to this placement by default. Captions stay in the bottom-center portion of the screen and move to the top of the screen when lower-third graphics are present.

A person wearing scrubs and a stethoscope listens to a golden retriever's heartbeat. A pink lower third graphic in the bottom right corner reads "Dr. Jay, Veterinarian." At the top, center of the screen is a closed caption with white text on a black background reading "- Today we're doing a lot of check-ups."

 

Speaker-oriented: As mentioned in the previous section, this style of placement is becoming less common because of its incompatibility with some web video players. This style can also be distracting and difficult for some viewers to follow.

Two women sit side by side on a sofa with beverages. A closed caption with white text on a black background, positioned to the far right reads "- The acting could have been better."

Narration and Off-Screen Speech

Italics: Italics are commonly used to differentiate voice-over narration and off-screen speech. They are sometimes used in tandem with speaker IDs.

An empty room of a house. A closed caption in white text on a black background is formatted in italics and reads "- We want to take a bold approach to this room."

Descriptors: Name descriptors may be used in addition to italics to indicate off-screen speech or narration. They are sometimes used without italics, as the means for indicating off-screen speech.

Two images of the same empty room of a house. Top image: A closed caption in white text on a black background on top uses italics and a name followed by a colon to identify the narrator. It reads "narrator: We want to take a bold approach to this room." Bottom image: A closed caption in white text on a black background on top uses no italics and uppercase text followed by a colon identify the narrator. It reads "NARRATOR: We want to take a bold approach to this room."
Two images of the same empty room of a house. Top image: A closed caption in white text on a black background on top uses no italics and parentheses to identify the narrator. It reads "(narrator) We want to take a bold approach to this room." Bottom image: A closed caption in white text on a black background on top uses no italics, uppercase text, and brackets to identify the narrator. It reads "[NARRATOR] We want to take a bold approach to this room."

Sound Effects, Music, and Other Non-Speech Information

Brackets: This style uses brackets to enclose sound effects or music descriptors. Brackets usually surround words in lowercase, without spaces. Sometimes, sound effects may be in uppercase or include additional spaces/italics as well.

Four images of the same set of trees blowing in the wind. Each image has a closed caption in white text on a black background located in the bottom center of the image. Each image uses brackets to indicate a "wind howling" sound effect. Top left contains brackets with no spacing: [wind howling]. Top right contains brackets with no spacing in uppercase: [WIND HOWLING]. Bottom left contains brackets with spaces: [ wind howling ]. Bottom right contains brackets with spaces in uppercase: [ WIND HOWLING ]

Parentheses: This style is almost exactly used like the brackets style, but includes parentheses to indicate sound effects instead.

Four images of the same set of trees blowing in the wind. Each image has a closed caption in white text on a black background located in the bottom center of the image. Each image uses parentheses to indicate a "wind howling" sound effect. Top left contains parentheses with no spacing: (wind howling). Top right contains parentheses with no spacing in uppercase: (WIND HOWLING). Bottom left contains parentheses with spaces: ( wind howling ). Bottom right contains parentheses with spaces in uppercase: ( WIND HOWLING )

Detailed descriptors: Highly detailed descriptors have gained traction with many hearing caption users due to their creativity and entertainment value. These can be a fun way to help immerse viewers in a program. However, it’s important to note that these can also confuse other viewers, particularly when advanced vocabulary is used in the descriptor.

Trees blowing in the wind with a closed caption in white text on a black background located in the bottom center of the image that reads in brackets: [treacherous Aeolian howling]
Captioning Sound Effects
If you're creating captions yourself, adding non-speech elements is just as important as ensuring all dialogue is transcribed. When describing sound effects or music, choose words that describe the sound itself rather than the action making the sound. For example, [wind whooshing] or [wind howling] gives a better idea of the sound wind makes than simply writing [wind blowing].

Font, color, and character limits

Font: Sans Serif fonts with medium thickness are preferable for captions. Simpler Serif fonts can be used, but they tend to be less readable for viewers in general. Overly thin or bold fonts can also pose readability issues. The more decorative a font is, the harder it may be for viewers to read.

Five examples of closed captions with white text on a black background. Each uses a different font. Caption one displays in a non-Serif font and reads: "This is a Sans Serif font." Caption two displays in a Serif font and reads: "This is a Serif font." Caption three displays in a bold non-Serif font and reads: "This is an extra bold Sans Serif font." Caption four displays in a thin non-Serif font and reads: "This is an extra thin Sans Serif font." Caption five displays in a decorative Serif font and reads: "This is a decorative Serif font." Captions one and two are the easiest to read.

Color: Closed captions are typically displayed as white text on an opaque or semi-transparent black box. Subtitles are often styled in white text with a black outline or black drop shadow. These tend to be the most readable colors for viewers, but open captions and open subtitles can be styled in other colors. Choosing different colors can be a creative way to extend branding, but caution should be used to ensure appropriate contrast is provided. 

Six examples of captions. Each uses different colors. Caption one displays as white text on a black background: "This is a standard caption." Caption two displays as white text on a semi-transparent background and reads: "This has a semi-transparent background." Caption three displays as white text with a black outline and reads: "This has a black outline." Caption four displays as white text with a black dropshadow and reads: "This has a black dropshadow." Caption five displays as yellow text with a black dropshadow and reads: "This is yellow with a black dropshadow." Caption six displays as yellow text on a semi-transparent background and reads: "This is yellow on a semi-transparent background."

Character Limits: Closed captions have a line limit of 32 characters per line by default. Subtitles can have varying line limits, but are often capped at 42 characters per line to best support readability.
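If you're preparing caption text yourself, these limits are easy to enforce programmatically. Here's a minimal Python sketch (the cue text and helper name are made-up examples) that wraps caption text to a per-line character budget:

```python
import textwrap

# Per-line limits discussed above: 32 characters for closed captions,
# 42 for subtitles. The cue text below is a hypothetical example.
CAPTION_LIMIT = 32
SUBTITLE_LIMIT = 42

def wrap_cue(text: str, limit: int) -> list[str]:
    """Split cue text into lines that respect a per-line character limit."""
    return textwrap.wrap(text, width=limit)

lines = wrap_cue(
    "Today we're doing a lot of check-ups on our favorite patients.",
    CAPTION_LIMIT,
)
# Every resulting line fits within the 32-character caption limit.
```

Real caption formats also break lines at natural linguistic boundaries when possible, but a simple width-based wrap like this is a reasonable first pass.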

Profanity and Censorship

Bleeping: When bleeps are used to censor audio, the profanity is typically reflected as [bleep], (bleep), or [BLEEP] within the captions.

Dropped Audio: When audio is entirely dropped or silenced, the profanity is usually reflected as […] or (…) within the captions. 

Partial Censorship: When words are partially censored in the audio, or if producers wish to indicate the word being used in the captions, profanity can be transcribed using the first and/or second letter of the word followed by asterisks or dashes, such as sh– or sh**. Note that dashes are preferable due to asterisks’ display incompatibility with certain caption file types and players/televisions.

 

Can captions be customized by users?
Yes, captions can sometimes be customized by users. 

On television, 608 captions cannot be customized by viewers, but digital 708 captions can be, with choices for font, color, size, and background.

Some streaming platforms and online video players, such as YouTube, also support customization options to varying degrees.

 

 
 Discover Captioning Best Practices for the Entertainment Industry ➡ 
 

Consistency in Caption Styling is Key

There is no blanket guideline for caption or subtitle styling. This can be great for creativity, but less so for accessibility. That’s where consistency comes in.

Consistency in Broadcast and Streaming

Video accessibility requirements from the FCC and WCAG, for example, are broad enough to allow for different caption styling options. However, it's important to remember that content going to broadcast networks and streaming services, such as Netflix or Amazon, may need to meet particular styling guidelines. This helps each individual platform or network create greater consistency for captions and subtitles within its library of programming.

When applicable, network or streaming style guides should always be consulted and followed before defaulting to any other style. Some captioning vendors, like 3Play Media, are well-versed in handling these specs, but always ensure they have the most up-to-date style guides to review prior to caption creation.

If your content is being distributed to a network or platform without any specifications beyond following FCC guidelines, your captioning vendor will typically default to their house style. A caption vendor’s house style should integrate key compliance requirements and major recommendations from organizations like DCMP.

Consistency in Non-Entertainment Video Content

For video producers, organizations, and individuals with recorded video content not geared toward entertainment–including corporate training videos, brand videos, educational videos, event recordings, and more–ensuring a consistent caption style can help optimize both accessibility and branding. But how can you do this? Where do you start?

To create greater consistency across video content, it can be useful to review other style guides, talk to a captioning vendor about their house style, and watch captioned videos across different players and platforms. In fact, many captioning vendors, networks, and streaming services have designed their caption style specs with guidance and suggestions from disability communities and organizations over the years.

However, even the standard best practices can become outdated or may no longer best meet the needs of D/deaf and hard of hearing communities. That’s why it’s incredibly important to research the current preferences of these communities in order to gain a holistic view of caption styling priorities from the people who rely on them. 

Keep in mind that every individual will have their own preferences and reasoning behind their choice in caption styling. Because one cannot speak for the entirety of caption users, these preferences may not always be within the general best practices for captioning, but should still be considered when crafting your own caption style. 

Building a Captioning Style for Your Brand

When creating a captioning or subtitling style guide for your brand, remember that accessibility must be placed before aesthetics. Using your brand’s font and colors may support a consistent brand experience, but they can also be illegible to caption users if a font is too fanciful or colors don’t have enough contrast. Overly detailed sound and music descriptions may be entertaining and provide hearing caption users with a memorable brand experience, but they can also be distracting and confusing to others who need them to understand your video. Plus, it’s important to remember that not all captioning customizations display the same way across web platforms and televisions unless they are permanently burned in.

So with all of these caveats, how can you create a consistent and accessible captioning experience that supports your brand and complements your video content?

Choose Your Basic Style Requirements

Closed captions are not permanently burned into the video, unlike open captions or subtitles. Therefore, style elements like font, size, and color should not be considered during this stage. 

Stick to determining the basics of closed caption styling elements. How should speakers be identified? How do you want sound effects and music formatted? How should off-screen speech be indicated?

Once you figure out the basics, document your preferences so that they can be followed by your captioning vendor.

Choose Advanced Captioning Style Elements

After creating your basic preferences, you may begin selecting advanced captioning style elements if you will be creating or adding permanently burned-in open captions or open subtitles for your video content.

Take your own brand and preferences into account here, but make adjustments for accessibility as you do so. If your brand font is a Sans Serif with medium thickness, it will likely be readable in captions. If it's Serif, decorative, very thin, or overly bold, there may be readability issues.

When determining caption or subtitling color, consider utilizing a color contrast checker to ensure captions have enough contrast to support readability. For subtitles, consider how the use of outlines, drop shadows, and semi-transparent elements can improve contrast.
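If you're curious what a contrast checker actually does under the hood, most follow the WCAG 2.x definitions of relative luminance and contrast ratio. Here's a small Python sketch of that calculation (the color values below are just examples):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value (WCAG 2.x)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance: weighted sum of linearized R, G, B channels."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# White text on a black box, the classic caption default, hits the
# maximum possible ratio of 21:1.
ratio = contrast_ratio((255, 255, 255), (0, 0, 0))
```

WCAG 2.x asks for at least 4.5:1 for normal-size text, which standard white-on-black captions exceed by a wide margin.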

Put Your Captioning Style Guide to Use

Now it’s time to test your style elements together. How do they look in your video content? What do your viewers and caption users think? Do your caption styling preferences support captioning best practices?

After successful testing, you can go live with your new captioning style. Provide a copy of your style guide or requirements to your caption vendor, and review your files–ideally in the final video platform or player–to confirm the finalized caption display is accessible and to ensure overall consistency and compatibility.

 

 

Captioning Best Practices for Media and Entertainment: Read the eBook

 

This blog was originally published by Kelsey Brannan on November 1, 2016, as “Guest Post from PremiereGal: Trends in Captioning Style & Formatting” and has since been updated for comprehensiveness, clarity, and accuracy.



The post Closed Caption Styling & Formatting Best Practices You Need to Know appeared first on 3Play Media.

]]>
Real-Time Captioning in the College Classroom 101 https://www.3playmedia.com/blog/real-time-captioning-in-the-college-classroom-101/ Thu, 14 Sep 2023 19:42:53 +0000 https://www.3playmedia.com/blog/real-time-captioning-in-the-college-classroom-101/ • The 3Play Way: Real-Time Captioning in Higher Education [Free Webinar] As a new school year kicks off, students are stocking up on the traditional academic tools: course books, notebooks, pens, laptops, etc. These items are unquestionably essential to the learning experience for nearly all students. Yet there is another critical learning tool for a...

The post Real-Time Captioning in the College Classroom 101 appeared first on 3Play Media.

]]>

  • Live Captioning

Real-Time Captioning in the College Classroom 101


The 3Play Way: Real-Time Captioning in Higher Education [Free Webinar]


As a new school year kicks off, students are stocking up on the traditional academic tools: course books, notebooks, pens, laptops, etc. These items are unquestionably essential to the learning experience for nearly all students. Yet there is another critical learning tool for a significant portion of the student population that often goes overlooked: real-time captions.

Real-time captioning in the college classroom can be equally as important as those course books, notebooks, and laptops–especially for D/deaf and hard of hearing students. That’s because captions help remove access barriers, providing an equitable and inclusive way for students to fully experience lectures and participate in class discussions. 

So how does real-time captioning in the college classroom work? In this blog, we’re covering all of the most frequently asked questions about classroom captioning: workflows, captioner qualifications and assignments, how captions are ordered, and more. Get out your writing tools and prepare to take notes, because Real-Time Captioning in the College Classroom 101 is now in session.

How does captioning work in a college classroom?


Real-time captions in a live classroom setting can be delivered to a student through different mechanisms. If the student is present in person, they are usually receiving captions on a second screen, such as a tablet or laptop, using a solution known as Communication Access Realtime Translation, or CART.

For on-demand or remote classes that are not live, closed captions are usually provided in a sidecar file alongside the video recording, which can be toggled on or off by the user.
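As an illustration, a sidecar caption file in the common WebVTT format looks something like this (the timings and dialogue are invented for this example):

```
WEBVTT

00:00:01.000 --> 00:00:04.000
- Hold this pose.

00:00:04.500 --> 00:00:07.000
<v Jane>Nice work, everyone.
```

The player uses the cue timings to sync each line of text with the video, and because the file is separate from the video itself, viewers can toggle the captions on or off.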

Who is captioning college classes?

For student accommodations in a live classroom, real-time captions are usually transcribed by a live professional captioner. Traditionally, CART utilized a stenographer or in-person captioner and displayed captions on a large screen.

Nowadays, remote CART captioning options and alternatives have become very common, with a remote captioner connecting to the classroom’s audio source, such as a clip-on microphone worn by a professor. The captioner then transcribes the lecture or discussion word-for-word, with live captions populating on a second screen or streaming link to the text.

What about auto captions?
Live automatic captions, or auto captions, are another solution for higher education settings. These captions are machine-generated and offer accommodations at a lower cost, but are generally not recommended for student accommodations in a classroom setting due to their lower accuracy and limited options for audio capture. Live automatic captions tend to work best for low-visibility events or meetings that don’t require professional captioning.

How do the captioners connect to a class?


We touched on CART solutions and how, in the past, a live captioner would sit in the room transcribing as captions populated a larger screen. While this method still happens for larger events, it's becoming less common due to advances in technology that allow for greater flexibility with real-time captioning.

Remote CART or similar captioning experiences allow remote live professional captioners to connect to a class’s audio via sources such as phone, RTMP, iCap, Zoom meetings, and more. The lecture is then live captioned, with captions displayed via a second screen or streaming link.

What kinds of qualifications do live professional captioners have?


Real-time captions for college classrooms require a high degree of accuracy to provide an equivalent experience for students requesting accommodations. Live professional captioners should be experienced in providing high-quality, accurate captions and following best practices for real-time captioning.

At 3Play, live professional captioners undergo a rigorous certification process and use 3Play’s innovative proprietary voice writing technology to produce accurate and comprehensive real-time captions. 

How accurate are real-time captions for college classrooms?


Live captioning accuracy can be tricky to determine because of a couple of factors at play: Word Error Rate (WER) and Formatted Error Rate (FER). WER is used as the standard measure of transcription accuracy in captions. FER accounts for errors in formatting, sound effects, grammar, and punctuation and is a better representation of the experienced accuracy of captions. 

Both of these measurements are crucial to accuracy, yet WER is the measure most often used by live captioning vendors when providing accuracy figures. Unfortunately, WER on its own is usually not enough to support an accurate and equitable learning experience for students, and that’s where FER comes in. Accuracy at the FER level can impact a student’s understanding of the lecture and discussion if punctuation, formatting, and other complexities aren’t captioned correctly.

It’s important for live captions to boast a high accuracy rate that takes into account both WER and FER. 3Play’s innovative combination of humans and technology allows us to consistently obtain high levels of accuracy and quality for college classroom captions.
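To make the distinction concrete, here is a minimal sketch (illustrative only, and not 3Play's measurement methodology) of how WER and an FER-style rate can both be computed as word-level edit distance divided by reference length. The FER variant keeps punctuation and casing intact, so formatting mistakes count as errors even when every word is recognized correctly:

```python
import re

def edit_distance(ref, hyp):
    # Classic dynamic-programming edit distance over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    # Word Error Rate: compare bare words, ignoring case and punctuation.
    norm = lambda s: re.sub(r"[^\w\s]", "", s).lower().split()
    ref = norm(reference)
    return edit_distance(ref, norm(hypothesis)) / len(ref)

def fer(reference, hypothesis):
    # FER-style rate: compare tokens with punctuation and casing intact,
    # so formatting mistakes count as errors too.
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

ref = "Welcome, everyone. Today we cover Chapter 3."
hyp = "welcome everyone today we cover chapter 3."
print(wer(ref, hyp))  # 0.0 -- every word was recognized
print(fer(ref, hyp))  # > 0 -- punctuation and casing errors remain
```

This is why a transcript can score a perfect WER while still reading poorly: the FER-style measure is the one that surfaces missing punctuation and capitalization.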

What about context?
Context is another important factor at play when it comes to accuracy, but it isn’t the easiest to measure. Varying subject matter and diverse courses mean that context can be key for captioners transcribing numerous classes for individual students seeking accommodations.

3Play approaches the context piece of accuracy through a diverse pool of live professional captioners who specialize in an array of topics. These captioners have been certified through our rigorous process and are able to capture the intent of the speaker, ensuring that a class’s proper names, key words, and terminology are captioned correctly.

Additionally, 3Play future-proofs real-time captioning accuracy with robust customization options like custom speaker labels, curated event instructions, and wordlists, which can be uploaded and made available for live captioners to review and reference prior to an event.

 

The 3Play Way: Real-Time Captioning in Higher Education

 

How do schools coordinate real-time accommodations for students?


Colleges, universities, and other higher education institutions may handle and coordinate real-time accommodations differently, depending on workflows, budget, and other student needs.

Usually, schools dedicate a position or even a department to handling the accommodation and/or captioning process. These can include CART Supervisors, Real-Time Captioning Coordinators, Student or Disability Services professionals, Access or Disability Resource Center professionals, and more. Student accommodation requests are submitted to these professionals or departments, who then coordinate fulfillment of the accommodation, such as real-time classroom captions.

How do real-time accommodation professionals order and pay for captions?


Higher education professionals usually have a wide range of needs for live accommodations: lectures, meetings, conferences, webinars, and more. These events may be hosted by different departments, campuses, and even individuals. Some universities and colleges have a centralized location and clear policy for student accommodations. Some may be only beginning the process of centralizing and still have a ways to go. Others may use accommodation platforms, like AIM.

This range of needs and policies means ordering and paying for captions can become complex for higher education professionals. They may be the ones doing the actual ordering for all captions, or departments and professors could be tasked with directly carrying out the accommodations with a university’s captioning vendor.

Ordering and billing needs are going to be different at every institution, so vendor agility is very important here. 3Play takes a flexible approach to these aspects by giving professionals exactly what they need to track spending and budget, whether it’s full visibility into how the institution is spending on accessibility services like real-time captioning, or small-scale, single projects with specific purchase orders (P.O.s) attached.

How do real-time accommodation professionals overcome issues with getting captions?


No matter who is directly coordinating real-time accommodations, common issues in the classroom captioning process revolve around captioner coverage, staffing shortages, lack of vendor support, tech issues, and cumbersome workflows. These can make for a poor captioning experience for not only the students, but also the professors, administrators, and other staff trying to create an inclusive learning environment.

Fortunately, there are some key traits to seek in a captioning vendor that will help mitigate inefficient methods for providing real-time accommodations for students. 

How 3Play Supports Students

3Play Media is a trusted provider of accessibility services for colleges and universities. We offer future-proof solutions to transform your university’s accessibility and operational efficiency with a wide range of services, including real-time captioning, closed captioning, audio description, and translation.


Our real-time classroom captioning services are designed for your budget and peace of mind. Here’s how:

We Eliminate Hours of Manual Work for Your Staff

With our user-friendly platform and flexible workflows, your staff can easily manage recurring events, canceled classes, and captioner assignments at the push of a button.

We Are a Reliable Partner with Limitless Scalability

Our marketplace structure ensures your courses will be matched with a qualified professional, regardless of whether you need to support one class or a dozen.

We Offer Compliant Real-Time Captions with 98%+ Accuracy

We offer compliant live solutions that meet all applicable accessibility regulations and provide word-for-word transcription and up to 98%+ measured accuracy.

We Provide Rapid and Attentive Support

Our on-call tech support team will assist you with any issues before and during each scheduled course.

We Have Flexible Billing Options 

Our flexible billing options allow you to easily track spending with university or department-based billing.

 

Learn more about real-time captioning in higher education ⬇



The post Real-Time Captioning in the College Classroom 101 appeared first on 3Play Media.

Using Subtitles to Learn a Language: Captions for ESL Students https://www.3playmedia.com/blog/how-captions-help-esl-learners-improve-their-english/ Thu, 24 Aug 2023 19:00:00 +0000

  • Captioning

Using Subtitles to Learn a Language: Captions for ESL Students


Discover the Benefits of Captioning and Transcription [Free Ebook]


What are the benefits of captions for ESL learners (English as a second or foreign language) and English language learners (ELLs)?

More than one in 10 of the nation’s approximately 50 million public school students speak a native language other than English, according to federal data.

These numbers grow steadily every year, meaning there are vast opportunities to help English language learners and ESL students succeed.

Traditional ESL classes provide a great foundation for basic vocabulary, grammar, syntax, and other linguistic features of a language. However, watching videos with captions or subtitles over the audio of native speakers is a great way for ESL students to improve vocabulary, pronunciation, and inflection and pick up on more nuanced features of English, such as slang terms, phrases, and colloquialisms.

Terminology 101 of Captions for ESL Learners and ELLs

First, let’s quickly clarify some key terminology:

  • Subtitles: time-synchronized text on a video that translates the spoken audio to another language
  • Dubbing: a voice-over or time-synchronized spoken audio translated into another language from that of the video, replacing the original speaker’s voice
  • Captions: time-synchronized text on a video in the same language as the spoken audio. Captions provide a textual transcript of a video’s dialogue, sound effects, and music.
  • Closed captions: captions that can be turned on and off
  • Open captions: captions that are “burned” into the video and cannot be turned off
Captions vs. Subtitles
Captions provide a textual transcript of a video’s dialogue, sound effects, and music and assume a viewer cannot hear the audio. Subtitles provide a textual translation of a video’s dialogue and typically assume the viewer can hear the audio but cannot understand the language being spoken.

English Captions Improve Language Retention

New ELLs listening to a native English speaker talk often find it difficult to identify which words are being spoken, how they are spelled, and in what order they are arranged (syntax). That’s why, for anyone learning a new language, it is extremely helpful to read the words one is hearing at the same time.

Even if the viewer cannot fully understand what they are reading on screen, captions can provide some helpful context, encouraging the viewer to stay engaged with the video. Time-synchronized captions focus the ELL’s attention on the words being spoken in real-time, which helps with the retention of vocabulary, spelling, pronunciation, grammar, and other valuable linguistic qualities one must understand to speak a language properly.

In 2009, a study conducted with Dutch ELLs concluded that watching English-language video content with English captions led to higher scores on tests of aural word recognition, while watching English videos with Dutch subtitles led to lower scores on those tests. This suggests that reinforcing English speech with English text helps ELLs memorize spoken and written words in the language, leading to stronger vocabulary skills.

In 2016, a study had a group of intermediate Spanish-speaking students of English as a foreign language watch an episode of a television show in its original English version with English, Spanish, or no subtitles overlaid. Before and after the viewing, participants took a listening and vocabulary test to evaluate their speech perception and vocabulary acquisition in English, plus a final plot comprehension test. The results of the listening skills tests revealed that participants improved these skills significantly more after watching the English-subtitled version than after watching the Spanish-subtitled or no-subtitle versions.


 Learn more about the benefits of captioning and transcription ➡ 


English Captions Help Students Decipher Accents and Dialects

Accents and dialects are another reason why captions for ESL students and ELLs can be beneficial.

Many Americans have difficulty understanding certain accents and dialects from the UK, Ireland, Australia, and other places where English is spoken. So, imagine what ESL learners have to go through in the same scenario.

Accents tend to go hand in hand with dialects—regionally-exclusive ways of speaking. Captions can help ELLs learn words and phrases from different dialects by helping them process the audio in the videos they watch.

In the previously mentioned study with Dutch ESL students, it was found that adding closed captions to videos with Scottish and Australian actors speaking in native accents and dialects helped the students identify the words spoken. Interestingly, it was also found that watching those same videos with Dutch subtitles diminished students’ success in word recognition:

If an English word was spoken with a Scottish accent, English subtitles usually told the perceiver what that word was, and hence what its sounds were. This made it easier for the students to tune in to the accent.

In contrast, the Dutch subtitles did not provide this teaching function, and, because they told the viewer what the characters in the film meant to say, the Dutch subtitles may have drawn the students’ attention away from the unfamiliar speech.

In 2008, an academic study involving 20 Chinese ESL students found that video content with captions helped students learn new words and expressions better than students who watched the same content without captions. Specifically, the study revealed that “the use of video plus captions can help students learn colloquial language [including] how and when native speakers use it.”

This means that by adding captions to their videos, English-speaking online video providers on YouTube and elsewhere can attract viewers anywhere in the world who want to improve their language skills and understand as much regionally-varied English as a native speaker.

The Easiest Way to Create YouTube Captions
3Play Media’s round-trip integration with YouTube provides an automated workflow for adding captions and subtitles. Your YouTube videos can be processed in a matter of hours, and captions will be automatically sent to YouTube and added to your videos. Learn more about YouTube captioning.

‘Subbing’ vs. ‘Dubbing’

If you’ve ever seen a foreign film in which the actors talk in a different language, it is either ‘dubbed’ or ‘subbed’ (subtitled) so that viewers can understand what is being said. Everyone has their preference, but for students of a second language, subbing tends to be much more helpful.

Subbing is better for ELLs because the translated text reinforces the speech, helping the viewer learn by encouraging them to match the foreign speech with words from their own language.

Hearing English speakers talk normally on video helps the viewer tune their ear to the unique sounds of spoken English, which is critical for learning a new language.

Other Benefits of Captions for ESL Learners and ELLs

  • Control: You can pause and rewind whenever necessary, so you can go to “ESL class” whenever you want!
  • Subject-specific vocabulary: Captions broaden vocabulary about specific subjects (e.g., YouTube videos about science, cooking, politics, business, pop culture, etc.)
  • Mouth movement: In most cases, you can watch the mouths of the person speaking, which helps with lip-reading and pronunciation of difficult sounds unique to a language
  • Situational context: Watching foreign films and TV shows with subtitles is great for understanding when to use formal or casual language and knowing when and when not to use certain words

Discover the benefits of captioning and transcription. Download the ebook.

This blog was originally published by Patrick Loftus in 2016 and has since been updated for accuracy, clarity, and comprehensiveness.



Closed Captioning Types: Learn the Difference Between Pop-On, Roll-Up, and Paint-On https://www.3playmedia.com/blog/roll-up-vs-pop-on-captions-whats-difference/ Mon, 03 Jul 2023 15:00:00 +0000

  • Captioning

Closed Captioning Types: Learn the Difference Between Pop-On, Roll-Up, and Paint-On


Beginner’s Guide to Captioning [Free eBook]


When beginning the process of ordering captions for your media, it can be easy to get bogged down with all the variations, customizations, and styles that can be applied to your captions. Even the decision of which captioning service to use (live or recorded) can be daunting if you are new to video accessibility.

The good news? Captioning doesn’t have to be complicated, because choosing between pop-on and roll-up captioning styles is simpler than you might think.

In this blog, we will provide you with a comprehensive overview of the three main formats of captioning: pop-on, roll-up, and paint-on. We’ll shed light on their applications, explore use cases, and discover the possibilities for customization within each type so that you’re empowered to make informed decisions for your media.

Pop-On Captions

What are they?

Pop-on closed captions are what you’re most used to seeing in recorded (non-live) broadcast, streaming, and web content. These captions are exactly what they say they are: they pop on your screen and then disappear when the next caption appears.

Who uses them?

Pop-on style is standard for recorded content because these captions can be highly customized to best fit the viewing experience and reflect aspects such as timing, tone, and location of speakers. Closed captioners have the ability to manipulate timing to closely synchronize with words as they are spoken.

Pop-on captions are not used for live broadcast content. The nature of live captioning means that each word written is immediately sent to an encoder, and encoders must wait for all text information before they can display a caption. If live captions utilized pop-on style, the text would be delayed, defeating the point of having quick captions delivered right to the viewer as the program is happening.

What do they look like?

Pop-on captioning example. A man and woman stand side-by-side. A pop-on caption in progress reads "These are pop-on captions."

For optimal readability across viewing platforms, our captioning experts have found that recorded pop-on captions tend to share these qualities:

  • Sentence case
  • Center-placed and justified
  • Rest at the bottom of the screen, moving to the top to avoid lower-third graphics
  • Use speaker dashes to differentiate speakers
  • Off-screen sound (such as voice-over narration, digitized speech, non-diegetic music) conveyed using italics
  • Quotation marks utilized for works of art (movie, show, song titles)
  • Sound effects and music descriptors indicated on their own lines, surrounded by brackets
  • Cleanly broken into two lines at conjunctions, end of clauses, prepositions, articles, or grammatical breaks
  • Timed with ample load and reading time to align with spoken words

Pop-on captioning example. A woman looks to the side. A pop-on caption reads "(Eric) Whoa. I'm doing it. I'm voicing over."

The above aspects of pop-on captions have helped inform 3Play Media’s captioning style, but that is not to say that this is the only way to do pop-on captions; varying styles are commonly applied to the pop-on captions we create, such as:

  • Speaker-oriented placement (this placement follows the speaker around the screen)
  • Speaker IDs, such as a name followed by a colon, or a name in parentheses
  • No speaker IDs for on-screen speakers at all; IDs only for off-screen speech or captions containing dual speakers
  • All uppercase captions or all uppercase speaker IDs
  • Countless other options!

Other considerations

Recorded web captions always display in pop-on style, but due to the limitations of some players and other applications, these captions may lack certain stylistic elements (caption movement, italics, and music notes).

These captions are usually delivered in a sidecar caption file format, such as SRT. Live captions are sometimes delivered in SRT format as well for video-on-demand (VOD) programming.
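For readers unfamiliar with the format, an SRT sidecar file is simply a series of numbered, timestamped text cues separated by blank lines. Here is a small sketch that builds a valid SRT payload (the cue text and timings are made up for illustration):

```python
def srt_timestamp(seconds):
    # SRT timestamps use the form HH:MM:SS,mmm (comma before milliseconds).
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def build_srt(cues):
    # Each cue is (start_seconds, end_seconds, text). Cues are numbered
    # from 1 and separated by blank lines, per the SRT convention.
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

cues = [
    (1.2, 3.8, "- These are pop-on captions."),
    (4.0, 6.5, "[upbeat music]"),
]
print(build_srt(cues))
```

Because the captions live in this separate text file rather than in the video stream itself, the player is free to render (or ignore) them, which is exactly what makes sidecar delivery so portable across web platforms.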

Zoom pop-on captioning example. A man and woman speak over a Zoom virtual meeting. A pop-on caption in progress reads "Eric, you're on mute."

Live captions on platforms such as Zoom and YouTube only display captions in pop-on style, so viewers of live programs and events on these platforms could experience a slight delay as they wait for all the text to appear.

 

New to captioning? Our Beginner’s Guide has the basics you need to get started 🧑‍💻

 

Roll-Up Captions

What are they?

Roll-up captions continuously roll up onto your screen, one right under the next, allowing for more time for the viewer to read them. The very top line disappears each time a new line populates.

Individual roll-up captions generally require less load time. However, they have a tighter reading rate threshold when it comes to timing, because multiple sentences stay on screen for a longer period of time. One sentence will appear quickly but will stay on the screen longer than a pop-on caption would.
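As a rough illustration of what a reading rate threshold means in practice, a cue's presentation rate can be sanity-checked in words per minute. (The 160 wpm ceiling in the comment below is a common industry rule of thumb, used here purely for illustration.)

```python
def words_per_minute(text, start, end):
    # Presentation rate of a single caption cue, in words per minute.
    duration = end - start
    return len(text.split()) / duration * 60

# Hypothetical cue: 8 words displayed for 3 seconds.
rate = words_per_minute("They're on a roll, am I right, folks?", 10.0, 13.0)
print(round(rate))  # 160
```

A pop-on cue at this rate disappears as soon as the next cue arrives, while a roll-up line at the same rate lingers as subsequent lines stack beneath it, which is why roll-up timing has less margin for error.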

Who uses them?

Live programming uses roll-up style because of the time allowances and ability to quickly synchronize dialogue in real time.

Recorded programming can utilize roll-up captions, but the style is uncommon and outdated. Most producers and platforms prefer pop-on style for offline programming.

What do they look like?

Roll-up captioning example. A man and woman stand side-by-side. The woman is doubled over and grinning at a joke she made while the man sighs. A roll-up caption in progress reads "They're on a roll, am I right, folks?"

Roll-up captions vary in fewer ways than pop-on captions do, but usually share these qualities in live programming:

  • Uppercase
  • Two-line captions at top or bottom
  • Left-justified
  • Two chevrons differentiate speakers
  • When speakers, show hosts, and announcers can be identified, chevrons will be followed by a first name and colon.
  • Quotation marks utilized for film/show titles, segment titles, and works of art
  • Sound effects and music descriptors indicated on their own lines, surrounded by brackets
  • No italics used
  • Line breaking of less concern
  • Timing is slightly delayed and elastic due to a live captioner transcribing as they hear the content

Other considerations

Most recorded, or offline, programming uses pop-on captioning styles, but certain types of content may be in roll-up format. Soap operas are a great example of a type of recorded broadcast content that may utilize roll-up captions for comprehension reasons. In soaps, specific name IDs are used to assist the viewer in keeping track of the multiple characters and storylines and to fit the steady, yet dramatic pace of storytelling.

Paint-On Captions

What are they?

Paint-on captions populate on screen, letter by letter, from left to right. In essence, you see the caption being typed out or “painted on” as you read it. It happens very quickly, so it can be hard to notice this nuance unless an entire show is captioned in paint-on style.

Who uses them?

Paint-on captions are occasionally used for the opening caption of a recorded program to avoid the load-time requirements and slight delay that pop-on captions take to come on the screen.

What do they look like?

Paint-on captioning example. A man and woman stand side-by-side. A paint-on caption in progress reads "And paint-on c".

Paint-on captions are stylized in the same way as pop-on or roll-up captions, depending on the situation. 

Other considerations

Paint-on captions are considered nonstandard in the industry. However, some fast-paced programs, like reality shows, use paint-on captions for the top of their segments when speech begins quickly and producers wish to avoid a delay in the on-screen appearance of closed captions. Overall, it is not recommended to use paint-on style in live or prerecorded broadcasts.

Choosing live or recorded captioning doesn’t completely dictate which caption style you can use, but each usually sticks to one style as its standard based on the technical limitations that each type of programming presents.

3Play Media’s experienced captioners usually recommend using roll-up style for live captioning and pop-on style for recorded captioning, making it easy for you to choose what’s right for your media. These different types of closed captioning give you the freedom to customize your media accessibility features and create a positive user experience for your viewers. 

 

Beginner's Guide to Captioning. Download the eBook.

 

This blog was originally published by Jena Wallace for Captionmax in February 2022 and has since been updated for comprehensiveness, clarity, and accuracy.



The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them https://www.3playmedia.com/blog/the-ultimate-guide-to-subtitles-different-types-how-they-work-and-when-to-use-them/ Thu, 22 Jun 2023 19:30:41 +0000

  • Subtitling

The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them


Video subtitling is instrumental in reaching global audiences, but can be a complex and nuanced media accessibility solution. Add captions to the equation, and it can become even more confusing for producers and creators of video content.

We know it’s easy to get bogged down with the different types of subtitles. That’s why we’re excited to debut our new eBook, The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them.

In this eBook, we compiled a comprehensive overview of the different types of subtitles based on the knowledge and experience of 3Play’s tenured subtitling experts. 

The Ultimate Guide to Subtitles covers the top subtitling solutions used across industries, including subtitles for the D/deaf and hard of hearing (SDH), non-SDH subtitles, and forced narrative (FN) subtitles. Read on for a closer look into our extensive guide to all things subtitling.

Everything You Need to Know About Different Types of Subtitles 🌎

Discover the Different Types of Subtitles and How They Work

Learn all there is to know about subtitles in general. We provide an overview of their history, how they work, what they can look like, and how they’re encoded. Then, dig deeper and discover how SDH, non-SDH, and FN subtitles are defined.

Understand How Subtitling Types Compare

As mentioned above, subtitling is a nuanced solution, and it can be difficult to wade through the different types without additional context. Explore in detail how the subtitling types compare to one another and how they stack up to captions. We even discuss why subtitles and captions have become so entangled in recent years and how you can better determine which media accessibility service you really need for your video.

Learn the Best Subtitling Type for Your Video

Each subtitling type has differing use cases and audience assumptions. In The Ultimate Guide to Subtitles, we cover the top use cases for SDH, non-SDH, and FN subtitles using examples that span across industries to help you find the best subtitling type for your video.

Resources

Gain access to a curated list of key 3Play Media resources for you to reference as you make accessibility part of your content production process.

The Ultimate Guide to Subtitles offers an in-depth exploration of the different types of subtitles, their functionality, and how they compare to captions. Using this knowledge and helpful use case examples, you will be able to select the perfect subtitling solution for your media based on your viewers’ dynamic needs, no matter where they’re located in the world.

The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them. Download the eBook



Demystifying Caption Encoder Workflows https://www.3playmedia.com/blog/demystifying-caption-encoder-workflows/ Tue, 16 May 2023 16:51:44 +0000

  • Captioning

Demystifying Caption Encoder Workflows


The Complete Guide to Caption Encoders [Free eBook]


With such a wide variety of caption encoder workflows available, determining whether to use a physical or virtual encoder can be a complicated process to navigate.

Perhaps you’re making a decision about your encoding method. Or maybe you’re simply trying to figure out whether you even need an encoder at all. Either way, it’s important to have all of the information before you begin, which is why we decided to demystify caption encoding and all of its associated workflows in this blog.

A general understanding of caption encoder workflows can help you best determine how and when encoding is necessary for your media. Read on to discover a high-level overview of caption encoding, a breakdown of specific live and recorded caption encoding workflows, and our detailed resources on each aspect of encoding.

Caption Encoding 101


 

Sometimes sidecar files, such as SRT or VTT, are not accepted by a platform or television. In these cases, encoding may be necessary to transmit captions. Caption encoding is the process of embedding captions into a video stream.

A caption encoder itself is the piece of equipment or software that a television network or video platform uses to pair the captions with the video and audio stream. Encoders convert captions into data that can be decoded by individual televisions or video players.

A broad range of caption encoder workflows exist for both live and recorded captions. But first, let’s take a look at how caption encoding works in general.

Traditional Caption Encoding


 

Traditional caption encoder workflows involve the use of physical encoder equipment or software. In general terms, there are three types of encoder connections: telco (analog/modem), telnet (digital/IP), and iCap (only if the encoder is manufactured by EEG). The typical encoder workflow usually goes like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions to the video on a specific data transmission line known as line 21, the line from which televisions are mandated to decode captions.

There are two main standards for encoding and decoding closed captioning data via encoders, developed based on Federal Communications Commission (FCC) regulations: CEA-608 and CTA-708. Learn more about the differences between 608 and 708 captions and how they can impact captioning workflows.
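As a small technical aside, CEA-608 transmits caption data as byte pairs in which each byte carries 7 data bits plus an odd parity bit. A sketch of that parity calculation (illustrative only, not tied to any particular encoder product):

```python
def with_odd_parity(byte7):
    # CEA-608 bytes carry 7 data bits; the most significant bit is set
    # so that the total number of 1 bits in the byte is odd ("odd parity").
    ones = bin(byte7 & 0x7F).count("1")
    parity_bit = 0 if ones % 2 == 1 else 1
    return (parity_bit << 7) | (byte7 & 0x7F)

# 'A' = 0x41 has two 1 bits (even), so the parity bit gets set: 0xC1.
print(hex(with_odd_parity(ord("A"))))  # 0xc1
```

The decoder in the television performs the inverse check: a received byte whose bit count is even fails parity and is discarded, which is one reason noisy analog signals produced garbled line 21 captions.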

Virtual Encoding


Virtual encoding options have expanded in recent years and are popular for web-based platforms or players. Virtual encoders function similarly to physical encoders without the physical box and connection. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. 

Virtual encoders are useful for events that are streamed online, where the virtual encoder will add the captioning data and re-route the video stream to the desired platform.

Web-based platforms don’t usually follow the same data transmission methods as traditional broadcast television, so virtual and alternative encoding options are often used instead. 

Live Caption Encoding Workflows

Live caption encoding lets broadcasters simultaneously receive and encode captions so they can be displayed alongside a television program or video in real time.

Live Caption Encoding Methods

The three main physical live caption encoding workflows involve the use of telco, telnet, or iCap encoders.

Telco Encoders

A telco encoder is based on analog technology and requires a phone line to receive the caption data, along with a separate audio line so the captioner can hear the program.

Telnet Encoders

A telnet encoder uses an IP address and port number to receive the caption data. As with a telco encoder, a separate audio line is needed to hear the dialogue that needs captioning.

iCap Encoders

iCap encoders are caption encoders manufactured by EEG. They include iCap software for improved functionality, such as sending audio to the captioner. They can also be set up as IP connections if desired. 

Explore each of these live encoding methods in greater detail in The Complete Guide to Caption Encoders.

Live Virtual Caption Encoding

In March 2023, 3Play Media introduced an exciting new live virtual caption encoding solution, which eliminates the need for additional live captioning hardware. 3Play’s virtual encoding solution delivers high-accuracy and low-latency captions to platforms, while streamlining live captioning workflows from listening through delivery. Learn more about 3Play’s exciting virtual encoder developments.

Looking for an audio described version of this video? We’ve got you covered!

Everything You Need to Know About Caption Encoders

Title page of The Complete Guide to Caption Encoders: an eBook by 3Play Media

This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use.

Get your free eBook

Other Live Virtual Encoding Options & Alternatives

Aside from 3Play’s Live Virtual Caption Encoding solution, additional virtual encoding options, such as iCap Falcon (by EEG), are available for live captioning purposes. 
A growing number of alternative options to encoding have arisen in recent years due to the evolution of broadcast, streaming, and other technological advances. For instance, captions are sometimes included as a separate entity on applications that have built caption functionality directly into their players, such as Zoom and YouTube. 

Sidecar files and video player integrations remain popular options for many users due to their ease of use. Integrations in particular help take the guesswork out of whether a video requires encoding by simplifying captioning workflows. 3Play Media offers numerous integrations and partnerships with top video platforms such as Brightcove, Wistia, and YouTube.

Recorded Caption Encoding Workflows

In certain cases, it is necessary to embed recorded captions in the video itself rather than use a separate track. This is done using caption encoders.

Recorded caption encoding ensures that your closed captions will be viewable if you don’t have a video platform, if you want an offline option, or if you need captioned videos for kiosks and social media.

Closed & Open Caption Encoding


Closed captions are usually output on a separate track as a sidecar file and added to a player to be played in sync with the video. In this case, the captions can be turned on or off, usually by pressing the “CC” button on the video player.

Open captions, on the other hand, are encoded via video embedding. This encoding workflow permanently burns captions into the video, meaning that they are always showing and cannot be toggled off.

Open captions eliminate rendering inconsistencies across different video players and allow publishers to control the exact size and style of the captions. Open captions also make it easier to create DVDs and other physical media. Open captioned video files can be imported into any NLE or DVD authoring software.

Because open captions are part of a video itself, they are supported by all video players and devices. Discover more about recorded caption encoding workflows.
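The closed-versus-open distinction maps to two different workflows in practice. As a rough sketch using the widely available ffmpeg tool (the filenames are illustrative, the commands are only constructed rather than executed, and burning subtitles in assumes an ffmpeg build with subtitle filter support):

```python
# Sketch of two common ffmpeg invocations for recorded caption workflows.
# Closed: mux the SRT as a toggleable sidecar track inside the container.
# Open: burn the captions into the picture so they can never be turned off.

def closed_caption_cmd(video: str, srt: str, out: str) -> list[str]:
    """Copy streams and add the captions as a soft subtitle track (MP4)."""
    return ["ffmpeg", "-i", video, "-i", srt,
            "-c", "copy", "-c:s", "mov_text", out]

def open_caption_cmd(video: str, srt: str, out: str) -> list[str]:
    """Re-encode the video with captions rendered permanently into frames."""
    return ["ffmpeg", "-i", video, "-vf", f"subtitles={srt}", out]

print(closed_caption_cmd("talk.mp4", "talk.srt", "talk_cc.mp4"))
print(open_caption_cmd("talk.mp4", "talk.srt", "talk_open.mp4"))
```

Note the trade-off visible in the sketch: the closed workflow copies streams without re-encoding, while the open workflow must re-encode the video because the caption pixels become part of every frame.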

Subtitle Encoding

Pixels

Subtitles, while closely related to captions, differ in their encoding processes.

Subtitles are often encoded as bitmap images, which are far more compatible with newer digital media delivery methods. HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with subtitles. The same goes for some streaming services and OTT platforms. SDH or other subtitling formats may be used on these platforms due to their inability to support traditional Line 21 broadcast closed captions.

Review the differences between closed captions and subtitles for the d/Deaf and hard of hearing (SDH).

The Complete Guide to Caption Encoders

Title page of The Complete Guide to Caption Encoders: an eBook by 3Play Media

To determine the encoding needs of your next video project, it’s crucial to ask some key questions to gain a full understanding of the numerous types of caption encoders and transmission methods available. 

In 3Play Media’s The Complete Guide to Caption Encoders, we break it all down for you. This free eBook:

  • Defines caption encoding
  • Helps you determine whether you need an encoder
  • Explains the different types of encoders and encoder alternatives

Encoders can seem daunting, but they’re an important part of making both live and recorded captions fully accessible to viewers. By learning the basics of caption encoder workflows, you can take the next step towards making your media accessible in the most efficient way possible.

The Complete Guide To Caption Encoders: Get Your Free Guide

About the author

Related Posts

The post Demystifying Caption Encoder Workflows appeared first on 3Play Media.

]]>
SDH vs. CC: What’s the Difference? https://www.3playmedia.com/blog/whats-the-difference-subtitles-for-the-deaf-and-hard-of-hearing-sdh-v-closed-captions/ Mon, 06 Mar 2023 05:00:00 +0000 https://www.3playmedia.com/blog/whats-the-difference-subtitles-for-the-deaf-and-hard-of-hearing-sdh-v-closed-captions/ • The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them [Free Ebook] When it comes to media accessibility, one of the most common questions from television viewers revolves around the differences between subtitles and closed captions. But between the rise of streaming content and the global use of the...

The post SDH vs. CC: What’s the Difference? appeared first on 3Play Media.

]]>

  • Captioning

SDH vs. CC: What’s the Difference?


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them [Free Ebook]


When it comes to media accessibility, one of the most common questions from television viewers revolves around the differences between subtitles and closed captions. But between the rise of streaming content and the global use of the term “subtitles” versus “captions,” the answer has become complicated.

As the lines between subtitles and captions continue to blur, perhaps no distinction has become more confusing than the one between subtitles for the d/Deaf and hard of hearing (SDH) and closed captions (CC).

The issue of SDH vs. CC has been compounded by the availability of both options on certain streaming platforms. Adding further confusion, there are also the matters of:

  • Mixed usage of terminology 
  • Different interpretations of what makes a timed text file SDH or CC
  • General misinformation on the purpose and function of SDH vs. CC files

This widespread confusion is precisely why we’ve decided to tackle SDH vs. CC in this blog. We’ll review the key differences between subtitles and closed captions, closely examine SDH subtitles, cover each of their respective roles and use cases, and explain why some streaming services are moving towards offering both options to viewers.


Looking for a described version of this video? We’ve got you covered!

Defining Subtitles and Captions

Before fully understanding the difference between SDH and closed captions, it is helpful to first understand the basic differences between subtitles and captions.

Person sitting between two boxes that read "CC" and "sub".

How are they alike?

Both subtitles and captions are timed text files synchronized to media content, allowing the text to be viewed at the same time the words are being spoken. Captions and subtitles can be open or closed.

How are they different?

In the United States and Canada, subtitles are intended for hearing viewers who do not understand the language being spoken. Traditionally, subtitles show the spoken content but not the sound effects or other audio elements. They are often used to refer to translations (think: subtitles for a foreign film). In places like the UK, the term “subtitles” is used to describe both subtitles and captions.

Closed captions are designed for d/Deaf and hard-of-hearing audiences. They communicate all audio information, including sound effects, speaker IDs, and non-speech elements. They originated in the 1970s and are required by law for most video programming in the United States and Canada.

What are Subtitles for the d/Deaf and Hard of Hearing (SDH)?

It’s important to note that there are a few different types of subtitles. The most frequently used types are known as: SDH, non-SDH, and forced narrative (FN).

SDH stands for subtitles for the d/Deaf and hard of hearing. These subtitles assume the end user cannot hear the dialogue and include important non-dialogue information such as sound effects, music, and speaker identification. In the United States and Canada, SDH traditionally assumes the end user cannot hear the audio, whereas traditional subtitles (also referred to as non-SDH) assume the viewer can hear the audio but doesn’t know the spoken language.

SDH often emulates closed captions on media that does not support closed captions, such as digital connections like HDMI or OTT platforms. In recent years, many streaming platforms, like Netflix, have been unable to support standard broadcast Line 21 closed captions. This has led to a demand for English SDH subtitles styled similarly to FCC-compliant closed captions instead. 

SDH can also be translated into foreign languages to make content accessible to d/Deaf and hard-of-hearing audiences who speak other languages.
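To make the difference concrete, here is the same moment rendered both ways (the dialogue, speaker name, and sound cue are invented for illustration):

```python
# Illustrative example (invented cue text): the same moment as a plain
# non-SDH subtitle versus an SDH subtitle, which adds a speaker ID and a
# non-speech sound cue for viewers who cannot hear the audio.

non_sdh = "I think someone's at the door."

sdh = "[knocking]\nMARIA: I think someone's at the door."

# SDH carries everything the non-SDH cue does, plus the audio context.
print(non_sdh)
print(sdh)
```

A hearing viewer reading a translation loses nothing with the first cue; a d/Deaf or hard-of-hearing viewer needs the second to know why the character said it.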

Translation
Translation is often cited as a major difference between subtitles and captions. But can’t captions also be in other languages?

 

Yes! It’s common in the United States and Canada to find closed caption offerings in Spanish and French, along with other languages. The FCC even requires Spanish CC for all Spanish television programming in the US. There are limitations with translated closed captions, however. Because of CC’s line limits and lack of extensive international character support outside of Western languages, SDH subtitles are preferred to get the most accurate translations for d/Deaf and hard of hearing viewers across languages.

 

3Play Media Explains… SDH vs. CC – Watch the Video 👀

 

A Deep Dive into SDH vs. CC

SDH subtitles and closed captions are closely related, and there’s often confusion between the two. One of the main reasons? Preferred jargon.

The term “closed captions” has dominated the vernacular for nearly half a century in North America. The term “subtitles” has encapsulated any timed text format in the UK and other parts of the globe. 

But in recent years, rapid developments in streaming content and the globalization of media have shaken up the popular nomenclature across the world. This has left viewers and users of these accessibility services scratching their heads and wondering how SDH and CC are different.

Appearance

Example of SDH subtitles styled to closely resemble closed captions. Text reads "I'm street smart..." in white text on a semi-transparent black background.

SDH subtitles styled to closely resemble closed captions: white text with a semi-transparent black background.

SDH subtitles have a lot of flexibility in terms of appearance. They can be customized by professional captioners to look exactly like closed captions, or styled to match a customer’s request or platform’s specifications. 

Example of SDH subtitles styled to a standard subtitling appearance. Text reads "I'm street smart..." in white text, black outline, no background.

SDH subtitles styled to a standard subtitling appearance: white text, black outline, no background.

SDH subtitles’ appearance can sometimes be determined by a video player or platform, which sets the appearance independently of the original captioner. Occasionally, SDH can also be customized by the end user, but this varies based on the player or platform’s customization options.

 
Example of closed captions. Text reads "I'm street smart..." in white text on a black background.

Default closed captioning style: white text on an opaque black background.

By default, closed captions are displayed as white text on a black box, with placement that is customized on the captioner’s end. This has changed over the years with the introduction of digital television and 708 captioning standards, which allows for user customization.

User Customizations
When customization options are available to users, they can choose from a variety of font, sizing, and color options for SDH or CC. Customization options vary depending on the television, video player, or OTT platform capabilities.

Placement

SDH subtitles and closed captions are both capable of supporting placement. Viewers often find SDH and CC are placed in the bottom center, with movement to the top to avoid lower thirds. Some styles of CC may include horizontal placement to indicate speaker changes.

SDH can theoretically be placed anywhere on the screen if they are burned in. As a best practice, SDH are typically centered for readability and ease in the translation process.

Caption placement is usually implemented by a captioner and cannot be adjusted by the user unless the captions are formatted to 708 standards. According to FCC rules, captions must be positioned in such a way to avoid covering important lower third graphics.

Ultimately, SDH and CC positioning is dictated by the file type being used, or by the requested formatting specs from a platform or television network. 

Why are SDH and CC often centered?
Many streaming platforms and networks are moving towards center placement for both SDH and CC files for readability. It’s still common to encounter CC positioning to indicate speakers, but current trends point to left-justified, center-aligned SDH and CC.

 

Streaming services that follow this trend include Netflix and Amazon.

Encoding

The move from analog television to high-definition (HD) media over the last 20 years had major implications for the encoding of closed captions and subtitles.

Standard 608 closed captions are transmitted via Line 21 as a stream of commands, control codes, and text. 708 closed captions are transmitted via MPEG-2 video streams in MPEG user data.

Subtitles, on the other hand, are often encoded as bitmap images – a series of tiny dots or pixels. And this method of transmission is a lot more compatible with newer digital media methods.

HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with SDH subtitles. The same goes for some streaming services and OTT platforms. SDH formats are increasingly used on these platforms due to their inability to support traditional Line 21 broadcast closed captions. That being said, some classic captioning formats, like SCC, have proven to be versatile across television and digital formats.

SDH vs. CC: At a Glance

| Feature | SDH | Closed Captions |
| --- | --- | --- |
| Timed text synced to video | Yes | Yes |
| Can be turned on/off | Yes | Yes |
| In source language | Yes | Yes |
| Speaker identification | Yes | Yes |
| Sound effects | Yes | Yes |
| Translation options | Yes | Limited |
| Text appearance | Varies; often white text on a black or semi-transparent background to mimic captions | Usually white text on a black background |
| On-screen placement | Varies; typically centered at the bottom, with movement to the top for lower third graphics | Varies |
| Encoding | Supported through HDMI | Not supported through HDMI |

 

Why Do Streaming Platforms Sometimes Include Both SDH and CC?

While many streaming and OTT platforms only offer one timed text option for viewers to use, some have started offering both SDH and CC options when available.

Apple TV+ is one of such platforms offering a wide array of accessibility choices for viewers on select programming. Depending on the program chosen, a viewer could find themselves choosing between CC and SDH. So why offer this?

Person thinking with text in a thought bubble: "English CC, English SDH, English non-SDH."

The answer can differ depending on the platform, but by offering both options, viewers are able to choose the format they prefer. In situations where no distinction is made between CC and SDH, the files could be considered one and the same.

When both options are available to select, the captions likely originate from a true CC file and are formatted to match that style, whereas the SDH could be a simpler timed transcript in the source language intentionally designed for translation into other languages. The difference between the two isn’t always clear when both are offered on a platform, but it usually comes down to how each is displayed.


Closed captions and subtitles for the d/Deaf and hard of hearing are like siblings: closely related, with similar mannerisms, yet each has their own unique traits and appearance.

Like many media accessibility services, CC and SDH are nuanced and tricky to definitively declare as being one specific solution designed for one specific purpose. In the greater scheme of timed text files, either solution offered by a television network or streaming platform will provide an accessible experience for viewers.

Neither CC nor SDH will ever fit neatly into one box, and it’s possible that defining them may only get more complicated as digital video evolves. But one thing remains certain about CC and SDH: they will always serve the d/Deaf and hard of hearing community first and foremost.


The Ultimate Guide to Subtitles: Different Types, How They Work, and When to Use Them. Download the eBook 

This blog was originally published by Lily Bond on May 21, 2014 as “How Subtitles for the Deaf and Hard-of-Hearing (SDH) Differ From Closed Captions.” This blog was updated on August 24, 2021 by Elisa Lewis and has since been updated again for comprehensiveness, clarity, and accuracy.


About the author

The post SDH vs. CC: What’s the Difference? appeared first on 3Play Media.

]]>
What are Forced Subtitles? https://www.3playmedia.com/blog/what-are-forced-narrative-subtitles/ Tue, 14 Feb 2023 14:52:13 +0000 https://www.3playmedia.com/blog/what-are-forced-narrative-subtitles/ • Download the [FREE] Checklist: Dubbing We previously covered SDH subtitles, non-SDH subtitles, and when they’re used; the difference between SDH subtitles and closed captions; and how subtitles vary from closed captions in general. That leaves us a common yet important subtitle type that most viewers never actually have to toggle on: forced narrative subtitles....

The post What are Forced Subtitles? appeared first on 3Play Media.

]]>

  • Subtitling

What are Forced Subtitles?


Download the [FREE] Checklist: Dubbing


We previously covered SDH subtitles, non-SDH subtitles, and when they’re used; the difference between SDH subtitles and closed captions; and how subtitles vary from closed captions in general. That leaves us with a common yet important subtitle type that most viewers never actually have to toggle on: forced narrative subtitles.

A number of subtitling types exist in the world of video translation and localization services. The most commonly used subtitles include: 

  • Subtitles for the Deaf and Hard of Hearing (SDH)
  • non-Subtitles for the Deaf and Hard of Hearing (non-SDH)
  • Forced Narrative (FN) 

Forced narrative subtitles are crucial to supporting audience comprehension in a number of programs, regardless of the genre. So why is that? In this blog, we’ll explore what forced narrative subtitles are, what they look like, and when to use them.

What are Forced Subtitles, and What Purpose Do They Serve? 

Forced narrative (FN) subtitles, sometimes referred to as forced subtitles, are overlaid text used to clarify dialogue, burned-in text graphics, and other pertinent information that is not otherwise explained or easily understood by the viewer. Forced narrative subtitles are typically used in video translation and localization workflows to ensure any viewer can understand critical textual elements displayed on screen.

Forced narrative subtitles broaden the viewing experience across a wide range of countries, languages, and devices. FN subtitles are delivered as separate timed text files; therefore, they are not burned into the video.

How are Forced Narrative Subtitles Different from Traditional Full Subtitles?
Forced subtitles clarify only the necessary information that would not be understood by the audience. The subtitles are “forced” because a viewer will not have to toggle the subtitles on to see them.
 

A full subtitle file translates the entirety of a program’s content, but must be toggled on by the viewer. It may or may not contain forced narrative content, depending on the viewing platform and other factors, such as timing. Information that would appear in a forced narrative, like the translation of a sign or other on-screen text, may be dropped from a full subtitle file if dialogue is happening at the same time the text is displayed; dialogue translation takes precedence over forced narrative elements in these cases.

What Do Forced Subtitles Look Like?

Many OTT providers will not display forced subtitles unless the Subtitles/CC settings are set to “off.” That being said, some platforms, like Netflix, incorporate forced narrative content into full subtitling and closed caption files.

When forced narrative subtitles are displayed on their own, their appearance can mirror that of typical subtitling or closed captioning files. And much like subtitles and captions, the visual appearance of FN subtitles varies depending on the platform, player, television, or other viewing device.
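The toggle behavior described above can be sketched as simple player-side logic (the cue data and `forced` flag field here are hypothetical, modeled on the forced-track flags found in container formats like MKV and in streaming manifests):

```python
# Sketch of player-side cue selection for forced narrative subtitles.
# Hypothetical data model: each cue carries a "forced" flag.

cues = [
    {"text": "Good afternoon.", "forced": True},        # translated foreign line
    {"text": "How was your day?", "forced": False},     # ordinary dialogue
    {"text": "Boston, Massachusetts", "forced": True},  # on-screen label
]

def visible_cues(cues: list[dict], subtitles_on: bool) -> list[str]:
    """With subtitles off, only forced cues display; with them on, all do."""
    return [c["text"] for c in cues if subtitles_on or c["forced"]]

print(visible_cues(cues, subtitles_on=False))  # forced cues only
print(visible_cues(cues, subtitles_on=True))   # full subtitle track
```

This mirrors the behavior many OTT players implement: the forced cues "leak through" even when the viewer has subtitles switched off, because the content is unintelligible without them.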

 

Adding dubbing or voice-over to your video? This checklist covers everything you need to consider 💬

How Are Forced Subtitles Used?

Forced narrative subtitles are commonly used in several scenarios. Let’s explore these different use cases for FN subtitles to better understand what they are and how they work.

Sporadic Foreign Language

Although a film may be in one source language, occasionally certain characters will use a phrase or short segment of a different language. 

Person speaking on the phone. A speech bubble above the person reads "Guten Tag." Below, a forced narrative subtitle reads "Good afternoon."

One scenario might be a German character living in the United States who makes a phone call to a family member where they speak in German. If the information during this scene is important to the plot and overall understanding of the movie or show, FN subtitles will be used to translate the conversation.

 

Translation of Labels

Sometimes burned-in text graphics are used to enhance the viewing experience. Oftentimes, these are labels for locations, names, or dates. Since they are burned into the video in the original language, FN subtitles can be used to translate these into another language for viewers.

Silhouette of Boston, Massachusetts with Chinese characters written above it. Below, a forced narrative subtitle reads "Boston, Massachusetts."

This image showcases an example of a film containing a location label in the original language at the top. When shown in the United States, English FN subtitles would be used to translate the city name for English-speaking viewers to understand.


Other Forms of Communication

Forced narrative subtitles are helpful in cases where other forms of communication are showcased in a video, such as non-verbal formats like sign language, or fictional languages like Game of Thrones’ Dothraki or the Elvish dialects in The Lord of the Rings.

Person using sign language with blackboard with a sketch of a tree behind them. Below, a forced narrative subtitle reads "Today we're learning about trees."

For example, if a character communicates in sign language, forced narrative subtitles would be used to clarify the meaning for viewers who aren’t familiar with the language. This example shows forced narrative subtitles below a teacher communicating via sign language.


Transcribed Dialogue

Sometimes forced narrative subtitles are used for transcribed dialogue in the same language. This is done to assist audience members when audio is inaudible or distorted.

Police cruiser chasing a car with explosions behind them. Below, a forced narrative subtitle reads "We're in pursuit!"

It may be hard to hear dialogue in an action movie with a lot of background noise, or in a documentary with poor audio quality. In either of these cases, FN subtitles could be used to clarify dialogue for the viewer.


Forced Narrative Subtitling with 3Play Media

Did you know 3Play Media creates forced narrative subtitles for video content?

Our experienced translation and subtitling team creates forced narrative subtitles for video content across networks and major OTT platforms. View our plans, and get in touch with 3Play Media to get started!

Not sure if you need forced narrative subtitles?

We’re here to help. Our team is filled with experienced localization professionals who have created countless SDH, non-SDH, and FN subtitling files for a variety of networks and streaming platforms. Reach out to begin scoping your project, and we’ll help determine if forced narrative subtitling is right for your content.

Dubbing Checklist: Get your free checklist

This blog was originally published by Elisa Lewis on December 8, 2017, as “What Are Forced Narrative Subtitles?” and has since been updated for comprehensiveness, clarity, and accuracy.


About the author

The post What are Forced Subtitles? appeared first on 3Play Media.

]]>