Live Captioning Archives - 3Play Media
https://www.3playmedia.com/blog/tag/live-captioning/

Real-Time Captioning in the College Classroom 101
https://www.3playmedia.com/blog/real-time-captioning-in-the-college-classroom-101/
Thu, 14 Sep 2023

The post Real-Time Captioning in the College Classroom 101 appeared first on 3Play Media.


  • Live Captioning

Real-Time Captioning in the College Classroom 101


The 3Play Way: Real-Time Captioning in Higher Education [Free Webinar]


As a new school year kicks off, students are stocking up on the traditional academic tools: course books, notebooks, pens, laptops, etc. These items are unquestionably essential to the learning experience for nearly all students. Yet there is another critical learning tool for a significant portion of the student population that often goes overlooked: real-time captions.

Real-time captioning in the college classroom can be just as important as those course books, notebooks, and laptops, especially for D/deaf and hard of hearing students. That’s because captions help remove access barriers, providing an equitable and inclusive way for students to fully experience lectures and participate in class discussions.

So how does real-time captioning in the college classroom work? In this blog, we’re covering all of the most frequently asked questions about classroom captioning: workflows, captioner qualifications and assignments, how captions are ordered, and more. Get out your writing tools and prepare to take notes, because Real-Time Captioning in the College Classroom 101 is now in session.

How does captioning work in a college classroom?


Real-time captions in a live classroom setting can be delivered to a student through different mechanisms. If the student is present in person, they are usually receiving captions on a second screen, such as a tablet or laptop, using a solution known as Communication Access Realtime Translation, or CART.

For on-demand or remote classes that are not live, closed captions are usually provided in a sidecar file alongside the video recording, which can be toggled on or off by the user.
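A sidecar file like SRT is just plain text: numbered cues, each with a start/end timestamp and the caption text. As a rough sketch (the cue text here is invented for illustration), generating one programmatically looks like this:

```python
def srt_timestamp(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp SRT requires."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(cues):
    """cues: list of (start_sec, end_sec, text) tuples -> SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

cues = [
    (0.0, 2.5, "Welcome to today's lecture."),
    (2.5, 5.0, "Let's pick up where we left off."),
]
print(to_srt(cues))
```

Because the track lives in its own file, the player can load or ignore it at the viewer's request, which is exactly what makes sidecar captions toggleable.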

Who is captioning college classes?

For student accommodations in a live classroom, real-time captions are usually transcribed by a live professional captioner. Traditionally, CART utilized a stenographer or in-person captioner and displayed captions on a large screen.

Nowadays, remote CART captioning options and alternatives have become very common, with a remote captioner connecting to the classroom’s audio source, such as a clip-on microphone worn by a professor. The captioner then transcribes the lecture or discussion word-for-word, with live captions populating on a second screen or streaming link to the text.

What about auto captions?
Live automatic captions, or auto captions, are another solution for higher education settings. These captions are machine-generated and offer accommodations at a lower cost, but are generally not recommended for student accommodations in a classroom setting due to their lower accuracy and limited options for audio capture. Live automatic captions tend to work best for low-visibility events or meetings that don’t require professional captioning.

How do the captioners connect to a class?


We touched on CART solutions and how, in the past, a live captioner would sit in the room transcribing as captions populated on a larger screen. While this method is still used for larger events, it’s becoming less common due to advances in technology that allow for greater flexibility with real-time captioning.

Remote CART or similar captioning experiences allow remote live professional captioners to connect to a class’s audio via sources such as phone, RTMP, iCap, Zoom meetings, and more. The lecture is then live captioned, with captions displayed via a second screen or streaming link.

What kinds of qualifications do live professional captioners have?


Real-time captions for college classrooms require a high degree of accuracy to provide an equivalent experience for students requesting accommodations. Live professional captioners should be experienced in providing high-quality, accurate captions and following best practices for real-time captioning.

At 3Play, live professional captioners undergo a rigorous certification process and use 3Play’s innovative proprietary voice writing technology to produce accurate and comprehensive real-time captions. 

How accurate are real-time captions for college classrooms?


Live captioning accuracy can be tricky to determine because of a couple of factors at play: Word Error Rate (WER) and Formatted Error Rate (FER). WER is used as the standard measure of transcription accuracy in captions. FER accounts for errors in formatting, sound effects, grammar, and punctuation and is a better representation of the experienced accuracy of captions. 

Both of these measurements are crucial to accuracy, yet WER is the one most often used by live captioning vendors when reporting accuracy. Unfortunately, WER on its own is usually not enough to support an accurate and equitable learning experience for students, and that’s where FER comes in. Errors that only FER captures can impact a student’s understanding of the lecture and discussion if punctuation, formatting, and other complexities aren’t captioned correctly.

It’s important for live captions to boast a high accuracy rate that takes into account both WER and FER. 3Play’s innovative combination of humans and technology allows us to consistently obtain high levels of accuracy and quality for college classroom captions.
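As a simplified sketch of the WER side of that distinction, WER is typically computed with a word-level edit-distance alignment: (substitutions + deletions + insertions) divided by the reference length. Vendor measurement pipelines are more involved, and an FER-style comparison would additionally keep punctuation and formatting in the text being compared:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a standard Levenshtein alignment over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the mitochondria is the powerhouse of the cell"
hyp = "the mitochondria is a powerhouse of the cell"
print(word_error_rate(ref, hyp))  # one substitution in eight words
```

Stripping case and punctuation before comparing yields a WER-style number; leaving them in surfaces exactly the errors WER ignores and FER counts.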

What about context?
Context is another important factor at play when it comes to accuracy, but it isn’t the easiest to measure. Varying subject matter and diverse courses mean that context can be key for captioners transcribing numerous classes for individual students seeking accommodations.

3Play approaches the context piece of accuracy through a diverse pool of live professional captioners who specialize in an array of topics. These captioners have been certified through our rigorous process and are able to capture the intent of the speaker, ensuring that a class’s proper names, key words, and terminology are captioned correctly.

Additionally, 3Play future-proofs real-time captioning accuracy with robust customization options like custom speaker labels, curated event instructions, and wordlists, which can be uploaded and made available for live captioners to review and reference prior to an event.

 


 

How do schools coordinate real-time accommodations for students?


Colleges, universities, and other higher education institutions may handle and coordinate real-time accommodations differently, depending on workflows, budget, and other student needs.

Usually, schools dedicate a position or even a department to handling the accommodation and/or captioning process. These can include CART Supervisors, Real-Time Captioning Coordinators, Student or Disability Services professionals, Access or Disability Resource Center professionals, and more. Student accommodation requests are submitted to these professionals or departments, who then coordinate fulfillment of the accommodation, such as real-time classroom captions.

How do real-time accommodation professionals order and pay for captions?


Higher education professionals usually have a wide range of needs for live accommodations: lectures, meetings, conferences, webinars, and more. These events may be hosted by different departments, campuses, and even individuals. Some universities and colleges have a centralized location and a clear policy for student accommodations; others are only beginning to centralize and still have a ways to go; others still use accommodation platforms, like AIM.

This range of needs and policies means ordering and paying for captions can become complex for higher education professionals. They may be the ones doing the actual ordering for all captions, or departments and professors could be tasked with directly carrying out the accommodations with a university’s captioning vendor.

Ordering and billing needs are going to be different at every institution, so vendor agility is very important here. 3Play takes a flexible approach to these aspects by giving professionals exactly what they need to track spending and budget, whether it’s full visibility into how the institution is spending on accessibility services like real-time captioning, or small-scale, single projects with specific purchase orders (P.O.s) attached.

How do real-time accommodation professionals overcome issues with getting captions?


No matter who is directly coordinating real-time accommodations, common issues in the classroom captioning process revolve around captioner coverage, staffing shortages, lack of vendor support, tech issues, and cumbersome workflows. These can make for a poor captioning experience for not only the students, but also the professors, administrators, and other staff trying to create an inclusive learning environment.

Fortunately, there are some key traits to seek in a captioning vendor that will help mitigate inefficient methods for providing real-time accommodations for students. 

How 3Play Supports Students

3Play Media is a trusted provider of accessibility services for colleges and universities. We offer future-proof solutions to transform your university’s accessibility and operational efficiency with a wide range of services, including real-time captioning, closed captioning, audio description, and translation.


Our real-time classroom captioning services are designed for your budget and peace of mind. Here’s how:

We Eliminate Hours of Manual Work for Your Staff

With our user-friendly platform and flexible workflows, your staff can easily manage recurring events, canceled classes, and captioner assignments at the push of a button.

We Are a Reliable Partner with Limitless Scalability

Our marketplace structure ensures your courses will be matched with a qualified professional, regardless of whether you need to support one class or a dozen.

We Offer Compliant Real-Time Captions with 98%+ Accuracy

We offer compliant live solutions that meet all applicable accessibility regulations and provide word-for-word transcription with 98%+ measured accuracy.

We Provide Rapid and Attentive Support

Our on-call tech support team will assist you with any issues before and during each scheduled course.

We Have Flexible Billing Options 

Our flexible billing options allow you to easily track spending with university or department-based billing.

 

Learn more about real-time captioning in higher education.



Closed Captioning Types: Learn the Difference Between Pop-On, Roll-Up, and Paint-On
https://www.3playmedia.com/blog/roll-up-vs-pop-on-captions-whats-difference/
Mon, 03 Jul 2023

The post Closed Captioning Types: Learn the Difference Between Pop-On, Roll-Up, and Paint-On appeared first on 3Play Media.


  • Captioning

Closed Captioning Types: Learn the Difference Between Pop-On, Roll-Up, and Paint-On


Beginner’s Guide to Captioning [Free eBook]


When beginning the process of ordering captions for your media, it can be easy to get bogged down with all the variations, customizations, and styles that can be applied to your captions. Even the decision of which captioning service to use (live or recorded) can be daunting if you are new to video accessibility.

The good news? Captioning doesn’t have to be complicated, because choosing between pop-on and roll-up captioning styles is simpler than you might think.

In this blog, we will provide you with a comprehensive overview of the three main formats of captioning: pop-on, roll-up, and paint-on. We’ll shed light on their applications, explore use cases, and discover the possibilities for customization within each type so that you’re empowered to make informed decisions for your media.

Pop-On Captions

What are they?

Pop-on closed captions are what you’re most used to seeing in recorded (non-live) broadcast, streaming, and web content. These captions are exactly what they say they are: they pop on your screen and then disappear when the next caption appears.

Who uses them?

Pop-on style is standard for recorded content because these captions can be highly customized to best fit the viewing experience and reflect aspects such as timing, tone, and location of speakers. Closed captioners have the ability to manipulate timing to closely synchronize with words as they are spoken.

Pop-on captions are not used for live broadcast content. The nature of live captioning means that each word written is immediately sent to an encoder, and encoders must wait for all text information before they can display a caption. If live captions utilized pop-on style, the text would be delayed, defeating the point of having quick captions delivered right to the viewer as the program is happening.

What do they look like?

Pop-on captioning example. A man and woman stand side-by-side. A pop-on caption in progress reads "These are pop-on captions."

For optimal readability across viewing platforms, our captioning experts have found that recorded pop-on captions tend to share these qualities:

  • Sentence case
  • Center-placed and justified
  • Rest at the bottom of the screen, moving to the top to avoid lower-third graphics
  • Use speaker dashes to differentiate speakers
  • Off-screen sound (such as voice-over narration, digitized speech, non-diegetic music) conveyed using italics
  • Quotation marks utilized for works of art (movie, show, song titles)
  • Sound effects and music descriptors indicated on their own lines, surrounded by brackets
  • Cleanly broken into two lines at conjunctions, end of clauses, prepositions, articles, or grammatical breaks
  • Timed with ample load and reading time to align with spoken words
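The line-breaking guideline above can be approximated in code. This sketch simply breaks at the space nearest the midpoint of the caption, a crude stand-in for the grammatical boundaries (conjunctions, ends of clauses) a human captioner would choose:

```python
def break_caption(text, max_chars=32):
    """Split a caption into at most two lines at the space nearest the
    midpoint -- a rough proxy for breaking at grammatical boundaries."""
    if len(text) <= max_chars:
        return [text]
    mid = len(text) // 2
    left = text.rfind(" ", 0, mid)   # nearest space before the midpoint
    right = text.find(" ", mid)      # nearest space at/after the midpoint
    candidates = [i for i in (left, right) if i != -1]
    if not candidates:
        return [text]  # no space to break at
    split = min(candidates, key=lambda i: abs(i - mid))
    return [text[:split], text[split + 1:]]

for line in break_caption("These are pop-on captions and they disappear quickly"):
    print(line)
```

A production captioning tool would also weigh reading rate and avoid orphaning articles or prepositions at line ends; this only illustrates the two-line balancing idea.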

Pop-on captioning example. A woman looks to the side. A pop-on caption reads "(Eric) Whoa. I'm doing it. I'm voicing over."

The above aspects of pop-on captions have helped inform 3Play Media’s captioning style, but that is not to say that this is the only way to do pop-on captions; varying styles are commonly applied to the pop-on captions we create, such as:

  • Speaker-oriented placement (this placement follows the speaker around the screen)
  • Speaker IDs, such as a name followed by a colon, or a name in parentheses
  • No speaker IDs for on-screen speakers at all; IDs only for off-screen speech or captions containing dual speakers
  • All uppercase captions or all uppercase speaker IDs
  • Countless other options!

Other considerations

Recorded web captions always display in pop-on style, but due to the limitations of some players and other applications, these captions may lack certain stylistic elements (caption movement, italics, and music notes).

These captions are usually delivered in a sidecar caption file format, such as SRT. Live captions are sometimes delivered in SRT format as well for video-on-demand (VOD) programming.

Zoom pop-on captioning example. A man and woman speak over a Zoom virtual meeting. A pop-on caption in progress reads "Eric, you're on mute."

Live captions on platforms such as Zoom and YouTube only display captions in pop-on style, so viewers of live programs and events on these platforms could experience a slight delay as they wait for all the text to appear.

 

New to captioning? Our Beginner’s Guide has the basics you need to get started 🧑‍💻

 

Roll-Up Captions

What are they?

Roll-up captions continuously roll up onto your screen, one right under the next, allowing for more time for the viewer to read them. The very top line disappears each time a new line populates.

Individual roll-up captions generally require less load time. However, they have a tighter reading rate threshold because multiple sentences stay on screen for a longer period of time. One sentence will appear quickly but will stay on the screen longer than a pop-on caption would.
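That reading-rate trade-off is easy to quantify as a presentation rate. A minimal sketch, using an illustrative 180 words-per-minute threshold (style guides differ on what rate is acceptable):

```python
def words_per_minute(text, duration_sec):
    """Presentation rate of one caption cue in words per minute."""
    return len(text.split()) / duration_sec * 60

# Illustrative threshold only -- caption style guides vary.
MAX_WPM = 180

cue_text, cue_duration = "THEY'RE ON A ROLL, AM I RIGHT, FOLKS?", 2.0
rate = words_per_minute(cue_text, cue_duration)
print(f"{rate:.0f} wpm,", "OK" if rate <= MAX_WPM else "too fast")
```

A cue that exceeds the threshold either needs a longer on-screen duration or a lighter edit, which is the timing work roll-up captioners do implicitly in real time.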

Who uses them?

Live programming uses roll-up style because of the time allowances and ability to quickly synchronize dialogue in real time.

Recorded programming can utilize roll-up captions, but the style is uncommon and outdated. Most producers and platforms prefer pop-on style for offline programming.

What do they look like?

Roll-up captioning example. A man and woman stand side-by-side. The woman is doubled over and grinning at a joke she made while the man sighs. A roll-up caption in progress reads "They're on a roll, am I right, folks?"

Roll-up captions vary in fewer ways than pop-on captions do, but usually share these qualities in live programming:

  • Uppercase
  • Two-line captions at top or bottom
  • Left-justified
  • Two chevrons differentiate speakers
  • When speakers, show hosts, and announcers can be identified, chevrons will be followed by a first name and colon.
  • Quotation marks utilized for film/show titles, segment titles, and works of art
  • Sound effects and music descriptors indicated on their own lines, surrounded by brackets
  • No italics used
  • Line breaking of less concern
  • Timing is slightly delayed and elastic due to a live captioner transcribing as they hear the content

Other considerations

Most recorded, or offline, programming uses pop-on captioning styles, but certain types of content may be in roll-up format. Soap operas are a great example of a type of recorded broadcast content that may utilize roll-up captions for comprehension reasons. In soaps, specific name IDs are used to assist the viewer in keeping track of the multiple characters and storylines and to fit the steady, yet dramatic pace of storytelling.

Paint-On Captions

What are they?

Paint-on captions populate on screen, letter by letter, from left to right. In essence, you see the caption being typed out or “painted on” as you read it. It happens very quickly, so it can be hard to notice this nuance unless an entire show is captioned in paint-on style.

Who uses them?

Paint-on captions are occasionally used for the opening caption of a recorded program to avoid the load-time requirements and slight delay that pop-on captions take to come on the screen.

What do they look like?

Paint-on captioning example. A man and woman stand side-by-side. A paint-on caption in progress reads "And paint-on c".

Paint-on captions are stylized in the same way as pop-on or roll-up captions, depending on the situation. 

Other considerations

Paint-on captions are considered nonstandard in the industry. However, some fast-paced programs, like reality shows, use paint-on captions for the top of their segments when speech begins quickly and producers wish to avoid a delay in the on-screen appearance of closed captions. Overall, it is not recommended to use paint-on style in live or prerecorded broadcasts.

Choosing live or recorded captioning doesn’t completely dictate which caption style you can use. Still, each usually sticks to one style as its standard based on the technical limitations that each type of programming presents.

3Play Media’s experienced captioners usually recommend using roll-up style for live captioning and pop-on style for recorded captioning, making it easy for you to choose what’s right for your media. These different types of closed captioning give you the freedom to customize your media accessibility features and create a positive user experience for your viewers. 

 


 

This blog was originally published by Jena Wallace for Captionmax in February 2022 and has since been updated for comprehensiveness, clarity, and accuracy.



Demystifying Caption Encoder Workflows
https://www.3playmedia.com/blog/demystifying-caption-encoder-workflows/
Tue, 16 May 2023

The post Demystifying Caption Encoder Workflows appeared first on 3Play Media.


  • Captioning

Demystifying Caption Encoder Workflows


The Complete Guide to Caption Encoders [Free eBook]


With such a wide variety of caption encoder workflows available, determining whether to use a physical or virtual encoder can be a complicated process to navigate.

Perhaps you’re making a decision about your encoding method. Or maybe you’re simply trying to figure out whether you even need an encoder at all. Either way, it’s important to have all of the information before you begin, which is why we decided to demystify caption encoding and all of its associated workflows in this blog.

A general understanding of caption encoder workflows can help you best determine how and when encoding is necessary for your media. Read on to discover a high-level overview of caption encoding, a breakdown of specific live and recorded caption encoding workflows, and our detailed resources on each aspect of encoding.

Caption Encoding 101


Sometimes sidecar files, such as SRT or VTT, are not accepted by a platform or television. In these cases, encoding may be necessary to transmit captions. Caption encoding is the process of embedding captions into a video stream.

A caption encoder itself is the piece of equipment or software that a television network or video platform uses to pair the captions with the video and audio stream. Encoders convert captions into data that can be decoded by individual televisions or video players.

A broad range of caption encoder workflows exist for both live and recorded captions. But first, let’s take a look at how caption encoding works in general.

Traditional Caption Encoding


Traditional caption encoder workflows involve the use of physical encoder equipment or software. In general terms, there are three types of encoder connections: telco (analog/modem), telnet (digital/IP), and iCap (only if the encoder is manufactured by EEG). The typical encoder workflow usually goes like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions to the video on a specific data transmission line known as line 21, the line from which televisions are mandated to decode captions.

There are two main standards for encoding and decoding closed captioning data via encoders. These standards were developed based on Federal Communications Commission (FCC) regulations: CEA-608 and CTA-708. Learn more about the differences between 608 and 708 captions and how they can impact captioning workflows.
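At the byte level, CEA-608 characters are 7-bit codes carried on line 21 with an odd-parity eighth bit: the high bit is set whenever the 7-bit code has an even number of 1 bits, so every transmitted byte has an odd count. A minimal sketch of just that parity step (real encoders also handle control-code pairs and field timing, which this omits):

```python
def add_odd_parity(byte7):
    """Apply CEA-608 odd parity: set the 8th bit so the total
    number of 1 bits in the transmitted byte is odd."""
    code = byte7 & 0x7F
    ones = bin(code).count("1")
    return code | (0x80 if ones % 2 == 0 else 0)

# "Hi" as a line-21 byte pair (608 carries two bytes per field per frame).
pair = [add_odd_parity(ord(c)) for c in "Hi"]
print([hex(b) for b in pair])
```

Decoders use the parity check to detect corrupted bytes before rendering them, which is one reason 608 captions degrade relatively gracefully on noisy analog signals.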

Virtual Encoding


Virtual encoding options have expanded in recent years and are popular for web-based platforms or players. Virtual encoders function similarly to physical encoders without the physical box and connection. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. 

Virtual encoders are useful for events that are streamed online, where the virtual encoder will add the captioning data and re-route the video stream to the desired platform.

Web-based platforms don’t usually follow the same data transmission methods as traditional broadcast television, so virtual and alternative encoding options are often used instead. 

Live Caption Encoding Workflows

Live caption encoding allows broadcasters to simultaneously receive and encode captions, allowing them to be displayed alongside a television program or video in real time. 

Live Caption Encoding Methods

The three main physical live caption encoding workflows involve the use of telco, telnet, or iCap encoders.

Telco Encoders

A telco encoder is based on analog technology and connects over phone lines.

Telnet Encoders

A telnet encoder uses an IP and port number to receive the caption data. Similar to a telco encoder, a separate audio line is needed to hear the dialog that needs captioning. 

iCap Encoders

iCap encoders are caption encoders manufactured by EEG. They include iCap software for improved functionality, such as sending audio to the captioner. They can also be set up as IP connections if desired. 

Explore each of these live encoding methods in greater detail in The Complete Guide to Caption Encoders.

Live Virtual Caption Encoding

In March 2023, 3Play Media introduced a new live virtual caption encoding solution, which eliminates the need for additional live captioning hardware. 3Play’s virtual encoding solution delivers high-accuracy, low-latency captions to platforms while streamlining live captioning workflows from listening through delivery. Learn more about 3Play’s virtual encoder developments.


Everything You Need to Know About Caption Encoders


This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use.

Get your free eBook

Other Live Virtual Encoding Options & Alternatives

Aside from 3Play’s Live Virtual Caption Encoding solution, additional virtual encoding options, such as iCap Falcon (by EEG), are available for live captioning purposes.

A growing number of alternatives to encoding have arisen in recent years due to the evolution of broadcast, streaming, and other technological advances. For instance, captions are sometimes handled as a separate track by applications that have built caption functionality directly into their players, such as Zoom and YouTube.

Sidecar files and video player integrations remain popular options for many users due to their ease of use. Integrations in particular help take the guesswork out of whether a video requires encoding by simplifying captioning workflows. 3Play Media offers numerous integrations and partnerships with top video platforms such as Brightcove, Wistia, and YouTube.

Recorded Caption Encoding Workflows

In certain cases, it is necessary to embed recorded captions in the video itself rather than use a separate track. This is done using caption encoders.

Recorded caption encoding ensures that your closed captions will be viewable if you don’t have a video platform, if you want an offline option, or if you need captioned videos for kiosks and social media.

Closed & Open Caption Encoding


Closed captions are usually output on a separate track as a sidecar file and added to a player to be played in sync with the video. In this case, the captions can be turned on or off, usually by pressing the “CC” button on the video player.

Open captions, on the other hand, are encoded via video embedding. This encoding workflow permanently burns captions into the video, meaning that they are always showing and cannot be toggled off.

Open captions eliminate rendering inconsistencies across different video players and allow publishers to control the exact size and style of the captions. Open captions also make it easier to create DVDs and other physical media. Open captioned video files can be imported into any NLE or DVD authoring software.

Because open captions are part of a video itself, they are supported by all video players and devices. Discover more about recorded caption encoding workflows.
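One common way to burn captions into the video itself is ffmpeg's `subtitles` filter, which rasterizes an SRT track onto each frame while the video is re-encoded. This sketch only builds the command (the filenames are placeholders); the exact flags you need depend on your pipeline:

```python
def burn_in_command(video_in, srt_file, video_out):
    """Build an ffmpeg command that renders an SRT track into the video
    frames themselves (open captions), re-encoding the video."""
    return [
        "ffmpeg", "-i", video_in,
        "-vf", f"subtitles={srt_file}",  # draw the cues onto each frame
        "-c:a", "copy",                  # pass the audio through untouched
        video_out,
    ]

cmd = burn_in_command("lecture.mp4", "lecture.srt", "lecture_open.mp4")
print(" ".join(cmd))
# Run it with subprocess.run(cmd, check=True) once ffmpeg is installed.
```

Because the captions become pixels, the output plays with captions visible on any device, at the cost of losing the viewer's ability to toggle them off.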

Subtitle Encoding


Subtitles, while closely related to captions, differ in their encoding processes.

Subtitles are often encoded as bitmap images, which tend to be a lot more compatible with newer digital media methods. HD disc media, like Blu-ray, does not support traditional closed captioning but is compatible with subtitles. The same goes for some streaming services and OTT platforms. SDH or other subtitling formats may be used on these platforms due to their inability to support traditional Line 21 broadcast closed captions.

Review the differences between closed captions and subtitles for the D/deaf and hard of hearing (SDH).

The Complete Guide to Caption Encoders


To determine the encoding needs of your next video project, it’s crucial to ask some key questions to gain a full understanding of the numerous types of caption encoders and transmission methods available. 

In 3Play Media’s The Complete Guide to Caption Encoders, we break it all down for you. This free eBook:

  • Defines caption encoding
  • Helps you determine whether you need an encoder
  • Explains the different types of encoders and encoder alternatives

Encoders can seem daunting, but they’re an important part of making both live and recorded captions fully accessible to viewers. By learning the basics of caption encoder workflows, you can take the next step towards making your media accessible in the most efficient way possible.

The Complete Guide To Caption Encoders: Get Your Free Guide


Legal Requirements for Stadium Captioning
https://www.3playmedia.com/blog/legal-requirements-for-stadium-captioning/
Wed, 19 Apr 2023

The post Legal Requirements for Stadium Captioning appeared first on 3Play Media.

]]>

  • Legislation & Compliance

Legal Requirements for Stadium Captioning


Baking Accessibility into Your Event Strategy [FREE webinar]


Whether it’s a concert, sporting event, or theatrical performance, attending live events is a source of joy and excitement for many people. The energy of the crowd, the spectacle of the performance or game, and the sense of being part of something special all contribute to the magic of live entertainment.

However, for those who are deaf or hard of hearing, this experience is incomplete without access to live captions. In-stadium captioning ensures that everyone has the opportunity to fully experience the event. This blog will cover legal requirements for accessible in-stadium viewing.

The Americans with Disabilities Act

Signed in 1990, the Americans with Disabilities Act (ADA) is the most far-reaching piece of accessibility legislation in the U.S.

The act and its amendments guarantee equal opportunity for disabled people in employment, state and local government services, public accommodations, commercial facilities, and transportation. The ADA affects both public and private entities.

The ADA makes it the responsibility of public and private organizations to provide equal access through appropriate accommodations. The act includes five sections, or “Titles”; Titles II and III impact web accessibility and closed captioning.

Stadium Captioning Accessibility Laws Under the ADA

Under Title III of the ADA, stadiums and arenas must provide auxiliary aids and services, including captioning, to ensure effective communication for individuals who are deaf or hard of hearing. Specifically, the ADA’s regulations on “Nondiscrimination on the Basis of Disability in Public Accommodations and Commercial Facilities” (28 CFR Part 36) provide guidance on the requirements for effective communication for D/deaf and hard of hearing individuals. This requirement applies to both new and existing facilities.

The specific requirements for in-stadium captioning under the ADA include the following:

  • Captioning for public address announcements: Stadiums must provide captioning for all public address announcements made during events, such as game scores, player names, and other important information.
  • Captioning for videos: If stadiums display videos on scoreboards or other screens, they must provide closed captioning for those videos.
  • Captioning for emergency announcements: In the event of an emergency, stadiums must provide captioning for any announcements made over the public address system.
  • Captioning for other communications: Stadiums must also provide captioning for any other communications that are necessary to ensure effective communication for individuals who are D/deaf or hard of hearing.

There is no minimum seating capacity under the ADA that would exempt a stadium or arena from providing accessibility for disabled individuals. The ADA applies to all public accommodations, regardless of their size or capacity.

While captions are legally required for any type of event, specifications may vary based on factors like venue size or the type of event. For example, a sports event may require captioning that can keep up with fast-paced commentary, whereas a concert may require captioning that can be synced to the music.

It’s also important to consider that live captions are not enough to be fully accessible. American Sign Language (ASL) interpreters should be considered for in-stadium events in addition to live captioning. As professional sports sign language interpreter Brice Christianson explained in an episode of 3Play Media’s Allied podcast, English is a second language for many in the Deaf community:

There are two million [people] that use American Sign Language. And so when you look at that, that means that English is their second language. So typically they’re not as proficient in English. So when you’re providing captions and saying, hey, we’re accommodating you, what you’re telling someone is that you better be proficient in English. And you better understand what all these words mean.
– Brice Christianson

 Learn how to bake accessibility into your event strategy🍰 


Past Legal Settlements for Stadium Accessibility

Let’s review some settlements between the U.S. Department of Justice (DOJ) and various universities and venues.

Ohio State University
In 2009, a group of deaf students at Ohio State University filed a complaint alleging that Ohio State’s athletic department discriminated against D/deaf and hard of hearing individuals by failing to provide auxiliary aids and services at Ohio Stadium and Value City Arena at the Jerome Schottenstein Center.

An agreement was reached between Ohio State University and the DOJ that requires the university to provide open captioning on the scoreboard and closed captioning through individual devices at all home games.

The settlement also requires the university to provide open captioning for all public announcements and emergency alerts made through its public address system. Captions must be visible from all areas of the stadium and remain on the scoreboard until the corresponding announcement is complete.

Under the agreement, Ohio State University must also provide training to its staff about how to ensure that the captioning is functioning properly and provide assistive listening devices to D/deaf and hard of hearing attendees.

Ohio State is part of the Big Ten Conference of universities, the oldest Division 1 collegiate athletic conference in the United States. The National Association of the Deaf (NAD) used the Ohio State settlement as a model for other Big Ten universities, sending them a letter outlining the settlement agreement with Ohio State and requesting that these universities adopt similar policies and practices to ensure their stadiums provide equal access to deaf and hard of hearing fans.

The Denver Pepsi Center

In 2018, a deaf individual filed a complaint against the Denver Pepsi Center, alleging that the arena violated the ADA by failing to provide captioning during games.

The owner of the Denver Pepsi Center settled the lawsuit with a consent decree that requires open captions on ribbon boards that can be seen from every seat in the stadium.

The captions cover all public announcements, and an independent monitor was appointed to check the accuracy of the captions.

The University of Maryland

In 2013, the NAD filed a lawsuit against the University of Maryland on behalf of two deaf individuals who regularly attended athletic events at the university, alleging that the uncaptioned events violated the ADA.

The agreement between the University of Maryland and the DOJ requires the university to provide accessible captioning services, including closed captioning on screens and assistive listening devices, for all home football and basketball games.

The University of Maryland must provide captions that are “accurate, complete, and synchronized with the spoken words,” and provide training to staff on the use of captioning equipment and services.

In-Stadium Captioning: A Necessity for Accessibility and Legal Compliance

In-stadium captioning is a legal requirement that must be fulfilled by stadiums and event organizers. Failure to comply with in-stadium accessibility requirements can result in legal action and penalties. Therefore, it is essential for stadiums to prioritize fulfilling these legal requirements to avoid legal consequences and to promote equal access for all fans.

Unlock the power of accessibility at your next event. WATCH THE WEBINAR: Baking Accessibility into your Event Strategy


About the author

Related Posts

The post Legal Requirements for Stadium Captioning appeared first on 3Play Media.

]]>
Are You Captioning Your Meetings? https://www.3playmedia.com/blog/are-you-captioning-your-meetings/ Fri, 27 Jan 2023 16:36:09 +0000 https://www.3playmedia.com/blog/are-you-captioning-your-meetings/ • 2025 State of Automatic Speech Recognition [Free eBook] If you’re one of the millions of Americans who engage in virtual work meetings, then you might be considering adding captions to meetings. Maybe you have a coworker who is deaf or hard of hearing, or perhaps you prefer to read along with a transcript while...

The post Are You Captioning Your Meetings? appeared first on 3Play Media.

]]>

  • Captioning

Are You Captioning Your Meetings?


2025 State of Automatic Speech Recognition [Free eBook]


If you’re one of the millions of Americans who engage in virtual work meetings, then you might be considering adding captions to meetings. Maybe you have a coworker who is deaf or hard of hearing, or perhaps you prefer to read along with a transcript while someone else presents a topic. Whatever your reason is, adding captions to meetings is an excellent step in making internal communication more accessible for all employees, including those who are deaf, hard of hearing, or neurodivergent.

In this blog, we’ll discuss why captions for meetings are important and the various methods for adding captions to your video conference platform.


 Thinking about using auto captions? Learn about the state of automatic speech recognition: 


Why Should You Caption Meetings?

Make your meetings accessible

The Americans with Disabilities Act (ADA) mandates reasonable accommodations for employees and the use of “auxiliary aids and services” to ensure effective communication with people who are d/Deaf or hard of hearing. 

The U.S. Department of Justice regulations for ADA Title II (state and local governments) and ADA Title III (public accommodations) define the term “auxiliary aids and services” as including computer-aided transcription services and open and closed captioning. For meetings, “auxiliary aids and services” means adding captions to ensure employees who are d/Deaf or hard of hearing can participate fully. However, it’s important to note that captions are not a substitute for sign language interpretation.

Additionally, captions can also provide a more accessible experience for people who are neurodivergent or non-Native English speakers.

Make your meetings more engaging

In the age of digital distraction, we tend to juggle multiple tasks all at once. We often split our attention across several devices, and numerous stimuli compete to grab and keep our attention. One way to help viewers maintain concentration and engagement is to caption your meetings.

Captions have been proven to aid in focus and memory. The accessibility committee at the University of South Florida St. Petersburg (USFSP) conducted a report that gives insight into students’ uses and perspectives of captions and interactive transcripts in online courses.

The results show the power of captions and interactive transcripts:

  • 42% of students use closed captions to help maintain focus.
  • 38% of students use interactive transcripts to help with information retention.
  • Test scores increased by 3% for students who used closed captions.
  • Test scores increased by 8% for students who used interactive transcripts.

While this study was done in an educational context, other studies have corroborated the finding that captions help viewers maintain focus and engagement.

Captions also help with reading comprehension, spelling, and pronunciation for different learning styles. Whether there’s complex terminology, poor audio quality, or detailed information, captions and transcripts help clarify content.

Live Auto Captions vs. Live Professional Captions

Now that you’ve decided to add captions to your meetings, you’ll need to choose between a live automatic captioning service or a professional captioning service.


Live professional captions are created in various ways but always include a human captioner. Some standard methods include stenography or voice writing.

Live automatic captions are generated by machine learning algorithms and automatic speech recognition (ASR) technology.

If cost is your biggest concern, live auto captions might be your best option, as they are less expensive than CART or live professional captions. However, inaccuracy often prevents auto captions from being legally compliant. Although there are no official legal guidelines for live captioning quality, the best practice is to make live captions as accurate as possible. The industry standard for closed caption accuracy is 99%, and inaccurate captions are considered legally inaccessible because they do not provide an equal experience.
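To make the 99% figure concrete, accuracy is simply the share of words captioned correctly. A minimal illustration (a back-of-the-envelope calculation, not a formal word-error-rate methodology):

```python
def caption_accuracy(total_words: int, errors: int) -> float:
    """Accuracy rate as a percentage: correctly captioned words / total words."""
    return 100 * (total_words - errors) / total_words

# At the 99% industry standard, a 1,500-word meeting allows roughly 15 errors:
print(caption_accuracy(1500, 15))  # 99.0
```

In practice, vendors also weigh *which* words are wrong; a dropped speaker name or misheard number hurts comprehension more than a dropped filler word.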

If you have coworkers who are d/Deaf or hard of hearing, then accuracy is even more critical, and using live professional captions is necessary. However, live auto captions may work fine for small, internal meetings without any accommodation requests. In these instances, the accuracy of live professional captions might not justify their cost, and live auto captions may be all that’s needed.

If you do choose to use live auto captions for a meeting, we recommend the following measures for increased accuracy:

  1. Ensure high-quality audio, clear speakers, strong internet connection, and no overlapping speech.
  2. Upload a wordlist of any technical terms, phrases, or acronyms that will come up in your meeting.
  3. If you plan on sharing a recording after the meeting, upgrade the recording to full transcription to ensure a legally compliant, 99%+ accurate caption file.
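For step 2, a wordlist is just a cleaned-up list of terms. A minimal sketch of preparing one before upload (exact format requirements vary by vendor; this only normalizes and deduplicates):

```python
def build_wordlist(terms: list[str]) -> list[str]:
    """Normalize a wordlist before upload: strip whitespace, drop blanks
    and case-insensitive duplicates, and sort for easy review."""
    seen = set()
    cleaned = []
    for term in terms:
        term = term.strip()
        key = term.lower()
        if term and key not in seen:
            seen.add(key)
            cleaned.append(term)
    return sorted(cleaned, key=str.lower)

print(build_wordlist(["3Play Media", "ASR ", "asr", "", "CART"]))
# ['3Play Media', 'ASR', 'CART']
```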

How to Caption Your Meetings

To ensure you can add captions to your meetings, the most critical first step is choosing a virtual meeting platform with accessibility features. Some common ones include Zoom and Microsoft Teams.

Multiple virtual meeting platforms, including Zoom and Google Meet, provide in-platform automatic captioning, which allows you to caption a meeting at no additional charge. While these auto captions will not have the same high accuracy as a professional captioner, they can be a great low-budget option for meetings in which no participants require captioning as an accommodation.

If you decide to use an outside vendor for live auto captioning or live professional captioning, you’ll need to use a platform that supports a captioning integration with your preferred vendor.

Here are a few video conference platforms that offer live auto captions:

Zoom

Zoom gives you the option of using either live auto captions or assigning someone to create manual captions. Zoom’s live auto captions only support US English and are subject to common ASR inaccuracies, so we recommend using a manual captioner if a participant requires captions. You can also integrate a third-party captioning vendor for increased accuracy and accessibility.

Microsoft Teams

Microsoft Teams offers live auto captions, translations, and features such as speaker attribution. By default, live captions are displayed in the language that’s spoken during the meeting. Live translated captions allow users to see captions translated into the language they choose. Meeting organizers can also set up CART captioning for increased accuracy and accessibility. The live captioning feature is available in the mobile app.

Google Meet

Google Meet offers live auto captions in multiple languages and is available on the mobile app for Android, iPhone, and iPad. Google Meet does not currently support third-party captioning vendors.

Slack

Slack offers live auto captions in English for Huddles. Slack does not currently support third-party captioning vendors.

Live Captions with 3Play Media

At 3Play Media, our live captioning service allows you to schedule captions for any live meeting or event. We streamline the traditional live captioning workflow by integrating with many video platforms, including Zoom, Brightcove, and more.

To schedule live captions for meetings with 3Play Media, you’ll log into the 3Play online platform, navigate to the live captioning interface, and select “schedule live captions.” From there, you’ll choose the video or conference platform to which you’d like your captions delivered.

You’ll then indicate your account for your preferred meeting platform and the event you’d like live captioned. Once you choose your event, you’ll select the service type for live auto captions or live professional captions. We’ll match your upcoming event with a professional captioner if you choose professional captioning. You can also verify your event start time, stream start time, and caption start time.

After confirming relevant event times, you’ll indicate your estimated event duration, captioning overtime options, and event type, which helps our captioners prepare for the format of your event. You can also add relevant information, such as an event description, speaker names, and a wordlist, which help increase accuracy.

Once you approach your event start time, you can start streaming, and the live captioner or live auto captions will begin captioning in real time. When the event is over, you’ll have full access to the transcript and captions from the live recording and the option to order additional services.


2025 State of Automatic Speech Recognition. Download the report.


Filed under

About the author

Related Posts

The post Are You Captioning Your Meetings? appeared first on 3Play Media.

]]>
What is an EEG Caption Encoder? https://www.3playmedia.com/blog/what-is-an-eeg-caption-encoder/ Tue, 03 Jan 2023 19:51:24 +0000 https://www.3playmedia.com/blog/what-is-an-eeg-caption-encoder/ The Complete Guide to Caption Encoders [Free eBook] Throughout the past few decades, caption encoders have allowed televisions to receive closed captioning transmissions, and they remain widely used for many broadcast and streaming workflows today. There’s several different types of encoder technologies available to help simplify caption delivery of your broadcast and streaming content; in...

The post What is an EEG Caption Encoder? appeared first on 3Play Media.

]]>

  • Captioning

What is an EEG Caption Encoder?


The Complete Guide to Caption Encoders [Free eBook]


Throughout the past few decades, caption encoders have allowed televisions to receive closed captioning transmissions, and they remain widely used for many broadcast and streaming workflows today. There are several types of encoder technologies available to simplify caption delivery for your broadcast and streaming content; in this blog, we will highlight EEG caption encoders like iCap and give an overview of what encoding workflows look like.

What is a caption encoder?

Encoders let a broadcaster simultaneously receive and encode captions, allowing them to be displayed alongside a television program or video in real time.

Modern encoder technology took a big step in 1993, when the Federal Communications Commission (FCC) mandated that TVs include a decoder to receive caption signals, thus allowing a viewer to turn captions on or off on their television. 

Closed vs. Open Captions
“Closed captions” means a viewer is able to toggle on/off the captions, whereas “open captions” are always on.

What is an EEG encoder?

An EEG encoder refers to a captioning encoder manufactured by EEG, such as iCap and iCap Falcon.

iCap encoders

These EEG caption encoders have iCap software for improved functionality, such as sending audio to the captioner, but can also be set up as IP connections if desired. 

iCap-enabled encoders are manufactured by EEG, and with their direction, you can set up the encoder to feed both audio and video to the captioner, making it easier to monitor and caption effectively. 

The video and audio are converted to a data stream in the iCap cloud, which is accessible via an Access Code. Captions are routed through the cloud and into the encoder, where they are married to the stream and ready for broadcast.

iCap encoders can be bought or rented for any type of event or broadcast. They are compatible with a number of broadcast networks, cable channels, OTT platforms, educational institutions, and more.

iCap Access Codes
iCap Access Codes typically look something like this:

Access Code: TV2021

iCap Falcon

iCap Falcon is a virtual encoder offered by EEG. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. iCap Falcon functions similarly to a normal EEG encoder, but is hosted within the iCap cloud.

In general, virtual encoders like iCap Falcon are useful for events streamed online or one-off events that don’t justify purchasing permanent equipment. These encoders add closed captioning data and reroute the video stream to the desired platform, such as YouTube, Facebook, or Vimeo.

iCap Falcon Compatibility
iCap Falcon is compatible with a variety of streaming video platforms including Facebook, YouTube, Twitch, and more.

What does a closed captioning encoding workflow look like?

Three circles with images inside: a person typing at a computer with a data stream above it; a video player with captions on; a pair of hands shaking with a small data cloud above it.

Most closed captioning encoder workflows function like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions with the video on a specific data transmission line called Line 21, from which televisions are mandated to decode captions.
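As a loose illustration of the pairing step (not any vendor's actual implementation), the encoder can be thought of as matching each point in the video timeline with the most recent caption cue from the feed:

```python
from dataclasses import dataclass

@dataclass
class CaptionCue:
    start: float  # seconds into the program when this cue appears
    text: str

def encode(frame_times: list[float], cues: list[CaptionCue]) -> list[tuple[float, str]]:
    """Pair each video timestamp with the caption cue active at that time,
    mimicking how an encoder marries a caption feed to the video signal."""
    paired = []
    for ts in frame_times:
        active = [c.text for c in cues if c.start <= ts]
        paired.append((ts, active[-1] if active else ""))
    return paired

cues = [CaptionCue(0.0, ">> Welcome back."), CaptionCue(2.0, ">> Here's the forecast.")]
print(encode([0.5, 1.5, 2.5], cues))
# [(0.5, '>> Welcome back.'), (1.5, '>> Welcome back.'), (2.5, ">> Here's the forecast.")]
```

Real encoders do this at the signal level (inserting 608/708 data into the broadcast stream) rather than in application code, but the timing relationship is the same.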

The Complete Guide to Caption Encoders

decorative

This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use – to help you create accessible and engaging video content.

Download the eBook for Free

How to know if you need a caption encoder

Not sure if you need a caption encoder? Here’s a rundown of situations that require one:

  • Your program is going straight to broadcast or cable.
  • You’re streaming your live program on Facebook or YouTube.
  • Your video platform requires live captions to be embedded in the stream as 608/708 data.
  • You want viewers who do not have a video player to be able to turn on captions.
  • You want an offline captioning option.
  • You’re captioning video for kiosks and mobile devices.
  • You’re captioning video on social media platforms like Twitter or Instagram.
  • You’re creating a self-contained captioned video that can be distributed as a single asset.

Caption encoding with 3Play Media

When you need caption encoding, 3Play Media has you covered. Simply upload your video file for captioning and transcription processing. If you already have a transcript, you can use the automated transcript alignment service. Once your file has been captioned, you can order the caption encoding service and choose the appropriate encoding profile. Upon completion, you will receive an email notification and be able to download an M4V video with encoded captions.

The video will work with any player or device that supports M4V videos, including QuickTime, iPad, iPhone, iPod, iTunes, JW Player, and Flowplayer. Because the captions are soft-encoded in the video, users will be able to turn them on or off using the video player controls.

The source video that you upload can be in almost any web format that doesn’t use a proprietary codec. When ordering caption encoding, you will have the option to select an encoding profile to optimize video playback for a certain device.

For example, the iPhone5 profile transcodes your video for a target width of 1136 pixels, a frame rate of 30 frames per second, and a bitrate of 3 Mb/sec. You can also use your original source video as long as the video encoding is H.264 and audio is AAC. The closed captions track will be added to the video and put in an M4V container.
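3Play's internal pipeline isn't public, but the general technique of soft-encoding a caption track into an MP4/M4V container can be sketched with a hypothetical ffmpeg invocation (filenames are placeholders; assumes an ffmpeg build with the mov_text timed-text encoder). The snippet below only constructs the command, so you can inspect it before running:

```python
def mux_captions_cmd(video: str, captions: str, output: str) -> list[str]:
    """Build an ffmpeg command that soft-encodes a caption track into an
    M4V container, leaving the H.264 video and AAC audio streams untouched."""
    return [
        "ffmpeg", "-i", video, "-i", captions,
        "-c:v", "copy", "-c:a", "copy",  # copy original streams, no re-encode
        "-c:s", "mov_text",              # MP4-family timed-text caption track
        output,
    ]

print(" ".join(mux_captions_cmd("lecture.mp4", "lecture.srt", "lecture.m4v")))
```

Because the captions ride along as a separate track (soft encoding), players that support the container can toggle them on or off, matching the behavior described above.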

Download a demo video with encoded closed captions – you’ll need to play it in a QuickTime or VLC player and make sure to enable the captions (subtitles). Please note that some versions of Windows Media Player do not support caption-encoded videos.

Note: For social media videos, you’ll need to upload your video in a format supported by the social platform (for example, Twitter takes MP4 videos). Then, order caption encoding > source with open captions.

 

The Complete Guide to Caption Encoders. Get Your Free Guide.


About the author

Related Posts

The post What is an EEG Caption Encoder? appeared first on 3Play Media.

]]>
How to Elevate Your Broadcast’s Live Captioning Quality https://www.3playmedia.com/blog/how-to-elevate-your-broadcasts-live-captioning-quality/ Tue, 08 Nov 2022 22:26:39 +0000 https://www.3playmedia.com/blog/how-to-elevate-your-broadcasts-live-captioning-quality/ FCC Requirements for Closed Captioning of Online Video: Are You Compliant? [Free White Paper] Television is a 24/7 industry. Networks are always broadcasting something, and much of what they’re airing is happening live, in real time. Live captioning is a critical component of these live broadcasts, but can sometimes be a source of frustration for...

The post How to Elevate Your Broadcast’s Live Captioning Quality appeared first on 3Play Media.

]]>

  • Live Captioning

How to Elevate Your Broadcast’s Live Captioning Quality


FCC Requirements for Closed Captioning of Online Video: Are You Compliant? [Free White Paper]


Television is a 24/7 industry. Networks are always broadcasting something, and much of what they’re airing is happening live, in real time. Live captioning is a critical component of these live broadcasts, but can sometimes be a source of frustration for viewers due to style inconsistencies, latency issues, and inaccuracies in transcription. Many have simply accepted that this is just the nature of live captioning. But we’ll let you in on a secret: there are some easy ways to elevate your live captioning quality from acceptable to all-star. We’ll show you how in this blog.

Set Your Live Captioner Up for Success with Prep Materials

One of the easiest and most effective ways to improve your live captioning quality is to provide prep materials to your live captioning vendor ahead of the broadcast. The pacing of live broadcasts means that live captioners must make split-second judgment calls and are ultimately driven to get audio transcribed as quickly as possible in a live environment. While live captioners can correct a word if they mistranscribe, that can be hard to do when there’s no context or materials available ahead of time. That’s where prep materials come in.

When productions provide helpful information such as proper name spellings, key terms, and wordlists, live captioners are able to reference and contextualize this information during the captioning process by creating dictionaries and glossaries of key terminology and names to have on hand as they caption. This ensures more accurate spellings and transcription of your broadcast content, automatically improving your live captioning quality.

So how do these dictionaries/glossaries work? Powerful software used by live captioners (who may be using either a stenography machine or voice writing methods to transcribe closed captions) allows them to maintain active control over the accuracy and formatting of the words they’re creating. 

For instance, say a captioner is scheduled to caption a live baseball game. They’re going to create a robust glossary of baseball vocabulary, ranging from the basic—bat, base, outfield—to the specific—dinger, WHIP, fungo. This dictionary will also include a list of personnel involved with the game: rosters of both teams, the broadcasting crew, umpires, and information about the stadium and city it’s located in.  
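As a rough illustration of the idea (the shorthand entries below are invented for this example and do not reflect real steno theory), such a glossary behaves like a lookup table that expands a captioner's shorthand into full, correctly spelled terms:

```python
# Hypothetical glossary: shorthand entry -> full term for a baseball broadcast.
glossary = {
    "whp": "WHIP",                    # walks plus hits per inning pitched
    "fgo": "fungo",
    "schott": "Schottenstein Center",
}

def expand(shorthand: str, glossary: dict[str, str]) -> str:
    """Expand a shorthand entry to its glossary term, or echo it back unchanged."""
    return glossary.get(shorthand.lower(), shorthand)

print(expand("schott", glossary))  # Schottenstein Center
```

Professional steno and voice-writing software maintains far richer dictionaries than this, but the principle is the same: prep materials let the captioner load correct spellings before the broadcast starts.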

A Strong Network Connection = Strong Live Captions

In live programming, issues with antennas and cable can corrupt data streams and the transmission of captions. 608 and 708 closed captioning data is decoded to make captions appear overlaid on a video stream. These streams are typically stable when transmitted over a strong network connection, giving you live captions that appear as the captioner intended.

However, poor weather, weak transmission signals, internet quality, and satellite issues can all affect these captioning streams. This can result in missing words, strange characters, caption placement changes, or even change the color of the captions. 

Some of these factors, like the weather, can’t be helped. But some, like poor internet quality, can be improved. Aim to use the most stable, highest-quality connection possible when working with your live captioning vendor, and test with them pre-broadcast.

While testing, a live captioner should use their captioning software to connect to the client’s encoder or virtual captioning session and send test captions. At this point, they should also confirm with you that captions are being properly received and that what they are hearing is correct, be it the sound of the preceding program, tone, or silence. This testing is designed to provide a clean, constant link between the captioner and broadcaster, allowing them to caption with as low latency as possible. 

In the cases where a live stream drops anyway, 3Play offers Stream Reconnect Wait Time to give you peace of mind that your captioning service will pick back up without unnecessary delay.

The Clearer the Speech, the Clearer the Captions

Most producers and captioners of live broadcasts know that audio quality can be a mixed bag. But if you’re looking to make accessibility part of the live broadcasting process from the start and get the highest quality, compliant live captions, consider implementing a few of these tips to get clearer audio during your live broadcasts. (Note that not all audio tips are going to be possible for every broadcast due to the nature of different live events, such as sporting games.)

Aim for High Quality Audio

Whenever on-air talent is speaking, they should be using a high-quality microphone. Ensure speakers are enunciating and speaking as steadily as possible into the mic line to get clear speech. If no direct mic line is present, make it a best practice to have speakers repeat important information and questions in case captioners or viewers don’t hear it the first time.

Avoid Unnecessary Background Noise

Loud cheering, applause, and overlapping chatter are unavoidable in some programs, but when possible, avoid on-air talent and speakers having to compete with loud background noise by dampening sound inside studio settings.

One Speaker at a Time

Overlapping chatter is a top reason why captions may be transcribed incorrectly. When possible, ask on-air talent and speakers to speak one at a time and avoid talking over one another.

 What does the FCC say about the captioning of live and online video? ➡ 

Highly Trained Live Professional Captioners are Key

Using highly trained live professional captioners is essential for live broadcasts. Live automatic captions on their own will not be sufficient for television and OTT streaming, so it’s imperative to ensure humans are part of your live captioning process. But even if you have a live captioner doing the work, how do you know if they’re the right fit for your broadcast?

When vetting a live captioning vendor, check if their live captioners are professionally trained in proven methods like stenography or voice writing. 3Play Media live professional captioners consist of a team of in-house staff and contractors whose robust training and qualifications enable you to get your broadcast programming live captioned at a high level of quality without hassle.

Where does ASR fit into this?
ASR and auto captioning solutions aren’t usually recommended for broadcast television and high-visibility events on their own, but they can serve as a helpful reference for a live captioner who is editing ASR as they go, a backup option when a connection drops, or a way to make content accessible when professional captioners are not an option. If you have a resilient failover captioning solution, ASR can ensure captions keep going and your broadcast remains accessible while you and your captioning vendor troubleshoot any drop issues.

Be Mindful of FCC Guidelines

While no official accuracy standard specifically governs live captioning at this time, Federal Communications Commission (FCC) regulations and other legal precedents still compel live broadcast programming to be captioned at a high level of quality.

The FCC lists best practices for live captioning of televised video for both vendors and captioners, and while they do allow for some leniency in quality compared to recorded captioning quality, they suggest live captioners aim to “caption as accurately, synchronously, completely, and appropriately placed as possible, given the nature of the programming.”

3Play’s live professional captioning accuracy rates typically range between 95% to 98% or higher for live broadcasts, with a focus on comprehensibility. We also provide future-proof paths to upgrading live captions and transcripts to 99% accuracy and FCC compliance for recorded broadcasts or re-air of programming after your live broadcast has ended.
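Accuracy figures like these are typically derived from word error rate: the number of word substitutions, insertions, and deletions between what was said and what was captioned, divided by the number of words spoken. A minimal sketch of that calculation (an illustration of the general technique, not 3Play’s measurement methodology):

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word-level accuracy = 1 - (word edit distance / reference word count)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    prev_row = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        row = [i]
        for j, hw in enumerate(hyp, 1):
            row.append(min(
                row[j - 1] + 1,                 # insertion
                prev_row[j] + 1,                # deletion
                prev_row[j - 1] + (rw != hw),   # substitution (free if words match)
            ))
        prev_row = row
    return max(0.0, 1 - prev_row[-1] / len(ref))

print(round(word_accuracy("captions keep viewers informed",
                          "captions keep viewers informed"), 2))  # 1.0
```

A caption stream that drops or garbles 2 of every 100 spoken words would score 98% by this measure.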

Elevating Your Live Captions

Some of the tips listed above aren’t always going to be possible due to the nature of broadcast television and the fast-paced, ever-evolving media and entertainment industry. But they’re good starting points for producers and networks who strive to build inclusivity and accessibility into the production process from the start.

It may initially require a little extra effort, but when you set high standards for the live captioning of your broadcast programming, you’re creating a better, more inclusive experience for all of your viewers.

 




The post How to Elevate Your Broadcast’s Live Captioning Quality appeared first on 3Play Media.

What is an SCC File? https://www.3playmedia.com/blog/what-is-an-scc-file/ Thu, 27 Oct 2022 13:00:14 +0000

  • Captioning

What is an SCC File?


Closed Captioning Best Practices for Media & Entertainment [Free eBook]


For years, Scenarist Closed Caption (SCC) files have been one of the most commonly used closed captioning files on broadcast television. These closed captioning files hold CEA-608 captioning data in 29.97 drop (DF) and non-drop (NDF) frame rates, and were originally designed for usage with analog television, VHS, and DVDs.

With the rising popularity of streaming and web video content over the past decade, CTA-708 closed captioning requirements, and the end of NTSC analog broadcast transmissions in the United States, it would be easy to write off SCC files as an outdated relic of broadcast’s past or to dismiss their future in a digital landscape.

But this hasn’t been the case for SCCs. In fact, SCC closed captioning files have adapted to video content outside of traditional broadcast, like web and streaming, becoming an incredibly versatile closed caption file type used across platforms and industries. Below, we’ll take a look at the evolution and capabilities of SCC files and find out just what makes these caption files so special.

A Brief History of SCC Files

SCC files were originally developed by Sonic and hold 608 captioning information by design, meaning the files are transmitted via Line 21, a hidden data stream containing closed captions and V-chip data. The 608 closed captioning format was created for analog television but remains in use alongside 708 data today. SCC files use hexadecimal values to encode captioning information, which is deciphered by closed captioning decoders.
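As a toy illustration of that hexadecimal encoding, here is a simplified sketch that decodes one 608 byte pair by stripping the odd-parity bit. (It handles only printable characters; a real decoder must also interpret control and extended codes.)

```python
def decode_608_pair(word_hex: str) -> str:
    """Decode one CEA-608 byte pair (4 hex digits) into its printable characters."""
    chars = []
    for k in (0, 2):
        b = int(word_hex[k:k + 2], 16) & 0x7F  # drop the odd-parity bit
        if 0x20 <= b <= 0x7E:                  # keep printable ASCII only
            chars.append(chr(b))
    return "".join(chars)

print(decode_608_pair("c1c2"))  # 'A' (0x41) and 'B' (0x42) carry set parity bits -> "AB"
```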

Broad Support for Stylistic Elements

SCC files contain the baseline text and timing information necessary for all closed caption files, but also support caption styling information such as positioning, italics, and music notes – all of which are valuable tools for expert captioners to visually convey audio elements, such as dialogue spoken off-screen and lyrics being sung. These stylistic captioning elements are particularly important when it comes to compliance with FCC requirements.

As with other CEA-608 formats, SCC files support a maximum of 32 characters per line for closed captions, and the stylistic elements of the captions must be implemented on the captioner’s end; they cannot be adjusted by the viewer unless the file has been converted to a 708 format.

 

SCC Style Element | Supported?
Positioning | Yes
Italics | Yes
Music notes | Yes
Special characters | Latin language characters supported
Line limits | 32 characters
Number of caption lines | Up to 4 per caption
Viewer can make adjustments | Only if file is up-converted to support 708 formats
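A quick way to sanity-check caption text against those CEA-608 limits is a helper like the following (a hypothetical function, using the 32-character and 4-line constraints described above):

```python
def fits_608_limits(caption_lines: list[str]) -> bool:
    """True if a caption fits CEA-608 constraints: at most 4 lines of 32 chars each."""
    return len(caption_lines) <= 4 and all(len(line) <= 32 for line in caption_lines)

print(fits_608_limits(["[dramatic music]", "We can't stay here."]))  # True
```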

 


Versatility

SCC files are popular due to their flexibility: they conform to traditional broadcast video needs, modern video content platforms, web players, video editing software, and more.

How SCCs Support 708 Closed Captioning Data

SCC files only store CEA-608 caption data. These files are “up-converted” to include 708 data where appropriate when they are run through a decoder or used to embed caption tracks into a master video file. Common “up-conversion” workflows can include exporting into 708-supported formats, such as MacCaption Closed Caption (MCC) files.

Some broadcasters and streaming services will automatically “up-convert” SCC files as part of the post-production process; others may require an additional file type for delivery. If you’re unsure whether your SCC file will end up supporting 708 closed captioning, reach out to your network or platform contact to clarify deliverable file types and ensure you are in compliance with their requirements.

How SCCs are used in Video Editing Software

SCC files are compatible with most video editing software, like Adobe Premiere Pro and Final Cut Pro, making them a favorite of post-production professionals and expert captioners who use SCC files for importing, exporting, and/or conversion to other caption file formats. If you’re looking for a caption file format that will work with your software, chances are an SCC may meet your needs.

How SCCs are used in Streaming & Web Videos

Some of the most relevant and important mediums that SCCs have adapted to in recent years are web, OTT, and streaming platforms. SCC files are supported by multiple web video players like YouTube (it’s their preferred caption file type!) and Vimeo. SCCs are also increasingly accepted on a variety of popular OTT platforms such as Amazon Prime Video, PBS, and Warner Bros. Discovery (which counts HBO Max among its streaming services and networks).

So how do SCCs work on non-broadcast players and platforms such as these? They function as sidecar files. While most standard streaming players are incapable of reading caption data that’s been embedded into a file, SCC files are set up to work as text files if the player is enabled to read and translate that data, much like it would for an SRT file or WEBVTT file.
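For comparison, here is what a single sidecar caption cue looks like in the SRT format (illustrative text and timings):

```
1
00:00:01,000 --> 00:00:04,000
[narrator] Sidecar captions travel alongside the video.
```

And the same cue in WebVTT, which differs mainly in its required file header and dot-separated milliseconds:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
[narrator] Sidecar captions travel alongside the video.
```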

The Decoding Process

Unlike a WEBVTT or SRT file, SCC files cannot be edited directly unless you have professional closed captioning software. This is because SCC files require decoding, no matter the destination of the video. If you were to open an SCC file in a text editor, you’d find raw data: numbers and letters arranged for caption decoders to interpret, like so:

A screenshot of an SCC file opened in a text editor. Raw data in the form of numbers and letters arranged in hexadecimal values.

The timecodes of SCC files are in SMPTE format and, as previously mentioned, are always in either 29.97 DF or NDF frame rates. These frame rates are indicated within the SCC file by a colon for NDF timecodes and a semicolon for DF timecodes. The above example has a starting timecode of 01:00:00:03, meaning the file is timed to a 29.97 NDF frame rate. If the timecode were formatted as 01:00:00;03, it would be a 29.97 DF frame rate.
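That separator convention is easy to check programmatically. A minimal sketch (it assumes a well-formed SMPTE timecode string):

```python
def is_drop_frame(timecode: str) -> bool:
    """SCC convention: a semicolon before the frame count marks 29.97 drop-frame."""
    return ";" in timecode

print(is_drop_frame("01:00:00;03"))  # True  (29.97 DF)
print(is_drop_frame("01:00:00:03"))  # False (29.97 NDF)
```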

When SCC files are exported for a frame rate other than 29.97 (at 23.98 or 25 frames per second, for example), the timecodes are supported via specialized timecode math that broadcast captioning software is designed to calculate, ensuring the caption file remains synchronized to the video.
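As a sketch of the drop-frame arithmetic involved: the standard SMPTE rule skips frame numbers 0 and 1 at the start of every minute except every tenth minute, which keeps 29.97 fps timecode close to wall-clock time. This illustration converts a 29.97 timecode to real seconds (an example of the general rule, not production timecode software):

```python
def smpte_to_seconds(timecode: str) -> float:
    """Convert a 29.97 fps SMPTE timecode (';' = DF, ':' = NDF) to real seconds."""
    drop = ";" in timecode
    h, m, s, f = (int(part) for part in timecode.replace(";", ":").split(":"))
    frames = (3600 * h + 60 * m + s) * 30 + f
    if drop:
        minutes = 60 * h + m
        frames -= 2 * (minutes - minutes // 10)  # skipped numbers, not skipped frames
    return frames * 1001 / 30000  # 30000/1001 fps = 29.97

print(round(smpte_to_seconds("00:00:01:00"), 3))  # 1.001
```

Note that drop-frame timecode 00:01:00;02 and non-drop 00:01:00:00 refer to different real moments, which is why mixing up the colon and semicolon desynchronizes captions.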

While SCC files can technically be edited in a plain text editor, it is inadvisable to do so unless you are a closed captioning professional.

SCCs: the optimal caption file type?

The consistency and versatility of SCC files make them a top choice of closed captioners, broadcast networks, streaming platforms, and anyone looking for a caption file that will work across a variety of video destinations. Though SCC files lack some advanced formatting features and character support, they are likely to meet many of your broadcast and/or web video accessibility needs and help you remain compliant.

Always check with your delivery contacts about exactly which files your network or platform supports before making the big decision about which closed captioning file(s) to go with. Not sure where to start? Our team can help you determine what’s right for you so that you can create accessible, searchable, and engaging videos for your audience.

 




What is 608 and 708 Closed Captioning? https://www.3playmedia.com/blog/difference-cea-608-cea-708-captions/ Fri, 23 Sep 2022 18:00:00 +0000

  • Captioning

What is 608 and 708 Closed Captioning?


FCC Requirements for Closed Captioning of Online Video [Free eBook]


Closed captions are a vital accessibility feature for people who are deaf or hard of hearing. There are two main standards for encoding and decoding closed captioning data under Federal Communications Commission (FCC) regulations: CEA-608 and CEA-708. CEA-608 is the older standard, used for analog television, while CEA-708 is the newer standard, used for digital television.

While 708 is the newer standard, 608 is still widely used because it is compatible with digital televisions. Additionally, some devices, such as streaming players, may only support 608 closed captions.

In this blog post, we will discuss the differences between 608 and 708 closed captions, and explain why both standards are still important.

A Tale of Two Caption Standards

In 2009, the United States completed its transition from analog to digital television, with the DTV Delay Act setting the final switchover date that June. At the time, it was expected that 608 captions would give way to 708 captions, and 708 closed captions subsequently became the FCC’s preferred standard for all digital television.

In an effort to make the transition from 608 to 708 closed captioning smoother, digital television maintained the ability to support 608 captions. And while this support was intended for transitional purposes, 608 closed captions continue to be widely used in digital televisions today.

What are 608 Closed Captions?

608 closed captions (also known as CEA-608, EIA-608, or Line 21 captions) were the standard for analog television. 608 captions are compatible with digital television via picture user data, which was meant to make the transition from analog easier. However, 608 captions do not support any of the appearance or customization options offered by 708 captions.

Appearance

608 captions are most recognizable for their stereotypical closed caption appearance: white text over a black box.

An example of a 608-style closed caption. White text reading "This is what 608 captions usually look like" is centered over a black box.

608 closed captions usually have the classic appearance as depicted in this example.

Transmission

608 closed captions are transmitted via Line 21 captioning data. This is a transmission data stream that carries closed captions as well as V-Chip data (which provides the small TV rating you see in the top corner of the screen, based on violence, language, and more).

Line 21 itself is not viewable on television or videos, but the hidden data is decoded to make captions appear overlayed on a video stream. It has two fields – usually, English captions are transmitted in the first field and Spanish captions are transmitted in the second field.

Languages

608 closed captions only support the display of regular Latin language characters in languages such as English, Spanish, and French. Extended character sets have also been added to 608 over the years to better support Western languages. The two fields available in Line 21 allow for only two language options at a time.

Formatting & Style Options

The formatting options for 608 closed captions are limited, but they include basic support for styles such as placement, italics, and capitalization. These elements must be implemented on the captioner’s end, as the user does not have control over customization options for 608 captions.

 


What are 708 Closed Captions?

708 closed captions (also known as CEA-708/EIA-708/CTA-708 captions) are the newer standard for digital television. 708 captions are not compatible with analog television.

Appearance

708 closed captions are customizable to viewers. Because of this, 708 captions are considered more accessible to individual viewers with unique requirements & preferences – for example, a person who is colorblind may prefer to change the text and background colors to create higher contrast. 

The only appearance-related quality of 708 captioning that a user cannot control is whether the captions are roll-up or pop-on style, because the two styles must be formatted differently.

An example of 708 captions with varying closed caption options including white text over black box, black text on white, yellow text on a semi-transparent black box, cyan text over a black box, and small capitalized text in magenta over a cyan box.

Note: Above is an example of potential options for 708 closed captions. Exact options (colors, fonts, sizing) may vary across televisions.

Transmission

708 closed captions are transmitted via MPEG-2 video streams in MPEG user data, which carries information such as the aspect ratio in addition to 708 captioning data. 708-supported digital encoders have higher processing power and bandwidth, allowing for greater customization of closed captions on the user’s end.

Languages

708 closed captions allow for broader character recognition based on Unicode, which supports a wider array of languages beyond 608’s Latin-based characters, such as Korean, Japanese, and more.

Furthermore, 708 closed caption transmission allows for multiple tracks to be included in one program, extending the multilingual capacity of closed captioning and making programming more accessible for a global audience.

Formatting & Style Options

Greater functionality exists for color and font customization in 708 closed captions, which can be controlled by the user. These settings are often adjustable via the channel box in cable or satellite TV.

Styles such as placement, italics, and case are supported, and the closed captioner will implement best style practices based on the program, as with 608 captions, but the viewer may make further customizations to font, color, backgrounds, etc. in order to best meet their personal needs. 

What kind of customizations can viewers choose from in 708 captions? 8 font options, 3 text sizes, 64 text colors, 64 background colors, background opacity, and dropshadowed (or edged) text!

608 vs. 708 Captions: Which is Better?

608 = old, 708 = new. So that means 708 is better, right? Unfortunately, the answer isn’t that simple.

It’s true that 708 captions are an improved captioning standard, with more options for appearance, placement, and languages. And it’s true that 708 captions are recommended by the FCC as the best closed captioning standard for digital television.

But despite the advancements in technology, 608 closed captions remain the primary format for many transmissions in the United States. In fact, most existing industry-standard caption data formats, such as SCC files, only store CEA-608 caption data by design. These files are “up-converted” to include 708 data where appropriate when they are run through a decoder or used to embed caption tracks into a master video file.

608 captions may be “old-school”, but they remain relevant for digital video today because of their flexibility and ability to conform to modern specs; 608 data can be used as a substitute for 708 data, but 708 data cannot be used in analog systems that are only set up to receive 608. Many broadcast networks still require files with 608 data for compatibility with older devices, rather than assuming all audiences have upgraded to digital television.

608 Closed Captions | 708 Closed Captions
Standard for analog television | Standard for digital television
Compatible with digital | Incompatible with analog
Line 21 transmission | Transmission via MPEG-2 data streams
White text over a black box | Customizable, with options including 8 fonts, 3 text sizes, 64 text colors, 64 background colors, background opacity, and dropshadowed (or edged) text
Supports two languages at a time | Supports multiple languages at a time
Language options limited to regular Latin characters, with some support for extended characters | Language options are extensive and based on Unicode
Caption positioning implemented by captioner; cannot be adjusted by viewer | Caption positioning implemented by captioner; can be adjusted by viewer

The Future of 608 Captions & 708 Captions

The last remaining National Television System Committee (NTSC) analog transmissions were switched off in July of 2021, over 12 years after the DTV Delay Act was passed, but when all closed captions will completely move to the 708 standard remains to be seen.

Between CEA-608’s flexibility and CTA-708’s customizations, both continue to stay relevant in the digital age of broadcast television. And as long as 608 and 708 are supported, 3Play Media can help you meet either or both of the 608 and 708 closed captioning standards while remaining compliant with FCC regulations.


 

This blog was originally published in October 2014 as Closed Captioning for Broadcast Television: What’s the Difference Between 708 Captions and 608 (Line 21) Captions? It was updated on September 15, 2021 by Kelly Mahoney. This article has since been updated again in combination with information from 608 and 708 Closed Captions: A Primer (originally published by Captionmax) for accuracy, clarity, and freshness.

This blog post is written for educational and general information purposes only and does not constitute specific legal advice. This blog should not be used as a substitute for competent legal advice from a licensed professional attorney in your state.



Legal Requirements for Live Captioning https://www.3playmedia.com/blog/legal-requirements-for-live-captioning/ Fri, 04 Feb 2022 18:23:46 +0000

  • Legislation & Compliance

Legal Requirements for Live Captioning

 

Learn how the ADA impacts video accessibility

 

If you’re hosting live virtual events, you should be providing live captions for many reasons: live captions increase accessibility for viewers who are deaf and hard-of-hearing, boost user engagement, comprehension, and learning, and help you comply with legal requirements for live video.

However, legal compliance is sometimes confusing. Many accessibility laws were created before today’s technological innovations, and it can be hard to keep up with relevant lawsuits.

In this blog, we’ll go over the legal requirements for live video and how providing live captioning can help you stay compliant.




ADA Requirements for Live Captioning

Passed in 1990, the Americans with Disabilities Act (ADA) set landmark accessibility requirements that impact both private and public entities. While the ADA does not explicitly mention online or live video, it requires that “auxiliary aids and services” be made available to anyone with a disability to ensure effective communication. Live captioning, also called real-time captioning or CART, is an example of an auxiliary aid for live events.

Under the ADA, captioning is required for:

  • “Public entities,” including state and local governments, in internal and external communications.
  • “Places of public accommodation,” which are public or private businesses used by the public at large. Private clubs and religious organizations are exempt.

Numerous lawsuits set a legal precedent for video accessibility and live captioning. For example, the 2012 lawsuit National Association of the Deaf v. Netflix categorized Netflix, a purely virtual business, as a “place of public accommodation” requiring closed captioning. While this lawsuit concerned closed captioning for pre-recorded content, other lawsuits have targeted live captioning.

In 2006, the National Association of the Deaf (NAD) filed a lawsuit against the Washington Commanders, called the Washington Redskins at the time, for failing to provide captioning during games. The complaint requested live captioning on scoreboards and video monitors for all announcements, plays, and penalties called during the game.

The court ruled that the Washington Commanders needed to make all audio projected into the stadium bowl over the public address system accessible to deaf and hard-of-hearing fans. The court held that the ADA required the team to “provide auxiliary aids beyond assistive listening devices, which are useless to plaintiffs, to convey the: (1) game-related information broadcast over the public address system, including play information and referee calls; (2) emergency and public address announcements broadcast over the public address system; and (3) the words to music and other entertainment broadcast over the public address system.”

Additionally, the NAD sued MIT and Harvard for not providing captions, or providing inaccurate captions, for online educational videos. Under the 2020 settlement, MIT agreed to not only provide captions for recorded content but also live captions for certain events that are streamed online. Harvard also agreed to provide captions for recorded content and live captions for certain live-streamed events.

Rehabilitation Act (Sections 504 and 508) Requirements for Live Captioning

Enacted in 1973, the Rehabilitation Act originally addressed disability discrimination for federal entities or organizations receiving federal funding. Sections 504 and 508 broadened the act’s application to online and live video content.

Section 504 makes accessibility for disabled individuals a civil right. Failure to accommodate disabled individuals can result in a discrimination lawsuit, which applies to federal agencies and any entity receiving federal funding.

Section 508 mandates accessibility for electronic media or IT in federal programs or services. While this section doesn’t explicitly extend beyond federal agencies, many states passed laws called “mini 508 laws” that extend the section’s reach to organizations that receive federal funding.

Section 508 also requires compliance with WCAG 2.0 Level A and AA success criteria, which means that pre-recorded video must have captions and audio description, and live video must be live captioned.




WCAG Requirements for Live Captioning

Aside from state and federal web accessibility laws, the most widely adopted and comprehensive technical standards have emerged from the W3C’s Web Content Accessibility Guidelines (WCAG).

WCAG is a set of standards for making digital content accessible for all users, including people with disabilities. The guidelines:

  • Outline best practices for making web content universally perceivable, operable, understandable, and robust.
  • Define criteria for successful inclusive web design, with ascending levels of compliance (levels A, AA, and AAA).
  • Are composed and reviewed by a global community of digital experts.
  • Connect the world through common information technology and user experience standards.

WCAG 2.0 and 2.1 outline three levels of compliance. Level A is the highest priority and the easiest to achieve. Level AA is more comprehensive and the recommended standard for accessibility. Level AAA is the strictest, most comprehensive standard for accessible design.

Here are the WCAG compliance levels for accessible video and synchronized media:

  • Level A: (1.2.2) Captions are provided for all pre-recorded audio content in synchronized media, except when the media is a media alternative for text and is clearly labeled as such.
  • Level AA: (1.2.4) In addition to Level A compliance, captions are provided for all live audio content in synchronized media.
  • Level AAA: (1.2.6) In addition to Levels A and AA compliance, sign language interpretation is provided for all pre-recorded audio content in synchronized media. 

Since WCAG Level AA is the recommended standard for accessibility, live video content must be live captioned.

Live Professional Captions vs. Live Auto Captions for Compliance

When considering legal compliance and accommodations, it’s imperative to distinguish between live auto-captioning and live professional captioning.

According to the W3C, automatic captions are not sufficient for meeting user needs or accessibility requirements unless they are confirmed to be entirely accurate, which is rarely the case.

We can apply the same logic to live captioning.

While event hosts can choose live auto-captioning, which uses Automatic Speech Recognition (ASR) technology, this option is rarely accurate enough to be considered an appropriate accommodation for deaf or hard-of-hearing viewers. Instead, the most suitable choice is live professional captioning, which uses a professional human captioner to deliver highly accurate real-time captions.


Want to learn more about how accessibility laws impact online video? Download the ebook:

How the ADA impacts online video accessibility. Download the ebook.


Filed under

About the author

Related Posts

Let’s Talk About Live Captioning https://www.3playmedia.com/blog/lets-talk-about-live-captioning/ Tue, 18 Jan 2022 15:45:37 +0000

  • Live Captioning

Let’s Talk About Live Captioning


Live Captioning with 3Play Media [Free Webinar]


One of the most exciting things for the 3Play Media team is developing a new product. Recently, we were thrilled to announce the release of our latest live captioning service, Live Professional Captioning. This service complements our other video accessibility services and will provide customers with the most accurate and reliable live captioning on the market.

Prospective customers often have questions about new products. While we love learning the ins and outs of captioning and streaming, we know the processes can sometimes be intricate and hard to understand. Many questions come up: How does live captioning work? How does live captioning differ from closed captioning? What is encoding? What does it mean to have live captions on a second stream? The list goes on.

In this blog, we’ll go over some of our most frequently asked questions about our live captioning service. Read on to learn more.

What is live captioning?

Live captioning is designed for live events and is performed in real-time. Our live captioning service uses automatic speech recognition (ASR) technology and professional captioners, who receive access to the media at the same time as the viewer, to deliver highly accurate captions in real-time.

“In recorded captioning, we typically go through three rounds of review to get to our 99% accuracy. The fact that we’re holding ourselves to around a 96% accuracy for live captions really says something about the techniques we use in order to generate what we intend to be an equivalent experience to what is being said in the event.” – Stephanie Laing, 3Play Media Senior Product Manager

How is live captioning different from closed captioning?

Live captions are for content happening in real-time, and closed captions are for pre-recorded content. It’s important to note that live captions are typically “closed,” meaning the viewer can turn the captions on or off. While video streaming is used for live and recorded content, live captioning is specifically for real-time videos. For example, real-time events such as a live webinar use live captioning, whereas streaming apps like Netflix would use closed captioning for recorded content.

Live captions are necessary for d/Deaf and hard-of-hearing viewers to access live events and live streaming, which involves playing multimedia content across the internet to allow for real-time viewing.

How does live captioning work?

3Play Media’s live captioning service goes through a three-step process: Listen, Caption, Deliver.

Step 1: Listen. To begin our three-step process, we need a video Real-Time Messaging Protocol (RTMP) stream. As a default, 3Play provides a stream target to which the customer can send their RTMP stream so 3Play can listen to the audio and generate captions.

For customers that aren’t able to send an RTMP stream to 3Play, we can still provide live professional captions. In this case, customers will need to provide a way for the professional captioner to access the event audio, such as a Zoom meeting link or dial-in phone number.

While customers have the option of both streamed and streamless live events for live professional captions, streaming adds flexibility and resiliency through our auto-captioning failover. We recommend customers send 3Play a stream if possible, as streaming allows us to have auto-captions running persistently in the background.

“A key differentiator for us and a part of what makes our service great is that for our professionally captioned events, we are persistently running our auto-captions in the background. We can only do this if we have a stream.”

– Stephanie Laing, 3Play Media Senior Product Manager

Step 2: Caption. Depending on a customer’s preference, the captions are produced by ASR technology or a professional captioner. For streamed events, live professional captions will always have 3Play’s live auto-captions persistently running in the background in case of interruption. In the unlikely event that the professional captioning feed is lost, captions automatically fail over to auto-captions. When the connection is restored, captions revert to the professional captioning feed.
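The failover behavior described above amounts to a simple selection rule: prefer the professional feed, fall back to ASR when it drops, and revert when it returns. The sketch below models that logic for illustration only; it is not 3Play's actual implementation.

```python
# Illustrative model of caption-feed failover: prefer the professional
# captioner's feed, fall back to ASR auto-captions while that feed is down,
# and revert once it reconnects. Not 3Play's actual implementation.

def select_caption_source(professional_feed_up: bool) -> str:
    """Choose which caption feed the audience receives at this moment."""
    return "professional" if professional_feed_up else "auto (ASR)"

# Simulated connection states over the course of an event:
feed_states = [True, True, False, False, True]
delivered = [select_caption_source(up) for up in feed_states]
print(delivered)
# Captions fail over to ASR while the feed is down, then revert.
```

Because the ASR captions are already running persistently, the switch in either direction can happen without a gap in the caption stream.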

Step 3: Deliver. We have multiple delivery methods to make captions available to your audience in real-time: Captions can be embedded in a web page, delivered to an API, or delivered to a second screen.
In-player vs. external delivery

Embedding, in which a customer receives a code snippet to embed into a webpage, is an external delivery method, whereas API and encoding are in-player deliveries. Customers can also use a Second Screen delivery alongside other delivery methods. With streamless events, we must deliver captions to an embed or Second Screen; if we have a stream, we can provide captions for in-player delivery.


Not all live captions are equal. Learn more in our guide ➡


What is a Second Screen delivery?

Similar to our embed delivery method, our Second Screen delivery method offers an external webpage for customers to provide to their audience that renders the live transcript and captions produced during the live event. Audience members can click on the provided URL to view the live transcript and captions produced in real-time. This type of delivery is a common presentation method for CART captions.

The Second Screen delivery is available for all 3Play live captioned events regardless of integration, production, or delivery method.
What is 608 Standard Encoding?

3Play offers real-time 608 Standard Encoding Delivery into video streams for native caption playback on live players.

Encoding is highly desirable for optimal audience experience and caption control in web-based and mobile players when caption embedding and API delivery are not possible.

We always advise customers using 608 Standard Encoding Delivery to set up a secondary stream as a failover in case of interruption. If desired, they may also caption the secondary stream.

How does a secondary stream work?

Customers can use a secondary stream as a backup live stream linked directly between the encoder and the end video player. In the unlikely event of an interruption to the primary stream, the live stream switches to the secondary stream; when the interruption resolves, the live stream reverts to the primary stream.

How do customers set up live captioning?

We make all our video accessibility services easy to order, and live captioning is no exception.

Customers can schedule live captions on-demand in the 3Play Media online platform. Once you schedule your service, we’ll match your upcoming event with a professional captioner.

Customers have the option to submit custom event instructions, speaker names, or wordlists, which the live captioner can review before the event.

As your event start time approaches, you can start streaming, and the live captioner will begin captioning in real-time. When the event is over, you’ll have full access to the transcript and captions from the live recording, as well as the option to order additional services.


Get Started Live Captioning With 3Play Media

At 3Play Media, our live captioning service allows you to schedule captions for any live event. We streamline the traditional live captioning workflow by integrating with many video platforms, including YouTube, Zoom, Brightcove, and more.

Our goal is to make video accessibility easy. With professional captioners, best-in-class ASR technology, and custom features, we deliver a highly efficient, accurate, and reliable captioning solution to make your live events accessible to all audiences.
 


Learn more about 3Play Media’s live captioning service with our webinar Live Captioning with 3Play Media.



The post Let’s Talk About Live Captioning appeared first on 3Play Media.

How to Handle Live Closed Captioning – and the Challenges https://www.3playmedia.com/blog/how-to-handle-live-closed-captioning-and-the-challenges/ Tue, 27 Jul 2021 20:00:28 +0000 https://www.3playmedia.com/blog/how-to-handle-live-closed-captioning-and-the-challenges/ • Technological innovation has paved a new way to conduct business, education, and life in general – particularly in a world forced to adapt to virtual substitutes during the pandemic. Most of the time, the technology we use is very helpful. For example, virtual meeting platforms and live closed captioning software have helped us adapt...

The post How to Handle Live Closed Captioning – and the Challenges appeared first on 3Play Media.


  • Live Captioning

How to Handle Live Closed Captioning – and the Challenges

Technological innovation has paved a new way to conduct business, education, and life in general – particularly in a world forced to adapt to virtual substitutes during the pandemic.

Most of the time, the technology we use is very helpful. For example, virtual meeting platforms and live closed captioning software have helped us adapt to our new hybrid realities and virtual needs. However, technology doesn’t always go to plan – the unfortunate reality is that sometimes, technology doesn’t work how it’s intended, which means having a backup plan is imperative.

Live captioning, which utilizes automatic speech recognition (ASR) technology and/or a professional stenographer or voice writer to deliver captions in real-time, is one of the areas we often see a need for troubleshooting. As closed captioning and transcription experts, we’ve learned a thing or two about live auto-captioning and professional live captioning during virtual events and conferences. This means we’ve also learned what to do when live captions don’t work as planned!

In this blog, we’ve compiled a list of tips and backup plans to help you navigate any challenges you might face when captioning your live event through ASR technology, with a professional captioner, or a combination of the two.

 

💡 6 Tips for Online Conferences & Events

 

If possible, create a script to follow ahead of time.

If the format of your live event permits, try writing a script ahead of time and following it throughout the event. Include all of the “big stuff,” or the important information critical to your audience’s understanding. This way, any real-time deviation won’t significantly change a viewer’s overall comprehension.

Also, be sure to introduce any other speakers by name so your audience can differentiate between them. Best practice also calls for testing each speaking participant’s microphone and audio quality before the event starts.

By starting your event with these simple steps, you can give your audience a basic level of understanding. Even without live closed captioning, a script prepared in advance and a consistent standard of audio clarity and quality can serve as a substitute when the event format permits.

In addition, having a script can be great for using professional live captioning because it can provide the stenographer a reference point for any spelling or related questions.

 

Verbally describe any visual media.

If your live event includes the presentation of images or videos as visual aids, be sure to verbally describe the purpose and content of each one. The description doesn’t need to be long, but it should be enough that your audience can understand whether the media is for visual effect (purely decorative) or communicates meaning (e.g., infographics or visual data).

Avoid using language like “As you can see here…” or “We can all see…” throughout the course of your event because some people might not be able to see! Audio description primarily benefits blind and low-vision users, and making a point to include verbal descriptions of visual cues can be helpful to someone who may be listening to the closed captions or the transcript via a screen reader at a later time. 

Live closed captioning often won’t include these descriptions, so it’s always a good idea to incorporate verbal description into your presentation. But how do you decide what information is important enough to describe? The Described and Captioned Media Program (DCMP) has created a description key that explains how audio description works, creates guidelines for standards of quality, and clarifies what exactly should be described.

 

 Watch the webinar: How to Create Accessible Presentations ➡  

 

Present with live closed captioning using Google Slides.

When using the Chrome browser, Google Slides allows presenters to turn on automatic captions to display a speaker’s words in real-time. Using your device’s microphone or an external microphone, you can enable automatic closed captions that populate on screen, and you can adjust their size and position.

While this is a great resource for live captioning, it’s worth noting a couple of the feature’s limitations. Currently, Google Slides offers live captions only in U.S. English, and once the presenter activates them, viewers cannot toggle the captions on or off from their end. Because of this, Google recommends notifying your audience of the source of live captioning before getting started.

Another heads-up: live captions are not stored by Google, so if you plan to distribute the captioned presentation later, be sure to make a recording during the original live presentation. Despite this caveat, it’s still a fantastic (and free!) live captioning solution for those who don’t already have one.

Use professional live captioning and ASR.

While ASR captions can be used on their own with some success, pairing ASR captions with a real-time professional captioner is a great way to deliver more accurate captions and ensure backup in case technology falters.

Professional captioners can catch mistakes in ASR, continue transcribing if auto-captions don’t go as planned, and ensure an overall higher level of accuracy than auto-captions alone. In addition, if your professional captioner needs to stop captioning for any reason, ASR captions can take over and your viewers won’t miss a word.

Send out a recording of the event ASAP.

Whether your live event is hosted via Zoom, Google Meet, or another video conferencing software, captions can be added retroactively to a recorded presentation. Thankfully, this means that even if you couldn’t provide live closed captioning at the time of your event, you can always add captions later! 

Recording your live event has more benefits than just accessibility, including audience retention, lead generation, and the creation of derivative content. On top of distributing a recording of the event, you can also provide participants with any supplemental materials that were used during the event, such as PowerPoint slides.

 Check out our Toolkit for Live Captioning Events 🛠  

 

Include an external link to live closed captioning.

If the software of your choice doesn’t support live closed captions, another alternative is to link to an external URL. This method functions similarly to a plug-in solution: the external URL includes the code to stream live captions for your event on a separate page. You can use live auto-captions or professional live captions for this option.

In fact, 3Play offers this external linking capability, among our other live captioning solutions. If you choose to use this method as an alternative to live closed captioning, it’s best to share the external URL with your audience ahead of the event and ensure it’s visible on the same page that viewers are watching from.
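To make that external page visible alongside the player, one minimal approach is to embed it in an inline frame on the event page. The sketch below generates such a snippet; the captions URL is a hypothetical placeholder for whatever link your captioning provider issues for the event.

```python
# Sketch: generating an HTML snippet that displays an external live-captions
# page next to the video player. The captions URL is a hypothetical
# placeholder for the link your captioning provider issues.

def captions_iframe(captions_url: str, height: int = 200) -> str:
    """Return an <iframe> snippet embedding the external captions page."""
    return (
        f'<iframe src="{captions_url}" title="Live captions" '
        f'width="100%" height="{height}"></iframe>'
    )

snippet = captions_iframe("https://captions.example.com/event/12345")
print(snippet)
```

Placing the frame directly below or beside the player keeps viewers from having to juggle two browser windows during the event.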

 


 

Want to learn more? 

 

Live captioning. Hosting a live event? You should add captions to that. Learn More.



