ASR/Technology Trends Archives – 3Play Media
https://www.3playmedia.com/blog/tag/asr-technology-trends/

Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals
Published February 7, 2023
https://www.3playmedia.com/blog/why-advocates-are-calling-out-closed-captions-at-movie-theaters-and-festivals/


  • Captioning

Why Advocates Are Calling Out Closed Captions at Movie Theaters and Festivals


Download the [FREE] Checklist: Caption Reformatting


Open captioning is back at the forefront of accessibility advocates’ minds after Sundance Film Festival’s 2023 dramatic jurors Marlee Matlin, Jeremy O. Harris, and Eliza Hittman walked out of a film screening when Matlin’s closed captioning device malfunctioned and no other captioning alternatives were available to her and other d/Deaf and hard of hearing audience members.

Before this incident at Sundance, the issue of closed captioning at movie theaters and festivals had long been debated by filmmakers and viewers alike. Many in the d/Deaf and hard of hearing communities have called for film screenings to include permanent, burned-in open captions. The current closed captioning solution for film screenings relies on captioning devices, which are often plagued with technological and user experience issues.

But what exactly are open captions, and why are accessibility advocates passionate about adding them to films screened at movie theaters and festivals? 

In this blog, we’ll discuss the current state of closed captions at movie theaters and festivals; explain why accessibility advocates are calling on the media and entertainment industry to move toward open captioning for films; and examine the artistic, cost, and audience loss concerns many filmmakers have about adding open captions to movies.

The State of Closed Captions at Movie Theaters and Film Festivals


ADA Requirements for Movie Theaters

Movie theaters are required to provide and maintain closed captioning and audio description equipment for digital films that are produced with accessibility features, according to a Final Rule revising the Americans with Disabilities Act (ADA) Title III. 

Additionally, theaters are required to provide notice to the public about the availability of accessibility features and ensure that staff is available to assist patrons with equipment.

How Movie Theater Closed Captioning Devices Work

The National Association of the Deaf (NAD) states that the two types of captioning equipment available in theaters are Sony Entertainment Access Glasses and CaptiView:

Sony Entertainment Access Glasses

Captions are transmitted wirelessly to a receiver built into a pair of glasses, which viewers wear while watching a film. The captions appear overlaid on the screen through the lenses.

CaptiView 

A small display with a flexible arm is attached to the arm of the seat or cupholder. Captions are transmitted to the device and appear on the display screen.

The Closed Captioning User Experience at Theaters and Festivals

Many accessibility advocates and people who use closed captioning find the user experience of captioning devices in their current state difficult. In the last year alone, multiple disabled people who use captioning devices have lamented the poor user experience of the current technology, including filmmaker Alison O’Daniel and advocate Shari Eberts.

To get further insight into the captioning issues at movie theaters and film festivals, we chatted with Matt Lauterbach, a filmmaker and accessibility advocate who founded All Senses Go and serves as ReelAbilities Film Festival Co-Director in Chicago. 

Lauterbach said that “a lot of what’s happening is intentions that aren’t yet matched by an understanding of what’s involved” when it comes to accessibility at film festivals and movie theaters. He noted that filmmakers generally want to reach a universal audience and be accessible to all but are facing technological and procedural constraints to get to the point where films are truly accessible. In the meantime, closed captions remain a way for filmmakers and theaters to provide a compliant solution without taking a “visible stand” on the issue.

Open captions are a visual stand [for inclusion].
– Matt Lauterbach

Lauterbach works with many filmmakers and caption users who support the use of open captions over closed captioning devices in movie theaters and film festivals. He explained that captioning technology can be cognitively draining, straining on the eyes, and even cause users to miss content in screenings due to the need to look back and forth from a device to the screen. “It’s a tough user experience,” he said.

Besides the user experience, Lauterbach also noted some basic technological functions in captioning devices that are prone to disrupt users. 

“The device needs to be set to the proper theater,” he said. “You might get a caption device set to theater 7, and it’s set to theater 6. You then need to bring it back to get it fixed [during the movie]. That’s tough.” 

On top of incorrect theater settings, dead batteries and uncharged devices are a common issue, not to mention theater and festival staff who aren’t trained on how to use or troubleshoot captioning devices.

Why Accessibility Advocates Want Open Captions


When it comes to captioning at movie theaters and film festivals, many accessibility advocates and disabled users have aligned on adding open captioning to all screenings. Open captions, similar to burned-in SDH subtitles, provide a permanently accessible way to view dialogue and sound effects on screen. Advocates prefer open captions over closed captions for film screenings due to their more inclusive user experience.

What are Open Captions and How Do They Work?
Open captions are permanently burned into a video so that the viewer cannot turn them off. Because open captions are part of a video, they are supported by all video players and devices. Open captions eliminate rendering inconsistencies across different video players and devices.
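For context, burning in open captions happens at encode time. As an illustrative sketch only (the filenames are hypothetical placeholders, and this assumes the free ffmpeg tool rather than any particular vendor’s workflow), here is the kind of command commonly used to render an SRT caption file permanently into the picture:

```python
# Sketch: build an ffmpeg command that burns ("open captions") an SRT
# file into the video frames. Assumes ffmpeg is installed; the
# filenames below are hypothetical placeholders.

def burn_in_command(video_in: str, srt_file: str, video_out: str) -> list[str]:
    """Return an ffmpeg argv list that renders captions into the picture."""
    return [
        "ffmpeg",
        "-i", video_in,                      # source video
        "-vf", f"subtitles={srt_file}",      # draw each cue onto the frames
        "-c:a", "copy",                      # leave the audio untouched
        video_out,
    ]

cmd = burn_in_command("film.mp4", "captions.srt", "film_open_captions.mp4")
print(" ".join(cmd))
```

The `subtitles` filter draws each cue onto the video frames themselves, which is exactly why viewers cannot turn open captions off.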

According to Variety, many international film festivals, including Cannes and Venice, already include open captions or subtitles in multiple languages on the screen, and Sundance’s 2023 dramatic jury “repeatedly expressed concerns to both Sundance and filmmakers that movies playing at this year’s festival should come with open captions.”

Open captioning for movies has become more mainstream in the last few years, with some theaters and filmmakers adopting the practice to make films more inclusive for d/Deaf and hard of hearing viewers.

Do you need to update your existing caption files? 👀

Filmmakers’ Concerns About Open Captions


The enormous progress being made with accessible film experiences at movie theaters and festivals has not come without pushback. Some filmmakers and viewers find open captions to be too costly or distracting. Even Lauterbach admits that there are “legitimate artistic concerns” when it comes to open captions on films. 

[It] depends on what you as a venue want to value. Film festivals are less profit-motivated and often have inclusive missions. To really practice what they are preaching, I think open captions are one of the strongest symbols you can send.
– Matt Lauterbach

Some creators, particularly disabled filmmakers, strongly believe in the benefits of open captioning and make it part of their art rather than an obligatory element. For example, filmmaker Alison O’Daniel’s 2023 Sundance debut, The Tuba Thieves, includes open captions specifically crafted to be part of the art itself. Additionally, the use of certain types of SDH subtitles can support numerous customizations so that filmmakers and producers can curate the look and feel of the subtitles to align with a film’s other artistic elements.

For filmmakers with open captioning concerns, the issue is less about intentional exclusion and rather one about production costs and viewer experience. But are these concerns legitimate?

Cost

The issue of cost for the creation of an open-captioned print of a film is often cited by filmmakers as a barrier to offering open captions. Regarding the most recent incident at Sundance, several filmmakers reportedly brought up concerns about costs and time associated with the creation of an open-captioned film print, in addition to fears that burned-in captions could negatively impact a film’s asking prices for distribution.

In response, Lauterbach said that the Digital Cinema Package (DCP), a collection of files that includes caption formats used at film festivals and theaters, can actually be formatted as both closed and open captions without a need for additional quality control or much of a difference in overall cost. When a captioner creates a DCP caption file, it’s a matter of toggling settings on and off via the DCP.

If a filmmaker is not utilizing DCP specs, it can be a different matter in terms of time and cost. For example, a festival or theater may require different exports, which can add complexity to the open captioning or SDH subtitling process. Still, if a film is closed captioned, it can easily be reformatted to an open-captioned or SDH-subtitled version, regardless of export.
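Part of why reformatting is cheap is that a closed caption file is already structured data: the timing and text exist and only the packaging changes. As a simplified sketch (the cue below and the helper function are hypothetical, and this glosses over the full SRT specification), here is how one SRT cue can be parsed so the same timed text could be re-exported in another format:

```python
import re

# Sketch: parse one SRT cue into (start, end, text) so the same timed
# text can be re-exported in another caption or subtitle format.
# The cue content here is a made-up example.

CUE_RE = re.compile(
    r"(\d+)\s+(\d{2}:\d{2}:\d{2}),(\d{3}) --> (\d{2}:\d{2}:\d{2}),(\d{3})\s+(.*)",
    re.S,
)

def parse_cue(cue: str):
    m = CUE_RE.match(cue.strip())  # assumes a well-formed cue
    index, start, start_ms, end, end_ms, text = m.groups()
    # WebVTT-style timestamps use '.' where SRT uses ','
    return f"{start}.{start_ms}", f"{end}.{end_ms}", text.strip()

cue = """1
00:00:01,000 --> 00:00:03,500
[door slams]"""

start, end, text = parse_cue(cue)
print(start, end, text)
```

A real conversion would also handle positioning, styling, and multi-cue files, but the point stands: the captioning work itself does not need to be redone.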

Many accessibility advocates say that the cost of not including a major group of people is greater than the cost of adding open captions or subtitles to film screenings because of the enormous segment of consumers being excluded. The U.S. d/Deaf, hard of hearing, and hearing loss communities consist of over 30 million people. Plus, millions of non-native English speakers, neurodivergent audiences, and viewers who prefer media with captions turned on make up additional viewing groups who have helped fuel the unprecedented usage of captions in recent years.

Audience Loss

Another commonly cited issue around open captioning surrounds the loss of audience over having permanent captions or subtitles on the screen. Lauterbach did not want to dismiss these concerns but noted that having open captions does not guarantee audiences will have a bad viewing experience.

You may find that you gained audiences. Captions [are often] compared to curb cuts – many people benefit from them, even if your hearing is crystal clear.
– Matt Lauterbach

A recent Preply study in the U.S. found that only 22% of viewers find subtitles more distracting than helpful, from which it can be inferred that over three-quarters of potential viewers don’t find subtitles distracting. The study also found that:

  • 74% of viewers say subtitles help them comprehend the plot.
  • 68% say subtitles help hold their attention on the screen.
  • 55% say they often have to rewind after missing dialogue when they don’t use subtitles.

Lauterbach added that while he is not disabled, he is a dedicated caption user because captions help reinforce characters’ names, clarify dialogue, and surface other elements viewers can miss during a screening.

Making Movies More Accessible


As the news cycle moves beyond the renewed calls for open captioning at movie theaters and film festivals, the question remains: How can venues and creators ensure films are inclusive and accessible to all? 

At 3Play Media, accessibility is always on our minds. We want to help filmmakers learn about the benefits and limitations of closed and open captioning so they can make an informed decision about what kind of service is best for them.

3Play has a robust offering of closed captioning, open captioning, and SDH subtitling services designed to give cinematic content creators peace of mind when it comes to films screened at movie theaters, festivals, streaming platforms, or broadcast television. Whether you are submitting a film and require Simple DCP specifications or you want a curated, customized experience for your film’s SDH subtitles, 3Play will help you build accessibility into the process for a future-proof solution that is inclusive to all audiences.

Do your captions and subtitles need a refresh? Our Caption Reformatting Checklist can help! Free download.


What is an EEG Caption Encoder?
Published January 3, 2023
https://www.3playmedia.com/blog/what-is-an-eeg-caption-encoder/


  • Captioning

What is an EEG Caption Encoder?


The Complete Guide to Caption Encoders [Free eBook]


Throughout the past few decades, caption encoders have allowed televisions to receive closed captioning transmissions, and they remain widely used in many broadcast and streaming workflows today. There are several types of encoder technology available to simplify caption delivery for your broadcast and streaming content; in this blog, we will highlight EEG caption encoders like iCap and give an overview of what encoding workflows look like.

What is a caption encoder?

Encoders let a broadcaster simultaneously receive and encode captions, allowing them to be displayed alongside a television program or video in real time.

Modern encoder technology took a big step in 1993, when the Federal Communications Commission (FCC) mandated that TVs include a decoder to receive caption signals, thus allowing a viewer to turn captions on or off on their television. 

Closed vs. Open Captions
“Closed captions” means a viewer is able to toggle on/off the captions, whereas “open captions” are always on.

What is an EEG encoder?

An EEG encoder refers to a captioning encoder manufactured by EEG, such as iCap and iCap Falcon.

iCap encoders

These EEG caption encoders run iCap software for improved functionality, such as sending program audio to the captioner, but can also be set up with direct IP connections if desired. 

iCap-enabled encoders are manufactured by EEG, and with EEG’s guidance, you can set up the encoder to feed both audio and video to the captioner, making it easier to monitor and caption effectively. 

The video and audio are converted to a data stream on the iCap cloud, which is accessible via an Access Code. Captions are routed through the cloud and into the encoder, where they are married to the stream and made ready for broadcast. 

iCap encoders can be bought or rented for any type of event or broadcast. They are compatible with a number of broadcast networks, cable channels, OTT platforms, educational institutions, and more.

iCap Access Codes
iCap Access Codes typically look something like this:

Access Code: TV2021

iCap Falcon

iCap Falcon is a virtual encoder offered by EEG. Virtual encoders are hosted in the cloud and require clients to connect their stream digitally. iCap Falcon functions similarly to a normal EEG encoder, but is hosted within the iCap cloud.

In general, virtual encoders like iCap Falcon are useful for events that are streamed online or singular events that don’t necessitate the purchase of permanent equipment. These encoders add closed captioning data and reroute the video stream to the desired platform such as YouTube, Facebook, or Vimeo. 

iCap Falcon Compatibility
iCap Falcon is compatible with a variety of streaming video platforms including Facebook, YouTube, Twitch, and more.

What does a closed captioning encoding workflow look like?


Most closed captioning encoder workflows function like so:

  • A caption provider transmits a caption feed to the encoder(s).
  • The encoder collects the caption feed for transmission to the viewer.
  • The encoder pairs the captions to the video on a specific data transmission line called line 21, which televisions are mandated to decode captions from.
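As a toy model of the three steps above (illustrative only; real encoders embed CEA-608/708 caption data into the broadcast signal itself, and the class below is hypothetical):

```python
# Toy model of a caption-encoding workflow: collect a caption feed,
# then pair each caption with the nearest video frame timestamp.
# Illustrative only; real encoders work on the video signal itself.

class CaptionEncoder:
    def __init__(self):
        self.caption_feed = []

    def receive_captions(self, timestamp: float, text: str):
        """Steps 1-2: collect the caption feed from the caption provider."""
        self.caption_feed.append((timestamp, text))

    def encode(self, frames: list[float]) -> list[tuple[float, str]]:
        """Step 3: pair each caption with the nearest video frame."""
        paired = []
        for ts, text in self.caption_feed:
            nearest = min(frames, key=lambda f: abs(f - ts))
            paired.append((nearest, text))
        return paired

enc = CaptionEncoder()
enc.receive_captions(0.9, "[music playing]")
enc.receive_captions(2.1, "Welcome back.")
print(enc.encode([0.0, 1.0, 2.0, 3.0]))  # captions land on frames 1.0 and 2.0
```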

The Complete Guide to Caption Encoders


This ebook serves as your comprehensive guide to caption encoders – what they are, when and why you need them, and which encoder to use – to help you create accessible and engaging video content.

Download the eBook for Free

How to know if you need a caption encoder

Not sure if you need a caption encoder? Here’s a rundown of situations that require one:

  • Your program is going straight to broadcast or cable.
  • You’re streaming your live program on Facebook or YouTube.
  • Your video platform requires live captions to be embedded in the stream as 608/708 data.
  • You want viewers who do not have a video player to be able to turn on captions.
  • You want an offline captioning option.
  • You’re captioning video for kiosks and mobile devices.
  • You’re captioning video on social media platforms like Twitter or Instagram.
  • You’re creating a self-contained captioned video that can be distributed as a single asset.

Caption encoding with 3Play Media

When you need caption encoding, 3Play Media has you covered. Simply upload your video file for captioning and transcription processing. If you already have a transcript, you can use the automated transcript alignment service. Once your file has been captioned, you can order the caption encoding service and choose the appropriate encoding profile. Upon completion, you will receive an email notification and be able to download an M4V video with encoded captions.

The video will work with any player or device that supports M4V videos, including QuickTime, iPad, iPhone, iPod, iTunes, JW Player, and Flowplayer. Because the captions are soft-encoded in the video, users will be able to turn them on or off using the video player controls.

The source video that you upload can be in almost any web format that doesn’t use a proprietary codec. When ordering caption encoding, you will have the option to select an encoding profile to optimize video playback for a certain device.

For example, the iPhone5 profile transcodes your video for a target width of 1136 pixels, a frame rate of 30 frames per second, and a bit rate of 3 Mb/sec. You can also use your original source video as long as the video encoding is H.264 and the audio is AAC. The closed captions track will be added to the video and put in an M4V container.

Download a demo video with encoded closed captions – you’ll need to play it in a QuickTime or VLC player and make sure to enable the captions (subtitles). Please note that some versions of Windows Media Player do not support caption-encoded videos.

Note: For social media videos, you’ll need to upload your video in a format supported by the social platform (for example, Twitter takes MP4 videos). Then, order caption encoding > source with open captions.

 

The Complete Guide to Caption Encoders. Get Your Free Guide.


Dog Training and Machine Learning: What They Have In Common
Published May 12, 2021
https://www.3playmedia.com/blog/dog-training-and-machine-learning-what-they-have-in-common/


  • Industry Trends

Dog Training and Machine Learning: What They Have In Common

Although sometimes it seems we’re eerily close, machines haven’t replaced us yet. Yes, machines can make faster and more complex decisions, but it’s pretty easy to break one. Also, machines still can’t process logic that they haven’t been taught.

Try out some unexpected questions on your favorite voice assistant. A 2018 study found that Amazon’s Alexa was answering just over 50 percent of questions it was asked and 80 percent of those were correct. Amazon started crowdsourcing answers for Alexa from users in 2018, and in 2019, answer quantity went up, while measured quality took a more subjective turn. One user responded to the question “How do dolphins breed?” with “Dolphins are mammals and breathe with the lungs,” presumably assuming “breed” was meant to be “breathe.” 

Andrew Ng recently affirmed that machine learning models may shine on curated test sets yet struggle on applications beyond a controlled environment.  “So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production… All of AI, not just healthcare, has a proof-of-concept-to-production gap,” Ng says. 

Training Artificial Intelligence

Artificial intelligence (AI) can’t teach itself yet – and a recent article from the Harvard Business Review asserts that the secret to AI is people, seemingly underscoring Ng’s point with a similar theme. From personal experience, including a recent tour in adtech and now the video accessibility tech world, this rings true. Both solution spaces are highly dependent on AI – specifically machine learning  – to offer value at scale. Machine learning (ML) applications in adtech include optimization of media and consumer pairings, identity, fraud detection and audience propensity, to name a few. All require training or a “truth set”.

Machine learning applications in video accessibility are equally diverse, with the most obvious use being automated speech recognition (ASR). 3Play incorporates machine learning in a myriad of processes, including determining expected transcription job difficulty, flagging likely errors in a transcript, and automated training of customer-specific language models with continuously updated truth sets. 3Play has been training this process for 13 years, which starts to explain our position as the premium service provider in the captioning and video accessibility space. 

Allow me to expand on the “secret to AI is people,” especially regarding transcription. Training in any capacity (whether it be training machine learning, training a new puppy, or training for a marathon) isn’t always easy. Just this morning, in fact, my dog, Fluffy, chewed up a new carpet in my dining room. No kidding. I took the opportunity to teach Fluffy that chewing on the carpet is not appreciated, in hopes he might be discouraged from doing it again. This, in theory, is not so different from training AI. If AI mangles an accented speaker’s dialogue, or struggles through obscure or specific terminology, the training set must be updated for the model to learn and handle similar challenges in the future. The fuller and higher quality the training set, the more effectively the model learns.

3Play Media & Artificial Intelligence

As you may or may not know, 3Play transcription has refined the same fundamental process for caption production since 2008. Automated Speech Recognition (ASR), and especially 3Play’s application of ASR, has since significantly improved, in part because we’ve directly trained it and in part because we’ve augmented general ASR training with customer specific mappings of common corrections via bespoke and proprietary post-ASR process models.
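To give a feel for what a post-ASR correction mapping does (a simplified illustration of the concept, not 3Play’s proprietary models; the example corrections below are hypothetical):

```python
import re

# Sketch of a post-ASR correction pass: a table of frequent ASR
# mistakes (learned from human edits) applied to raw machine output.
# These example corrections are hypothetical.

corrections = {
    "three play media": "3Play Media",
    "close captions": "closed captions",
}

def apply_corrections(asr_text: str, mapping: dict[str, str]) -> str:
    out = asr_text
    for wrong, right in mapping.items():
        # Case-insensitive replacement of each known error.
        out = re.sub(re.escape(wrong), right, out, flags=re.I)
    return out

raw = "Three play media provides close captions."
print(apply_corrections(raw, corrections))
# → "3Play Media provides closed captions."
```

Each human correction that feeds a table like this makes the next automated pass a little better, which is the point of the hybrid approach.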

Running automation on files to produce text is easy – that’s why AI generated captions are cheap, sometimes free, and worth every cent.

Training machine learning models correctly is hard and expensive – just like training a puppy. That’s why you should care that 3Play patented our editing training and contractor processes. Not just because we’ve been doing it longer (we have) and are objectively, consistently producing the highest-accuracy output (we are), but because both the volume and the quality of human-edited training corrections matter. That dolphins don’t breed with their lungs is exactly the kind of detail you’ll want right in captions that reflect directly on your brand, school, or program. 3Play Media detects words, not pneumonia – but the content we caption may teach people critical skills, perhaps even how to detect pneumonia, and you wouldn’t want to learn that lesson from bad captions. Accuracy matters. 


3Play invented the original hybrid machine-human transcription process in 2008, utilizing automatic speech recognition (ASR) and AI, with editing, and quality assurance (QA) review. We’ve filed multiple patents yearly describing this process and enhancements to it from 2011 to 2021, and we remain busy. The rise of the marketplace model enabled 3Play to articulate and file a patent application for our contractor job market in 2011. Our contractors, along with 3Play technology, have been training our technology driven process to improve each year for 13 years. That’s 5 years earlier than most newer market entrants began developing a product. 

Improving on 99.6% average transcript accuracy is challenging. 3Play devotes an entire third of our process to raise transcript accuracy from 98% to 99.6%, and we’re currently running multiple machine learning models and tooling experiments to push it higher. The last .4% can be subjective, trivial, or could just be a formatting preference as language and communication continue to evolve. A machine alone won’t get us all the way there, just as dogs won’t train themselves anytime soon, but we should expect the right tech-enabled processes to continue making real gains.
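For readers wondering what a figure like 99.6% measures: transcript accuracy is commonly computed as 1 minus the word error rate (WER), the word-level edit distance between a reference transcript and the output. A minimal sketch, with made-up example sentences:

```python
# Sketch: word accuracy = 1 - WER, where WER is word-level edit
# distance (substitutions + insertions + deletions) divided by the
# number of words in the reference transcript.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

ref = "dolphins are mammals and breathe with lungs"
hyp = "dolphins are mammals and breed with lungs"
wer = word_error_rate(ref, hyp)
print(f"accuracy: {1 - wer:.1%}")  # one substitution in seven words
```

Note how a single wrong word in a short passage already costs over a full point of accuracy, which is why the last fraction of a percent is so hard-won.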

 

This blog post was written by John Slocum, Vice President of Product at 3Play Media.

How to Select the Right Video Accessibility Vendor: 10 Questions You Need to Ask – Download the Checklist

