How Equitable Access Technology Has Been Adopted in Our Daily Lives

[Image: A woman holds a TV remote with her hand on one of the buttons. There is a TV in the background.]

In 1969, physics graduate Harry Lang watched history. Broadcast across television screens, Neil Armstrong took his first step on the moon — but Harry, who was Deaf, couldn’t hear what was happening. Decades later, Harry’s battle to ensure Deaf and hard-of-hearing people were part of major cultural moments succeeded: closed captioning is now available on TV, streaming and social media.

Closed captioning is part of a legacy of technology that we’ve adopted for everyday use but that has its roots in equitable access. This technology emerged from battles to expand avenues for communication, safety and inclusivity, with innovation finding new ways to triumph over a lack of accessibility. Though there is still work to be done, ongoing conversations about accessibility allow us to continuously expand equitable access.

Closed Captioning

When Armstrong took mankind’s first step on the moon, his famous words were not accessible to all. “Our country’s scientists could send a spaceship to the moon and back, but we couldn’t put captions on television for millions of [D]eaf people who were watching it,” Lang told TIME.

Without captions, the Deaf community was alienated from cultural moments. Though a 1958 law brought captioning to Hollywood films, the same provisions didn’t apply to television. Shows began to experiment with captions in the 1970s — notably, Julia Child’s The French Chef — and major networks began using captions in the 1980s. In 1980, the National Captioning Institute (NCI) debuted decoders that people could place on top of their TV sets to decode and display captions.

Closed captions are appearing more and more, but there is still room for development, especially on social media. Captions also advance accessibility beyond the Deaf community, supporting emerging readers and nonnative speakers.

Text-to-Speech

Text-to-speech technology reads digital text aloud from computer screens or other devices. It’s used today to help students struggling with reading and writing, and it originally emerged to help the Blind and low vision communities access written text.

Earlier efforts to emulate human speech with a computer were scattered throughout history, but successful text-to-speech technology emerged in the 1970s. The first machine to read printed words aloud, the Kurzweil Reading Machine, weighed hundreds of pounds and cost $30,000 (about $147,000 today).

Today, text-to-speech is used in various applications, including reading support, second language acquisition and the gaming industry, where it creates characters that sound distinctly human. It has also made gaming more accessible to people who can’t read on-screen text, using audio to narrate what was formerly only graphics or text. Its applications continue to expand — Lina Dong, the first certified Blind broadcaster in China, teaches at a nonprofit institute where she has recorded audiobooks for her students. The institute recently partnered with Microsoft to create an AI version of Lina’s voice, using it to narrate audiobooks more quickly and build a larger audio library accessible to those who are Blind or low vision.

Audiobooks

The difference between audiobooks and text-to-speech? Audiobooks are narrated by a person. Like text-to-speech, they emerged from the quest to expand equitable access. Their first iteration arrived in the 1930s, as the American Foundation for the Blind, the Carnegie Corporation and the Braille Institute of America worked to create recorded books, or talking books, in response to a law aiming to make books accessible to blind adults (it was amended to include children in 1952). According to the Library of Congress’ National Library Service for the Blind and Print Disabled, some of the first talking book titles included the Constitution of the United States and Hamlet.

But though other disabilities can prevent people from reading printed media, talking books were initially available only to people who met a strict definition of blindness. Some with varying degrees of vision loss did not meet the criteria and could not request talking books. That changed in 1966, when Congress passed a law that expanded eligibility to anyone who could not read standard print, including physically handicapped readers.

Teletypewriters (TTY)

Decades before texting became more commonplace than phone calls, Deaf orthodontist James Marsters was frustrated by the communication barrier those calls posed. While reading lips helped him navigate his practice, there was no option that made telephone conversations accessible to Deaf people. He partnered with Deaf physicist Robert Weitbrecht, who had been experimenting with a teletypewriter (TTY) that he used to send messages via radio. Together with Andrew Saks, they connected two machines so that messages could be sent between them and printed at the receiving end. When Marsters toured to showcase the technology’s merits, he highlighted its importance in emergency situations.

As in texting’s early days, TTY users relied on abbreviations for common phrases, including “GA” (“go ahead”) to indicate that the other person could respond, “Q” for a question mark and “ILY” for “I love you.”

Video Relay Service (VRS)

Zoom and Google Meet are a ubiquitous part of our lives now, but videophone technology had been discussed as early as the 1910s, and in 1964 AT&T debuted the Picturephone at the World’s Fair. Unlike the telephone, this technology opened up new avenues of communication for the Deaf and hard-of-hearing communities. In the 1970s and 1980s, Video Relay Service (VRS) linked two parties through a sign language interpreter, facilitating equitable access by allowing sign language users to hold real-time telephone conversations. Distance was no longer a barrier to remote meetings.

However, VRS’s steep early cost put it out of reach for many. The Americans with Disabilities Act (ADA) later required that relay services be provided at no cost to users, a huge win for the Deaf and hard-of-hearing communities.

GPS Location Services

While not developed specifically for the Blind community, some technology now integral to our lives was adopted early on by that community to facilitate independent navigation, including GPS.

Google Maps, Find My iPhone and location sharing services have become mainstays. While GPS wasn’t invented specifically to empower those who are Blind or low vision, some of its first applications to assist them emerged in 1994. Consumer GPS devices became available in the early 2000s, and the technology has since improved and adapted to offer a comprehensive view of a route for someone who is Blind or low vision. Today, various types of GPS software allow users to plot their journey in advance, track their position in real time, and learn about local landmarks and nearby businesses.

___

Applications for these technologies have expanded to many people’s daily lives, and ensuring that assistive technology is available to those who need it remains critical, from professional to educational spheres. To discuss equitable access consulting or an equitable access audit of your business, reach out here.
