ASLink: Making Everyday Conversations Accessible

CSE 440 Staff
7 min read · Dec 9, 2020


Contributors: Nora Morsi, Micah Witthaus, Kaytlin Melvin, and Maryam Saleh

As we move further into the digital age, more and more of our conversations are held virtually. The COVID-19 pandemic has only intensified this shift; it's now common for school lessons, work meetings, social events, and even family gatherings to be conducted over platforms like Zoom and Microsoft Teams. And although the increased prevalence of virtual meetings has been beneficial for many, it has also left behind members of the Deaf and Hard of Hearing (DHH) community.

After speaking with DHH individuals earlier in the fall, we noticed a common theme: difficulty participating in group conversations with hearing people. The challenges DHH people face tend to change with how the conversation is held. Face-to-face, for example, a DHH person might find it useful to lipread, but they will likely have a hard time catching exactly what each member of the group said and identifying when the topic of conversation shifts. Over video chat, they can take advantage of accessibility features like auto-captioning, but a poor internet connection can make lipreading nearly impossible.

After we finished our initial research, it became clear to us that we needed to leverage technology to help facilitate conversation between DHH and hearing people. Our ultimate goal was to level the playing field, so to speak, and give DHH individuals an equal opportunity to participate in in-person and virtual conversations. This goal is now the basis of our app-based solution, which we (fittingly) call ASLink.

ASLink: A Virtual Transcription Service

Exploring the communication problems that the DHH community faces brought us to the solution we have today: ASLink, an application that can transcribe both spoken English and American Sign Language (ASL). ASLink is designed to accompany in-person and virtual conversations by creating an accurate, real-time transcription of the conversation. It lets users create and join meetings using their preferred method of communication, whether that be speaking, signing in ASL, or typing. Everything that a user says, signs, or types will be included in the transcript along with their name. Users can also choose to have their words read out loud to other meeting participants, which comes in handy when one of the participants is signing or typing.
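Our design doesn't prescribe any particular implementation, but at its core the transcript is just a shared, ordered record of who said what and how. The TypeScript sketch below is purely illustrative (all of the names in it are ours, not part of ASLink's design): each entry lands in the same transcript whether it came from speech recognition, ASL recognition, or a keyboard.

```typescript
// Hypothetical sketch of a shared transcript model (not from the design itself).
type InputMethod = "speech" | "asl" | "typing";

interface TranscriptEntry {
  speaker: string;      // display name shown next to the message
  method: InputMethod;  // how the participant produced the message
  text: string;         // the recognized or typed English text
  timestamp: Date;      // when the entry was added
}

class Transcript {
  private entries: TranscriptEntry[] = [];

  // Every recognized utterance, recognized sign sequence, or typed message
  // is appended here, so all participants see the same running record.
  add(speaker: string, method: InputMethod, text: string): TranscriptEntry {
    const entry: TranscriptEntry = { speaker, method, text, timestamp: new Date() };
    this.entries.push(entry);
    return entry;
  }

  // Render the transcript the way the meeting screen displays it: "Name: text".
  toText(): string {
    return this.entries.map((e) => `${e.speaker}: ${e.text}`).join("\n");
  }
}
```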

Since the app has both a mobile and a desktop version, anybody with access to a device can use ASLink to create or join a meeting. Participants simply join the meeting on their own devices, adjust their settings to reflect how they want to communicate, and let ASLink do the rest.

What We Learned From Others

We started with a paper prototype of ASLink: a physical depiction of all of the screens and actions, which let us run user tests without diving too deep into a digital representation. Because of the COVID restrictions across the country, we uploaded the paper prototype to Marvel so it could be distributed digitally; Marvel let us upload images of our paper prototype screens and connect them with interactive buttons. For our user testing, we wanted to reach both people with experience in interface design and people without it. Each team member's roommate received the Marvel paper prototype along with a description of the scenario we wanted them to test. Some of these participants study computer science or human interface design specifically, while others have no background in those fields at all, which gave us a diverse set of responses. We would ideally have liked a test participant from the DHH community, but that wasn't possible given the timeline of our project and the current lockdown restrictions.

Our initial paper prototype on Marvel can be found here.

As a result of our testing, we were able to pinpoint issues with ASLink as a digital product rather than with its core chat and communication functionality. Simple things like reverse navigation (back buttons) and the ability to return to the settings page were key features we had overlooked as a team. While we thoroughly developed the design around our core communication functionality, our preliminary design didn't pay as much attention to the product's overall navigational flow. Most of our improvements between the paper and digital mockups were navigational fixes and design scheme changes (themes, fonts, logo, etc.). Overall, the feedback we gathered throughout the design process drove changes that were vital to the product's digital presence.

Finalizing the Digital Mockup

After creating our preliminary digital mockup using the feedback gathered from our usability testing, we received a design critique from the course staff to further improve the application before the prototype was finalized. The critique offered a variety of suggestions, ranging from minor navigational flow changes that were simple to fix (e.g., differentiating back arrows from home buttons, and the home screen from the meeting setup screens) to major feature additions that required more planning to implement (e.g., allowing users to switch their method of communication during a meeting, or to join a meeting that has already begun).

When our team got together to discuss the best way to incorporate the major changes into our final digital mockup, we decided a soft redesign of our prototype was the way to go. Rather than having the user select their preferred method of communication as a step in the create/join-meeting process, we moved that choice to its own settings page, which is saved for logged-in users. This improved two things: first, creating and joining meetings is faster, since logged-in users have one less step in the process; second, users can now change their preferred method of communication during a meeting, since the settings page is reachable mid-meeting via the gear icon.

We also added the ability for users to join meetings already in progress by placing them in a waiting room and prompting the meeting host to either admit them or reject their request. The last, and arguably most important, design change was giving messages the option to be dictated through the speakers of the device the app is running on. This feature preserves the fluidity of group conversations: hearing participants no longer need to be looking at the application to read what DHH group members are saying; instead, they can stay engaged in the conversation and hear those messages out loud. With these iterative improvements to our preliminary mockup, we now have a digital mockup that facilitates communication between DHH and hearing individuals, both in person and virtually, as best we can!
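Our mockup leaves the dictation mechanism unspecified. As one hypothetical sketch, a browser-based client could read a message aloud with the standard Web Speech API; the function and names below are ours, for illustration only.

```typescript
// Hypothetical sketch: reading a transcript entry aloud in a browser client
// using the standard Web Speech API (window.speechSynthesis).
function dictateMessage(speaker: string, text: string): void {
  // Announce the speaker's name so listeners know who the message is from,
  // mirroring how names appear next to entries in the transcript.
  const utterance = new SpeechSynthesisUtterance(`${speaker} says: ${text}`);
  utterance.lang = "en-US";
  utterance.rate = 1.0;
  window.speechSynthesis.speak(utterance);
}

// Example: a typed or signed message from a DHH participant is spoken
// through the device's speakers for hearing participants.
dictateMessage("Nora", "Can we move the meeting to Thursday?");
```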

Our final digital mockup can be found here.

ASLink helps facilitate group conversations, in person and virtually, by providing a meeting-room-based transcription service for anyone who communicates audibly, by text, or by ASL. To get started, the user either signs up or continues as a guest from our homepage.

After doing this, the user will be prompted to enter their name and indicate their preferred way of communicating, whether that be speaking, signing, or typing. Speaking and signing users will additionally have the option to type during meetings. The user can also check the "Dictate my messages" box to have whatever they type, sign, or speak read out loud to the other participants. All of these settings can be changed later on, even during meetings!
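For illustration only, these preferences map naturally onto a small per-user settings object that is saved for logged-in users and can be updated at any point, including mid-meeting. The TypeScript below is a hypothetical sketch; none of these names come from the mockup itself.

```typescript
// Hypothetical sketch of the per-user settings described above.
type CommunicationMethod = "speaking" | "signing" | "typing";

interface UserSettings {
  displayName: string;
  method: CommunicationMethod; // preferred way of communicating
  allowTyping: boolean;        // speaking/signing users may also type
  dictateMessages: boolean;    // read this user's messages aloud to others
}

// Settings persist for logged-in users and can be changed at any time,
// including during a meeting via the gear icon.
function updateSettings(current: UserSettings, changes: Partial<UserSettings>): UserSettings {
  return { ...current, ...changes };
}

// Example: switching from signing to typing partway through a meeting.
let settings: UserSettings = {
  displayName: "Maryam",
  method: "signing",
  allowTyping: true,
  dictateMessages: true,
};
settings = updateSettings(settings, { method: "typing" });
```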

Next, the user will be prompted to either create a meeting or join one using a meeting code.

Lastly, the user will be taken to the meeting room, where they can see all of the participants in the group conversation as well as a live transcript. Transcripts are saved after the meeting ends and can be downloaded to the user's device for convenient reference.
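The mockup doesn't specify how the download happens; in a browser-based client, one plausible (and purely hypothetical) approach is to package the saved transcript text as a plain-text file:

```typescript
// Hypothetical sketch: letting a user download the saved transcript
// as a plain-text file from a browser-based client.
function downloadTranscript(transcriptText: string, meetingName: string): void {
  const blob = new Blob([transcriptText], { type: "text/plain" });
  const url = URL.createObjectURL(blob);

  // Create a temporary link and click it to trigger the download.
  const link = document.createElement("a");
  link.href = url;
  link.download = `${meetingName}-transcript.txt`;
  link.click();

  URL.revokeObjectURL(url); // release the object URL once the download starts
}
```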

In the End…

During our user research we identified a need for smoother communication between DHH and hearing individuals in group settings, both in person and virtually. ASLink addresses that need by providing a high-quality transcription service right on each user's device. It also gives users the option to have their messages dictated aloud, allowing for fluid conversation across all communication styles. In the future, ASLink could be expanded to transcribe other spoken and signed languages in order to accommodate DHH individuals outside the United States. Our hope is that ASLink will remove the communication barrier between Deaf and hearing individuals, allowing DHH people to fully engage in group conversation no matter the setting.

Written by CSE 440 Staff

University of Washington Computer Science, Intro to Human Computer Interaction
