文 (Wén) to Meet a Doctor? Anytime with Ahead(set)

CSE 440 Staff
9 min read · Mar 12, 2021


Contributors: Eric Chan, Linda Do, Mayki Hu, George Zhang

The Issue At Hand

Despite being called a melting pot, the United States in many settings still favors those who can speak English fluently in their day-to-day lives. This is especially true in hospitals, where English is the default language and, if a patient speaks otherwise, a special translator must be requested and found (which is not always possible!). So we decided to help Chinese immigrants ages 40–60 who speak limited to no English with just that. After interviewing members of that audience, we learned that the communication barrier is very real and that, while applications like Google Translate are helpful, what people really value is having someone there, whether a family member or a professional translator, to talk and speak freely with. However, because translators for hire are scarce and family members cannot always be physically present, these patients often have to settle for less than ideal care, which is not what anyone would like to see.

Getting a Headset

Introducing Getting Ahead(set), a headset paired with a mobile app that enables a seamless translation experience between listener and speaker. Users can trigger real-time translation from either the mobile app or the headset: they can speak and have their voice translated, and they can listen in a different language and have that translated in real time as well, yielding a smooth flow of information between two languages. We envision users performing all translation actions through our app, either by using the automated software translation or by calling a translator to translate for them.

Improving our Design

We conducted three usability tests in which participants completed two tasks: use the prototype to communicate with a doctor via a human translator, and use the prototype to communicate via the automated translator. Participants were presented with the physical app prototype (figure 1) and an image of the headset prototype with its interactive components labeled (figure 2). They were told to imagine themselves visiting a doctor as a middle-aged (40–60 year old) Chinese immigrant who speaks limited English. We asked the participants to accomplish both tasks using the prototypes while thinking out loud.

Figure 1: Paper prototype of app presented to participants
Figure 2: Labeled image of headset components presented to participants

During the usability testing, we found that the participant was confused about the next step in making a call after pressing the call button on the headset. This was a significant issue, as it prevented the participant from accomplishing the task. We modified our design to include a voice notification on the headset that diverts the user's attention to the app, as well as a notification banner on their phone. From the app, users can then select whom they wish to call and place the call from there. Another issue we encountered was that, when the participant was tasked with making a phone call to a translator, the button icons' functions were unclear, leaving the participant confused about how to make the call. It was not apparent that the user had the option to make a phone call once they selected the translate button. This confusion disrupted the user's workflow and hindered them from completing the task. Therefore, we decided to split the automated-translate button and the call button: there is now a button to access the user's contacts, from which the user can select someone to call, and a separate button for the automated translation feature. Taking these two main issues into consideration, we modified our prototype to bring clarity to how the headset and app are intended to be used.

The End Result

Using the App

Getting in Contact With A Translator

  • Connecting by app: Once the headphones are turned on, users can open the app and connect the headphones to the phone and app.
  • Connecting by headset: Once the headphones are turned on and used to pair with the phone, a voice will play in the headset letting the user know that they should now call someone with their phone.
Figure 3: The notification banner shown on lock screen
  • Find contact: When the notification is tapped and the phone is unlocked, the app will open and show the user’s contact list so they can choose who they want to call.
Figure 4: The contacts list of the app will show up
  • Select Contact: Once the user has selected a contact, the app takes them to another screen showing the contact's information. This allows the user to make edits to the contact as needed, and also ensures the user won't accidentally call someone.
Figure 5: The contact’s information as well as where the user can call the contact
  • Call and Select Device: Once the user hits the call button, it will call the specified contact. Once the call starts, the user will also have the choice of which device to get audio from, just in case the headset dies or malfunctions for any reason.
Figure 6: The call screen where the user can do things like mute and change output device

Using the Auto-Translate Feature

  • Connecting the Headset: If not already done through the headset, users will first have to connect the headset to the app, which can be done through the app itself.
Figure 7: Connecting the headset to the phone and app
  • Starting the Auto-Translate: Using the button at the bottom of the menu bar, users can hit the translate button to be taken to a screen where they can begin the auto-translation process by selecting which languages will be used as well as if they want to record this translation session.
Figure 8: Users can select languages being used as well as if they want to record or not
  • Translating: Once started, the app will ask users to tap a microphone icon, after which they can begin speaking what they want translated into the second language. Once they finish speaking, a translated version will auto-play and can be paused or replayed using the stop button. After playing the translation, the microphone will automatically start listening for the next part of the conversation in either language and continue this flow. Mic input can be stopped at any time by pressing the microphone.
Figure 9: The listening and translating screens the app will show
  • Ending Sessions: To end a session, users can hit the "x" button in the top-right corner, where they will be asked to confirm that they want to end the translation session.
Figure 10: The notice users will get when they want to end their translation session
  • Previous Recordings: When a session is recorded, the app will automatically take the user to the Recordings tab once the session is finished.
Figure 11: The recording area where users can listen to their previous sessions
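For the technically curious, the listening-and-translating loop described above can be sketched as a small state machine. This is only an illustrative sketch, not our actual implementation: the `TranslateSession` class, its state names, and the `translate` helper are placeholders standing in for a real speech-recognition and translation backend.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # waiting for the user to tap the microphone
    LISTENING = auto()  # capturing speech in either language
    PLAYING = auto()    # auto-playing the translated audio

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a real translation service."""
    return f"[{source}->{target}] {text}"

class TranslateSession:
    def __init__(self, lang_a: str, lang_b: str, record: bool = False):
        self.lang_a, self.lang_b = lang_a, lang_b
        self.record = record
        self.state = State.IDLE
        self.transcript = []  # (speaker_lang, original, translated)

    def tap_microphone(self):
        # Tapping the mic toggles listening on and off.
        self.state = State.IDLE if self.state is State.LISTENING else State.LISTENING

    def speech_finished(self, text: str, spoken_lang: str) -> str:
        # When a speaker pauses, translate into the other language and play it.
        target = self.lang_b if spoken_lang == self.lang_a else self.lang_a
        translated = translate(text, spoken_lang, target)
        if self.record:
            self.transcript.append((spoken_lang, text, translated))
        self.state = State.PLAYING
        return translated

    def playback_finished(self):
        # After the translation plays, automatically listen for the next turn.
        self.state = State.LISTENING
```

In this sketch, the automatic hand-back after playback (`playback_finished` returning to `LISTENING`) is what gives the back-and-forth conversational flow without the user touching the screen between turns.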

Using the Headset

Figure 12: Labeled diagram of headset features, referred to below for flow

Getting in Contact With A Translator

The user, wearing the headset, powers it on by pressing the power button (label 1). Next, they pair the headset with their phone over Bluetooth (label 2) if they haven't done so before, either directly or through the app; alternatively, they can plug the cord (label 8) into their phone's audio jack. When the headset connects to the phone, the user hears through the speaker (label 6) that the device is now paired. The user then presses the mode button (label 7) to toggle into call-a-translator mode, and the speaker (label 6) prompts, "please choose someone to call on the phone app." The user selects whom to call from the app's contact list (see app interface figures 4, 5, and 6 above) and hears through the speaker (label 6) the call dialing and connecting with their translator. From there, everything happens simultaneously: the doctor or medical staff hear the translator through the LED speaker (label 3), which lights up green; the user hears the translator through the noise-cancelling headset speaker (label 6); and the translator hears the doctor or medical staff through the surround-sound microphone (label 4). A 3-way conversation is now established, accomplishing task 1 of communicating through a human translator. The user can also use the phone app alone for this task if they prefer their phone or if the headset malfunctions (multiple ways to use).

Using the Auto-Translate Feature

To enable auto-translate mode, users press the button labeled 7 in figure 12. A voice notification announces that auto-translate mode has been enabled, so users know which mode they are in. The user speaks into the mic (label 5), and what they say is translated and output to the medical staff via the speaker (label 3). While the user is speaking, the speaker lights up yellow; once the translation is playing, it lights up green. As the medical staff respond, their voice is picked up by the microphone (label 4), runs through the auto-translator, and the translation is output through the headset speaker (label 6) so the user can hear it.
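The mode button and LED signaling above can be summed up in a couple of tiny illustrative functions. These are our own shorthand for how the headset behaves, not actual firmware; in particular, the mode order in `MODES` is an assumption about how button 7 cycles.

```python
# Hypothetical sketch: button 7 cycles between headset modes,
# and the LED speaker (label 3) signals the auto-translate state.
MODES = ["off", "call_translator", "auto_translate"]

def next_mode(current: str) -> str:
    """Pressing button 7 advances to the next headset mode (assumed cycle)."""
    return MODES[(MODES.index(current) + 1) % len(MODES)]

def led_color(event: str) -> str:
    """LED speaker color for each auto-translate event, per the flow above."""
    return {
        "user_speaking": "yellow",        # wearer speaking into mic (label 5)
        "translation_playing": "green",   # translation playing to medical staff
    }.get(event, "off")
```

The LED doubles as a shared status indicator: the medical staff can tell at a glance whether the patient is still speaking (yellow) or the translation is playing for them (green).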

From Paper to Digital Prototype

In the end, the changes we made to our prototype focused on the mobile app and its overall design. We knew from the start that our color choices had to change, as the initial prototype had only sparse amounts of blue and orange scattered throughout the app. To fix this, we went with a darker theme for the menu bar, which not only adds contrast but also gives that area more weight. To highlight the important things, we still kept the use of blue, either to indicate which area of the app the user is on or to draw the user's eye to the parts they should focus on, as on the auto-translation page.

The other major change we made was to the recordings button on the menu bar. Previously, the button consisted of a microphone icon and the label "Record". This proved confusing: instead of being seen as the place to listen to previously recorded sessions, it looked like the place where users should go to record a translation session. To minimize that confusion, we replaced the label with "Recordings" and a play-button icon, to better convey that this is where users can view and listen to their old recordings rather than where they start a new recording.

Feel free to interact with our final prototype HERE!

Reducing Language Barriers

Health is very important, and many older immigrants who speak limited to no English rely on translators for health visits. For the cases where translators are not available, we want the power of translation and communication to be accessible to Chinese immigrant adults. We have designed a solution: a headset pairable with a phone app that provides automatic translation from Chinese dialects to English, or contact with saved translators for easy calls. With a few buttons on the headset and a simple phone app design, we envision older Chinese immigrant adults communicating freely and independently with their doctors using just these tools. A language barrier should not deny any individual access to health care, and we hope to expand our languages beyond Chinese dialects and our usage beyond medical visits to day-to-day scenarios.


Written by CSE 440 Staff

University of Washington Computer Science, Intro to Human Computer Interaction
