Seeing for the Blind
Our Team
Jacob Lee: research, storyboards
Cole Purcell: research, design, document management
Ryan Flores: research, storyboards, writing
Andy Roland: research, brainstorming
Problem and Solution Overview
We are trying to help people with severe vision loss live more independent lives. Specifically, we are focusing on helping visually impaired people identify objects around them and navigate unfamiliar user interfaces. We plan to create a scanner with an integrated camera and speaker system that allows the user to point at objects or screens and identify points of interest: where an object is, what color it is, what text it bears, where the search bar is on a screen, and so on. The scanner will come with a non-visual interface that uses the speaker system and haptic feedback to inform the user about their environment. There are plenty of methods that people with severe vision impairments can use to find misplaced objects or navigate new user interfaces, but these methods aren’t always available or safe, and some people may feel uncomfortable using them or feel they violate their privacy. Because of these issues with current solutions, we are trying to give people with severe vision loss a way to accomplish these tasks independently.
Design Research Goals, Stakeholders, and Participants
We chose digital ethnography as our design research method. We initially considered a graffiti wall on the subreddit /r/Blind along with conducting some surveys, but after some research, we concluded that the subreddit finds such practices exploitative, and we realized that the background research needed to approach r/Blind respectfully could not be done before the assignment was due. The teaching staff advised us to instead perform a digital ethnography, given how much research on our target group is already available online. We therefore found the information we needed through people’s posts of their first-person accounts and suggestions. We found many resources to help us empathize with people with severe vision loss, including first-person accounts from stakeholders on social media and message boards. One of the more influential accounts we found was that of Debra (not her real name), whom we chose because her posts helped us empathize with her struggles. She has been blind for 13 years and is fortunate to have her husband around to help her. Despite her blindness, she works with people with vision loss, teaching them how to find things when nobody is around to help. We learned a great deal from Debra’s posts, including how people with this disability manage to find things (such as the “grid method”) and minimize losing things altogether (bins for organizing everything): “Most of the time when I misplace something I try to know where I was when I lost it. I can then do a grid pattern either on my desk or if I dropped something on my floor. If I cannot find something within like 24 hours I either ask for help (fortunately I live at my mom’s home so there are 3 people to ask) or I consider it lost forever.”
Design Research Results and Themes
One of the themes that emerged from our digital ethnography was independence. Due to their poor vision, many people in our target group are unable to accomplish certain tasks on their own, which forces them to depend on other people to some degree. Some are able to maintain a degree of independence, requiring little intervention from others in order to go about their lives, while others are much worse off and are unable to do much by themselves. A lack of independence can be especially frustrating to those who were born sighted but lost their vision later in life, as these people remember what it’s like to live an independent life but have had their independence taken away from them. Being unable to accomplish tasks that were previously easy is a significant challenge for many people with low vision. For example, in one /r/Blind post we found, one user speaking about their experience adjusting to having low vision wrote that their “attitude right now is that you don’t really get around it, you don’t ever love it, you don’t feel like it’s OK. I had sight until I was 19 and I know very well what I am missing. Nobody in their right mind would ever enjoy it.” Even Be My Eyes, a service allowing volunteers to enter calls with people with low vision to help them with things like finding lost jewelry, may be avoided by its intended audience due to the involvement of other people. One /r/Blind user who posted about their frustration with losing jewelry wrote in response to a suggestion to use Be My Eyes: “I just got nervous and hung up before it even found me someone.” We decided that one very important goal our design needed to accomplish was granting its users an improved sense of independence, allowing them to accomplish as much as they can without the help of anyone else.
Going along with the theme of independence is a theme of self-improvement. It’s true that people with low vision often have many tasks they cannot accomplish alone due to their disability, and while there have been many recent advancements in assistive tools for the blind, some tasks still require the assistance of a sighted person. However, one common trend we noticed while conducting our digital ethnography was a focus on self-improvement. Many users who post about their frustrations with their disability receive helpful advice from others about the approach they took to solve the issue, or how they developed skills that helped them overcome their struggles. The previously mentioned user who wrote about adjusting to low vision also had this to say about adapting to their condition: “But I take the position that if I’m going to be forced to do it, I’m going to do it properly and be good at it. That doesn’t necessarily mean following anyone else’s advice on it, including mine. In the end though it is a matter of finding solutions to problems. Work the problem, as astronauts say.” It’s important to remember that people with low vision don’t necessarily need a solution that does all the work of a task for them. While our design is meant to let people with low vision accomplish as much as they can on their own, that doesn’t mean it needs to do everything for them; rather, it should do only as much as is needed for the user to perform the rest of a task themselves. Our design should not only grant users agency within their daily lives; it should also respect the user’s agency while they are using it.
Proposed Design
Through our research, we found that our solution should assist the user in a wide variety of tasks, and allow the user to perform them without the aid of another person. After sketching a couple of designs, we ended up choosing the design shown below.
The glasses feature a camera on the right side of the device that is used to collect information about whatever it’s pointed at, and bone-conduction headphones in place of traditional temple tips that tell the user specific information about what the camera is pointed at without obstructing the environmental sounds the user relies on. On the same side as the camera is a microphone that the user can use to issue voice commands telling the glasses to read whatever text the user is looking at. The power/Bluetooth button and the charging port are both located on the left temple. The glasses connect via Bluetooth to an app on the user’s phone, where the information gathered by the glasses is processed. The user’s phone can also give haptic feedback when the glasses detect that the user is pointing at text. We chose this design over a screen-reader app with access to the phone’s camera or similar software because, based on our user research, screen-reader software sometimes fails when integrated with a device, since certain apps or websites may not be designed with screen readers in mind. We wanted to avoid having our design fail at an important time due to software limitations or complexity. Our design can also be used on things other than screens, such as medication bottles, magazines, or anything else with legible text. To help paint a picture of how these glasses can be used, we can look at one of our storyboards, in which the user needs to find their medication. With the glasses, a potential user of our product would be able to easily confirm that they have found the correct medication. They would first find their medication using a searching method such as the grid method, then point at the bottle and say “read” out loud so the glasses know to read whatever text is on the bottle.
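To make the interaction flow above concrete, here is a minimal sketch, in Python, of the phone-app logic we envision: the glasses stream camera frames over Bluetooth, the app pulses the phone’s haptics when text first comes into view, and the “read” voice command (or the equivalent app button) triggers speech output. Every name here (GlassesSession, on_frame, on_command) is illustrative, not part of any real API, and the OCR and speech steps are simulated with plain strings.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class GlassesSession:
    """Simulates one session between the glasses and the phone app."""
    events: List[Tuple[str, str]] = field(default_factory=list)  # feedback log
    last_text: Optional[str] = None  # most recent OCR result from the camera

    def on_frame(self, ocr_text: Optional[str]) -> None:
        """Called once per camera frame with the OCR result (None = no text).
        Pulses the phone's haptics when text first comes into view."""
        if ocr_text and not self.last_text:
            self.events.append(("haptic", "pulse"))
        self.last_text = ocr_text

    def on_command(self, command: str) -> None:
        """Handles the spoken "read" command or the equivalent app button."""
        if command == "read":
            spoken = self.last_text if self.last_text else "No text detected."
            self.events.append(("speech", spoken))

# Example: the user points the glasses at a medication bottle and says "read".
session = GlassesSession()
session.on_frame("Take one tablet daily")  # text detected: haptic pulse
session.on_command("read")                 # label text is read aloud
```

The key design choice this sketch reflects is that the haptic pulse and the spoken output are separate channels: haptics silently confirm that text is in view (useful in a quiet library), while speech is only produced when the user explicitly asks for it, preserving the user’s agency over when the device acts.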
In another storyboard we developed, a potential user needs to print something out at the library, but the screen reader provided by the library doesn’t work properly with the application they are trying to use.
Here, we see the user using the glasses to ensure that they are printing the correct document. Since the user is in a library where speaking aloud would be disruptive, they instead press a button in the phone app to tell the glasses to read the screen. While there are certainly many programs that people with vision loss can use, such as screen readers, those programs are only usable on the devices they are installed on. Our proposed design of wearable glasses with a built-in camera would allow people to read screens out in public, such as the tablets used to place orders at certain fast-food chains. Most importantly, our proposed design allows users to live their lives without outside assistance. In other words, our design gives people with vision loss the tools they need to live independent lives while letting them keep their agency as they use it.