LUCIA
An ecosystem of interconnected wearable devices that aid in the mobility training and navigation of users with vision loss, instilling a sense of autonomy and inspiring exploration.
// Fall_2018 Human Factors course project
Personal Input delivered
User research, concept development, product design, interaction design, prototypes, user testing, voice user interface, renders, vision video.
Collaborators
Bernard Sahdala
Geethika Simhadri
Leah Van Proeyen
Problem
Individuals with a visual impairment do not feel confident going out and navigating without the assistance of someone else. This hinders them from experiencing a sense of independence and autonomy.
Goal
Design a product utilizing the Internet of Things to create a minimalist and innovative solution to improve the quality of life of the visually impaired.
Research
For our primary research, we conducted a contextual inquiry exercise: we visited the Savannah Center for the Blind and Low Vision (SCBLV) and had an immersive mobility training experience, empathizing with our target users. We also interviewed members with vision loss, who described their mobility problems and how they overcame them.
We also participated in the “Walk A Mile in My Shoes” fundraiser organized by SCBLV on October 13. The event was run and attended by members of the visually impaired community and their families. Its focus was to raise sighted individuals’ awareness of the difficulties of navigating without vision, and it gave us great insight into the community.
We were also in constant communication with a student at SCAD with complete vision loss. He participated in interviews and shadowing sessions, volunteered for our user testing, and consistently gave us feedback.
Affinitization
We affinitized our gathered insights according to a few key factors. This helped us narrow down our solution, decide where to focus, and identify supporting issues that could be solved alongside it.
These insights then fed our ideation. Through them, we found that the biggest struggles of living with vision loss relate to mobility: specifically, the training one must undergo to traverse new environments, and the anxieties and worries that accompany that training. With this in mind, we asked ourselves…
How might we?
Instill a sense of autonomy
Create awareness of surroundings
Deliver spatial information
Ease navigation
Inspire exploration
To answer these questions, we mapped out a fictional journey of the user and looked at areas of opportunity for design solutions.
We created a five-stage journey map to identify the main pain points a user encounters when traveling by public transportation and then navigating inside a building: in this case, from boarding the bus to reaching their classroom.
After concluding our research and analysis, we set out to explore different concepts that would culminate in an impactful solution:
A wearable headset equipped with depth sensors that map out a three-dimensional view of the user’s surroundings, providing guidance from our Assistant through bone conduction technology.
LUCIA.core features
Area mapping of indoor environment for ease of mobility
Navigation powered by data gathered from other users
Image recognition to help localize places and/or items
Awareness of rooms and objects
Orientation calibration
Hands-free notifications
Mapping Language
Utilizing the latest depth-mapping and artificial-intelligence technology, our cloud servers interpret the recorded data to understand the scene around the user.
Using virtual 3D sound, LUCIA narrates the surroundings, with each audio cue emanating from the location of the object or area being described, creating virtual landmarks that ease mobility.
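As a rough illustration of how an audio cue could be placed in space, the sketch below converts an object’s position relative to the user’s heading into simple stereo gains. This is a simplified constant-power pan, not the actual bone-conduction renderer, and all names are hypothetical:

```python
import math

def stereo_gains(user_heading_deg, user_xy, object_xy):
    """Return (left, right) gains for a constant-power stereo pan
    based on where an object sits relative to the user's heading.
    Hypothetical sketch: 0 degrees of heading faces the +y axis."""
    dx = object_xy[0] - user_xy[0]
    dy = object_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))            # 0 = straight ahead
    azimuth = (bearing - user_heading_deg + 180) % 360 - 180  # wrap to -180..180
    # Clamp azimuth to -90 (full left) .. +90 (full right) and normalize
    pan = max(-90.0, min(90.0, azimuth)) / 90.0
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right
```

An object straight ahead produces equal gains in both ears, while an object to the user’s right weights the right channel, approximating the “sound comes from the object” effect described above.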
3D maps produced by LUCIA core and node devices are sent to the LUCIA cloud servers for storage and cross-referencing with other users’ scans, creating a more accurate environmental model.
When a LUCIA core user enters a compatible building for the first time, the map data is downloaded to their device to enable assisted navigation, if desired, and to provide higher accuracy to Exploration Mode and Snapshot.
Users of both devices continuously collaborate to map out indoor environments, helping themselves and others navigate confidently.
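The cross-referencing idea can be pictured as merging occupancy votes from many users’ scans of the same area. Below is a minimal sketch under an assumed grid-based data model; the actual LUCIA cloud format is not specified here:

```python
from collections import defaultdict

def merge_scans(scans, threshold=0.5):
    """Merge several occupancy scans of the same indoor area.

    Hypothetical data model: each scan maps a grid cell (x, y)
    to 1 (obstacle) or 0 (free). A cell counts as an obstacle in
    the merged map when the fraction of scans voting 'obstacle'
    meets the threshold.
    """
    votes = defaultdict(list)
    for scan in scans:
        for cell, occupied in scan.items():
            votes[cell].append(occupied)
    return {cell: int(sum(v) / len(v) >= threshold) for cell, v in votes.items()}

# Three users scan the same hallway; disagreements resolve by majority.
scans = [
    {(0, 0): 1, (0, 1): 0, (1, 0): 1},
    {(0, 0): 1, (0, 1): 0},
    {(0, 0): 0, (0, 1): 1, (1, 0): 1},
]
merged = merge_scans(scans)
```

Each additional scan contributes more votes, so the merged model grows more reliable as more users traverse a building, mirroring the collaborative mapping described above.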
LUCIA.core UI
For sighted users who wish to preemptively map indoor surroundings and cooperate with other users by sharing their scans through the LUCIA Cloud System.
LUCIA.node features
Area mapping of indoor environment for ease of mobility
Image recognition to help localize places and/or items
Awareness of rooms and objects
Attaches to clothing for comfortable continuous use
Slim and compact form
LUCIA.node UI
VUI
Access LUCIA by just calling
Hey LUCIA
OK LUCIA
or simply hold your index finger and thumb together along the sides of your core device
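A hypothetical sketch of how the assistant might detect these wake phrases in a transcribed utterance and extract the command that follows (the real voice pipeline is not specified in this project):

```python
WAKE_PHRASES = ("hey lucia", "ok lucia")  # wake phrases from the VUI above

def extract_command(utterance):
    """Return the command following a wake phrase, or None if absent."""
    text = utterance.lower().strip()
    for phrase in WAKE_PHRASES:
        if text.startswith(phrase):
            # Drop the wake phrase plus any separating punctuation
            return text[len(phrase):].strip(" ,") or None
    return None
```

For example, “Hey LUCIA, where is the exit?” would yield the command “where is the exit?”, while speech without a wake phrase is ignored.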
Our Impact
Mobility training is a necessary part of learning to navigate the world without vision. Depending on the location, it can take a week or more to become familiar with a new place. By easing navigation through LUCIA, we can save time for individuals with a visual impairment and allow them to explore new places without asking others for assistance.