One of the opportunities we have with AD/Dementia patients is to use idle time for brain-stimulation activities. How much time do these patients spend in waiting rooms, sitting idle, maybe interacting with their phone?
This use case leverages smart eyewear with eye-tracking to conduct a silent brain-stimulation dialog between the eyewear and the patient wearing it.
Because the eyewear will be paired with the wearer's smartphone, we have access to a large amount of user data, likely including the photo library and social accounts. I imagine a user journey similar to this.
As Carol sits in the waiting room, the devices detect that her location is the doctor's office (GPS + contacts/calendar/maps), that she is sitting idle (accelerometer), and that she is relaxed (heart rate). If no conversation is detected (microphone, accelerometer), the glasses prompt her; a sketch of this trigger logic follows the short exchange below.
"Hey Carol, want to look at some pictures?"
She confirms, and some pictures she has viewed before are selected to be displayed in her field of view, one at a time. After a pause, the glasses prompt her.
"Do you know what the picture is of?
Carol can respond in several ways. She can single-tap (No) or double-tap (Yes) her smart watch. She can slightly incline her head for Yes, or give a single nod for No. A single or double blink could also be used. The wearer should have several options, so that they can choose the response that is least conspicuous, or most appropriate for the surrounding environment. If she responds No, the glasses can tell her the answer, or give her hints, until she indicates that she recognizes the image. There is no need for Carol to speak (and draw attention).
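As a rough sketch, those low-key gestures might map onto a single Yes/No answer like this. The gesture names and tap/blink counts are assumptions for illustration, not a finished interaction spec.

```swift
import Foundation

// A sketch of mapping several discreet input gestures onto one Yes/No answer.
enum Answer { case yes, no }

enum Gesture {
    case watchTap(count: Int)   // single tap = No, double tap = Yes
    case headIncline            // slight incline = Yes
    case headNod(count: Int)    // single nod = No
    case blink(count: Int)      // single = No, double = Yes
}

func interpret(_ gesture: Gesture) -> Answer? {
    switch gesture {
    case .watchTap(let count):  return count >= 2 ? .yes : .no
    case .headIncline:          return .yes
    case .headNod(let count):   return count == 1 ? .no : nil
    case .blink(let count):     return count >= 2 ? .yes : .no
    }
}

// Example: Carol double-taps her watch to confirm she recognizes the photo.
if interpret(.watchTap(count: 2)) == .yes {
    print("Show the next photo")
} else {
    print("Offer a hint about the current photo")
}
```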
Since Carol's attention is likely focused on her devices throughout this interaction, the devices should be aware of her surroundings for her. For example, if her name is called, the device should inform her and end the photo display.
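A small sketch of that interruption behavior, assuming some on-device transcriber feeds ambient phrases to the session; the plain string match here stands in for real keyword spotting.

```swift
import Foundation

// If the microphone hears the wearer's name, stop the session and tell her why.
final class PhotoSession {
    private(set) var isActive = true
    let wearerName: String

    init(wearerName: String) { self.wearerName = wearerName }

    /// Called for every phrase the (assumed) on-device transcriber produces
    /// while the photo session is running.
    func handleAmbientSpeech(_ transcript: String) {
        guard isActive else { return }
        if transcript.lowercased().contains(wearerName.lowercased()) {
            isActive = false
            print("Pausing the photos. I think they just called your name, \(wearerName).")
        }
    }
}

let session = PhotoSession(wearerName: "Carol")
session.handleAmbientSpeech("The doctor will see Carol now")  // ends the photo display
```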
So there are a lot of interactions and considerations here. I think many of these interactions should be discovered, tested, and included in any modern design system.