So I decided to give a bit more thought to the "tapping" response strategy from the previous use cases.  Tapping is really a subset of the haptic language category.

Expanding beyond "yes" and "no" responses allows us to create an opportunity for greater dialog between Carol and Seymour.

I chose tapping as it is a common, generally socially acceptable behavior that tends not to draw undesirable attention in public settings. People often tap their fingers out of habit, while thinking, or when listening to music, making it a discreet gesture that doesn't stand out as unusual or conspicuous. I imagined Carol sitting on a bus or in a waiting room, hands in her lap, her finger tapping... in this case, on her smartwatch. Using finger tapping as a method for interacting with smart devices or wearables capitalizes on this natural, inconspicuous action. It lets users communicate with their devices in a way that blends seamlessly into behaviors already common in social contexts, and it minimizes any stigma or discomfort that might come with more overt or less familiar forms of interaction, especially in assistive technologies designed for continuous, everyday use.

Some of the examples I thought of tended to be an application of the Morse code approach... dits and dahs and pauses. Hardly intuitive, and spelling out words one letter at a time would be cumbersome and slow. We need a small vocabulary of short gestures that can be easily learned. We will also need confirmation from the device that the input was understood and was an appropriate response to the query.

So without replicating Morse code, what kind of vocabulary can we create? Remembering that this is potentially a solution for AD/Dementia patients, we must keep it simple... maybe the short list below?

Yes = one tap
No = two taps
Help = three taps
Stop = tap and hold

About midway through that list, I realized that a smartwatch can detect more than taps; it can also detect swipes. This opens up even more possibilities.

Swipe left = Previous
Swipe right = Next

The confirmation of these commands can be audible or visible from the eyewear, or tactile from the watch.
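To make the vocabulary above concrete, here is a minimal sketch of how a companion app might turn raw touch events into those responses. Everything in it is an assumption for illustration: the `Press` event shape, the `HOLD_SECONDS` and `GAP_SECONDS` thresholds, and the idea that swipes arrive as separate named events from the watch API. The real thresholds would have to come from the usability testing mentioned below.

```python
from dataclasses import dataclass

# Hypothetical tuning values -- real numbers would come from usability testing.
HOLD_SECONDS = 0.8   # a press held longer than this counts as "tap and hold"
GAP_SECONDS = 0.5    # taps closer together than this belong to one gesture

# The short vocabulary from the lists above.
VOCABULARY = {
    "tap-1": "yes",
    "tap-2": "no",
    "tap-3": "help",
    "hold": "stop",
    "swipe-left": "previous",   # swipes assumed delivered as distinct events
    "swipe-right": "next",
}

@dataclass
class Press:
    down: float  # timestamp (seconds) when the finger touched the watch
    up: float    # timestamp (seconds) when the finger lifted

def group_presses(presses):
    """Split a stream of presses into gestures separated by pauses."""
    groups, current = [], []
    for p in presses:
        if current and p.down - current[-1].up > GAP_SECONDS:
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    return groups

def classify(presses):
    """Map one burst of presses to a word in the tap vocabulary (or None)."""
    if not presses:
        return None
    # A single long press is "tap and hold" = stop.
    if len(presses) == 1 and presses[0].up - presses[0].down >= HOLD_SECONDS:
        return VOCABULARY["hold"]
    # Otherwise, the tap count selects the response; unknown counts return None,
    # which is the cue for the device to ask for the input again.
    return VOCABULARY.get(f"tap-{len(presses)}")
```

For example, two quick taps followed by a pause and a long press would be grouped into two gestures, classified as "no" and then "stop"; the device would then echo each back audibly through the eyewear or as a tactile pulse from the watch before acting on it.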

Since we don't know the wearer's ability to remember sequences, we can offer options beyond the basic responses above. We can then introduce them through practice scenarios, gamification, or customization. This would need to be thoroughly usability-tested, and designed to adapt to each individual.
