Abstract
The human skin provides an ample, always-on surface for input to smart watches, mobile phones, and remote displays. Using touch on bare skin to issue commands, however, requires users to recall the location of items without direct visual feedback. We present an in-depth study in which participants placed 30 items on the hand and forearm and attempted to recall their locations. We found that participants used a variety of landmarks, personal associations, and semantic groupings in placing the items on the skin. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including scars and tattoos. We further found that personal associations between items improved recall, and that participants often grouped important items in similar areas, such as family members on the nails. We conclude by discussing the implications of our findings for the design of skin-based interfaces.
| Original language | English |
| --- | --- |
| Title | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems |
| Number of pages | 11 |
| Publisher | Association for Computing Machinery |
| Publication date | 2 May 2017 |
| Pages | 1497-1507 |
| ISBN (electronic) | 978-1-4503-4655-9 |
| DOI | |
| Status | Published - 2 May 2017 |
| Event | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems: explore, innovate, inspire - Denver, United States. Duration: 6 May 2017 → 11 May 2017 |
Conference
| Conference | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems |
| --- | --- |
| Country/Territory | United States |
| City | Denver |
| Period | 06/05/2017 → 11/05/2017 |
Keywords
- interface design, recall performance, skin input