Abstract
The human skin provides an ample, always-on surface for input to smartwatches, mobile phones, and remote displays. Using touch on bare skin to issue commands, however, requires users to recall the location of items without direct visual feedback. We present an in-depth study in which participants placed 30 items on the hand and forearm and attempted to recall their locations. We found that participants used a variety of landmarks, personal associations, and semantic groupings in placing the items on the skin. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including scars and tattoos. We further found that personal associations between items improved recall, and that participants often grouped important items in similar areas, such as family members on the nails. We conclude by discussing the implications of our findings for the design of skin-based interfaces.
Original language | English |
---|---|
Title of host publication | Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems |
Number of pages | 11 |
Publisher | Association for Computing Machinery |
Publication date | 2 May 2017 |
Pages | 1497-1507 |
ISBN (Electronic) | 978-1-4503-4655-9 |
DOIs | |
Publication status | Published - 2 May 2017 |
Event | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems: explore, innovate, inspire - Denver, United States |
Duration | 6 May 2017 → 11 May 2017 |
Conference
Conference | 2017 ACM SIGCHI Conference on Human Factors in Computing Systems |
---|---|
Country/Territory | United States |
City | Denver |
Period | 06/05/2017 → 11/05/2017 |