
Abstract

The human skin provides an ample, always-on surface for input to smart watches, mobile phones, and remote displays. Using touch on bare skin to issue commands, however, requires users to recall the location of items without direct visual feedback. We present an in-depth study in which participants placed 30 items on the hand and forearm and attempted to recall their locations. We found that participants used a variety of landmarks, personal associations, and semantic groupings in placing the items on the skin. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including scars and tattoos. We further found that personal associations between items improved recall, and that participants often grouped important items in similar areas, such as family members on the nails. We conclude by discussing the implications of our findings for the design of skin-based interfaces.

Original language: English
Title of host publication: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems
Number of pages: 11
Publisher: Association for Computing Machinery
Publication date: 2 May 2017
Pages: 1497-1507
ISBN (Electronic): 978-1-4503-4655-9
DOIs
Publication status: Published - 2 May 2017
Event: 2017 ACM SIGCHI Conference on Human Factors in Computing Systems: explore, innovate, inspire - Denver, United States
Duration: 6 May 2017 - 11 May 2017

Conference

Conference: 2017 ACM SIGCHI Conference on Human Factors in Computing Systems
Country/Territory: United States
City: Denver
Period: 06/05/2017 - 11/05/2017

