Audio cues enhance mirroring of arm motion when visual cues are scarce

Author

E.D. Lee
E. Esposito
Itai Cohen

Abstract

Swing in a crew boat, a good jazz riff, a fluid conversation: these tasks require extracting sensory information about how others flow in order to mimic and respond. To determine what factors influence coordination, we build an environment to manipulate incoming sensory information by combining virtual reality and motion capture. We study how people mirror the motion of a human avatar’s arm as we occlude the avatar. We efficiently map the transition from successful mirroring to failure using Gaussian process regression. Then, we determine the change in behaviour when we introduce audio cues with a frequency proportional to the speed of the avatar’s hand or train individuals with a practice session. Remarkably, audio cues extend the range of successful mirroring to regimes where visual information is sparse. Such cues could facilitate joint coordination when navigating visually occluded environments, improve reaction speed in human–computer interfaces or measure altered physiological states and disease. © 2019 The Author(s) Published by the Royal Society. All rights reserved.
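The abstract mentions two quantitative ingredients: an audio cue whose frequency is proportional to the avatar's hand speed, and Gaussian process regression used to efficiently map the transition from successful mirroring to failure. The sketch below is a hypothetical illustration of both ideas, not the authors' code; the linear speed-to-pitch mapping, the occlusion parametrization, the toy data, and all variable names are assumptions for illustration only.

```python
# Hypothetical sketch of the two quantitative ideas in the abstract; not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def cue_frequency(hand_speed, base_hz=200.0, gain_hz_per_mps=400.0):
    """Audio cue pitch proportional to the avatar's hand speed (assumed linear mapping)."""
    return base_hz + gain_hz_per_mps * hand_speed

# Toy data: mirroring performance (e.g. a tracking score in [0, 1]) at a few occlusion levels.
rng = np.random.default_rng(0)
occlusion = np.linspace(0.0, 1.0, 12).reshape(-1, 1)          # fraction of the avatar occluded
performance = np.clip(1.0 - occlusion.ravel() ** 3 + 0.05 * rng.standard_normal(12), 0.0, 1.0)

# Gaussian process regression interpolates performance smoothly across occlusion levels,
# so the success-to-failure transition can be located from relatively few trials.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-2),
                              normalize_y=True)
gp.fit(occlusion, performance)

grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
transition = grid[np.argmin(np.abs(mean - 0.5)), 0]           # occlusion where performance ~ 0.5
print(f"estimated transition near occlusion = {transition:.2f}")
```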

Date Published

2019

Journal

Journal of the Royal Society Interface

Volume

16

Issue

154

URL

https://www.scopus.com/inward/record.uri?eid=2-s2.0-85066243539&doi=10.1098%2frsif.2018.0903&partnerID=40&md5=031f81c9e1bfe1fa6c189c0c7e5126da

DOI

10.1098/rsif.2018.0903

Group (Lab)

Itai Cohen Group

Funding Source

69189-NS-II
