Spatial coding of the predicted impact location of a looming object

NEPPI-MODONA, Marco
2004-01-01

Abstract

Avoiding or intercepting looming objects implies a precise estimate of both time until contact and impact location [1-4]. In natural situations, extrapolating a movement trajectory relative to some egocentric landmark requires taking into account variations in retinal input associated with moment-to-moment changes in body posture [5-7]. Here, human observers predicted the impact location on their face of an approaching stimulus mounted on a robotic arm, while we systematically manipulated the relation between eye, head, and trunk orientation. The projected impact point on the observer's face was estimated most accurately when the target originated from a location aligned with both the head and eye axes. Eccentric targets with respect to either axis resulted in a systematic perceptual bias ipsilateral to the trajectory's origin. We conclude that (1) predicting the impact point of a looming target requires combining retinal information with eye position information, (2) this computation is accomplished accurately for some, but not all, possible combinations of these cues, (3) the representation of looming trajectories is not formed in a single, canonical reference frame, and (4) the observed perceptual biases could reflect an automatic adaptation for interceptive/defensive actions within near peripersonal space.
Year: 2004
Volume (issue): 14(13)
Pages: 1174-1180
Keywords: VENTRAL INTRAPARIETAL AREA; PREMOTOR CORTEX; PARIETAL CORTEX; FLY BALLS; PERCEPTION; INFORMATION; REPRESENTATION; RESPONSES; MOVEMENTS; POSITION
Authors: M. NEPPI MODONA; D. AUCLAIR; A. SIRIGU; J-R DUHAMEL
Files in this item:
File: 116794 spatial coding_neppi modona.pdf (restricted access)
File type: postprint (author's final version)
Size: 291.12 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/101994
Citations
  • PMC: 3
  • Scopus: 15
  • Web of Science (ISI): 11