Sound localization is ill-posed … What to do?
Prof. John van Opstal (U Nijmegen / Donders Institute)
Abstract. The brain estimates the direction of a sound from the pressure-induced linear displacements of the eardrums. Accurate sound localization in the horizontal plane (azimuth angle) is enabled by binaural difference cues in timing and intensity. Localization in the vertical plane (elevation angle) relies on complex spectral-shape cues generated by elevation-dependent filtering in the pinna. However, the problem of extracting elevation from the sensory input is ill-posed: the eardrum signal results from a convolution of the source signal with the particular head-related transfer function (HRTF) associated with the source elevation, and both are a priori unknown to the system. As a result, sound localization should be impossible, as infinitely many source-filter combinations can give rise to identical inputs at the eardrums. Yet humans can localize sounds in unknown environments with considerable accuracy and precision. I will argue that the auditory system relies on several non-acoustic assumptions (priors) to cope with the ill-posed nature of the localization problem, and will describe results from experiments in our lab that probe these priors. These considerations also have implications for restorative therapies and device-encoding strategies for the hearing impaired.
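A minimal formalization of the ambiguity described in the abstract (notation is mine, not taken from the talk): in the spectral domain the convolution becomes a product, so the sensed spectrum at the eardrum is

    Y(ω) = X(ω) · H_ε(ω),

where X(ω) is the unknown source spectrum and H_ε(ω) is the HRTF for elevation ε. For any candidate elevation ε′, the alternative source spectrum X′(ω) = Y(ω) / H_ε′(ω) produces exactly the same eardrum input Y(ω), so the acoustic data alone cannot single out the true elevation; additional assumptions (priors) are needed to break the tie.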
Bio. John van Opstal is Chair in Biophysics at the Faculty of Science and Director of the Donders Centre for Neuroscience, both at Radboud University, Nijmegen. Prof. van Opstal studies sound localization and the saccadic eye-head control system of human and non-human primates. During his PhD he studied the neurophysiological responses of saccade-related neurons in the monkey superior colliculus, and later the neurophysiology underlying Listing’s law for 3D eye movements. He applies computational modelling to guide his research. He regards sound localization as an action-perception problem, and probes the system with fast, saccadic eye-head gaze-control paradigms to study the earliest underlying neurocomputational mechanisms. He set out his ideas in his book “The Auditory System and Human Sound-Localization Behavior”, published in April 2016 by Academic Press (Elsevier). He has received several personal grants, including Human Frontiers, an NWO VICI, and an ERC Advanced Grant. Within his ERC grant he started a collaboration with the visual robotics group in Lisbon, with the aim of developing a humanoid 3D eye-head robotic system that generates gaze shifts just as humans do.