In the current study, a simple, linear ITD-ILD trade-off model based on subjective data has been proposed to allow for various combinations of interaural time and level differences (ITD and ILD) for positioning auditory images at azimuth angles up to 60°. The model can also be used in reverse to predict the azimuths of auditory images from input ILD and ITD values. The independent ILD and ITD values used to generate the model were obtained through a lateral pointer task using a novel method that uses HRIRs to define azimuth positions. Analysis of the model showed that its combination values were smaller than the natural combinations extracted from HRTFs in localisation. This was also apparent in a virtual localisation experiment in which subjects used an HRIR pointer to report the azimuths of auditory images laterally displaced by various ILD-ITD combinations taken from the model. The perceived angles were consistently narrower than their target angles, and this underestimation was more significant for wider target angles above 45° and for ILD/ITD combination ratios with a larger weighting of ILD.
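The idea of a linear ITD-ILD trade-off can be sketched as follows. This is a hypothetical illustration only: the weighting coefficients and the helper names (`predict_azimuth`, `trade_off_pairs`) are placeholders, not the fitted values or code from the study, which are not given here.

```python
# Illustrative linear ITD-ILD trade-off sketch. Perceived azimuth is
# modelled as a weighted sum of ITD (in ms) and ILD (in dB), clamped to
# the +/-60 deg range covered by the model. Both weights below are
# made-up placeholder values, NOT the coefficients fitted in the study.

ITD_WEIGHT_DEG_PER_MS = 60.0  # placeholder: degrees of azimuth per ms of ITD
ILD_WEIGHT_DEG_PER_DB = 3.0   # placeholder: degrees of azimuth per dB of ILD

def predict_azimuth(itd_ms: float, ild_db: float) -> float:
    """Predict the azimuth (deg) of an auditory image from an ITD/ILD pair."""
    az = ITD_WEIGHT_DEG_PER_MS * itd_ms + ILD_WEIGHT_DEG_PER_DB * ild_db
    return max(-60.0, min(60.0, az))  # clamp to the model's range

def trade_off_pairs(target_deg: float, n: int = 5):
    """Enumerate n ITD/ILD combinations that all map to the same target
    azimuth, trading ITD against ILD along the linear model."""
    pairs = []
    for i in range(n):
        frac = i / (n - 1)  # 0 = ITD only, 1 = ILD only
        itd = (1.0 - frac) * target_deg / ITD_WEIGHT_DEG_PER_MS
        ild = frac * target_deg / ILD_WEIGHT_DEG_PER_DB
        pairs.append((itd, ild))
    return pairs
```

Under this sketch, every pair returned by `trade_off_pairs(45.0)` predicts the same 45° azimuth, which is the sense in which ITD and ILD "trade off" against each other in a linear model.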

MRes project: Oct 2018 – Jun 2020

Researcher: Nikita Goddard

Supervisor: Dr Hyunkook Lee

