This was a collaborative project between MagicBeans, APL and Warner Music UK, funded by Innovate UK as part of the Audience of the Future programme.

The rapid development of Augmented and Mixed Reality (AR and MR) technology is creating a corresponding demand for audio production for ‘Six-Degrees-of-Freedom’ (6DOF) applications, in which listeners can approach sound scenes from any angle, move through them and interact with objects at close quarters, with the expectation that the audio will adapt to the physical environment in which it is presented. This demand will rapidly disrupt traditional methods of audio production, leaving highly experienced UK creatives in need of new enabling technologies.

The challenge is to create and implement audio content that responds to this listener movement with a sufficient degree of realism to be believable both as an audio-only experience and in conjunction with visual cues – particularly when “the visuals” are real-world physical objects and environments. A sufficiently concise and efficient representation of complex behaviour is required to enable implementation on devices with limited processing power. While the current state of the art permits the placement of simple sound sources within overall acoustic spaces, the methodology and aesthetic employed are directly descended from games implementations, and cannot achieve the complex volumetric acoustic behaviour required to meet the threshold of “suspension of disbelief” demanded by 6DOF interaction. Additionally, current methodologies are labour intensive and require a skill set typically well outside that of audio professionals, making them inefficient in terms of workflow and personnel requirements.

The VASAR (Volumetric Audio Synthesis for AR) project aims to meet the need for a technology enabling the efficient capture, reproduction and synthesis of 6DOF audio, building on existing workflows and infrastructure to generate high-resolution volumetric 6DOF outputs from both live captures and legacy materials. A successful project outcome, through innovative improvements to audio technology and engineering practice, will shorten production times, decrease costs and greatly improve realism, and thus the audience experience, for AR- and MR-based content. VASAR will develop the technical foundations for an efficient music-led AR experience deployable by major record labels, and a scalable 6DOF soundfield that can be ‘mapped’ directly onto real-world environments – opening up commercialisation opportunities for a brand-new class of Mixed Reality audio experience.

Innovate UK-funded Project: 2019 – 2021

Researchers: Dr Dale Johnson

Supervisor: Dr Hyunkook Lee

Industry partners: MagicBeans, Warner Music UK


The work conducted during this project made a major contribution to the development of the 6DoF virtual acoustics rendering framework for the APL Spatial Audio Engine (ASPEN) – see Solutions.
