Audio Nomad is a three-year research and development programme exploring the creative and technological potential of immersive and location-sensitive audio. The project is a collaboration between the artist Dr. Nigel Helyer of Sonic Objects; Sonic Architecture and, at the University of New South Wales, Dr. Daniel Woo of the Human Computer Interaction Lab (HCI) and Prof. Chris Rizos of the Satellite Navigation and Positioning Lab (SNAP).
Audio Nomad is jointly supported by the Australian Research Council and the Australia Council for the Arts under the “Synapse” scheme, which facilitates art and science collaborations.
Over the next three years the Audio Nomad team will develop a series of projects designed to balance creative and conceptual endeavours with the development of technological platforms. Our guiding principle is to marry creative and technological approaches, working as a cross-disciplinary team to develop philosophical, creative and technological systems within the critical arena of public cultural events and major international electronic-arts festivals.
The Audio Nomad “Syren” project places a strong emphasis
on a highly imaginative and creative approach to sound composition and
sound design in order to highlight the potential of this emergent field
of geo-spatially located virtual audio. Unlike conventional
sound-design or musical composition, geo-spatially located audio needs
to be highly sensitive to its environmental and architectural context
as well as to the fundamentally non-linear manner in which the auditor
may interact with the content.
Conceptually and sonically, the principal challenge of the research is to develop a ‘compositional’ strategy able to deliver a non-linear but coherent ‘field’ of audio. The system provides the possibility for both (apparently) fixed and mobile audio events, as well as several mechanisms for sequencing sound files in a variety of ways.
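One way such a non-linear audio 'field' might be represented is as a set of geo-located events, each fixed or mobile, with a distance-based gain that determines what the auditor hears as they move. The following is a minimal illustrative sketch only; all names and the linear attenuation rule are assumptions for illustration, not the actual Syren implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class AudioEvent:
    """A hypothetical geo-located sound event (illustrative, not the Syren model)."""
    sound_file: str
    x: float            # metres east of a local origin
    y: float            # metres north of a local origin
    radius: float       # audible radius in metres
    mobile: bool = False  # mobile events update x, y over time

    def gain(self, lx: float, ly: float) -> float:
        """Linear distance attenuation: 1.0 at the event, 0.0 at the radius."""
        d = math.hypot(self.x - lx, self.y - ly)
        return max(0.0, 1.0 - d / self.radius)

# A coherent but non-linear 'field': whichever events the auditor
# wanders into range of are mixed, in no predetermined order.
field = [
    AudioEvent("harbour.wav", 0.0, 0.0, 50.0),
    AudioEvent("engine.wav", 30.0, 40.0, 50.0, mobile=True),
]

# Events audible from an auditor position of (10, 0), with their gains.
active = [(e.sound_file, e.gain(10.0, 0.0)) for e in field
          if e.gain(10.0, 0.0) > 0.0]
```

The auditor's trajectory, rather than a timeline, sequences the material: moving through the field brings events in and out of the mix.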
Another significant conceptual challenge is to re-conceptualise sonic events as a topology, thus escaping a view of the world (and of sound composition) that is ‘object’ oriented in favour of one that is relational and inextricably connected – through both spatial and temporal axes.
Virtual Audio Reality refers to a system that immerses an auditor in a dynamic and spatially active audio environment, which may or may not be linked to a corresponding visual domain (real or virtual). In this case the audio is intended as a total environment and supplants any local or ambient sound.
Augmented Audio Reality refers to a system that allows an auditor to experience ambient/local sounds whilst simultaneously overlaying these with additional audio information. AAR generally operates in a real-world visual context.
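The distinction between the two modes can be stated as a mixing rule: a VAR system excludes the ambient signal from the mix, while an AAR system overlays the virtual layer on the ambient/local sound. A schematic per-block sketch, with all names hypothetical:

```python
def mix(ambient, virtual, augmented):
    """Mix one block of audio samples.

    Virtual Audio Reality (augmented=False): the virtual layer is the
    total environment and supplants the ambient sound.
    Augmented Audio Reality (augmented=True): the virtual layer is
    overlaid on the ambient/local sound.
    """
    if augmented:
        return [a + v for a, v in zip(ambient, virtual)]
    return list(virtual)

var_block = mix([0.25, 0.25], [0.5, 0.5], augmented=False)  # ambient supplanted
aar_block = mix([0.25, 0.25], [0.5, 0.5], augmented=True)   # ambient retained
```

In practice the ambient signal in AAR is usually the real acoustic environment heard through open headphones, so only the virtual layer is rendered; the sketch makes the conceptual difference explicit.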