Programme for 2019!
Sounds in Space 2019 – Day 1 (25th June)
|11:45||Welcome||Bruce Wiggins||Aud 3|
|11:55||Invited Talk: D and B Soundscape||Andrew Horsburgh||Aud 3|
|12:45||Time-Domain Upmixing of Two-Channel Audio Signals||Stephen Oxnard||Aud 3|
|13:15||Threading the needle: Managing noise exposure and pollution while maintaining a high-quality audience experience at outdoor events||Adam Hill||Aud 3|
|14:45||Production of immersive music documentaries from top to bottom (and why you should always use an ambisonic microphone on set)||Przemyslaw Danowski||Aud 3|
|15:15||Audiovisual||Patricio Ballesteros Ledesma||Aud 3|
|16:15||Individualisation of HRTFs using MEMS Speakers to Accurately Reproduce Pinnae Frequency Shaping Effects||Alex Vilkaitis||Aud 3|
|16:55||Developments of GASP project – ‘Guitars with Ambisonic Spatial Performance’||Duncan Werner and Emma Fitzmaurice||Aud 3|
|17:30||Live Performance Demonstration of GASP (optional)||Duncan Werner, Emma Fitzmaurice and guitarist guests!||Aud 1|
|18:30||Close of Day 1|| ||MS005|
Sounds in Space 2019 – Day 2 (26th June)
|10:00||Welcome||Bruce Wiggins||Aud 3|
|10:10||Chasing Space: Some Spatial Approaches to Binaural Creativity||Dallas Simpson||Aud 3|
|10:50||Modular Synthesis and Spatial Audio||John Crossley||Aud 3|
|11:20||When the Boys Come Home||Mark Randell, Daithi McMahon and Michael Brown||Aud 3|
|12:50||Pitch Black: A ‘AAA’ Audio Game||Harry Cooper||Aud 3|
|13:30||Understanding 3D game audio runtime systems||Simon Goodwin||Aud 3|
|14:30||Invited Talk: Ten Billion||Adam Stanovic||Aud 3|
Time-Domain Upmixing of Two-Channel Audio Signals
The upmixing of two-channel stereophonic audio for surround sound reproduction has been the subject of extensive research in recent years. There are now numerous methods by which stereo audio signals may be analysed, interpreted and manipulated to form programme material suitable for playback over three or more loudspeakers. Current approaches, such as primary-ambient extraction and dynamic panning, regularly make use of the short-time Fourier transform (STFT). However, to produce audio of acceptable fidelity using STFT processing, it is necessary to employ suitably large analysis frame sizes to yield frequency spectra of appropriate resolution. In practice, this can demand prohibitive computational resources which, when unavailable, lead to sub-optimal performance of the underlying upmixing algorithms. To address this issue, this work extends existing research into time-domain upmixing processes, which circumvent the computational issues of STFT-based approaches while matching or bettering the performance of the resulting upmixer.
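To illustrate why time-domain processing sidesteps the STFT frame-size trade-off, here is a minimal passive matrix upmix sketch. This is not the algorithm presented in the talk, just a textbook sum/difference decomposition: each output sample depends only on the current input samples, so there is no analysis frame, no latency, and no frequency-resolution constraint.

```python
import numpy as np

def passive_upmix_2_to_3(left, right):
    """Naive time-domain 2-to-3 upmix (passive matrix).

    Derives a centre channel from the sum (correlated content) and an
    ambience signal from the difference (decorrelated content), with
    -3 dB gains to roughly preserve overall level. Purely sample-by-
    sample: no STFT analysis frame is involved.
    """
    g = 1.0 / np.sqrt(2.0)           # -3 dB
    centre = g * (left + right)      # correlated (phantom-centre) content
    ambience = g * (left - right)    # decorrelated content, for surrounds
    return centre, ambience

# Usage: identical L/R channels form a pure centre image,
# so the ambience output is silent.
fs = 48000
t = np.arange(fs) / fs
l = np.sin(2 * np.pi * 440 * t)
r = np.sin(2 * np.pi * 440 * t)
c, amb = passive_upmix_2_to_3(l, r)
```

In practice the ambience signal would be further delayed or decorrelated before being routed to the surround loudspeakers; primary-ambient extraction methods refine this crude split adaptively.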
Stephen Oxnard obtained a PhD in Music Technology from the University of York in 2016. Upon completion of his studies, he was appointed Senior Lecturer and Course Leader for the BSc Audio and Music Technology course at Anglia Ruskin University. In 2018, Stephen joined Meridian Audio Ltd., where he is currently employed as a Research Engineer. His current research interests include numerical room acoustic modelling and prediction, room equalisation, loudspeaker design and measurement, and multi-channel audio reproduction formats. Stephen is a member of the Institution of Engineering and Technology and a Fellow of the Higher Education Academy.
Threading the needle: Managing noise exposure and pollution while maintaining a high-quality audience experience at outdoor events
The issue of noise pollution due to large outdoor (and sometimes indoor) entertainment events has become increasingly problematic, with a significant number of festivals and other events taking place in centrally-located areas that are often under strict noise regulations. The AES Technical Committee on Acoustics and Sound Reinforcement formed a working group in 2018 to examine this issue. In this talk, the group’s initial findings will be presented, covering areas such as noise regulations, guidelines and prediction techniques, audience level regulations, practical measurement methods, and sound system design (primary and secondary) for limited noise spill while maintaining a high-quality audience experience.
Adam Hill is a senior lecturer at the University of Derby where he runs the MSc Audio Engineering program. He received a Ph.D. from the University of Essex, an M.Sc. in Acoustics and Music Technology from the University of Edinburgh, and a B.S.E. in Electrical Engineering from Miami University. His research generally focuses on analysis, modeling, and wide-area spatiotemporal control of low-frequency sound reproduction and reinforcement. Adam also works seasonally as a live sound engineer for Gand Concert Sound, where he has designed and operated sound systems for over 1000 artists. Adam is co-chair of the AES Technical Committee on Acoustics and Sound Reinforcement and Head of Content for the Electro-Acoustics Group Committee of the Institute of Acoustics.
Production of immersive music documentaries from top to bottom (and why you should always use an ambisonic microphone on set)
Music concerts seem like perfect material to record for playback in a virtual reality environment. However, it is difficult to find recordings on popular distribution platforms that are satisfactory in terms of image and sound quality. The difficulties encountered in preparing and playing back such material are considerable: from the way 360 3D cameras work, through the question of how to record sound in the conditions of a live concert, to the lack of good players that support the appropriate image and sound resolution. I developed answers to these questions during the production of two immersive music documentaries, “Echo Serca” and “Pasja VR” – from planning to recording, through editing, to the creation of a dedicated player and projection booth.
Creator of sound and composer of music for computer games, theater performances, film and multimedia. Author of spatial sound installations. Musician, multi-instrumentalist, music producer and DJ. Research and teaching assistant at the Department of Sound Engineering at The Fryderyk Chopin University of Music. Creator and curator of permanent VR projection exposition – UMFC VR. PhD candidate in the subject of sound design in 3D space.
Patricio Ballesteros Ledesma
This is my acousmatic piece, “With closed eyes it looks better”.
Artist, audiovisual producer and journalist from Buenos Aires, Argentina, working for the past 30 years. He studied social communication, took courses in film and video, and has made more than 150 video-art and experimental shorts, handling digital editing and soundtracks; he also takes photographs and composes music. His works have been selected and screened at cinemas, auditoriums, galleries, museums, cultural centres and festivals in Argentina, Australia, Brazil, Bulgaria, Czech Republic, Dominican Republic, Finland, France, Greece, Ireland, Israel, Italy, Japan, Mexico, The Netherlands, Portugal, Russia, Saudi Arabia, Slovakia, Spain, the UK and the USA.
Chasing Space: Some Spatial Approaches to Binaural Creativity
A personal account of over twenty years of binaural recording and environmental performance including concepts of live spatial choreography and the practicalities of live headphone binaural concerts. This will include the use of simple in-ear microphone techniques for live head binaural recording and will consider personal and environmental factors that affect the perceptual surround soundfield. Examples of a range of approaches and subjects for live head binaural recording, including headphone concerts with a live band, will be discussed together with some extracts available over headphones during the presentation.
Dallas worked as a mastering engineer from 1998 to 2016, forming dallas MASTERS as a sole trader in 2005. He retired in January 2016 and continues his interests in the audio and visual arts.
When the Boys Come Home
Mark Randell, Daithi McMahon, Michael Brown and Phil Baggaley
A WW2 radio drama. Managing a collaborative, multi-disciplinary audio production in the form of a radio play involving script-writing, sound design, acting and music, and its eventual realisation within a surround sound environment. We will offer a presentation discussing the development of, and collaboration between, students from different academic and creative backgrounds. We will also present key scenes in the auditorium for illustration and then invite all attendees to experience the full production, which lasts around 15 minutes, in the upstairs ambisonic space.
The project is a collaboration between lecturers Daithi McMahon and Michael Brown, conceived and facilitated by audio media technical instructor Mark Randell, involving students from:
BA Media Production, MA Writing for Performance, MA Music Production, BA Contemporary Theatre & Performance and BA Music
Pitch Black: A ‘AAA’ Audio Game
Pitch Black is an in-development audio game based on binaural audio and ambisonics. It features unique and innovative spatial audio mechanics not seen before in the gaming industry. In this talk, lead developer Harry Cooper will discuss the technological challenges associated with developing a game of this nature. Using Unity3D and Resonance Audio, the talk will explain how the game world is created and demonstrate the audio-based gameplay mechanics that the player uses to navigate. The ecological approach to sound design within the game will also be discussed, as will the use of spatial music to set the emotional tone without damaging the directionality of the audio and confusing the player. Finally, there will be a brief overview of the game's implications for inclusivity in the industry, and the conceptual developments that led to the idea.
Harry Cooper is a current MA student at the University of Derby with a keen interest in ambisonics and spatial audio. His background is a musical one, having produced and played on multiple albums for his record label Purple Jam and performed around the UK in various projects. He also has a BSc in Chemistry from Nottingham Trent University. Now co-owner of Purple Jam Ltd along with fellow MA students Jordan Barry and Connor Harrison, Harry is the director, lead Unity developer and sound designer at the company.
Understanding 3D game audio runtime systems
Simon N Goodwin
Within the time available, Simon N Goodwin will explain the inner workings of game audio runtime systems, identifying how they’re built and managed from psychoacoustic and signal-processing principles. He will ground his explanations with practical advice and anecdotes, with special attention to the efficient and timely communication and perception of 3D space, distance and reverberation. Topics explored will include the pragmatic and consistent modelling of distance attenuation and non-point sources, multiple in-game listeners and surround sound mixing for split-screen views, differences between studio DSP effects and those appropriate for interactive use, conventional and practical speaker layouts, Ambisonics, head-related transfer functions and cross-talk cancellation as used in advanced VR, console and mobile games and amusement arcades. He will answer questions at the end, which is likely to be even more interesting.
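One of the topics above, pragmatic modelling of distance attenuation, can be sketched in a few lines. The function below implements the standard inverse-distance-clamped gain model common to game audio runtimes (the shape used by OpenAL's default distance model, for example); the parameter names are illustrative and not taken from the talk.

```python
import math

def distance_gain(distance, ref_dist=1.0, max_dist=100.0, rolloff=1.0):
    """Inverse-distance attenuation with clamping, as widely used in
    game audio runtimes.

    Gain is 1.0 inside ref_dist, then falls off as roughly 1/d scaled
    by 'rolloff'; beyond max_dist the gain is frozen so distant
    sources hold a consistent floor rather than vanishing abruptly.
    """
    d = min(max(distance, ref_dist), max_dist)
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))

# With rolloff=1, doubling the distance roughly halves the gain (-6 dB):
g1 = distance_gain(2.0)              # 0.5
g2 = distance_gain(4.0)              # 0.25
db_drop = 20 * math.log10(g2 / g1)   # about -6 dB per doubling
```

Real engines layer further pragmatics on top of this curve, such as non-point (volumetric) sources and per-listener mixes for split-screen views, which the talk covers.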
Simon N Goodwin has been making games and audio tech for almost 40 years, with British firms like Codemasters, Attention to Detail and Silicon Studio Ltd, and with multinationals like Amiga, Atari, Creative Labs, DTS, Electronic Arts, Intel, Motorola and Sega. He has also worked extensively in radio, TV and mass-market technical journalism. Along the way, Simon has pioneered the use of seemingly dead-end 1970s British tech like Ambisonics in virtual reality and best-selling games, helped get six games to number 1 in the full-price charts at home and abroad as a Principal Programmer, Audio Engineer and Experienced Sound Designer, picked up five patents and a couple of BAFTAs, and written a book, Beep to Boom, for AES and Focal Press.
Invited Talk: Ten Billion
The United Nations recently predicted that the world population will exceed ten billion by the year 2050. If this prediction is correct, a staggering population boom will occur over the coming thirty years, at a rate never previously witnessed by mankind. This piece responds to that mind-boggling number, ten billion, by giving it sonic form: each tiny microsound, or grain, represents one person and, taken as a whole, the grains number ten billion. The piece was composed for 10 loudspeakers at the studios of Bowling Green State University, USA. I am extremely grateful to Joe Klingler for funding the Klingler ElectroAcoustic Residency (KEAR), and to staff and students at Bowling Green for accommodating my visit. Special thanks go to Dr. Elainie Lillios, who worked extremely hard to make the residency happen and was exceptionally helpful and kind throughout.
Adam Stanovic’s compositions have won prizes, residencies and mentions around the world, including: IMEB (France); Metamorphoses (Belgium); Destellos (Argentina); Contemporanea (Italy); SYNC (Russia); Musica Viva (Portugal); Musica Nova (Czech Republic); KEAR (USA). Further to this, Adam has worked in studios at the IMEB (France); Musiques et Recherches (Belgium); VICC (Sweden); EMS (Sweden); LCM (UK); CMMAS (Mexico); Holst House (UK), Bowling Green
(USA), Mise En Place (USA), and he is currently scheduled to compose at University of Sydney (Australia), and ArteFacto Sonoro (Ecuador) in 2018. Adam’s music has been performed in over 400 festivals and concerts around the world, including many of the most significant contemporary music events. Adam regularly speaks about electronic music, and has written numerous journal articles and book chapters; these consider compositional methods, analytical approaches to electronic music, the nature of performance interpretation and authenticity, the nature of digitised music, and various
philosophical issues that electronic music seems to produce. Adam is currently Senior Lecturer at The University of Sheffield, UK, where he directs the MA Composition and MA Sonic Arts. He supervises a range of PhD projects relating to electronic music.