Generative Audio and Real-Time Soundtrack Synthesis in Gaming Environments: An exploration of how dynamically rendered soundtracks can introduce new artistic sound design opportunities and enhance the immersion of interactive audio spaces.

Audio technology is an important yet often-overlooked front in interactive media, having remained relatively stagnant compared with the groundbreaking advances made in fields such as visual fidelity and virtual reality. This paper explores the use of generative audio within a gaming environment, examining how dynamically rendered audio can reshape the creative pipeline, offer greater flexibility to audio designers, and improve the overall immersion of games and interactive media. A prototype generative audio engine is presented that allows musical parameters such as tempo and pitch to be changed at runtime. Additionally, bidirectional linking between gameplay and music is explored: player inputs influence the soundtrack, and the soundtrack in turn can trigger or quantize player inputs. The final result, while limited in scope, demonstrates the potential of partially generative soundtracks to provide greater variety and freedom for audio engineers.

Cameron Bossalini
William Raffe
Jaime A. Garcia
Presented At: 
OzCHI '20: 32nd Australian Conference on Human-Computer Interaction
Conference Proceedings
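As an illustration of the bidirectional linking described in the abstract, the following is a minimal sketch of quantizing a player's input timestamp to the nearest beat of a soundtrack whose tempo can change at runtime. This is not taken from the paper's engine; the function name and parameters are hypothetical, and tempo is assumed to be expressed in beats per minute.

```python
def quantize_to_beat(input_time: float, bpm: float, start_time: float = 0.0) -> float:
    """Snap an input time (in seconds) to the nearest beat boundary.

    input_time: when the player's input occurred, in seconds.
    bpm: current soundtrack tempo in beats per minute (may vary at runtime).
    start_time: when the current tempo section began, in seconds.
    """
    beat_len = 60.0 / bpm                                # seconds per beat
    beats = round((input_time - start_time) / beat_len)  # nearest whole beat
    return start_time + beats * beat_len

# At 120 BPM (0.5 s per beat), an input at 1.26 s snaps to 1.5 s,
# so the triggered sound lands on the beat rather than between beats.
print(quantize_to_beat(1.26, 120.0))
```

In practice an engine would apply this snapping when scheduling the audio event for an input, so player-triggered sounds always land on the musical grid even as the generative tempo parameter changes.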