GENERATIVE AUDIO SOUNDTRACKS IN GAMING ENVIRONMENTS
This project aims to explore the feasibility and performance of a generative audio engine within a traditional gaming environment. Although graphics technologies have progressed rapidly alongside hardware advances, most audio engines have remained fairly stagnant. With generative audio, dynamic soundscapes can be linked bidirectionally to player actions: the player's movement shapes the sound, and the sound in turn guides the player's movement. Additionally, this generative audio system allows parameters that are typically immutable, such as key transposition, instrument distortion, and tempo, to be changed at runtime with little audible artifacting.
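The abstract does not describe the implementation, but the reason runtime parameter changes produce little artifacting in a generative system can be sketched simply: because every note is synthesized on demand rather than played from a pre-rendered recording, parameters like transposition and tempo are read fresh at note-generation time and only affect future notes. The class and field names below are hypothetical, not taken from the project:

```python
from dataclasses import dataclass

@dataclass
class GenerativeTrack:
    """Hypothetical sketch: mutable parameters are consulted each time
    a note is generated, so runtime changes apply cleanly to the next
    note instead of distorting audio already playing."""
    base_notes: list          # looping phrase as MIDI pitches
    key_offset: int = 0       # semitone transposition, mutable at runtime
    bpm: float = 120.0        # tempo, mutable at runtime

    def next_note(self, step: int) -> tuple:
        # Parameters are read here, at generation time, not at load time.
        pitch = self.base_notes[step % len(self.base_notes)] + self.key_offset
        duration = 60.0 / self.bpm / 4  # sixteenth-note length in seconds
        return pitch, duration

track = GenerativeTrack(base_notes=[60, 62, 64, 67])
print(track.next_note(0))   # (60, 0.125)
track.key_offset = 3        # transpose up a minor third mid-playback
track.bpm = 150.0           # speed up
print(track.next_note(1))   # (65, 0.1)
```

A sample-based engine would need to pitch-shift or time-stretch existing audio to achieve the same change, which is where audible artifacts typically arise.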
This particular project showcases these principles in a top-down shooter environment. Each instrument of the backing track is rendered at runtime, and its volume is governed by a set of virtual audio nodes, with levels determined by the player's proximity to each node. Additionally, animations and the firing of the player's weapon are synchronized to individual music channels, with each action triggered by the same call that generates the audio note. Finally, predefined parameter control regions allow key, phrase, and tempo to shift smoothly throughout the game.
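The abstract does not give the mixing math, but proximity-driven node levels can be sketched with a simple distance falloff: each instrument stem is attached to a virtual node in the world, and its gain rises as the player approaches that node. The falloff curve, node layout, and function names here are assumptions for illustration:

```python
import math

def node_gain(player_pos, node_pos, radius):
    """Linear falloff (assumed curve): full volume at the node,
    silent at or beyond `radius` world units."""
    dist = math.hypot(player_pos[0] - node_pos[0],
                      player_pos[1] - node_pos[1])
    return max(0.0, 1.0 - dist / radius)

# Hypothetical layout: one virtual node per instrument stem.
nodes = {"drums": (0.0, 0.0), "bass": (10.0, 0.0), "lead": (0.0, 10.0)}

def mix_levels(player_pos, radius=12.0):
    """Per-stem gains for the current player position."""
    return {name: node_gain(player_pos, pos, radius)
            for name, pos in nodes.items()}

# Standing near the drums node: drums loud, bass quieter, lead quietest.
print(mix_levels((2.0, 0.0)))
```

In practice an engine might use a smoother curve (e.g. inverse-square or an eased ramp) to avoid audible gain steps as the player moves, but the proximity-to-level mapping is the core idea.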