Procedural Synthesis in Video Games

In the previous blog post, the advantages and limitations of procedural audio and procedural sound design were discussed. The conclusion reached was that although procedural audio systems have a clear use within future video games, the technologies have not yet been developed to a point at which the majority of a game's media can be procedurally generated without limitations.

Fournel (2010) broke the primary reasons for using procedural systems down into the following:

  • When memory constraints or other technology limitations apply
  • When there is too much content to create
  • When we need variations of the same asset
  • When the asset changes depending on the game context
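The third point, needing variations of the same asset, is where even a simple procedural approach pays off immediately. As a minimal sketch (my own illustration, not code from any of the games discussed below), the snippet synthesises a short percussive "impact" sample whose pitch and decay are randomised on every call, so repeated game events such as footsteps or collisions never sound identical:

```python
import math
import random

SAMPLE_RATE = 44100

def impact_variation(base_freq=180.0, rng=random):
    """Synthesise one variation of a percussive impact sound.

    Each call randomises pitch and decay slightly, so repeated
    game events (footsteps, collisions) never sound identical.
    """
    freq = base_freq * rng.uniform(0.9, 1.1)   # +/- 10% pitch variation
    decay = rng.uniform(0.05, 0.15)            # decay time in seconds
    n_samples = int(SAMPLE_RATE * decay * 4)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        env = math.exp(-t / decay)             # exponential amplitude envelope
        samples.append(env * math.sin(2 * math.pi * freq * t))
    return samples

hit_a = impact_variation()  # two triggers of the same "asset"...
hit_b = impact_variation()  # ...yield two different waveforms
```

Each returned list holds floats in the range -1 to 1, ready for conversion to PCM; a pre-rendered approach would instead need a bank of recorded takes to achieve the same variety.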

The first three of these points mostly stem from necessity: the procedural system is built to support the gameplay and graphical elements of the game, or to work within hardware limitations. The final point, however, poses a more creative use for procedural audio, as it promotes the idea that the game itself should be a reactive, if not unique, experience for each player.

The 2014 release Fract OSC (Phosfiend Systems) is a musically driven, minimalist exploration game with a procedural synthesis engine at the heart of the system. The player is guided to rebuild the world around them through musically reactive puzzles. As the player progresses through the numerous puzzles, the game engine responds by altering the parameters of a real-time synthesiser hosted within it.

The synthesis engine within Fract OSC is developed in the visual programming language Pure Data. To access the vast audio-processing functionality of Pure Data, the developers embedded it into Unity using the 'libpd' library. This afforded the developers the possibility of integrating the audio processing capabilities of Pure Data and having them respond to information sent from the game engine, and vice versa (Flanagan, 2013).
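The kind of round trip Fract OSC relies on, game state in, synthesis parameters out, can be illustrated with a toy parameter smoother. When the game logic sends a new target value (say, a filter cutoff tied to puzzle progress), the audio side typically glides towards it over many samples rather than jumping, to avoid audible "zipper" artefacts. The class below is a hypothetical sketch of that pattern, not Phosfiend's actual code:

```python
class SmoothedParam:
    """One-pole smoother for a synthesis parameter driven by game events.

    The game thread calls set_target(); the audio callback calls
    next_value() once per sample and receives a glide toward the target.
    """
    def __init__(self, initial, coeff=0.001):
        self.value = initial
        self.target = initial
        self.coeff = coeff        # fraction of the remaining gap closed per sample

    def set_target(self, target):
        self.target = target      # called from game logic, e.g. on puzzle progress

    def next_value(self):
        # move a fixed fraction of the remaining distance each sample
        self.value += (self.target - self.value) * self.coeff
        return self.value

cutoff = SmoothedParam(200.0)
cutoff.set_target(2000.0)         # e.g. the player activated a puzzle element
ramp = [cutoff.next_value() for _ in range(5000)]
```

After 5,000 samples (roughly 0.1 s at 44.1 kHz) the cutoff has glided most of the way from 200 Hz towards 2,000 Hz, which is exactly the kind of continuous, context-driven change that pre-rendered audio cannot provide.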

Although Fract OSC received overall positive reviews, and rightly so on its technology alone, the game serves as an example of the limitations of procedural technologies in video games, as the complex procedural system created for Fract OSC is the game itself. The low-poly, Tron-esque environment the developers created is simply an interface for the system, much like the keyboard of a piano or the patch cables and potentiometers of a modular synthesiser.

Developing a procedural synthesis system as intricate as Fract OSC's requires not only more time to sculpt sounds than traditional pre-rendered audio, but also extensive research into communication between the procedural sound engine and the game engine, owing to the lack of mainstream adoption of procedural technologies in their current form. This is an issue discussed by Nair (2014):

“The hard truth is that while the idea (procedural audio) is great in theory, no one knows what they’re doing in practice. The field is lacking in design principles, tools, and technical performance. New tools must be written to specifically address the needs of creating interactive audio. To be clear, it isn’t just about putting synthesizers in video games. It’s a shift in thinking about audio and audio production from linear to non-linear.”

Although there are data-exchange protocols in place for communication between game engines and audio programming languages, they are often community-supported and unintuitive, making them difficult for beginner sound designers to take advantage of. However, one recurring theme of the more popular audio programming languages used for procedural synthesis, such as Pure Data and Max (Cycling '74), is that they are all visual programming languages.
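One such protocol is Open Sound Control (OSC), which Fract OSC's own name nods to. On the wire an OSC message is simply an address string, a type-tag string, and big-endian arguments, with each string null-terminated and padded to a four-byte boundary. A minimal encoder for a single-float message, illustrative rather than a complete implementation of the specification, looks like this:

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate, then pad to a four-byte boundary, per the OSC spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: address, type tags, big-endian argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")                # type-tag string: one float argument
            + struct.pack(">f", value))     # 32-bit big-endian float

packet = osc_float_message("/synth/cutoff", 440.0)
```

A packet like this would typically be sent over UDP (for example with `socket.sendto`) to the port the audio engine listens on; the address pattern `/synth/cutoff` here is a made-up example, not one from any of the games discussed.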

A similar approach to procedurally generated sound effects was taken in Grand Theft Auto V (Rockstar, 2013), where the audio team developed their own visual audio programming language for use with their proprietary engine. As demonstrated by Alastair MacGregor (2014), lead audio programmer at Rockstar North, the team used their audio synthesis toolkit to create unique sounds for repetitive events without pre-rendered audio.

Much like Fract OSC, the procedural audio system in Grand Theft Auto V was created to respond to game context, but it also fulfils the three prior points stated by Fournel. The use of the audio synthesis toolkit allowed a wide array of content to be produced with minimal impact on the game engine, owing to its compatibility with the engine.

Referring back to Nair's earlier statement, "New tools must be written to specifically address the needs of creating interactive audio", it would seem that both professional and beginner sound designers have settled on modular audio programming environments to produce procedural audio, as they allow for intuitive development and (relatively) simple integration with popular game engines. With the development of SuperCollider for Unity and Unreal Engine's built-in synthesisers and DSP tools (Unreal Engine, 2017), it would seem clear that the use of procedural synthesis in video games, whether as a focal point of the gameplay or as ambience in open worlds, is quickly finding its feet.



Flanagan, R. (2013). FRACT Audio Tech: Wrapping Pure Data. Available:

Fournel, N. (2010). Procedural Audio for Video Games: Are We There Yet?. Available: (Slide 10)

Jørgensen, J. (2014). Interactive Procedural Audio – Conceptual Demonstration. Available:

MacGregor, A. (2014). The Sound of Grand Theft Auto V. Available: (30:53)

Nair, V. (2014). What's The Deal With Procedural Game Audio?. Available:

Unreal Engine. (2017). The Future of Audio in Unreal Engine | GDC 2017. Available:

