MetaSounds: The Next Generation Sound Sources

This document provides an overview of MetaSounds in Unreal Engine 5.


Sound Cues: Unreal Engine's Legacy Audio Object

Sound Cues are Unreal Engine's legacy audio object: audio parameter graphs from which audio designers can play sound sources and manipulate parameters such as volume and pitch.

Sound Cues are not capable of sample-accurate control and do not support the creation of arbitrary digital signal processing algorithms. The workflow is mostly manual, forcing audio designers to create separate Sound Cues for every sound.

Lastly, the lack of sample-accurate timing and scheduling of audio makes creating procedural sound more difficult.

Introducing MetaSounds

Unreal Engine 5 introduces MetaSounds, a new high-performance audio system that provides audio designers with complete control over Digital Signal Processing (DSP) graph generation for sound sources.

MetaSounds address the issues present in Sound Cues and serve as a replacement for Unreal Engine's default audio object.

A Fully Procedural Audio Engine

Unlike Sound Cues, MetaSounds are fundamentally DSP rendering graphs. They give audio designers the ability to construct arbitrarily complex procedural audio systems with sample-accurate timing and control at the audio-buffer level.
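To make buffer-level, sample-accurate control concrete, here is a minimal sketch in plain C++ (not the MetaSound API) of a voice that renders audio in blocks and starts a note at an exact sample offset inside a block, rather than at the next frame boundary:

```cpp
// Conceptual sketch only: block-based rendering with a sample-accurate trigger.
#include <cmath>
#include <cstdint>
#include <vector>

constexpr double TwoPi = 6.283185307179586;

struct FSineVoice
{
    double Phase = 0.0;       // Current oscillator phase in radians.
    double Frequency = 440.0; // Hz
    double Gain = 0.0;        // Silent until triggered.
};

// Render one audio block. TriggerSample is the exact sample offset within this
// block at which the voice becomes audible -- sample-accurate scheduling.
void RenderBlock(FSineVoice& Voice, std::vector<float>& Block,
                 double SampleRate, std::int64_t TriggerSample)
{
    const std::int64_t NumSamples = static_cast<std::int64_t>(Block.size());
    for (std::int64_t i = 0; i < NumSamples; ++i)
    {
        if (i == TriggerSample)
        {
            Voice.Gain = 0.5; // The note starts exactly here, not on the next game tick.
        }
        Block[i] = static_cast<float>(Voice.Gain * std::sin(Voice.Phase));
        Voice.Phase += TwoPi * Voice.Frequency / SampleRate;
    }
}
```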

Using MetaSounds, audio designers can generate audio synthetically at runtime and freely mix and match procedurally generated sound with other audio sources.

MetaSounds are also designed to easily integrate with game data and player interactions to create immersive experiences triggered by gameplay events.

More Control for Audio Designers

Each MetaSound is its own audio rendering engine, so parameters such as sample rate and buffer size can be set per MetaSound. This lets audio designers fine-tune each MetaSound for the desired balance of performance and latency.
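As a rough illustration of that trade-off, the standalone C++ snippet below computes how much latency one buffer represents at a few example buffer sizes (the values are illustrative, not engine defaults):

```cpp
// Illustrative arithmetic: the trade-off between buffer size and latency.
#include <cstdio>

int main()
{
    const double SampleRate = 48000.0;               // samples per second
    const int BufferSizes[] = { 128, 256, 512, 1024 }; // example block sizes

    for (int BufferSize : BufferSizes)
    {
        // One buffer's worth of audio, expressed in milliseconds.
        const double LatencyMs = 1000.0 * BufferSize / SampleRate;
        std::printf("%5d samples @ %.0f Hz -> %.2f ms per buffer\n",
                    BufferSize, SampleRate, LatencyMs);
    }
    return 0;
}
```

Smaller buffers lower latency but mean the renderer is invoked more often, which is the performance side of the trade-off.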

MetaSounds are created in the new MetaSound Editor, a node-based interface that lets audio designers with no programming experience build their own audio rendering engine. The editor supports previewing audio input parameters, displays performance metrics, and includes many ready-made nodes that provide detailed control over the entire audio rendering pipeline.

The audio team will also continue to improve the MetaSound Editor after the Unreal Engine 5 Early Access period, by adding visualizations for audio data, such as meters and analyzers, and UMG support for simplified interface creation.

Sample-Accurate Control of Audio

MetaSounds offer sample-accurate control of audio sources. Audio designers can concatenate several audio sources with sample accuracy to create procedural music with endless variation. MetaSounds can also modulate parameters at audible rates, which makes them ideal for use cases involving Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis.
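For reference, the standard AM and FM formulas look like this as per-sample computations. The plain C++ below is only a sketch of the math; inside a MetaSound this would be built from oscillator nodes in the graph:

```cpp
// Standard per-sample AM and FM synthesis formulas (plain C++ reference sketch).
#include <cmath>

constexpr double TwoPi = 6.283185307179586;

// Amplitude Modulation: the modulator scales the carrier's amplitude.
double AmSample(double t, double CarrierHz, double ModHz, double ModDepth)
{
    const double Modulator = 1.0 + ModDepth * std::sin(TwoPi * ModHz * t);
    return Modulator * std::sin(TwoPi * CarrierHz * t);
}

// Frequency Modulation: the modulator offsets the carrier's phase/frequency.
double FmSample(double t, double CarrierHz, double ModHz, double ModIndex)
{
    return std::sin(TwoPi * CarrierHz * t +
                    ModIndex * std::sin(TwoPi * ModHz * t));
}
```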

These features make it possible to create procedural sound systems that take advantage of the ever-changing nature of your game.

Improved Workflow

One of the main workflow improvements MetaSounds bring is reducing the complexity of managing thousands of sound assets in a single project. To that end, MetaSounds will support asset instancing in the near future.

The new workflow will be analogous to the fully programmable Material and rendering pipeline found in the Material Editor. Audio designers will be able to create MetaSounds that contain other MetaSounds, and create MetaSound instances to handle parameter customization, much like Materials and Material Instances in Unreal Engine.

Increased Performance

MetaSounds are rendered asynchronously to the main Audio Mixer, providing better performance than Sound Cues. Each MetaSound DSP graph is nativized to an optimized C++ object, which avoids expensive virtual function overhead and unnecessary buffer copies.
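The point about virtual function overhead can be illustrated with a small, purely conceptual C++ contrast (this is not Unreal's actual implementation): an interpreter-style graph dispatches every node through a virtual interface, while a nativized graph calls concrete node types directly, so the compiler can inline the calls:

```cpp
// Conceptual illustration only: virtual per-node dispatch vs. a nativized graph.
#include <vector>

struct INode
{
    virtual ~INode() = default;
    virtual void Process(float* Buffer, int NumSamples) = 0;
};

struct FGainNode final : INode
{
    float Gain = 0.5f;
    void Process(float* Buffer, int NumSamples) override
    {
        for (int i = 0; i < NumSamples; ++i) { Buffer[i] *= Gain; }
    }
};

// Interpreter-style rendering: one virtual dispatch per node, every block.
void RenderGeneric(std::vector<INode*>& Nodes, float* Buffer, int NumSamples)
{
    for (INode* Node : Nodes) { Node->Process(Buffer, NumSamples); }
}

// Nativized rendering: node types are fixed, so calls are direct and inlinable.
struct FNativizedGraph
{
    FGainNode Gain;
    void Render(float* Buffer, int NumSamples)
    {
        Gain.Process(Buffer, NumSamples); // no virtual table lookup
    }
};
```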

Portable and Extensible

MetaSounds are serialized into an intermediate JSON format for improved readability and portability. This means they can more easily be exported and shared, or sent over a network connection.

MetaSounds can also be extended by third-party plugins through the MetaSounds API. Creating a new node for the MetaSound Editor is analogous to writing a function in C++: the programmer defines a set of inputs and outputs, along with the execution code. The result is represented as a node inside the editor without the need to write Slate UI code directly.
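To illustrate the inputs/outputs/execution shape, here is a deliberately simplified, hypothetical node in plain C++. The actual MetaSounds plugin API defines its own operator, vertex-interface, and registration types, which are not shown here:

```cpp
// Hypothetical, simplified node shape: inputs, outputs, and execution code.
struct FToyGainNode
{
    // Inputs (would appear as input pins on the node in the editor).
    const float* InAudio = nullptr;
    float Gain = 1.0f;

    // Output (would appear as an output pin on the node).
    float* OutAudio = nullptr;

    // Execution code, called once per audio block by the graph.
    void Execute(int NumSamples)
    {
        for (int i = 0; i < NumSamples; ++i)
        {
            OutAudio[i] = InAudio[i] * Gain;
        }
    }
};
```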

Rich Gameplay Interactions

MetaSounds accept custom user input parameters, similar to the input parameters used in Materials. These parameters can be updated during gameplay and passed to the MetaSound to create a more interactive experience.
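As a minimal sketch of this, assuming UE5's UAudioComponent parameter setters (SetFloatParameter and SetTriggerParameter) and a MetaSound Source with inputs named "Speed" and "OnImpact" (hypothetical names that must match the inputs defined on the graph):

```cpp
// Sketch: driving MetaSound inputs from gameplay code. Class, component, and
// parameter names are hypothetical examples.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "VehicleAudioActor.generated.h"

UCLASS()
class AVehicleAudioActor : public AActor
{
    GENERATED_BODY()

public:
    // Audio component playing a MetaSound Source asset.
    UPROPERTY(VisibleAnywhere, Category = "Audio")
    UAudioComponent* EngineAudio = nullptr;

    // Called from gameplay as the vehicle's speed changes.
    void UpdateEngineAudio(float CurrentSpeed)
    {
        if (EngineAudio && EngineAudio->IsPlaying())
        {
            // Routes the value to a float input named "Speed" on the graph.
            EngineAudio->SetFloatParameter(TEXT("Speed"), CurrentSpeed);
        }
    }

    // Called from gameplay when the vehicle hits something.
    void NotifyImpact()
    {
        if (EngineAudio)
        {
            // Fires a trigger input named "OnImpact" inside the graph.
            EngineAudio->SetTriggerParameter(TEXT("OnImpact"));
        }
    }
};
```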

MetaSounds also integrate with the Parameter Modulation Plugin, which allows them to be controlled through parameter buses and provides dynamic mixing capabilities.