MetaSounds: The Next Generation Sound Sources

This document provides an overview of MetaSounds in Unreal Engine 5.


Introducing MetaSounds

Unreal Engine 5 introduces MetaSounds, a high-performance audio system that provides audio designers with complete control over a Digital Signal Processing (DSP) graph for the generation of sound sources.

MetaSounds offer user customization, third-party extensibility, graph re-use, and a powerful tool for in-editor sound design.

A Fully Procedural Audio Engine

Unlike Sound Cues, MetaSounds are fundamentally a Digital Signal Processing (DSP) rendering graph. They provide audio designers with the ability to construct powerful procedural audio systems that offer sample-accurate timing and control at the audio-buffer level.

Using MetaSounds, audio designers can generate audio synthetically at runtime and freely mix and match procedurally generated sound with other audio sources.

MetaSounds are also designed to easily integrate with game data and player interactions to create immersive experiences triggered by gameplay events.

More Control for Audio Designers

Each MetaSound is its own audio rendering engine. MetaSounds render in parallel with one another, and each can use an independent rendering format (for example, sample rate, buffer size, and channel count).

MetaSounds are created in a new MetaSound Editor where audio designers with no programming experience can create procedural sounds using a node-based interface. The editor enables the live previewing of all audio input parameters, and contains several ready-made nodes that provide detailed control options for the entire audio rendering pipeline.

The MetaSound editor features a real-time meter on the output, in-graph widgets to control and visualize parameters (knobs and sliders), and buttons to interact with events in real time.

Sample-Accurate Control of Audio

MetaSounds offer sample-accurate control of audio sources. Sample accuracy means that event timing is precise to a single sample of audio. In other words, at a sample rate of 48,000 samples per second, a sample-accurate event has a timing resolution of 1/48,000 seconds, or approximately 0.02 milliseconds.

MetaSounds support sample-accurate control in several ways. MetaSound triggers execute sample-accurate events within the graph. Triggers can originate from gameplay events, MetaSound nodes, or the graph itself.

The MetaSound Wave Player node features sample-accurate concatenation. This means that when the current sound wave finishes playing, the next queued sound wave begins seamlessly, without audible hitches or interruptions.

Many MetaSound node parameters can be modulated by audio buffers, allowing for "audio-rate" parametric modulation. This enables powerful synthesis and sound design techniques.

Improved Workflow

MetaSound graphs can be re-used and referenced through presets. A preset references an existing MetaSound graph and lets you override that graph's inputs.

This can significantly boost productivity: instead of managing hundreds of potentially similar graphs to account for different variations, you maintain a single base graph, and each preset contains only its unique differences from that base.

Another benefit of using presets is the ability to update the base graph and have the changes automatically propagate to any preset that references that graph. This can also result in significant productivity gains throughout the development process.

In addition to presets, MetaSounds also feature graph composition, which is the ability to use MetaSounds directly inside other MetaSound graphs. Custom MetaSound nodes can also be created and used inside any other MetaSound graph. These custom nodes can define their inputs and outputs, and provide tooltip and versioning information.

This way, you can build up a library of common functionality nodes and re-use them across multiple graphs. Similar to presets, custom MetaSound nodes will automatically propagate their changes across any MetaSound that references them, further increasing productivity in a complex and changing project.

Increased Performance

MetaSounds are rendered asynchronously to the main Audio Mixer using the same architecture and tasks used for asynchronously decoding sound sources.

Each MetaSound DSP graph is automatically converted to an optimized, static, non-virtual C++ object, with data passed between nodes by reference rather than copied. This avoids common disadvantages of this type of system, such as an interpreted bytecode runtime, expensive virtual function call overhead, and data copying.

Growing Node Library

MetaSounds ship with a growing library of MetaSound nodes, which provide a wide variety of powerful options for procedural sound design and music.

The library features a rich Wave Player node, which supports seeking, loop points, sample-accurate concatenation, pitch-scale modulation, and reading from cue-points in an audio file.

Other nodes include a diverse library of Trigger utilities, DSP math operations, DSP filters, dynamics processing, spatialization, real-time synthesis generators, and so on.

Portable and Extensible

MetaSounds can be extended by third-party plugins using the MetaSounds C++ Node API.

Creating a new node for the MetaSound Editor involves creating a C++ class where the programmer defines the node's inputs and outputs, and an execution callback. This class also contains the actual audio rendering code and logic. A new node can be written in a single '.cpp' file in only a few hundred lines of code.

Rich Gameplay Interactions

MetaSounds accept custom user input parameters, similar to the input parameters used in Materials and Niagara VFX systems. These parameters can be connected to gameplay systems through the Audio Component parameter interface, either directly in Blueprint, or in gameplay C++ code.

MetaSounds have also been integrated with the Parameter Modulation Plugin, an optional plugin that provides a system for writing into a modulation bus asset from any modulation source (including from Blueprint). MetaSounds can read from these parameter buses, which means any modulation source can be used to modulate a MetaSound.