Audio System Overview

An overview of the audio system used for playing in-game sounds, including node-based audio Assets in the Sound Cue Editor


Unreal Engine 4 (UE4) supports importing 16-bit and 24-bit PCM-formatted .wav files at any sample rate with up to 8 channels. For convenience, it also supports importing Ogg Vorbis, AIFF, and FLAC files, though these are converted internally to 16-bit PCM.

With UE4, you can also build composite sounds as Sound Cues using the Sound Cue Editor, where you combine sounds and apply modifiers called Sound Nodes to alter the final output.

Additional elements that are used to define how a sound is heard or played are covered in this overview, and links to more documentation can be found in their respective sections.

Importing Sound Files

UE4 currently supports importing 16-bit and 24-bit PCM-formatted .wav files at any sample rate, with up to 8 channels.

The audio files that are imported are automatically encoded to compression formats based on the platform and features used by the sound.

Importing a sound file into the editor generates a Sound Wave Asset that can be dropped directly into a Level, or used to create a Sound Cue that can then be edited in the Sound Cue Editor.
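
If you prefer to work in C++, an imported Sound Wave can also be loaded and played at runtime. The sketch below is illustrative only: the AMyActor class, function name, and asset path are placeholder assumptions, while LoadObject and UGameplayStatics::PlaySoundAtLocation are standard engine calls.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundWave.h"

// AMyActor is a placeholder Actor class; the asset path is hypothetical.
void AMyActor::PlayImportedSound()
{
    // Synchronously load the Sound Wave Asset created by the import step.
    USoundWave* Sound = LoadObject<USoundWave>(nullptr, TEXT("/Game/Audio/MyExplosion.MyExplosion"));
    if (Sound)
    {
        // Fire-and-forget playback at this Actor's location.
        UGameplayStatics::PlaySoundAtLocation(this, Sound, GetActorLocation());
    }
}
```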

You can import a sound file by using the Content Browser Import button, or by selecting a file in File Explorer (Windows) and dragging it into the Content Browser.

Sound Asset Types

To add a Sound Asset, click the Add New button in the Content Browser and select Sounds, then from the menu, select the Asset you want to add.


There are several different types of Sound Assets that can be added to your projects, described below.

Sound Cue


Sound Cues are composite sounds that allow you to modify the behavior of audio playback, combine audio effects, and apply audio modifiers with Sound Nodes to alter the final output.

For more information, refer to the Sound Cue Editor page.
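
As a quick illustration, a Sound Cue built in the Sound Cue Editor can also be played from C++. This is a minimal sketch under assumed names (AMyActor and the function are placeholders); UGameplayStatics::SpawnSoundAttached is the engine call that returns an Audio Component you can keep adjusting.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundCue.h"

// Plays a Sound Cue attached to this Actor so the sound follows it.
void AMyActor::PlayFootstepCue(USoundCue* FootstepCue)
{
    UAudioComponent* AudioComp = UGameplayStatics::SpawnSoundAttached(FootstepCue, GetRootComponent());
    if (AudioComp)
    {
        // Per-instance tweak on top of whatever the Cue's nodes produce.
        AudioComp->SetVolumeMultiplier(0.8f);
    }
}
```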

Sound Attenuation

Sound Attenuation Assets let you define attenuation properties in a reusable way. Anywhere you can specify one-time-use attenuation properties, you can instead reference a Sound Attenuation Asset, which lets you adjust attenuation without revisiting every sound individually.

For more information on Attenuation, see the Sound Attenuation page.
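
The same reuse applies when triggering sounds from C++: the attenuation Asset can be passed at the call site instead of one-off settings. The following is a hedged sketch; the actor class, function name, and parameter values are assumptions, while the AttenuationSettings parameter of UGameplayStatics::PlaySoundAtLocation is part of the engine API.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundAttenuation.h"

// Every caller that passes the same USoundAttenuation asset picks up later
// tweaks to its falloff settings without touching this code.
void AMyActor::PlayWithSharedAttenuation(USoundBase* Sound, USoundAttenuation* SharedAttenuation)
{
    UGameplayStatics::PlaySoundAtLocation(
        this,                 // world context
        Sound,
        GetActorLocation(),
        1.0f,                 // volume multiplier
        1.0f,                 // pitch multiplier
        0.0f,                 // start time
        SharedAttenuation);   // reusable attenuation asset
}
```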

Reverb Effects

Reverb Effects are definable Assets with several properties that can be easily adjusted and applied to any Audio Volume placed in your Level.

With a Reverb Effect, you can adjust settings that control elements like echo density, overall reverb gain, and air absorption to help craft an overall feel.
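
Reverb Effects are typically assigned to Audio Volumes in the Level, but they can also be activated from code. The sketch below is an assumption-laden example (the actor class, tag name, and numeric values are placeholders); UGameplayStatics::ActivateReverbEffect and DeactivateReverbEffect are the engine calls it relies on.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/ReverbEffect.h"

// Activates a Reverb Effect asset globally, tagged so it can be removed by name.
void AMyActor::EnterCave(UReverbEffect* CaveReverb)
{
    UGameplayStatics::ActivateReverbEffect(this, CaveReverb, TEXT("CaveReverb"),
        /*Priority=*/1.0f, /*Volume=*/0.7f, /*FadeTime=*/2.0f);
}

void AMyActor::LeaveCave()
{
    // Remove the effect that was activated under the same tag.
    UGameplayStatics::DeactivateReverbEffect(this, TEXT("CaveReverb"));
}
```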


Sound Class

Sound Classes are a collection of properties that can be applied to a number of Sound Assets.

The properties inside a Sound Class act as multipliers on the existing values and are applied to all Sound Assets assigned to that Sound Class.

Hierarchies can be created by adding Child Classes, which allow you to pass down only specified properties from the parent class to its child classes. You can connect classes together inside the Sound Class Editor, which uses a node-based interface similar to the Sound Cue Editor.


You can also add Passive Sound Mixes (see the Sound Mix section below) to a Sound Class, which activate automatically whenever a sound in that Sound Class is played (for example, music automatically lowering whenever a dialogue Sound Class is played).

Sound Mix

Sound Mixes allow you to set the EQ Settings (Equalizer Settings) and modify Volume and Pitch properties of Sound Classes.


Multiple Sound Mixes can be active at the same time, all contributing to the overall audio effect. You can Push (Activate) or Pop (Deactivate) Sound Mixes directly inside a Blueprint with the Push Sound Mix Modifier and Pop Sound Mix Modifier nodes or activate them passively whenever a sound with a given Sound Class is playing within a specified threshold.
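
The C++ equivalents of those Blueprint nodes live on UGameplayStatics. Below is a minimal sketch, assuming a placeholder AMyActor class and a USoundMix asset you supply (for example, a combat mix that ducks music).

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundMix.h"

void AMyActor::StartCombatMusicDucking(USoundMix* CombatMix)
{
    // Push: the mix starts contributing to the overall audio output.
    UGameplayStatics::PushSoundMixModifier(this, CombatMix);
}

void AMyActor::StopCombatMusicDucking(USoundMix* CombatMix)
{
    // Pop: remove this mix's contribution again.
    UGameplayStatics::PopSoundMixModifier(this, CombatMix);
}
```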

However, the Push/Pop method can become complex quickly if you have a large number of mixes you are trying to switch between. This is where the Set Sound Mix Class Override Blueprint Node comes into play. It can set an active Sound Mix to use any Sound Class you have, and interpolate between its current Sound Class and the new Sound Class over time.


You can then set the Sound Mix back to its original settings by using the Clear Sound Mix Class Override node.
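
A hedged C++ sketch of the override workflow follows; the mix, class, and numeric values are assumptions, while SetSoundMixClassOverride and ClearSoundMixClassOverride are the corresponding UGameplayStatics calls.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundMix.h"
#include "Sound/SoundClass.h"

void AMyActor::DuckMusicForDialogue(USoundMix* ActiveMix, USoundClass* MusicClass)
{
    // Override the music class inside the already-active mix, interpolating
    // to 30% volume over half a second.
    UGameplayStatics::SetSoundMixClassOverride(this, ActiveMix, MusicClass,
        /*Volume=*/0.3f, /*Pitch=*/1.0f, /*FadeInTime=*/0.5f, /*bApplyToChildren=*/true);
}

void AMyActor::RestoreMusic(USoundMix* ActiveMix, USoundClass* MusicClass)
{
    // Return the class to the mix's original settings.
    UGameplayStatics::ClearSoundMixClassOverride(this, ActiveMix, MusicClass, /*FadeOutTime=*/0.5f);
}
```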

The Sound Mix Asset itself, which you can open by double-clicking it in the Content Browser, contains several properties.

You can specify EQ Settings for the mix to adjust the high, middle, and low frequencies and gains. Because the EQ Settings from multiple Sound Mixes cannot be combined, the EQ Priority setting controls which active mix's EQ is applied at any given time.

Inside the Sound Classes section, you set which Sound Classes are affected by the mix. For each Sound Class, you can set the Volume and Pitch adjusters, choose whether the mix settings apply to Child Classes, and modify the VoiceCenterChannelVolume.

The Sound Mix section allows you to specify how the Sound Mix properties are applied or removed. Delay indicates how long to wait before the mix properties begin being applied. Fade In Time and Fade Out Time specify how quickly to transition from no effect to the specified properties and back. Duration allows a pushed Sound Mix to pop itself automatically after the specified time; a value of -1 means the mix never pops automatically, and passively applied Sound Mixes never pop automatically.
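
These properties map onto members of the USoundMix class in C++. The sketch below configures a transient mix at runtime purely to show the property names (Sound Mixes are normally configured in the Asset editor); the actor class, function name, and ducking values are assumptions.

```cpp
#include "Sound/SoundMix.h"
#include "Sound/SoundClass.h"

USoundMix* AMyActor::MakeDuckingMix(USoundClass* MusicClass)
{
    USoundMix* Mix = NewObject<USoundMix>(this);

    // Per-class adjusters (the Sound Classes section of the Asset).
    FSoundClassAdjuster Adjuster;
    Adjuster.SoundClassObject = MusicClass;
    Adjuster.VolumeAdjuster = 0.4f;   // multiplies the class's existing volume
    Adjuster.PitchAdjuster = 1.0f;
    Adjuster.bApplyToChildren = true;
    Mix->SoundClassEffects.Add(Adjuster);

    // Application timing (the Sound Mix section of the Asset).
    Mix->InitialDelay = 0.0f;   // Delay before the mix starts applying
    Mix->FadeInTime = 0.5f;
    Mix->FadeOutTime = 0.5f;
    Mix->Duration = -1.0f;      // -1: never pop automatically

    // The result can be activated with UGameplayStatics::PushSoundMixModifier.
    return Mix;
}
```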

Dialogue Voice and Dialogue Wave

The Dialogue Voice and Dialogue Wave Assets are used for generating in-game dialogue events, crafting subtitles, and for supplementing localization efforts.

When editing a newly created Dialogue Voice Asset, you can define the Gender and Plurality of a voice actor. Although you do not specify any audio Assets inside the Dialogue Voice, the information provided here can be referenced inside a Dialogue Wave.

The Dialogue Wave provides more options and is where the connection between audio and speaker/listener(s) is made, as well as the correlation between dialogue audio and subtitle text. A Dialogue Wave represents a single line of dialogue, and the core component of its settings is the Dialogue Contexts section.


Inside the Dialogue Contexts section, you can specify the Speaker and who the dialogue is Directed At by assigning a Dialogue Voice to each. The actual audio for the line can be added as a Sound Wave by expanding the context entry and choosing the desired Asset from the drop-down menu, or by pointing to an Asset in the Content Browser.

If multiple actors say the same line of dialogue, the Add Dialogue Context option lets you create a new entry for the line with its own Speaker and Directed At settings.

Dialogue Wave Assets can also be used in a Sound Cue through the Dialogue Player node inside the Sound Cue Editor. You can also play a Dialogue Wave directly from a Blueprint using the Play Dialogue at Location and Play Dialogue Attached nodes.
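
For reference, the Play Dialogue at Location node has a C++ counterpart on UGameplayStatics. The sketch below is illustrative; the actor class, function name, and parameters are assumptions, and the FDialogueContext fields (Speaker and Targets) select which Dialogue Context entry gets used.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Sound/DialogueWave.h"
#include "Sound/DialogueVoice.h"
#include "Sound/DialogueTypes.h"

void AMyActor::SayLine(UDialogueWave* Line, UDialogueVoice* Speaker, UDialogueVoice* Listener)
{
    // The context determines which entry in the Dialogue Contexts list is played.
    FDialogueContext Context;
    Context.Speaker = Speaker;
    Context.Targets.Add(Listener);

    UGameplayStatics::PlayDialogueAtLocation(this, Line, Context,
        GetActorLocation(), GetActorRotation());
}
```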

In addition to Dialogue Contexts, you can apply a Mature filter, which flags the dialogue as containing mature/adult content. Under Script, you can enter the text that is spoken in the attached audio in the Spoken Text section, and add contextual information for translation purposes or notes to a voice actor in the Voice Actor Direction section.

For more information, see the Using Dialogue Voices and Waves example.
