
Audio System


UnrealEngine3's audio system is built around a base audio device class, UAudioDevice. The base implementation takes care of basic logic such as adding and removing UAudioComponent objects; UAudioComponent objects are the content that can be played on the UAudioDevice. UAudioComponent objects contain USoundCue objects, which reference the USoundNode objects that contain the actual audio data (e.g. a USoundNodeWave contains wave audio data).

The platform specific portion of the UnrealEngine3's audio code lives in audio device classes derived from the UAudioDevice audio base class. Currently there are three platform specific audio devices: UALAudioDevice (OpenAL PC), UXeAudioDevice (XBox360) and UPS3AudioDevice (PlayStation3).

All devices support an optional per-source low pass filter, per-bus reverb effects, a per-bus EQ filter, and multichannel (4.0, 5.1, 6.1 and 7.1) sounds (when the hardware allows it). The intention is to have all platforms sound as close to each other as possible.

The base audio class implementation can be found in Engine\Src\UnAudio.cpp and the declaration in Engine\Inc\UnAudio.h. The base audio component implementation can be found in Engine\Src\UnAudio.cpp and the declaration in Engine\Inc\UnActorComponent.h.

Common Concepts

Audio Device

As described above, this is the main interface to the audio system. It maintains lists of loaded assets, handles listeners and manages sound sources.

  • Init - Initialises the hardware, allocates channels and does any other work required to hear audio.
  • TearDown - Shuts down the hardware and frees all resources back to the system.
  • Update - Updates the listener's position, orientation and velocity.
  • Exec - Process any debug commands typed into the console.

UAudioDevice::Update is the main "Tick" function for the sound system. It iterates over all the active AudioComponents (sounds that the engine wishes to play), sorts by priority, and then updates the sound source or starts a new sound. It also handles all the housekeeping and pausing.
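The sort-by-priority step of that update can be sketched roughly as follows; ActiveSound, AssignVoices and the priority metric are illustrative stand-ins, not engine types:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch (not engine code): each tick, a fixed pool of
// hardware voices is given to the highest-priority active components.
struct ActiveSound {
    int   Id;
    float Priority;   // e.g. volume scaled by distance attenuation
};

// Returns the Ids that receive one of MaxChannels hardware voices.
std::vector<int> AssignVoices(std::vector<ActiveSound> Sounds, int MaxChannels) {
    std::sort(Sounds.begin(), Sounds.end(),
              [](const ActiveSound& A, const ActiveSound& B) {
                  return A.Priority > B.Priority;   // loudest first
              });
    std::vector<int> Voiced;
    for (size_t i = 0; i < Sounds.size() && (int)i < MaxChannels; ++i)
        Voiced.push_back(Sounds[i].Id);
    return Voiced;
}
```

Components that do not win a voice are simply not started (or are stopped) until a voice frees up.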

Sound Sources

Sound Sources (a.k.a. Voices) represent a single source of sound to be mixed into the final output. There are a limited number of these, defined by hardware limitations, or by a config file. Each source has a location, volume, pitch and velocity. The default number of MaxChannels is 32 for all platforms and is clamped to between 1 and 64 internally. Setting MaxChannels to 0 is a quick way of circumventing all audio system code.

  • Init - Finds the sound asset to play and submits to the hardware.
  • Update - Updates the Volume, Pitch, Location and Velocity from the engine to the hardware.
  • Play - Starts a sound source playing.
  • Stop - Stops a sound source playing.
  • Pause - Pauses sound playback for this source.
  • IsFinished - Handles a sound ending, sending notifications as needed, and the double buffering of queued buffers.

Sound Buffers

Sound Buffers are containers for wave data. The console command "ListSounds" displays all currently loaded waves and their formats. Memory permitting, there can be an unlimited number of these. They generally come in two basic types: resident, where the sound system contains all the data and the engine has minimal control, and queued, where the engine needs to know when the sound has looped or the sound is being decompressed in real time (e.g. Ogg Vorbis decompression on the PC).

  • Init - Locate the sound resource and load it if necessary.


Audio Components

AudioComponents are how the engine talks to the audio device. When the engine wishes to play a sound, it calls UAudioDevice::CreateComponent and sets the properties of the returned structure. This structure is then attached to an actor's list of components for later management. If a location is passed in to UAudioDevice::CreateComponent, a simple distance cull check is performed for short sounds, and the component may not be created. It is always important to check for a NULL return from UAudioDevice::CreateComponent, as this is how running with -nosound disables audio.
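The caller-side contract can be sketched as follows; the types, cull logic and the one-second threshold here are hypothetical stand-ins for what UAudioDevice::CreateComponent actually does:

```cpp
// Illustrative sketch (hypothetical types and thresholds): why callers
// must always handle a null return when requesting a component.
struct AudioComponent { /* engine-side playback state */ };

// bNoSound models running with -nosound; short, distant one-shots are culled.
// Caller owns the returned component in this sketch.
AudioComponent* CreateComponent(bool bNoSound, float DistToListener,
                                float MaxAudibleDist, float Duration) {
    if (bNoSound)
        return nullptr;                         // audio disabled entirely
    const bool bShortSound = Duration < 1.0f;   // illustrative threshold
    if (bShortSound && DistToListener > MaxAudibleDist)
        return nullptr;                         // distance cull: never audible
    return new AudioComponent();
}
```

The key point is that a null return is normal operation, not an error, so callers should skip playback silently rather than assert.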

The most important fields in UAudioComponent are:

class USoundCue* SoundCue; // Sound to play with controls
BITFIELD bUseOwnerLocation:1; // Spatialise this sound to the location of the actor this is attached to.
BITFIELD bAllowSpatialization:1; // Spatialise this sound in 3d space
FVector Location; // Location to use for spatialisation if not attached
FVector ComponentLocation; // Location to use for spatialisation if attached

Sound Classes

Each sound cue can be assigned to a sound class; sound classes are defined in the package Content\Sounds\SoundClassesAndModes.upk. They specify the properties (which populate the FSoundClassProperties class) and the hierarchy for classes of sound cues. Sound classes are hierarchical in nature, with some parent properties propagating down to all child nodes. The root sound class is called "Master" and is required.

The hierarchy can be set up, and properties changed, in the sound class editor, which works in a very similar fashion to the sound cue editor.

These classes can be used to apply ducking of volume and/or pitch. UTGame has a test map, "DM-SoundMode", to illustrate this; call SetSoundMode 'lowpass', 'bandpass', 'highpass', 'quick' or 'slow' on the command line to see how it works. One caveat: the sound class properties do not act hierarchically.
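As an illustration of parent properties propagating down to child classes, an effective value can be resolved by walking from a class up to the root "Master"; the multiplicative rule and all names here are assumptions, not engine code:

```cpp
#include <map>
#include <string>

// Illustrative sketch: resolving an effective volume multiplier by
// walking the child -> parent chain up to the root class.
struct SoundClass {
    std::string Parent;   // empty string for the root ("Master")
    float       Volume;   // this class's own multiplier
};

float EffectiveVolume(const std::map<std::string, SoundClass>& Classes,
                      std::string Name) {
    float V = 1.0f;
    while (!Name.empty()) {
        const SoundClass& C = Classes.at(Name);
        V *= C.Volume;    // parent properties propagate down
        Name = C.Parent;
    }
    return V;
}
```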

Reverb Effects

These are implemented in a platform agnostic way using parameters that the target platform can use as they are needed. The parameters are defined in code in the FAudioEffectsManager::ReverbPresets table. In UnPlayer.cpp, the code acquires the reverb settings for the reverb volume the player is in and passes these to FAudioEffectsManager::SetReverbSettings for interpolation and passing to the platform specific layer. As the parameters are interpolated, there are no reverb discontinuities. Reverb volumes can be created by right clicking on the builder brush, "Add Volume" --> "Reverb Volume". FAudioEffectsManager is the base class; platform specific interfaces are derived from that.
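The interpolation that avoids reverb discontinuities can be sketched as a simple linear blend between the old and new settings before they are handed to the platform layer; the struct and its fields are illustrative, not the engine's actual parameter set:

```cpp
// Illustrative sketch: linearly interpolating reverb parameters over a
// fade so there are no audible discontinuities when changing volumes.
struct ReverbSettings {
    float Gain;        // overall reverb level
    float DecayTime;   // seconds
};

// Alpha in [0,1]: 0 = previous volume's settings, 1 = new volume's settings.
ReverbSettings LerpReverb(const ReverbSettings& From, const ReverbSettings& To,
                          float Alpha) {
    ReverbSettings Out;
    Out.Gain      = From.Gain      + (To.Gain      - From.Gain)      * Alpha;
    Out.DecayTime = From.DecayTime + (To.DecayTime - From.DecayTime) * Alpha;
    return Out;
}
```

Each parameter is interpolated the same way, so the platform-specific layer only ever sees smoothly varying values.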

AmbientZone Effects

Reverb volumes now incorporate 'AmbientZone' settings; these are used to emulate occlusion for ambient sounds. Volume and/or low pass filter effects can be applied to outside sounds when inside, and inside sounds when outside. For example, you can place a huge outdoor ambient sound wind effect to cover the entire level, but when the player moves inside, that sound will be faded down, and sounds in the same volume as the player will be faded up.

For additional documentation please see the Using Ambient Zones page.

DistanceModel Attenuation

Attenuation is the ability of a sound to get fainter as the player moves away from it – the rate of fading is defined by the DistanceModel property.

Please see the DistanceModel Attenuation page for more information.

Low Pass Filter

This is currently implemented via the attenuation sound node; there is a flag to enable it and a min and max distance to use. It is actually implemented as a high shelf filter that interpolates from no filtering at the min distance to no high-frequency content at the max distance.
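The min/max distance ramp described above can be sketched as follows (a sketch of the mapping only, not the engine's filter code):

```cpp
// Illustrative sketch: mapping listener distance to a filter amount,
// 0 (no filtering) at MinDist ramping linearly to 1 (no high
// frequencies left) at MaxDist.
float LowPassAmount(float Dist, float MinDist, float MaxDist) {
    if (Dist <= MinDist) return 0.0f;
    if (Dist >= MaxDist) return 1.0f;
    return (Dist - MinDist) / (MaxDist - MinDist);
}
```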

Sound Modes

Sound modes apply changes to a set of sound classes over time. ("Sound groups" is the deprecated name for sound classes, changed due to a clash with package groups.) Examples of sound classes are Ambient and Weapons.

Set up an array of sound class effects with the sound modifications you want. For example, for cinematic dialog, raise the volume of dialog and lower the volume of effects (these are not applied hierarchically). Setting the sound mode fades the mode in (over the fade-in time) after the initial delay, holds it for the duration, and then fades back to the default (over the fade-out time). A duration of less than 0 means the mode will last until another mode is set. You should have a package called SoundModes that contains just the default sound mode entry; add new sound modes to that package. New sound modes are ideally triggered via gameplay script.
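The delay / fade-in / duration / fade-out timeline described above can be sketched as a blend-weight function of elapsed time; the name and the linear ramps are assumptions, not engine code:

```cpp
#include <algorithm>

// Illustrative sketch of the mode timeline: after an initial delay the
// mode fades in, holds for Duration, then fades back out to default.
// Returns the mode's blend weight (0 = default mix, 1 = fully applied).
float ModeWeight(float T, float Delay, float FadeIn, float Duration, float FadeOut) {
    T -= Delay;
    if (T <= 0.0f) return 0.0f;            // still in the initial delay
    if (T < FadeIn) return T / FadeIn;     // fading in
    if (Duration < 0.0f) return 1.0f;      // lasts until another mode is set
    T -= FadeIn + Duration;
    if (T <= 0.0f) return 1.0f;            // holding at full effect
    return std::max(0.0f, 1.0f - T / FadeOut);   // fading out
}
```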

A picture being worth a thousand words, see the SoundMode map in UDKGame. Run it and type 'setsoundmode quick', 'setsoundmode slow' or 'setsoundmode loud' at the command line. You can run in PIE and edit existing modes or create your own on the fly.

Volume ducking, described below, is an example use case.

Volume Ducking

Volume ducking is usually used to decrease the volume of all sound groups other than the one that needs to be heard, most commonly dialog (and not only in movies). The controls usually are:

  • Identify Sound Group that causes ducking (dialog).
  • When a sound from that group is triggered, other groups decrease in volume to desired amount (fade) over x time (FadeStartTime = 0.3 seconds).
  • Amount other groups decrease in volume (FadeAmount = -0.4).
  • When sound from ducking group stops, other groups increase in volume back to normal volume (fade) over x time (FadeStopTime = 0.2 seconds).
  • May also want sound group exceptions to the ducking process (e.g. music), or a sound group called Exceptions that isn't affected.
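A minimal sketch of the ducking envelope implied by the controls above, with hypothetical parameter names modelled on FadeStartTime and FadeAmount:

```cpp
#include <algorithm>

// Illustrative sketch: gain applied to the non-ducking groups while a
// ducking sound (e.g. dialog) is playing. With the example values above
// (FadeStartTime = 0.3 s, FadeAmount = -0.4) the other groups ramp from
// gain 1.0 down to 0.6 over 0.3 seconds.
float DuckedGain(float TimeSinceTrigger, float FadeStartTime, float FadeAmount) {
    float Alpha = std::min(TimeSinceTrigger / FadeStartTime, 1.0f); // ramp 0..1
    return 1.0f + FadeAmount * Alpha;   // FadeAmount is negative
}
```

The release side works the same way in reverse, ramping back to 1.0 over FadeStopTime once the ducking sound stops.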

Extending the above system beyond volume, by adding pitch, EQ, and other filters, gives even more control (the Mode System).

As an example, in loud sections of movies (e.g. the middle of a battle), when there is a cut to one person talking to another at normal volume, you can still hear that person because all other sounds have been ducked: battle/ambient/effect sounds are played back more quietly while the dialog is played back louder.

In the game case, you duck the volumes in the ambient sound group (and others).

Importing Sounds

The engine currently supports importing uncompressed, little-endian, 16-bit WAV files at any sample rate (although sample rates of 44100 Hz or 22050 Hz are recommended).

Format            .wav
Bit Depth         16
Speaker Channels  Mono, Stereo, 2.1, 4.1, 5.1, 6.1, 7.1

Importing with the normal sound factory yields a simple sound node wave that can be referenced by a sound cue for playback. These sounds can be mono or stereo. Importing with a special naming convention allows multichannel (e.g. 5.1) sounds. Up to 8-channel sounds can be compressed on all platforms, although not all platforms can play them back efficiently. The package saving process does all the necessary work to convert to the platform's native format. There is no strict speaker mapping, so the number of channels determines which channels are played on which speakers (see the table below).

  Speaker        Extension   4.0   5.1   6.1   7.1   Notes
  FrontLeft      _fl          *     *     *     *
  FrontRight     _fr          *     *     *     *
  FrontCenter    _fc                *     *     *
  LowFrequency   _lf                *     *     *
  SideLeft       _sl          *     *     *     *
  SideRight      _sr          *     *     *     *
  BackLeft       _bl                      *     *    If there is no BackRight channel, this is the BackCenter channel
  BackRight      _br                            *
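As an illustration, the naming convention in the table above could be decoded along these lines (a sketch only; the engine's actual import code differs):

```cpp
#include <map>
#include <string>

// Illustrative sketch: recovering the speaker assignment from the
// "_xx" file-name extension convention in the table above.
// Returns a speaker slot index, or -1 for a plain mono/stereo file.
int SpeakerIndex(const std::string& FileName) {
    static const std::map<std::string, int> Speakers = {
        {"_fl", 0}, {"_fr", 1}, {"_fc", 2}, {"_lf", 3},
        {"_sl", 4}, {"_sr", 5}, {"_bl", 6}, {"_br", 7}};
    // Strip the ".wav" suffix, then look at the trailing "_xx" tag.
    std::string Stem = FileName.substr(0, FileName.rfind('.'));
    if (Stem.size() < 3) return -1;
    auto It = Speakers.find(Stem.substr(Stem.size() - 3));
    return It == Speakers.end() ? -1 : It->second;
}
```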

Compression settings are tweaked to maintain similar quality across platforms. After testing many sounds, our musician determined that an XMA quality of 40 was equivalent to an Ogg Vorbis quality of 0.15.

Visual Editing Tools

Sound Cue Editor

You can import audio files as sound waves and apply modifiers called audio nodes in the visual Sound Cue Editor.

Sound Quality Previewer

If you right click on a SoundNodeWave asset in the Generic Browser and select Sound Quality Previewer, the engine will compress the sound at quality settings of 5, 10, 15, 20, 25, 30, 35, 40, 50, and 60. You can then click on any given quality setting and hear the sound after it has been re-expanded back to 16-bit PCM. This can be used to select the best fidelity vs. memory trade-off. Clicking OK applies the currently selected setting to the wave.

Multichannel Import/Export

In the Generic Browser...

To import:

  • Select Import from the menu.
  • Select the files "Surround_fl.wav", "Surround_fr.wav", "Surround_sl.wav", "Surround_sr.wav".
  • Select the package you wish to save to and click OK to all.

This will create a four channel asset called "Surround."

To export:

  • Select Export from the menu.
  • Pick a location to save the files to.

This will save the original mono files with the speaker extension.


Volume Considerations

Regarding the loudest potential volume, there is a bit more overall headroom to work with. For example, a stereo file at 1.0 volume will be twice as loud as a mono file; likewise, four mono files would be four times as loud. Eventually, though, you will hit the overall threshold and the output will begin to clip and distort.

On any given sound cue, volume settings up to ~2.0 will increase the perceived loudness of the audio; anything above that will not. A single cue will never distort, but you won't want all of your files at maximum volume because the output will likely overload when multiple cues play simultaneously in-game.
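The clipping behaviour described above can be sketched as a simple mixer sum; the hard clip at a 1.0 ceiling is an illustrative simplification of the real output stage:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch: per-sample mix of several sources. Individually
// each source is within range, but their sum can exceed the output
// ceiling, at which point the mix clips and distorts.
float MixSample(const std::vector<float>& Sources) {
    float Sum = 0.0f;
    for (float S : Sources)
        Sum += S;
    // Hard clip at the ceiling; a real output stage may limit instead.
    return std::max(-1.0f, std::min(1.0f, Sum));
}
```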

You may want to come up with a consistent volume scheme, or at least some general guidelines for your volumes:

  • Dialog ~1.4
  • Music ~0.75
  • Weapons ~1.1
  • Ambience ~0.5

Additionally, you may consider using mono assets just about everywhere to maintain consistency across platforms for all audio, with the exception of music.

Optimising Sound Memory Usage

When authoring content, it's best to use the lowest sample rate that maintains audio fidelity; all sample rates are supported on all platforms. For example, dialog generally still sounds good at 22.05 kHz, whereas commonly played effects with high-frequency content (such as gunshots) need a higher rate (e.g. 44.1 kHz). A similar heuristic can be applied to the quality setting.
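As a rough illustration of why sample rate matters for memory, uncompressed 16-bit PCM scales linearly with rate and channel count, so halving the rate halves the footprint (a sketch, not engine code):

```cpp
// Illustrative sketch: bytes of uncompressed 16-bit PCM for a sound of
// the given length. Halving the sample rate halves the memory cost.
int PcmBytes(int SampleRateHz, float Seconds, int Channels) {
    const int BytesPerSample = 2;   // 16-bit samples
    return (int)(SampleRateHz * Seconds) * Channels * BytesPerSample;
}
```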

Formerly, the distance crossfade node was used to apply a low pass filter for sounds at distance (the dry sound was mixed with a low pass filtered version of the same sound). However, all platforms now support attenuation with a low pass filter meaning the filtered version of the sound is no longer required.

Test maps

UTGame has several test maps for verifying audio system functionality -

  • DM-SoundLoop - to test infinitely looping and loop with notification sounds.
  • DM-SoundReverb - to test reverb and attenuation with a low pass filter.
  • DM-SoundMultichannel - to test 5.1 playback.
  • DM-SoundMode - to test sound modes. Available test modes are loud, quiet, quick, slow, lowpass, bandpass, highpass.
  • DM-SoundInterior - to test the ambient zone functionality.


Streaming Audio

The Unreal Engine audio system does not support simultaneous streaming of arbitrary audio content, only the streaming of packages that contain audio. This approach was favored for several reasons: we wanted to devote as much bandwidth as possible to texture streaming; a playback delay would be unacceptable in fast action games such as Unreal Tournament and Gears of War; and syncing up the FaceFX anims would become an additional chore. As an alternative, a special dialog system was created for Gears.

Debug Commands

  • ResetSoundState: turns off any and all debug commands.
  • TestLowPassFilter: applies a low pass filter to all sources.
  • IsolateDryAudio: removes the reverb, leaving only the dry sounds.
  • IsolateReverb: removes the dry audio, leaving only the reverb sounds.
  • SetSoundMode x: applies volume and pitch modifications from the sound group properties, and applies an EQ filter from a table defined in UnAudioEffects.cpp.
  • ListSounds, ListSoundDurations, ListAudioComponents: list details of the currently loaded sounds.
  • ListWaves: list the details of the currently playing sounds.

Known Issues

  • TTP 60174: Audio will start over when gameplay window is moved.
  • TTP 64605: -nosound does not propagate to PIE
  • TTP 80463: AmbientSoundSimple actors don’t always restart in real-time preview in Editor
  • TTP 92710: Sound Cues using the Delay Node don't preview correctly in generic browser or sound cue editor
  • TTP 120345: AmbientSoundSimpleToggleable volume fading is broken
  • TTP 122495: Looping Node doesn't work if placed after the mixer when non looping nodes are also connected
  • TTP 122683: Distance Crossfade node incorrectly retriggers SoundNodeWave playback

Platform Specifics