Trace is a structured logging framework for tracing instrumentation events from a running process. The framework is designed to produce a stream of high-frequency traced events that is self-describing, easily consumed, and easily shared. TraceLog and TraceAnalysis are the principal modules that constitute the framework. Unreal Insights is Trace's primary customer.

A Tour of Trace


The complete system is composed of elements that form a pipeline from emission at runtime through to later consumption during analysis. The first of these is the trace event, which is defined as follows:

UE_TRACE_EVENT_BEGIN(LoggerName, EventName[, Flags])
    UE_TRACE_EVENT_FIELD(Type, FieldName)
UE_TRACE_EVENT_END()

The EventName and FieldName parameters should hopefully be self-explanatory. Events are grouped by "loggers", a semantic-only concept that helps to namespace events and ease subscription when analyzing a trace stream. The optional Flags modify how events are traced.

Event Flags



By default, events are synchronised with events being traced on other threads. Events marked "NoSync" skip this synchronisation; they are smaller and much faster to trace, at the expense of being uncoordinated with other threads during analysis.

Some events are only sent once yet are required to interpret later trace events (mapping a name to an integer identifier, for example). Such events are marked "Important" so they can be retained and later resent.

This is for Unreal Engine 5.

Field types can be the standard integer or floating-point primitives (for example, uint32, float, etc.), as well as arrays and strings (see below for more information on the latter two). There is no support for nested structures/events as fields.

Events are usually defined at a global scope in a .cpp file. If events need to be traced from multiple translation units the UE_TRACE_EVENT_BEGIN_[EXTERN|DEFINE] pair can be used.

The logging of an event at runtime is done like this:

UE_TRACE_LOG(RainbowLogger, ZippyEvent, ItvChannel)
    << ZippyEvent.Field0(Value0)
    << ZippyEvent.Field1(BundleValue)
    << ZippyEvent.Field2(Data, Num)
    << ZippyEvent.Field3(String[, Num]);

Provided the channel "ItvChannel" is enabled, this will add a "RainbowLogger.ZippyEvent" event to the trace stream. Take care that the expressions assigning values to fields have no side effects; in particular, they must not trace additional events, as a log site is not re-entrant.

There is no delta or run-length compression happening when events are traced. All defined fields are present, too, even if they were not written to at a particular UE_TRACE_LOG() call. A traced event is essentially analogous to a #pragma pack(1)-declared struct. It is advisable to think strategically about how to use those bits to their fullest.
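The packed-struct analogy can be made concrete. The sketch below is illustrative only (the struct and field names are hypothetical, echoing the ZippyEvent example, and are not real Trace types): under #pragma pack(1) every declared field occupies exactly its width, with no compiler-inserted padding.

```cpp
#include <cstdint>

// Illustration of the analogy above, not a real Trace type: under
// pack(1) each field occupies exactly its declared width, with no
// padding between or after fields.
#pragma pack(push, 1)
struct FZippyEventPayload
{
    uint64_t Field0; // 8 bytes
    uint32_t Field1; // 4 bytes
    uint8_t  Field2; // 1 byte
};
#pragma pack(pop)

// 8 + 4 + 1 = 13 bytes; an unpacked equivalent would typically round up to 16.
static_assert(sizeof(FZippyEventPayload) == 13, "no padding under pack(1)");
```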

Side note: the thought process behind the use of a macro soup was twofold; firstly, to hide a lot of boilerplate behind a simple interface where details like offsets are known at compile time, and secondly, so that definitions and log sites compile to nothing when Trace is disabled, without littering #if/#endif everywhere.


Channels

In order to constrain the stream of events to what the user is interested in, there is the concept of channels. This helps to focus CPU and memory resources on what is important to the observations being made. Channels are easily defined like this:

UE_TRACE_CHANNEL(ItvChannel)

There is an EXTERN/DEFINE pair for more specific use cases. By default channels are disabled and must be explicitly opted into, either via the -trace=itv command line option or the Trace::ToggleChannel() API.

Channels override the bitwise OR operator such that they can be combined with log sites to control the tracing of an event with multiple channels. A UE_TRACE_LOG(..., ItvChannel|BbcChannel) will only emit an event if both the "Itv" and "Bbc" channels are enabled. The astute reader will notice that this is, somewhat confusingly, the opposite of what one might expect from a bitwise OR.
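A minimal model of these semantics can clarify the surprise. The sketch below uses plain bitmasks standing in for the real channel objects (which work differently internally): OR-combining channels accumulates a mask, and the log site fires only if every bit in that mask is enabled, i.e. AND behaviour spelled with OR.

```cpp
#include <cstdint>

// Conceptual model only, not the real Trace channel types: each channel
// owns one bit, operator| accumulates a mask, and a log site emits only
// if *all* bits named at the site are enabled.
using ChannelMask = uint32_t;

constexpr ChannelMask ItvChannel = 1u << 0;
constexpr ChannelMask BbcChannel = 1u << 1;

bool ShouldEmit(ChannelMask EnabledChannels, ChannelMask LogSiteMask)
{
    // Every channel named at the log site must be enabled.
    return (EnabledChannels & LogSiteMask) == LogSiteMask;
}
```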


Analysis

Now that we have defined a new event, enabled the channel(s) it is keyed off, and added a log site or two, we will want to consume and analyze the events. This is done using the TraceAnalysis module.

Analyzers are created by deriving from IAnalyzer and implementing two primary methods: OnAnalysisBegin() to subscribe to events, and OnEvent() to receive those subscriptions.

When analysis encounters an event that the analyzer has subscribed to the analyzer's OnEvent() method is called. Access to the event's data is done via FEventContext.EventData.Get*() methods. This API reflects the self-describing nature of trace streams — there is no binary or runtime-code dependency on interpreting trace streams. Each received event also has associated thread information and (in some very rare circumstances) some timing information.

Running analysis on a trace session is straightforward; describe the analyzers to use and feed the trace data to the created analysis session:

struct : IInDataStream { ... } TraceData;
FGeorgeAnalyzer GeorgeAnalyzer;
FAnalysisContext Context;
Context.AddAnalyzer(GeorgeAnalyzer);
FAnalysisProcessor Processor = Context.Process(TraceData);

The thread ID reported during analysis is not quite the same as the system thread ID, although the system ID is also available. This better guards against cases where the OS reuses a thread ID: expect that a system ID may get reused from one thread to the next, whereas Trace's IDs will not be (or at least are considerably less likely to be).
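One way to picture this (names hypothetical; this is a sketch of the idea, not Trace's implementation) is a registry that hands out a fresh, monotonically increasing trace ID every time a thread registers, while keeping the possibly-recycled system ID on the side:

```cpp
#include <cstdint>
#include <unordered_map>

// Sketch only: the OS may hand a new thread a recycled system ID, so
// each registration gets a fresh trace ID instead of keying purely off
// the system ID. The system ID remains available for lookup.
struct FThreadIdRegistry
{
    uint32_t NextTraceId = 1;
    std::unordered_map<uint32_t, uint32_t> SystemIdByTraceId;

    // Called when a thread starts tracing; even if SystemId was seen
    // before (reused by the OS), the returned trace ID is new.
    uint32_t Register(uint32_t SystemId)
    {
        uint32_t TraceId = NextTraceId++;
        SystemIdByTraceId[TraceId] = SystemId;
        return TraceId;
    }
};
```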


The Store And Recorder

The TraceAnalysis module includes two components that together form a service for receiving, storing, and retrieving trace data. The implementation uses async IO so that it can scale easily to many clients (predominantly in-bound trace data). Getting a store up and running is straightforward;

FStoreService::FDesc Desc = {
    StoreDir,     /* directory path on disk */
    RecorderPort, /* -1=off, 0=auto-assign */
    1,            /* No. of IOCP threads */
};
auto Store = TUniquePtr<FStoreService>(FStoreService::Create(Desc));

The Recorder listens on a TCP port for incoming traces. When a runtime makes a connection to the recorder, the recorder creates a new trace with the store and begins relaying the received data into this new trace. Port 1980 is used by default but it is possible to ask the OS to assign an ephemeral port.

The Store's main role is to manage a directory of traces. It allows new traces to be created, the enumeration of available traces, and the retrieval of a particular trace's data. There is a REST-like protocol built on top for communicating with a store. The protocol is CBOR-based for interoperability. FStoreClient is an implementation of this protocol;

const auto* Store = GetWellKnownTraceStoreServer();
if (auto* Client = FStoreClient::Connect(Store->Ip, Store->Port))
{
    for (int i = 0, n = Client->GetTraceCount(); i < n; ++i)
    {
        const auto* Info = Client->GetTraceInfo(i);
        /* ... */
    }
}

The recorder-store component is of course optional. It is possible to ask the runtime to trace to a file. Or circumstance might dictate that writing a recorder-like script in Python to receive trace data over a socket is more convenient.

Runtime Details

Normal Events

When an event is traced at a UE_TRACE_LOG() site, a small header and the event's fields' values are written into a buffer local to the current thread. These TLS buffers are modestly sized and linked together in a list. Trace's worker thread traverses the list of buffers, sending event data that has been committed (and is therefore complete and fully visible).

The benefit of using TLS is that it keeps tracing threads out of each other's way. Nearly... Order across threads is significant for some event types and must be reconstructed when trace data is analyzed; allocation and free events, for example, where addresses can see reuse. Trace uses an atomically incrementing integer for synchronisation: a 24-bit serial number that precedes each event. This of course comes at a cost, so it can be opted out of. Ideally all event types would opt out of synchronisation (but perhaps not at the expense of reading a TSC!).
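The producer side of that serial-number scheme can be sketched as follows. This is a simplification for illustration; the real implementation differs, but the core idea is an atomic increment truncated to 24 bits, with wrap-around resolved at analysis time.

```cpp
#include <atomic>
#include <cstdint>

// Sketch of the synchronisation described above: an atomically
// incremented counter, truncated to 24 bits, stamped onto each event so
// analysis can reconstruct cross-thread ordering. Wrap-around is
// expected and is disambiguated during analysis.
std::atomic<uint32_t> GSerialCounter{0};

uint32_t NextEventSerial()
{
    return GSerialCounter.fetch_add(1, std::memory_order_relaxed) & 0x00ffffffu;
}
```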

Important Events

This is for Unreal Engine 5.

An additional class of event type called "Important" has been introduced in UE5. This type of event is for those that are only traced once yet (usually) required for the interpretation of other subsequent trace data. They need different treatment as these events can easily get missed (e.g. disconnecting then reconnecting to a runtime's trace data — important events need to be in both trace data streams but were only traced in the first).

Important-type events are written into a buffer that is shared by all threads. This is for two reasons; firstly it separates them from normal events without any complexity of having to interpret and demultiplex traced data. Secondly it should help keep memory use down as there should be a finite total of important events traced.

Trace's worker thread polls the shared buffer for committed data, copying it into a reservoir buffer for temporary storage. As the reservoir of important events fills, the data is LZ4-compressed and stored in a cache. This cache is sent each time a new connection to the trace data is made, ensuring no important events are lost and that they are at the beginning of the stream.
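The retain-and-resend behaviour can be modelled with a toy cache (names hypothetical; compression is omitted here, whereas the real implementation LZ4-compresses the reservoir as described above):

```cpp
#include <cstdint>
#include <vector>

// Toy model only: important events accumulate in a cache, and every new
// connection is primed with the whole cache before receiving live data,
// so late connections still see the one-shot events at stream start.
struct FImportantEventCache
{
    std::vector<uint8_t> Cache;

    // Worker thread: copy committed important-event bytes into the cache.
    void Append(const std::vector<uint8_t>& CommittedData)
    {
        Cache.insert(Cache.end(), CommittedData.begin(), CommittedData.end());
    }

    // New connection: the cache is sent first, placing important events
    // at the beginning of that connection's stream.
    std::vector<uint8_t> PrimeNewConnection() const
    {
        return Cache;
    }
};
```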

Note that important events have no associated thread information during analysis as all thread-related context is long gone.

Worker Thread

A worker thread is used to collect traced events from the TLS and shared buffering schemes, compress and cache important events, handle command-control communications, and manage the destination IO (socket or file). This runs at a fixed interval and is a free-standing thread (i.e. not a job or task kicked as part of the engine's loop).

In some cases it may not be suitable to have trace events consumed and dispatched by a worker thread (e.g. when forking on Linux). This can be opted out of with bUseWorkerThread during initialization, which defaults to the value returned from FPlatformProcess::SupportsMultithreading(). When disabled, it is the user's responsibility to update Trace so that events are consumed, by calling Trace::Update(). Note that lower-frequency updates will result in higher memory use for buffering traced events. Ordinarily this is all taken care of by Core's TraceAuxiliary.


Transport

Trace data is sent in packets, referred to as the "Transport". Each packet is prefixed with an internal identifier indicating which thread the events came from (or zero if they are important events), and a size. Most packets are LZ4-compressed; those that are not are too small to benefit.
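As a rough illustration of this framing, the sketch below builds a packet with a thread-identifier and size prefix. The exact field widths and layout here are assumptions for illustration, not Trace's actual wire format.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative framing only; field widths are assumptions, not the real
// wire format. Each packet carries an internal thread identifier (0 for
// important events) and a payload size, followed by the payload bytes.
struct FPacketHeader
{
    uint32_t ThreadId; // 0 => important events
    uint32_t Size;     // payload bytes that follow
};

std::vector<uint8_t> FramePacket(uint32_t ThreadId, const std::vector<uint8_t>& Payload)
{
    FPacketHeader Header{ThreadId, static_cast<uint32_t>(Payload.size())};
    std::vector<uint8_t> Packet(sizeof(Header) + Payload.size());
    std::memcpy(Packet.data(), &Header, sizeof(Header));
    std::memcpy(Packet.data() + sizeof(Header), Payload.data(), Payload.size());
    return Packet;
}
```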

These packets can be sent by the runtime to one of two places: to a file (with the -tracefile= argument, or procedurally with Trace::WriteTo()) or over a TCP socket (-tracehost= / Trace::SendTo()). The default TCP port is 1980.

There is a small block of handshake data sent right at the beginning of the trace stream along with a block of unstructured metadata.

Command-Control And Late-connect

There are occasions where a tool such as Unreal Insights might wish to control the trace system that it is receiving data from. For this the runtime runs a rudimentary command/control server that mostly mirrors the Trace:: public API. There is a simple class in TraceAnalysis to facilitate communication;

FControlClient ControlClient;
if (ControlClient.Connect(Bungle.GetHostName()))
    ControlClient.SendToggleChannel(L"ItvChannel", true);

This simple server listens on port 1985 because that's the year the cinematic masterpiece Back To The Future was released. Note that the port can hop about in cases where the default is not available. The port number is embedded in the metadata following the stream's handshake (accessible via TraceAnalysis' Recorder).

Because the port is a well-known value (at least by default), it is possible to connect to a running title after the fact and tell it to begin tracing data somewhere. This is known as late-connect.




Attachments

In the early days of Trace there was no support for variable-length fields. A concept of attachments was introduced whereby opaque binary blobs could be appended to traced events. The use of attachments is discouraged; array-type fields should be used instead, which benefit from being structured and reflected at analysis time. Furthermore, attachment support incurs a cost that every logged event has to pay, so it may be changed to an opt-in component in the future to optimize away this overhead.


Strings

Trace events support string-type fields using the Trace::AnsiString or Trace::WideString types when declaring events' fields with UE_TRACE_EVENT_FIELD():

    UE_TRACE_EVENT_FIELD(Trace::AnsiString, MyFieldA)
    UE_TRACE_EVENT_FIELD(Trace::WideString, MyFieldW)

String-type fields are written much the same as primitive-type fields, with a few extras: ANSI-type fields will automatically truncate wide strings to 7-bit characters, and an optional string length can be given (preferable for performance if the string's length is already known):

UE_TRACE_LOG(MyLogger, MyEvent)
    << MyEvent.MyFieldA(AnAnsiBuffer[, ExplicitStringLen])
    << MyEvent.MyFieldW(WideName);
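The 7-bit truncation can be illustrated with a standalone sketch. This is not TraceLog's implementation, just the masking behaviour described above, with a hypothetical helper name:

```cpp
#include <string>

// Sketch of the truncation described above: writing a wide string into
// an ANSI-typed field keeps only the low seven bits of each character.
std::string TruncateTo7Bit(const std::wstring& Wide)
{
    std::string Out;
    Out.reserve(Wide.size());
    for (wchar_t C : Wide)
        Out.push_back(static_cast<char>(C & 0x7f));
    return Out;
}
```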


Unreal Engine 5 Notes

The trace system is global and can only trace to one destination at a time. Because some tools available in the editor now use the trace system, the editor no longer automatically connects and traces to a running Unreal Insights. If you wish to profile the editor, the -tracehost= argument must be given when starting the editor (or possibly the trace.start console command used in-editor, depending on a project's plugins).

Unreal Engine 4 Notes

Some of the events traced are one-shots; they are only ever traced once. This is commonly used to assign IDs to names (CPU events' names, for example). Unfortunately this makes it impossible to stop tracing and later restart it, as contextual information from trace-once events is lost. A common indicator of this scenario is "unknown" CPU events or threads in Unreal Insights. The trace.stop command is in fact not really a stop but a disablement of channels, as a mitigation of this limitation. Future versions of the engine have a concept of "Important Events": one-time events are cached and resent when a trace connection is established, thus seamlessly supporting drop-in/out profiling workflows.
