unreal.NeuralNetwork

class unreal.NeuralNetwork(outer=None, name='None')

Bases: unreal.Object

NeuralNetworkInference (NNI) is Unreal Engine’s framework for running deep learning and neural network inference. It is focused on:

  • Efficiency: Underlying state-of-the-art accelerators (DirectML, AVX, CoreML, etc.).

  • Ease-of-use: Simple but powerful API.

  • Completeness: All the functionality of any state-of-the-art deep learning framework.

UNeuralNetwork is the key class of NNI, and the main one users should interact with. It represents the deep neural model itself. It is capable of loading and running inference (i.e., a forward pass) on any ONNX (Open Neural Network eXchange) model. ONNX is the industry standard for ML interoperability, and all major frameworks (PyTorch, TensorFlow, MXNet, Caffe2, etc.) provide converters to ONNX.

The following code snippets show the UNeuralNetwork basics (loading an ONNX model and running inference on it). For more detailed examples, see {UE5}/Samples/MachineLearning/NNI.

1a Creating a new UNeuralNetwork and loading a network from an ONNX file

// Create the UNeuralNetwork object
UNeuralNetwork* Network = NewObject<UNeuralNetwork>((UObject*)GetTransientPackage(), UNeuralNetwork::StaticClass());

// Load the ONNX model and set the device (CPU/GPU)
const FString ONNXModelFilePath = TEXT("SOME_PARENT_FOLDER/SOME_ONNX_FILE_NAME.onnx");
if (Network->Load(ONNXModelFilePath))
{
    // Pick between option a or b
    // Option a) Set to GPU
    if (Network->IsGPUSupported())
    {
        Network->SetDeviceType(ENeuralDeviceType::GPU);
    }
    // Option b) Set to CPU
    Network->SetDeviceType(ENeuralDeviceType::CPU);
}
// Check that the network was successfully loaded
else
{
    UE_LOG(LogTemp, Warning, TEXT("UNeuralNetwork could not be loaded from %s."), *ONNXModelFilePath);
}
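
The equivalent setup can also be sketched with the editor’s Python API. The snippet below is a minimal sketch, not a definitive recipe: it assumes the ONNX model has already been imported into the project as a NeuralNetwork asset (the /Game/Models/MyModel path is hypothetical), and it uses only unreal.load_object plus the editor properties documented below.

import unreal

# Load an already-imported NeuralNetwork asset (hypothetical path)
network = unreal.load_object(None, '/Game/Models/MyModel.MyModel')

# is_loaded reports whether a valid model was loaded (see Editor Properties below)
if network is not None and network.get_editor_property('is_loaded'):
    # Use GPU acceleration for inference (or NeuralDeviceType.CPU for CPU)
    network.set_editor_property('device_type', unreal.NeuralDeviceType.GPU)
else:
    unreal.log_warning('NeuralNetwork asset could not be loaded.')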

C++ Source:

  • Plugin: NeuralNetworkInference

  • Module: NeuralNetworkInference

  • File: NeuralNetwork.h

Editor Properties: (see get_editor_property/set_editor_property)

  • are_input_tensor_sizes_variable (Array(bool)): [Read-Only] Are Input Tensor Sizes Variable: Whether any of the FNeuralTensor entries of InputTensor have flexible/variable dimensions (e.g., useful for a variable batch size).

  • device_type (NeuralDeviceType): [Read-Write] Device Type: The neural device type of the network. It defines whether the network will use CPU or GPU acceleration hardware during inference (on Run). If SetDeviceType() is never called, the default device (ENeuralDeviceType::GPU) will be used. See: ENeuralDeviceType, InputDeviceType, OutputDeviceType for more details.

  • input_device_type (NeuralDeviceType): [Read-Write] Input Device Type: It defines whether Run() will expect the input data in CPU memory (Run will upload the memory to the GPU) or GPU memory (no upload needed). If DeviceType == CPU, InputDeviceType and OutputDeviceType values are ignored and assumed to be set to CPU. A configuration example is shown in the Python sketch after this list. See: ENeuralDeviceType, DeviceType, OutputDeviceType for more details.

  • is_loaded (bool): [Read-Only] Is Loaded: Whether this UNeuralNetwork instance has loaded a valid network model already, i.e., whether Load() was called and returned true.

  • model_full_file_path (str): [Read-Only] Model Full File Path: Original model file path from which this UNeuralNetwork was loaded.

  • output_device_type (NeuralDeviceType): [Read-Write] Output Device Type: It defines whether Run() will return the output data in CPU memory (Run will download the memory to the CPU) or GPU memory (no download needed). If DeviceType == CPU, InputDeviceType and OutputDeviceType values are ignored and assumed to be set to CPU. See: ENeuralDeviceType, DeviceType, InputDeviceType for more details.

  • synchronous_mode (NeuralSynchronousMode): [Read-Write] Synchronous Mode: It defines whether UNeuralNetwork::Run() will block the thread until completed (Synchronous), or whether it will run on a background thread, not blocking the calling thread (Asynchronous).

  • thread_mode_delegate_for_async_run_completed (NeuralThreadMode): [Read-Write] Thread Mode Delegate for Async Run Completed: If SynchronousMode is Asynchronous, this variable defines whether the callback delegate is called from the game thread (highly recommended) or from any available thread (not fully thread safe).
    - If set to ENeuralThreadMode::GameThread, the FOnAsyncRunCompleted delegate will be triggered from the main thread only.
    - If set to ENeuralThreadMode::AnyThread, the FOnAsyncRunCompleted delegate could be triggered from any thread.
    See: SynchronousMode for more details.
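
As referenced above, the snippet below sketches how these device and threading properties can be configured from Python. It reuses the hypothetical NeuralNetwork asset from the earlier example and assumes the enum values follow the Python API’s usual naming conventions (e.g., unreal.NeuralSynchronousMode.ASYNCHRONOUS, unreal.NeuralThreadMode.GAME_THREAD).

import unreal

network = unreal.load_object(None, '/Game/Models/MyModel.MyModel')  # hypothetical asset path

# Expect input in CPU memory and return output to CPU memory; both values
# are ignored (treated as CPU) whenever device_type is CPU
network.set_editor_property('input_device_type', unreal.NeuralDeviceType.CPU)
network.set_editor_property('output_device_type', unreal.NeuralDeviceType.CPU)

# Run on a background thread, with the completion delegate fired on the game thread
network.set_editor_property('synchronous_mode', unreal.NeuralSynchronousMode.ASYNCHRONOUS)
network.set_editor_property('thread_mode_delegate_for_async_run_completed', unreal.NeuralThreadMode.GAME_THREAD)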