Inherits: `UObjectBase`
Module: NeuralNetworkInference
Header: /Engine/Plugins/Experimental/NeuralNetworkInference/Source/NeuralNetworkInference/Public/NeuralNetwork.h
Include: `#include "NeuralNetwork.h"`
## Syntax

```cpp
UCLASS(BlueprintType)
class UNeuralNetwork : public UObject
```
## Variables

Name | Description
---|---
`DeviceType` | The neural device type of the network.
`InputDeviceType` | Defines whether `Run()` expects the input data in CPU memory (`Run` will upload it to the GPU) or GPU memory (no upload needed).
`ModelFullFilePath` | Original model file path from which this `UNeuralNetwork` was loaded.
`OutputDeviceType` | Defines whether `Run()` returns the output data in CPU memory (`Run` will download it to the CPU) or GPU memory (no download needed).
`SynchronousMode` | Defines whether `UNeuralNetwork::Run()` blocks the calling thread until completed (Synchronous), or runs on a background thread without blocking the calling thread (Asynchronous).
`ThreadModeDelegateForAsyncRunCompleted` | If `SynchronousMode` is Asynchronous, defines whether the callback delegate is called from the game thread (highly recommended) or from any available thread (not fully thread safe).
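As a minimal sketch of how these variables are typically set through their setter functions (assuming `Network` is a valid, already-loaded `UNeuralNetwork*`; the exact enum member names other than `ENeuralDeviceType::GPU` are assumptions based on the descriptions above):

```cpp
#include "NeuralNetwork.h"

void ConfigureNetwork(UNeuralNetwork* Network)
{
	// Prefer GPU execution where the platform supports it,
	// falling back to CPU otherwise.
	if (Network->IsGPUSupported())
	{
		Network->SetDeviceType(ENeuralDeviceType::GPU);
	}
	else
	{
		Network->SetDeviceType(ENeuralDeviceType::CPU);
	}

	// Run() will block the calling thread until inference completes.
	Network->SetSynchronousMode(ENeuralSynchronousMode::Synchronous);
}
```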
## Constructors

Name | Description
---|---
`UNeuralNetwork()` | Default constructor that initializes the internal class variables.
## Destructors

Name | Description
---|---
`~UNeuralNetwork()` | Destructor, defined in case this class is ever inherited.
## Functions

Return Type | Name | Description
---|---|---
 | `DECLARE_DELEGATE(...)` | `FOnAsyncRunCompleted` is the delegate triggered when an asynchronous inference has completed (i.e., `Run()` was called with `SynchronousMode == ENeuralSynchronousMode::Asynchronous`).
`TObjectPtr< ...` | `GetAndMaybeCreateAssetImportData()` |
`TObjectPtr< ...` | `GetAssetImportData()` | Internal and Editor-only functions not needed by the user.
`ENeuralBackE...` | `GetBackEnd()` | Internal functions that should not be used by the user.
`ENeuralDevic...` | `GetDeviceType()` | Getter and setter functions for `DeviceType`, `InputDeviceType`, and `OutputDeviceType`.
`void *` | `GetInputDataPointerMutable(...)` |
`ENeuralDevic...` | `GetInputDeviceType()` |
`FNeuralStati...` | `GetInputMemoryTransferStats()` |
`const FNeura...` | `GetInputTensor(...)` | `GetInputTensor()` and `GetOutputTensor()` return a const (read-only) reference to the input or output `FNeuralTensor`(s) of the network, respectively.
 | `GetInputTensorNumber()` | `GetInputTensorNumber()` and `GetOutputTensorNumber()` return the number of input or output tensors of this network, respectively.
`float` | `GetLastRunTimeMSec()` | Statistics-related functions.
`FOnAsyncRunC...` | `GetOnAsyncRunCompletedDelegate()` |
`ENeuralDevic...` | `GetOutputDeviceType()` |
`const FNeura...` | `GetOutputTensor(...)` |
 | `GetOutputTensorNumber()` |
`FNeuralStati...` | `GetRunStatistics()` |
`ENeuralSynch...` | `GetSynchronousMode()` | Getter and setter functions for `SynchronousMode`.
`ENeuralThrea...` | `GetThreadModeDelegateForAsyncRunCompleted()` | Getter and setter functions for `ThreadModeDelegateForAsyncRunCompleted`.
 | `InputTensorsToGPU` | Non-computationally-efficient functions meant only for debugging purposes; they should never be used on highly performant systems.
 | `IsGPUSupported()` | Whether GPU execution (i.e., `SetDeviceType(ENeuralDeviceType::GPU)`) is supported on this platform.
 | `IsLoaded()` | Returns whether a network is currently loaded. Equivalent to the output of `Load()`.
 | `Load(...)` | `Load()` + `SetInputFromArrayCopy()` + `Run()` is the simplest way to load an ONNX file, set the input tensor(s), and run inference on it.
 | `Load` | `Load()` + `SetInputFromArrayCopy()` + `Run()` is the simplest way to load an ONNX file, set the input tensor(s), and run inference on it.
 | `Load(...)` | Internal function not needed by the user.
 | `OutputTensorsToCPU` |
 | `ResetStats()` |
 | `Run()` | `Load()` + `SetInputFromArrayCopy()` + `Run()` is the simplest way to load an ONNX file, set the input tensor(s), and run inference on it.
 | `SetDeviceType(...)` |
 | `SetInputFromArrayCopy` | `SetInputFromArrayCopy()`, `SetInputFromVoidPointerCopy()`, and `GetInputDataPointerMutable()` are the only functions that allow modifying the network input tensor(s) values.
 | `SetInputFromVoidPointerCopy` |
 | `SetSynchronousMode(...)` |
 | `SetThreadModeDelegateForAsyncRunCompleted(...)` |
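The `Load()` + `SetInputFromArrayCopy()` + `Run()` workflow described in the table can be sketched as follows (the ONNX path and the zero-filled input are placeholders; `FNeuralTensor::Num()` as the input volume is an assumption):

```cpp
#include "NeuralNetwork.h"

void RunInferenceOnce()
{
	UNeuralNetwork* Network = NewObject<UNeuralNetwork>();

	// Load the ONNX model from disk (placeholder path).
	if (!Network->Load(TEXT("SomeModel.onnx")))
	{
		return; // IsLoaded() would also report false here.
	}

	// Fill the first input tensor; the array size must match the
	// network's expected input volume.
	TArray<float> InputData;
	InputData.Init(0.f, Network->GetInputTensor().Num());
	Network->SetInputFromArrayCopy(InputData);

	// Blocking inference (assuming SynchronousMode is Synchronous).
	Network->Run();

	// Read-only access to the results.
	const FNeuralTensor& OutputTensor = Network->GetOutputTensor();
}
```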
## Classes

Name | Description
---|---
`FImplBackEndUEOnly` |
## Enums

Name | Description
---|---
`ENeuralBackEnd` | Internal enum class that should not be used by the user.
## Deprecated Functions

Return Type | Name | Description
---|---|---
`ENeuralBackE...` | `GetBackEndForCurrentPlatform()` | Do not use; `ENeuralBackEnd` will be removed in future versions and only the UEAndORT back end will be supported.
 | `SetBackEnd(...)` | Do not use; `ENeuralBackEnd` will be removed in future versions and only the UEAndORT back end will be supported.
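Asynchronous inference via `FOnAsyncRunCompleted` can be sketched as below, assuming `Network` is a loaded `UNeuralNetwork*`; the `ENeuralThreadMode::GameThread` member name is an assumption based on the `ThreadModeDelegateForAsyncRunCompleted` description:

```cpp
#include "NeuralNetwork.h"

void RunAsync(UNeuralNetwork* Network)
{
	// Run() will return immediately and execute on a background thread.
	Network->SetSynchronousMode(ENeuralSynchronousMode::Asynchronous);

	// Fire the completion delegate on the game thread (the
	// recommended mode).
	Network->SetThreadModeDelegateForAsyncRunCompleted(
		ENeuralThreadMode::GameThread);

	Network->GetOnAsyncRunCompletedDelegate().BindLambda([Network]()
	{
		// Inference finished; the output tensor(s) are ready to read.
		const FNeuralTensor& Output = Network->GetOutputTensor();
	});

	Network->Run();
}
```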