Module | AugmentedReality
---|---
Header | /Engine/Source/Runtime/AugmentedReality/Public/ARSessionConfig.h
Include | `#include "ARSessionConfig.h"`
```cpp
class UARSessionConfig : public UDataAsset
```
Name | Description
---|---
bEnableAutoFocus | Whether the camera should use autofocus or not (autofocus can cause subtle shifts in position for small objects at macro camera distances).
bEnableAutomaticCameraOverlay | Whether the AR camera feed should be drawn as an overlay or not. Defaults to true.
bEnableAutomaticCameraTracking | Whether the game camera should track the device movement or not. Defaults to true.
bGenerateCollisionForMeshData | Whether the AR system should generate collision data from the mesh data or not.
bGenerateMeshDataFromTrackedGeometry | Whether the AR system should generate mesh data that can be rendered, collided against, have a nav mesh generated on it, etc.
bGenerateNavMeshForMeshData | Whether the AR system should generate navigation mesh data from the mesh data or not.
bHorizontalPlaneDetection | Whether to detect flat horizontal surfaces, e.g. table tops, window sills.
bRenderMeshDataInWireframe | Whether the AR system should render the mesh data in wireframe or not.
bResetCameraTracking | Whether the AR system should reset camera tracking (origin, transform) or not. Defaults to true.
bResetTrackedObjects | Whether the AR system should remove any tracked objects or not. Defaults to true.
bTrackSceneObjects | Whether the AR system should report scene objects.
bUseMeshDataForOcclusion | Whether the AR system should render the mesh data as occlusion meshes or not.
bUsePersonSegmentationForOcclusion | Whether to occlude the virtual content with the result from person segmentation.
bVerticalPlaneDetection | Whether to detect flat vertical surfaces, e.g. paintings, monitors, bookcases.
CandidateImages | The list of candidate images to detect within the AR camera view.
CandidateObjects | A list of candidate objects to search for in the scene.
DesiredVideoFormat | The desired video format (or the default if not supported) that this session should use if the camera is enabled. Note: call GetSupportedVideoFormats to get a list of device-supported formats.
EnabledSessionTrackingFeature | A list of session features to enable.
EnvironmentCaptureProbeType | How the AR system should handle texture probe capturing.
FaceTrackingDirection | Whether to track the face as if you are looking out of the device or as a mirror.
FaceTrackingUpdate | Whether to track the face as if you are looking out of the device or as a mirror.
FrameSyncMode |
LightEstimationMode |
MaxNumSimultaneousImagesTracked | The maximum number of images to track at the same time. Defaults to 1.
SerializedARCandidateImageDatabase | Data array for storing the cooked image database.
SessionType |
WorldAlignment |
WorldMapData | A previously saved world that is to be loaded when the session starts.
Name | Description
---|---
UARSessionConfig() |
Name | Description
---|---
PlaneDetectionMode_DEPRECATED |
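In practice a UARSessionConfig is usually authored as a data asset in the editor (where the properties above are set) and then passed to UARBlueprintLibrary::StartARSession. A minimal sketch, assuming a project module that depends on the AugmentedReality module and a hypothetical asset path /Game/AR/DefaultARSessionConfig:

```cpp
#include "ARSessionConfig.h"
#include "ARBlueprintLibrary.h"

void StartConfiguredARSession()
{
	// Load a UARSessionConfig data asset authored in the editor.
	// The asset path here is an assumption for illustration.
	UARSessionConfig* Config = LoadObject<UARSessionConfig>(
		nullptr, TEXT("/Game/AR/DefaultARSessionConfig.DefaultARSessionConfig"));

	if (Config)
	{
		// Start the AR session with the loaded configuration;
		// the platform honors settings such as plane detection,
		// the camera overlay, and light estimation from the asset.
		UARBlueprintLibrary::StartARSession(Config);
	}
}
```

This cannot run outside the engine; it is a configuration sketch, not a standalone program.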