In-Camera VFX Best Practices

Ways to improve your In-Camera VFX project for best results


There are a number of challenges around building an environment that will be used for an in-camera VFX shoot. The two main concerns are:

  • Building assets that look real and hold up on an LED wall.

  • Making the environment performant enough to run in real-time.

Use this page as a reference for these topics as you develop your projects for in-camera VFX.

Lighting and Rendering

  • Efficient lighting: Use alternatives to expensive lighting such as ray tracing or overuse of dynamic lights (in other words, use light baking, screen space reflections / fallbacks, and so on).

  • Ray Tracing is not Path Tracing. Many customers are confused by what ray tracing actually means in the context of real-time rendering in Unreal Engine 4. Until Unreal Engine 5 launches, better-looking and more stable image quality can be achieved using baked lighting with volumetric lightmaps, reflection probes, and screen space reflections. After Unreal Engine 5 launches, this knowledge will still apply for the best quality and performance in static scenes: Lumen will not make lightmaps obsolete, because its performance impact means it might not be suitable in every scenario.

    • While ray tracing is required for GPU Lightmass to work, it does not require the additional ray tracing features that are enabled by default, such as ray-traced shadows, ambient occlusion, and reflections. Instead, it's best to disable all of these features using the following console command:
      r.RayTracing.ForceAllRayTracingEffects=0. Refer to the main GPU Lightmass Global Illumination documentation for more information.

    • When you do need ray tracing, we recommend going through all master materials and optimizing them for the ray-traced scene using the RayTracingQualitySwitch.
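The disable-everything approach above can also be applied project-wide rather than typed per session. A minimal sketch of an Engine `ConsoleVariables.ini` entry, assuming a baked-lighting show where GPU Lightmass is the only consumer of ray tracing (the cvar itself is the one named on this page):

```ini
; ConsoleVariables.ini -- sketch for a baked-lighting project.
; Forces all ray tracing effects (ray-traced shadows, AO, reflections) off,
; while leaving ray tracing available for GPU Lightmass baking.
r.RayTracing.ForceAllRayTracingEffects=0
```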

  • The Cinematic Engine Scalability quality settings are almost never needed. Instead, we recommend lowering the quality settings to the point where they look almost identical to Cinematic, as this can save a lot of performance in some scenes.

  • At the start of a project, go through all the Rendering settings in the Project Settings and enable or disable everything you need for the project. This will recompile all shaders. With a Derived Data Cache set up, you shouldn't run into many shader recompiles after that.

  • Some customers balk at baked lighting because they want to be able to change things on the fly, so it is important to highlight the value of lighting scenarios, and how to properly use both baked and dynamic lighting together.

    • The other really important detail about baked lighting is to keep your lightmap resolutions sensible. They don't need to be 4096x4096.

  • If you're not using baked lighting, disable Allow Static Lighting in the Project Settings.
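For a fully dynamic show, the same setting can be checked into source control instead of toggled in the UI. A sketch of the equivalent `DefaultEngine.ini` entry (this mirrors the Allow Static Lighting checkbox in Project Settings > Rendering; changing it triggers a full shader recompile):

```ini
; DefaultEngine.ini -- sketch: disable static lighting for a fully dynamic project.
[/Script/Engine.RendererSettings]
r.AllowStaticLighting=False
```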

  • Volumetric Clouds and lighting effects are expensive. Be careful of relying on them.

  • Volumetric Fog is also expensive; consider regular fog or shader tricks instead. If Volumetric Fog is needed, try turning the quality down. To optimize Volumetric Fog use the following CVars:

    • r.VolumetricFog.GridPixelSize 8: the default value is 8; increase it for better performance, for example to 16.

    • r.VolumetricFog.GridSizeZ 128: the default value is 128; decrease it for better performance, for example to 64.
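The two cvars above can be set together in `ConsoleVariables.ini` so every render node starts with the cheaper fog configuration. A sketch, with the example values from this page (profile to find the right trade-off for your scene):

```ini
; ConsoleVariables.ini -- sketch of a cheaper Volumetric Fog setup.
r.VolumetricFog.GridPixelSize=16   ; default 8; larger = coarser XY grid, faster
r.VolumetricFog.GridSizeZ=64       ; default 128; fewer depth slices, faster
```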

  • Lightmaps (CPU & GPU): Make sure to set up proper lightmaps. Refer to the Unwrapping UVs for Lightmaps documentation for more information.

  • Overlapping Dynamic Lights: Be mindful of each light's radius and falloff; tweak these settings to reduce overlap.

  • Dynamic Shadows: If dynamic shadows are a must-have, be sure to balance carefully between the performance and visual quality. There are several ways to improve performance with only minimal visual impact.

    • Simplify shadows for complex meshes using Proxy Geometry Shadows.

    • When using Cascade Shadow Maps, you can reduce the Dynamic Shadow Distance and Num Dynamic Cascade Shadows settings for better performance.

    • Consider switching to Distance Field Shadows and/or Far Shadow Cascades.
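As one illustrative starting point for the Cascade Shadow Map advice above, some of the standard shadow cvars can be capped globally. The cvar names below are stock UE4 shadow cvars rather than values taken from this page, and the numbers are placeholders to profile against, not recommendations:

```ini
; ConsoleVariables.ini -- sketch of dialing back Cascaded Shadow Map cost.
r.Shadow.CSM.MaxCascades=2       ; cap the number of cascades per light
r.Shadow.MaxCSMResolution=1024   ; lower the per-cascade shadow map resolution
```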

Asset Creation and Level of Detail

  • Level of Detail: Use meshes with different poly counts depending on how large or small the mesh is rendered on screen. LOD selection is based on screen size.

  • Representations:

    • Use Hero representation for when an asset is extremely close to the camera.

    • Use Proxy representation for when an asset is further from the screen and the texture and poly detail isn't needed. One method is to build the hero geometry with complex shading and high-resolution textures, and then use that to create lower-resolution geometry and bake down the textures using a simplified shader.

    • If an asset needs to be both near and far from the camera at different points during the shoot, use LODs for flexibility.

    • Proxy meshes can be generated in-Editor for individual or clusters of background assets. Refer to the Proxy Geometry Overview documentation for more information.

    • Localized clumps of the same asset can be grouped together with Merge Actors > Instance mode. You can use LODs and other performance-improving techniques with merged Actors as well.

  • Automatic LODs can produce softer / indistinct-looking assets. Budget resources for some hand-crafted LODs.

  • Get optimized versions approved by the Art Director, or whoever else is signing off on the look. Some studios are reluctant to optimize or change techniques late in the project because a certain look has already been approved.

  • Optimize as you go; don't leave it until the end. This also helps secure Art Director approval earlier, since the project is always in an optimized state.

  • Don't underestimate the cost of quad overdraw caused by too much translucency or overly dense meshes. Models should be built for the screen size at which they will appear. A mesh so dense that it looks solid in wireframe view will perform very badly.

    • It's important to understand just how much of your mesh will actually be visible on screen. When we reviewed some of the In-Camera VFX projects earlier in 2021, many teams weren't properly considering how geometry would be used. For example, if a mountain only fills 100 x 100 pixels on an LED wall, it doesn't need to be 2 million triangles.

    • The same consideration applies to textures. The GPU probably won't even try to stream in an 8K texture until the asset is right in front of the camera.

  • Reuse assets so the engine can instance them. Avoid using many unique meshes; this is less performant.

  • Reduce the number of Material IDs on a mesh as much as possible; ideally, keep it at one Material per asset.

  • Use master Materials as much as possible for easier mass changes.

  • Use more complex shading for assets close to the camera. Use simpler shading for assets further from the camera, with baked textures feeding the different channels rather than a shader computing them at run time.

  • High-resolution geometry is almost always bad for both visuals and performance; more is not better in real-time production.

  • For mipmapping and streaming reasons, textures need to be a power of 2 in both dimensions. For example, 128x256 and 256x256 are fine, but 200x300 is not.

  • Are Decals being used? If not, disable Deferred Decals in the Project Settings.

  • Don't use video elements; use Flipbooks.

    • With In-Camera VFX, you want to be sure that all the render nodes driving the wall are synchronized. Because of the way video elements are encoded, there is no good way to guarantee that each machine is displaying the same frame of the video.

    • Using a texture flipbook animated through a material parameter, or an EXR sequence being driven by a level sequence that is being played at the same time, are two good ways to have a synchronized animated sequence.

On Stage

  • Have a 3D representation of the stage in the level, and make sure it is completely empty and the ground is perfectly flat. The LED wall volume can't have any 3D assets for the shoot.

  • Build dynamic controls into a Blueprint that can be controlled using something like an iPad application while on set.

  • Direct sun lighting is difficult to emulate on a stage, so consider placing the stage in some kind of shaded space if outdoors.


  • Always do benchmarking and work towards a target FPS on artist workstations that are comparable to the In-Camera VFX stage target FPS.

  • Always do performance profiling on the same computer as the one in the studio. Also, run it in the same configuration for one of the nodes with the highest output resolution. This means a proper inner and outer frustum setup for a single node with the maximum resolution used in the studio. Don't use the 2x or 3x target framerate approach, not all scenes scale the same.

  • Customers should be running performance tests on a nightly basis and reporting results to everyone on the project. Performance is everyone's responsibility, and testing early and often means you're not surprised by bad performance when you're finally on set.

  • Windows scaling can affect Engine performance.

    ![Windows scaling](scaling_ICVFX.png)

  • Use Unreal Insights to generate nDisplay traces. Refer to this live stream for a full explanation of how to do so.

See the documentation for Testing and Optimization and specifically Performance and Profiling for additional resources on improving performance in your project.

Project Settings

  • We recommend disabling the Virtual Reality and Mixed Reality plugins. This will reduce the resources required to run Unreal Engine 4, and, other than for Virtual Scouting, they are of limited use for virtual production.
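Plugins can be disabled in the Plugins window, or checked into the project's .uproject file so the whole team shares the setting. A sketch, assuming the stock UE4 SteamVR and OculusVR plugins are the ones enabled in your project (only the relevant fragment of the file is shown):

```json
{
  "Plugins": [
    { "Name": "SteamVR",  "Enabled": false },
    { "Name": "OculusVR", "Enabled": false }
  ]
}
```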
