
Navigation Mesh Reference


Overview


Rather than representing the world as a series of connected points, a navigation mesh attempts a more accurate representation of an AI's configuration space via a connected graph of convex polygons. Because each node (polygon) is convex, we know an AI can move from any point in that node to any other point in it. The task of pathfinding through the graph thus simplifies to a search along a connected graph of nodes, similar to how a path search is performed on the waypoint graph Unreal currently employs. The difference between the two systems is that with the old method, once a path is generated you have no data other than the points along your path.

With a navigation mesh you have a path representing a series of polygons you need to walk through in order to reach your goal, but you also know exactly what the walkable space looks like along the way. Instead of having to hit each point exactly along a waypoint-graph-generated path, an AI now has all the information associated with the interfaces between nodes of the navigation mesh. This allows accurate and practically free corner cutting, and in general much more natural-looking movement.
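To illustrate, the corner cutting just described can be sketched as steering through the point on each shared polygon edge ("portal") closest to the goal, rather than through a fixed node center. Below is a minimal 2D sketch; all names are hypothetical and not engine code:

```cpp
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Closest point to P on segment AB: clamp the projection of AP onto AB.
static Vec2 ClosestPointOnSegment(const Vec2& A, const Vec2& B, const Vec2& P)
{
    const float ABx = B.x - A.x, ABy = B.y - A.y;
    const float LenSq = ABx * ABx + ABy * ABy;
    if (LenSq <= 0.f) return A;
    float T = ((P.x - A.x) * ABx + (P.y - A.y) * ABy) / LenSq;
    if (T < 0.f) T = 0.f; else if (T > 1.f) T = 1.f;
    return { A.x + T * ABx, A.y + T * ABy };
}

// Given the shared edges ("portals") along a polygon path, steer through the
// point on each portal nearest the goal instead of a fixed node center.
std::vector<Vec2> SmoothPath(const std::vector<std::pair<Vec2, Vec2>>& Portals,
                             const Vec2& Goal)
{
    std::vector<Vec2> Points;
    for (const auto& Edge : Portals)
        Points.push_back(ClosestPointOnSegment(Edge.first, Edge.second, Goal));
    Points.push_back(Goal);
    return Points;
}
```

A production implementation would also keep the chosen points inside the polygons' shared span (e.g. a funnel/string-pull pass); this sketch only shows why the portal data makes corner cutting nearly free.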

Fig A depicts an example of this:

FigA.gif (FIG A)

Note that even with a path of 10 nodes the waypoint graph's pathing behavior is inferior to that of the 4 node navigation mesh path.

Obstacle Mesh


In addition to the walkable mesh itself, we also generate a mesh which represents obstacles in the world. This ends up being 'walls' along the edges of the movement mesh. The purpose of this mesh is to allow low-fidelity raycasts (against the obstacle mesh only) when an AI needs to know whether it can walk directly from one point to another. This lets us skip a path search in wide-open areas, even when there are many polygons between the start and the goal. An octree containing polys from both meshes is generated and used for quick lookups of starting polys (which polygon am I in?) as well as goal polys (which polygon is my goal in?).
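The low-fidelity reachability query can be sketched in 2D as a line test against only the obstacle walls. This is a hypothetical sketch, not the engine's actual implementation:

```cpp
#include <utility>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of triangle O-A-B; the sign tells which side of OA point B is on.
static float Cross(const Vec2& O, const Vec2& A, const Vec2& B)
{
    return (A.x - O.x) * (B.y - O.y) - (A.y - O.y) * (B.x - O.x);
}

// True if segments PQ and RS properly intersect, which is enough for a
// coarse "can I walk straight there?" query.
static bool SegmentsIntersect(const Vec2& P, const Vec2& Q,
                              const Vec2& R, const Vec2& S)
{
    const float D1 = Cross(R, S, P);
    const float D2 = Cross(R, S, Q);
    const float D3 = Cross(P, Q, R);
    const float D4 = Cross(P, Q, S);
    return ((D1 > 0) != (D2 > 0)) && ((D3 > 0) != (D4 > 0));
}

// Low-fidelity reachability: cast only against obstacle-mesh wall segments,
// never against full world geometry.
bool DirectlyReachable(const Vec2& Start, const Vec2& Goal,
                       const std::vector<std::pair<Vec2, Vec2>>& Walls)
{
    for (const auto& W : Walls)
        if (SegmentsIntersect(Start, Goal, W.first, W.second))
            return false;
    return true;
}
```

In the real system the octree would first cull the wall set down to segments near the query, so the loop touches only a handful of walls.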

Generation Process


One of the biggest drawbacks most navigation mesh implementations suffer from is that creation of the mesh is left to artists and can be quite labor intensive (even more so than placing pathnodes). With this in mind, we decided to build a system that automatically generates the mesh without designer grunt work.

This process is completed in three stages.

1. Exploration

Starting with each position placed by a designer, the map is 'flood filled'. That is, according to some step size, each segment of the map is examined via raycasts and once verified, added to the mesh. At the end of this stage we end up with a high density mesh that resembles a grid. We are working with squares here due to the AABB nature of Unreal's line-checks. (Fig. C depicts the mesh after the first stage of mesh generation)

FigC.jpg

One disadvantage of this approach is that objects which are slightly out of phase with the step size being used for exploration can end up being far away from the boundary of the mesh. To alleviate this, during exploration when an obstacle is hit, the step size will be subdivided N times to achieve the desired level of accuracy.

(Fig. D depicts a section of the test map which benefits from subdivision)

FigD.jpg
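The exploration stage above can be sketched as a breadth-first flood fill over grid cells, where a walkability predicate stands in for the engine's raycast checks; the subdivision near obstacles is noted in a comment but omitted for brevity. All names are hypothetical:

```cpp
#include <functional>
#include <queue>
#include <set>
#include <utility>
#include <vector>

using Cell = std::pair<int, int>;

// Breadth-first flood fill from a designer-placed seed. IsWalkable stands in
// for the engine's raycast checks that validate each candidate square.
std::vector<Cell> ExploreMesh(Cell Seed,
                              const std::function<bool(Cell)>& IsWalkable)
{
    std::vector<Cell> Mesh;
    std::set<Cell> Visited;
    std::queue<Cell> Open;
    Open.push(Seed);
    Visited.insert(Seed);
    while (!Open.empty())
    {
        const Cell C = Open.front();
        Open.pop();
        if (!IsWalkable(C))
            continue; // obstacle hit: the real system subdivides the step here
        Mesh.push_back(C);
        const Cell Neighbors[4] = { {C.first + 1, C.second}, {C.first - 1, C.second},
                                    {C.first, C.second + 1}, {C.first, C.second - 1} };
        for (const Cell& N : Neighbors)
            if (Visited.insert(N).second) // only enqueue unseen cells
                Open.push(N);
    }
    return Mesh;
}
```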

2. Mesh Simplification

The trickiest (and most time-consuming) step is simplifying the mesh into something more reasonable to fit in memory and run pathfinding on. Currently, simplification is accomplished primarily via a slab-merge approach, followed by convex decomposition of those slabs into a minimal number of convex shapes. The whole process takes the form of the following steps:

  1. Square merge to reduce number of polys and speed up the following steps
  2. Merge all polys into concave slabs separated only by differences in slope
  3. Decompose concave slabs into convex shapes

Square merge

Square merge simply picks a starting node and expands it in all directions, trying to find the optimal (biggest-area) configuration of that starting square. This process is fast and reduces the number of nodes by a large factor, speeding up the rest of the pipeline. Here is a screenshot of the test map after square merge is completed:

pathtest_squaremerge.jpg

See UNavigationMeshBase::MergeSquares()
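A simplified sketch of the idea behind square merge: greedily grow a rectangle from a seed cell while every newly covered cell is still free. The real MergeSquares() searches for the best (biggest-area) configuration; this greedy version (hypothetical names) only illustrates the expansion:

```cpp
#include <set>
#include <utility>

using Cell = std::pair<int, int>;
struct Rect { int X0, Y0, X1, Y1; }; // inclusive bounds, in grid cells

static bool RowFree(const std::set<Cell>& Free, int Y, int X0, int X1)
{
    for (int X = X0; X <= X1; ++X)
        if (!Free.count({X, Y})) return false;
    return true;
}

static bool ColFree(const std::set<Cell>& Free, int X, int Y0, int Y1)
{
    for (int Y = Y0; Y <= Y1; ++Y)
        if (!Free.count({X, Y})) return false;
    return true;
}

// Greedily grow a rectangle from a seed cell while every newly covered cell
// is still unclaimed walkable space; the result replaces the cells it covers.
Rect GrowRect(Cell Seed, const std::set<Cell>& Free)
{
    Rect R{Seed.first, Seed.second, Seed.first, Seed.second};
    bool Grew = true;
    while (Grew)
    {
        Grew = false;
        if (ColFree(Free, R.X1 + 1, R.Y0, R.Y1)) { ++R.X1; Grew = true; }
        if (ColFree(Free, R.X0 - 1, R.Y0, R.Y1)) { --R.X0; Grew = true; }
        if (RowFree(Free, R.Y1 + 1, R.X0, R.X1)) { ++R.Y1; Grew = true; }
        if (RowFree(Free, R.Y0 - 1, R.X0, R.X1)) { --R.Y0; Grew = true; }
    }
    return R;
}
```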

Concave Slab Merge

This step merges adjacent polys (whether the result is convex or not) as much as possible. Polys that are too dissimilar in slope, or whose merge would pull the resulting shape too far off the original polys, will not be merged.

Here is what the mesh looks like after this step has been performed:

pathtest_nodecomp.jpg

Note: this step incorporates edge simplification, which smooths out the stair-step shapes caused by the grid nature of the original expansion process. To illustrate, here is a shot of the mesh with edge simplification turned off:

pathtest_nodecomp_noedgesimplification.jpg

See UNavigationMeshBase::MergePolysConcave()
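The slope-similarity test that gates slab merging might be sketched as a comparison of polygon normals against an angular tolerance. This is a hypothetical sketch, not the engine's code:

```cpp
#include <cmath>

struct Vec3 { float X, Y, Z; };

static float Dot(const Vec3& A, const Vec3& B)
{
    return A.X * B.X + A.Y * B.Y + A.Z * B.Z;
}

// Two polys may merge into the same slab only when their unit normals agree
// within a tolerance, i.e. their slopes are similar enough. Comparing the dot
// product against cos(MaxAngle) avoids any inverse-trig call.
bool SlopesCompatible(const Vec3& NormalA, const Vec3& NormalB,
                      float MaxAngleRadians)
{
    return Dot(NormalA, NormalB) >= std::cos(MaxAngleRadians);
}
```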

Slab decomposition

Once the mesh has been simplified into slabs of similar slope and height, we need to break them down into usable shapes. This is done via a convex decomposition process that uses an A* approach to find the optimal configuration of convex shapes to represent each concave slab. See UNavigationMeshBase::DecomposePolyToConvexPrimitives()
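The stopping condition of the decomposition, that each output shape be convex, can be checked with a standard cross-product sign test. A 2D sketch with hypothetical names:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float X, Y; };

// A polygon is convex when every consecutive edge pair turns the same way
// (all cross products share a sign); decomposition can stop once this holds.
bool IsConvex(const std::vector<Vec2>& Poly)
{
    const std::size_t N = Poly.size();
    if (N < 4) return true; // triangles are always convex
    bool HasPos = false, HasNeg = false;
    for (std::size_t I = 0; I < N; ++I)
    {
        const Vec2& A = Poly[I];
        const Vec2& B = Poly[(I + 1) % N];
        const Vec2& C = Poly[(I + 2) % N];
        const float Cross = (B.X - A.X) * (C.Y - B.Y) - (B.Y - A.Y) * (C.X - B.X);
        if (Cross > 0) HasPos = true;
        if (Cross < 0) HasNeg = true;
    }
    return !(HasPos && HasNeg);
}
```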

3. Mesh Finalization

Now that the mesh has been simplified, the final step is building path-able edges between nodes, and generating the obstacle mesh. Also during this step we cull unused vertices and massage the data for serialization.

(Fig. I depicts the mesh after all steps are complete, note the vertical surfaces which depict the obstacle mesh)

pathtest_fullybuilt.jpg

Benefits of a Navigation Mesh over pathnodes:


Reduction in node density

Since with a mesh we can represent a large area with a single polygon, overall graph density goes down. This is a win for many reasons:

  1. Memory footprint is reduced with the decrease in nodes being stored.
  2. Pathfinding times go down as the density of the graph being searched shrinks.
  3. Fewer nodes mean less time fixing up cross-level pathing information

(Fig. B shows an example of node density in MP_Gridlock using our current codebase)

gridlock_simpler.jpg

More optimal data structures

Currently, path data is stored via UReachSpecs and ANavigationPoints in the level. This bloats the memory footprint, both because of overhead from parent classes (AActor especially) and because of the distributed nature of the data. With a mesh, our data is stored in one big buffer, which lends itself more easily to compression and other optimizations. No significant effort has been made to optimize our data yet, but we are already seeing 20% gains over pathnodes in MP_Gridlock.

Obviation of FindAnchor

Currently, whenever starting a path search, an AI first needs to determine which pathnode it should start pathing from. This is accomplished via an octree check to return the pathnodes in range, followed by raycasts from the AI to those pathnodes to find the closest reachable one. The same must be done for the path destination if it is not already on the graph. Some of this can be (and is) mitigated via caching, etc., but the fact remains that a non-trivial number of raycasts must be done by pathing AIs periodically at run time. With a navigation mesh, the ambiguity which FindAnchor resolves does not exist. We simply find the polygon the AI is currently inside, and that is our start location. The same is true for our destination.
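The "what polygon am I in" query reduces to a point-in-convex-polygon test once the octree has returned candidate polys. A 2D sketch with hypothetical names:

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float X, Y; };

// Point-in-convex-polygon: P is inside when it lies on the same side of every
// edge. This replaces FindAnchor's raycast scan with a direct containment
// test (the octree narrows down which polys to check).
bool ContainsPoint(const std::vector<Vec2>& Poly, const Vec2& P)
{
    const std::size_t N = Poly.size();
    bool HasPos = false, HasNeg = false;
    for (std::size_t I = 0; I < N; ++I)
    {
        const Vec2& A = Poly[I];
        const Vec2& B = Poly[(I + 1) % N];
        const float Side = (B.X - A.X) * (P.Y - A.Y) - (B.Y - A.Y) * (P.X - A.X);
        if (Side > 0) HasPos = true;
        if (Side < 0) HasNeg = true;
    }
    return !(HasPos && HasNeg);
}
```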

Better pathing behavior

As demonstrated earlier (in Fig. A), there are several situations where movement along a waypoint graph can look unnatural. The closest pathnode to the AI may be behind it, or in the opposite direction from where it is going. The same problem exists for the goal.

No more raycasts

Using the data we generate into the navigation mesh, a significant portion of the raycasts AIs do can be eliminated. For example, when an AI first tries to move, an initial raycast is performed to determine whether the AI can go directly to its destination and avoid pathfinding on the network. This is no longer needed, for two reasons. First, in most cases if a point can be directly reached it will be in the same polygon as the AI, so it's a simple matter of finding the polygons for the start and goal and detecting that they are the same. Second, we can fall back on the obstacle mesh to do a low-fi linecheck to determine direct reachability. Both options are much cheaper than a raycast. There are several other instances in the Gears codebase where an AI asks if it can go directly to a point, all of which no longer need a raycast.

Another potential avenue for optimization is having AI move on the mesh itself (rather than running via PHYS_Walking). The mesh is a fair representation of the configuration space the AI can walk on, so it would be fairly trivial to project onto the mesh and do a single raycast to correct the AI onto the visible geo rather than the N raycasts per frame PHYS_Walking does. This would be especially useful for crowds. Since they probably don't need all the fidelity a normal AI does, we could potentially handle many more crowd actors at a time by snapping them to the navigation mesh rather than doing collision checks against world geometry. Indeed we should be able to increase the number of AI on screen in general.
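Snapping a crowd agent to the mesh could be as cheap as projecting its position onto the plane of the polygon it occupies. A sketch of that projection, assuming a unit-length polygon normal (hypothetical names):

```cpp
struct Vec3 { float X, Y, Z; };

static Vec3 Sub(const Vec3& A, const Vec3& B) { return {A.X - B.X, A.Y - B.Y, A.Z - B.Z}; }
static float Dot(const Vec3& A, const Vec3& B) { return A.X * B.X + A.Y * B.Y + A.Z * B.Z; }

// Snap an agent onto the plane of the poly it stands in: one projection
// instead of the per-frame raycasts PHYS_Walking performs. Assumes PolyNormal
// is unit length.
Vec3 SnapToPoly(const Vec3& AgentPos, const Vec3& PolyPoint, const Vec3& PolyNormal)
{
    const float Dist = Dot(Sub(AgentPos, PolyPoint), PolyNormal);
    return { AgentPos.X - Dist * PolyNormal.X,
             AgentPos.Y - Dist * PolyNormal.Y,
             AgentPos.Z - Dist * PolyNormal.Z };
}
```

The single corrective raycast mentioned above would then only be needed when the mesh plane and the rendered geometry diverge visibly.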

Strictly better representation of the world

A continuous representation of walkable space is beneficial for many other types of spatial queries an AI might perform. Some examples:

  • The process of determining a position to remain in squad formation is vastly improved, because one can check directly whether the desired formation position is in the mesh, and thus walkable. Prior methods relied on finding the closest path node to the formation position, which is expensive. Furthermore, the closest path node isn't necessarily anywhere near the formation position, and the result often looks bad.
  • AI are able to mantle over a wall at any point along the wall rather than having to go to a discrete pathnode which represents a 'mantle-able' location
  • Trivial adaptation of the mesh as an accurate influence map. Propagating across a hand-authored waypoint-graph is only marginally accurate due to the incomplete covering of the worldspace, and is reliant on human-placed nodes whereas our mesh is precise, and complete.

There are many other examples, but suffice it to say the increase in available data is beneficial to many ancillary AI behaviors.

Automatic generation

Because the generation process is automatic, the load on designers to create (and maintain) path networks for their levels is alleviated. The obvious benefit is that designers don't have to place the nodes in the first place, but the likelihood of paths being 'wrong' (i.e. someone changed the geometry without changing the path network) is also reduced, as the automatic generation process always does the 'right' thing. For example, there were several instances during Gears production where a fully scripted level was handed off for a visual pass, which consequently broke large portions of the path data on the level. The ability to build paths automatically provides an easy way out of this situation.

Intrinsic flexibility for agents of varying sizes

Another benefit of a more accurate representation of the world is that special considerations for entities of varying widths are no longer necessary. Rather than having to manually add a width class for every type of walking creature in use, we can make use of the extra data the mesh affords. Edge widths between polys are already computed, and we can also fall back on 'extent' linechecks against the low-fi obstacle mesh to obtain accurate reachability information for wide entities at runtime.
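The runtime width check might then look like comparing an agent's diameter against the precomputed portal-edge length. A hypothetical sketch:

```cpp
#include <cmath>

struct Vec2 { float X, Y; };

// An agent of the given radius can cross a portal edge only if the edge is at
// least as wide as the agent; since edge widths are precomputed at build
// time, the runtime check is a single comparison.
bool EdgeSupportsAgent(const Vec2& EdgeStart, const Vec2& EdgeEnd, float AgentRadius)
{
    const float DX = EdgeEnd.X - EdgeStart.X;
    const float DY = EdgeEnd.Y - EdgeStart.Y;
    const float EdgeLength = std::sqrt(DX * DX + DY * DY);
    return EdgeLength >= 2.f * AgentRadius;
}
```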

Potential for "real" handling of dynamic objects

When restricted to a waypoint graph, handling dynamic objects which get in the way is difficult, and sometimes impossible. For example, if you throw a crate onto a reachspec, the data says nothing about how to get around the obstacle. You can of course do raycasts and try to add a dynamic anchor to side-step the obstacle, but this a) requires raycasts, and b) doesn't work in many situations. Using a navigation mesh, everything necessary to avoid an obstacle can be done within the mesh, without raycasts. You simply take the bounding box of the obstacle and split the polys it intersects around the boundary. You now have a fully pathable mesh that describes how to avoid the obstacle without any line checks. The newly split polygon acts as a sub-hierarchy for that polygon: rather than adjusting the overall mesh, we build a mesh within the affected polygon, such that when an AI enters it, the AI pathfinds on the sub-mesh to navigate around the obstacle. See Fig. J for an example of this:

SplitExample.gif
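The splitting described above amounts to clipping the containing polygon against the faces of the obstacle's bounding box. Here is a sketch of one such clip, against a single vertical line, using a Sutherland-Hodgman-style pass (hypothetical names, 2D only):

```cpp
#include <cstddef>
#include <vector>

struct Vec2 { float X, Y; };

// Clip a convex polygon to one side of the vertical line X = SplitX (keep the
// side where X <= SplitX when KeepLeft, else X >= SplitX). Clipping against
// each face of an obstacle's bounding box this way carves the containing poly
// into the sub-polys an AI paths through to skirt the obstacle.
std::vector<Vec2> ClipByVerticalLine(const std::vector<Vec2>& Poly,
                                     float SplitX, bool KeepLeft)
{
    std::vector<Vec2> Out;
    const std::size_t N = Poly.size();
    auto Inside = [&](const Vec2& P)
        { return KeepLeft ? P.X <= SplitX : P.X >= SplitX; };
    for (std::size_t I = 0; I < N; ++I)
    {
        const Vec2& A = Poly[I];
        const Vec2& B = Poly[(I + 1) % N];
        const bool InA = Inside(A), InB = Inside(B);
        if (InA) Out.push_back(A);
        if (InA != InB) // edge crosses the line: emit the intersection point
        {
            const float T = (SplitX - A.X) / (B.X - A.X);
            Out.push_back({ SplitX, A.Y + T * (B.Y - A.Y) });
        }
    }
    return Out;
}
```

Running the same pass once per bounding-box face (and keeping both sides at each split) yields the sub-mesh; no line checks against world geometry are involved.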