
Godot Leveraging Timers

This is part 4 of 4 in a series about some recent performance tuning I completed on Elevation TD.

Intro

Start with the very basic assertion: anything that does not need to be done every frame should not be done every frame. There is no golden rule about what should or should not be done every frame – it depends highly on your specific game and what you are trying to achieve.

For example:

  • In a first-person shooter, movements often need to be instantaneous (i.e. checked every frame)
  • In a top-down strategy game, movements can be a bit laggy (i.e. an enemy might only evaluate whether it needs to go to a new target every second or two – it does not need to be checked every frame)

It’s important to remind yourself that if something only needs to happen once every 2 seconds instead of every frame, that is the difference between running some code 120 times (at 60 FPS) versus just once. Running it once will not only be less impactful on performance, but will also free up cycles to do a more complicated version of the process if desired. So if you have code that is checking for enemies each frame, checking whether the player is in range each frame, and seeking a new destination each frame, you might benefit a lot from using timers.

What is a timer?

Let me start off by saying that Godot has Timers you can add via Add Child Node and then link up with Signals. You can read about them here: https://docs.godotengine.org/en/stable/classes/class_timer.html. I don’t use them. I “roll my own” because I like to have a multitude of things I am running timers on and, frankly, it’s pretty easy to code.

Let’s take a very simple use case of “regoaling” – it’s common for enemies in a tower defense to periodically check whether they have a new target – they start out with a goal, new towers are built by the player, they have to change their goals to attack those new towers, more towers are built, they change goals again, etc… You don’t want them doing this constantly – maybe every 3 seconds or so – and you want it to be configurable. So put your regoaling logic in a function and call it from a timer like so:

# make the frequency of checking (in milliseconds) configurable from the editor
@export var reassessGoalFreq:float = 3000
# give yourself something to track it with
var nextReGoalTime:float = 0

func _process(delta):
  # if your tracking number is in the past, run regoal
  if nextReGoalTime < Time.get_ticks_msec(): regoal()

func regoal():
  # set your next check time in the future by whatever
  # reassessGoalFreq is set to
  nextReGoalTime = Time.get_ticks_msec() + reassessGoalFreq
  # ...
  # do whatever complicated logic you need to do to check
  # if this thing needs a new goal
  # ...

IMHO, the above is way easier than adding nodes, linking signals, setting callbacks, etc… This is basically just two variable declarations and two lines of code. The check in _process is very lightweight as you’re just comparing two floats. The way this plays out is:

  • On the first iteration of _process, nextReGoalTime will always be less than get_ticks_msec (the number of milliseconds since the start of the game), so it will fire off regoal() to set an initial goal
  • regoal() promptly resets nextReGoalTime to some point in the future – the current milliseconds since the start of the game plus whatever you have reassessGoalFreq set to. It’s actually kind of important to do this first – if you have anything in regoal() that does an await on a scene-tree timer, then _process might start doubling up calls to regoal() if nextReGoalTime isn’t in the future
  • regoal() finishes and life returns to _process() until you are reassessGoalFreq milliseconds in the future

It’s worth noting that since nextReGoalTime and reassessGoalFreq are class-level floats, you can adjust them as needed – for example, if you spawn a bunch of enemies at the same time and want to make sure they don’t all regoal at the same time, you can modify reassessGoalFreq with a random float so they are all operating at slightly different intervals.
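
A minimal sketch of that staggering, done at spawn time (the ±500 ms jitter range here is just an illustration – tune it to your game):

# hypothetical: give each enemy a slightly different regoal interval
func _ready():
  reassessGoalFreq += randf_range(-500.0, 500.0)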

In the case of Elevation TD, I have a whole set of these kinds of timers controlling enemy movements, resetting goals, checking if enemies should be attacking, checking if towers should be attacking, and so on. In my case, leveraging timers like in the above made a significant difference in FPS.

Conclusion

Using timers to control the frequency of events is a great example of the kind of “unsexy plumbing” in the code that can make a big difference in performance. The above implementation style is easy to wrap around a wide variety of situations. All you need to do to take advantage of it is identify processes that are currently running each frame that can be run much less often.


Godot Off-Screen Processing Control

This is part 3 of 4 in a series about some recent performance tuning I completed on Elevation TD.

Overview

Out of the box, Godot supports culling of off-screen object display – which is great, but if you have scripts attached to those culled-for-display-purposes-only objects, the scripts are still running, chewing up CPU for no displayable reason. Wouldn’t it be nice to mitigate that off-screen impact?

Godot gives you a way to do exactly that – it’s actually really easy to use, but you have to put a little thought into what you do with it. At any point in time, you can check whether the node a script belongs to is within the camera’s frustum (i.e. in the camera’s field of view) via Camera3D.is_position_in_frustum(global_position) – see: https://docs.godotengine.org/en/stable/classes/class_camera3d.html#class-camera3d-method-is-position-in-frustum.

Frustum Checking

With a pointer to your main camera, you can call this method any time you want to see if your script is attached to an object that is visible, using a conditional (in my case, I have a static class called “StateMachine” that the Camera is referenced through):

if StateMachine.mainCamera.is_position_in_frustum(global_position):
  # do on screen activities
else:
  # do an off-screen version of activities

You want to be a bit careful about how often you check this – don’t run it every frame. In the case of Elevation TD, I have enemies that perform a series of activities for which it might be interesting to check their on-camera/off-camera status:

  • Conduct a walking motion
  • Attack when in close proximity to targets
  • Do a little performance when they die

When these enemies are off-screen, I still need them to move, attack, search for targets, and die – the camera never has the whole battlefield so there’s pretty much always some enemies off-camera. However, I can periodically check to see if they are off-camera and then perform a lighter-weight version of each of these activities:

  • Don’t perform walking motion, just move to a next position
  • Don’t perform any of the visible parts of an attack, just damage the target
  • Don’t do a death performance, just die and remove yourself from the battlefield

So the enemy script checks if the enemy is off camera at the start of each step motion, when it attacks, and when it dies. You can see the difference in the graphic below – note the FPS when looking at all the enemies attacking vs the FPS when staring into the water:

Checking the camera’s frustum is an easy check – the harder part is coming up with lighter-weight versions of the script’s activities.
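
To make the pattern concrete, here is a minimal sketch combining the frustum check with the roll-your-own timer approach from the timers article above – StateMachine.mainCamera comes from my setup, the 500 ms interval is arbitrary, and the step functions are hypothetical placeholders:

var onCamera:bool = true
var nextFrustumCheckTime:float = 0

func _process(_delta):
  # refresh the on/off camera flag periodically, not every frame
  if nextFrustumCheckTime < Time.get_ticks_msec():
    nextFrustumCheckTime = Time.get_ticks_msec() + 500
    onCamera = StateMachine.mainCamera.is_position_in_frustum(global_position)

func take_step():
  if onCamera:
    performWalkingMotion()  # hypothetical: the full animated step
  else:
    global_position = getNextStepPosition()  # hypothetical: just relocate, no animation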

Bonus Round

Once you fold in some of these camera checks, it may occur to you that there are some other things you could check to govern whether to show a “full experience” vs a “lighter experience”. For example, what’s the current FPS…

In the above image with the battle scene, one of the things you might not realize is that there are three versions of an enemy’s death:

  • Off-Camera – Just die and go away, no drama
  • On-Camera – Explode, particle effects, and other drama
  • On-Camera and FPS is below 90 – Just explode, provide a watered down version of drama

Doing this provides another protective layer to control how much CPU processing is going on under the hood. Like checking the camera frustum, you don’t want to do this on every frame, but it can be handy to control FPS impact of specific events.

Doing this is a very simple check against the Engine API:

if Engine.get_frames_per_second() > 90:
  # perform extra dying acts for visual drama
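
Putting the frustum and FPS checks together, the three death variants might be dispatched something like this (a sketch – the two effect functions are hypothetical placeholders for whatever drama you spawn):

func die():
  if StateMachine.mainCamera.is_position_in_frustum(global_position):
    if Engine.get_frames_per_second() > 90:
      spawnExplosionAndParticles()  # full on-camera drama
    else:
      spawnExplosion()  # watered-down drama when FPS dips
  # off-camera (or once effects are spawned): just remove yourself
  queue_free()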

The nice thing about this kind of check is it allows you to provide a more visually rich presentation when there are fewer actors fighting it out on the battlefield – which you want because the fewer the actors on the field, the closer the player will be looking at them. However, in a larger battle scene, it will tone down the visuals, but the player will largely not notice because focusing on any one actor in a battlefield of two hundred other actors is unlikely.


Godot Performant Nav Agent

This is part 2 of 4 in a series about some recent performance tuning I completed on Elevation TD.

Background

Godot’s built-in navigation and agent system works reasonably well. Before proceeding with the rest of this article, please be sure you’ve read and implemented everything in Godot’s Nav Agent tutorial – we are assuming you have a working nav mesh with working nav agents and want to optimize agent performance:

https://docs.godotengine.org/en/stable/tutorials/navigation/navigation_using_navigationagents.html

Problem Description

Let’s say you have hundreds of Nav Agents running concurrently in your game and you’re starting to notice that performance is lagging. Try this: take that entire block of code Godot’s tutorial tells you to put in _physics_process and comment it out – hit play again and see if you just got a whole bunch of FPS back.

The FPS you got back represents an estimate of how much you are losing on frame-by-frame Nav Agent controls with the default implementation – of course, the problem is now nothing is moving, and you still need your Agents to move around and follow their paths to their targets.

Leave that code commented out and add the following – first, add two class-level vars called currentNavArray and nextWaypoint – then modify the set_movement_target function:

...in header...
var currentNavArray:PackedVector3Array
var nextWaypoint:Vector3

func set_movement_target(movement_target: Vector3):
  navigation_agent.set_target_position(movement_target)
  # give the agent a moment to repath itself
  await get_tree().create_timer(1.0).timeout
  # tell it to grab its next position
  navigation_agent.get_next_path_position()
  # grab the entire navigation path and store it
  # the path is simply an array of Vector3's
  currentNavArray = navigation_agent.get_current_navigation_path()
  # make sure it has at least one entry and grab the first
  if currentNavArray.size() > 0:
    nextWaypoint = currentNavArray[0]
  else:
    # otherwise leave at current location
    nextWaypoint = global_position

At this point, you have an array of the entire navigation path to the target position for this agent. You simply need to implement a very plain vanilla “go to next waypoint” system that is much lighter-weight than what Godot’s tutorial put in. Remove or comment out everything Godot’s tutorial put in _physics_process() and replace it with the below, in either _physics_process() or _process() (it does not need to be in _physics_process()):

if currentNavArray.size() < 2:
  # if you are almost at the end of your waypoints,
  # don't get closer than 7 to keep a bit of distance
  if global_position.distance_to(nextWaypoint) > 7:
    global_position = global_position.move_toward(nextWaypoint, movement_speed * delta)
else:
  global_position = global_position.move_toward(nextWaypoint, movement_speed * delta)
  # if you are close to your current waypoint, get the next one
  if global_position.distance_to(nextWaypoint) < 0.05:
    advanceWaypoint()

The above code eliminates multiple calls to the nav server to establish what the next Agent position should be in favor of a couple of move_toward()’s, which have much less overhead. It’s important to note that one of the things the Nav Agent takes care of is not actually moving the Nav Agent right on top of the target – in the above example, if you are on the last waypoint, it will keep the agent a distance of 7 from the destination to stop it from moving right on top of the goal. If you are not on the last waypoint and you are less than 0.05 from the current waypoint, you call advanceWaypoint() to get the next waypoint.

The advanceWaypoint function is actually very simple:

func advanceWaypoint():
  if currentNavArray.size() > 1:
    currentNavArray.remove_at(0)
    nextWaypoint = currentNavArray[0]
    # rng is a RandomNumberGenerator created elsewhere; the jitter
    # keeps enemies from all walking the exact same line
    nextWaypoint.x += rng.randf_range(-1, 1)
    nextWaypoint.z += rng.randf_range(-1, 1)
  else:
    nextWaypoint = global_position

If you have more than one waypoint left in currentNavArray, remove position 0, pulling the rest of the array forward, and then grab the new one at position 0. This is also an opportunity to modify that waypoint to add some randomness to movements and avoid a bunch of enemies forming a “conga line”. If you are already at the last position, just set the waypoint to global_position and the agent will go nowhere.

Presumably, you’ll have some events in your game that reset the target of the agent – when you call set_movement_target() with that new Vector3 target position, the process simply repeats itself: you set_target_position on the navigation_agent, rebuild the currentNavArray, and move waypoint-waypoint-waypoint.
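
For example, a hypothetical retargeting hook (the signal handler and tower node are stand-ins for whatever events your game raises):

# called when the player builds a new tower this enemy should attack
func _on_new_tower_built(tower:Node3D):
  set_movement_target(tower.global_position)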

Conclusion

In my case, working on Elevation TD, each enemy is a Nav Agent, so I had scenarios where there might be hundreds of Nav Agents at one time. For me, the above was a significant performance boost. I can easily see that the above agent code might:

  1. Not show significant performance gains in situations where there are very few agents
  2. Create very “coarse-grained” pathing movement – it’s probably not “reactive” enough if you were working on an FPS-style game – but for something like an overhead strategy or tower defense game, that coarseness might not matter (it may even be desirable)
  3. Force you to implement your own Agent behavior (like not getting closer than 7 to the destination) – some of that “overhead” we’re getting rid of handles things like agent avoidance and various other nuances of agent behavior. In my case, it didn’t matter – in other cases, it might, especially if you want very fine-grained movements.


Godot RenderingServer

This is part 1 of 4 in a series about some recent performance tuning I completed on Elevation TD.

Background

Elevation TD is a tower defense game in which everything in a level is instantiated dynamically – the landscape is built tile-by-tile, each one nudged around with materials applied at runtime to create a unique look each time you play – enemies are constructed at runtime from a library of “bodies” and “legs” to create visual diversity – same with towers and the objects thrown around like “shots” and even the small decorative plants and rocks. Each time you visit a level, it follows a general template to position things, but it never looks the exact same twice.

The drawback for all of this on-the-fly construction is performance. Once you add up all the individual objects, you have thousands of Node3D’s wrapping around thousands of meshes, and the default workflow in Godot of “put a GLB in a Node3D and tweak its positioning and other characteristics” starts to not scale. This is where RenderingServer comes in.

Heads Up

If you are not comfortable coding, stop here. This isn’t an easy haul, but it can give you some solid performance boosts – in my case it was fairly significant, but not everyone has a game with thousands of distinct objects. If you don’t have a lot of distinct visual objects, this path might not be worth it.

When you have a Node3D that you drop in a scene, you can really spend a lot of time adjusting its look and feel, nesting all kinds of stuff inside it, and it all comes along for one happy ride. When you use the RenderingServer, you are basically writing a distinct mesh – and only the mesh – to the RenderingServer. You lose anything nested under it; you lose its scale, rotation, and position; you will need to reapply all of those aspects in code; and you lose almost all ability to manage it in the Godot editor. That’s the bad news. The good news is that you’ll be writing that mesh and all its display instructions directly to Godot’s rendering server, so it’s much faster and lighter weight.

The Basics

Godot provides a good discussion around RenderingServer along with a sample implementation here:

https://docs.godotengine.org/en/stable/tutorials/performance/using_servers.html

You should familiarize yourself with the RenderingServer API doc at Godot – you’ll need it to do much more than the basics:

https://docs.godotengine.org/en/stable/classes/class_renderingserver.html

A Simpler Example

Let’s start with the simple version, where you instantiate a mesh at a position and never touch it again – like the landscape and the small rocks and chunks of ice below:

At the class level:

var yourMesh

func _ready():
  yourMesh = <wherever you get your meshes from>

  #
  # Mesh Renderer Instance for ground decoration
  # We'll call the instance "tmpDecoInstance"
  # You should have a mesh stored in the variable
  # yourMesh at the class level
  #

  # first get your instance ID and scenario
  var tmpDecoInstance = RenderingServer.instance_create()
  var tmpScenario = get_world_3d().scenario

  # then set your scenario and base
  RenderingServer.instance_set_scenario(tmpDecoInstance, tmpScenario)
  RenderingServer.instance_set_base(tmpDecoInstance, yourMesh)

  # create your Transform3D and set its origin up front
  # (decoPos is wherever you want the decoration to appear)
  var tmpxform = Transform3D(Basis())
  tmpxform.origin = decoPos

  # apply rotations as needed (in this case, randomized)
  # 90 degrees in radians = 1.5708; 360 degrees = 6.28319
  tmpxform = tmpxform.rotated_local(Vector3.UP, rng.randf_range(0, 6.2))

  # set scale
  var tmpScale = Vector3(1, 1, 1) # or whatever scale you need
  tmpxform = tmpxform.scaled_local(tmpScale)

  # set the instance to position at the transform
  RenderingServer.instance_set_transform(tmpDecoInstance, tmpxform)

This is basically the same example as in the Godot docs except that I’m setting rotation, origin, and scale – why? When you load meshes via the RenderingServer, you will quickly discover what their actual scale and alignment are, and it might not be what you think, especially if you didn’t create the mesh yourself. This was a rude awakening for me, as I had meshes from a variety of different sources – a small flower was suddenly huge and sideways and a giant boulder was suddenly tiny. You either need to set the scale/rotation in code or resize/reorient the meshes in Blender.

Once you’ve pulled the mesh out of whatever it was in, you might realize that you need to set its material:

RenderingServer.instance_geometry_set_material_override(tmpDecoInstance, yourMaterial.get_rid())

In case it’s not already clear, you’ll need to do this once per mesh you want to display – so if you have a character or structure that you “kit-bashed” together from multiple GLB’s, you will need to either combine all those meshes into one mesh or iterate over the above chunk of code once per mesh.
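
A rough sketch of that iteration, assuming the kit-bashed pieces are MeshInstance3D children of an instantiated scene held in a hypothetical kitbashedScene variable:

# one RenderingServer instance per mesh in a kit-bashed scene
for child in kitbashedScene.get_children():
  if child is MeshInstance3D:
    var tmpInstance = RenderingServer.instance_create()
    RenderingServer.instance_set_scenario(tmpInstance, get_world_3d().scenario)
    RenderingServer.instance_set_base(tmpInstance, child.mesh)
    # ...then set its transform and material override as shown above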

Moving Things Around

You may have noticed that nowhere in the simple example did I add_child() anything – you can’t with the RenderingServer – the mesh does not exist as a Node that you can add to anything. This should inspire you to ask how you manage it – move it around, make it rotate, etc… That’s done via the instance RID that you get from RenderingServer.instance_create().

To make it easier to move instances around, let’s separate the creation of the instance from the manipulation of the instance – you create the instance in your _ready() and then you manipulate it via a func. In the below example, we’re representing a “shot” that is “fired” from an enemy at its target – for example, these trees throwing trees:

So first we create the instance in _ready, but we save the instance and the mesh at the class level:

var shotMesh
var shotInstance
var shotRotation:float = 0
var shotRotSpeed:float = 0.1 # radians added per placement - tune to taste
var shotScale = Vector3.ONE
var shotRotDirection = Vector3.LEFT

func _ready():
  shotInstance = RenderingServer.instance_create()
  var scenario = get_world_3d().scenario
  RenderingServer.instance_set_scenario(shotInstance, scenario)
  shotMesh = <wherever you get your mesh from>
  RenderingServer.instance_set_base(shotInstance, shotMesh)
  placeShot(Vector3(10000, -10000, 10000))

You’ll notice that looks very similar to the simple example, but it stops halfway through and calls that “placeShot” function – placeShot, as the name suggests, places the shot where you want it to be, along with handling rotation, scale, etc… (the initial call parks the shot far off-screen until it’s needed):

func placeShot(pos:Vector3):
  # advance the rotation and wrap at a full circle
  # (90 degrees in radians = 1.5708; 360 degrees = 6.28319)
  shotRotation += shotRotSpeed
  if shotRotation > 6.28319: shotRotation = 0
  # create transform
  var xform = Transform3D(Basis())
  # set global position first
  xform.origin = pos
  # rotate as needed
  xform = xform.rotated_local(shotRotDirection, shotRotation)
  # set scale
  xform = xform.scaled_local(shotScale)
  # push the transform to the instance
  RenderingServer.instance_set_transform(shotInstance, xform)

You’ll notice that “shotInstance” is leveraged as a pointer to the mesh that was instanced in the RenderingServer. To move the shot around, create a Transform3D representing its new position and orientation and then instance_set_transform the shotInstance to that Transform3D. Putting that all into a general function means you can just call placeShot() to position the visual wherever you need it, whenever you need it (i.e. from inside _process() more than likely).
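
For example, a hypothetical _process() gliding the shot toward a target – targetPos, shotSpeed, and shotActive are stand-ins for however you track your shots:

var targetPos:Vector3
var shotPos:Vector3
var shotSpeed:float = 20.0
var shotActive:bool = false

func _process(delta):
  if shotActive:
    shotPos = shotPos.move_toward(targetPos, shotSpeed * delta)
    placeShot(shotPos)
    if shotPos.distance_to(targetPos) < 0.1:
      shotActive = false # arrived - hand off to your damage logic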

A few things I will point out here:

  • https://docs.godotengine.org/en/stable/classes/class_transform3d.html – the Transform3D class is your friend – use it.
  • Note that each time placeShot is called, you are building the Transform3D from scratch – every time I tried to manage the Transform or Basis persistently, I got erratic behavior from the RenderingServer. Once I made peace with re-establishing all positioning factors each time on a new temporary Transform3D, things worked a lot more consistently.
  • Because I’m not persistently managing the transform, I needed to keep track of how rotated the object should be to create a smooth rotation. Each time you create a new transform, the rotation is reset, so you need to be prepared to tell it the rotation and scale you want on every call (thus managing the shotRotation variable at the class level).
  • You’ll notice the first thing I set on the Transform is its origin (i.e. the coordinates you want it to appear at) – do this first. If you attempt to perform operations like Transform3D.looking_at() and then set the origin, it does not work correctly. Origin first, everything else second works the most consistently.
  • You’ll notice I’m using scaled_local and rotated_local – again, this produces the most consistent results over a large series of updates.

I use the word “consistently” several times in those bullet points. Perhaps the most annoying thing about working with the RenderingServer is that when it doesn’t like what you are doing, objects tend to just disappear. I iterated many times over changes that should have either worked fine or made a very small difference to the visual display, only to have the mesh completely disappear with no errors in the console. Remember, when you use the RenderingServer, you are working outside the node tree, so you can’t even look at Remote and see what’s going on – you have nothing to fall back on except debugging in your own code. Once I got the above “recipe” in place, things worked pretty consistently.

What’s Your Mesh?

It’s worth taking a moment to remember that a GLB isn’t a “mesh” – it’s a collection of things, one of which is a mesh. If you instantiate a GLB, you can grab its first child and then get the mesh from that. There are a lot of examples out there (including Godot’s tutorial) where they just load(Path-to-Mesh) into a variable, but that’s if you actually have a literal Mesh resource – if you do that with a GLB or FBX, it won’t work right – you need to instantiate it:

# assumes the MeshInstance3D is the first child of the instantiated scene
var selectedMesh = arrayOfGLBS.pick_random().instantiate()
var yourMesh = selectedMesh.get_child(0).mesh

Summary

In my case, with dozens of concurrent shots, hundreds of enemies, and many hundreds of distinct landscaping elements, using the RenderingServer like above resulted in a significant improvement of FPS (something in the range of 20-40 FPS recovered). In the next post, we’ll explore optimizing Godot’s Nav Agent…


A Healthy NavMesh In Unity

I was recently working on a project where I wanted to have a generated terrain decorated with trees on which a NavMesh would be generated so the AI-driven enemies would be able to find appropriate paths to the player targets. I got the NavMesh working, but the NavMeshAgents were getting stuck, having traffic jams, and generally not following the paths I wanted them to.

Much of Unity’s out of the box NavMesh AI expects you to be designing the level in the editor, not generating it in code. So the first challenge was generating a NavMesh for a dynamically created environment where the “ground” was a random series of assembled GameObjects. This is where the Unity NavMeshSurface project helps out:

https://github.com/Unity-Technologies/NavMeshComponents/blob/master/Documentation/NavMeshSurface.md

Download it and add it to your project. Once you have this, go to all the GameObjects you use to build “the ground” of your NavMesh and add the component NavMeshSurface – also set the GameObject as Navigation Static. Once this part is done, you’ll need to call two lines of code at the end of your environment generation routine:

NavMeshSurface nm = GameObject.FindObjectOfType<NavMeshSurface>();
nm.BuildNavMesh();

At this point, you should be able to run your code to generate the environment and it will have a generated NavMesh. Note that BuildNavMesh() is not the lightest API around – you might notice a burp in performance depending on how large the area is – try to call it at an inconspicuous point in your project.

Now the annoying thing about generating your NavMesh on a decorated surface (like with trees) is that it assumes your decorations are obstacles unless you say they are not. The result is something like the NavMesh below where every tree on the landscape created a hole in the NavMesh, creating over-complex paths and resulting in “traffic jams” of the NavMeshAgents using the NavMesh (illustrated in the red path below) – the blue path is really what I wanted:

Sometimes you want trees to block navigation – sometimes you don’t. I did not. The NavMeshSurface package also contains a NavMeshModifier which can be used to instruct the NavMesh generation process to include/exclude objects from the build process. In this case, since I was already dynamically placing the trees, I added a line of code to attach the NavMeshModifier to each tree and tell the NavMesh generation process to ignore them:

tmpObj.AddComponent<NavMeshModifier>().ignoreFromBuild = true;
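
In context, that looks roughly like the below at decoration time – treePrefab and treeSpots are stand-ins for however you spawn and place your trees – just make sure it happens before the BuildNavMesh() call:

// spawn each tree and exclude it from the NavMesh build
foreach (Vector3 spot in treeSpots)
{
    GameObject tmpObj = Instantiate(treePrefab, spot, Quaternion.identity);
    tmpObj.AddComponent<NavMeshModifier>().ignoreFromBuild = true;
}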

This resulted in the below which was much better – notice how each tree no longer has a NavMesh hole around it:

The next challenge was that I sometimes modify the terrain, elevating certain GameObjects up, at which point they would no longer be part of the NavMesh. The result it gave me:

The one elevated GameObject at the red arrow did separate itself from the NavMesh, but it also lacked any kind of boundary – the blue arrows point to examples of a small “expected” boundary around NavMesh borders which helps the NavMeshAgents navigate cleanly – when you have an obstacle like that one elevated piece with no boundary, NavMeshAgents start bumping up against it, get stuck, think it’s not there, and sometimes waste a lot of time trying to go through it instead of around it. To solve this, you need to rebuild the NavMesh whenever you modify the landscape – again, the NavMeshSurface package makes this relatively easy.

At the end of the code I wrote that modifies the NavMesh, I added:

NavMeshSurface nm = GameObject.FindObjectOfType<NavMeshSurface>();
nm.UpdateNavMesh(nm.navMeshData);

This regenerates the NavMesh to incorporate changes – it also runs asynchronously so you don’t see a performance “burp”, but it also means the update isn’t “instant”, which was fine for me in this case. The end result was:

Notice how the elevated GameObject now has a nice NavMesh boundary padding around it – this helps the agents navigate around it smoothly and successfully.

By eliminating the holes in the NavMesh formed by placing trees and fixing the padding around modified areas of the NavMesh, I found the NavMeshAgents moved much more smoothly and orderly around the play area. A healthy NavMesh creates smoother, better pathing for your agents.

One other side bonus that I found reduced NavMeshAgent “traffic jams” – randomize the NavMeshAgent avoidancePriority – for example, put this code in your NavMesh agent’s Start() function:

agent = GetComponent<NavMeshAgent>();
agent.avoidancePriority = Random.Range(1, 100);

Every agent will have a variegated priority when evaluating movements that interfere with each other. In my case, I didn’t care who had the priority, but giving them different priority levels meant that agents in close proximity to each other did a much better job of “zippering” themselves together rather than fighting over who should be first.


My Own NavMeshAgent

While working on Mazucan, I recently had an experience that made me rethink a bit how to use NavMeshes in Unity. Let’s start with a quick talk about RigidBody and NavMeshAgent in Unity.

Unity’s NavMesh breaks down to three pieces:

  • NavMesh – which is a baked “map” of where AI agents can and cannot go (including elevation)
  • NavMeshAgent – which is the component that you add to things like the player’s character to make it recognize and use the NavMesh for path-finding
  • NavMeshObstacle – which creates holes / obstacles in the NavMesh (which we’re not going to get into here)

So you have a game where the moveable areas of the map are determined by the NavMesh, you attach NavMeshAgents to the player characters and the enemies the player fights, and you can use Unity’s AI engine to take care of the path-finding you inevitably need to do because there are lots of holes in your map. Relatively easy so far…

Now let’s say the game you’re building involves a lot of things throwing rocks at each other (like Mazucan) and you want those things to “react” to getting hit – you naturally would add a collider and rigidbody to those player and enemy pieces and it, well, sort of works… Sometimes it works great – sometimes things go flying off in weird circles, spin in place, or bounce up and down rapidly.

This is because the RigidBody and the NavMeshAgent are having a disagreement on what to do with that misbehaving GameObject. The NavMeshAgent is trying to keep the GameObject on its NavMesh path and the RigidBody is trying to enforce the laws of physics – the two don’t always align – in fact, they frequently disagree – it sounds like this:

RigidBody: Hey, we just got slammed on the x-axis with another object of equal mass so we need to move that way

NavMeshAgent: No freaking way – we’re going straight because I have a path on the NavMesh and I gotta get to the endpoint

RigidBody: Screw that – we’re falling over – physics rules all!

NavMeshAgent: I am leaving our feet glued to this NavMesh – you take me off this NavMesh and I’ll be completely lost, throw exceptions, and bring this game down!

Violent spinning ensues like a cat and a dog fighting

Unity’s answer to this is to click the isKinematic flag in the RigidBody (https://docs.unity3d.com/Manual/nav-MixingComponents.html) – this is basically tantamount to telling the RigidBody that we all know the NavMeshAgent gets what it wants and sometimes the laws of physics just have to wait because NavMeshAgent has a meltdown every time it falls off the NavMesh.

Physics hates being told it’s Kinematic

The problem with isKinematic is that physics basically loses all the time and everything becomes kind of stiff and rigid and non-reactive to environment events. You still get colliders and whatnot, but Kinematic physics is basically like non-physics, and I eventually decided I wanted my physics back for Mazucan – I want things to get whacked on the side and react, I want pieces to accidentally fall off edges, I want some amount of “randomness” introduced into the game via physics (I know – sounds backwards – physics gives you the unexpected).

There is an alternative – you *can* make your own NavMeshAgent and re-use the existing NavMesh for path finding. To be fair, this wasn’t my idea – I got it from some Reddit posts (that I have frankly lost track of) where someone suggested just getting the corners off a NavMesh path and using them like waypoints. At first, I dismissed the idea – later I realized it had a lot of merit. Here’s how it winds up working – assuming you already have a valid NavMesh setup – and yes, you’re going to have to write some code:

  • Add “using UnityEngine.AI” to your code
  • Create a method for setting the destination which takes a Vector3 – this method will need to call NavMesh.CalculatePath based on the destination and get back a NavMeshPath
  • Inside that NavMeshPath are its corners – this is literally just a sequenced array of Vector3’s representing each turning point on the path – save that at the class level because you’re going to continually reference it
  • Inside your update function, you’re going to iterate over each Vector3 in that array and do something to move towards the waypoint (Vector3.MoveTowards, or in the below example calling AddForce because it creates a nice rolling motion on the rocks I am rolling along the NavMesh)
  • Each time you reach one of the Vector3’s, move on to the next – you’re going to need to track which one you’re on (i.e. currentPathCorner in the below)
  • You might also need to turn each time you reach a corner to be pointing in the correct direction
  • Wrap it all in booleans so you can minimize the impact of having this in your update function (i.e. don’t execute the code if you’re at your final destination)

The net result is you no longer have a NavMeshAgent, but you can still leverage the NavMesh for path finding (which is a much harder thing to “roll your own”) and now you get happy little accidents when things get too close to edges:

One zinger in this is the difference in Y coordinates that the NavMesh wants versus the Y coordinates you use for your destination. All the Vector3’s from the NavMesh have a Y coordinate that’s on the NavMesh – if you use that as-is, your player pieces will try to shove themselves into the ground (assuming their pivot point is in their center, which it typically is). You can recalibrate around this by taking all the corner Vector3’s and resetting their Y coordinates to the Y coordinate of the GameObject being moved. Remember, the NavMesh only knows how to path to a destination that’s actually on it, with the same Y coordinate.

This is a rough version of what I wound up doing in Mazucan – there’s obviously a lot more to it, but what’s below is the core guts of the process.

// NOTE YOU CANNOT USE THIS CODE AS-IS
// IT ASSUMES YOU HAVE A WHOLE OTHER BLOCK OF CODE
// THAT TELLS IT WHERE TO GO AND THAT YOU
// WANT TO MOVE AROUND VIA APPLYFORCE AND OTHER
// STUFF - USE IT FOR REFERENCE ONLY

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class NavMeshAgentChaser : MonoBehaviour
{
    public float movementSpeed = 1;
    public float turningSpeed = 30;

    // internal private class vars
    private bool isMoving = false;
    private bool isTurning = false;
    private Vector3 recalibratedWayPoint;
    private NavMeshPath path;
    private int currentPathCorner = 0;
    private Quaternion currentRotateTo;
    private Vector3 currentRotateDir;
    private Vector3 groundDestination;
    private Rigidbody rb;

    void Start()
    {
        rb = transform.GetComponent<Rigidbody>();
    }

    private void Update()
    {
        if (isMoving)
        {
            // account for any turning needed
            if (isTurning)
            {
                transform.rotation = Quaternion.RotateTowards(transform.rotation, currentRotateTo, Time.deltaTime * turningSpeed);
                if (Vector3.Angle(transform.forward, currentRotateDir) < 1) isTurning = false;
            }

            // applying force gives a natural feel to the rolling movement 
            Vector3 tmpDir = (recalibratedWayPoint - transform.position).normalized;
            rb.AddForce(tmpDir * movementSpeed * Time.deltaTime);

            // check to see if you got to your latest waypoint
            if (Vector3.Distance(recalibratedWayPoint, transform.position) < 1)
            {
                currentPathCorner++;
                if (currentPathCorner >= path.corners.Length)
                {
                    // you have arrived at the destination
                    isMoving = false;
                }
                else
                {
                    // recalibrate the y coordinate to account for the difference between the piece's centerpoint
                    // and the ground's elevation
                    recalibratedWayPoint = path.corners[currentPathCorner];
                    recalibratedWayPoint.y = transform.position.y;
                    isTurning = true;
                    currentRotateDir = (recalibratedWayPoint - transform.position).normalized;
                    currentRotateTo = Quaternion.LookRotation(currentRotateDir);
                }
            }
        }
    }


    public void setMovementDestination(Vector3 tmpDest)
    {
        groundDestination = tmpDest;
        // game-specific: pin the destination's Y to the elevation the NavMesh sits at
        groundDestination.y = 2;
        currentPathCorner = 1;
        path = new NavMeshPath();
        NavMesh.CalculatePath(transform.position, groundDestination, NavMesh.AllAreas, path);
        // sometimes path winds up having 1 or less corners - skip this setting event if that's the case
        if (path.corners.Length > 1)
        {
            isMoving = true;
            isTurning = true;
            // recalibrate the y coordinate to account for the difference between the piece's centerpoint
            // and the ground's elevation
            recalibratedWayPoint = path.corners[currentPathCorner];
            recalibratedWayPoint.y = transform.position.y;
            currentRotateDir = (recalibratedWayPoint - transform.position).normalized;
            currentRotateTo = Quaternion.LookRotation(currentRotateDir);
        }
    }


}

Navigating Hexagons without Math

The first time I worked with hexagon tiles was in the 1970’s – I was a dungeon master for a D&D group and I ran out of “normal” graph paper and somehow wound up with hexagonal graph paper to draw out a dungeon with. It looked cool, but it was hard to draw out a straight hallway unless you went with the “flat hexagon sides”. It didn’t lend itself well to the dungeon mastering experience (“You’re walking down a dark hallway when you hit an intersection – you can keep going straight or you can turn 60-ish degrees to your left-forward or 60-ish degrees to your backwards right”).

40 years later, board games and apps are predominantly hexagonally driven. Hexagons lend themselves better to “natural” shapes, bending rivers, rounded bottoms of mountains, etc… The traditional square grid pattern is largely relegated to “legacy support” of old chess and checker boards. However, there is one really important point to make about the tried and true square checker pattern:

Its really easy to work with – like CRAZY easy.

– Aaron

People use hexagons and squares for a really basic reason – they are two of the three regular shapes that tessellate properly (the third is triangles, but, if you’re building a grid with triangles, you’re weird and we’re not going to talk about them here). Tessellation is the notion that some shapes can be evenly repeated in a pattern with no empty spaces – you can read a good article about it here: https://mathworld.wolfram.com/Tessellation.html. Most shapes do not tessellate well or they only tessellate with some amount of distortion. This is why most grids in the world are based on squares or hexagons – they both tessellate very well and they are not triangles 🙂

Why are squares easier to work with than hexagons? Simple: their shape matches a standard coordinate system. Take any given square – to get the next one up, add one to your Y axis – next one down, take away one from your Y axis, add one to X to go over one way, subtract one from X to go the other. Easy peasy. There’s a reason people have been using square grids for hundreds of years.

Hexagons do not work that way. Depending on how you have them oriented, the next hexagon over in a grid might be X+1, or X+.66 and Y-1, or Y+1 and X-.5, and so on. What a pain in the butt. There’s a really cool article here https://www.redblobgames.com/grids/hexagons/ reviewing hexagon navigation theory – it’s a great article.

As an individual app developer, I have to make my world manageable. Reading someone tell me that I might want to consider adopting a 3-dimensional coordinate system to properly navigate my 2-dimensional grid (which, by the way, exists in a 3-dimensional game space) makes me want to go crying back to squares. So I spent some time working on this, googled a lot, and worked out a collider-based solution to hexagonal navigation – that’s right – no axial coordinates or cubed numbering systems – just plain old Unity stuff.

Let’s say you’re generating your own hexagonal grid and you want to make it so each tap highlights every contiguous hexagon for two hexagons out. However, the grid is non-contiguous – it has “land areas” and “water areas” and the contiguous space should not cross the water even though there is more land within that two hex radius:

If you add colliders to each hexagon as you lay them out (each one in this case is its own prefab), then you can superimpose a geometric figure that represents a radius that would constitute all neighboring tiles – in our case that superimposed geometry is an OverlapSphere:


Each overlap sphere is only big enough to touch the directly neighboring tiles. Then you can evaluate those neighboring tiles to see if they are acceptable to highlight and you can repeat the process, effectively “stepping your way” through tiles evaluating neighbors for as many iterations / depth as you wish.

// source is the vector3 representing the starting tile; 
// 1 is the radius - you might need to adjust based on 
// the size of your hexagons

Collider[] hitColliders = Physics.OverlapSphere(source, 1);
foreach (var hit in hitColliders){
  // you can now get hit.transform and do
  // whatever you need to evaluate if this
  // is a valid tile to highlight;
  // if you also take hit.transform.position and
  // feed it back into Physics.OverlapSphere
  // you can repeat the process and get further
  // neighbors - turn it into a function and call
  // it recursively if you want (watch out for 
  // endless looping!)
}    
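
Turned into a function and called recursively, as the comment above suggests, it might look something like this – a sketch, where the “LandTile” tag and the visited set are stand-ins for whatever validity checks your game needs:

// needs: using System.Collections.Generic;
HashSet<Transform> highlighted = new HashSet<Transform>();

void HighlightNeighbors(Vector3 source, int depth)
{
    if (depth <= 0) return;
    foreach (var hit in Physics.OverlapSphere(source, 1))
    {
        if (!hit.CompareTag("LandTile")) continue;     // skip water, etc.
        if (!highlighted.Add(hit.transform)) continue; // already visited - prevents endless looping
        // highlight hit.transform here, then step outward to its neighbors
        HighlightNeighbors(hit.transform.position, depth - 1);
    }
}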

For my purposes, I looped over the process three times to get “three-levels out” of neighboring tiles and it very nicely supports very non-contiguous grids:

Success!

This may sound like an inefficient way to do things, but remember, colliders are deeply integrated with Unity – they are very fast – and the results you get back are essentially plain old GameObjects, so you can help yourself out a bit by tagging things ahead of time to make the evaluation logic in the middle very easy. I could easily see someone spending a lot more CPU cycles on convoluted, inefficient, difficult to understand math path-evaluation algorithms, so I’m not sure either one is “better” or “more correct” than the other. Using collider-based OverlapSpheres does mean that this bit of code looks and works A LOT like the rest of the code being written for the app – to me, that continuity of implementation approach is a big value for long term stability and debugging.

Hope this helps some others out there.


My Year’s Journey

In December 2019, I decided to commit myself to learning a new skill, something that would be different than what I had done before. I also wanted to do something that would help create a lot of communication with my kids – unsurprisingly, they are not excited by discussing inventory levels or lambda functions, so I decided to take a whack at something more relatable to them, game development.

The last time I programmed a game was in the 1980’s, on an IBM PC running DOS, working in Microsoft QuickBASIC. So I started by Googling around for how to make games, found some systems based around javascript, said “Hey, I know that”, downloaded PixiJS and started learning.

PixiJS was actually pretty easy to pick up and work with, but the danger of sticking to things you are familiar with is that they are sometimes ultimately a poor fit for what you are trying to do. You can do a lot with JS – if you wanted to create embeddable, interactive web content, PixiJS is probably a decent direction to go in – but it’s not how 90+% of the gaming industry does its work and it ultimately limits your capabilities. You’re going to have a hard time creating immersive 3D in JS – or VR, or AR, or creating something that can run on a Switch or a PlayStation.

In the game industry, 90+% of everyone, from the single developer self-producing their own title to the AAA shops that put 100+ developers on a game at a time, use either Unity or Unreal. If you’re going to learn how to make modern games and you’re not using one of those two, you should look yourself in the mirror and ask what is so special about your requirements that you would not go down those paths. There are some legit use-cases to not use Unity or Unreal: maybe you only develop in the Apple ecosphere, never want to leave it and already know Xcode; maybe you need to make something that will run on a Raspberry Pi; maybe you really do just want something to slap on a website and that’s as far as you want to take it. For most everything else, you should really try Unity or Unreal.

I spent January 2020 trying both Unity and Unreal. They are both very good gaming IDE’s. In the end, I chose Unity. Why?

  • Better online training
  • Easier to code against

Unreal is a very, very good system. There is a reason why many high end games use it and why it’s making deep inroads into movie productions. Its graphics capabilities almost certainly eclipse Unity’s. It also seemed to me to be very geared towards people who do not want to code – most changes are done via graphs, so what could have been one line of code turns into adding a dozen nodes to a graph, interconnecting them, and setting their properties – to me, that’s significantly harder than writing one line of code. As a result, the training videos are often a monologue of someone saying “now click here, then click here, and then click this, then drag this and click that and drag the connector to click next to the last click and then click again” – that’s a huge turnoff for me personally. I might revisit Unreal at some point.

unity is a good thing…

Unity is much easier to write code against – it’s a strength of their IDE that is often also criticized as a weakness, because you will almost certainly have to write code at some point to use Unity (although they recently added Bolt visual graphing so you can do more things without writing code). Their tutorials tend to come in “programmer” or “designer” flavors, which I greatly appreciated, and, generally speaking, I found myself up and running much more quickly than with Unreal.

I spent February and March going through Unity’s Create with Code course: https://learn.unity.com/course/create-with-code. It’s a free 40-hour course, but if you’re considering using Unity – even if you have a programming background – I would recommend you do it. If you have a programming background, you’re going to find that tutorial very basic; however, part of what it’s trying to teach you is not just how to write code, but when to write code – and this is a very important distinction that took me a while to get my hands around.

Unity comes with a physics engine built in. You might think, “ok, that’s nice, but who cares – I can write code”. No, stop – do not get your old college physics book out and start writing code that describes the arc a physical object traverses while flying through the air with force applied to it. Instead, put a rigid body component on your object and simply apply a force to it – Unity will figure out how far it travels and where it falls – then add a collider to trigger an event when the object hits the ground, you know, like an explosion on landing.
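
The whole idea fits in a few lines – a sketch, where launchPower is a made-up value and the log line stands in for your explosion:

using UnityEngine;

public class Lobber : MonoBehaviour
{
    public float launchPower = 10f;

    void Launch()
    {
        // shove the rigidbody once; Unity's physics works out the arc and the landing
        GetComponent<Rigidbody>().AddForce((transform.forward + transform.up).normalized * launchPower, ForceMode.Impulse);
    }

    void OnCollisionEnter(Collision other)
    {
        // collider-driven event when the object hits something, e.g. the ground
        Debug.Log("landed on " + other.gameObject.name); // swap in your explosion here
    }
}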

If you work on a billiards game, for instance, you can simply put all the pool balls out there, wrap them in rigidbodies and colliders and just let them knock into each other and roll around however they see fit. The only code you really need to worry about is how the player controls the cue stick – everything else is just the physics of “solid” objects reacting to collisions and rolling around. This makes it incredibly easy to, for example, add obstacles to increase the amount of ricocheting that happens in the game without adding any code. Or to “slo-mo” everything by decreasing the speed of time, freezing everything in place by turning time off, controlling speed by increasing or decreasing drag or controlling the strength of gravity – these are all either settings or one-line code changes.

You do a lot with colliders in Unity – colliders create triggers when things touch each other. Colliders, combined with the physics engine, make it possible to implement games in a very “event-driven” model – if you embrace the approach, you will find it very flexible and it will reduce the number of lines of code you write. Flattery’s initial release, for example, was just over 3K lines of code – I’ve written HTML pages with more lines than that.

As a programmer, I found a lot of new concepts that I needed to get my head around – I had no idea what “normals” or “inverse normals” or “baked lighting” were, let alone what the difference was between “shaders”, “materials”, and “textures”. And quaternions – omg – “a complex number of the form w + xi + yj + zk, where w, x, y, z are real numbers and i, j, k are imaginary units that satisfy certain conditions.” There were also a lot of considerations about game mechanics, what makes for good game play, how you legally get music and artwork that can be distributed with your game, what kind of privacy policy you need – the list goes on and on.

I spent most of my “covid lockdown spare time” going through all these kinds of topics and considerations. I was happy that I was using Unity – it gave me common ground with thousands of others out there who were ready, willing, and able to give pointers about what to do or why to care. I also spent this time torturing my kids with review sessions – they were my “UAT testers”. They were very good at giving me constructive feedback and I don’t think I could have completed Flattery without them. Those review sessions were some of the longest discussions I have ever had with my son and daughter about software engineering – some review sessions literally went on for over two hours.

Somewhere around summer 2020, I decided I wasn’t going to be happy with myself unless I actually finished a game and got it up on the Apple App Store. I originally targeted Labor Day, but it wound up being Thanksgiving when I submitted the first build to Apple (for the record – no, I did not get approved on the first build). It might not be the greatest game ever, but I learned way more than I ever would have thought and got to spend a lot of time with my teenagers collaborating on a project, and that was really awesome.

I would highly recommend Unity for any kind of multimedia development (anyone remember Macromind Director?). I would also highly recommend this as a path to do something with software collaboratively with your kids – just remember, they are your target market, they are the business sponsors, listen to them, take their feedback seriously and they’ll feel more engaged.

Feel free to check out Flattery (its free): https://apps.apple.com/us/app/flattery/id1542242326


Distribution Issues to Apple

When it came time to submit Flattery to the Apple App Store, I was very surprised at some last minute packaging issues for the app that came up. After all, I had literally done hundreds of build-deploy’s to my phone, it seemed like it should submit to the App Store without any issues – however, submitting to the App Store apparently has some additional build requirements that are not required when deploying directly to your phone.

To be clear, I am working on Unity 2020.1.13f1 and the latest Xcode for MacOS 11.0.1.

There were two main issues:

  1. Inclusion of an illegal “frameworks” directory
  2. Inclusion of an empty SwiftSupport directory

No Embedded Frameworks

So your app builds and installs fine from Xcode – you make the Archive and submit it to the App Store and you get this message:

All Your Bundles Are Belong To Us

This is basically telling you that the Unity build process is leaving some files in a directory that Apple disagrees with – there’s nothing “wrong with your app” per se, it’s just got some extra, unnecessary junk floating around in it that Apple wants to see cleaned up before distributing on its App Store.

After much Googling and trial and error, here are the instructions I wish someone had been able to hand me: in Xcode, select the app target, open Build Phases, click the little “+” to add a New Run Script Phase, and paste in the script below.

I kept missing that little tiny plus sign…

The script in this case should be:

cd "${CONFIGURATION_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/Frameworks/UnityFramework.framework/"
if [[ -d "Frameworks" ]]; then
    rm -fr Frameworks
fi

Do another build, archive, and upload to the App Store.

Invalid Swift Support

With the first problem resolved, I was able to successfully upload the archive of my app, but after an upload completes, Apple runs additional checks on the packaging of your app – in my case I got an email that said:

ITMS-90424: Invalid Swift Support – The SwiftSupport folder is empty. Rebuild your app using the current public (GM) version of Xcode and resubmit it.

Signed, your friends at Apple

The problem was that I was using the latest Xcode, I did everything in Unity, and I wasn’t writing any Swift code. After Googling around, I found that others have resolved this issue as follows:

  1. Create the application Archive as you normally would – this should leave you at the Organizer window (or go Window > Organizer)
  2. Right click on your recently made Archive and Show in Finder
  3. Right click on the archive file in Finder and select Show Package Contents
  4. Delete the SwiftSupport directory
  5. Return to the Organizer window in Xcode and submit your app

I’m sure someone better versed in Xcode build processes could add another Run Script in the right place and then the manual deletion wouldn’t be needed. In the end, it’s not like I need to submit builds to the App Store every day, so I can live with this as part of the “build and distribution process”.

I want to be clear – all of the above was not something I figured out on my own – these are issues that many others have been posting about, and I would have gotten nowhere without their help. So thanks out to the community.

Reference URLs:

https://forum.unity.com/threads/2019-3-validation-on-upload-to-store-gives-unityframework-framework-contains-disallowed-file.751112/

https://stackoverflow.com/questions/25777958/validation-error-invalid-bundle-the-bundle-at-contains-disallowed-file-fr

https://developer.apple.com/forums/thread/125902

https://github.com/bitrise-io/build.issues/issues/31

https://developer.apple.com/forums/thread/654980