Categories
unity

A Healthy NavMesh In Unity

I was recently working on a project where I wanted to have a generated terrain decorated with trees on which a NavMesh would be generated so the AI-driven enemies would be able to find appropriate paths to the player targets. I got the NavMesh working, but the NavMeshAgents were getting stuck, having traffic jams, and generally not following the paths I wanted them to.

Much of Unity’s out-of-the-box NavMesh AI expects you to design the level in the editor, not generate it in code. So the first challenge was generating a NavMesh for a dynamically created environment where the “ground” was a random series of assembled GameObjects. This is where the Unity NavMeshSurface project helps out:

https://github.com/Unity-Technologies/NavMeshComponents/blob/master/Documentation/NavMeshSurface.md

Download it and add it to your project. Once you have this, go to all the GameObjects you use to build “the ground” of your NavMesh and add the component NavMeshSurface – also set the GameObject as Navigation Static. Once this part is done, you’ll need to call two lines of code at the end of your environment generation routine:

NavMeshSurface nm = GameObject.FindObjectOfType<NavMeshSurface>();
nm.BuildNavMesh();

At this point, you should be able to run your code to generate the environment and it will have a generated NavMesh. Note that BuildNavMesh() is not the lightest API around – you might notice a burp in performance depending on how large the area is – try to call it at an inconspicuous point in your project.
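Put together with its using directive, the build call might look like the sketch below – `GenerateEnvironment` is a hypothetical stand-in for your own terrain-assembly routine:

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshSurface lives here in the NavMeshComponents package

public class EnvironmentBuilder : MonoBehaviour
{
    void Start()
    {
        GenerateEnvironment(); // hypothetical - your own terrain assembly routine

        // bake the NavMesh once the ground GameObjects exist
        NavMeshSurface nm = FindObjectOfType<NavMeshSurface>();
        if (nm != null) nm.BuildNavMesh();
    }

    void GenerateEnvironment()
    {
        // assemble your ground GameObjects here
    }
}
```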

Now the annoying thing about generating your NavMesh on a decorated surface (like with trees) is that it assumes your decorations are obstacles unless you say they are not. The result is something like the NavMesh below where every tree on the landscape created a hole in the NavMesh, creating over-complex paths and resulting in “traffic jams” of the NavMeshAgents using the NavMesh (illustrated in the red path below) – the blue path is really what I wanted:

Sometimes you want trees to block navigation – sometimes you don’t. I did not. The NavMeshSurface package also contains a NavMeshModifier which can be used to instruct the NavMesh generation process to include/exclude objects from the build process. In this case, since I was already dynamically placing the trees, I added a line of code to attach the NavMeshModifier to each tree and tell the NavMesh generation process to ignore them:

tmpObj.AddComponent<NavMeshModifier>().ignoreFromBuild = true;

This resulted in the below which was much better – notice how each tree no longer has a NavMesh hole around it:
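If you are placing the trees from code, the placement routine might look like this sketch – `treePrefab` and `PlaceTree` are hypothetical names for your own setup:

```csharp
using UnityEngine;
using UnityEngine.AI; // NavMeshModifier comes from the NavMeshComponents package

public class TreePlacer : MonoBehaviour
{
    public GameObject treePrefab; // hypothetical prefab reference

    public void PlaceTree(Vector3 position)
    {
        GameObject tmpObj = Instantiate(treePrefab, position, Quaternion.identity);
        // keep the tree out of the NavMesh bake so it doesn't punch a hole
        tmpObj.AddComponent<NavMeshModifier>().ignoreFromBuild = true;
    }
}
```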

The next challenge was that I sometimes modify the terrain, elevating certain GameObjects up, at which point they would no longer be part of the NavMesh. The result looked like this:

The one elevated GameObject at the red arrow did separate itself from the NavMesh, but it also lacked any kind of boundary. The blue arrows point to examples of the small “expected” boundary around NavMesh borders that helps the NavMeshAgents navigate cleanly. When you have an obstacle like that one elevated piece with no boundary, NavMeshAgents start bumping up against it, get stuck, think it’s not there, and sometimes waste a lot of time trying to go through it instead of around it. To solve this, you need to rebuild the NavMesh whenever you modify the landscape – again, the NavMeshSurface package makes this relatively easy.

At the end of the code I wrote that modifies the NavMesh, I added:

NavMeshSurface nm = GameObject.FindObjectOfType<NavMeshSurface>();
nm.UpdateNavMesh(nm.navMeshData);

This regenerates the NavMesh to incorporate changes – it also runs asynchronously so you don’t see a performance “burp”, but it also means the update isn’t “instant”, which was fine for me in this case. The end result was:

Notice how the elevated GameObject now has a nice NavMesh boundary padding around it – this helps the agents navigate around it smoothly and successfully.
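Pulling that together, the landscape-modification routine might look like this sketch – `ElevateTile` is a hypothetical method name:

```csharp
using UnityEngine;
using UnityEngine.AI;

public class TerrainModifier : MonoBehaviour
{
    public void ElevateTile(GameObject tile, float height)
    {
        // modify the landscape - this breaks the existing NavMesh around the tile
        tile.transform.position += Vector3.up * height;

        // asynchronously rebuild so the modified area gets proper boundary padding
        NavMeshSurface nm = FindObjectOfType<NavMeshSurface>();
        if (nm != null) nm.UpdateNavMesh(nm.navMeshData);
    }
}
```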

By eliminating the holes in the NavMesh formed by placing trees and fixing the padding around modified areas of the NavMesh, I found the NavMeshAgents moved much more smoothly and orderly around the play area. A healthy NavMesh creates smoother, better pathing for your agents.

One other side bonus I found that reduced NavMeshAgent “traffic jams” – randomize the NavMeshAgent’s avoidancePriority. For example, put this code in your NavMeshAgent’s Start() function:

agent = GetComponent<NavMeshAgent>();
// the int overload of Random.Range is max-exclusive, so this yields priorities 1-99
agent.avoidancePriority = Random.Range(1, 100);

Every agent will then have a different priority when evaluating movements that interfere with each other. In my case, I didn’t care who had the priority, but giving them different priority levels meant that agents in close proximity to each other did a much better job of “zippering” themselves together rather than fighting over who should be first.


My Own NavMeshAgent

While working on Mazucan I recently had an experience that made me rethink a bit how to use NavMeshes in Unity. Let’s start with a quick talk about Rigidbody and NavMeshAgent in Unity.

Unity’s NavMesh breaks down to three pieces:

  • NavMesh – a baked “map” of where AI agents can and cannot go (including elevation)
  • NavMeshAgent – the component you add to things like the player’s character so it recognizes and uses the NavMesh for path-finding
  • NavMeshObstacle – which creates holes / obstacles in the NavMesh (which we’re not going to get into here)

So you have a game where the moveable areas of the map are determined by the NavMesh, you attach NavMeshAgents to the player characters and the enemies they fight, and you let Unity’s AI engine take care of the path-finding you inevitably need to do because there are lots of holes in your map. Relatively easy so far…

Now let’s say the game you’re building involves a lot of things throwing rocks at each other (like Mazucan) and you want those things to “react” to getting hit – you naturally would add a collider and Rigidbody to those player and enemy pieces and it, well, sort of works… Sometimes it works great – sometimes things go flying off in weird circles, spin in place, or bounce up and down rapidly.

This is because the Rigidbody and the NavMeshAgent are having a disagreement on what to do with that misbehaving GameObject. The NavMeshAgent is trying to keep the GameObject on its NavMesh path and the Rigidbody is trying to enforce the laws of physics – the two don’t always align – in fact, they frequently disagree – it sounds like this:

Rigidbody: Hey, we just got slammed on the x-axis with another object of equal mass so we need to move that way

NavMeshAgent: No freaking way – we’re going straight because I have a path on the NavMesh and I gotta get to the endpoint

Rigidbody: Screw that – we’re falling over – physics rules all!

NavMeshAgent: I am leaving our feet glued to this NavMesh – you take me off this NavMesh and I’ll be completely lost, throw exceptions, and bring this game down!

Violent spinning ensues like a cat and a dog fighting

Unity’s answer to this is to click the isKinematic flag in the Rigidbody (https://docs.unity3d.com/Manual/nav-MixingComponents.html) – this is basically tantamount to telling the Rigidbody that we all know the NavMeshAgent gets what it wants and sometimes the laws of physics just have to wait, because NavMeshAgent has a meltdown every time it falls off the NavMesh.

Physics hates being told it’s Kinematic

The problem with isKinematic is that physics basically loses all the time and everything becomes kind of stiff, rigid, and non-reactive to environment events. You still get colliders and whatnot, but kinematic physics is basically non-physics, and I eventually decided I wanted my physics back for Mazucan – I want things to get whacked on the side and react, I want pieces to accidentally fall off edges, I want some amount of “randomness” introduced into the game via physics (I know – sounds backwards – physics gives you the unexpected).

There is an alternative – you *can* make your own NavMeshAgent and re-use the existing NavMesh for path finding. To be fair, this wasn’t my idea – I got it after reading some Reddit posts (which I have frankly lost track of) where someone suggested just getting the corners off a NavMesh path and using them like waypoints. At first, I dismissed the idea – later I realized it had a lot of merit. Here’s how it winds up working – assuming you already have a valid NavMesh setup – and yes, you’re going to have to write some code:

  • Add `using UnityEngine.AI;` to your code
  • Create a method for setting the destination that takes a Vector3 – this method will need to call NavMesh.CalculatePath with that destination and get back a NavMeshPath
  • Inside that NavMeshPath are its corners – this is literally just a sequenced array of Vector3s representing each turning point on the path – save it at the class level because you’re going to continually reference it
  • Inside your Update function, iterate over each Vector3 in that array and do something to move towards the waypoint (Vector3.MoveTowards, or in the example below calling AddForce because it creates a nice rolling motion on the rocks I am rolling along the NavMesh)
  • Each time you reach one of the Vector3s, move on to the next – you’re going to need to track which one you’re on (i.e. currentPathCorner in the example below)
  • You might also need to turn each time you reach a corner to be pointing in the correct direction
  • Wrap it all in booleans so you can minimize the impact of having this in your Update function (i.e. don’t execute the code if you’re already at your final destination)

The net result is you no longer have a NavMeshAgent, but you can still leverage the NavMesh for path finding (which is a much harder thing to “roll your own”) and now you get happy little accidents when things get too close to edges:

One zinger in this is the difference between the Y coordinates that the NavMesh wants and the Y coordinates you use for your destination. All the Vector3s from the NavMesh have a Y coordinate that’s on the NavMesh – if you use that as-is, your player pieces will try to shove themselves into the ground (assuming their pivot point is in their center, which it typically is). You can recalibrate around this by taking all the corner Vector3s and resetting their Y coordinates to the Y coordinate of the GameObject being moved. Remember, the NavMesh only knows how to path to a destination that’s actually on it, with the same Y coordinate.

This is a rough version of what I wound up doing in Mazucan – there’s obviously a lot more to it, but what’s below is the core guts of the process.

// NOTE YOU CANNOT USE THIS CODE AS-IS
// IT ASSUMES YOU HAVE A WHOLE OTHER BLOCK OF CODE
// THAT TELLS IT WHERE TO GO AND THAT YOU
// WANT TO MOVE AROUND VIA APPLYFORCE AND OTHER
// STUFF - USE IT FOR REFERENCE ONLY

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

public class NavMeshAgentChaser : MonoBehaviour
{
    public float movementSpeed = 1;
    public float turningSpeed = 30;

    // internal private class vars
    private bool isMoving = false;
    private bool isTurning = false;
    private Vector3 recalibratedWayPoint;
    private NavMeshPath path;
    private int currentPathCorner = 0;
    private Quaternion currentRotateTo;
    private Vector3 currentRotateDir;
    private Vector3 groundDestination;
    private Rigidbody rb;

    void Start()
    {
        rb = transform.GetComponent<Rigidbody>();
    }

    private void Update()
    {
        if (isMoving)
        {
            // account for any turning needed
            if (isTurning)
            {
                transform.rotation = Quaternion.RotateTowards(transform.rotation, currentRotateTo, Time.deltaTime * turningSpeed);
                if (Vector3.Angle(transform.forward, currentRotateDir) < 1) isTurning = false;
            }

            // applying force gives a natural feel to the rolling movement 
            Vector3 tmpDir = (recalibratedWayPoint - transform.position).normalized;
            rb.AddForce(tmpDir * movementSpeed * Time.deltaTime);

            // check to see if you got to your latest waypoint
            if (Vector3.Distance(recalibratedWayPoint, transform.position) < 1)
            {
                currentPathCorner++;
                if (currentPathCorner >= path.corners.Length)
                {
                    // you have arrived at the destination
                    isMoving = false;
                }
                else
                {
                    // recalibrate the y coordinate to account for the difference between the piece's centerpoint
                    // and the ground's elevation
                    recalibratedWayPoint = path.corners[currentPathCorner];
                    recalibratedWayPoint.y = transform.position.y;
                    isTurning = true;
                    currentRotateDir = (recalibratedWayPoint - transform.position).normalized;
                    currentRotateTo = Quaternion.LookRotation(currentRotateDir);
                }
            }
        }
    }


    public void setMovementDestination(Vector3 tmpDest)
    {
        groundDestination = tmpDest;
        // project-specific: snap the destination's Y down to the elevation of the NavMesh
        groundDestination.y = 2;
        currentPathCorner = 1;
        path = new NavMeshPath();
        NavMesh.CalculatePath(transform.position, groundDestination, NavMesh.AllAreas, path);
        // sometimes path winds up having 1 or less corners - skip this setting event if that's the case
        if (path.corners.Length > 1)
        {
            isMoving = true;
            isTurning = true;
            // recalibrate the y coordinate to account for the difference between the piece's centerpoint
            // and the ground's elevation
            recalibratedWayPoint = path.corners[currentPathCorner];
            recalibratedWayPoint.y = transform.position.y;
            currentRotateDir = (recalibratedWayPoint - transform.position).normalized;
            currentRotateTo = Quaternion.LookRotation(currentRotateDir);
        }
    }


}

Navigating Hexagons without Math

The first time I worked with hexagon tiles was in the 1970’s – I was a dungeon master for a D&D group and I ran out of “normal” graph paper and somehow wound up with hexagonal graph paper to draw out a dungeon with. It looked cool, but it was hard to draw out a straight hallway unless you went with the “flat hexagon sides”. It didn’t lend itself well to the dungeon mastering experience (“You’re walking down a dark hallway when you hit an intersection – you can keep going straight or you can turn 45-ish degrees to your left-forward or 45-ish degrees to your backwards right”).

40 years later, board games and apps are predominantly hexagonally driven. Hexagons lend themselves better to “natural” shapes, bending rivers, rounded bottoms of mountains, etc… The traditional square grid pattern is largely relegated to “legacy support” of old chess and checker boards. However, there is one really important point to make about the tried and true square checker pattern:

It’s really easy to work with – like CRAZY easy.

– Aaron

People use hexagons and squares for a really basic reason – they are two of the three regular shapes that tessellate properly (the third is triangles, but, if you’re building a grid with triangles, you’re weird and we’re not going to talk about them here). Tessellation is the notion that some shapes can be evenly repeated in a pattern with no empty spaces – you can read a good article about it here: https://mathworld.wolfram.com/Tessellation.html. Most shapes do not tessellate well, or they only tessellate with some amount of distortion. This is why most grids in the world are based on squares or hexagons – they both tessellate very well and they are not triangles 🙂

Why are squares easier to work with than hexagons? Simple: their shape matches a standard coordinate system. Take any given square – to get the next one up, add one to your Y axis – next one down, take away one from your Y axis, add one to X to go over one way, subtract one from X to go the other. Easy peasy. There’s a reason people have been using square grids for hundreds of years.

Hexagons do not work that way. Depending on how you have them oriented, the next hexagon over in a grid might be X+1, or X+.66 and Y-1, or Y+1 and X-.5, and so on. What a pain in the butt. There’s a really cool article here https://www.redblobgames.com/grids/hexagons/ reviewing hexagon navigation theory – it’s a great article.

As an individual app developer, I have to make my world manageable. Reading that I might want to consider adopting a 3-dimensional coordinate system to properly navigate a 2-dimensional grid (which, by the way, exists in a 3-dimensional game space) makes me want to go crying back to squares. So I spent some time working on this, googled a lot, and worked out a collider-based solution to hexagonal navigation – that’s right – no axial coordinates or cubed numbering systems – just plain old Unity stuff.

Let’s say you’re generating your own hexagonal grid and you want to make it so each tap highlights every contiguous hexagon for two hexagons out. However, the grid is non-contiguous – it has “land areas” and “water areas” and the contiguous space should not cross the water even though there is more land within that two hex radius:

If you add colliders to each hexagon as you lay them out (each one in this case is its own prefab), then you can superimpose a geometric figure that represents a radius that would constitute all neighboring tiles – in our case that super-imposed geometry is an OverlapSphere:


Each overlap sphere is only big enough to touch the directly neighboring tiles. Then you can evaluate those neighboring tiles to see if they are acceptable to highlight and you can repeat the process, effectively “stepping your way” through tiles evaluating neighbors for as many iterations / depth as you wish.

// source is the vector3 representing the starting tile; 
// 1 is the radius - you might need to adjust based on 
// the size of your hexagons

Collider[] hitColliders = Physics.OverlapSphere(source, 1);
foreach (var hit in hitColliders){
  // you can now get hit.transform and do
  // whatever you need to evaluate if this
  // is a valid tile to highlight;
  // if you also take hit.transform.position and
  // feed it back into Physics.OverlapSphere
  // you can repeat the process and get further
  // neighbors - turn it into a function and call
  // it recursively if you want (watch out for 
  // endless looping!)
}    

For my purposes, I looped over the process three times to get “three-levels out” of neighboring tiles and it very nicely supports very non-contiguous grids:

Success!

This may sound like an inefficient way to do things, but remember, colliders are deeply integrated with Unity – they are very fast – and the results you get back are essentially plain old GameObjects, so you can help yourself out a bit by tagging things ahead of time to make the evaluation logic in the middle very easy. I could easily see someone spending a lot more CPU cycles on convoluted, inefficient, difficult-to-understand math path-evaluation algorithms, so I’m not sure either one is “better” or “more correct” than the other. Using collider-based OverlapSpheres does mean that this bit of code looks and works A LOT like the rest of the code being written for the app – to me, that continuity of implementation approach is a big value for long-term stability and debugging.
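The stepping process can be sketched as a small recursive function with a visited set to guard against endless looping – names like `HexHighlighter`, `FindNeighbors`, and the “LandTile” tag are hypothetical stand-ins for your own setup:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class HexHighlighter : MonoBehaviour
{
    public float neighborRadius = 1f; // adjust based on the size of your hexagons

    // collect all contiguous tiles up to 'depth' steps out from the start tile
    public HashSet<Transform> FindNeighbors(Vector3 source, int depth)
    {
        HashSet<Transform> visited = new HashSet<Transform>();
        Step(source, depth, visited);
        return visited;
    }

    private void Step(Vector3 source, int depth, HashSet<Transform> visited)
    {
        if (depth <= 0) return;
        foreach (Collider hit in Physics.OverlapSphere(source, neighborRadius))
        {
            if (visited.Contains(hit.transform)) continue; // endless-loop guard
            if (!hit.CompareTag("LandTile")) continue;     // hypothetical tag - skip water
            visited.Add(hit.transform);
            Step(hit.transform.position, depth - 1, visited); // recurse to the next ring
        }
    }
}
```

Calling `FindNeighbors(tappedTile.position, 2)` would give back the two-hexagons-out contiguous area described above, without crossing the water.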

Hope this helps some others out there.


My Year’s Journey

In December 2019, I decided to commit myself to learning a new skill, something that would be different than what I had done before. I also wanted to do something that would help create a lot of communication with my kids – unsurprisingly, they are not excited by discussing inventory levels or lambda functions, so I decided to take a whack at something more relatable to them, game development.

The last time I programmed a game was the 1980’s, on an IBM PC running DOS, working in Microsoft QuickBASIC. So I started by Googling around for how to make games, found some systems based around JavaScript, said “Hey, I know that”, downloaded PixiJS, and started learning.

PixiJS was actually pretty easy to pick up and work with, but the danger of sticking to things you are familiar with is that they are sometimes ultimately a poor fit for what you are trying to do. You can do a lot with JS – if you wanted to create embeddable, interactive web content, PixiJS is probably a decent direction to go in – but it’s not how 90+% of the gaming industry does its work, and it ultimately limits your capabilities. You’re going to have a hard time creating immersive 3D in JS – or VR, or AR, or creating something that can run on a Switch or a PlayStation.

In the game industry, 90+% of everyone, from the single developer self-producing their own title to the AAA shops that put 100+ developers on a game at a time, use either Unity or Unreal. If you’re going to learn how to make modern games and you’re not using one of those two, you should look yourself in the mirror and ask what is so special about your requirements that you would not go down those paths. There are some legit use-cases to not use Unity or Unreal: maybe you only develop in the Apple ecosphere, never want to leave it and already know Xcode; maybe you need to make something that will run on a Raspberry Pi; maybe you really do just want something to slap on a website and that’s as far as you want to take it. For most everything else, you should really try Unity or Unreal.

I spent January 2020 trying both Unity and Unreal. They are both very good gaming IDEs. In the end, I chose Unity. Why?

  • Better online training
  • Easier to code against

Unreal is a very, very good system. There is a reason why many high-end games use it and why it’s making deep inroads into movie productions. Its graphics capabilities almost certainly eclipse Unity’s. It also seemed to me to be very geared towards people who do not want to code – most changes are done via graphs, so what could have been one line of code turns into adding a dozen nodes to a graph, interconnecting them, and setting their properties – to me, that’s significantly harder than writing one line of code. As a result, the training videos are often a monologue of someone saying “now click here, then click here, and then click this, then drag this and click that and drag the connector to click next to the last click and then click again” – that’s a huge turnoff for me personally. I might revisit Unreal at some point.

unity is a good thing…

Unity is much easier to write code against – it’s a strength of their IDE that is often also criticized as a weakness, because you will almost certainly have to write code at some point to use Unity (although they recently added Bolt visual graphing so you can do more things without writing code). Their tutorials tend to come in “programmer” or “designer” flavors, which I greatly appreciated, and, generally speaking, I found myself up and running much more quickly than with Unreal.

I spent February and March going through Unity’s Create with Code course: https://learn.unity.com/course/create-with-code. It’s a free 40-hour course, and if you’re considering Unity, I would recommend you do it. If you have a programming background, you’re going to find the tutorial very basic; however, part of what it’s trying to teach you is not just how to write code, but when to write code – and this is a very important distinction that took me a while to get my head around.

Unity comes with a physics engine built in. You might think, “ok, that’s nice, but who cares – I can write code”. No, stop – do not get your old college physics book out and start writing code that describes the arc a physical object traverses while flying through the air with force applied to it. Instead, put a rigid body component on your object and simply apply a force to it – Unity will figure out how far it travels and where it falls – then add a collider to trigger an event when the object hits the ground, you know, like an explosion on landing.
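That whole paragraph boils down to very little code – a sketch along these lines, where `launchForce` is a hypothetical tuning value:

```csharp
using UnityEngine;

// attach to a GameObject that has a Rigidbody and a Collider -
// the physics engine computes the arc, we only react on landing
public class LaunchedObject : MonoBehaviour
{
    public float launchForce = 500f; // hypothetical - tune for your scene

    void Start()
    {
        // one impulse up and forward - Unity figures out the trajectory from here
        GetComponent<Rigidbody>().AddForce((Vector3.up + Vector3.forward) * launchForce);
    }

    void OnCollisionEnter(Collision collision)
    {
        // the landing event - spawn your explosion, play a sound, etc.
        Debug.Log("Landed on " + collision.gameObject.name);
    }
}
```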

If you work on a billiards game, for instance, you can simply put all the pool balls out there, wrap them in rigidbodies and colliders and just let them knock into each other and roll around however they see fit. The only code you really need to worry about is how the player controls the cue stick – everything else is just the physics of “solid” objects reacting to collisions and rolling around. This makes it incredibly easy to, for example, add obstacles to increase the amount of ricocheting that happens in the game without adding any code. Or to “slo-mo” everything by decreasing the speed of time, freezing everything in place by turning time off, controlling speed by increasing or decreasing drag or controlling the strength of gravity – these are all either settings or one-line code changes.

You do a lot with colliders in Unity – colliders create triggers when things touch each other. Colliders, combined with the physics engine, make it possible to implement games in a very “event-driven” model – if you embrace the approach, you will find it very flexible and it will reduce the number of lines of code you write. Flattery’s initial release, for example, was just over 3K lines of code – I’ve written HTML pages with more lines than that.

As a programmer, I found a lot of new concepts that I needed to get my head around – I had no idea what “normals” or “inverse normals” or “baked lighting” were, let alone what the difference was between “shaders”, “materials”, and “textures”. And quaternions – omg – “a complex number of the form w + xi + yj + zk, where w, x, y, z are real numbers and i, j, k are imaginary units that satisfy certain conditions.” There were also a lot of considerations about game mechanics, what makes for good game play, how you legally get music and artwork that can be distributed with your game, what kind of privacy policy you need – the list goes on and on.

I spent most of my “covid lockdown spare time” going through all these kinds of topics and considerations. I was happy that I was using Unity – it gave me common ground with thousands of others out there who were ready, willing, and able to give pointers about what to do or why to care. I also spent this time torturing my kids with review sessions – they were my “UAT testers”. They were very good at giving me constructive feedback and I don’t think I could have completed Flattery without them. Those review sessions were some of the longest discussions I have ever had with my son and daughter about software engineering – some review sessions literally went on for over two hours.

Somewhere around summer 2020, I decided I wasn’t going to be happy with myself unless I actually finished a game and got it up on the Apple App Store. I originally targeted Labor Day, but it wound up being Thanksgiving when I submitted the first build to Apple (for the record – no, I did not get approved first build). It might not be the greatest game ever, but I learned way more than I ever would have thought and got to spend a lot of time with my teenagers collaborating on a project and that was really awesome.

I would highly recommend Unity for any kind of multimedia development (anyone remember Macromind Director?). I would also highly recommend this as a path to do something with software collaboratively with your kids – just remember, they are your target market, they are the business sponsors, listen to them, take their feedback seriously and they’ll feel more engaged.

Feel free to check out Flattery (its free): https://apps.apple.com/us/app/flattery/id1542242326


Distribution Issues to Apple

When it came time to submit Flattery to the Apple App Store, I was very surprised at some last-minute packaging issues that came up. After all, I had literally done hundreds of build-deploys to my phone, so it seemed like it should submit to the App Store without any issues – however, submitting to the App Store apparently has some additional build requirements that are not required when deploying directly to your phone.

To be clear, I am working on Unity 2020.1.13f1 and the latest Xcode for MacOS 11.0.1.

There were two main issues:

  1. Inclusion of an illegal “frameworks” directory
  2. Inclusion of an empty SwiftSupport directory

No Embedded Frameworks

So your app builds and installs fine from Xcode – you make the Archive and submit it to the App Store and you get this message:

All Your Bundles Are Belong To Us

This is basically telling you that the Unity build process is leaving some files in a directory that Apple disagrees with – there’s nothing “wrong with your app” per se, it’s just got some extra, unnecessary junk floating around in it that Apple wants to see cleaned up before distributing on its App Store.

After much Googling and trial and error, here are the instructions I wish someone were able to hand me:

I kept missing that little tiny plus sign…

The script in this case should be:

cd "${CONFIGURATION_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/Frameworks/UnityFramework.framework/"
if [[ -d "Frameworks" ]]; then
    rm -fr Frameworks
fi

Do another build, archive, and upload to the App Store.

Invalid Swift Support

With the first problem resolved I was able to successfully upload the archive of my app, but after an upload completes, Apple runs additional checks on the packaging of your app – in my case I got an email that said:

ITMS-90424: Invalid Swift Support – The SwiftSupport folder is empty. Rebuild your app using the current public (GM) version of Xcode and resubmit it.

Signed, your friends at Apple

The problem was that I was using the latest Xcode and I did everything in Unity and wasn’t writing any Swift code. After Googling around, I found that others have resolved this issue by:

  1. Create the application Archive as you normally would – this should leave you at the Organizer window (or go Window > Organizer)
  2. Right click on your recently made Archive and Show in Finder
  3. Right click on the archive file in Finder and select Show Package Contents
  4. Delete the SwiftSupport directory
  5. Return to the Organizer window in Xcode and submit your app
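Steps 2 through 4 can also be done from a terminal; a sketch, assuming a hypothetical archive path (yours will differ – use Show in Finder to locate it):

```shell
# hypothetical archive location - find yours via Organizer > Show in Finder
ARCHIVE="$HOME/Library/Developer/Xcode/Archives/2020-11-26/Flattery.xcarchive"

# delete the empty SwiftSupport directory if it exists
if [ -d "$ARCHIVE/SwiftSupport" ]; then
    rm -r "$ARCHIVE/SwiftSupport"
fi
```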

I’m sure someone better versed in Xcode build processes could add another Run Script in the right place so the manual deletion wouldn’t be needed. In the end, it’s not like I need to submit builds to the App Store every day, so I can live with this as part of the “build and distribution process”.

I want to be clear – all of the above was not something I figured out on my own – these are issues that many others have been posting about, and I would have gotten nowhere without their help. So thanks out to the community.

Reference URL’s:

https://forum.unity.com/threads/2019-3-validation-on-upload-to-store-gives-unityframework-framework-contains-disallowed-file.751112/

https://stackoverflow.com/questions/25777958/validation-error-invalid-bundle-the-bundle-at-contains-disallowed-file-fr

https://developer.apple.com/forums/thread/125902

https://github.com/bitrise-io/build.issues/issues/31

https://developer.apple.com/forums/thread/654980