Creative Hub

SCRUM

Working with the user stories defined by SCRUM was an interesting experience. Initially, I thought it would be self-explanatory, easy and handy, but it really wasn’t. Especially keeping each story independent of every other seemed to be an impossible task. Quantifying their quality and depth was also hard, as we didn’t yet have a clear vision of every aspect of the game in mind. Nevertheless, we agreed to press on despite these issues, hoping to pin down the problems with the additional experience we would gain over that very first sprint.

AI Improvements

I use Epic’s AI Perception System to give the AI agents “real eyes”. With that system, they are able to spot other actors and act on them through the behavior tree. During last week’s experiment with multiple AI agents, I discovered an issue with that perception system: EVERY actor, even friendly AI, is spotted by these eyes and factored into every calculation the AI does. Since I don’t plan to implement interactions between AI agents, I looked for a way to exclude friendly actors from the math and save a lot of computing power.

Epic provides the IGenericTeamAgentInterface for exactly this: its functions are only declared, and the implementation is left to the class that uses the interface via overrides. The documentation for that interface is quite thin, not to say nonexistent. Several forum threads and blog entries later, I had it up and running. The main issue was that the interface’s functions are called from within Epic’s AI Perception System and various controllers without any hint of their purpose, calling order or frequency. In retrospect, I started with too many wrong assumptions, which is why a lot of my attempted bug fixes could never work. The interface is not exposed to the Blueprint system, but luckily the C++ code needed was just a few lines. All in all, it was a neat entry point into creating my first C++ class for Unreal.
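For illustration, here is a minimal sketch of how such an override can look, assuming a custom AI controller class; the class name, team ID and the details of where the other actor’s team lives are mine, not necessarily how our project organizes it:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "GenericTeamAgentInterface.h"
#include "DroneAIController.generated.h"

UCLASS()
class ADroneAIController : public AAIController
{
	GENERATED_BODY()

public:
	ADroneAIController()
	{
		// All drones share one team, so the perception system can treat them as friendly.
		SetGenericTeamId(FGenericTeamId(1));
	}

	virtual ETeamAttitude::Type GetTeamAttitudeTowards(const AActor& Other) const override
	{
		// For pawns, the team ID is assumed to live on their controller; the hero's
		// controller would need to carry a (different) team ID too for this to work.
		if (const APawn* OtherPawn = Cast<const APawn>(&Other))
		{
			if (const IGenericTeamAgentInterface* OtherTeamAgent =
					Cast<IGenericTeamAgentInterface>(OtherPawn->GetController()))
			{
				return OtherTeamAgent->GetGenericTeamId() == GetGenericTeamId()
					? ETeamAttitude::Friendly   // skipped when senses detect enemies only
					: ETeamAttitude::Hostile;
			}
		}
		return ETeamAttitude::Neutral;
	}
};
```

With the attitude reported as Friendly, the perception senses can be set to detect enemies only, so friendly drones never enter the calculations in the first place.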

David created a drone blockout that I implemented so we could discuss and judge the drones’ size more precisely.

Damaging and destroying the drones was one of the user stories, so I hooked into the “OnTakeAnyDamage” event. Nothing special here, as it only calls functions of the health component described below.
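As a rough sketch of that wiring, assuming a drone base class and a health component like the one described further down (all names are illustrative):

```cpp
// In the drone's BeginPlay, route every damage event into the health component.
// HandleAnyDamage must be declared as a UFUNCTION in the header to be bindable.
void ADroneBase::BeginPlay()
{
	Super::BeginPlay();
	OnTakeAnyDamage.AddDynamic(this, &ADroneBase::HandleAnyDamage);
}

void ADroneBase::HandleAnyDamage(AActor* DamagedActor, float Damage,
	const UDamageType* DamageType, AController* InstigatedBy, AActor* DamageCauser)
{
	// The drone itself does nothing special; the component decides what happens.
	HealthComponent->RemoveHealth(Damage);
}
```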

Villain Improvements (Energy Component)

I don’t want to get into design details about our mobile player here, so for now I’ll simply call him our “mobile device player”. Daniel implemented the functionality to spawn an actor via the touch screen, and I provided an energy component and the villain’s HUD. The component stores the current amount of energy the villain can spend, for example to spawn drones. It also updates the villain’s HUD, which visualizes the current amount of energy, on every change. I chose to create an actor component so the same component can be added to the hero player or even the drones themselves, in case they also need to treat energy as a resource.
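A minimal sketch of how such an actor component can look; class, property and delegate names are illustrative, not our exact implementation:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "Components/ActorComponent.h"
#include "EnergyComponent.generated.h"

DECLARE_DYNAMIC_MULTICAST_DELEGATE_OneParam(FOnEnergyChanged, float, NewEnergy);

UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UEnergyComponent : public UActorComponent
{
	GENERATED_BODY()

public:
	// The villain's HUD binds to this and redraws the energy bar on every change.
	UPROPERTY(BlueprintAssignable, Category = "Energy")
	FOnEnergyChanged OnEnergyChanged;

	UPROPERTY(EditDefaultsOnly, BlueprintReadOnly, Category = "Energy")
	float MaxEnergy = 100.0f;

	UPROPERTY(EditDefaultsOnly, BlueprintReadOnly, Category = "Energy")
	float CurrentEnergy = 50.0f;

	// Returns false when there is not enough energy, e.g. to refuse a drone spawn.
	UFUNCTION(BlueprintCallable, Category = "Energy")
	bool TrySpendEnergy(float Amount)
	{
		if (Amount > CurrentEnergy)
		{
			return false;
		}
		CurrentEnergy -= Amount;
		OnEnergyChanged.Broadcast(CurrentEnergy);
		return true;
	}

	UFUNCTION(BlueprintCallable, Category = "Energy")
	void AddEnergy(float Amount)
	{
		CurrentEnergy = FMath::Min(CurrentEnergy + Amount, MaxEnergy);
		OnEnergyChanged.Broadcast(CurrentEnergy);
	}
};
```

Because the HUD only listens to the delegate, the energy bar is redrawn exactly when the value actually changes.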

Health Component

This one is very similar to the energy component, except that it handles everything related to health points: maximum, current and starting amount of HP, plus functions to change those values and update the corresponding user interface elements.

New Drone Types and their Behavior

Delivery Drone
The Villain is supposed to work with energy delivery drones. These little helpers are spawned by the villain at the map’s borders and carry energy in a container to the villain’s mainframe, where it is added to his stock. I chose to develop them first because they looked very simple to implement. I built them from scratch, applying what I had learned from the other AI agents. Later on, David provided an asset to prototype the drone’s visuals.
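As a tiny illustration of the hand-over, reusing the energy component sketched above (the drone class, the member and the callback name are hypothetical):

```cpp
#include "GameFramework/Actor.h"

// Called when the delivery drone reaches the villain's mainframe (illustrative name).
void ADeliveryDroneBase::OnReachedMainframe(AActor* Mainframe)
{
	if (UEnergyComponent* Energy = Mainframe->FindComponentByClass<UEnergyComponent>())
	{
		// Hand the container's energy over to the villain's stock.
		Energy->AddEnergy(CarriedEnergy);
	}
}
```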

Sentry Drone
My next step was to build on top of the delivery drone and create a drone that is able to fight the hero. Its current behavior looks like this:
1. Without a valid target, the drone investigates the area and looks for one. It does that with an Environment Query that follows a simple rule set (a rough code rendition of the scoring follows after the list):
a) create a mesh of possible locations (nodes) around the AI agent
b) prefer nodes that are far away over those in close proximity, to convince the agent to travel longer distances
c) prefer nodes located in front of the agent over those behind it, to convince the agent to move on instead of lingering around the same location over and over again
d) prefer nodes that are not currently visible to the AI agent, to convince it to look behind corners and cover and to enter buildings
These rules guide the AI through the whole level, and I am very surprised by how effective they are. There is not a single spot on my test map where I could hide indefinitely.

2. Once the drone finally has a valid target, it will try to attack. Sending a projectile from the AI towards its target is pretty simple, but teaching the AI NOT to land every shot was more difficult. By accident, I discovered a way around developing a method that actually tells the AI how to MISS a target. In the narrative of our game, the drones run on some kind of algorithm, and even today’s machines will not calculate a wrong answer on purpose. So I decided to stick with that “one shot, one hit” attitude but changed the way a drone actually fires: turning towards the target and shooting are now two separate steps, and the difficulty for the drone is no longer the shot itself but the rotation towards the target. The turn rate is very limited, which lets the hero player move faster than the drone can rotate. It also encourages the hero to keep moving instead of standing still, which should be the desired playstyle, as Daniel also provides the grapple hook. (A small sketch of this turn-then-shoot step follows after the list as well.)

3. If the drone loses line of sight to its target, it moves to the last known location and starts the investigation routine (#1) again.
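To make the roaming rules from step 1 a bit more concrete, here is the same scoring idea expressed as plain code. The real query is an EQS asset with one test per rule and its own scoring curves; this is only an illustration with made-up weights:

```cpp
#include "Engine/World.h"
#include "GameFramework/Pawn.h"

// Illustrative only: scores one candidate node the way the EQS tests b) to d) do.
float ScoreRoamingNode(const FVector& Candidate, const APawn& Agent, const UWorld& World)
{
	const FVector AgentLocation = Agent.GetActorLocation();
	const FVector ToCandidate = Candidate - AgentLocation;

	// b) the farther away, the better, so the agent travels longer distances
	const float DistanceScore = ToCandidate.Size();

	// c) prefer nodes in front of the agent, so it keeps moving forward
	const float FacingScore =
		FVector::DotProduct(Agent.GetActorForwardVector(), ToCandidate.GetSafeNormal());

	// d) prefer nodes the agent cannot currently see, so it checks corners and buildings
	FHitResult Hit;
	const bool bBlocked =
		World.LineTraceSingleByChannel(Hit, AgentLocation, Candidate, ECC_Visibility);
	const float HiddenScore = bBlocked ? 1.0f : 0.0f;

	// Arbitrary weights; in the actual EQS asset each test is weighted separately.
	return DistanceScore * 0.01f + FacingScore + HiddenScore * 2.0f;
}
```

And the turn-then-shoot idea from step 2, again as a hedged sketch: a helper that rotates the drone toward its target at a capped rate and only reports “ready to fire” once it is roughly aligned (names and thresholds are mine):

```cpp
#include "GameFramework/Actor.h"

// Rotates the drone toward its target at a limited rate; returns true once the
// remaining yaw error is small enough to fire. The shot itself never misses, so
// the limited turn rate is the only thing the hero can outplay.
bool TurnTowardsTarget(AActor& Drone, const AActor& Target, float DeltaSeconds,
                       float TurnRateDegPerSec, float AimToleranceDeg)
{
	const FRotator Current = Drone.GetActorRotation();
	const FRotator Desired =
		(Target.GetActorLocation() - Drone.GetActorLocation()).Rotation();

	const FRotator NewRotation =
		FMath::RInterpConstantTo(Current, Desired, DeltaSeconds, TurnRateDegPerSec);
	Drone.SetActorRotation(NewRotation);

	const float RemainingYaw =
		FMath::Abs(FMath::FindDeltaAngleDegrees(NewRotation.Yaw, Desired.Yaw));
	return RemainingYaw <= AimToleranceDeg;   // caller fires the projectile when true
}
```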

I developed both drones in a way that separates the source files of programmers and designers. That is, each drone has a base class in which I implement all my code, and a derived child class in which the designers can alter every parameter as they see fit. This lets us work on the same entity in two different files and prevents a lot of git merge conflicts.
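A minimal sketch of what that split can look like; the class name, the parent class and the exposed values are illustrative:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "SentryDroneBase.generated.h"

UCLASS(Abstract)
class ASentryDroneBase : public ACharacter
{
	GENERATED_BODY()

public:
	// Designers override these defaults in the derived asset, so code and tuning
	// live in different files and git merge conflicts become rare.
	UPROPERTY(EditDefaultsOnly, BlueprintReadOnly, Category = "Drone|Tuning")
	float TurnRateDegPerSec = 45.0f;

	UPROPERTY(EditDefaultsOnly, BlueprintReadOnly, Category = "Drone|Tuning")
	float AimToleranceDeg = 5.0f;

	UPROPERTY(EditDefaultsOnly, BlueprintReadOnly, Category = "Drone|Tuning")
	float ProjectileDamage = 10.0f;
};
```

The child class is simply a Blueprint (or empty C++ class) derived from this base, so designers only ever touch the derived asset.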

Each drone got a little billboard attached to it that is only visible to the mobile player, as he sees the whole map from above and far away on his display. We chose these billboards to represent each actor.

3D Pathfinding

With the scope in mind, we decided to go with flying drones. They can be simpler in shape, need no animations at all and still be believable. But a flying AI agent presumes 3D navigation and with it 3D pathfinding. Unfortunately, the Unreal Engine has its roots in first-person shooter combat and only provides pathfinding in two dimensions. Again with the scope in mind, I didn’t feel confident building a network-based game with all its replication topics (which we have never done before), relying heavily on AI with behavior trees (which we have never done before) AND building some kind of 3D pathfinding on my own (which I have also never done before). Especially since Prof. Hettlich told Daniel and me that we were ahead of ourselves with our goals, because networking will be a topic in the 4th semester and AI is part of an even later one. Therefore, I tried to find a workaround first.

Basically, I spent one day blocking out a level and adding invisible floors layered on top of each other that the player can neither interact nor collide with. The drones, in turn, were supposed to use these layers to simulate vertical movement. It somehow worked, but it did not look convincing. The invisible ramps I added to get from one layer to another were very noticeable: when a drone tried to reach the hero’s vertical level, it often increased the distance first to reach a ramp, went down it, and only then approached again.
Another approach was to increase the AI’s “step height”. This value is designed to let actors pass uneven ground up to that maximum step height (e.g. stairs). But that was also very noticeable, as every AI agent simply gets teleported on top of the obstacle.

After exploring these concepts, I invested some time in finding a real 3D pathfinding solution. Luckily, I spotted DoN’s 3D Pathfinding Plugin. It is free of charge, and the original developer used it in one of his own projects that is available on Steam. That told me it not only works in theory but has already proven itself in a shipped game. Sadly, it lacks detailed documentation, but I had the impression that it is not too complex. I gave it a try and was astonished at how unstable the plugin is:
The plugin generates collision cubes along all three dimensions and checks whether another collision overlaps them. If it does, that cube gets excluded from the pathfinding algorithm. The plugin also provides a “Fly To” node/task for the behavior tree, but this node is not failsafe: if it tries to read a location from a variable that is no longer valid in memory, the engine crashes. That little piece of information took me ages to find out, because AI behavior is somewhat unpredictable and not at all reproducible on command, which increases the debugging difficulty tremendously. All the plugin’s code is available on GitHub, but it is designed to live in a plugin environment and, once more with the scope in mind and knowing how limited my knowledge of writing plugins is, I decided to look for a workaround first. So I modified the tasks that generate these “fly to locations” and taught them to validate all parameters before handing them over to the Fly To node.
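A hedged sketch of such a defensive task, assuming the destination is read from a target actor and written to a blackboard key that the Fly To node then consumes; the class and key names are mine:

```cpp
#pragma once

#include "CoreMinimal.h"
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "BTTask_PickFlyToLocation.generated.h"

UCLASS()
class UBTTask_PickFlyToLocation : public UBTTaskNode
{
	GENERATED_BODY()

public:
	virtual EBTNodeResult::Type ExecuteTask(
		UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) override
	{
		UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
		if (Blackboard == nullptr)
		{
			return EBTNodeResult::Failed;
		}

		// Stale actor pointers were what crashed the plugin's Fly To node,
		// so fail loudly here instead of passing bad data downstream.
		const AActor* Target = Cast<AActor>(Blackboard->GetValueAsObject(TEXT("TargetActor")));
		if (!IsValid(Target))
		{
			return EBTNodeResult::Failed;
		}

		const FVector Destination = Target->GetActorLocation();
		if (Destination.ContainsNaN())
		{
			return EBTNodeResult::Failed;
		}

		Blackboard->SetValueAsVector(TEXT("FlyToLocation"), Destination);
		return EBTNodeResult::Succeeded;
	}
};
```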
With that done, I transferred, or basically redid, every drone from scratch. The older drones derived from the ACharacter class with its character movement component included. The new drones derive from the APawn class and use the floating pawn movement component, as I no longer need any of the character movement’s complex mechanics.
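For reference, a minimal sketch of the new setup (hypothetical class name): the pawn only needs a UFloatingPawnMovement, which gives simple velocity-based movement in all three axes without any of the character movement’s walking or falling logic.

```cpp
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "GameFramework/FloatingPawnMovement.h"
#include "DroneBase.generated.h"

UCLASS(Abstract)
class ADroneBase : public APawn
{
	GENERATED_BODY()

public:
	ADroneBase()
	{
		// Floating movement: no gravity, no navmesh walking, just velocity in 3D.
		Movement = CreateDefaultSubobject<UFloatingPawnMovement>(TEXT("Movement"));
	}

	UPROPERTY(VisibleAnywhere, Category = "Drone")
	UFloatingPawnMovement* Movement = nullptr;
};
```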

The next step was to adapt my sentry drone’s roaming behavior (the one with the four rules I explained earlier in this blog entry) to the third dimension. But again, Epic’s Environment Query System is designed for two dimensions. Considering that the first-person player, the hero, only travels in those two dimensions, at least from a navmesh point of view, I stuck with that limitation: the drones do not need to look for the hero in midair, it is enough to look for him on the ground, while still navigating in all three dimensions. In the end, I only made minor changes, such as generating the mesh of nodes (rule a) not from the original navmesh but with the geometry mode instead. That way, nodes are also generated on top of obstacles.