Trapped in a nightmarish mannequin factory where reality itself is shattered, unravel the secrets of Mirrored Phantoms. Explore a deeply atmospheric world, solve the mystery of the mirror shards, and survive the terrors that await in this slow-burning, first-person horror experience.
Engine: Unreal Engine 5
Platform: Windows, Steam Deck
Development Time: May 2024 - December 2024
Team Size: 21
Role: AI Programmer (Support Programmer for Animation, Gameplay)
Designed and implemented an AI system tailored to design goals with manual overrides for key behaviors
Utilized Behavior Trees and C++ to manage and execute AI tasks
Created base logic for AI, following a structured sequence: Patrol → Investigate → Detect → Chase → Attack
Developed flexible patrol logic, enabling random patrol within the nav mesh or movement along designated patrol points
Collaborated with other disciplines to refine the Investigation state and other manual-override states
Conducted quality assurance to identify and resolve bugs
Produced opening scenes, endings, and cutscenes using Unreal Engine Sequencer
The AI behavior tree incorporates multiple states, including Patrol, Investigation, and Aggressive, plus additional specialized gameplay states. Each state contains various C++ tasks and services, and blackboard conditions drive the transitions between states so the AI moves from one behavior to the next seamlessly.
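As a minimal illustration of how one of those C++ tasks plugs into the tree (the class name, blackboard key, and fire-and-forget structure here are hypothetical, not the shipped code):

```cpp
// Hypothetical C++ task, UBTTask_MoveToTargetPoint, derived from UBTTaskNode.
// It reads a location that services have written to the blackboard and starts
// a move; blackboard-based decorators handle the state switching around it.
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"

EBTNodeResult::Type UBTTask_MoveToTargetPoint::ExecuteTask(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    AAIController* Controller = OwnerComp.GetAIOwner();
    UBlackboardComponent* BB = OwnerComp.GetBlackboardComponent();
    if (!Controller || !BB)
    {
        return EBTNodeResult::Failed;
    }

    // "TargetLocation" is an illustrative blackboard key name.
    const FVector Target = BB->GetValueAsVector(TEXT("TargetLocation"));

    // Fire-and-forget for brevity; a production task would return InProgress
    // and finish when the move-completed callback arrives.
    Controller->MoveToLocation(Target, /*AcceptanceRadius=*/50.f);
    return EBTNodeResult::Succeeded;
}
```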
The AI's patrol behavior has two different modes:
Random Patrol: This mode operates autonomously without level designer input. The AI selects a random destination within a specified radius that is confirmed to be on the navigation mesh, then moves to that location (see the sketch after this list). This provides a simple patrol pattern that requires no manual setup.
Defined Pathway Patrol: This mode follows a specific route designed by the level designer. The level designer creates a sequence of locations that forms a patrol path. While this offers more precise control over AI movement patterns, designers must be cautious when placing these waypoints to ensure they're accessible and create logical paths.
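For the random mode, UNavigationSystemV1::GetRandomReachablePointInRadius does the heavy lifting; a minimal sketch, assuming a hypothetical helper function and an illustrative PatrolRadius:

```cpp
#include "NavigationSystem.h"
#include "AIController.h"

// Hypothetical helper: pick a random reachable point near the AI and walk
// there. Returns false if no nav-mesh point was found within PatrolRadius.
bool PickAndMoveToRandomPatrolPoint(AAIController* Controller, float PatrolRadius)
{
    APawn* Pawn = Controller ? Controller->GetPawn() : nullptr;
    if (!Pawn)
    {
        return false;
    }

    UNavigationSystemV1* NavSys = UNavigationSystemV1::GetCurrent(Pawn->GetWorld());
    if (!NavSys)
    {
        return false;
    }

    FNavLocation Result;
    // Only returns points that are actually on the nav mesh and reachable.
    if (NavSys->GetRandomReachablePointInRadius(
            Pawn->GetActorLocation(), PatrolRadius, Result))
    {
        Controller->MoveToLocation(Result.Location);
        return true;
    }
    return false;
}
```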
When AI characters need to navigate in a game, they rely on a navigation mesh (nav mesh) that defines where they can walk. In our case, we faced a challenge because our game map is extremely large and uses level streaming. Using a single pre-generated nav mesh for the entire map would create significant loading time issues.
To solve this, we implemented a dynamic approach using Navigation Invoker. Instead of loading one massive nav mesh, the system generates navigation data only in the area immediately surrounding each AI enemy. This approach works particularly well with level streaming since it builds the navigation data on-demand and only for the specific areas where enemies are present. As a result, when an enemy moves into a new level section, the game doesn't freeze while loading navigation data for that entire area.
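In practice this mostly means enabling "Generate Navigation Only Around Navigation Invokers" in Project Settings and attaching a UNavigationInvokerComponent to each enemy. A minimal sketch, with illustrative radii and a hypothetical AEnemyCharacter class:

```cpp
#include "NavigationInvokerComponent.h"

// Hypothetical enemy constructor: register this pawn as a nav invoker so
// nav-mesh tiles are generated only within the generation radius around it
// and discarded beyond the removal radius. Requires the invoker setting
// above to be enabled project-wide.
AEnemyCharacter::AEnemyCharacter()
{
    UNavigationInvokerComponent* NavInvoker =
        CreateDefaultSubobject<UNavigationInvokerComponent>(TEXT("NavInvoker"));
    NavInvoker->SetGenerationRadii(/*GenerationRadius=*/3000.f,
                                   /*RemovalRadius=*/5000.f);  // illustrative
}
```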
The key aspects are:
The AI's perception relies mainly on hearing rather than sight for detection, since its visual range is limited.
Whenever a new sound occurs within its hearing range, it moves to the sound's location to investigate. If there is nothing to detect there, it gets confused and returns to the Patrol state.
If the player is running, their footsteps keep generating noise, causing the AI to continuously follow and chase them.
The player can break this chase pattern by switching to walking or crouching, which generates little or no noise, or by throwing a brick to distract the AI.
If it sees the player, it will go to "Aggressive State"
This design creates an interesting stealth dynamic where players must manage their movement style: running draws constant attention, while sneaking allows them to evade detection.
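Noise like this is typically fed to the perception system through UAISense_Hearing::ReportNoiseEvent; a sketch of how the player's footsteps might report it, with a hypothetical running flag and illustrative loudness values:

```cpp
#include "Perception/AISense_Hearing.h"

// Hypothetical footstep hook on the player character. bIsCrouched comes
// from ACharacter; bIsRunning is an illustrative flag on our class.
void APlayerCharacter::ReportFootstepNoise()
{
    if (bIsCrouched)
    {
        return;  // crouch-walking makes no noise at all
    }

    const float Loudness = bIsRunning ? 1.0f : 0.3f;  // illustrative values

    // Broadcasts a noise event the AI's hearing sense can pick up; the AI
    // then investigates the reported location. A MaxRange of 0 leaves
    // range-gating to the listener's configured hearing radius.
    UAISense_Hearing::ReportNoiseEvent(
        GetWorld(), GetActorLocation(), Loudness,
        /*Instigator=*/this, /*MaxRange=*/0.f);
}
```

A thrown brick can report a louder noise event at its impact point in exactly the same way.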
The AI uses Unreal Engine's built-in AI Perception system to process both visual and auditory inputs from the environment. Here's how it works:
The perception update function handles the core logic:
Every time the AI detects something (sight or sound), it stores that detection information
For sound detections:
The AI compares the volume of the new sound against previously recorded sounds
If the new sound is louder, it becomes the AI's new target point to investigate
For visual detections:
As soon as the AI visually spots the player, it immediately switches to an aggressive state
The behavior changes from "Investigation State" to "Aggressive State"
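A sketch of what that perception update handler can look like when bound to the perception component's OnTargetPerceptionUpdated delegate; the blackboard keys and the LastHeardStrength member are illustrative, not the shipped names:

```cpp
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AIPerceptionSystem.h"
#include "Perception/AISense_Sight.h"
#include "Perception/AISense_Hearing.h"
#include "BehaviorTree/BlackboardComponent.h"

// Hypothetical handler on the enemy's AIController, bound in BeginPlay:
// PerceptionComponent->OnTargetPerceptionUpdated.AddDynamic(
//     this, &AEnemyAIController::OnPerceptionUpdated);
void AEnemyAIController::OnPerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
    UBlackboardComponent* BB = GetBlackboardComponent();
    const TSubclassOf<UAISense> SenseClass =
        UAIPerceptionSystem::GetSenseClassForStimulus(this, Stimulus);

    if (SenseClass == UAISense_Sight::StaticClass())
    {
        // Seeing the player overrides everything: go straight to Aggressive.
        if (Stimulus.WasSuccessfullySensed())
        {
            BB->SetValueAsObject(TEXT("TargetActor"), Actor);
            BB->SetValueAsBool(TEXT("bIsAggressive"), true);
        }
    }
    else if (SenseClass == UAISense_Hearing::StaticClass())
    {
        // Only retarget if this sound is louder than the last one we stored.
        if (Stimulus.Strength > LastHeardStrength)  // hypothetical member
        {
            LastHeardStrength = Stimulus.Strength;
            BB->SetValueAsVector(TEXT("InvestigateLocation"),
                                 Stimulus.StimulusLocation);
            BB->SetValueAsBool(TEXT("bShouldInvestigate"), true);
        }
    }
}
```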
The AI executes these actions every frame within its update loop:
Emits a scream the first time it engages the player in the current state
Determines the player's current location in the game world
Navigates toward that player's position
Performs an attack if the player is within attack range
However, we must handle edge cases where the player moves outside the navigation mesh. In such situations, the AI could get stuck in its aggressive state while trying to reach an unreachable position. To prevent this, we've implemented a time limit that forces the AI to exit the aggressive state after a certain duration.
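Put together, the per-frame aggressive logic plus the timeout can live in a behavior tree service tick; a sketch with hypothetical members (bHasScreamed, AttackRange, MaxChaseTime) standing in for the real ones:

```cpp
#include "BehaviorTree/BTService.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"

// Hypothetical service that ticks while the Aggressive state is active.
// (A production service would keep per-AI data in node memory rather than
// in plain members; members keep the sketch short.)
void UBTService_ChasePlayer::TickNode(UBehaviorTreeComponent& OwnerComp,
                                      uint8* NodeMemory, float DeltaSeconds)
{
    Super::TickNode(OwnerComp, NodeMemory, DeltaSeconds);

    AAIController* Controller = OwnerComp.GetAIOwner();
    UBlackboardComponent* BB = OwnerComp.GetBlackboardComponent();
    APawn* Pawn = Controller ? Controller->GetPawn() : nullptr;
    AActor* Player =
        BB ? Cast<AActor>(BB->GetValueAsObject(TEXT("TargetActor"))) : nullptr;
    if (!Pawn || !Player)
    {
        return;
    }

    // Scream once when the state is first entered.
    if (!bHasScreamed)
    {
        PlayScream();            // hypothetical: plays the engage sound
        bHasScreamed = true;
    }

    // Re-path toward the player's current position every tick.
    Controller->MoveToActor(Player, /*AcceptanceRadius=*/100.f);

    // Hand off to the attack task once in range.
    const float Dist = FVector::Dist(Pawn->GetActorLocation(),
                                     Player->GetActorLocation());
    BB->SetValueAsBool(TEXT("bInAttackRange"), Dist <= AttackRange);

    // Safety valve: if the player is standing off the nav mesh, the move
    // request can never complete, so give up after MaxChaseTime seconds.
    TimeInAggressiveState += DeltaSeconds;
    if (TimeInAggressiveState > MaxChaseTime)
    {
        BB->SetValueAsBool(TEXT("bIsAggressive"), false);
    }
}
```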
The attack sequence consists of three stages:
Lifting: The AI grabs and lifts the player, disabling all player input and forcing them to face the AI
Holding: Player controls are restored, allowing them to attempt to escape through rapid key presses
Throwing or Killing: The outcome depends on whether the player's escape attempt succeeds
Successful Escape: The AI throws the player forward and becomes stunned temporarily, giving the player time to escape. The AI then loses track of the player and returns to patrol
Failed Escape: Results in player death
The entire sequence is implemented using Unreal Engine's Animation Montages and Animation Notify States, which trigger code execution at specific keyframes in the animation. This synchronization between code and animation creates fluid, well-timed interactions that enhance the gameplay experience.
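A sketch of how an Animation Notify State can bridge the montage and the grab code, using UE5's NotifyBegin/NotifyEnd overrides; the class and the calls on the player are hypothetical:

```cpp
#include "Animation/AnimNotifies/AnimNotifyState.h"
#include "Components/SkeletalMeshComponent.h"

// Hypothetical notify state placed over the "Lifting" segment of the montage:
// player input is disabled while the notify window is active and the
// mash-to-escape controls come back when the "Holding" segment begins.
void UAnimNotifyState_LiftPlayer::NotifyBegin(USkeletalMeshComponent* MeshComp,
    UAnimSequenceBase* Animation, float TotalDuration,
    const FAnimNotifyEventReference& EventReference)
{
    Super::NotifyBegin(MeshComp, Animation, TotalDuration, EventReference);
    if (APlayerCharacter* Player = FindGrabbedPlayer(MeshComp))   // hypothetical
    {
        Player->DisableInputAndFaceActor(MeshComp->GetOwner());   // hypothetical
    }
}

void UAnimNotifyState_LiftPlayer::NotifyEnd(USkeletalMeshComponent* MeshComp,
    UAnimSequenceBase* Animation,
    const FAnimNotifyEventReference& EventReference)
{
    Super::NotifyEnd(MeshComp, Animation, EventReference);
    if (APlayerCharacter* Player = FindGrabbedPlayer(MeshComp))
    {
        Player->EnableEscapeInput();  // hypothetical: begin mash-to-escape
    }
}
```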
The game includes additional states that enhance the gameplay mechanics, but they are only activated under specific circumstances. These states are triggered exclusively when a boolean value in the blackboard is modified.
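Flipping such a flag from gameplay code is a one-line blackboard write; a sketch from a hypothetical scripted trigger:

```cpp
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"

// Hypothetical level-script trigger: setting the bool is all it takes, since
// a blackboard-based decorator in the tree observes the key and switches the
// AI into the corresponding specialized state.
void ATriggerVolume_SpecialEvent::ActivateSpecialState(AAIController* Controller)
{
    if (UBlackboardComponent* BB = Controller->GetBlackboardComponent())
    {
        BB->SetValueAsBool(TEXT("bSpecialEventActive"), true);
    }
}
```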
I created both the opening and closing cinematics for the game using Unreal Engine's Sequencer tool. This allowed me to capture choreographed character movements within the game environment and manipulate shader parameters to create camera-based visual effects. I collaborated with the level design team to ensure these sequences aligned properly with the audio.