
The Journey Continues

I know my blog posts lately haven’t exactly been filled with pretty pictures. I’ve been quite (one might say myopically) focused on the simulation back-end that will eventually power the agents in my Null Operator vignettes. I’ve (perhaps weirdly) found a significantly greater amount of joy lately in working on these sorts of things compared to my 3D asset production. It’s been a trend for me over the past year or so that’s become more and more powerful. Namely, the grunt work of producing scenes, making meshes, UVing all the things… it has worn me down, truth be told. This isn’t to say I intend to stop doing it. I just find that I need to be… significantly more excited about the next given thing I need to make than was once true to bring myself to open up Maya.

What this has meant, though, is that I’ve been spending a tremendously larger amount of time in Visual Studio (and a half-blank Google Doc), most frequently on this agent sim system. I’ve also been mapping out (veeery roughly) about a half dozen different prototypical game structures that contain agents, to get a notion of their typical action possibility space, their level of environment interaction, and, most importantly, the typical state structure of those agents. A good deal of this research and experimentation is driven by the combination of a deep dissatisfaction with agent behavior in most RPG/Action/Adventure games, and a general thought that I have been rolling around, which I now feel comfortable asserting:

The success of an indie game project is directly linked to the volume of production strategies employed that output supralinear returns on effort.

Where this dovetails with games that involve agents is at the crux of the system I am attempting to build. How I would describe this (roughly and more linearly than I would ultimately like) is as follows.

Agents driven by a combination of data simulation and rules-based behavior produce gameplay that is dynamic, that allows for improvisation, emergent phenomena, and uniqueness of experience, and that creates a player-learning loop many find kinesthetically, emotionally, and intellectually stimulating.

This production approach allows for immeasurably larger possibility spaces than its direct opposite, in which a finite set of manually sequenced behaviors is defined that plays out oblivious to the nuance of player context, or that transparently crawls through a limited tree of binary/trinary states which, when re-performed, are identical to their antecedent. (Though of course this structure still plays a part in many forms of games, from bullet hells to certain arcade-style games to other forms which are directly centered on the recognition of these fixed patterns.)

The possibility space of an agent behavior is bounded by the range, depth, fidelity, and relationship of inputs which connect to that behavior. Let us take a combat agent as an example.

We start with an input that is simply its target. Assuming self-knowledge, we now have a bunch of spatial data for where to fire.

We add some raycasting for obstacle detection. We know when to fire.

We add further information about target state (look position, action state). Now we know even better when to fire.

We add a pathfinding grid (or topo-scan via raycasts). Now we have a bounded space within which to dodge.

etc.
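To make this escalation concrete, here’s a minimal sketch (Python used purely as shorthand; all names, thresholds, and states are invented for illustration) of how each added input widens the conditions the agent can reason about:

```python
import math
from dataclasses import dataclass

@dataclass
class TargetInfo:
    position: tuple       # world-space position of the target
    action_state: str     # e.g. "idle", "dodging", "reloading"

@dataclass
class CombatAgent:
    position: tuple

    def has_line_of_sight(self, target: TargetInfo) -> bool:
        # Stand-in for a raycast against level geometry; a real
        # implementation would query the physics engine here.
        return True

    def should_fire(self, target: TargetInfo) -> bool:
        # Input 1: the target's position alone gives us aim, not timing.
        in_range = math.dist(self.position, target.position) < 30.0
        # Input 2: obstacle detection (raycasting) tells us *when* to fire.
        clear = self.has_line_of_sight(target)
        # Input 3: target state refines that timing further, e.g. don't
        # waste a shot while the target is actively dodging.
        vulnerable = target.action_state != "dodging"
        return in_range and clear and vulnerable

agent = CombatAgent(position=(0.0, 0.0, 0.0))
target = TargetInfo(position=(5.0, 0.0, 10.0), action_state="idle")
print(agent.should_fire(target))  # True
```

Each new input multiplies the number of distinguishable situations the agent can respond to, which is exactly the possibility-space growth described above.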

The possibility space of an agent behavior is also bounded by the range, depth, fidelity, and relationship of outputs from the agent system. Let us continue with the combat agent.

We start with just a cannon. The agent may fire.

We add alternate modes to the cannon. The agent may now choose the best one for the context.

We add thrusters. The agent may now dodge, chase, retreat.

We add aesthetic representation of mood. The agent may now visually express its state, or symbolic emotion.

etc.
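The same sketch works from the output side (again, all names are invented): each added output widens the action space the agent may choose from.

```python
from dataclasses import dataclass, field

@dataclass
class CombatAgentOutputs:
    cannon_modes: list = field(default_factory=lambda: ["standard"])
    has_thrusters: bool = False
    has_mood_display: bool = False

    def available_actions(self) -> list:
        # The action space is a direct function of the output hardware:
        # each added output expands what the agent *may* do.
        actions = [f"fire:{mode}" for mode in self.cannon_modes]
        if self.has_thrusters:
            actions += ["dodge", "chase", "retreat"]
        if self.has_mood_display:
            actions += ["express_mood"]
        return actions

print(CombatAgentOutputs().available_actions())
# ['fire:standard']
print(CombatAgentOutputs(cannon_modes=["standard", "scatter"],
                         has_thrusters=True,
                         has_mood_display=True).available_actions())
# ['fire:standard', 'fire:scatter', 'dodge', 'chase', 'retreat', 'express_mood']
```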

When in non-combat contexts, we give most game agents in the adventure/RPG/etc. genres next to no inputs or outputs. The inputs given are largely binary flags, plus the position of the player agent so that the agent can look at them. The outputs are occasionally movement, and one of several predetermined animation loops.

The few outputs we give non-combat game agents are manually constructed trees and sequences of events. Most of these trees lose their verisimilitude when executed more than once, as they betray the fixed-state nature of a symbolic system attempting to represent a more complex system than itself. The prototypical example of this is the tavern/shop owner agent which acts _identically_ every time it is interacted with.
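For the record, the failure mode being criticized fits in a few lines; this toy sketch is the shop owner as pure lookup table, with no memory, internal state, or environment to vary its answer:

```python
# The fixed-tree shopkeeper in miniature: output is a pure function of a
# single player choice, so a second visit replays the first verbatim.
SHOPKEEPER_TREE = {
    "greet": "Welcome, traveler! Finest wares in town.",
    "browse": "Take your time, take your time.",
    "leave": "Safe travels!",
}

def shopkeeper_respond(player_choice: str) -> str:
    return SHOPKEEPER_TREE[player_choice]

print(shopkeeper_respond("greet"))  # the same line on visit 1, 10, and 1000
```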

Defining Goals

Given this, what if we wished to go about things differently, and attempt to make non-combat agents which function more like organisms, and less like, well… props? I would argue that we need three core structures.

1. Non-player inputs which are both spatial and temporal and possess a typology

By this I mean we need an environment for our agents, or some abstraction of it. For our agents not to be just direct reflections of finite player action, there must be a data-space beyond the player that the agent has access to. Further, they must be aware of _what_ their environment is and contains, and have some notion of when that environment changes.
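As a rough sketch of what such an input might look like (all names invented, Python as shorthand), each environmental event an agent perceives carries a type, a location, and a time:

```python
from dataclasses import dataclass
from enum import Enum, auto

class PerceptType(Enum):
    # A coarse typology, so the agent knows *what kind* of thing changed.
    OBJECT = auto()
    SOUND = auto()
    AGENT = auto()
    TERRAIN = auto()

@dataclass
class Percept:
    kind: PerceptType   # typology
    position: tuple     # spatial component
    timestamp: float    # temporal component
    payload: dict       # details, e.g. {"object": "door", "state": "opened"}

# An environment change arrives as a typed, timestamped, located event
# rather than as a bare boolean flag:
door_opened = Percept(PerceptType.OBJECT, (12.0, 0.0, 4.0), 312.5,
                      {"object": "door", "state": "opened"})
print(door_opened)
```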

2. Internal agent simulation of self-state.

Life is change. Regardless of the appropriate fidelity of the internal agent simulation, there should be some mechanism through which elements of the agent change over time. Humans do not stand in the same place for 8 hours doing nothing. For my own work I’m emphasizing complexity in this system, as I’m curious which aspects of body and body-environment temporal change are perceptible once states, needs, and behaviors are being informed by a complex connectome of perceived personal state.
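In its simplest possible form (far below the fidelity I’m actually aiming for), such a mechanism is just state that drifts with time:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    value: float   # 0.0 = fully satisfied, 1.0 = critical
    rate: float    # growth per simulated second

@dataclass
class SelfState:
    needs: list

    def tick(self, dt: float):
        # Even with no external events, the agent drifts: hunger and
        # fatigue accumulate simply because time passes.
        for need in self.needs:
            need.value = min(1.0, need.value + need.rate * dt)

    def most_pressing(self) -> Need:
        return max(self.needs, key=lambda n: n.value)

state = SelfState([Need("hunger", 0.2, 0.0002), Need("fatigue", 0.1, 0.0001)])
for _ in range(3600):          # simulate an hour at one-second ticks
    state.tick(1.0)
print(state.most_pressing())   # hunger wins: standing still is not stasis
```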

3. An agent model of mind/action where the player is not the center of the universe

My compatriot Adam gave a wonderful demonstration the other day of something that never happens in a video game, something that exemplifies the… implicit structural assumptions we embed into our agent design. He said something to me, I got about three words into my sentence, and he interrupted me suddenly:

“You are not important for the next minute.” *he turns to Luke* “Hey I just remembered, I wanted to tell you…” *he turns back to me* “What were you saying again?”

These types of interruptions, interstitials, etc. are all commonplace components of human interaction. But our agent-interaction models almost universally center around giving the player 100% control over when and exactly how non-combat events take place. In conversation-heavy RPGs, the only time an agent refuses to speak to us is when dialogue/content is not available for the current moment in a linear narrative. It is used as a signal to the player to do something else, rather than as a natural component of the conversation that accompanies action.
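One way to break that assumption is embarrassingly simple: treat the player’s conversation as one more request competing for the agent’s attention. A toy sketch (all priorities and names are invented):

```python
import heapq

class Attention:
    """A toy attention arbiter: highest-priority topic wins focus."""
    def __init__(self):
        self._queue = []

    def request(self, priority: float, topic: str):
        # heapq is a min-heap, so negate priority for max-first ordering.
        heapq.heappush(self._queue, (-priority, topic))

    def current_focus(self) -> str:
        return self._queue[0][1] if self._queue else "idle"

npc = Attention()
npc.request(0.4, "talk to player")
npc.request(0.7, "tell Luke the thing I just remembered")
print(npc.current_focus())  # the player is not important for the next minute
```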

Even a standard Sims-esque needs-based model would be a fascinating addition to the RPG genre. Bethesda games have toyed with these, but I’ve always felt that they’ve had to artificially restrict them due to the way in which agents traverse an environment in ‘real’ time while symbolic time (which drives when, say, eating or sleeping would occur) passes at an absurdly accelerated rate. Tangent: I have never understood why games that allow the player to ‘wait’ any amount of time in a system feel the need to use abbreviated day-night cycles.
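To put a number on that mismatch (the values here are illustrative, not taken from any particular game):

```python
# Agents traverse space in real time, but needs are scheduled against a
# compressed symbolic clock.
REAL_SECONDS_PER_GAME_DAY = 48 * 60        # a 48-minute day-night cycle
SYMBOLIC_SECONDS_PER_GAME_DAY = 24 * 3600  # a full 24-hour symbolic day

time_scale = SYMBOLIC_SECONDS_PER_GAME_DAY / REAL_SECONDS_PER_GAME_DAY
print(time_scale)  # 30.0 -- each real second, half a symbolic minute passes

# A 20-symbolic-minute meal must now resolve in 40 real seconds, while the
# walk to the tavern still takes its full real-time cost.
print((20 * 60) / time_scale)  # 40.0
```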

Though this will likely be a blog post (or 3) of its own, it’s important to state that an agent built in the way I’m describing does not fit into the business-as-usual RPG structure we are used to. Imbuing agents with volition that doesn’t regard the player’s needs at every moment means reconsidering a number of the ‘action-loops’ that comprise typical RPGs. Perhaps even breaking the notion of a loop entirely in favor of something far more free-form. This is what makes me most excited about this endeavor.

Body Sim Mk. 2

After reviewing the results of my first body sim attempt, which I took to about… 80% completion (mostly just the GI and immune systems left to structure), I came to the realization that I had created something a bit too… specific for my tastes.

Namely, while I had produced a very interesting human, nothing was structured in such a way as to create anything other than a 100% regular human. The system wasn’t really built to handle limb amputation, nor cybernetic replacement. It wasn’t set up to handle anatomical differences between sexes. It also wasn’t in any way structured to handle other sorts of non-human agents (monsters, androids, dogs, other types of mechanoids).

The goal (possibly a pyrrhic one) is to build a base agent simulation structure that’s abstracted from specific genre context and exact environmental context. It should function as needed for, say… a deep-simulation survival game occurring in a turn-based context, or a fast-paced real-time Unity environment. Granted, this will mean divorcing a few things from the body, such as interstitial layers of sense interpretation and action logic, but in general I don’t mind moving what is, in a sense, cortex function into a layer which is more game-type specific. This also allows for letting the player be an ‘agent’ insofar as any other agent is an agent.
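Concretely, the layering I have in mind looks something like this (a minimal sketch with invented names; Python stands in for whatever the host engine uses):

```python
from abc import ABC, abstractmethod

class BodySim:
    """Game-agnostic layer: knows only internal state and elapsed time."""
    def __init__(self):
        self.fatigue = 0.0

    def tick(self, dt: float):
        self.fatigue += 0.01 * dt

    def query(self) -> dict:
        return {"fatigue": self.fatigue}

class Cortex(ABC):
    """Game-specific layer: interprets body state and chooses actions."""
    @abstractmethod
    def decide(self, body_state: dict) -> str: ...

class TurnBasedCortex(Cortex):
    def decide(self, body_state):
        return "rest" if body_state["fatigue"] > 0.5 else "act"

class PlayerCortex(Cortex):
    # The player is an 'agent' too: their cortex is just input hardware.
    def decide(self, body_state):
        return input(f"fatigue={body_state['fatigue']:.2f}, action? ")

body = BodySim()
body.tick(60.0)
print(TurnBasedCortex().decide(body.query()))  # "rest"
```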

So… the challenge in this (totally surmountable, as I’m finding) is to… operationalize the human body into a set of abstractions that would make sense for another mammal, a robot, a wolfman, maybe even something as surreal as a man with banana hands.

Thankfully, having coded a majority of the human body in my previous prototype, I have a great reference system to work from, and a clear method for seeing if anything important is being ‘lost’ in the move to a higher abstraction (i.e., if I accidentally reduce expressiveness in the system).

My current pseudo-class structures work something like this:

Agent Regions

Agent regions can be thought of as conceptual areas of the agent body. They will be broken down akin to how one might define hit-zones and inventory slots: head, neck, arm, hand, etc.

Agent regions will each contain data structures for a rough abstraction of skin, muscle, and bone, named in this system Surface, Structure, and Support (don’t blame me for loving alliteration). Each of these will track relevant local state information, such as temperature, integrity, etc., and will (eventually) be able to have other objects ‘mounted’ to them. Regions will also manage their own place in the agent body hierarchy, being aware of parent/child relationships. Agent regions will have a local mirror of whatever the Control system (listed below) represents in terms of pain/damage.
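In rough pseudo-class form (Python as shorthand; all field names are provisional):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Layer:
    """One of the three alliterative layers: Surface, Structure, Support."""
    temperature: float = 37.0
    integrity: float = 1.0                       # 1.0 intact, 0.0 destroyed
    mounted: list = field(default_factory=list)  # objects mounted to the layer

@dataclass
class Region:
    name: str
    parent: Optional["Region"] = None
    children: list = field(default_factory=list)
    surface: Layer = field(default_factory=Layer)    # skin analogue
    structure: Layer = field(default_factory=Layer)  # muscle analogue
    support: Layer = field(default_factory=Layer)    # bone analogue
    local_pain: float = 0.0  # local mirror of the Control system's view

    def attach(self, child: "Region"):
        # Regions manage their own place in the body hierarchy.
        child.parent = self
        self.children.append(child)

# A fragment of a humanoid hierarchy:
torso, arm, hand = Region("torso"), Region("arm"), Region("hand")
torso.attach(arm)
arm.attach(hand)
print(hand.parent.name, "->", hand.name)  # arm -> hand
```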

Agent Systems

An agent will have a single set of vital systems, which represent the diffuse body systems of a given agent. They are Control, Processing, and Locomotion, which in the case of a human will roughly model Nervous, Cardiopulmonary + Gastrointestinal, and Motor-muscular systems. Each system will be responsible for a typology of agent mechanisms.

Control will handle management of global sensory states including pain, pleasure, proximity, and awareness.

Processing will handle the transformation of abstractions of ‘fuel’ available to the agent body, which will likely be broken down into something that encompasses blood oxygen, platelets, glucose, white blood cells, etc.

Locomotion will handle the gross physical state of the agent in terms of its capability to perform requested physical actions.
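As a sketch of that three-way split (again, provisional names and Python as shorthand):

```python
class AgentSystem:
    """Base for the vital systems; each owns a typology of mechanisms."""
    def __init__(self, name: str):
        self.name = name
        self.organs = []  # organs registered against this system

class Control(AgentSystem):
    """Global sensory state: pain, pleasure, proximity, awareness."""
    def __init__(self):
        super().__init__("control")
        self.pain = 0.0
        self.awareness = 1.0

class Processing(AgentSystem):
    """Transforms abstract 'fuel' (oxygen, glucose, ...) for the body."""
    def __init__(self):
        super().__init__("processing")
        self.fuel = {"oxygen": 1.0, "glucose": 1.0}

class Locomotion(AgentSystem):
    """Gross physical capability to perform requested actions."""
    def __init__(self):
        super().__init__("locomotion")
        self.capability = 1.0

systems = {s.name: s for s in (Control(), Processing(), Locomotion())}
print(sorted(systems))  # ['control', 'locomotion', 'processing']
```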

Agent Organs

An agent will have a collection of vital organs, which are registered as being located in a given Agent Region, and also registered with the relevant Agent System. Everything from a heart, a lung, or a biofeedback monitor to an internal plasma reactor would be modeled using this data structure.

In short, each organ will have a set of state information, a set of requirements in terms of input resource/fuel, and possibly a transformed output. Additionally, organs should possess some ability to provoke an event upon their activation/deactivation, but I’m still not sure how I wish to represent this, as I haven’t yet tried to tackle the generalization of anything like status effects.
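A sketch of the organ structure as described (the on_change hook is a placeholder for the still-undecided status-effect events; all names invented):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Organ:
    name: str
    region: str                 # Agent Region it is located in
    system: str                 # Agent System it is registered with
    consumes: dict = field(default_factory=dict)  # required input fuel per tick
    produces: dict = field(default_factory=dict)  # transformed output per tick
    on_change: Optional[Callable[[str], None]] = None  # activation event hook
    active: bool = True

    def set_active(self, active: bool):
        if active != self.active and self.on_change:
            self.on_change("activated" if active else "deactivated")
        self.active = active

heart = Organ("heart", region="torso", system="processing",
              consumes={"glucose": 0.1}, produces={"oxygenated_blood": 1.0},
              on_change=lambda event: print("heart", event))
heart.set_active(False)  # prints: heart deactivated
```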

And this will hopefully get me..

Between these types, I’m fairly certain I can build back towards almost identical functionality to what I had in my prior experiment, but with several huge benefits.

The new system is significantly more consistent in its representation of base components, instead of being a bunch of special cases with their own alien logic.

The system supports (theoretically) the representation of at least any animal/animal-like construct, possibly more.

The system supports the creation of hybrid agents which are part X-thing, part Y-thing, which enables things like cybernetics, genetic freaks, and Cronenbergs from Rick and Morty.

As I shift from the design to the production phase of this, I’ll be doing some more writing on the design philosophy, the custom tools I’ll need, and then (eventually) some actual field testing of said agents in various contexts.

I must admit, I can’t wait to give my jellyfish robit a heart, and hopefully, a mind of its own.

One Response to “Trees, Sequences, Loops, More Body Sim”

  1. DasBilligeAlien

    I think a game where you only see the reactions and actions of the internal organs would be very interesting. No visual representation of what the agent is doing, just a visual representation of how its body reacts. Maybe give some manipulation methods: mutation, or some methods of conditioning behavior. Be inside your Pokémon.

