Unity Application Framework

I’ve spent a lot of time working with different approaches to Application Frameworks in Unity.

For example, my first 1.x releases of Composer and PoseMe were based on a data binding model, designed in a way similar to (and highly inspired by) the Model View ViewModel (MVVM) pattern.

The whole point of the thing was to decouple data (on disk and in memory) from UI (logic) from GUI (toolkit and design), which is supposed to make for cleaner and more maintainable code.  And when done correctly, I even got a functional command and undo system out of it.

This was like trying to graft an elephant’s trunk onto a hippopotamus.  Can it be done?  Yes.  Should it be done?  Ummmm.  I did it.  It worked.  “Should” is such an imprecise term.  Let’s move on.

When I moved on to Composer version 2, I completely refactored the codebase to move from an older GUI system, iGUI, to a newer GUI system, NGUI.  In theory, such a thing should have been easy, because I had separated my data and UI from the GUI.  However, my data binding API was not compatible with the new GUI toolkit.  I ripped out my old data binding code, and rebuilt the layers using a different one.  I’d say about half my code survived.  Not a terrible attrition rate. But not pleasant.

This was like trying to re-graft the elephant’s trunk onto a giraffe.  Can it be done? Yes.  Should it be done?  It worked.  Let’s not dwell on such things.

Since then, however, I became convinced that the MVVM framework was wrong for working with Unity.  I looked at Model View Controller (MVC) patterns and found that they were similarly wrong for Unity.  I dug into some other, more exotic patterns and fads, like reactive (Rx) programming.  From previous experience, I knew Aspect Oriented Programming (AOP) to be a good explanation of what Unity was doing, but I also knew AOP to be ill-defined.  I looked into some great MVVM tools from Invert Game Studios.  I considered a more pure Finite State Machine (FSM) system like PlayMaker.

Ultimately, I came to the conclusion that I was correct in my assessment that MVVM and the like were really incompatible with a content creation environment like Unity.  However, I did run into what seemed to be the correct answer: the Entity Component System (ECS) pattern.

I looked into Invert’s own ECS system as well as some others.  My early Virtual Reality (VR) experiments involved trying to implement my concepts and systems using these various ECS systems.  I literally maintained parallel projects where I used each framework to try to implement the design.  But I found them all woefully inadequate when it came to networking.  They were not just bad at it; they were incompatible with it.  They had not considered it a baseline capability that needed to be integrated into the workflow.

As such, I decided to take what I had learned and build an application framework for myself that can support an ECS methodology, but starts with networking as a default assumption.  It should handle local sessions as self-hosted network sessions.  This, after all, is how most AAA game engines are designed from the ground up these days.

Single player modes are essentially locally hosted network games in which login is restricted.  This is done to make the multiplayer networked capabilities reasonable to develop in tandem with everything else.  This is one of the older lessons in the industry from at least 1999, if not earlier.
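To make that concrete, here is a minimal sketch of the idea using Unity’s (now legacy) UNET NetworkManager.  This is my illustration of the pattern, not code from the fie package:

```csharp
using UnityEngine.Networking;

// A sketch of "single player is a locked-down host": the game still runs a
// client and a server, but on one machine, with logins effectively closed.
public static class SinglePlayerBootstrap
{
    public static void StartSinglePlayer(NetworkManager manager)
    {
        manager.maxConnections = 1; // leave room for the local client only
        manager.StartHost();        // run server and local client in one process
    }
}
```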

While I did this initially to support my VR work, I found that the framework doesn’t actually need to assume VR.  Rather, it can work as well for a productivity tool as for a 2D sprite game, a traditional 3D game, a VR game, an AR game, or anything else you can do in Unity.

The result has been my FIE Unity Application Framework, which internally is just the “fie” package, and which I will release as open source soon.

What follows is a high-level design description of what it does.

Kernel

The kernel is a central and persistent piece of data and code that makes up the core of the application. Everything the application can do is somehow bolted to the kernel.

[Image: KernelExample]

In the example above, you see a kernel object with a bunch of other objects bolted onto it.  This is a VR kernel, so it has some VR-specific things.  A 2D Sprite kernel might have different things bolted on.

I’m using the ReWired package in this example, to handle some input devices.  And since it’s a VR application, I have included a “Spectator” to serve as a fallback for the VR device if nothing else exists, to anchor the user in space.

Most importantly, the kernel doesn’t exist in your scenes until runtime.  Rather, it exists in its own “kernel” scene.  And when you run a scene that opts into the application framework, in the editor or at runtime, something else is responsible for loading the kernel.  More on that later.

You can assume, however, that the kernel always exists and is loaded exactly once.  When you change, load, or unload scenes, the kernel persists.  And ideally, it is the home of all systems in an ECS model.

Further, I use it to house any kind of OS-kernel-style services.  For example, I acquire Mixed Reality (MR) devices from the kernel.  This is why the ReWired input system is on there; input devices are provided to Unity via kernel interfaces, after all.
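To make the pattern concrete, here is a minimal sketch of what the kernel implies.  Kernel.Instance and GetService<T> are my own illustrative names, not the actual fie API:

```csharp
using UnityEngine;

// A minimal sketch of the kernel's persistence and service-access pattern.
public class Kernel : MonoBehaviour
{
    public static Kernel Instance { get; private set; }

    void Awake()
    {
        // The kernel is loaded exactly once and survives scene changes.
        if (Instance != null) { Destroy(gameObject); return; }
        Instance = this;
        DontDestroyOnLoad(gameObject);
    }

    // Systems and kernel services (input, MR devices, and so on) are
    // components bolted onto the kernel object or its children.
    public T GetService<T>() where T : Component
    {
        return GetComponentInChildren<T>();
    }
}
```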

Kernel Start Routine

Kernel Start Routines are the hooks that start everything off.  A scene opts in to the application framework by adding a Kernel Start Routine.  Take the example below.

[Image: KernelStartRoutineExample1]

This is a scene I am using as a VR Setup scene in my application.  It’s a space to do any calibration or setup on your VR device, or on the application itself, before you begin your experience.

Notice that there is no Kernel in this scene.  Starting from the bottom, there’s a piece of geometry that makes a room to walk in.  That’s ideally what my artists and level designers are focusing on.  That’s all standard Unity work.  The directional light is equally uninteresting.

However, the Kernel Start Routine object is, more specifically, a “VR Calibration Only Kernel Routine.”  What this means is that if you “play” this scene, it will execute a Kernel Start Routine meant to immediately run the VR Calibration, and only that.  In other words, it’s for testing the VR Calibration and Setup routines of the game.  It has a Setup Session Type attached to it that it uses, but we’ll get into sessions later.

I’ll summarize the code in that kernel start routine for you (a rough code sketch follows the list):

  1. Get the Kernel.
  2. Start all the Mixed Reality Devices available from the Kernel.
  3. Create a VR Domain and publish it to the Kernel.
  4. Loop forever, doing the following:
    1. Start a VR Calibration Session.
    2. Complete the VR Calibration Session.
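Here is roughly what that could look like, reusing the hypothetical Kernel sketch from earlier.  The MR-device, VR Domain, and session calls are stand-ins for whatever the real fie API exposes:

```csharp
using System.Collections;
using UnityEngine;

// A rough sketch of the calibration-only routine summarized above.
public class VRCalibrationOnlyKernelRoutine : MonoBehaviour
{
    IEnumerator Start()
    {
        var kernel = Kernel.Instance;       // 1. get the Kernel

        // 2. start all Mixed Reality devices available from the kernel,
        // 3. create a VR Domain and publish it to the kernel
        //    (both elided here; they belong to the fie.mr services)

        while (kernel != null)              // 4. loop forever:
        {
            // 4.1 start a VR Calibration Session from the Setup Session Type,
            // 4.2 wait for it to complete, then start another one
            yield return RunCalibrationSession();
        }
    }

    IEnumerator RunCalibrationSession()
    {
        yield return null; // placeholder for spawning and awaiting the session
    }
}
```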

As stated prior, this is what you’d want the application to do to help you test the VR Calibration and Setup part of the program.  But it is not what you’d want the application to do when you release it and the user runs it.  Obviously, the Kernel Start Routine that runs in release should load a menu and enter you into some kind of experience.  At some point, it probably does start a VR Calibration Session.  But it shouldn’t loop the way this test routine is designed to do.

Another Kernel Start Routine I have authored is a simple VR Scene Kernel Routine.  It too is not the full application routine.  Rather, it simply loads the kernel, does minimal setup to let the VR scene you are currently working on run, and lets you experience it directly.

Once you have a more structured game, though, you’d want to subclass the basic VR Scene Kernel Routine and have it do more useful testing setup, such as spawning your character in a given location with a given equipment load-out to make testing easier for whatever you’re working on.  You’d also want to make it easy for the game designer to change the spawn location and equipment load-out to support their rapid design work.

Kernel Start Routines are not just application control.  They’re also development control.

It’s reasonable for example, to create a “weapon editing scene” for each weapon in the game.  It would likely contain the main prefab for the weapon.  The “Weapon Editing Kernel Start Routine” you might embed in such scenes, might load up a shooting range and give the weapon to the tester.

As you might guess, Kernel Start Routines only run for the first scene Unity loads.  Therefore, it’s safe to leave editing Kernel Start Routines in your scene files.  They won’t be run unless you’re actually editing the scene and hit play.
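One plausible way to implement that first-scene-only behavior is a simple guard; this is my guess at the mechanism, not necessarily how fie does it:

```csharp
using UnityEngine;

// A sketch of a start routine that only runs for the first loaded scene:
// once any routine has bootstrapped the application, later scene loads
// find the flag already set and do nothing.
public abstract class KernelStartRoutineBase : MonoBehaviour
{
    static bool hasRun;

    void Awake()
    {
        if (hasRun) { Destroy(this); return; } // another routine already ran
        hasRun = true;
        Run();
    }

    // Load the kernel scene, then do routine-specific setup.
    protected abstract void Run();
}
```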

The actual game should probably always load a scene called “start,” which will be empty except for your actual “Game Kernel Start Routine.”  It will do whatever you think is the right thing for your application.

You said there would be networking?

So far, nothing seems to have touched on networking.  However, keep in mind, something has to bootstrap the networking.

In Unity, networking is usually interleaved with scene loading.  By definition, once a network connection is made, Unity is maintaining a shared “3D space” and the network entities in that space.  With the more advanced scene management features, you can technically load and unload scenes into a network session.  But you can’t change the shared space itself or its network entities.  Therefore, you need to start managing the networking outside of the space and the scenes.  This is why the Kernel and Kernel Start Routines live outside the scenes, in a layer that manages them.  They manage the local application and, ideally, set up the networking.

Spawnable Types

There is an abstract generic base class known as a SpawnableType.  In Unity’s networking, network entities are prefabs, and because of the way the networking IDs work, they must be registered with the networking manager on both the client and the server in the same way.  The kernel handles this automatically when you inherit from SpawnableType and embed it in the kernel, or in a scene.
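Here is a hedged sketch of what such a base class might look like, assuming UNET-era networking.  The class and member names are my illustration, not the actual fie API:

```csharp
using UnityEngine;
using UnityEngine.Networking; // Unity's (now legacy) UNET networking

// A sketch of the SpawnableType idea: register a network-entity prefab the
// same way on client and server, and spawn instances through the server.
public abstract class SpawnableType<T> : MonoBehaviour where T : NetworkBehaviour
{
    [SerializeField] GameObject prefab; // the network entity prefab

    // Called on both client and server so the prefab's asset ID is
    // registered identically on both sides.
    public virtual void Register()
    {
        ClientScene.RegisterPrefab(prefab);
    }

    // The server spawns instances; connected clients mirror the spawn.
    public virtual T Spawn()
    {
        var instance = Instantiate(prefab);
        NetworkServer.Spawn(instance);
        return instance.GetComponent<T>();
    }
}
```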

SessionTypes

Since SessionRoutines are network entities themselves, the SessionType is used to make them spawnable.  Typically, SessionTypes are a child of either the KernelStartRoutine or the kernel itself.
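Under the SpawnableType sketch above, a SessionType could be as thin as the following (ExampleSessionRoutine is sketched in the next section; again, these are illustrative names rather than the fie API):

```csharp
// A SessionType as a concrete spawner for session routines.
public class ExampleSessionType : SpawnableType<ExampleSessionRoutine> { }
```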

Session Routines

The session routine is a network entity that represents the logical control of a given network session.  One could think of this as the “game logic” for a given session, though it need not be a game, nor need it be complex.  It could be as simple as allowing a GUI to be used.

Typically, the KernelStartRoutine will initiate a network session which includes all the necessary login information.  Part of that process on the server includes passing the actual session type to be run.

Keep in mind, the KernelStartRoutine might create a host network session, which is just a local session in which both the client and server are run on the same system and no remote clients may log in.

The session routine will be spawned on the server, and any client that logs in will therefore also spawn the routine.  Session routines are written using Unity’s own client and server logic; they can include Client methods, Server methods, RPC methods, and the like.  In this manner, the server controls the clients and the clients may interact with the server.
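A hedged sketch of the shape of such a routine, using UNET attributes; the names are illustrative, not the actual fie API:

```csharp
using UnityEngine;
using UnityEngine.Networking;

// A session routine as a network entity: spawned by the server, mirrored
// on every client, driven by server/client/RPC methods.
public class ExampleSessionRoutine : NetworkBehaviour
{
    public override void OnStartServer()
    {
        // Server-side setup for the session: open login, load content, etc.
    }

    public override void OnStartClient()
    {
        // Runs on every client that spawns the routine, late joiners included.
    }

    [Server] // may only execute on the server
    public void EndSession()
    {
        RpcOnSessionEnded();
        NetworkServer.Destroy(gameObject);
    }

    [ClientRpc] // invoked by the server, executed on all clients
    void RpcOnSessionEnded()
    {
        // Tear down client-side session state, e.g. close the session GUI.
    }
}
```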

Session Routines are registered with the kernel and any code may acquire the kernel and request the running session.

A network session may do pretty much whatever it needs to do to manage the user’s experience.  For example, the VR Setup Session I referenced earlier creates a Wizard GUI and executes it.  The GUI in turn, when completed, ends the session.  It’s up to the kernel start routine to do something useful upon the end of the session.  But of course, the session’s own logic might have requested something via the kernel in that case.

SceneTypes

A scene type is just a SpawnableType that is used to spawn SceneLoaders.

SceneLoaders

By spawning scene loaders, the server is able to manage scene loading on itself and on clients the way it manages any other network entity.  Clients that come in late get whatever SceneLoaders are currently loaded, just as they would any other network entity.

Likewise, SceneLoaders also unload scenes across all clients and the server when they are destroyed.
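A sketch of how such a loader might look under UNET; the names and the exact load/unload hooks are my assumptions, not the actual fie API:

```csharp
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.SceneManagement;

// A SceneLoader as a network entity: its spawn loads a scene additively,
// and its destruction unloads that scene everywhere.
public class ExampleSceneLoader : NetworkBehaviour
{
    [SyncVar] public string sceneName;   // which scene to load
    [SyncVar] public string contextData; // contextual data for configuration

    public override void OnStartServer() { Load(); }

    public override void OnStartClient()
    {
        if (!isServer) Load(); // a host has already loaded it as the server
    }

    void Load()
    {
        SceneManager.LoadScene(sceneName, LoadSceneMode.Additive);
    }

    void OnDestroy()
    {
        // Destroying the loader unloads the scene on server and clients alike.
        if (!string.IsNullOrEmpty(sceneName))
            SceneManager.UnloadSceneAsync(sceneName);
    }
}
```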

SceneConfiguration

A SceneConfiguration routine is not a network entity.  It does, however, run the same on all clients.

When a scene is loaded, if it has a SceneConfiguration in it, the framework will pass the SceneLoader to it and allow the configuration routine to configure the scene.  Usually, it will use synced data on the SceneLoader to do something clever and contextual.

Generally, you’d expect the session’s routine to result in the user choosing to do something that requires a scene be loaded.  That choice results in the spawn of a SceneLoader carrying contextual data.  The SceneLoader spawns, syncs, and loads the scene.  The loaded scene has a SceneConfiguration in it.  The SceneConfiguration routine runs and sets up the scene per the contextual information in the SceneLoader.  Since this is done via a clever networked SceneLoader, all clients do it independently, but identically.  And clients that come in late catch up.
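Continuing the SceneLoader sketch above, a SceneConfiguration might look like this; Configure and its caller are illustrative assumptions:

```csharp
using UnityEngine;

// A SceneConfiguration routine: an ordinary (non-networked) behaviour that
// configures the freshly loaded scene from the loader's synced data.
public class ExampleSceneConfiguration : MonoBehaviour
{
    // The framework would call this after the scene finishes loading,
    // passing in the SceneLoader that triggered the load.
    public void Configure(ExampleSceneLoader loader)
    {
        // Because contextData is a SyncVar, every client runs this with the
        // same value and configures the scene identically, late joiners included.
        Debug.Log("Configuring scene with context: " + loader.contextData);
    }
}
```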

Avatar

Avatars are passed to the kernel by the KernelStartRoutine upon creating a network session.  In Unity, the network manager automatically spawns the player upon successful network connection.  Therefore, one would expect the Avatar to be spawned before the network session routine.

The Avatar is a special network entity meant to represent the player in a Unity game.  Most, if not all communication with the server is accomplished through the Avatar.

The Avatar is of course spawned with client authority and spawns on the server when a client connects.

Most systems will populate the Avatar with interfaces to manage their top level client/server communication.  In a sense, the Avatar can be used so that kernels and services on the client and server can communicate.

An example of this might be the network GUI system, which uses the authority system to request authority over network GUIs through the Avatar.

Or, an elevator/portal system I authored, which requests teleports of the Avatar.
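A sketch of what that channel might look like on the Avatar, again with illustrative names rather than the actual fie API:

```csharp
using UnityEngine;
using UnityEngine.Networking;

// The Avatar as the client's channel to the server: spawned with client
// authority, so the owning client may issue Commands through it.
public class ExampleAvatar : NetworkBehaviour
{
    [Command] // called from the owning client, runs on the server
    public void CmdRequestTeleport(Vector3 destination)
    {
        // The server validates the request and moves the avatar; the new
        // position then syncs back to every client (e.g. via NetworkTransform).
        transform.position = destination;
    }
}
```

An elevator or portal system, for instance, could call CmdRequestTeleport from its client-side trigger and let the server do the actual move.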

The Avatar also implements a user preferences system.

Typically, you’d expect things like the Avatar’s position and pose to be synchronized as well.  This depends on the structure of the Avatar prefab.

While it may not be completely plausible as Unity exists currently, the Avatar maintains a hopeful API structure, such that multiple Avatars may exist on a single client.

It is likely that the Avatar will be modified to better mirror Unity’s newer concepts in its upcoming IO system, which include provisioning a device to default players and to specific players dynamically.

Finally

I’d just like to point out that most of my examples are expansions upon the application framework.  The framework itself tries not to be too specific.  Rather, it is my fie.mr package that implements a bunch of my Mixed Reality (MR) services.  And further, it is my fiemr.cardboard and fiemr.steamvr packages that implement the Google Cardboard and SteamVR services, respectively.  And lastly, it’s the application itself that one would expect to expand further into the final application-level functionality.

Further posts, pages, and releases will cover these AR, VR, and MR systems in more depth.  Coming soon.