
Paint By Monsters DevLog 5 – Shader Experimentation

It’s been entirely too long since I was last able to talk about the development on Paint By Monsters, but with the Conceptualization Funding sorted and a partnership forged with the incomparable Stellar Boar Productions, I’m finally able to put a little of the old grey goo to work on matters technical and creative.

So: Shaders. This goes back to the thread that was originally pulled when I started with the Brush Stroke feature demo. There are lots of ways to paint stuff in Unity, but I’ve been meaning to take a deep dive (ok, a shallow dive) into shaders for a while now, and this seemed like a good opportunity.

If you want to jump directly to the shader itself, it’s here:
https://www.shadertoy.com/view/Dt2XWG

Otherwise, please, read on.

The Joys of Shadertoy

If you’re not already familiar with Shadertoy, it’s a website that allows you to create complex shaders within your browser, which is just the kind of nonsense that leads to a Destroy All Software talk. But it’s also just kind of awesome, since it lets you mess with shaders on a tight loop of try-fail-swear-fix-enjoy.

I looked at a bunch of different shaders, mostly having to do with mouse trails and such, but when I started iterating I took opexu’s Mouse Trails Soft shader as my jumping-off point. Obviously it doesn’t really look quite like a paint effect, but it leaves a persistent trail of colour based on mouse input, which is about as good as it gets.

I’ll admit I’ve forgotten a lot of what I learned back when I was experimenting with the Kinect. I’d half-forgotten I even wrote a post about a shader with inputs. As a result, I had to relearn some things.

Shadertoy uses a different representation for shaders than Unity, because nobody who implements a shader architecture can seem to leave well enough alone. If you’re not familiar with one or the other, I’d encourage you to read the Shadertoy and Unity documentation, but the short version is that in addition to the Image code (which determines the onscreen color of each fragment), Shadertoy allows you to use up to 4 buffers, and each buffer (plus Image) can accept up to 4 inputs (iChannel0-3).

Mouse Trails Soft uses one buffer, which is where it holds both the image so far and the last known mouse pointer position. The latter is by turns both clever and wasteful, but in this specific case it makes sense.

BufferA takes BufferA (i.e. itself) as input on iChannel0. It took me a while to suss out enough to grok the relevant details here, so don’t worry if the following looks like gibberish right now.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{   
    vec2 uv = fragCoord/iResolution.xy;
    vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
    vec2 uvQuad = uv*aspect;
    
    vec2 mP = uvQuad - iMouse.xy/iResolution.xy*aspect;
    float d = 1.-length(mP);    
    
    vec4 bufA = texture(iChannel0, uv);
    vec2 mPN = bufA.zw;
    vec2 vel = min(max(abs(mPN - mP), vec2(0.001)), vec2(0.05));
    
    d = smoothstep(0.85,1.3,d+length(vel))/0.4;
    vec2 dot = vec2(d);

    dot = max(dot, bufA.xy);    
    vec4 col = vec4(dot.x, dot.y, mP.x, mP.y);
    
    if(iFrame == 0) {
        col = vec4(.0,.0, mP.x, mP.y);
    }
    
    fragColor = vec4(col);
}

I went through this code slowly, teasing out the meaning of each line.

  1. OK, so uv is the texture coordinate normalized by resolution
  2. Aspect is the aspect ratio
  3. uvQuad is the aspect ratio-normalized coordinate
  4. Wait, but we’re adjusting by the mouse coordinate
  5. Ok, uv is the coordinate of the current fragment.
  6. So mP is a resolution+aspect-normalized vector from the mouse to the current fragment
  7. And d is initialized to…one minus that length?
  8. bufA is the existing 4-component color at coordinate uv
  9. mPN is the…zw value of that value? Which is actually the x and y value from last frame?

For full clarity, the Image code is below, but it’s mostly just repeated setup. This is the point where I realized that I don’t need to understand every detail. Maybe you can tell me why that col *= uv + 1-uv is there.

void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
   vec2 uv = fragCoord/iResolution.xy;
   vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
  vec2 uvQuad = uv * aspect;
  vec4 bufA = texture(iChannel0, uv);

  vec4 col = bufA;
  col.xy *= (uv.xy+(1.-uv.xy));

  if(iFrame == 0) {
    col = vec4(.0,.0,.0,1.);
  }

  fragColor = vec4(col.xy, .0, 1.);
}

With my newfound (albeit limited) understanding in hand, it was time to start working on my own shader. I figured I’d learned enough to start from scratch, since the soft-circle shape was pretty far from where I wanted my code to end up.

First things first: 3 dimensional math!

I want to draw a box whose axes are aligned with the direction my mouse is moving. The side of the box parallel to the motion is just the box’s width multiplied by the mouse’s velocity unit vector, which I can get by normalizing the velocity vector. Assuming, that is, that I can get the velocity vector. Which means I need not just the current mouse position – available as iMouse – but the previous position as well.

Luckily, that’s exactly what opexu’s trick is for. Since opexu stores the mouse position in the blue and alpha components, however, it limits which colors are available for the paint itself. Not having a blue component seemed like a pretty big issue when Paint By Monsters is so heavily based on painting, so I decided to dedicate a whole buffer to tracking current and previous mouse coordinates.

And so, Buffer A was added. Buffer A takes itself as input and for each execution it puts the last mouse position into the R & G components, and the new position into the B & A components. On the first frame, it loads everything with position 0, which, as it turns out, can be a bad choice. More on that in a minute.

Buffer A uses the code below to update itself, reading its own previous contents from iChannel0.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if(iFrame == 0) {
        fragColor = vec4(0.0);
    } else {
        vec2 uv = fragCoord / iResolution.xy;
        vec2 m1 = texture(iChannel0, uv).zw;
        fragColor = vec4(m1,iMouse.xy);
    }
}

The next thing I needed was my persistent graphic. Again, opexu’s approach seems fine here – I can use a self-referencing buffer that just holds all of the fragments painted so far.

Thus Buffer B was added. Buffer B, as it turns out, is where the key goodness happens. Since I’m reading color from it, it needs to know about newly-painted fragments. Which means I need to paint into it, rather than in Image.

I’ll still need the mouse’s velocity vector, so I feed Buffer A into Buffer B’s iChannel0 input.

vec4 bufM = texture(iChannel0, uv);

I can then calculate my velocity vector.

vec2 mouse_move_vec = bufM.zw - bufM.xy;

As mentioned previously, I wanted a simple box-shaped brush for this iteration of the shader. The vector “width” of the box can be represented as w * velocity_normalized, where w is a scalar equal to half the full width of the box and velocity_normalized is the mouse’s velocity unit vector. This, by itself, defines a narrow, infinitely tall box. We can determine whether a particular fragment falls within that width by projecting it onto the velocity vector.

The quantity of interest, in this case, is the fragment’s absolute distance from the centre line of the box, measured along a direction parallel to the short axis (that is, along the velocity). We first need to calculate the vector from a point already known to lie on that centre line – the stored mouse position, v_mouse1, for example – to our fragment.

v_frag = fragCoord - v_mouse1;

If we take the dot product of this vector with a unit vector that’s parallel to the short axis of the box (aka velocity_normalized), we get the parallel distance from that anchor point to the fragment coordinate.

d_parallel = dot(v_frag, velocity_normalized);

If the absolute value of this distance is less than w, the fragment lies within our infinitely tall box.

We can multiply this length by velocity_normalized to get the vector representation of the parallel displacement, which will come in handy later.

v_parallel = d_parallel * velocity_normalized;

The next step is to constrain our box’s height as well. For this we need the perpendicular distance from one end of the velocity vector to our fragment. If we can get a vector that is perpendicular to velocity but still in the plane of our canvas, we’re good.

The simple way to get a vector that is perpendicular to another vector is to use the cross product, represented in GLSL as the cross() function. If you’re just getting started with vector math, this can be a stumbling block – we only have the velocity vector, right?

But the thing is, we also have a 2d plane – the canvas – and a plane can be represented by a 3-dimensional vector that is perpendicular to its surface. Fragment coordinates use x and y, which means (by convention) our 3rd dimension is z.

We can take the cross product of a unit z vector with our velocity.

vec3 v_perp = cross(vec3(velocity_normalized, 0.), vec3(0., 0., 1.));

However, because we calculated v_frag and its parallel component, v_parallel, there’s another, less computationally demanding way to do this.

Frame of reference transformations are beyond the scope of this article, but in essence they tell us that if we decompose any vector v_original into mutually perpendicular components v_parallel and v_perpendicular, then by definition those components sum to the original vector.

v_parallel + v_perpendicular = v_original

This equation, however, can be rearranged to yield the perpendicular component.

v_perpendicular = v_original - v_parallel;

In our case, we already have v_frag and v_parallel.

v_perp = v_frag - v_parallel;

With both the parallel and perpendicular components in hand, we can fully constrain our fragment with respect to our box.

// Initialize with previously painted fragment color
fragColor = texture(iChannel1, uv);

// Box is 2*20 high, 2*10 wide
if(length(v_perp) < 20.0 && length(v_parallel) < 10.0) {
  // paint it red
  fragColor = vec4(1.0, 0.0, 0.0, 0.0);
}

This shader will set fragColor to red if and only if the current fragment lies within the bounds of a box of height 40 and width 20 with its shorter axis aligned with the mouse’s direction of movement. Since we’re feeding BufferB back into itself, we initialize our fragment from the previous value of fragColor, so even after the “brush” moves on, painted fragments remain painted.
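
Putting all of those pieces together, Buffer B ends up looking something like the sketch below. The finished shader lives at the Shadertoy link above, so treat this as a reconstruction assembled from the snippets, where I’ve taken v_mouse1 to be the previous mouse position stored in Buffer A’s xy components.

// Buffer B (sketch): iChannel0 = Buffer A (mouse positions), iChannel1 = Buffer B itself
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;

    // Start from black on the first frame, otherwise keep whatever has been painted so far
    fragColor = (iFrame == 0) ? vec4(0.0) : texture(iChannel1, uv);

    // Previous (xy) and current (zw) mouse positions from Buffer A
    vec4 bufM = texture(iChannel0, uv);
    vec2 mouse_move_vec = bufM.zw - bufM.xy;
    if(length(mouse_move_vec) < 0.001) { return; } // mouse isn't moving, nothing new to paint

    vec2 velocity_normalized = normalize(mouse_move_vec);

    // Vector from the previous mouse position (v_mouse1) to this fragment
    vec2 v_frag = fragCoord - bufM.xy;

    // Decompose into components parallel and perpendicular to the velocity
    float d_parallel = dot(v_frag, velocity_normalized);
    vec2 v_parallel = d_parallel * velocity_normalized;
    vec2 v_perp = v_frag - v_parallel;

    // Box is 2*20 high (along v_perp) and 2*10 wide (along the velocity)
    if(length(v_perp) < 20.0 && length(v_parallel) < 10.0) {
        fragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
}

The Image pass then only has to sample Buffer B and put it on screen.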


Further Reading

I looked up some stuff about simulating oil paint for real, and maybe at some point I’ll put it to use, but given it will eventually get rendered down to pixel art, maybe not.

Either way, the paper is pretty interesting. Finite Element Analysis with hybrid physical models is not something I’ve seen all that often.

I also started looking up Unity videos to try and get my head back in the shaders + Unity headspace, and I ran across this video by Code Monkey, where he does something very similar to what I’ve done above, but uses C# code and MonoBehaviours instead of shaders and trickses.


Featured Image

“Kaleidoscope VI” by fdecomite is licensed under CC BY 2.0.

Super Simple Unity Surface Shader

As part of a project I’m involved with, I’ve been back at the shader business a little bit lately. In particular, I’ve been interested in how to provide input to a shader to allow dynamic displays of various kinds.

This post will be super-basic for those of you who already know how to write shaders, but if you’re just starting out with them and using Unity, it may provide a little extra help where you need it.

The shader explained below is a surface shader, which means that it controls the visual characteristics of particular pixels on a defined surface, and more particularly that it can interact with scene lighting. It also means that Unity does a lot of heavy lifting, generating lower-level shaders out of the high level shader code.

Doing this the way I am below is probably overkill, but since I’m learning here, I’m gonna give myself a pass (Shader Humour +1!).

Creating and Using a Surface Shader in Unity

In Unity, a Shader is applied to a rendered object via the object’s Material.  As an example, in the screenshot below, a shader named “PointShader” is applied to a Material named Upstage, which is applied to a Quad named Wall.

You can see in the UI that the Upstage material exposes two properties (actually 3, but we can ignore one of them), Color and Position. These are actually custom properties. Here’s a simplified version of the shader code for PointShader.


Shader "Custom/PointShader"{
  Properties {
    _MainTex("Dummy", 2D) = "white" {}
    _MyColor ("Color", Color) = (1,1,1,1)
    _Point ("Position", Vector) = (0, 0, 0, 0)
  }
  SubShader {
    // Setup stuff up here
    CGPROGRAM
    // More setup stuff

    sampler2D _MainTex;
    fixed4 _MyColor;
    float4 _Point;

    // Implementation of the shader
    ENDCG
  }
}

That “Properties” block defines inputs to the shader that you can set via the material, either in the Unity editor or in script.

In this case, we’ve defined 3 inputs:

  1. We will ignore _MainTex below because we’re not really using it except to ensure that our generated shaders properly pass UV coordinates, but basically it is a 2D graphic (that is, a texture). It’s called “Dummy” in the editor, and by default it will just be a flat white texture.
  2. _MyColor (which has that My in front of it to avoid any possible conflict with the _Color variable that exists by default in a Unity Surface Shader) is a 4-component Color (RGBA). This type is basically the same as the Color type used everywhere else in Unity. This variable has the name “Color” in the editor, and defaults to opaque white.
  3. _Point is a 4-component Vector, which is slightly different from a Color in that it uses full floating point components, as you can see in the SubShader block. It’s referred to as Position in the Unity UI. The naming is up to you; I’m just showing you that you can use one name in code and a different one in the editor if you need to. It defaults to the origin.

As you can see in the screenshot above, you can set these values directly in the editor, which is pretty handy. The real power of this input method, however, comes when you start to integrate dynamic inputs via scripting.

PointShader was created as a sort of “selective mirror”. It allows me to apply an effect on a surface based on the location of an object in my scene. In order to do this, I have to update the _Point property of my material.  The code below shows how I’m doing that in this case.


public class PointUpdate : MonoBehaviour {
  public Vector2 texPos;
  public void Apply(Vector3 position) {
    var transformedPoint = this.transform.InverseTransformPoint(position);
    var tempX = .5f - transformedPoint.x / 10;
    var tempY = .5f - transformedPoint.z / 10;
    texPos = new Vector2(tempX, tempY);
    var material = this.GetComponent<MeshRenderer>().material;
    material.SetVector("_Point", texPos);
  }
}

Whenever my tracked object moves, it calls this Apply method, supplying its own position as a parameter. I then map that position to the local space of the object on which my shader is acting:

transformedPoint = this.transform.InverseTransformPoint(position);

Then I turn that mapped position into coordinates on my texture.

Three things you should know to understand this calculation:

  1. Texture coordinates are constrained to the range of 0 to 1
  2. In my scene, the quad is 10 units on a side (hence the division by 10)
  3. In this case my texture coordinates are inverted relative to the object’s orientation (hence subtracting from .5f)

var tempX = .5f - transformedPoint.x / 10;
var tempY = .5f - transformedPoint.z / 10;
texPos = new Vector2(tempX, tempY);

Finally, I set the value of _Point on my material. Note that I use the variable name and NOT the editor name here:

material.SetVector("_Point", texPos);

With this value set, I know where I should paint my dot with my shader. I use the surf() function within the shader to do this. I’ve added the full SubShader code block below.


SubShader {
  Tags { "RenderType"="Opaque" }
  LOD 200
        
  CGPROGRAM
  // Physically based Standard lighting model, and enable shadows on all light types
    #pragma surface surf Standard fullforwardshadows

  // Use shader model 3.0 target, to get nicer looking lighting
  #pragma target 3.0

  sampler2D _MainTex;
  fixed4 _MyColor;
  float4 _Point;

  struct Input {
    float2 uv_MainTex;
  };

  void surf (Input IN, inout SurfaceOutputStandard o) {
    if(IN.uv_MainTex.x > _Point.x - 0.05
        && IN.uv_MainTex.x < _Point.x + 0.05
        && IN.uv_MainTex.y > _Point.y - 0.05
        && IN.uv_MainTex.y < _Point.y + 0.05 ) {
      o.Albedo = _MyColor.rgb;
      o.Alpha = 1;
    } else {
      o.Albedo = 0;
      o.Alpha = 0;
    }
  }
  ENDCG
} 

The Input structure defines the values that Unity will pass to your shader. There are a bunch of possible element settings, which are described in detail at the bottom of the Writing Surface Shaders manpage.

The surf function receives that Input structure, which in this case I’m using only to get UV coordinates (which, in case you’re just starting out, are coordinates within a texture), and the SurfaceOutputStandard structure, which is also described in that manpage we talked about.

The key thing to know here is that the main point of the surf() function is to set the values of the SurfaceOutputStandard structure. In my case, I want to turn pixels “near” my object on, and turn all the rest of them off. I do this with a simple if statement:

if(IN.uv_MainTex.x > _Point.x - 0.05
    && IN.uv_MainTex.x < _Point.x + 0.05
    && IN.uv_MainTex.y > _Point.y - 0.05
    && IN.uv_MainTex.y < _Point.y + 0.05 ) {
  o.Albedo = _MyColor.rgb;
  o.Alpha = 1;
} else {
  o.Albedo = 0;
  o.Alpha = 0;
}

Albedo is the color of the pixel in question, and Alpha its opacity. By checking whether the current pixel’s UV coordinates (which are constrained to be between 0 and 1) are within a certain distance from my _Point property, I can determine whether to paint it or not.

At runtime, this is how that looks:

It’s a simple effect, and not necessarily useful on its own, but as a starting point it’s not so bad.

Adventure Time: Shaders

I’ve made a commitment to myself this year to learn more about low level programming. There are two parts to that effort.

The first is C++, a language with which I’ve had a love-hate relationship for years. I’ll talk in detail about this someday soon, but suffice it to say for now that I am trying to get more comfortable with all of the different quirks and responsibilities that come with that shambling mound of a language.

The second, which is, in its own hyper-specific way, both more interesting and less frustrating, is shaders. In case you don’t do this sort of thing much, shaders come in two basic flavours, vertex and pixel.

I don’t know where this goes, not yet. I’ve decided to write a talk for Gamedev NL, which will be a good way to crystallize whatever knowledge I gain in the process. Might not be the best possible presentation for the purpose, but we’re a small community, and I think people will appreciate it for whatever it is.

Shaders have long since hit criticality; they’re practically boring. You have only to look at sites like Shadertoy and ShaderFrog to see that. But there’s still something spectacular about watching a tiny bit of code output the most realistic ocean you’ll never see, or the very foundations of life.

I mean, that’s cool, at least in my world. If you know how to build something like that, you got my vote for prom queen or whatever.

So that’s a thing I want a little more of in my life. I’ll talk about it as I go. I don’t have much specific purpose for this right now; Contension‘s not going to need this stuff for a good long time, but I’ll find something interesting to do with it.

Talk to you soon
mgb

Unity: Always a Work in Progress

While working on a couple of non-PMG projects, I was reminded that while Unity have had a banner year (couple of years, even) for major built-in feature upgrades – shaders, networking, UI, and services, to name a few – there are still some hard gaps.

The first problem I hit showed up while I was working on an enterprise-ey integration for the editor. The preferred data format in the enterprise these days tends to be JSON, so you need a JSON parser of some kind to be able to push data in and pull it out of systems. There are lots of third-party libraries that do this, but there are also framework-native options for most languages.

In Unity, the options for JSON are System.Web – which actually recommends a third-party library(!) – and, as of the 5.5 beta experimental lane, System.Json, which is meant for use with Silverlight but has the most desirable semantics and a fluent interface for dealing with JSON objects.

Having said all that, the best option right now for immediate use is still Json.NET, which has very similar semantics to System.Json but has the advantages of being compatible with the 2.0/3.5 .NET runtime and being mature enough to be fluent and fast.

This was my first time pulling a third-party .NET DLL into Unity, so it took a little while to understand how to get the system to see it. It turns out the process is actually super-simple – you just drop it into the Assets folder and use the regular Edit References functionality to reference it in your code IDE. Which is nice! I like easy problems.
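
In case it helps, here’s a minimal sketch of the kind of fluent usage I mean – this is my own example, not project code, and it assumes the Json.NET (Newtonsoft.Json) DLL is sitting in Assets as described above.

using UnityEngine;
using Newtonsoft.Json.Linq;

public class JsonSmokeTest : MonoBehaviour
{
    void Start()
    {
        // Parse an incoming payload and pull values back out with casts
        JObject payload = JObject.Parse("{\"name\":\"Contension\",\"build\":42}");
        string projectName = (string)payload["name"];
        int build = (int)payload["build"];
        Debug.Log(projectName + " build " + build);

        // Build a JSON object to push back out the other way
        JObject outbound = new JObject();
        outbound["name"] = projectName;
        outbound["verified"] = true;
        Debug.Log(outbound.ToString());
    }
}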

The other problem I had was related to game development, though, sadly, not Contension, which remains on hold for now.

I was trying to get a click-object setup to work in a 2d game. Unity has a lot of different ways to do this, but the granddaddy of ’em all is the Input-Raycast, which works very well, but is kind of old and busted and not very Unity-ey anymore.

The new hotness for input is Input Modules, which natively support command mapping and event-driven operation. It turns out there are a bunch of ways to work with an IM, including the EventTrigger, which is basically zero-programming event handling, which, holy shit guys. That’s a fever of a hundred and three right there.

The thing about the Input Module for my scenario, however, was that if you’re working with non-UI elements and you don’t want to roll your own solution, you have to add a Physics Raycaster somewhere, which will allow you to click on objects that have a collider, and you have to have a collider on any object you want to click on. Which is fine! I’m 100% on board with having a simple, composable way to specify which things actually can be clicked. BUT.

See, there are actually 2 Physics Raycasters available. One is the ubiquitous Physics Raycaster, which does 3d interaction. The other is the Physics 2D Raycaster, which theoretically does interaction when you’re using the Unity 2D primitives. It may surprise you – I know it surprised the heck out of me – to learn that the Physics 2D Raycaster is actually a pile of bull puckey that does not in any way work at present.

It’s one of those things you often run into in gamedev that makes the whole exercise feel very frontier-ish, except there’s this enterprise dev in me. And he knows very well that a framework that ships that kind of dead-end red herring and doesn’t even acknowledge the issue is a framework I have to avoid trusting at every opportunity.

It all worked out ok; you can use the 3D raycaster and a 3d bounding box just fine for the purposes of interaction, and this particular project doesn’t need the 2D physics right now. It’s just annoying and worrying, which is, at the very least, not a super fun way to feel about the basic tool I’m using.
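
For reference, the working setup looks roughly like the sketch below – my own minimal example, not project code: an EventSystem in the scene, a Physics Raycaster on the camera, and a component like this on any object that also has a (3d) collider.

using UnityEngine;
using UnityEngine.EventSystems;

// Clickable objects need a collider for the Physics Raycaster to hit
[RequireComponent(typeof(Collider))]
public class ClickReporter : MonoBehaviour, IPointerClickHandler
{
    // Called by the EventSystem when the raycaster reports a click on this object
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log(name + " clicked at screen position " + eventData.position);
    }
}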

As an aside, I’m doing another talk soon, this time for the fun folks at NDev. It’ll be mostly a rehash of the 2016 NGX talk, but I’m hoping to tweak it at least a little to provide some depth in a few areas.  Should be interesting to see what comes of it!

Creating a single player version of a multiplayer game in Unity

I struggled to find any information about this online, so I’ll write a quick post about how I’m solving this with the prototype for Contension in hopes that it will help someone out there at some point.

The prototype has a ContensionGame object which derives from NetworkManager, which, if you’re not familiar with UNET, is basically the thing that coordinates the network traffic of the application, kind of a very abstract client/server class.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

public class MultiplayerGame : ContensionGame // ContensionGame is a NetworkManager 
{
    public List<uint> _readySignals;
	
    public void Launch() 
    {
        StartHost();
    }
	
    public void Connect(string ipAddress) 
    {
        networkAddress = ipAddress; 
        StartClient();
        Debug.Log("connected");
    }

    public void AddReady(uint id) 
    {
        if(!_readySignals.Contains(id)) 
        {
            _readySignals.Add(id);
            if(_readySignals.Count > 1) 
            {
                ServerChangeScene(this.onlineScene); 
            }
        }
    }

    void Awake() 
    {
        DontDestroyOnLoad(this);
        _readySignals = new List<uint>();
    }
}

Simple enough – in a normal multiplayer game, we wait for all the players to connect (tracked with _readySignals), and once we have two or more we go to the “main” scene. This isn’t exactly how you’d do things with a full game; for one thing, you’d have more complex scene loading, and for another you’d probably have more robust reconnection logic, but it gets the job done for prototyping.

The real work of starting a multiplayer level, however, is done in the Player GameObject, primarily by the TeamSpawner script component. This object actually spawns our units in the appropriate areas on the map.

Network code can be hard to think about, but in Contension I’m using an authoritative server, which just means that the client won’t actually be doing a whole lot in terms of judging when and how units move or come into conflict. The premise of the game doesn’t work super well if you allow clients to make those judgements, though I’ll probably have to revisit that down the road.

The basic things you need to know to understand this are:

  1. SyncVars are automagically managed data that get replicated across the network.
  2. OnXYZ functions are called “Message” functions, and they’re usually only called by Unity based on events internal to the game engine, such as when a server starts or a client connects to the server.
  3. Command functions are called from the client to the server.
  4. ClientRpc functions are called from the server to the client.
  5. NetworkServer.Spawn creates an object in the game world for all players.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

[RequireComponent(typeof(NetworkIdentity))]
public class TeamSpawner : NetworkBehaviour 
{
    public GameObject ContenderPrefab;
    
    [SyncVar]string _teamTag;

    List<Contender.Description> _contenderDescriptions;
    bool _spawned;
    bool _tagged;

    void Start() 
    {
        DontDestroyOnLoad(this);
    }

    public override void OnStartServer ()
    {
        if(MoreThanOnePlayerWithMyTag()) 
        {
            _teamTag = "Team2";
        }
        if(isServer) { _tagged = true; }
    }

    public override void OnStartClient() 
    {
        _teamTag = tag;
    }

    public override void OnStartLocalPlayer ()
    {
        if(!isServer) 
        {
            CmdSendTag();
        }
        base.OnStartLocalPlayer ();
    }

    [Command] 
    public void CmdSendTag() 
    {
        RpcSetTag(this.tag);
    }

    [ClientRpc]
    public void RpcSetTag(string newTag) 
    {
        tag = newTag;
        _tagged = true;
    }

    internal void SubmitTeam (IEnumerable<TeamSetup.DescriptionWrapper> team)
    {
        ClearTeam();
        foreach(TeamSetup.DescriptionWrapper description in team) 
        {
            AddDescription(description.Role, description.Commitment, description.Speed);
        }
        CmdSignalReady();
    }

    [Command]
    void CmdSignalReady() 
    {
        GetComponent<ReadySignal>().Send();
    }

    private void AddDescription(Contender.Roles role, Contender.Commitments commitment, Contender.Speeds speed) 
    { 
        CmdAddDescription(role, commitment, speed);
    }

    [Command]
    void CmdAddDescription(Contender.Roles role, Contender.Commitments commitment, Contender.Speeds speed) 
    {
        if(_contenderDescriptions == null) { _contenderDescriptions = new List<Contender.Description>(); }
        _contenderDescriptions.Add(new Contender.Description(role, commitment, speed));
    }

    void OnLevelWasLoaded()
    {
        _spawned = false;
    }

    void Update () 
    {
        if(isLocalPlayer && _tagged && !_spawned && _contenderDescriptions != null) 
        {
            TeamSpawnArea[] spawnAreas = FindObjectsOfType<TeamSpawnArea>();
            foreach(TeamSpawnArea area in spawnAreas) 
            {
                if(area.tag == this.tag) 
                {
                    // Simple local perspective hack - the camera is rotated 180 if the player spawns in the
                    // top of the map instead of the bottom
                    transform.position = area.Center;
                    if(transform.position.y > 0 && GetComponent<AiPlayer>() == null) 
                    {
                        Camera.main.transform.Rotate (new Vector3(0,0,180));
                    }

                    if(isServer) 
                    {
                        SpawnTeam (tag);
                    }
                    else 
                    {
                        CmdSpawnTeam(tag);
                    }
                    _spawned = true;
                }
            }
        }
    }

    [Command]
    public void CmdSpawnTeam (string tag) 
    {
        SpawnTeam(tag);
    }

    private void SpawnTeam(string tag) 
    {
        TeamSpawnArea[] spawnAreas = FindObjectsOfType<TeamSpawnArea>();
        TeamSpawnArea teamArea = spawnAreas[0];
        foreach(TeamSpawnArea area in spawnAreas) 
        {
            if(area.tag == tag) 
            {
                teamArea = area;
                break;
            }
        }
        foreach(Contender.Description description in _contenderDescriptions) 
        {
            Vector2 SpawnLocation = PickSpawnPoint(teamArea);
            GameObject obj = (GameObject)Instantiate(ContenderPrefab, SpawnLocation, Quaternion.identity);
            
            Contender contender = obj.GetComponent<Contender>();
            contender.Initialize(tag, netId.Value, description);
            NetworkServer.Spawn(obj);
        }
    }
}

One of the basic problems with UNET, however, is it doesn’t natively support different player prefabs (read: types) for different players. This means that you can’t just set the player type and forget about it if you want to reuse the multiplayer code for your single player game. In a larger studio that might not be a concern, but I’m doing this on my own right now and that means I need to try to restrict how many things I have to worry about.

My solution to this (again, this is prototype code!) is pretty quick and dirty. Basically I’ve set the “main” playerPrefab to be my AI player class, and then added the human player as a spawnable prefab. As soon as the game starts, the AI player connects, which causes the game to spawn a second client with a hardcoded team.

Soo dirty. But it works!

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

public class SinglePlayerGame : ContensionGame
{
    bool _playerAdded;

    // Use this for initialization
    void Start () 
    {
        StartHost();
    }

    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        GameObject Player;
        if(playerControllerId == 0)
        {
            Player = (GameObject)GameObject.Instantiate(playerPrefab, Vector2.zero, Quaternion.identity);
        }
        else
        {
            Player = (GameObject)GameObject.Instantiate(spawnPrefabs[0], Vector2.zero, Quaternion.identity);
        }
         
        NetworkServer.AddPlayerForConnection(conn, Player, playerControllerId);
        if(playerControllerId != 0)
        {
            TeamSpawner PlayerTeam = Player.GetComponent<TeamSpawner>();
            List<TeamSetup.DescriptionWrapper> Units = new List<TeamSetup.DescriptionWrapper>();
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.ManyOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.ManyOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnMany, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnMany, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            
            PlayerTeam.SubmitTeam(Units);
        }
    }

    // Update is called once per frame
    void Update () 
    {
        if(!_playerAdded && ClientScene.ready)
        {
            _playerAdded = true;
            ClientScene.AddPlayer(2);
        }
    }
}

For two AI players (for example, when building an AI demo or training simulator), you can do a similar thing but simply spawn a second AI player prefab instead of the human player.
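
Something like the hypothetical variant below would do it – the class name is made up, and everything not shown (StartHost, the ClientScene.AddPlayer call, and so on) works just like SinglePlayerGame above. The assumption is that the AI player prefab takes care of submitting its own team.

using UnityEngine;
using UnityEngine.Networking;

public class AiDemoGame : ContensionGame
{
    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        // Every connection gets the AI prefab; no hardcoded human team needed
        GameObject Player = (GameObject)GameObject.Instantiate(playerPrefab, Vector2.zero, Quaternion.identity);
        NetworkServer.AddPlayerForConnection(conn, Player, playerControllerId);
    }
}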

I’ve also realized while writing this article that I can do a better team tagging solution based on the map’s available spawn areas. Which is neat!