OSX & Kinect, 2017

So you have a MacBook (or something else that runs OSX) and you want to play with the Kinect sensor, but you’re having trouble because there are about a billion sets of wrong instructions on the internet for connecting this thing. Let me save you a little grief.

Hardware

I have the Kinect “v2”, aka Kinect for Xbox One, aka Kinect for Windows, aka (in my case) Model 1520. The instructions below work for my version. If you have the older Kinect, the only serious difference should be that you’d use libfreenect instead of libfreenect2, but I haven’t tested that.

Software

You have more than one option as far as software goes. If you’re a commercial developer, you might consider trying out Zigfu’s ZDK, which has an OSX-ready image and integrates with several modern packages, including Unity3d, out of the box.

If you’re more of a hobbyist (as I am at the moment) and don’t have the $200 for a Zigfu license, the lovely folks behind the Structure Sensor have taken on maintenance of the OpenNI2 library, including a macOS build. Your first step should be to download the latest version of that library and unzip it somewhere.

Unfortunately, their package isn’t quite complete, and you’ll also need a driver to connect the Kinect (I know, it’s getting old to me too). This is where our ways may diverge, gentle reader, for in my case I discovered that I needed OpenKinect’s libfreenect2, whereas an older sensor would require libfreenect.

Assuming that you’re using the Xbox One sensor, you’ll want to read the README.md that comes with your copy of libfreenect2. It contains all the necessary instructions for getting the right tools + dependencies and building all the things.

There are two additional things that are currently left out of their readme file. The first is that when you want to use the OpenNI2 tools, you’ll need to copy the drivers from

libfreenect2/build/lib

into

{bin-folder}/OpenNI2/Drivers

for whatever you’re running. So to run NiViewer, which is in the Tools folder, you’d copy the drivers to

{openni-base-folder}/Tools/OpenNI2/Drivers

I expected the “make install-openni2” command from libfreenect2’s readme would take care of that stuff, but it does not.
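In script form, the copy step looks something like this (the two directories are assumptions about a typical setup — point them at wherever you built libfreenect2 and unzipped OpenNI2):

```shell
# Sketch of the manual driver-copy step. Both paths are guesses --
# substitute your own libfreenect2 build and OpenNI2 unzip locations.
SRC="${LIBFREENECT2_DIR:-$HOME/libfreenect2}/build/lib"
DEST="${OPENNI2_DIR:-$HOME/OpenNI2}/Tools/OpenNI2/Drivers"

mkdir -p "$DEST"
if ls "$SRC"/libfreenect2-openni2* >/dev/null 2>&1; then
    # Copy the freenect2 bridge driver so NiViewer can find it
    cp "$SRC"/libfreenect2-openni2* "$DEST/"
    echo "copied drivers to $DEST"
else
    echo "no drivers found in $SRC -- did the libfreenect2 build succeed?"
fi
```

Repeat the same copy into the Drivers folder next to any other OpenNI2 binary you want to run.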

The second omission is the troubleshooting stuff on their wiki. In particular, for my specific MacBook, I had to plug the Kinect adapter into the USB port on the left-hand side, NOT the right-hand side, because the device requires USB3. I also had to run Protonect and NiViewer using the “cl” pipeline. The default pipeline setting can be changed by doing this:

export LIBFREENECT2_PIPELINE=cl

You can also pass in the pipeline for Protonect:

bin/Protonect cl

With that setting in place, you should see a window with two (NiViewer) or four (Protonect) views, each capturing a different part of the raw Kinect stream.


From here you’re on your own, but I hope you found this at least a bit helpful!

Adventure Time: Shaders

I’ve made a commitment to myself this year to learn more about low level programming. There are two parts to that effort.

The first is C++, a language with which I’ve had a love-hate relationship for years. I’ll talk in detail about this someday soon, but suffice it to say for now that I am trying to get more comfortable with all of the different quirks and responsibilities that come with that shambling mound of a language.

The second, which is, in its own hyper-specific way, both more interesting and less frustrating, is shaders. In case you don’t do this sort of thing much, shaders come in two basic flavours: vertex and pixel (also called fragment) shaders.

I don’t know where this goes, not yet. I’ve decided to write a talk for Gamedev NL, which will be a good way to crystallize whatever knowledge I gain in the process. Might not be the best possible presentation for the purpose, but we’re a small community, and I think people will appreciate it for whatever it is.

Shaders have long since hit criticality; they’re practically boring. You have only to look at sites like Shadertoy and ShaderFrog to see that. But there’s something very spectacular about seeing a tiny bit of code output the most realistic ocean you’ll never see, or the very foundations of life.

I mean, that’s cool, at least in my world. If you know how to build something like that, you’ve got my vote for prom queen or whatever.

So that’s a thing I want a little more of in my life. I’ll talk about it as I go. I don’t have much specific purpose for this right now; Contension’s not going to need this stuff for a good long time, but I’ll find something interesting to do with it.

Talk to you soon
mgb

Unity: Always a Work in Progress

While working on a couple of non-PMG projects, I was reminded that while Unity has had a banner year (couple of years, even) for major built-in feature upgrades – shaders, networking, UI, and services, to name a few – there are still some hard gaps.

The first problem I hit showed up while I was working on an enterprise-ey integration for the editor. The preferred data format in the enterprise these days tends to be JSON, so you need a JSON parser of some kind to push data into and pull it out of these systems. There are lots of third-party libraries that do this, but there are also framework-native options for most languages.

In Unity, the options for JSON are System.Web – which actually recommends a third-party library(!) – and, as of the 5.5 beta experimental lane, System.Json, which was meant for use with Silverlight but has the most desirable semantics and a fluent interface for dealing with JSON objects.

Having said all that, the best option right now for immediate use is still Json.NET, which has very similar semantics to System.Json but has the advantages of being compatible with the 2.0/3.5 .NET runtime and being mature, fluent, and fast.

This was my first time pulling a third-party .NET DLL into Unity, so it took a little while to understand how to get the system to see it. It turns out the process is actually super-simple – you just drop it into the Assets folder and use the regular Edit References functionality to reference it in your code IDE. Which is nice! I like easy problems.
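For flavour, here’s roughly what working with Json.NET looks like once the DLL is in Assets. This is a sketch of my understanding, and the JSON payload is invented for illustration:

```csharp
using Newtonsoft.Json.Linq;
using UnityEngine;

// Sketch: reading and writing a JSON payload with Json.NET inside a
// MonoBehaviour. The payload shape here is made up for illustration.
public class JsonExample : MonoBehaviour
{
    void Start()
    {
        JObject config = JObject.Parse(@"{ ""name"": ""contender"", ""speed"": 3 }");

        string name = (string)config["name"]; // "contender"
        int speed = (int)config["speed"];     // 3

        config["speed"] = speed + 1;          // mutate in place
        Debug.Log(config.ToString());         // serialize back out
    }
}
```

The indexer-plus-cast style is what I mean by “desirable semantics” – no class definitions required just to poke at a payload.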

The other problem I had was related to game development, though, sadly, not Contension, which remains on hold for now.

I was trying to get a click-object setup to work in a 2d game. Unity has a lot of different ways to do this, but the granddaddy of ’em all is the Input-Raycast, which works very well, but is kind of old and busted and not very Unity-ey anymore.

The new hotness for input is Input Modules, which natively support command mapping and event-driven operation. It turns out there are a bunch of ways to work with an IM, including the EventTrigger, which is basically zero-programming event handling, which, holy shit guys. That’s a fever of a hundred and three right there.

The thing about the Input Module for my scenario, however, was that if you’re working with non-UI elements and you don’t want to roll your own solution, you have to add a Physics Raycaster somewhere, which allows clicks on objects that have a collider, and you have to have a collider on any object you want to click on. Which is fine! I’m 100% on board with having a simple, composable way to specify which things actually can be clicked. BUT.

See, there are actually two Physics Raycasters available. One is the ubiquitous Physics Raycaster, which does 3d interaction. The other is the Physics 2D Raycaster, which theoretically does interaction when you’re using the Unity 2D primitives. It may surprise you – I know it surprised the heck out of me – to learn that the Physics 2D Raycaster is actually a pile of bull puckey that does not in any way work at present.

It’s one of those things you often run into in gamedev that makes the whole exercise feel very frontier-ish, except there’s this enterprise dev in me. And he knows very well that a framework that puts in that kind of dead-end red herring and doesn’t even acknowledge the issue is a framework I have to avoid trusting at every opportunity.

It all worked out ok; you can use the 3D raycaster and a 3d bounding box just fine for the purposes of interaction, and this particular project doesn’t need the 2D physics right now. It’s just annoying and worrying, which is, at the very least, not a super fun way to feel about the basic tool I’m using.
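Concretely, the combination that worked for me was: an EventSystem in the scene, a Physics Raycaster on the camera, a 3d collider on the target object, and a handler component along these lines (a sketch from memory, not verbatim project code):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: with an EventSystem in the scene, a PhysicsRaycaster on the
// camera, and a 3d collider on this object, the Input Module delivers
// click events here -- no manual Input-Raycast code required.
public class ClickableThing : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("clicked " + gameObject.name);
    }
}
```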

As an aside, I’m doing another talk soon, this time for the fun folks at NDev. It’ll be mostly a rehash of the 2016 NGX talk, but I’m hoping to tweak it at least a little to provide some depth in a few areas. Should be interesting to see what comes of it!

Creating a single player version of a multiplayer game in Unity

I struggled to find any information about this online, so I’ll write a quick post about how I’m solving this with the prototype for Contension in hopes that it will help someone out there at some point.

The prototype has a ContensionGame object which derives from NetworkManager, which, if you’re not familiar with UNET, is basically the thing that coordinates the network traffic of the application, kind of a very abstract client/server class.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

public class MultiplayerGame : ContensionGame // ContensionGame is a NetworkManager 
{
    public List<uint> _readySignals;
	
    public void Launch() 
    {
        StartHost();
    }
	
    public void Connect(string ipAddress) 
    {
        networkAddress = ipAddress; 
        StartClient();
        Debug.Log("connecting to " + ipAddress); // StartClient is asynchronous; the connection completes later
    }

    public void AddReady(uint id) 
    {
        if(!_readySignals.Contains(id)) 
        {
            _readySignals.Add(id);
            if(_readySignals.Count > 1) 
            {
                ServerChangeScene(this.onlineScene); 
            }
        }
    }

    void Awake() 
    {
        DontDestroyOnLoad(this);
        _readySignals = new List<uint>();
    }
}

Simple enough – in a normal multiplayer game, we wait for all the players to connect (tracked with _readySignals), and once we have two or more we go to the “main” scene. This isn’t exactly how you’d do things with a full game; for one thing, you’d have more complex scene loading, and for another you’d probably have more robust reconnection logic, but it gets the job done for prototyping.

The real work of starting a multiplayer level, however, is done in the Player GameObject, primarily by the TeamSpawner script component. This object actually spawns our units in the appropriate areas on the map.

Network code can be hard to think about, but in Contension I’m using an authoritative server, which just means that the client won’t actually be doing a whole lot in terms of judging when and how units move or come into conflict. The premise of the game doesn’t work super well if you allow clients to make those judgements, though I’ll probably have to revisit that down the road.

The basic things you need to know to understand this are:

  1. SyncVars are automagically managed data that get replicated across the network.
  2. OnXYZ functions are called “Message” functions, and they’re usually only called by Unity based on events internal to the game engine, such as when a server starts or a client connects to the server.
  3. Command functions are called from the client to the server.
  4. ClientRpc functions are called from the server to the client.
  5. NetworkServer.Spawn creates an object in the game world for all players.

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

[RequireComponent(typeof(NetworkIdentity))]
public class TeamSpawner : NetworkBehaviour 
{
    public GameObject ContenderPrefab;
    
    [SyncVar]string _teamTag;

    List<Contender.Description> _contenderDescriptions = new List<Contender.Description>();
    bool _spawned;
    bool _tagged; // set once this player's tag has been synced

    void Start() 
    {
        DontDestroyOnLoad(this);
    }

    public override void OnStartServer ()
    {
        if(MoreThanOnePlayerWithMyTag()) // helper method, not shown here
        {
            _teamTag = "Team2";
        }
        if(isServer) { _tagged = true; }
    }

    public override void OnStartClient() 
    {
        _teamTag = tag;
    }

    public override void OnStartLocalPlayer ()
    {
        if(!isServer) 
        {
            CmdSendTag();
        }
        base.OnStartLocalPlayer ();
    }

    [Command] 
    public void CmdSendTag() 
    {
        RpcSetTag(this.tag);
    }

    [ClientRpc]
    public void RpcSetTag(string newTag) 
    {
        tag = newTag;
        _tagged = true;
    }

    internal void SubmitTeam (IEnumerable<TeamSetup.DescriptionWrapper> team)
    {
        ClearTeam(); // helper method, not shown here
        foreach(TeamSetup.DescriptionWrapper description in team) 
        {
            AddDescription(description.Role, description.Commitment, description.Speed);
        }
        CmdSignalReady();
    }

    [Command]
    void CmdSignalReady() 
    {
        GetComponent<ReadySignal>().Send();
    }

    private void AddDescription(Contender.Roles role, Contender.Commitments commitment, Contender.Speeds speed) 
    { 
        CmdAddDescription(role, commitment, speed);
    }

    [Command]
    void CmdAddDescription(Contender.Roles role, Contender.Commitments commitment, Contender.Speeds speed) 
    {
        _contenderDescriptions.Add(new Contender.Description(role, commitment, speed));
    }

    void OnLevelWasLoaded()
    {
        _spawned = false;
    }

    void Update () 
    {
        if(isLocalPlayer && _tagged && !_spawned && _contenderDescriptions != null) 
        {
            TeamSpawnArea[] spawnAreas = FindObjectsOfType<TeamSpawnArea>();
            foreach(TeamSpawnArea area in spawnAreas) 
            {
                if(area.tag == this.tag) 
                {
                    // Simple local perspective hack - the camera is rotated 180 if the player spawns in the
                    // top of the map instead of the bottom
                    transform.position = area.Center;
                    if(transform.position.y > 0 && GetComponent<AiPlayer>() == null) 
                    {
                        Camera.main.transform.Rotate (new Vector3(0,0,180));
                    }

                    if(isServer) 
                    {
                        SpawnTeam (tag);
                    }
                    else 
                    {
                        CmdSpawnTeam(tag);
                    }
                    _spawned = true;
                }
            }
        }
    }

    [Command]
    public void CmdSpawnTeam (string tag) 
    {
        SpawnTeam(tag);
    }

    private void SpawnTeam(string tag) 
    {
        TeamSpawnArea[] spawnAreas = FindObjectsOfType<TeamSpawnArea>();
        TeamSpawnArea teamArea = spawnAreas[0];
        foreach(TeamSpawnArea area in spawnAreas) 
        {
            if(area.tag == tag) 
            {
                teamArea = area;
                break;
            }
        }
        foreach(Contender.Description description in _contenderDescriptions) 
        {
            Vector2 spawnLocation = PickSpawnPoint(teamArea); // helper method, not shown here
            GameObject obj = (GameObject)Instantiate(ContenderPrefab, spawnLocation, Quaternion.identity);
            
            Contender contender = obj.GetComponent<Contender>();
            contender.Initialize(tag, netId.Value, description);
            NetworkServer.Spawn(obj);
        }
    }
}

One of the basic problems with UNET, however, is it doesn’t natively support different player prefabs (read: types) for different players. This means that you can’t just set the player type and forget about it if you want to reuse the multiplayer code for your single player game. In a larger studio that might not be a concern, but I’m doing this on my own right now and that means I need to try to restrict how many things I have to worry about.

My solution to this (again, this is prototype code!) is pretty quick and dirty. Basically I’ve set the “main” playerPrefab to be my AI player class, and then added the human player as a spawnable prefab. As soon as the game starts, the AI player connects, which causes the game to spawn a second client with a hardcoded team.

Soo dirty. But it works!

using UnityEngine;
using UnityEngine.Networking;
using System.Collections;
using System.Collections.Generic;

public class SinglePlayerGame : ContensionGame
{
    bool _playerAdded;

    // Use this for initialization
    void Start () 
    {
        StartHost();
    }

    public override void OnServerAddPlayer(NetworkConnection conn, short playerControllerId)
    {
        GameObject Player;
        if(playerControllerId == 0)
        {
            Player = (GameObject)GameObject.Instantiate(playerPrefab, Vector2.zero, Quaternion.identity);
        }
        else
        {
            Player = (GameObject)GameObject.Instantiate(spawnPrefabs[0], Vector2.zero, Quaternion.identity);
        }
         
        NetworkServer.AddPlayerForConnection(conn, Player, playerControllerId);
        if(playerControllerId != 0)
        {
            TeamSpawner PlayerTeam = Player.GetComponent<TeamSpawner>();
            List<TeamSetup.DescriptionWrapper> Units = new List<TeamSetup.DescriptionWrapper>();
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.ManyOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.ManyOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnMany, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnMany, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            Units.Add(
                new TeamSetup.DescriptionWrapper(
                    new Contender.Description(Contender.Roles.OneOnOne, Contender.Commitments.Balanced, Contender.Speeds.Average)));
            
            PlayerTeam.SubmitTeam(Units);
        }
    }

    // Update is called once per frame
    void Update () 
    {
        if(!_playerAdded && ClientScene.ready)
        {
            _playerAdded = true;
            ClientScene.AddPlayer(2);
        }
    }
}

For two AI players (for example, when building an AI demo or training simulator), you can do a similar thing but simply spawn a second AI player prefab instead of the human player.

I’ve also realized while writing this article that I can do a better team tagging solution based on the map’s available spawn areas. Which is neat!

Shadertoy

So you want to write code for a living, but you also have a wee bit of graphic artist in you? Maybe shaders are your lovely medium!

Here’s a small sample of the insanity on display at Shadertoy.

Garage – I haven’t parsed this completely, but I think that entire scene – including the cars – may be a single shader. Note that the car goes both up AND down the decks.

Tentacle Thing – More traditional, akin to the hair shaders supposedly used on the Monsters movies from Pixar, but still amazing to see it running in realtime in a browser.

Seascape – What can I say, except maybe why the hell aren’t there games that have this as part of their experience???

Flame – Simple, but also easy to mess with (try changing the numbers in the mainImage function and hit the Play button below the code window to see what I mean).
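If you want a starting point for that kind of tinkering, the skeleton every Shadertoy shader builds on is tiny. This little gradient is my own toy example, not one of the shaders above:

```glsl
// Shadertoy-style pixel shader: paints a colour gradient across the screen.
// iResolution is a built-in Shadertoy uniform (viewport size in pixels).
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;   // normalize coordinates to 0..1
    fragColor = vec4(uv.x, uv.y, 0.5, 1.0); // red and green follow position
}
```

Everything those showcase shaders do – oceans, flames, tentacles – is ultimately just a fancier version of this one function, run once per pixel.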

It’s hard to imagine the world of the elder statesmen in this field; they were stuck decoding research papers from a variety of obscure journals. But it’s also nice to know that there is such an incredibly rich ocean of knowledge out there to scour while learning to do cool things.

short, beautiful experiences