Theorycrafting a fully-integrated Eve MMOFPS

I’ve been doing a lot of game design of late, both for Beat Farmer and for other projects I’ve had in the back of my mind for a while. Having stumbled over Eve and its FPS tie-in ambitions again today, I kind of want to figure out what my approach to this might look like.

First, some background.

Eve Online is a massively multiplayer game with starship combat and player-driven economics and “geographic” (topological?) control. It is famous for, among other things, its difficulty curve:

[Image: the famous Eve learning-curve chart. Can’t find an author – if it’s you, HMU.]

Many moons ago now, CCP Games released Dust 514, a first-person shooter aimed at giving new players an “in” to the Eve universe, as well as expanding the range of experiences available to the existing player base.

Dust 514 was shuttered, and CCP has gone through a couple of stabs at a design for a successor. The first was Project Legion, which, judging by the name alone, may have contained some of the ideas I’m going to talk about below. The second will be Project Nova, which sounds like it will be a much more straightforward FPS at first. After looking at GunJack videos, that’s probably the right decision. Eve was never a game for the nice-UX crowd.

Integrated Game Worlds and Design

As fascinating as it is to read about Eve’s many born-in-real-life treacheries, the universe itself isn’t that compelling. The compelling part of Eve has always been its large-scale gameplay, where massive alliances conduct battles so fierce that time literally slows down within the game so that the servers (and, no doubt, the clients) can run the necessary calculations.

That seems like a great place to start for a planetary conflict FPS. Fleets need to bring their troop transports to bear because ground forces are the only forces that can take territory. That’s interesting from a space combat standpoint because at that point you need to have well-protected transport ships.

Luckily, Eve already has these gigantic transports, and shipping is its own deep specialization within the game. But.

Blowing Things Up With Smaller Guns

The thing about Eve is this: the whole idea is to make spaceships shoot at each other. Every piece of the player-driven economic system is, in the end, a way to let both hardcores and casuals put together ships and fleets and super-fleets and massive starships and starbases oh my. These heavy things necessarily dominate the gameplay, and there’s no reason to imagine that a planetary assault wouldn’t come down to who has the better fleet in every case.

So you’d have to introduce gameplay restrictions in order for this to work.

You could, for example, add planetary defence systems that are able to blow a Titan out of the sky in short order but, in keeping with the game’s oeuvre, can’t hit anything smaller than a supercapital. This could get even more interesting if planets can throw off their governance once in a while, leading to both sides having to dodge space flak.

(Side note: There are almost certainly Eve players who would enjoy an associated game wherein their only job was to manage the governance of planets, quashing rebellions and maximizing productivity. There could be a tie-in between this game and the FPS players as well, where FPS players lead rebellions and security forces in conflict with one another.

There could even be a competitive aspect to the management game, with “sleeper” agents embedded in the game exercising various levels of mismanagement to mess with the managers. I don’t know whether the management types would like this element, but I digress…)

Another, possibly complementary, approach would be to introduce the idea of orbital pollution, where intense spaceborne warfare in proximity to a planetary body would reduce the effective value of the planet while the orbit was “cleaned up”. You could still see skirmishes and raids with smaller ships to try to disrupt or destroy orbiting transports, but the heaviest ships would have to concentrate on controlling the supply line rather than directly participating in orbital dominance.

To my mind, it seems like introducing these limiting factors would allow the optimization of the play experience in each game for its intended players.

Ground Assaults, Economics, and Availability

As to the transports themselves, I feel like the Dust approach, which limited itself to orbital bombardments, was a poor solution. It would be much more Eve-ish, to my mind at least, to integrate the transports themselves into a resource equation for a particular planetary war. Eve players and FPS players would have to negotiate how many clones, resources, and buildings to supply to the war effort. FPS players could see – possibly depending on their rank – exactly what is left available on the planet and in orbit, which would raise the stakes on their individual and team performances.

From a technical standpoint, one of the harder things to get right about this situation would be matchmaking. I think there’s room here, too, for improvement.

The first thing would be ensuring the conflict doesn’t pause because FPS players go to bed. Bots could be brought in as part of the supply drop, ensuring that FPS and space gameplay are not over-reliant on one another. FPS players might face AI-controlled bots for a while, then see a gradual increase in clone troopers, who would be controlled by other players.

For those unfamiliar with the Eve universe, there are grades of clones, which brings a whole different question into play – when do FPS players field their top-tier clones, and why? What payment do they require? How do they handle the collapse (or retreat; betrayal is a key part of the game, after all!) of their space-based support lines?

This appeals to me as a former Eve player. Judging exactly how many clones might be available for an assault and for how long would become a critical part of the supply line equation. Balancing different bot types – Titanfall comes to mind – would come into play. And it’s possible to tie in economic incentives by offering clone trooper contracts on a cross-game marketplace to improve your assault’s chances.

Negotiation

Allowing cross-game discussion and negotiation in some form would be critical, even if it’s more like email than Messenger in design. Eve players and FPS players probably have some crossover, but I would expect that in most cases you’d see two separate entities (Eve Corp/FPS Mercenaries) discussing terms via this system. From a community and design standpoint, this could be a great place to gather data about how players are actually doing what they do and how to improve the experience.

Conclusion

That’s kind of what I see being the best version of the tie-in idea. It’s obviously weighted towards Eve-first. That’s the starting point, and it’s also the only CCP game I’ve played. That also means there’s a recognizable and not necessarily positive bias in the design here, and I’m not sure I’ve solved the issues perfectly.

Regardless, I’ll be interested to see how CCP deal with Project Nova and to what extent they implement gameplay that melds these different genres in interesting ways.

-mgb

Solo dev: Bizdev edition

I’ve been working on the non-gameplay aspects of Beat Farmer of late, and that has meant sorting out some of the basics for Perfect Minute as a functioning business.

Before I did anything, I needed to commit to the business more heavily than I have been. I have an aversion to not paying people for their work, so I started putting away $100 per paycheque from my day job. There are a variety of opinions on funding game development – some encourage self-funding, others focus more on external investment – but as a rule I find that paying out of pocket reminds me to look for the best possible value for my money, so that’s my preferred bootstrapping method.

With that tiny pot of money, my first order of business was finding an artist. I’m trying to hire locally where possible, so I sent out a call on this blog and on my friendly local game development Facebook community. I got a few portfolios right away, including an artist I was very interested in working with, Clay Burton.

Finding someone so quickly meant I had to scramble a bit to get the contract drawn up. I initially considered using Law Depot, but I didn’t feel confident that I would get something I could trust to legally enforce the rights I needed.

I looked around town to find a lawyer specializing in IP and media and settled on Lindsay Wareham at Cox and Palmer, whose focus areas include Intellectual Property and Startups, which seemed like a good fit. I’ve since discovered that Cox and Palmer have several folks working together in this area, as well as a helper program for startups in general, which gives me hope that I have, for once, made a pretty good call.

The drafting of the contract took a couple of weeks and wasn’t too expensive, as legal matters go. A lot of good questions came up during my conversation with Lindsay, though, stuff like:

  • Are you incorporating? (not yet)
  • What share structure do you intend to use for your corporation? (not sure, and I have conflicting information about the best structure to use)
  • Where will the copyright and moral rights reside? (with me until incorporation)
  • Do you foresee selling products other than games? (yes)
  • Do you need trademarks registered? (yes, when I have a bit more money)

Two weeks later I had a shiny new contract ready to fill out. I sent it over to my artist, who sent it back with his name on it…but no witness signature! This is my first time doing this, and I didn’t want to bug the guy more than necessary, but after chatting with Lindsay, I had to go back and beg him to get it witnessed as well. So that’s ready to go.

I also sent out a call a while ago for a music person for the game, and I use the word “person” on purpose there, because I don’t know much about doing music in a game.

One of the musicians I know in town recommended his buddy, Georgie Newman. Georgie and I had spoken briefly after that initial request, but never got around to talking further. I reached out and we decided to meet up and chat. That turned out to be really great for me, as Georgie knows what he’s at to a much higher degree than I do when it comes to game audio.

That conversation has now left me with a number of things I need to do (“action items”, as the cool fogies say):

  • Flesh out the design for Beat Farmer enough to do cost and marketing plans
  • Figure out how much Beat Farmer is going to cost to make and market
  • Figure out the best sales model for this game and its follow-ons
  • Figure out how I’m going to fund the first few Perfect Minute Games ($50/week ain’t gonna cut it forever, after all)

As error-prone dark-groping goes, this has actually been ok. I’m hopeful that I can get all the way to the publishing phase without destroying myself and/or the company financially or otherwise in the process.

I’ll keep you posted!

Small art contract

As I mentioned on Twitter, I’m looking for a freelance artist to do a small job for Beat Farmer.

I’m looking for someone who can do clean 2D/3D work in a cute/cartoon style. If you happen to know anyone who might suit, please have them send a portfolio to mgb@perfectminutegames.com.

Super Simple Unity Surface Shader

As part of a project I’m involved with, I’ve been back at the shader business a little bit lately. In particular, I’ve been interested in how to provide input to a shader to allow dynamic displays of various kinds.

This post will be super-basic for those of you who already know how to write shaders, but if you’re just starting out with them and using Unity, it may provide a little extra help where you need it.

The shader explained below is a surface shader, which means that it controls the visual characteristics of particular pixels on a defined surface, and more particularly that it can interact with scene lighting. It also means that Unity does a lot of heavy lifting, generating lower-level shaders out of the high level shader code.

Doing this the way I am below is probably overkill, but since I’m learning here, I’m gonna give myself a pass (Shader Humour +1!).

Creating and Using a Surface Shader in Unity

In Unity, a Shader is applied to a rendered object via the object’s Material.  As an example, in the screenshot below, a shader named “PointShader” is applied to a Material named Upstage, which is applied to a Quad named Wall.

You can see in the UI that the Upstage material exposes two properties (actually 3, but we can ignore one of them), Color and Position. These are actually custom properties. Here’s a simplified version of the shader code for PointShader.


Shader "Custom/PointShader"{
  Properties {
    _MainTex("Dummy", 2D) = "white" {}
    _MyColor ("Color", Color) = (1,1,1,1)
    _Point ("Position", Vector) = (0, 0, 0, 0)
  }
  SubShader {
    // Setup stuff up here
    CGPROGRAM
    // More setup stuff

    sampler2D _MainTex;
    fixed4 _MyColor;
    float4 _Point;

    // Implementation of the shader
    ENDCG
  }
}

That “Properties” block defines inputs to the shader that you can set via the material, either in the Unity editor or in script.

In this case, we’ve defined 3 inputs:

  1. We will ignore _MainTex below because we’re not really using it except to ensure that our generated shaders properly pass UV coordinates, but basically it is a 2D graphic (that is, a texture). It’s called “Dummy” in the editor, and by default it will just be a flat white texture.
  2. _MyColor (which has that My in front of it to avoid any possible conflict with the _Color variable that exists by default in a Unity Surface Shader) is a 4-component Color (RGBA). This type is basically the same as the Color type used everywhere else in Unity. This variable has the name “Color” in the editor, and defaults to opaque white.
  3. _Point is a 4-component Vector, which is slightly different from a Color in that it uses full floating point components, as you can see in the SubShader block. It’s referred to as Position in the Unity UI. The naming is up to you; I’m just showing you that you can use one name in code and a different one in the editor if you need to. It defaults to the origin.

As you can see in the screenshot above, you can set these values directly in the editor, which is pretty handy. The real power of this input method, however, comes when you start to integrate dynamic inputs via scripting.
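Scripting those properties is just a matter of grabbing the material and using the variable names from the Properties block (not the display names). Here’s a minimal sketch of what that might look like – the component and field names are placeholders, not part of the actual project:


using UnityEngine;

// Minimal sketch: pushes values into a material that uses the PointShader above.
// Assumes it sits on the same GameObject as the MeshRenderer using that material.
public class PointShaderDriver : MonoBehaviour {
  public Color tint = Color.white;
  public Vector4 point = Vector4.zero;

  void Update() {
    var material = GetComponent<MeshRenderer>().material;

    // Use the shader variable names, not the editor display names.
    material.SetColor("_MyColor", tint);
    material.SetVector("_Point", point);
  }
}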

PointShader was created as a sort of “selective mirror”. It allows me to apply an effect on a surface based on the location of an object in my scene. In order to do this, I have to update the _Point property of my material.  The code below shows how I’m doing that in this case.


using UnityEngine;

public class PointUpdate : MonoBehaviour {
  public Vector2 texPos;

  // Called by the tracked object whenever it moves, with its world position.
  public void Apply(Vector3 position) {
    // Map the world-space position into this surface's local space.
    var transformedPoint = this.transform.InverseTransformPoint(position);
    // Convert local coordinates (10 units across) to UV space, flipped to match the texture.
    var tempX = .5f - transformedPoint.x / 10;
    var tempY = .5f - transformedPoint.z / 10;
    texPos = new Vector2(tempX, tempY);
    // Push the new point to the shader via the material.
    var material = this.GetComponent<MeshRenderer>().material;
    material.SetVector("_Point", texPos);
  }
}

Whenever my tracked object moves, it calls this Apply method, supplying its own position as a parameter. I then map that position to the local space of the object on which my shader is acting:

var transformedPoint = this.transform.InverseTransformPoint(position);

Then I turn that mapped position into coordinates on my texture.

Three things you should know to understand this calculation:

  1. Texture coordinates are constrained to the range of 0 to 1
  2. My surface spans 10 units on a side in local space (a default Unity Quad is 1×1, so adjust the divisor to match your own mesh)
  3. In this case my texture coordinates are inverted relative to the object’s orientation

var tempX = .5f - transformedPoint.x / 10;
var tempY = .5f - transformedPoint.z / 10;
texPos = new Vector2(tempX, tempY);

Finally, I set the value of _Point on my material. Note that I use the variable name and NOT the editor name here:

material.SetVector("_Point", texPos);

With this value set, I know where I should paint my dot with my shader. I use the surf() function within the shader to do this. I’ve added the full SubShader code block below.


SubShader {
  Tags { "RenderType"="Opaque" }
  LOD 200

  CGPROGRAM
  // Physically based Standard lighting model, and enable shadows on all light types
  #pragma surface surf Standard fullforwardshadows

  // Use shader model 3.0 target, to get nicer looking lighting
  #pragma target 3.0

  sampler2D _MainTex;
  fixed4 _MyColor;
  float4 _Point;

  struct Input {
    float2 uv_MainTex;
  };

  void surf (Input IN, inout SurfaceOutputStandard o) {
    // Paint a small square around _Point; leave everything else black.
    if(IN.uv_MainTex.x > _Point.x - 0.05
        && IN.uv_MainTex.x < _Point.x + 0.05
        && IN.uv_MainTex.y > _Point.y - 0.05
        && IN.uv_MainTex.y < _Point.y + 0.05 ) {
      o.Albedo = _MyColor.rgb;
      o.Alpha = 1;
    } else {
      o.Albedo = 0;
      o.Alpha = 0;
    }
  }
  ENDCG
}

The Input structure defines the values that Unity will pass to your shader. There are a bunch of possible element settings, which are described in detail at the bottom of the Writing Surface Shaders manpage.

The surf function receives that Input structure, which in this case I’m using only to get UV coordinates (which, in case you’re just starting out, are coordinates within a texture), and the SurfaceOutputStandard structure, which is also described in that manpage we talked about.

The key thing to know here is that the main point of the surf() function is to set the values of the SurfaceOutputStandard structure. In my case, I want to turn pixels “near” my object on, and turn all the rest of them off. I do this with a simple if statement:

  if(IN.uv_MainTex.x > _Point.x - 0.05
      && IN.uv_MainTex.x < _Point.x + 0.05
      && IN.uv_MainTex.y > _Point.y - 0.05
      && IN.uv_MainTex.y < _Point.y + 0.05 ) {
    o.Albedo = _MyColor.rgb;
    o.Alpha = 1;
  } else {
    o.Albedo = 0;
    o.Alpha = 0;
  }

Albedo is the color of the pixel in question, and Alpha its opacity. By checking whether the current pixel’s UV coordinates (which are constrained to be between 0 and 1) are within a certain distance from my _Point property, I can determine whether to paint it or not.

At runtime, the result is a small square of colour that follows the tracked object around the surface.

It’s a simple effect, and not necessarily useful on its own, but as a starting point it’s not so bad.

OSX & Kinect, 2017

So you have a MacBook (or something else that runs OSX) and you want to play with the Kinect sensor, but you’re having trouble because there are about 1 billion sets of wrong instructions on the internet for getting a Kinect connected. Let me save you a little grief.

Hardware

I have the Kinect “v2”, aka Kinect for Xbox One, aka Kinect for Windows, aka (in my case) Model 1520. The instructions below work for my version. The only serious difference if you have the older Kinect should be that you use a different version of libfreenect, but I haven’t tested that.

Software

You have more than one option as far as software goes. If you’re a commercial developer, you might consider trying out Zigfu’s ZDK, which has an OSX-ready image and integrates with several modern packages, including Unity3d, out of the box.

If you’re more of a hobbyist (as I am at the moment) and don’t have the $200 for a Zigfu license, the lovely folks behind the Structure Sensor have taken on maintenance of the OpenNI2 library, including a macOS build. Your first step should be to download the latest version of that library and unzip it somewhere.

Unfortunately, their package isn’t quite complete, and you’ll also need a driver to connect the Kinect (I know, it’s getting old to me too). This is where our ways may diverge, gentle reader, for in my case I discovered that I needed OpenKinect’s libfreenect2, whereas an older sensor would require libfreenect.

Assuming that you’re using the XBox One sensor, you’ll want to read the README.md that comes with your copy of libfreenect2. It contains all the necessary instructions for getting the right tools + dependencies and building all the things.

There are two additional things that are currently left out of their readme file. The first is that when you want to use the OpenNI2 tools, you’ll need to copy the drivers from

libfreenect2/build/lib

into

{bin-folder}/OpenNI2/Drivers

for whatever you’re running. So to run NiViewer, which is in the Tools folder, you’d copy them to

{openni-base-folder}/Tools/OpenNI2/Drivers

I expected the “make install-openni2” command from libfreenect2’s readme would take care of that stuff, but it does not.

The second omission is the troubleshooting stuff on their wiki. In particular, for my specific MacBook, I had to plug the Kinect adapter into the USB port on the left-hand side, NOT the right-hand side, as the device requires USB3, and I had to run Protonect and NiViewer using the “cl” pipeline. The default pipeline setting can be changed by doing this:

export LIBFREENECT2_PIPELINE=cl

You can also pass in the pipeline for Protonect:

bin/Protonect cl

With that setting in place, you should see a window with 2 (NiViewer) or 4 (Protonect) views, each capturing a different part of the raw Kinect stream.

From here you’re on your own, but I hope you found this at least a bit helpful!

Adventure Time: Shaders

I’ve made a commitment to myself this year to learn more about low level programming. There are two parts to that effort.

The first is C++, a language with which I’ve had a love-hate relationship for years. I’ll talk in detail about this someday soon, but suffice it to say for now that I am trying to get more comfortable with all of the different quirks and responsibilities that come with that shambling mound of a language.

The second, which is, in its own hyper-specific way, both more interesting and less frustrating, is shaders. In case you don’t do this sort of thing much, shaders come in two basic flavours, vertex and pixel (a.k.a. fragment).

I don’t know where this goes, not yet. I’ve decided to write a talk for Gamedev NL, which will be a good way to crystallize whatever knowledge I gain in the process. Might not be the best possible presentation for the purpose, but we’re a small community, and I think people will appreciate it for whatever it is.

Shaders have long since hit criticality; they’re practically boring. You have only to look at sites like Shadertoy and ShaderFrog to see that. But there’s something very spectacular about watching a tiny bit of code output the most realistic ocean you’ll never see, or the very foundations of life.

I mean, that’s cool, at least in my world. If you know how to build something like that, you got my vote for prom queen or whatever.

So that’s a thing I want a little more of in my life. I’ll talk about it as I go. I don’t have much specific purpose for this right now; Contension’s not going to need this stuff for a good long time, but I’ll find something interesting to do with it.

Talk to you soon
mgb

Unity: Always a Work in Progress

While working on a couple of non-PMG projects, I was reminded that while Unity have had a banner year (couple of years, even) for major built-in feature upgrades – shaders, networking, UI, and services, to name a few – there are still some hard gaps.

The first problem I hit showed up while I was working on an enterprise-ey integration for the editor. The preferred data format in the enterprise these days tends to be JSON, so you need a JSON parser of some kind to be able to push data in and pull it out of systems. There are lots of third-party libraries that do this, but there are also framework-native options for most languages.

In Unity, the options for JSON are System.Web – which actually recommends a third-party library(!) – and, as of the 5.5 beta’s experimental lane, System.Json, which is meant for use with Silverlight but has the most desirable semantics and a fluent interface for dealing with JSON objects.

Having said all that, the best option right now for immediate use is still Json.NET, which has very similar semantics to System.Json but has the advantages of being compatible with the 2.0/3.5 .NET runtime and being mature enough to be fluent and fast.

This was my first time pulling a third-party .NET DLL into Unity, so it took a little while to understand how to get the system to see it. It turns out the process is actually super-simple – you just drop it into the Assets folder and use the regular Edit References functionality to reference it in your code IDE. Which is nice! I like easy problems.
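As a quick smoke test that the DLL is actually being picked up, something like the following works – the JSON blob and class name here are made up, but JObject.Parse and the indexer-style access are the Json.NET pieces I mean when I talk about its fluent semantics:


using Newtonsoft.Json.Linq;
using UnityEngine;

// Quick sanity check that Json.NET is visible to Unity: parse a blob,
// read a couple of fields, add one, and log the re-serialized result.
public class JsonSmokeTest : MonoBehaviour {
  void Start() {
    var json = "{\"name\":\"Beat Farmer\",\"builds\":[1,2,3]}";

    var obj = JObject.Parse(json);
    Debug.Log("name: " + (string)obj["name"]);
    Debug.Log("first build: " + (int)obj["builds"][0]);

    obj["engine"] = "Unity";
    Debug.Log(obj.ToString());
  }
}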

The other problem I had was related to game development, though, sadly, not Contension, which remains on hold for now.

I was trying to get a click-object setup to work in a 2d game. Unity has a lot of different ways to do this, but the granddaddy of ’em all is the Input-Raycast, which works very well, but is kind of old and busted and not very Unity-ey anymore.

The new hotness for input is Input Modules, which natively support command mapping and event-driven operation. It turns out there are a bunch of ways to work with an IM, including the EventTrigger, which is basically zero-programming event handling, which, holy shit guys. That’s a fever of a hundred and three right there.

The thing about the Input Module for my scenario, however, was that if you’re working with non-UI elements and you don’t want to roll your own solution, you have to add a Physics Raycaster to your camera, which will allow you to click on objects that have a collider, and you have to have a collider on any object you want to click on. Which is fine! I’m 100% on board with having a simple, composable way to specify which things actually can be clicked. BUT.

See, there are actually 2 Physics Raycasters available. One is the ubiquitous Physics Raycaster, which does 3d interaction. The other is the Physics 2D Raycaster, which theoretically does interaction when you’re using the Unity 2D primitives. It may surprise you – I know it surprised the heck out of me – to learn that the Physics 2D Raycaster is actually a pile of bull puckey that does not in any way work at present.

It’s one of those things you run into in gamedev that makes the whole exercise feel very frontier-ish – except there’s this enterprise dev in me, and he knows very well that a framework that ships that kind of dead-end red herring without even acknowledging the issue is a framework I have to avoid trusting at every opportunity.

It all worked out ok; you can use the 3D raycaster and a 3d bounding box just fine for the purposes of interaction, and this particular project doesn’t need the 2D physics right now. It’s just annoying and worrying, which is, at the very least, not a super fun way to feel about the basic tool I’m using.
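For anyone hitting the same wall, here’s roughly the setup that worked for me – an EventSystem in the scene, a Physics Raycaster on the camera, a regular 3D BoxCollider on the clickable object, and a handler along these lines (a sketch only; the class name is made up):


using UnityEngine;
using UnityEngine.EventSystems;

// Sketch of Input Module-based clicking on a non-UI object.
// Requires an EventSystem in the scene, a PhysicsRaycaster on the camera,
// and a 3D collider (e.g. a BoxCollider) on this GameObject.
public class ClickableThing : MonoBehaviour, IPointerClickHandler {
  public void OnPointerClick(PointerEventData eventData) {
    Debug.Log(name + " clicked at screen position " + eventData.position);
  }
}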

As an aside, I’m doing another talk soon, this time for the fun folks at NDev. It’ll be mostly a rehash of the 2016 NGX talk, but I’m hoping to tweak it at least a little to provide some depth in a few areas.  Should be interesting to see what comes of it!
