
Paint By Monsters: DevLog 4 + Feature Video 0.0.10 – Brush Strokes!

I’m gonna talk about all the faffery in a minute, but if you want the TL;DR, I invite you to take a look at feature video 0.0.10 – Brush Strokes:

Why “Brush Strokes”, Cousin?

One of the folks in my local gamedev community, the incredible artist rsvpasap, asked me a while back what kind of “painting” there was in Paint By Monsters, and at the time I didn’t have any kind of sensible answer for them.

Well, now I do, at least in the first-proof-of-concept sense. The Brush Strokes feature was directly inspired by that conversation, and I hope Angie would be proud to have sent me down this road.

On Assets, Asset Markets, and General Faffery

I’ve been mostly buying assets for PBM from itch.io artists, but there’s only so much content you can find there. I’ve looked at a bunch of new spots, particularly ArtStation and the Unity Asset Store, and in the latter I happened across the Gesture Recognizer asset.

This asset, on its face, is exactly what I wanted for Brush Strokes – a clean, simple asset focused specifically on recognizing a range of gestures, with good editor integration and the ability to create new gestures without a lot of faffing about. So, you know. Good job Raphael Marques on that front.

Unfortunately, Gesture Recognizer has a few problems out of the box that make it less than ideal to work with as of this writing. Maybe the creator will update it sometime, but in the meantime let’s talk about how I approached the issues that I encountered.

Fixing Gesture Recognizer

The first problem – and really, this is as much an indictment of Unity as a vetting agent as it is of the asset creator – is that unless you’re starting with the example level included with Gesture Recognizer, creating a new Gesture and trying to fill in the ID field causes a whole boatload of exceptions to show up in the Console Log. These will lead you back to the GesturePatternEditor.OnInspectorGUI method, and in particular to this line of code:

var closedLineProp = gestures.GetArrayElementAtIndex(currentGestureIndex).FindPropertyRelative("closedLine");

There are two exceptionally unfortunate things about this line of code. The first is that nowhere does the editor code check whether currentGestureIndex is a valid index into the gestures array. The second is that the property is only needed for a small subset of gestures anyway, because closedLine only matters when a gesture contains a closed loop of some kind.

My solution to this issue was relatively simple. I put an if statement around the entire editor block containing this piece of code:

if (gestures.arraySize > 0)
{
    //do stuff, including the array access
}

This fixed the editor errors, at least for the cases I care about.
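
If you haven’t poked at custom editor code before, here’s the rough shape of that fix in context, with a belt-and-braces index check thrown in. To be clear, this is a paraphrase built on Unity’s standard SerializedProperty API, using the names mentioned above – it is not the asset’s actual source:

// Inside the custom editor's OnInspectorGUI(): only touch the array element
// when the gestures array has entries and the index is in range.
var gestures = serializedObject.FindProperty("gestures");

if (gestures.arraySize > 0 && currentGestureIndex < gestures.arraySize)
{
    var closedLineProp = gestures.GetArrayElementAtIndex(currentGestureIndex)
                                 .FindPropertyRelative("closedLine");
    EditorGUILayout.PropertyField(closedLineProp);
}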

I ran into some issues with the OnRecognize event that the asset uses, but I can’t 100% rule out PEBKAC for those, so I’ll just state for the record that Dynamic Events are tricksy, and you should look for them at the very top of the event list for the handler object.

I don’t really want to admit how long it took me to find that entry, but…it was a long time.
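
For anyone else hunting for that entry: a UnityEvent only lists a method under its “Dynamic” group when the method’s signature matches the event’s parameter type. Assuming OnRecognize passes the asset’s RecognitionResult (which is what its handler parameter suggests), the handler needs to look something like this – the method name here is mine:

// A public method taking a single RecognitionResult will appear in the
// "Dynamic" section at the top of the OnRecognize dropdown.
public void OnGestureRecognized(RecognitionResult result)
{
    Debug.Log("Recognized a gesture: " + result);
}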

The Other Problem With OnRecognize

The really tricky bit with this event, however, is that its parameter, RecognitionResult, only includes the points from the recognized gesture. The asset seems to be aimed at some kind of cleaned-up vector drawing use case, so I’m sure that works great when you’re doing the scaling and whatnot in another component (for the record, you probably need to inherit from DrawDetector to do that properly).

But if you’re more interested in the gestures themselves – placement, size, all that kind of stuff – and you want to keep things nice and centralized to boot, you’re going to want the points as measured at the time of input.

Now, I’m not proud of my solution to this one, but I will say that it serves the purpose, and that’s enough for the kind of rapid feature iteration I’m doing on PBM. What I did was this: I changed the array of points that define the input being examined – which is a member variable in the object that invokes OnRecognize (that is to say, DrawDetector) – from private to internal.

internal List<UILineRenderer> lines;

This lets me do the following in my event handler:

// Grab the camera and the DrawDetector that raised OnRecognize
var cam = FindObjectOfType<Camera>();
var dd = transform.GetComponentInParent<DrawDetector>();
// The most recent line the player drew
var lines = dd.data.lines;
var line = lines[lines.Count - 1];
// Convert the first and last points of the stroke into world space
var point1 = cam.ScreenToWorldPoint(line.points[0]);
var point2 = cam.ScreenToWorldPoint(line.points[line.points.Count - 1]);
// Direction/length of the stroke, and its midpoint
Vector3 gestureVec = point2 - point1;
Vector3 gesturePos = (point1 + point2) / 2.0f;

This works for basically any straight-ish line, and gives me the two primary things I care about:

  1. The position of the gesture in world space
  2. The orientation of the gesture in world space

I expect to have to do more complex analysis for future specializations, but these two vectors allowed me to specify (there’s a rough sketch of how I use them right after this list):

  1. Where my brush stroke effects appear
  2. How big they are
  3. What their orientation is.
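
Here’s that sketch – a simplified placeholder (effectPrefab is hypothetical, and it sidesteps the particle-system specifics entirely), not the actual PBM spawning code:

// Place the effect at the stroke's midpoint, rotate it to lie along the stroke,
// and stretch it to the stroke's length. effectPrefab is a stand-in GameObject field.
float angle = Mathf.Atan2(gestureVec.y, gestureVec.x) * Mathf.Rad2Deg;
var effect = Instantiate(effectPrefab, gesturePos, Quaternion.Euler(0f, 0f, angle));
effect.transform.localScale = new Vector3(gestureVec.magnitude, 1f, 1f);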

Of course, then I had to figure out how particle system shapes work and how they interact with the Position, Rotation, and Scale of their host GameObject…but more on that next time.

Paint By Monsters: Devlog 2 – Work To Date

I promised some local tech folks I’d do a weekly summary of my progress on Paint By Monsters, and I figured since I was doing that anyway, I’d expand on it over here for anyone who’s not in that very specific community.

First of all, I’m happy to report that PBM feels kind of like a game now:
https://www.youtube.com/watch?v=_LGMChMhVFk

Things of note in that video include:

  1. New programmer-art logo!
  2. FTL-style “Progression” screen – as I understand it, these are required by law for roguelike and roguelite games
  3. Upgrades – each time you defeat a hero, you get an upgrade for your monsters. I’ll be expanding on these over time, and probably playing with other possibilities.

There are lots of smaller details, too, but I’m mostly just happy to be well on the way to something that’s game-ish in form.

I’ve got lots of plans and ambitions in mind for the game in the weeks and months to come, so I hope you’ll stay tuned for those. Some of the high notes include:

  1. Lair Building v1 – add balconies, traps, chandeliers, and more
  2. More progression stuff
  3. More monsters
  4. Acrobatic maneuvers

I mentioned some of those in my first Devlog over on Itch.io. Which reminds me – if you’re wondering how I’ll be working Devlogs going forward, I’m planning to alternate between itch and this site, with links going both ways. I’m hoping you’ll stick with me on that. I tend to prefer to keep things on this site, but it’s nice to have a more well-known presence as well, especially since between the game page on itch.io and the ko-fi page for Perfect Minute Games I’ve moved from “fun pastime” to “thing I’m both spending and soliciting money for”.

That’s always a bit of a hard Rubicon to cross, from hobby to serious, but I’m trying to take the lowest, informal-est approach possible right now so I can keep my focus squarely on making the game itself as fun as I can. At some point I’d like to expand my ambitions to include custom art, animation, and music, but I’m not in a rush for now.

Having said that, if you’re an artist or composer who might be interested in partnering up, my email is always open:

mgb (at) perfectminutegames (dot) com

I can’t tell you how much I appreciate the interest folks have expressed in Paint By Monsters thus far. I’m excited to work on my own game design again, of course, but you never know how well it translates to anyone else. So getting feedback and even financial support has been incredibly encouraging.

Hope yer well.

Super Simple Unity Surface Shader

As part of a project I’m involved with, I’ve been back at the shader business a little bit lately. In particular, I’ve been interested in how to provide input to a shader to allow dynamic displays of various kinds.

This post will be super-basic for those of you who already know how to write shaders, but if you’re just starting out with them and using Unity, it may provide a little extra help where you need it.

The shader explained below is a surface shader, which means that it controls the visual characteristics of particular pixels on a defined surface, and more particularly that it can interact with scene lighting. It also means that Unity does a lot of heavy lifting, generating lower-level shaders out of the high level shader code.

Doing this the way I am below is probably overkill, but since I’m learning here, I’m gonna give myself a pass (Shader Humour +1!).

Creating and Using a Surface Shader in Unity

In Unity, a Shader is applied to a rendered object via the object’s Material.  As an example, in the screenshot below, a shader named “PointShader” is applied to a Material named Upstage, which is applied to a Quad named Wall.

You can see in the UI that the Upstage material exposes two properties (well, three, but we can ignore one of them): Color and Position. These are custom properties. Here’s a simplified version of the shader code for PointShader.


Shader "Custom/PointShader"{
  Properties {
    _MainTex("Dummy", 2D) = "white" {}
    _MyColor ("Color", Color) = (1,1,1,1)
    _Point ("Position", Vector) = (0, 0, 0, 0)
  }
  SubShader {
    // Setup stuff up here
    CGPROGRAM
    // More setup stuff

    sampler2D _MainTex;
    fixed4 _MyColor;
    float4 _Point;

    // Implementation of the shader
    ENDCG
  }
}

That “Properties” block defines inputs to the shader that you can set via the material, either in the Unity editor or in script.

In this case, we’ve defined 3 inputs:

  1. We will ignore _MainTex below because we’re not really using it except to ensure that our generated shaders properly pass UV coordinates, but basically it is a 2D graphic (that is, a texture). It’s called “Dummy” in the editor, and by default it will just be a flat white texture.
  2. _MyColor (which has that My in front of it to avoid any possible conflict with the _Color variable that exists by default in a Unity Surface Shader) is a 4-component Color (RGBA). This type is basically the same as the Color type used everywhere else in Unity. This variable has the name “Color” in the editor, and defaults to opaque white.
  3. _Point is a 4-component Vector, which is slightly different from a Color in that it uses full floating point components, as you can see in the SubShader block. It’s referred to as Position in the Unity UI. The naming is up to you; I’m just showing you that you can use one name in code and a different one in the editor if you need to. It defaults to the origin.

As you can see in the screenshot above, you can set these values directly in the editor, which is pretty handy. The real power of this input method, however, comes when you start to integrate dynamic inputs via scripting.
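
In script, that just means calling the standard Material setters with the shader variable names – a quick sketch, assuming material already references the Upstage material:

// Use the shader variable names here, not the display names from the Properties block.
material.SetColor("_MyColor", Color.red);
material.SetVector("_Point", new Vector4(0.25f, 0.6f, 0f, 0f));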

PointShader was created as a sort of “selective mirror”. It allows me to apply an effect on a surface based on the location of an object in my scene. In order to do this, I have to update the _Point property of my material.  The code below shows how I’m doing that in this case.


using UnityEngine;

public class PointUpdate : MonoBehaviour {
  public Vector2 texPos;

  // Called by the tracked object whenever it moves, passing its world position
  public void Apply(Vector3 position) {
    // Map the world position into this surface's local space...
    var transformedPoint = this.transform.InverseTransformPoint(position);

    // ...then into 0..1 texture coordinates (see the notes below on the
    // divide by 10 and the inverted axes)
    var tempX = .5f - transformedPoint.x / 10;
    var tempY = .5f - transformedPoint.z / 10;
    texPos = new Vector2(tempX, tempY);

    // Push the result into the shader's _Point property
    var material = this.GetComponent<MeshRenderer>().material;
    material.SetVector("_Point", texPos);
  }
}

Whenever my tracked object moves, it calls this Apply method, supplying its own position as a parameter. I then map that position to the local space of the object on which my shader is acting:

transformedPoint = this.transform.InverseTransformPoint(position);

Then I turn that mapped position into coordinates on my texture.

Three things you should know to understand this calculation:

  1. Texture coordinates are constrained to the range of 0 to 1
  2. The quad I’m drawing on spans 10 units across in its local space (hence the divide by 10)
  3. In this case my texture coordinates are inverted to the object orientation

var tempX = .5f - transformedPoint.x / 10;
var tempY = .5f - transformedPoint.z / 10;
texPos = new Vector2(tempX, tempY);
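
To make that concrete: a tracked object at local position (2.5, y, -1) comes out as texPos = (0.5 - 2.5/10, 0.5 - (-1)/10) = (0.25, 0.6), comfortably inside the 0-to-1 texture range.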

Finally, I set the value of _Point on my material. Note that I use the variable name and NOT the editor name here:

material.SetVector("_Point", texPos);

With this value set, I know where I should paint my dot with my shader. I use the surf() function within the shader to do this. I’ve added the full SubShader code block below.


SubShader {
  Tags { "RenderType"="Opaque" }
  LOD 200
        
  CGPROGRAM
  // Physically based Standard lighting model, and enable shadows on all light types
    #pragma surface surf Standard fullforwardshadows

  // Use shader model 3.0 target, to get nicer looking lighting
  #pragma target 3.0

  sampler2D _MainTex;
  fixed4 _MyColor;
  float4 _Point;

  struct Input {
    float2 uv_MainTex;
  };

  void surf (Input IN, inout SurfaceOutputStandard o) {
    if(IN.uv_MainTex.x > _Point.x - 0.05
        && IN.uv_MainTex.x < _Point.x + 0.05
        && IN.uv_MainTex.y > _Point.y - 0.05
        && IN.uv_MainTex.y < _Point.y + 0.05 ) {
      o.Albedo = _MyColor.rgb;
      o.Alpha = 1;
    } else {
      o.Albedo = 0;
      o.Alpha = 0;
    }
  }
  ENDCG
} 

The Input structure defines the values that Unity will pass to your shader. There are a bunch of possible element settings, which are described in detail at the bottom of the Writing Surface Shaders manpage.

The surf function receives that Input structure, which in this case I’m using only to get UV coordinates (which, in case you’re just starting out, are coordinates within a texture), and the SurfaceOutputStandard structure, which is also described in that manpage we talked about.

The key thing to know here is that the main point of the surf() function is to set the values of the SurfaceOutputStandard structure. In my case, I want to turn pixels “near” my object on, and turn all the rest of them off. I do this with a simple if statement:

if(IN.uv_MainTex.x > _Point.x - 0.05
    && IN.uv_MainTex.x < _Point.x + 0.05
    && IN.uv_MainTex.y > _Point.y - 0.05
    && IN.uv_MainTex.y < _Point.y + 0.05 ) {
  o.Albedo = _MyColor.rgb;
  o.Alpha = 1;
} else {
  o.Albedo = 0;
  o.Alpha = 0;
}

Albedo is the color of the pixel in question, and Alpha its opacity. By checking whether the current pixel’s UV coordinates (which are constrained to be between 0 and 1) are within a certain distance from my _Point property, I can determine whether to paint it or not.

At runtime, this is how that looks:

It’s a simple effect, and not necessarily useful on its own, but as a starting point it’s not so bad.

Unity: Always a Work in Progress

While working on a couple of non-PMG projects, I was reminded that while Unity have had a banner year (couple of years, even) for major built-in feature upgrades – shaders, networking, UI, and services, to name a few – there are still some hard gaps.

The first problem I hit showed up while I was working on an enterprise-ey integration for the editor. The preferred data format in the enterprise these days tends to be JSON, so you need a JSON parser of some kind to be able to push data in and pull it out of systems. There are lots of third-party libraries that do this, but there are also framework-native options for most languages.

In Unity, the built-in options for JSON are System.Web – which actually recommends a third-party library(!) – and, as of the 5.5 beta’s experimental lane, System.Json, which is meant for use with Silverlight but has the most desirable semantics and a fluent interface for dealing with JSON objects.

Having said all that, the best option right now for immediate use is still Json.NET, which has very similar semantics to System.Json but has the advantages of being compatible with the 2.0/3.5 .NET runtime and being mature enough to be fluent and fast.

This was my first time pulling a third-party .NET DLL into Unity, so it took a little while to understand how to get the system to see it. It turns out the process is actually super-simple – you just drop it into the Assets folder and use the regular Edit References functionality to reference it in your code IDE. Which is nice! I like easy problems.
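
As a quick sanity check, here’s roughly what using Json.NET from a Unity script looks like once Newtonsoft.Json.dll is sitting in Assets – the class name and payload here are just for illustration:

using Newtonsoft.Json.Linq;
using UnityEngine;

public class JsonSmokeTest : MonoBehaviour {
  void Start() {
    // Parse an arbitrary JSON payload without defining a matching C# type
    var payload = JObject.Parse("{\"name\":\"Wall\",\"position\":{\"x\":1.5,\"y\":0.0}}");

    // Pull values back out with the fluent JToken indexers
    var name = (string)payload["name"];
    var x = (float)payload["position"]["x"];

    Debug.Log(name + " is at x = " + x);
  }
}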

The other problem I had was related to game development, though, sadly, not Contension, which remains on hold for now.

I was trying to get a click-object setup to work in a 2d game. Unity has a lot of different ways to do this, but the granddaddy of ’em all is the Input-Raycast, which works very well, but is kind of old and busted and not very Unity-ey anymore.
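
For reference, the pattern I mean is the classic poll-and-raycast approach, something like this (a minimal sketch, not the project’s actual code):

using UnityEngine;

public class ClassicClickPicker : MonoBehaviour {
  void Update() {
    // Old school: poll Input every frame, cast a ray from the camera through
    // the mouse position, and see which collider (if any) it hits.
    if (Input.GetMouseButtonDown(0)) {
      var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
      RaycastHit hit;
      if (Physics.Raycast(ray, out hit)) {
        Debug.Log("Clicked " + hit.collider.name);
      }
    }
  }
}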

The new hotness for input is Input Modules, which natively support command mapping and event-driven operation. It turns out there are a bunch of ways to work with an IM, including the EventTrigger, which is basically zero-programming event handling, which, holy shit guys. That’s a fever of a hundred and three right there.

The thing about the Input Module for my scenario, however, was that if you’re working with non-UI elements and you don’t want to roll your own solution, you have to add a Physics Raycaster somewhere, which will allow you to click on objects that have a collider, and you have to have a collider on any object you want to click on. Which is fine! I’m 100% on board with having a simple, composable way to specify which things actually can be clicked. BUT.

See, there are actually 2 Physics Raycasters available. One is the ubiquitous Physics Raycaster, which does 3d interaction. The other is the Physics 2D Raycaster, which theoretically does interaction when you’re using the Unity 2D primitives. It may surprise you – I know it surprised the heck out of me – to learn that the Physics 2D Raycaster is actually a pile of bull puckey that does not in any way work at present.

It’s one of those things you run into in gamedev that makes the whole exercise feel very frontier-ish, except there’s this enterprise dev in me. And he knows very well that a framework that ships that kind of dead-end red herring and doesn’t even acknowledge the issue is a framework I have to avoid trusting at every opportunity.

It all worked out ok; you can use the 3D raycaster and a 3d bounding box just fine for the purposes of interaction, and this particular project doesn’t need the 2D physics right now. It’s just annoying and worrying, which is, at the very least, not a super fun way to feel about the basic tool I’m using.
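
For the record, here’s roughly what the working setup looks like: an EventSystem in the scene, a PhysicsRaycaster on the camera, a regular 3D collider (a simple BoxCollider works) on anything clickable, and a handler component like this sketch – the class name is just a placeholder:

using UnityEngine;
using UnityEngine.EventSystems;

// Attach to any object with a 3D collider; the EventSystem + PhysicsRaycaster
// will route pointer events to it.
public class ClickableThing : MonoBehaviour, IPointerClickHandler {
  public void OnPointerClick(PointerEventData eventData) {
    Debug.Log(name + " was clicked");
  }
}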

As an aside, I’m doing another talk soon, this time for the fun folks at NDev. It’ll be mostly a rehash of the 2016 NGX talk, but I’m hoping to tweak it at least a little to provide some depth in a few areas.  Should be interesting to see what comes of it!