Super Simple Unity Surface Shader

As part of a project I’m involved with, I’ve been back at the shader business a little bit lately. In particular, I’ve been interested in how to provide input to a shader to allow dynamic displays of various kinds.

This post will be super-basic for those of you who already know how to write shaders, but if you’re just starting out with them and using Unity, it may provide a little extra help where you need it.

The shader explained below is a surface shader, which means that it controls the visual characteristics of particular pixels on a defined surface, and more particularly that it can interact with scene lighting. It also means that Unity does a lot of heavy lifting, generating lower-level shaders out of the high level shader code.

Doing this the way I am below is probably overkill, but since I’m learning here, I’m gonna give myself a pass (Shader Humour +1!).

Creating and Using a Surface Shader in Unity

In Unity, a Shader is applied to a rendered object via the object’s Material.  As an example, in the screenshot below, a shader named “PointShader” is applied to a Material named Upstage, which is applied to a Quad named Wall.

You can see in the UI that the Upstage material exposes two properties (actually three, but we can ignore one of them), Color and Position. These are custom properties. Here’s a simplified version of the shader code for PointShader.


Shader "Custom/PointShader"{
  Properties {
    _MainTex("Dummy", 2D) = "white" {}
    _MyColor ("Color", Color) = (1,1,1,1)
    _Point ("Position", Vector) = (0, 0, 0, 0)
  }
  SubShader {
    // Setup stuff up here
    CGPROGRAM
    // More setup stuff

    sampler2D _MainTex;
    fixed4 _MyColor;
    float4 _Point;

    // Implementation of the shader
    ENDCG
  }
}

That “Properties” block defines inputs to the shader that you can set via the material, either in the Unity editor or in script.

In this case, we’ve defined 3 inputs:

  1. We will ignore _MainTex below because we’re not really using it except to ensure that our generated shaders properly pass UV coordinates, but basically it is a 2D graphic (that is, a texture). It’s called “Dummy” in the editor, and by default it is just a flat white texture.
  2. _MyColor (which has that My in front of it to avoid any possible conflict with the _Color variable that exists by default in a Unity Surface Shader) is a 4-component Color (RGBA). This type is basically the same as the Color type used everywhere else in Unity. This variable has the name “Color” in the editor, and defaults to opaque white.
  3. _Point is a 4-component Vector, which is slightly different from a Color in that it uses full floating point components, as you can see in the SubShader block. It’s referred to as Position in the Unity UI. The naming is up to you; I’m just showing you that you can use one name in code and a different one in the editor if you need to. It defaults to the origin.

As you can see in the screenshot above, you can set these values directly in the editor, which is pretty handy. The real power of this input method, however, comes when you start to integrate dynamic inputs via scripting.
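To illustrate, here’s a minimal sketch (hypothetical, not part of this project) of driving the Color property from a script attached to the same GameObject as the renderer:


using UnityEngine;

// Hypothetical illustration: animates the material's _MyColor property from script.
public class ColorPulse : MonoBehaviour {
  Material material;

  void Start() {
    // Accessing .material gives this renderer its own material instance
    material = GetComponent<MeshRenderer>().material;
  }

  void Update() {
    // Use the variable name ("_MyColor"), not the editor label ("Color")
    var t = Mathf.PingPong(Time.time, 1f);
    material.SetColor("_MyColor", Color.Lerp(Color.white, Color.red, t));
  }
}

The same pattern, applied to the Position property, is what drives the effect described next.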

PointShader was created as a sort of “selective mirror”. It allows me to apply an effect on a surface based on the location of an object in my scene. In order to do this, I have to update the _Point property of my material.  The code below shows how I’m doing that in this case.


using UnityEngine;

public class PointUpdate : MonoBehaviour {
  public Vector2 texPos;

  // Called by the tracked object whenever it moves, passing its world position
  public void Apply(Vector3 position) {
    // Map the world position into this surface's local space
    var transformedPoint = this.transform.InverseTransformPoint(position);
    // Convert local coordinates into 0..1 texture coordinates
    var tempX = .5f - transformedPoint.x / 10;
    var tempY = .5f - transformedPoint.z / 10;
    texPos = new Vector2(tempX, tempY);
    // Use the shader variable name, not the editor label
    var material = this.GetComponent<MeshRenderer>().material;
    material.SetVector("_Point", texPos);
  }
}

Whenever my tracked object moves, it calls this Apply method, supplying its own position as a parameter. I then map that position to the local space of the object on which my shader is acting:

transformedPoint = this.transform.InverseTransformPoint(position);

Then I turn that mapped position into coordinates on my texture.

Three things you should know to understand this calculation:

  1. Texture coordinates are constrained to the range of 0 to 1
  2. A Unity quad has sides of length 10
  3. In this case, my texture coordinates are inverted relative to the object’s orientation

var tempX = .5f - transformedPoint.x / 10;
var tempY = .5f - transformedPoint.z / 10;
texPos = new Vector2(tempX, tempY);

Finally, I set the value of _Point on my material. Note that I use the variable name and NOT the editor name here:

material.SetVector("_Point", texPos);

With this value set, I know where I should paint my dot with my shader. I use the surf() function within the shader to do this. I’ve added the full SubShader code block below.


SubShader {
  Tags { "RenderType"="Opaque" }
  LOD 200
        
  CGPROGRAM
  // Physically based Standard lighting model, and enable shadows on all light types
  #pragma surface surf Standard fullforwardshadows

  // Use shader model 3.0 target, to get nicer looking lighting
  #pragma target 3.0

  sampler2D _MainTex;
  fixed4 _MyColor;
  float4 _Point;

  struct Input {
    float2 uv_MainTex;
  };

  void surf (Input IN, inout SurfaceOutputStandard o) {
    if(IN.uv_MainTex.x > _Point.x - 0.05
        && IN.uv_MainTex.x < _Point.x + 0.05
        && IN.uv_MainTex.y > _Point.y - 0.05
        && IN.uv_MainTex.y < _Point.y + 0.05 ) {
      o.Albedo = _MyColor.rgb;
      o.Alpha = 1;
    } else {
      o.Albedo = 0;
      o.Alpha = 0;
    }
  }
  ENDCG
} 

The Input structure defines the values that Unity will pass to your shader. There are a bunch of possible element settings, which are described in detail at the bottom of the Writing Surface Shaders manpage.

The surf function receives that Input structure, which in this case I’m using only to get UV coordinates (which, in case you’re just starting out, are coordinates within a texture), and the SurfaceOutputStandard structure, which is also described in that manpage we talked about.

The key thing to know here is that the main point of the surf() function is to set the values of the SurfaceOutputStandard structure. In my case, I want to turn pixels “near” my object on, and turn all the rest of them off. I do this with a simple if statement:

  if(IN.uv_MainTex.x > _Point.x - 0.05
      && IN.uv_MainTex.x < _Point.x + 0.05
      && IN.uv_MainTex.y > _Point.y - 0.05
      && IN.uv_MainTex.y < _Point.y + 0.05 ) {
    o.Albedo = _MyColor.rgb;
    o.Alpha = 1;
  } else {
    o.Albedo = 0;
    o.Alpha = 0;
  }

Albedo is the color of the pixel in question, and Alpha its opacity. By checking whether the current pixel’s UV coordinates (which are constrained to be between 0 and 1) are within a certain distance from my _Point property, I can determine whether to paint it or not.

At runtime, this is how that looks:

It’s a simple effect, and not necessarily useful on its own, but as a starting point it’s not so bad.

Unity: Always a Work in Progress

While working on a couple of non-PMG projects, I was reminded that while Unity has had a banner year (couple of years, even) for major built-in feature upgrades – shaders, networking, UI, and services, to name a few – there are still some hard gaps.

The first problem I hit showed up while I was working on an enterprise-ey integration for the editor. The preferred data format in the enterprise these days tends to be JSON, so you need a JSON parser of some kind to be able to push data in and pull it out of systems. There are lots of third-party libraries that do this, but there are also framework-native options for most languages.

In Unity, the options for JSON are System.Web – which actually recommends a third-party library(!) – and, as of the 5.5 beta’s experimental runtime, System.Json, which was originally meant for use with Silverlight but has the most desirable semantics and a fluent interface for dealing with JSON objects.

Having said all that, the best option right now for immediate use is still Json.NET, which has very similar semantics to System.Json but has the advantages of being compatible with the 2.0/3.5 .NET runtime and being mature enough to be fluent and fast.
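To give a sense of those semantics, here’s a rough sketch (hypothetical payload, but standard Json.NET calls from Newtonsoft.Json.Linq, assuming the Json.NET DLL is already in the project) of reading and building JSON objects:


using Newtonsoft.Json.Linq;
using UnityEngine;

public class JsonDemo : MonoBehaviour {
  void Start() {
    // Parse an incoming payload (made-up data, just for illustration)
    var incoming = JObject.Parse("{\"name\":\"Wall\",\"position\":{\"x\":1.5,\"y\":0.0}}");
    float x = (float)incoming["position"]["x"];

    // Build an outgoing payload object by object
    var outgoing = new JObject(
      new JProperty("status", "ok"),
      new JProperty("x", x));
    Debug.Log(outgoing.ToString());
  }
}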

This was my first time pulling a third-party .NET DLL into Unity, so it took a little while to understand how to get the system to see it. It turns out the process is actually super-simple – you just drop it into the Assets folder and use the regular Edit References functionality to reference it in your code IDE. Which is nice! I like easy problems.

The other problem I had was related to game development, though, sadly, not Contension, which remains on hold for now.

I was trying to get a click-object setup to work in a 2d game. Unity has a lot of different ways to do this, but the granddaddy of ’em all is the Input-Raycast, which works very well, but is kind of old and busted and not very Unity-ey anymore.
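For reference, that classic approach looks roughly like this (a sketch, assuming a camera tagged MainCamera and colliders on anything clickable):


using UnityEngine;

// The old-school approach: poll for a click and raycast from the camera
// through the mouse position against anything with a 3D collider.
public class ClassicClickDetector : MonoBehaviour {
  void Update() {
    if (!Input.GetMouseButtonDown(0)) return;

    var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit)) {
      Debug.Log("Clicked " + hit.collider.name);
    }
  }
}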

The new hotness for input is Input Modules, which natively support command mapping and event-driven operation. It turns out there are a bunch of ways to work with an IM, including the EventTrigger, which is basically zero-programming event handling, which, holy shit guys. That’s a fever of a hundred and three right there.

The thing about the Input Module for my scenario, however, was that if you’re working with non-UI elements and you don’t want to roll your own solution, you have to add a Physics Raycaster somewhere, which will allow you to click on objects that have a collider, and you have to have a collider on any object you want to click on. Which is fine! I’m 100% on board with having a simple, composable way to specify which things actually can be clicked. BUT.

See, there are actually 2 Physics Raycasters available. One is the ubiquitous Physics Raycaster, which does 3d interaction. The other is the Physics 2D Raycaster, which theoretically does interaction when you’re using the Unity 2D primitives. It may surprise you – I know it surprised the heck out of me – to learn that the Physics 2D Raycaster is actually a pile of bull puckey that does not in any way work at present.

It’s one of those things you often run into in gamedev that makes the whole exercise feel very frontier-ish, except there’s this enterprise dev in me. And he knows very well that a framework that ships that kind of dead-end red herring and doesn’t even acknowledge the issue is a framework I have to avoid trusting at every opportunity.

It all worked out ok; you can use the 3D raycaster and a 3d bounding box just fine for the purposes of interaction, and this particular project doesn’t need the 2D physics right now. It’s just annoying and worrying, which is, at the very least, not a super fun way to feel about the basic tool I’m using.
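For what it’s worth, the working setup amounts to roughly this (a sketch of the pattern, not this project’s actual code): an EventSystem in the scene, a Physics Raycaster on the camera, a 3D collider on the clickable object, and a handler component along these lines:


using UnityEngine;
using UnityEngine.EventSystems;

// Receives clicks via the event system. Requires an EventSystem in the
// scene, a PhysicsRaycaster on the camera, and a 3D collider (e.g. a
// BoxCollider) on this object, even in an otherwise 2D scene.
public class ClickableThing : MonoBehaviour, IPointerClickHandler {
  public void OnPointerClick(PointerEventData eventData) {
    Debug.Log(name + " was clicked");
  }
}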

As an aside, I’m doing another talk soon, this time for the fun folks at NDev. It’ll be mostly a rehash of the 2016 NGX talk, but I’m hoping to tweak it at least a little to provide some depth in a few areas.  Should be interesting to see what comes of it!