
Paint By Monsters DevLog 5 – Shader Experimentation

It’s been entirely too long since I was last able to talk about the development on Paint By Monsters, but with the Conceptualization Funding sorted and a partnership forged with the incomparable Stellar Boar Productions, I’m finally able to put a little of the old grey goo to work on matters technical and creative.

So: Shaders. This goes back to the thread that was originally pulled when I started with the Brush Stroke feature demo. There are lots of ways to paint stuff in Unity, but I’ve been meaning to take a deep dive (ok, a shallow dive) into shaders for a while now, and this seemed like a good opportunity.

If you want to jump directly to the shader itself, it’s here:
https://www.shadertoy.com/view/Dt2XWG

Otherwise, please, read on.

The Joys of Shadertoy

If you’re not already familiar with Shadertoy, it’s a website that allows you to create complex shaders within your browser, which is just the kind of nonsense that leads to a Destroy All Software talk. But it’s also just kind of awesome, since it lets you mess with shaders on a tight loop of try-fail-swear-fix-enjoy.

I looked at a bunch of different shaders, mostly having to do with mouse trails and such, but when I started iterating I took opexu’s Mouse Trails Soft shader as my jumping-off point. Obviously it doesn’t really look quite like a paint effect, but it leaves a persistent trail of colour based on mouse input, which is about as good as it gets.

I’ll admit I’ve forgotten a lot of what I learned back when I was experimenting with the Kinect. I’d half-forgotten I even wrote a post about a shader with inputs. As a result, I had to relearn some things.

Shadertoy uses a different representation for shaders than Unity, because nobody who implements a shader architecture can seem to leave well enough alone. If you’re not familiar with one or the other, I’d encourage you to read the Shadertoy and Unity documentation, but the short version is that in addition to the Image code (which determines the onscreen color of each fragment), Shadertoy allows you to use up to 4 buffers, and each buffer (plus Image) can accept up to 4 inputs (iChannel0-3).
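To make the buffer idea concrete, here’s a tiny throwaway sketch of my own (not from any of the shaders discussed in this post) showing the self-feedback pattern: a buffer wired to its own output on iChannel0 that stamps a dot at the mouse and lets old marks slowly fade.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord/iResolution.xy;
    vec4 prev = texture(iChannel0, uv);                     // this buffer's own output from last frame
    float stamp = step(length(fragCoord - iMouse.xy), 10.); // 1.0 within 10 pixels of the mouse
    fragColor = max(prev * 0.99, vec4(stamp));              // add the new mark, let the old ones fade
}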

Mouse Trails Soft uses one buffer, which is where it holds both the image so far and the last known mouse pointer position. The latter is by turns both clever and wasteful, but in this specific case it makes sense.

BufferA takes BufferA (ie itself) as input on iChannel0. It took me a while to suss out enough to grok the relevant details here, so don’t worry if the following looks like gibberish right now.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{   
    vec2 uv = fragCoord/iResolution.xy;
    vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
    vec2 uvQuad = uv*aspect;
    
    vec2 mP = uvQuad - iMouse.xy/iResolution.xy*aspect;
    float d = 1.-length(mP);    
    
    vec4 bufA = texture(iChannel0, uv);
    vec2 mPN = bufA.zw;
    vec2 vel = min(max(abs(mPN - mP), vec2(0.001)), vec2(0.05));
    
    d = smoothstep(0.85,1.3,d+length(vel))/0.4;
    vec2 dot = vec2(d);

    dot = max(dot, bufA.xy);    
    vec4 col = vec4(dot.x, dot.y, mP.x, mP.y);
    
    if(iFrame == 0) {
        col = vec4(.0,.0, mP.x, mP.y);
    }
    
    fragColor = vec4(col);
}

I went through this code slowly, teasing out the meaning of each line.

  1. OK, so uv is the texture coordinate normalized by resolution
  2. Aspect is the aspect ratio
  3. uvQuad is the aspect ratio-normalized coordinate
  4. Wait, but we’re adjusting by the mouse coordinate
  5. Ok, uv is the coordinate of the current fragment.
  6. So mP is a resolution+aspect-normalized vector from the mouse to the current fragment
  7. And d is initialized to…the one’s complement of that?
  8. bufA is the existing 4-component color at coordinate uv
  9. mPN is the…zw value of that value? Which is actually the x and y value from last frame?
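To restate the trick in my own words (the comments here are mine, not opexu’s): every fragment stashes its current mouse-to-fragment offset in the blue and alpha channels, so that next frame it can compare the stored offset against the new one and recover how far the mouse has moved.

vec4 bufA = texture(iChannel0, uv);         // this buffer's own output from last frame
vec2 mPN  = bufA.zw;                        // the mouse-to-fragment offset this fragment stored last frame
vec2 vel  = min(max(abs(mPN - mP), vec2(0.001)), vec2(0.05)); // so mPN - mP is (per axis) how far the mouse moved
// ...
vec4 col  = vec4(dot.x, dot.y, mP.x, mP.y); // .xy carries the trail so far, .zw stashes the current offset for next frame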

For full clarity, the Image code is below, but it’s mostly just repeated setup. This is the point where I realized that I don’t need to understand every detail. Maybe you can tell me why that col *= uv + 1-uv is there.

void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 uv = fragCoord/iResolution.xy;
    vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
    vec2 uvQuad = uv * aspect;
    vec4 bufA = texture(iChannel0, uv);

    vec4 col = bufA;
    col.xy *= (uv.xy+(1.-uv.xy));

    if(iFrame == 0) {
        col = vec4(.0,.0,.0,1.);
    }

    fragColor = vec4(col.xy, .0, 1.);
}

With my newfound (albeit limited) understanding in hand, it was time to start working on my own shader. I figured I’d learned enough to start from scratch, since the soft-circle shape was pretty far from where I wanted my code to end up.

First things first: 3 dimensional math!

I want to draw a box whose axes are aligned with the direction my mouse is moving in. The extent of the box along the direction of movement is just its width multiplied by the mouse’s velocity unit vector, which I can get by normalizing the velocity vector. Assuming, that is, that I can get the velocity vector. Which means I need not just the current mouse position – available as iMouse – but the previous position as well.

Luckily, that’s exactly what opexu’s trick is for. Since opexu stores the mouse offset in the third and fourth color components, however, it limits which colors are available for the trail itself. Not having a blue component seemed like a pretty big issue when Paint By Monsters is so heavily based on painting, so I decided to dedicate a buffer to tracking current and previous mouse coordinates.

And so, Buffer A was added. Buffer A takes itself as input and for each execution it puts the last mouse position into the R & G components, and the new position into the B & A components. On the first frame, it loads everything with position 0, which, as it turns out, can be a bad choice. More on that in a minute.

Buffer A uses the code below to update itself, reading its own previous contents from iChannel0.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if(iFrame == 0) {
        // No history yet on the very first frame, so zero everything out
        fragColor = vec4(0.0);
    } else {
        vec2 uv = fragCoord / iResolution.xy;
        // Last frame's "current" position (.zw) becomes this frame's "previous" (.xy)
        vec2 m1 = texture(iChannel0, uv).zw;
        fragColor = vec4(m1,iMouse.xy);
    }
}

The next thing I needed was my persistent graphic. Again, opexu’s approach seems fine here – I can use a self-referencing buffer that just holds all of the fragments painted so far.

Thus Buffer B was added. Buffer B, as it turns out, is where the key goodness happens. Since I’m reading color from it, it needs to know about newly-painted fragments. Which means I need to paint into it, rather than in Image.

I’ll still need the mouse’s velocity vector, so I feed Buffer A into Buffer B’s iChannel0 input.

vec4 bufM = texture(iChannel0, uv);

I can then calculate my velocity vector and, since I’ll need the unit direction of travel shortly, normalize it.

vec2 mouse_move_vec = bufM.zw - bufM.xy;
vec2 velocity_normalized = normalize(mouse_move_vec);

As mentioned previously, I wanted a simple box-shaped brush for this iteration on the shader. The vector “width” of the box can be represented as w * velocity_normalized, where w is a scalar equal to half the full width of the box. This, by itself, defines a narrow, infinitely tall box. We can check whether a particular fragment falls within that width by projecting it onto the velocity vector.

The quantity of interest, in this case, is the absolute distance – measured along the direction of mouse movement, which is parallel to the short axis of the box – from the fragment coordinate to the centre of the box. We first need to calculate the vector between our fragment and a point already known to lie within the box – v_mouse1, for example.

vec2 v_frag = fragCoord - v_mouse1;

If we take the dot product of this vector with a unit vector that’s parallel to the short axis of the box (aka velocity_normalized), we can determine the parallel distance from the original point to the fragment coordinate.

float d_parallel = dot(v_frag, velocity_normalized);

If the absolute value of this distance is less than w, the fragment lies within our infinitely tall box (the dot product can come out negative, depending on which side of v_mouse1 the fragment sits).
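In code, with w standing in for the half-width (a name I’m introducing just for this snippet; the full shader below simply uses the literal value), that check is a one-liner:

float w = 10.;                           // half the full width of the box
bool inside_width = abs(d_parallel) < w; // true when the fragment falls inside the infinite strip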

We can multiply d_parallel by velocity_normalized to get the vector representation of the parallel displacement, which will come in handy later.

vec2 v_parallel = d_parallel * velocity_normalized;

The next step is to constrain our box’s height as well. For this we need the perpendicular distance from one end of the velocity vector to our fragment. If we can get a vector that is perpendicular to velocity but still in the plane of our canvas, we’re good.

The simple way to get a vector that is perpendicular to another vector is to use the cross product, represented in GLSL as the cross() function. If you’re just getting started with vector math, this can be a stumbling block – we only have the velocity vector, right?

But the thing is, we also have a 2d plane – the canvas – and a plane can be represented by a 3-dimensional vector that is perpendicular to its surface. Fragment coordinates use x and y, which means (by convention) our 3rd dimension is z.

We can take the cross product of a unit z vector with our velocity.

vec3 v_perp = cross(vec3(velocity_normalized, 0.), vec3(0., 0., 1.)); // cross() only takes vec3s, so pad the 2D vectors with z = 0

However, because we calculated v_frag and its parallel component, v_parallel, there’s another, less computationally demanding way to do this.

Frame of reference transformations are beyond the scope of this article, but the relevant idea is simple: if we decompose any vector v_original into two orthogonal components, v_parallel and v_perpendicular, then by definition those components sum to the original vector.

v_parallel + v_perpendicular = v_original

This equation, however, can be rearranged to yield the perpendicular component.

v_perpendicular = v_original - v_parallel;

In our case, we already have v_frag and v_parallel.

vec2 v_perp = v_frag - v_parallel;

With both the parallel and perpendicular components in hand, we can fully constrain our fragment with respect to our box.

//Initialize with previously painted fragment color (Buffer B reading its own previous output from iChannel1)
fragColor = texture(iChannel1, uv);

// Box is 2*20 high, 2*10 wide
if(length(v_perp) < 20. && length(v_parallel) < 10.) {
  // paint it red
  fragColor = vec4(1.0, 0.0, 0.0, 0.0);
}

This shader will set fragColor to red if and only if the current fragment lies within the bounds of a box of height 40 and width 20 with its shorter axis aligned with the mouse’s direction of movement. Since we’re feeding BufferB back into itself, we initialize our fragment from the previous value of fragColor, so even after the “brush” moves on, painted fragments remain painted.
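For reference, here’s how those pieces fit together into a complete Buffer B pass. This is my own reassembly of the snippets above rather than a line-for-line copy of the published shader, and it assumes the wiring described earlier – Buffer A (the mouse history) on iChannel0 and Buffer B itself on iChannel1 – plus two assumptions of mine: that v_mouse1 is the previous mouse position from Buffer A, and that we skip the box test entirely when the mouse hasn’t moved (normalize() of a zero vector is undefined).

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;

    // Mouse history from Buffer A: .xy = previous position, .zw = current position
    vec4 bufM = texture(iChannel0, uv);
    vec2 v_mouse1 = bufM.xy;
    vec2 mouse_move_vec = bufM.zw - bufM.xy;

    // Start from whatever has already been painted (Buffer B feeding back into itself)
    fragColor = texture(iChannel1, uv);

    if(length(mouse_move_vec) > 0.0) {
        vec2 velocity_normalized = normalize(mouse_move_vec);

        // Decompose the fragment's offset from the mouse into components parallel
        // and perpendicular to the direction of movement
        vec2 v_frag = fragCoord - v_mouse1;
        float d_parallel = dot(v_frag, velocity_normalized);
        vec2 v_parallel = d_parallel * velocity_normalized;
        vec2 v_perp = v_frag - v_parallel;

        // Box is 2*20 high, 2*10 wide
        if(length(v_perp) < 20. && length(v_parallel) < 10.) {
            fragColor = vec4(1.0, 0.0, 0.0, 0.0); // paint it red
        }
    }

    if(iFrame == 0) {
        fragColor = vec4(0.0); // start from a blank canvas
    }
}

The Image pass then only has to put Buffer B on screen. A minimal version, assuming Buffer B is wired to Image’s iChannel0, might look like this:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4(texture(iChannel0, uv).rgb, 1.0); // just display the painted buffer
}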


Further Reading

I looked up some stuff about simulating oil paint for real, and maybe at some point I’ll put it to use, but given it will eventually get rendered down to pixel art, maybe not.

Either way, the paper is pretty interesting. Finite Element Analysis with hybrid physical models is not something I’ve seen all that often.

I also started looking up Unity videos to try and get my head back in the shaders + Unity headspace, and I ran across this video by Code Monkey, where he does something very similar to what I’ve done above, but uses C# code and MonoBehaviours instead of shaders and trickses.


Featured Image

“Kaleidoscope VI” by fdecomite is licensed under CC BY 2.0.

Super Simple Unity Surface Shader

As part of a project I’m involved with, I’ve been back at the shader business a little bit lately. In particular, I’ve been interested in how to provide input to a shader to allow dynamic displays of various kinds.

This post will be super-basic for those of you who already know how to write shaders, but if you’re just starting out with them and using Unity, it may provide a little extra help where you need it.

The shader explained below is a surface shader, which means that it controls the visual characteristics of particular pixels on a defined surface, and more particularly that it can interact with scene lighting. It also means that Unity does a lot of heavy lifting, generating lower-level shaders out of the high level shader code.

Doing this the way I am below is probably overkill, but since I’m learning here, I’m gonna give myself a pass (Shader Humour +1!).

Creating and Using a Surface Shader in Unity

In Unity, a Shader is applied to a rendered object via the object’s Material.  As an example, in the screenshot below, a shader named “PointShader” is applied to a Material named Upstage, which is applied to a Quad named Wall.

You can see in the UI that the Upstage material exposes two properties (actually 3, but we can ignore one of them), Color and Position. These are actually custom properties. Here’s a simplified version of the shader code for PointShader.


Shader "Custom/PointShader"{
  Properties {
    _MainTex("Dummy", 2D) = "white" {}
    _MyColor ("Color", Color) = (1,1,1,1)
    _Point ("Position", Vector) = (0, 0, 0, 0)
  }
  SubShader {
    // Setup stuff up here
    CGPROGRAM
    // More setup stuff

    sampler2D _MainTex;
    fixed4 _MyColor;
    float4 _Point;

    // Implementation of the shader
    ENDCG
  }
}

That “Properties” block defines inputs to the shader that you can set via the material, either in the Unity editor or in script.

In this case, we’ve defined 3 inputs:

  1. We will ignore _MainTex below because we’re not really using it except to ensure that our generated shaders properly pass UV coordinates. Basically, it is a 2D graphic (that is, a texture); it’s called “Dummy” in the editor, and by default it will just be a texture that is flat white.
  2. _MyColor (which has that My in front of it to avoid any possible conflict with the _Color variable that exists by default in a Unity Surface Shader) is a 4-component Color (RGBA). This type is basically the same as the Color type used everywhere else in Unity. This variable has the name “Color” in the editor, and defaults to opaque white.
  3. _Point is a 4-component Vector, which is slightly different from a Color in that it uses full floating point components, as you can see in the SubShader block. It’s referred to as Position in the Unity UI. The naming is up to you; I’m just showing you that you can use one name in code and a different one in the editor if you need to. It defaults to the origin.

As you can see in the screenshot above, you can set these values directly in the editor, which is pretty handy. The real power of this input method, however, comes when you start to integrate dynamic inputs via scripting.

PointShader was created as a sort of “selective mirror”. It allows me to apply an effect on a surface based on the location of an object in my scene. In order to do this, I have to update the _Point property of my material.  The code below shows how I’m doing that in this case.


using UnityEngine;

public class PointUpdate : MonoBehaviour {
  public Vector2 texPos;

  // Called by the tracked object whenever it moves, passing its world position
  public void Apply(Vector3 position) {
    // Map the world position into this object's local space
    var transformedPoint = this.transform.InverseTransformPoint(position);
    // Convert local coordinates into texture (UV) coordinates -- see the notes below
    var tempX = .5f - transformedPoint.x / 10;
    var tempY = .5f - transformedPoint.z / 10;
    texPos = new Vector2(tempX, tempY);
    var material = this.GetComponent<MeshRenderer>().material;
    material.SetVector("_Point", texPos);
  }
}

Whenever my tracked object moves, it calls this Apply method, supplying its own position as a parameter. I then map that position to the local space of the object on which my shader is acting:

transformedPoint = this.transform.InverseTransformPoint(position);

Then I turn that mapped position into coordinates on my texture.

Three things you should know to understand this calculation:

  1. Texture coordinates are constrained to the range of 0 to 1
  2. The quad I’m drawing on spans 10 units per side in its local space (hence the division by 10)
  3. In this case my texture coordinates are inverted relative to the object’s orientation

var tempX = .5f - transformedPoint.x / 10;
var tempY = .5f - transformedPoint.z / 10;
texPos = new Vector2(tempX, tempY);

Finally, I set the value of _Point on my material. Note that I use the variable name and NOT the editor name here:

material.SetVector("_Point", texPos);

With this value set, I know where I should paint my dot with my shader. I use the surf() function within the shader to do this. I’ve added the full SubShader code block below.


SubShader {
  Tags { "RenderType"="Opaque" }
  LOD 200
        
  CGPROGRAM
  // Physically based Standard lighting model, and enable shadows on all light types
  #pragma surface surf Standard fullforwardshadows

  // Use shader model 3.0 target, to get nicer looking lighting
  #pragma target 3.0

  sampler2D _MainTex;
  fixed4 _MyColor;
  float4 _Point;

  struct Input {
    float2 uv_MainTex;
  };

  void surf (Input IN, inout SurfaceOutputStandard o) {
    if(IN.uv_MainTex.x > _Point.x - 0.05
        && IN.uv_MainTex.x < _Point.x + 0.05
        && IN.uv_MainTex.y > _Point.y - 0.05
        && IN.uv_MainTex.y < _Point.y + 0.05 ) {
      o.Albedo = _MyColor.rgb;
      o.Alpha = 1;
    } else {
      o.Albedo = 0;
      o.Alpha = 0;
    }
  }
  ENDCG
} 

The Input structure defines the values that Unity will pass to your shader. There are a bunch of possible element settings, which are described in detail at the bottom of the Writing Surface Shaders manpage.

The surf function receives that Input structure, which in this case I’m using only to get UV coordinates (which, in case you’re just starting out, are coordinates within a texture), and the SurfaceOutputStandard structure, which is also described in that manpage we talked about.

The key thing to know here is that the main point of the surf() function is to set the values of the SurfaceOutputStandard structure. In my case, I want to turn pixels “near” my object on, and turn all the rest of them off. I do this with a simple if statement:

  if(IN.uv_MainTex.x > _Point.x - 0.05
      && IN.uv_MainTex.x < _Point.x + 0.05
      && IN.uv_MainTex.y > _Point.y - 0.05
      && IN.uv_MainTex.y < _Point.y + 0.05 ) {
    o.Albedo = _MyColor.rgb;
    o.Alpha = 1;
  } else {
    o.Albedo = 0;
    o.Alpha = 0;
  }

Albedo is the color of the pixel in question, and Alpha its opacity. By checking whether the current pixel’s UV coordinates (which are constrained to be between 0 and 1) are within a certain distance from my _Point property, I can determine whether to paint it or not.

At runtime, this is how that looks:

It’s a simple effect, and not necessarily useful on its own, but as a starting point it’s not so bad.

Adventure Time: Shaders

I’ve made a commitment to myself this year to learn more about low level programming. There are two parts to that effort.

The first is C++, a language with which I’ve had a love-hate relationship for years. I’ll talk in detail about this someday soon, but suffice it to say for now that I am trying to get more comfortable with all of the different quirks and responsibilities that come with that shambling mound of a language.

The second, which is, in its own hyper-specific way, both more interesting and less frustrating, is shaders. In case you don’t do this sort of thing much, shaders come in two basic flavours, vertex and pixel.

I don’t know where this goes, not yet. I’ve decided to write a talk for Gamedev NL, which will be a good way to crystallize whatever knowledge I gain in the process. Might not be the best possible presentation for the purpose, but we’re a small community, and I think people will appreciate it for whatever it is.

Shaders have long since hit criticality; they’re practically boring. You have only to look at sites like Shadertoy and ShaderFrog to see that. But there’s something very spectacular about seeing a tiny bit of code output the most realistic ocean you’ll never see, or the very foundations of life.

I mean, that’s cool, at least in my world. If you know how to build something like that, you got my vote for prom queen or whatever.

So that’s a thing I want a little more of in my life. I’ll talk about it as I go. I don’t have much specific purpose for this right now; Contension’s not going to need this stuff for a good long time, but I’ll find something interesting to do with it.

Talk to you soon
mgb