
Paint By Monsters DevLog 5 – Shader Experimentation

It’s been entirely too long since I was last able to talk about the development on Paint By Monsters, but with the Conceptualization Funding sorted and a partnership forged with the incomparable Stellar Boar Productions, I’m finally able to put a little of the old grey goo to work on matters technical and creative.

So: Shaders. This goes back to the thread that was originally pulled when I started with the Brush Stroke feature demo. There are lots of ways to paint stuff in Unity, but I’ve been meaning to take a deep dive (ok, a shallow dive) into shaders for a while now, and this seemed like a good opportunity.

If you want to jump directly to the shader itself, it’s here:
https://www.shadertoy.com/view/Dt2XWG

Otherwise, please, read on.

The Joys of Shadertoy

If you’re not already familiar with Shadertoy, it’s a website that allows you to create complex shaders within your browser, which is just the kind of nonsense that leads to a Destroy All Software talk. But it’s also just kind of awesome, since it lets you mess with shaders on a tight loop of try-fail-swear-fix-enjoy.

I looked at a bunch of different shaders, mostly having to do with mouse trails and such, but when I started iterating I took opexu’s Mouse Trails Soft shader as my jumping-off point. Obviously it doesn’t really look quite like a paint effect, but it leaves a persistent trail of colour based on mouse input, which is about as good as it gets.

I’ll admit I’ve forgotten a lot of what I learned back when I was experimenting with the Kinect. I’d half-forgotten I even wrote a post about a shader with inputs. As a result, I had to relearn some things.

Shadertoy uses a different representation for shaders than Unity, because nobody who implements a shader architecture can seem to leave well enough alone. If you’re not familiar with one or the other, I’d encourage you to read the Shadertoy and Unity documentation, but the short version is that in addition to the Image code (which determines the onscreen color of each fragment), Shadertoy allows you to use up to 4 buffers, and each buffer (plus Image) can accept up to 4 inputs (iChannel0-3).

Mouse Trails Soft uses one buffer, which is where it holds both the image so far and the last known mouse pointer position. The latter is at once clever and wasteful, but in this specific case it makes sense.

BufferA takes BufferA (ie itself) as input on iChannel0. It took me a while to suss out the relevant details here, so don’t worry if the following looks like gibberish right now.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{   
    vec2 uv = fragCoord/iResolution.xy;
    vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
    vec2 uvQuad = uv*aspect;
    
    vec2 mP = uvQuad - iMouse.xy/iResolution.xy*aspect;
    float d = 1.-length(mP);    
    
    vec4 bufA = texture(iChannel0, uv);
    vec2 mPN = bufA.zw;
    vec2 vel = min(max(abs(mPN - mP), vec2(0.001)), vec2(0.05));
    
    d = smoothstep(0.85,1.3,d+length(vel))/0.4;
    vec2 dot = vec2(d);

    dot = max(dot, bufA.xy);    
    vec4 col = vec4(dot.x, dot.y, mP.x, mP.y);
    
    if(iFrame == 0) {
        col = vec4(.0,.0, mP.x, mP.y);
    }
    
    fragColor = vec4(col);
}

I went through this code slowly, teasing out the meaning of each line.

  1. OK, so uv is the texture coordinate normalized by resolution
  2. Aspect is the aspect ratio
  3. uvQuad is the aspect ratio-normalized coordinate
  4. Wait, but we’re adjusting by the mouse coordinate
  5. Ok, uv is the coordinate of the current fragment.
  6. So mP is a resolution+aspect-normalized vector from the mouse to the current fragment
  7. And d is initialized to…1 minus that length? So it grows as the fragment gets closer to the mouse.
  8. bufA is the existing 4-component color at coordinate uv
  9. mPN is the…zw value of that value? Which is actually the x and y value from last frame?
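The walk-through above can be condensed into a plain Python sketch of the same per-fragment logic – the function names and tuple-based vectors are mine, not Shadertoy’s, and smoothstep is reimplemented by hand:

```python
# Plain-Python sketch of what opexu's buffer computes per fragment.
# Names (resolution, mouse, frag_coord, prev_sample) are mine, not Shadertoy built-ins.

def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth in between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def buffer_a_fragment(frag_coord, mouse, prev_sample, resolution, frame):
    """prev_sample is the (r, g, b, a) this fragment held last frame."""
    rx, ry = resolution
    aspect = (rx / ry, 1.0)
    uv = (frag_coord[0] / rx, frag_coord[1] / ry)
    # Aspect-corrected vector from the mouse to this fragment (mP in the shader)
    mp = (uv[0] * aspect[0] - mouse[0] / rx * aspect[0],
          uv[1] * aspect[1] - mouse[1] / ry * aspect[1])
    d = 1.0 - (mp[0] ** 2 + mp[1] ** 2) ** 0.5
    # The previous frame's mP lives in the z/w (blue/alpha) components
    vel = (abs(prev_sample[2] - mp[0]), abs(prev_sample[3] - mp[1]))
    vel = tuple(min(max(v, 0.001), 0.05) for v in vel)
    speed = (vel[0] ** 2 + vel[1] ** 2) ** 0.5
    d = smoothstep(0.85, 1.3, d + speed) / 0.4
    # Keep the brightest value seen so far: this is what makes the trail persist
    dot_val = (max(d, prev_sample[0]), max(d, prev_sample[1]))
    if frame == 0:
        return (0.0, 0.0, mp[0], mp[1])
    return (dot_val[0], dot_val[1], mp[0], mp[1])
```

The `max` against the previous sample is the persistence trick: once a fragment has been lit, later frames can only keep or brighten it.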

For full clarity, the Image code is below, but it’s mostly just repeated setup. This is the point where I realized that I don’t need to understand every detail. Maybe you can tell me why that col *= uv + 1-uv is there – as far as I can tell, uv + (1 - uv) is just 1, making it a no-op.

void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 uv = fragCoord/iResolution.xy;
    vec2 aspect = vec2(iResolution.x/iResolution.y, 1.);
    vec2 uvQuad = uv * aspect;
    vec4 bufA = texture(iChannel0, uv);

    vec4 col = bufA;
    col.xy *= (uv.xy+(1.-uv.xy));

    if(iFrame == 0) {
        col = vec4(.0,.0,.0,1.);
    }

    fragColor = vec4(col.xy, .0, 1.);
}

With my newfound (albeit limited) understanding in hand, it was time to start working on my own shader. I figured I’d learned enough to start from scratch, since the soft-circle shape was pretty far from where I wanted my code to end up.

First things first: 3 dimensional math!

I want to draw a box that is aligned with the direction my mouse is moving in. The parallel portion of that is just the width of the box multiplied by the mouse’s velocity unit vector, which I can get by normalizing the velocity vector. Assuming, that is, that I can get the velocity vector. Which means I need not just the current mouse position – available as iMouse – but the previous position as well.
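Normalizing also needs one guard: if the mouse hasn’t moved, the velocity has zero length and can’t be normalized. A small Python sketch of the idea (the helper name is mine):

```python
import math

def velocity_unit(prev_pos, cur_pos):
    """Unit direction of mouse movement, or (0, 0) when the mouse is still."""
    vx, vy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    length = math.hypot(vx, vy)
    if length == 0.0:
        return (0.0, 0.0)   # no movement this frame; nothing to orient the box by
    return (vx / length, vy / length)
```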

Luckily, that’s exactly what opexu’s trick is for. Since opexu stores the previous mouse position in the third and fourth color components (z and w), however, it limits which components are available for actual color. Not having a blue component seemed like a pretty big issue when Paint By Monsters is so heavily based on painting, so I decided to dedicate a buffer to tracking current and previous mouse coordinates.

And so, Buffer A was added. Buffer A takes itself as input and for each execution it puts the last mouse position into the R & G components, and the new position into the B & A components. On the first frame, it loads everything with position 0, which, as it turns out, can be a bad choice. More on that in a minute.

Buffer A uses the code below to update itself, reading its own previous contents from iChannel0.

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    if(iFrame == 0) {
        // No history yet: zero out both the previous and current positions
        fragColor = vec4(0.0);
    } else {
        vec2 uv = fragCoord / iResolution.xy;
        // zw held the mouse position as of last frame; shift it into xy
        vec2 m1 = texture(iChannel0, uv).zw;
        fragColor = vec4(m1, iMouse.xy);
    }
}
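In plain Python, the frame-to-frame shuffle looks something like this (the function and variable names are hypothetical, not part of the shader):

```python
# Sketch of how the self-referencing Buffer A behaves over successive frames:
# each frame, the "current" pair (zw) shifts into the "previous" slot (xy).

def update_mouse_buffer(prev_value, mouse_xy, frame):
    """prev_value is the (r, g, b, a) stored last frame; mouse_xy is iMouse.xy."""
    if frame == 0:
        return (0.0, 0.0, 0.0, 0.0)
    # last frame's zw (the then-current mouse) becomes this frame's xy
    return (prev_value[2], prev_value[3], mouse_xy[0], mouse_xy[1])

buf = (0.0, 0.0, 0.0, 0.0)
for frame, mouse in enumerate([(0, 0), (10, 5), (12, 8)]):
    buf = update_mouse_buffer(buf, mouse, frame)
# buf now holds previous=(10, 5) in xy and current=(12, 8) in zw
```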

The next thing I needed was my persistent graphic. Again, opexu’s approach seems fine here – I can use a self-referencing buffer that just holds all of the fragments painted so far.

Thus Buffer B was added. Buffer B, as it turns out, is where the key goodness happens. Since I’m reading color from it, it needs to know about newly-painted fragments. Which means I need to paint into it, rather than in Image.

I’ll still need the mouse’s velocity vector, so I feed Buffer A into Buffer B’s iChannel0 input.

vec4 bufM = texture(iChannel0, uv);

I can then calculate my velocity vector.

vec2 mouse_move_vec = bufM.zw - bufM.xy;

As mentioned previously, I wanted a simple box-shaped brush for this iteration on the shader. The vector “width” of the box can be represented as w * vb, where w is a scalar equal to half the full width of the box and vb is the mouse’s velocity unit vector (velocity_normalized below). This, by itself, defines a narrow, infinitely tall box. We can calculate whether a particular fragment falls within that box by projecting it onto the velocity vector.

The quantity of interest, in this case, is the absolute distance – measured along the box’s short axis, which is parallel to the velocity – from the fragment coordinate to the centre of the box. We first need to calculate the vector between our fragment and any point already known to lie within the box – v_mouse1, for example.

v_frag = fragCoord - v_mouse1;

If we take the dot product of this vector with a unit vector that’s parallel to the short axis of the box (aka velocity_normalized) – we can determine the parallel distance from the original point to the fragment coordinate.

d_parallel = dot(v_frag, velocity_normalized);

If this distance is less than w, the fragment lies within our infinitely tall box.

We can multiply this length by velocity_normalized to get the vector representation of the parallel displacement, which will come in handy later.

v_parallel = d_parallel * velocity_normalized;
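A quick worked example of the projection step, sketched in Python (the names are mine):

```python
# Project a fragment-relative vector onto a unit direction: the dot product
# gives the signed parallel distance, and scaling the direction by it gives
# the parallel component as a vector.

def project_onto(v, unit_dir):
    d = v[0] * unit_dir[0] + v[1] * unit_dir[1]        # d_parallel (dot product)
    return d, (d * unit_dir[0], d * unit_dir[1])       # scalar + v_parallel

# Fragment sits 3 units "along" the velocity and 4 units off to the side
d_parallel, v_parallel = project_onto((3.0, 4.0), (1.0, 0.0))
```

With a half-width of, say, 10, this fragment passes the width test, since |3.0| < 10.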

The next step is to constrain our box’s height as well. For this we need the perpendicular distance from one end of the velocity vector to our fragment. If we can get a vector that is perpendicular to velocity but still in the plane of our canvas, we’re good.

The simple way to get a vector that is perpendicular to another vector is to use the cross product, represented in GLSL as the cross() function. If you’re just getting started with vector math, this can be a stumbling block – we only have the velocity vector, right?

But the thing is, we also have a 2d plane – the canvas – and a plane can be represented by a 3-dimensional vector that is perpendicular to its surface. Fragment coordinates use x and y, which means (by convention) our 3rd dimension is z.

We can take the cross product of our velocity with a unit z vector. One catch: GLSL’s cross() only operates on vec3s, so our 2d vectors need to be padded with a zero z component.

vec3 v_perp = cross(vec3(velocity_normalized, 0.), vec3(0., 0., 1.));

However, because we calculated v_frag and its parallel component, v_parallel, there’s another, less computationally demanding way to do this.

Frame of reference transformations are beyond the scope of this article, but in essence they tell us that any vector v_original can be split into perpendicular components v_parallel and v_perpendicular, and by definition those components sum to the original vector.

v_parallel + v_perpendicular = v_original

This equation, however, can be rearranged to yield the perpendicular component.

v_perpendicular = v_original - v_parallel;

In our case, we already have v_frag and v_parallel.

v_perp = v_frag - v_parallel;
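To convince yourself the two approaches agree, here’s a small Python sketch comparing them (the helper names are mine). Note the subtlety: the subtraction method yields the actual perpendicular component, while the cross product only yields a perpendicular direction.

```python
def perp_by_cross(v):
    """2D analogue of cross((vx, vy, 0), (0, 0, 1)): rotate v by -90 degrees."""
    # cross((vx, vy, 0), (0, 0, 1)) = (vy, -vx, 0)
    return (v[1], -v[0])

def perp_by_subtraction(v_frag, unit_dir):
    """Perpendicular component of v_frag relative to unit_dir."""
    d = v_frag[0] * unit_dir[0] + v_frag[1] * unit_dir[1]
    v_parallel = (d * unit_dir[0], d * unit_dir[1])
    return (v_frag[0] - v_parallel[0], v_frag[1] - v_parallel[1])

# Velocity pointing along +x; fragment offset (3, 4):
# the perpendicular component is (0, 4), which lies along the cross direction.
```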

With both the parallel and perpendicular components in hand, we can fully constrain our fragment with respect to our box.

//Initialize with previously painted fragment color
fragColor = texture(iChannel1, uv);

// Box is 2*20 high, 2*10 wide
if(length(v_perp) < 20. && length(v_parallel) < 10.) {
  // paint it red
  fragColor = vec4(1.0, 0.0, 0.0, 0.0);
}

This shader will set fragColor to red if and only if the current fragment lies within the bounds of a box of height 40 and width 20 with its shorter axis aligned with the mouse’s direction of movement. Since we’re feeding Buffer B back into itself, we initialize our fragment from the previous value of fragColor, so even after the “brush” moves on, painted fragments remain painted.
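Pulling the whole test together, here’s the brush predicate as a standalone Python sketch. The function name, the pixel-space assumption, and the half_w/half_h parameters (the 10 and 20 above) are mine:

```python
import math

def in_brush_box(frag, mouse_prev, mouse_cur, half_w=10.0, half_h=20.0):
    """True if frag lies in a box centred on mouse_cur, with its short axis
    (width 2*half_w) along the direction of mouse movement and its long axis
    (height 2*half_h) perpendicular to it."""
    vx, vy = mouse_cur[0] - mouse_prev[0], mouse_cur[1] - mouse_prev[1]
    length = math.hypot(vx, vy)
    if length == 0.0:
        return False                      # no velocity, no box orientation
    ux, uy = vx / length, vy / length     # velocity_normalized
    fx, fy = frag[0] - mouse_cur[0], frag[1] - mouse_cur[1]   # v_frag
    d_par = fx * ux + fy * uy             # signed distance along the velocity
    px, py = fx - d_par * ux, fy - d_par * uy                 # v_perp
    return abs(d_par) < half_w and math.hypot(px, py) < half_h
```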


Further Reading

I looked up some stuff about simulating oil paint for real, and maybe at some point I’ll put it to use, but given it will eventually get rendered down to pixel art, maybe not.

Either way, the paper is pretty interesting. Finite Element Analysis with hybrid physical models is not something I’ve seen all that often.

I also started looking up Unity videos to try and get my head back in the shaders + Unity headspace, and I ran across this video by Code Monkey, where he does something very similar to what I’ve done above, but uses C# code and MonoBehaviours instead of shaders and trickses.


Featured Image

“Kaleidoscope VI” by fdecomite is licensed under CC BY 2.0.