
GLSL and Shaders

Rick
March 21st, 2021 · 4 min read

GLSL syntax

Here is a great website going through the GLSL syntax. Why is GLSL important? Well, three.js uses GLSL as the language for writing shaders, so if you are interested in effects and shaders and you use three.js, then learning GLSL is essential.

Everything in GLSL is strictly typed:

vec3 color = vec3(1.0, 1.0, 1.0);

bool isWhite = true;

mat4 projMat = projectionMatrix;

The four most important things (imo) to understand are vectors, matrices, textures and how to pass data to the shaders or between shaders. A vector is a point in space, i.e. a vec3 is a vector with 3 components (x/y/z or rgb) and similarly a vec4 is a vector with 4 components (x/y/z/w or rgba). Matrices are used all over maths, and one of the most important things to realise is they can transform coordinates from one coordinate system to another (explained below).

Vectors

You can do various things with vectors which might not seem so intuitive:

  • Vector composition -

    Defining one vector inside another like so:
vec3 color = vec3(1.0, 1.0, 1.0);
// you can also do vec3(1.0), and it will
// automatically fill in all the components
// with 1.0.
vec4 finalColor = vec4(color, 1.0);
  • Swizzling -
    Can be useful in certain circumstances.
vec4 bikeColor = vec4(0.2, 0.5, 1.0, 1.0);

vec3 color = bikeColor.xxy;
// outputting vec3(0.2, 0.2, 0.5).
  • Operations -
    Vector addition, subtraction, multiplication and division (all performed component-wise):

vec3 color = vec3(0.1, 0.9, 1.0);

vec3 finalColor = color + color;
// or..
finalColor = color - color;
// or..
finalColor = color * color;
// or..
finalColor = color / color;

Matrices

There are so many great resources on matrices, so all I'm going to say is Google is your friend 😄 Suffice it to say, the most important thing (imo) you will do with them is transform coordinates from one coordinate system to the next!

void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
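As a small illustration of a matrix transforming coordinates, here is a sketch of a hand-built 2D rotation in a vertex shader. The `uAngle` uniform is an assumption for illustration, not something three.js provides:

```glsl
uniform float uAngle;

void main() {
  // build a 2x2 rotation matrix by hand
  mat2 rotation = mat2(
    cos(uAngle), -sin(uAngle),
    sin(uAngle),  cos(uAngle)
  );

  // rotate the vertex in the xy plane before the
  // usual model-view / projection transforms
  vec2 rotated = rotation * position.xy;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(rotated, position.z, 1.0);
}
```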

Textures

Textures are arrays of data, and we map each block of texture data to a fragment or screen pixel. Textures are declared in shaders like so:

uniform sampler2D tDiffuse;

and used like so:

uniform vec2 uResolution;
uniform sampler2D tDiffuse;

void main () {
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  vec3 color = texture(tDiffuse, uv).rgb;

  gl_FragColor = vec4(color, 1.0);
}

uv coordinates are what we use for texture lookups, as seen above. You will probably want to take the screen resolution into account when working with uvs or gl_FragCoord.
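To make that concrete, here is a minimal sketch of resolution-aware uvs, using the same `uResolution` uniform as above and correcting for the aspect ratio so shapes don't stretch:

```glsl
uniform vec2 uResolution;

void main() {
  // normalise window coordinates to the 0-1 range
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  // correct for a non-square viewport so circles stay circular
  uv.x *= uResolution.x / uResolution.y;

  // draw a simple circle; without the correction above
  // it would render as an ellipse
  float circle = step(distance(uv, vec2(0.5)), 0.25);

  gl_FragColor = vec4(vec3(circle), 1.0);
}
```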

Passing data to the shader and between shaders

You can pass data to a shader via either a uniform or an attribute, and if you want to pass a variable from the vertex shader to the fragment shader, you define a varying:

// vertex shader
uniform vec4 halfSquare;
varying vec4 middle;
varying vec4 pos;

void main () {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

  middle = projectionMatrix * modelViewMatrix * halfSquare;

  pos = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// fragment shader
varying vec4 pos;
varying vec4 middle;

void main () {
  // vectors can't be compared with > directly,
  // so compare a single component
  if (pos.y > middle.y) {
    gl_FragColor = vec4(1.0);
  } else {
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
  }
}
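Attributes work similarly on the vertex-shader side. Here is a sketch with a hypothetical per-vertex attribute, `aOffset` (an assumption for illustration; in three.js you would supply it from JavaScript via BufferGeometry.setAttribute):

```glsl
// vertex shader
// hypothetical per-vertex attribute, one value per vertex
attribute vec3 aOffset;

void main() {
  // displace each vertex by its own offset
  vec3 displaced = position + aOffset;

  gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}
```

Note that, unlike uniforms, attributes hold a different value for every vertex and are only readable in the vertex shader.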

Shaders

So what is a shader? A very general question! But in its basic form, it is a unit of code which runs on the GPU rather than the CPU. This code tells us the colour of every pixel on the screen. An important note is that this block of code runs every frame, so anywhere from 30 to 60 times per second on an average computer. If you come from JavaScript like me, you might be used to state and data shared across many components... but shader code runs in a highly parallel environment, meaning there is no shared state 😢, at least not within the two shader types three.js uses: vertex shaders and fragment shaders.

Vertex Shaders

Vertex shaders provide positional context to every vertex; they essentially place each vertex at a specific position on the screen. Vertex shaders work on vertices, vertices make up a mesh, and a mesh provides the shape of the 3D model.

How do we get the shape from 3D coordinates to screen coordinates I hear you yelling !?

There are many coordinate systems in 3D graphics, each with their own purpose. The end goal is to get screen coordinates. Why is it important to be aware of this? Well, imagine you have world coordinates running from 0 to a large number (say 0–100), while in other coordinate systems the range is 0–1 or -1 to 1! The reason I'm mentioning this is because, say you want to show half a square red and the other half blue: if the square's height is 10, you might pass the world coordinate 5.0 to the shader and do < 5.0 = red and > 5.0 = blue. If you don't take the coordinate systems into account, it just won't work, because the value will be clipped and max out at the very top or bottom of the coordinate range.

So how do we take coordinate systems into account in shaders using three.js? Matrices! Matrices are used to transform coordinates from one system to another. In general in three.js, you would use the built-in matrices in the vertex shader:

void main() {
  // built-in matrices don't have to be defined in three.js
  // shaders; they are prepended to your shader and you
  // can use them like so
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

A very good resource to understand these coordinate systems and matrices is here, https://learnopengl.com/Getting-started/Coordinate-Systems.

I won't explain it any better than that website, so I'll leave it to you to read it and get a better grasp of the concepts.

A few key points on the vertex shader 😄

  • you have to have a main function defined 
  • you have to define a gl_Position 
  • you should transform coordinates into clip space ready for the final division operation (which happens automatically after each vertex shader is run).
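To make the last point concrete, here is a purely illustrative sketch of what happens after the vertex shader runs; the GPU performs this perspective divide for you automatically, so you never write it yourself:

```glsl
// clip-space position, as output via gl_Position
vec4 clip = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

// the GPU then divides by w, producing normalised
// device coordinates in the -1 to 1 range
vec3 ndc = clip.xyz / clip.w;
```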

Fragment Shaders 

What the hell is our second shader, the fragment shader? Well, it provides the colour of each screen pixel, or fragment. Below is an example of a fragment shader:

uniform vec3 color;
uniform vec2 uResolution;
uniform sampler2D tDiffuse;

void main () {

  // gl_FragCoord is a built-in variable holding the
  // window-space coordinates of the fragment.
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  // texture is a built-in function used for
  // texture lookups.
  vec3 texColor = texture(tDiffuse, uv).xyz;

  // This is linear interpolation between two vectors; the
  // third argument is the amount you mix: 0.0 returns the
  // first value, 1.0 returns the second.
  vec3 mixture = mix(texColor, color, 1.0);

  gl_FragColor = vec4(mixture, 1.0);
}

Here we can do all sorts of things to manipulate colours, which I will go into in the following articles. Each of the red (r), green (g) and blue (b) channels in GLSL ranges from 0.0 (none) to 1.0 (full intensity), and opacity (a) ranges from 0.0 to 1.0 as well.
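As a small taste of that colour manipulation, here is a minimal sketch that converts a texture lookup to greyscale; the channel weights are the standard Rec. 709 luma coefficients:

```glsl
uniform vec2 uResolution;
uniform sampler2D tDiffuse;

void main() {
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  vec3 color = texture(tDiffuse, uv).rgb;

  // a weighted sum of the channels approximates
  // perceived brightness
  float grey = dot(color, vec3(0.2126, 0.7152, 0.0722));

  gl_FragColor = vec4(vec3(grey), 1.0);
}
```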

A key thing to remember is that you can access the components of a vector using either xyzw or rgba; both of the below are valid:

vec4 colour = color.rgba;
vec4 colour = color.xyzw;

vec4 pos = position.xyzw;
vec4 pos = position.rgba;

I would probably stick to rgba when referring to colours and xyzw when referring to positions and coordinates.

A few key points here:

  • you have to have a single main function 
  • you must define a vec4 gl_FragColor

Shader Materials and Post Processing

The only thing I'm going to say here is that if you want to apply an effect to a shape like a square, you would use three.js's ShaderMaterial. If you want an effect around an object, you would use techniques involving post-processing. I promise I will go a lot more in depth into both of these in the next few articles 😎

Final Thoughts: 

I hope this hasn't been too much of a mind fuck 😅 😄 We have gone through GLSL syntax and some essentials of the vertex and fragment shaders, and briefly discussed ShaderMaterials and post-processing. I hope you have found this mildly useful 😎

The next few articles are going to take a deep dive into some more complex topics like SDF functions, ray-marching and clouds!

Stay tuned for some (hopefully) really cool articles on modelling weather, weird effects, transitioning between models and many more interesting effects.

If you're interested in chatting with me or sharing work, contact me on LinkedIn!

For now stay safe and watch out for more posts 🙌

© 2021–2024 theFrontDev