In this tutorial I'm going to quickly run through how to use radial blur to fake volumetric lighting, without the need for raymarching, which would be complicated and expensive.
I tried radial blur a while ago as shown in this codesandbox.
This was the precursor to the finished volumetric sandbox.
So what's the premise?
Outline
First off we need two layers in our @react-three/fiber scene: one with the blur applied, and one with everything else in it. We then need a final additive shader pass which combines these two layers.
A word of warning: this technique needs to be used carefully, because rendering a particularly large scene twice is computationally expensive.
With this in mind, if you use this for a large scene you will need some sort of level of detail (LOD) system, where you show low-poly transparent light shafts at a distance and only add things close to the camera to the radial blur layer.
Just food for thought. In general, though, this is a lot cheaper than volumetric raymarching, which would tax any mid-range computer with anything that even comes close to being complicated. I haven't done raymarching in a while, but doing many calculations per fragment on screen doesn't work too well on the web for the average user.
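The LOD idea above can be sketched in plain JavaScript as a simple distance threshold. This is a hypothetical helper, not part of the project's code; the function name and threshold are made up for illustration:

```javascript
// Hypothetical LOD pick: beyond some distance, swap the expensive radial-blur
// layer for a cheap transparent light-shaft mesh. The threshold is made up.
const BLUR_LAYER_MAX_DISTANCE = 50;

function pickLightShaftRepresentation(cameraPosition, objectPosition) {
  const dx = objectPosition.x - cameraPosition.x;
  const dy = objectPosition.y - cameraPosition.y;
  const dz = objectPosition.z - cameraPosition.z;
  const distance = Math.sqrt(dx * dx + dy * dy + dz * dz);

  // Close objects go on the radial-blur layer; far ones get a low-poly
  // transparent shaft that costs almost nothing to draw.
  return distance < BLUR_LAYER_MAX_DISTANCE
    ? "radial-blur-layer"
    : "low-poly-shaft";
}
```

In a real scene you would re-run this check per frame (or use three.js's built-in `THREE.LOD` object) and move meshes between layers accordingly.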
Radial Blur
What is radial blur ?
Radial blur is a type of blur which emanates from a central point and affects the image outwards from there.
The general shader is below.
```glsl
const float exposure = 0.055;
const float decay = .99;
const float density = 0.54;
const float weight = 4.75;

// Inner used values
vec2 deltaTextCoord = vec2(vUv.xy - light_on_screen.xy);
vec2 textCoo = vUv.st;
deltaTextCoord *= 1.0 / float(NUM_SAMPLES) * density;
float illuminationDecay = .10;

vec4 c = vec4(0.0, 0.0, 0.0, 1.0);

for (int i = 0; i < NUM_SAMPLES; i++)
{
    textCoo -= deltaTextCoord;

    textCoo.s = clamp(textCoo.s, 0.0, 1.0);
    textCoo.t = clamp(textCoo.t, 0.0, 1.0);

    vec4 sample1 = texture2D(tDiffuse, textCoo);

    sample1 *= illuminationDecay * weight;

    c += sample1;

    illuminationDecay *= decay;
}

c *= exposure;

c.r = clamp(c.r, 0.0, 1.0);
c.g = clamp(c.g, 0.0, 1.0);
c.b = clamp(c.b, 0.0, 1.0);
```
deltaTextCoord is the difference between the light source and this fragment, in 2D screen space. We then divide it by the number of samples (scaled by density) to get the step between samples. If we didn't do this, every sample would jump by the unaltered difference between the light source and this fragment, which is far too big; dividing deltaTextCoord by the sample count turns the full distance into NUM_SAMPLES small steps marching back towards the light.
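To see why the division matters, here is the same step-size calculation mirrored in plain JavaScript; the light position and fragment UV are made-up values, not taken from the project:

```javascript
// Hypothetical light position and fragment UV, both in 0..1 screen space.
const NUM_SAMPLES = 100;
const density = 0.54;

const lightOnScreen = [0.5, 0.5];
const fragmentUv = [0.9, 0.1];

// Full vector from the light to this fragment...
const delta = [
  fragmentUv[0] - lightOnScreen[0],
  fragmentUv[1] - lightOnScreen[1],
];

// ...scaled down so NUM_SAMPLES steps (spread by density) walk back towards
// the light instead of overshooting it on the very first step.
const step = delta.map((d) => (d * density) / NUM_SAMPLES);
```

Each component of `step` ends up being a tiny fraction of the full light-to-fragment distance, which is what keeps the samples close together and the blur smooth.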
c is the color we accumulate over the texture samples: the blurred contribution of each fragment between the origin and the fragment we are on.
We then loop over the sample count and subtract the step from this fragment's UVs, so we can blend this fragment's color with the colors of the surrounding fragments.
Although we don't have state in shaders, we can sample the texture at neighbouring coordinates to read the color that belongs to a neighbouring fragment.
The effect should be less pronounced the further you get from the center, so we want the color to decay: we multiply each sample by illuminationDecay, and then reduce illuminationDecay by 1% on each loop iteration.
Exposure is basically how bright the effect is, and the last thing we do is clamp the color's r/g/b components to 0.0–1.0, so we don't get any out-of-range values.
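As a sanity check, the accumulation-and-decay logic from the shader can be mirrored in plain JavaScript on a single channel, stubbing the texture lookup with a constant fully-lit texel (the sample count is an assumption; the other constants match the shader above):

```javascript
// Mirrors the shader's accumulation loop on one channel, with texture2D(...)
// stubbed as 1.0 (a fully lit texel). NUM_SAMPLES of 100 is an assumption.
const NUM_SAMPLES = 100;
const exposure = 0.055;
const decay = 0.99;
const weight = 4.75;

let illuminationDecay = 0.1; // matches the shader's starting value
let c = 0.0;

for (let i = 0; i < NUM_SAMPLES; i++) {
  const sample = 1.0 * illuminationDecay * weight; // stubbed texture sample
  c += sample;
  illuminationDecay *= decay; // each later sample contributes ~1% less
}

c *= exposure;
const unclamped = c;
c = Math.min(Math.max(c, 0.0), 1.0); // clamp like the shader does
```

With a constant white input the accumulated value overshoots 1.0 and the clamp saturates it, which is why exposure, decay and weight need tuning against your actual scene rather than being treated as universal constants.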
R3F / Three layers
The crucial part of the whole project is… layers. This is a common idea in games: the camera has layers, so you can apply effects to certain parts of a scene and then merge the layers with an additive shader.
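Under the hood a three.js `Layers` object is just a 32-bit mask. This minimal re-implementation (a sketch of the idea, not the real three.js class) shows how a camera set to layer 1 culls an object that lives on layer 2:

```javascript
// Minimal sketch of three.js-style layers: each layer is one bit in a mask,
// and an object is rendered only when its mask shares a bit with the camera's.
class Layers {
  constructor() {
    this.mask = 1; // layer 0 enabled by default, as in three.js
  }
  set(layer) {
    this.mask = 1 << layer; // replace the mask with a single layer
  }
  enable(layer) {
    this.mask |= 1 << layer; // add a layer without clearing the others
  }
  test(other) {
    return (this.mask & other.mask) !== 0;
  }
}

const cameraLayers = new Layers();
const meshLayers = new Layers();

meshLayers.set(2); // put the "windows" mesh on layer 2 only

cameraLayers.set(1); // camera renders layer 1: the mesh is culled
const visibleOnSceneLayer = cameraLayers.test(meshLayers);

cameraLayers.set(2); // switch the camera to layer 2: the mesh is rendered
const visibleOnBlurLayer = cameraLayers.test(meshLayers);
```

This is exactly why flipping `camera.layers.set(...)` between renders, as in the frame loop below, gives each composer a different slice of the same scene.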
```js
useFrame((state) => {
  state.gl.setRenderTarget(target);
  state.gl.render(scene, camera);
  camera.layers.set(2);
  radialBlur.uniforms["depthTexture"].value = target.depthTexture;
  radialBlur.uniforms["uResolution"].value = new THREE.Vector2(
    size.width,
    size.height
  );
  radialBlur.uniforms["camPos"].value = new THREE.Vector3().copy(
    camera.position
  );
  radialBlur.uniforms["cameraWorldMatrix"].value = camera.matrixWorld;
  radialBlur.uniforms[
    "cameraProjectionMatrixInverse"
  ].value = new THREE.Matrix4().copy(camera.projectionMatrix).invert();

  radialBlur.uniforms["windowPos"].value = new THREE.Vector3(-2, 0, 0);
  radialBlurComposer.render();
  camera.layers.set(1);
  finalComposer.render();
}, 1);
```
We set the render target for depth, render the scene, set the camera to layer 2 (the layer with the windows in it), and render the radial blur pass. Then we set the camera to layer 1, which has the rest of the scene in it, and pass the final composer an occlusion render target containing the radial-blurred scene. In this final composer we combine the two scenes.
Chaining effect composers
We use the first effect composer to render the scene and apply the radial blur; the occlusion render target stores the radial-blurred image of the scene. This then gets passed as a uniform to the second and final effect composer, where it is combined with tDiffuse, a standard effect-composer uniform holding a snapshot of what is being rendered each frame.
```js
const [radialBlurComposer, finalComposer, radialBlur] = useMemo(() => {
  const renderScene = new RenderPass(scene, camera);

  const occlusionRenderTarget = new THREE.WebGLRenderTarget(
    window.innerWidth * 0.5,
    window.innerHeight * 0.5
  );

  const radialBlurComposer = new EffectComposer(gl, occlusionRenderTarget);
  radialBlurComposer.renderToScreen = false;
  radialBlurComposer.addPass(renderScene);

  const radialBlur = new ShaderPass(RadialBlurShader);
  radialBlur.needsSwap = false;
  radialBlur.uniforms.radialCenter.value = new THREE.Vector2(0.5, 0.5);
  radialBlur.uniforms.intensity.value = 10;
  radialBlur.uniforms.iterations.value = 300;
  radialBlurComposer.addPass(radialBlur);

  const finalComposer = new EffectComposer(gl);
  finalComposer.addPass(new RenderPass(scene, camera));

  const additivePass = new ShaderPass(CombiningShader());
  additivePass.uniforms.tAdd.value = occlusionRenderTarget.texture;
  additivePass.needsSwap = false;
  finalComposer.addPass(additivePass);

  return [radialBlurComposer, finalComposer, radialBlur, additivePass];
}, []);
```
Additive shader pass
The additive shader pass is the one which combines the two rendered layers into one final texture, which is then drawn to the screen for the user.
```js
export default () => ({
  uniforms: {
    tDiffuse: { value: null },
    tAdd: { value: null },
    depthTexture: { value: null },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tDiffuse;
    uniform sampler2D depthTexture;
    uniform sampler2D tAdd;
    void main()
    {
      gl_FragColor = texture2D(tDiffuse, vUv) + (texture2D(tAdd, vUv));
    }
  `,
});
```
Down sides to this method
The downside to doing something like this is that it's complicated and requires expert knowledge to maintain, and if you get too close to the effect it currently shows some artefacts.