====== Lab 7 - Writing a Shader ======

==== Before You Start ====

  * Download and unzip the [[http://www.ru.is/kennarar/hannes/classes/ve2008/Lab7Assets.zip|Lab 7 Asset File]] into your working directory

==== Writing a Shader ====

  - **The Simplest Shader:**\\ Make sure you can run **testshader.py** and that you can see a teapot on the screen. Its color is being passed into the shader through the **tint** constant. Try different color values for this constant. Then open **myshader.sha** and notice how the value is used in the vertex shader to assign a color to each vertex. Play with the color values inside the vertex shader and see how they affect the final render.
  - **Diffuse Vertex Shading:**\\ Instead of sending a preset color value to all of the vertices, let the vertex shader calculate the color of each vertex from a lighting model. Use the following lighting model for this calculation: **color = 0.2*ambientcolor + diffuse_attenuation*diffusecolor + specular_attenuation*specularcolor**. Instead of **tint**, you now have to send **ambientcolor**, **diffusecolor** and **specularcolor** into the shader. Inside the vertex shader, you then have to calculate **diffuse_attenuation** and **specular_attenuation**, values between 0 and 1 that indicate how much of each color will be seen at that vertex.
    - The **diffuse_attenuation** essentially represents how well the vertex is lit, which depends on the angle between the vertex normal and the direction to the light source. This is the formula (using Cg functions): **diffuse_attenuation = saturate(dot(LightVector, NormalVector))**.
    - You already receive the NormalVector inside the vertex shader, but you have to calculate the LightVector. To do that, you first have to pass the **light** into the shader from Panda. You can pass any NodePath into a shader using the **.setShaderInput(name, nodePath)** method.
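To see what the diffuse formula above computes, here is a minimal Python sketch of that term. Plain tuples stand in for Cg's **float3**, and all vector and position values are made-up examples, not part of the lab assets:

```python
# Sketch of the per-vertex diffuse term from the lighting model above.
# Plain tuples stand in for Cg float3 values; inputs are made-up examples.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    # Cg's saturate: clamp to the [0, 1] range
    return max(0.0, min(1.0, x))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def diffuse_attenuation(light_pos, vertex_pos, normal):
    # LightVector: from the vertex position towards the light position
    light_vec = normalize(tuple(l - p for l, p in zip(light_pos, vertex_pos)))
    return saturate(dot(light_vec, normal))

# Light directly above a vertex whose normal points straight up -> fully lit
print(diffuse_attenuation((0, 0, 5), (0, 0, 0), (0, 0, 1)))   # 1.0
# Light behind the surface -> the dot product is negative, clamped to 0
print(diffuse_attenuation((0, 0, -5), (0, 0, 0), (0, 0, 1)))  # 0.0
```

In the real shader the same three steps happen per vertex, just with Cg's built-in `dot`, `saturate` and `normalize`.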
Once you have passed a NodePath in this way, you get access to various render state attributes of that NodePath, including its model space position: it becomes accessible inside the shader as **uniform float4 mspos_name**, where //name// is the name you gave the shader input (so **mspos_light** if you named it //light//).
    - When you have the model space position of the light source, you can easily calculate the LightVector (i.e. the vector from the vertex position to the light position).
    - Now you should be able to calculate the **diffuse_attenuation**. Set the **specular_attenuation** to 0 for now and test your lighting model so far.
  - **Specular Vertex Shading:**\\ Now calculate the **specular_attenuation** value in the lighting model. The formula is: **specular_attenuation = pow(saturate(dot(ReflectionVector, ViewVector)), 6.0)**. This component rises sharply (because the result is raised to the power of 6) when the reflection of the light lines up with the direction towards the eye. Just like you needed the light position passed in to calculate the LightVector, you'll need to pass in the camera position to calculate the ViewVector. Once you have the ViewVector, you need to calculate the light ReflectionVector with this formula: **R = normalize(2*N*dot(N,L) - L)**. Try to make this specular lighting work in your shader.
  - **Diffuse and Specular Pixel Shading:**\\ So far you have only been calculating color values at the vertices of a model, which can cause rough lighting artifacts (these occur when simple interpolation of the vertex colors is not enough). To prevent these artifacts, move the lighting calculation into the pixel/fragment shader and perform it for every pixel. You do not have to calculate the vectors inside the pixel shader; if you pass them from the vertex shader into the pixel shader using the TEXCOORD semantics (registers), they will be correctly interpolated. Dot products, however, do not interpolate linearly, which is why the final lighting calculations have to take place inside the pixel shader.
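The specular formulas above can be checked with a small Python sketch as well (again with plain tuples standing in for Cg **float3** values; the normal, light and view vectors are made-up examples):

```python
# Sketch of the specular term from the steps above:
#   R = normalize(2*N*dot(N,L) - L)
#   specular_attenuation = pow(saturate(dot(R, V)), 6.0)
# Plain tuples stand in for Cg float3 values; inputs are made-up examples.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def saturate(x):
    return max(0.0, min(1.0, x))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def reflection_vector(normal, light_vec):
    # R = normalize(2*N*dot(N,L) - L)
    n_dot_l = dot(normal, light_vec)
    return normalize(tuple(2 * n * n_dot_l - l for n, l in zip(normal, light_vec)))

def specular_attenuation(normal, light_vec, view_vec):
    r = reflection_vector(normal, light_vec)
    return saturate(dot(r, view_vec)) ** 6.0

# Light straight above, normal up, eye straight above: reflection hits the eye
print(specular_attenuation((0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 1.0
# Eye off to the side: the highlight falls off to nothing
print(specular_attenuation((0, 0, 1), (0, 0, 1), (1, 0, 0)))  # 0.0
```

The power of 6 is what makes the highlight a tight spot instead of a broad glow; try other exponents in your shader to see the effect.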
See if you can now create per-pixel diffuse and specular lighting. Notice how much better it looks than the per-vertex lighting (but also realize that it is more GPU intensive).
  - **Texture:**\\ The **teapot** model doesn't have any texture coordinates, so for this step you'll need to switch to a different model (a sphere and a car are provided). First make sure that your current shader works with the new model. The next step is to replace the **diffusecolor** in your lighting model with a color sampled from a texture. First, load the texture you want into Panda and assign it to the model NodePath (the textures are in Models/Textures). It's a good idea to explicitly create a texture stage (a texture channel) for the texture, although this is not needed if you only want to use a single texture. Example:\\ ''diffusecolormap = loader.loadTexture("./Models/Textures/bricks.png")''\\ ''ts0 = TextureStage('level0')''\\ ''model.setTexture(ts0, diffusecolormap)''\\ You now have access to this texture inside your pixel shader as the **tex_0** color sampler (the new input variable is **uniform sampler2D tex_0 : TEXUNIT0**). A texture is nothing but a lookup table for color values. Now multiply your output color (which already includes your lighting calculations) in the pixel shader with the color value found in **tex_0** at the texture coordinates that are passed into the fragment shader in **l_texcoord0** (remember to receive the texture coordinates from the vertex shader and pass them on). The lookup function is **tex2D(sampler, texcoord)**. You should now see a nicely textured and shaded model. If you are using the **car** model, you should be able to retrieve separate NodePaths to the **body** and **wheels** parts of the model (using **.find()**), and then apply the same shader to both parts, but with a separate set of uniform input values (including the texture) for each.
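Since a texture is just a lookup table for color values, what **tex2D** and the final multiply do can be sketched in plain Python. The tiny 2x2 "texture" and the lit color below are made-up example values, and a real GPU sampler would also filter (interpolate) between texels rather than picking the nearest one:

```python
# Sketch of tex2D + the final color multiply in the pixel shader.
# The 2x2 "texture" and the lit color are made-up example values.

texture = [
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],  # row 0: red, green texels
    [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)],  # row 1: blue, white texels
]

def tex2d(tex, u, v):
    # Nearest-neighbour lookup: map u, v in [0, 1] to a texel index
    rows, cols = len(tex), len(tex[0])
    x = min(int(u * cols), cols - 1)
    y = min(int(v * rows), rows - 1)
    return tex[y][x]

def modulate(lit_color, texel):
    # Component-wise multiply, as in: o_color = lit_color * tex2D(...)
    return tuple(a * b for a, b in zip(lit_color, texel))

lit = (0.5, 0.5, 0.5)  # color after the lighting calculation
print(modulate(lit, tex2d(texture, 0.1, 0.1)))  # red texel -> (0.5, 0.0, 0.0)
```

The shader version is one line, e.g. multiplying the lit color by `tex2D(tex_0, l_texcoord0)`.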
  - **Normal Map:**\\ For this part you need to be using the **sphere** model, and you need to assign the **bricks.png** and **bricks-n.png** textures to two separate texture stages on the model. The latter texture is a normal map (passed in as **uniform sampler2D tex_1 : TEXUNIT1**). To add lighting detail with this normal map, a new coordinate space, called the tangent space, needs to be set up inside the vertex shader. This space is local to each vertex and is defined by the axes formed by the vertex normal, binormal and tangent. Therefore the following automatic varying inputs need to be added to the vertex shader:\\ ''float3 vtx_binormal0 : BINORMAL,''\\ ''float3 vtx_tangent0 : TANGENT,''\\ Now a transformation matrix that can map other vectors into this local tangent space can be created like this:\\ ''float3x3 mat_tangent;''\\ ''mat_tangent[0] = vtx_tangent0;''\\ ''mat_tangent[1] = vtx_binormal0;''\\ ''mat_tangent[2] = vtx_normal;''\\ Once this transformation matrix has been created inside the vertex shader, both the ViewVector and the LightVector should be projected into tangent space through a simple multiplication with it. When these vectors arrive inside the fragment shader, they are already in tangent space, and the third important component of our lighting calculation, the normal, should now be looked up in the normal map accessed through **tex_1**. Note, however, that the texture stores components in the 0..1 range, but we want them in the -1..1 range, so you need to correct the value when you look it up, like this: **tex2D(tex_1, l_texcoord0)*2 - 1**. Since the normal we read from the normal map is already in tangent space (that is how normals are stored in normal maps, and it explains why we needed to transform the other vectors), we can now proceed with the lighting calculations just like we did in model space. See if you can make this work.
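The tangent-space bookkeeping above can be sketched in Python too. The basis vectors and the normal-map texel are made-up example values; the point is the row-matrix multiply (like Cg's **mul(float3x3, float3)**) and the 0..1 to -1..1 unpacking:

```python
# Sketch of the tangent-space steps above, with made-up example values.
# mat_tangent holds the tangent, binormal and normal as rows, so
# multiplying it with a model-space vector expresses that vector in
# tangent space.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mul(mat, vec):
    # Row-matrix times vector, like Cg's mul(float3x3, float3)
    return tuple(dot(row, vec) for row in mat)

def unpack_normal(texel):
    # Normal maps store components in 0..1; remap to -1..1 (tex2D(...)*2 - 1)
    return tuple(c * 2 - 1 for c in texel)

# Made-up per-vertex basis: tangent, binormal, normal (model space)
mat_tangent = [
    (1, 0, 0),  # vtx_tangent0
    (0, 1, 0),  # vtx_binormal0
    (0, 0, 1),  # vtx_normal
]

# A model-space light vector, transformed into tangent space
light_vec_ts = mul(mat_tangent, (0, 0, 1))

# A flat spot in the normal map encodes (0, 0, 1) as the texel (0.5, 0.5, 1.0)
normal_ts = unpack_normal((0.5, 0.5, 1.0))

# Diffuse term, now computed entirely in tangent space
print(max(0.0, min(1.0, dot(normal_ts, light_vec_ts))))  # 1.0
```

Bumpy spots in the normal map encode normals that tilt away from (0.5, 0.5, 1.0), which is where the extra lighting detail comes from.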