====== Lab 7 ======

This lab is based on a variety of sources, including the material at [[http://...]].
===== Discussion =====
Discussion thread for this lab is here: [[https://piazza.com/class/...]]
===== Goal =====
Follow these steps to complete the lab project:
  - **Create a New Project** Create a new empty project called "Lab7" in the same way you have created new projects for other lab projects. Create the ''Lab7Main.cpp'' file that contains a minimal Ogre application displaying at least one ground plane and an Ogre model. You can use your own application, or start from the minimal application below (file names such as ''resources_d.cfg''/''plugins_d.cfg'' and the ''Sinbad.mesh'' model are placeholders; use whatever matches your own setup): <code cpp>
#include "Ogre\Ogre.h"	// adjust the include path to your Ogre SDK setup

class MyApplication {
private:
	Ogre::SceneManager* _sceneManager;
	Ogre::Root* _root;
	Ogre::Entity* _ogre;
	Ogre::Entity* _ground;
	Ogre::Camera* _camera;

public:

	MyApplication() {
		_sceneManager = NULL;
		_root = NULL;
		_ogre = NULL;
		_ground = NULL;
	}

	~MyApplication() {
		delete _root;
	}

	// Parse the resource config file and register every listed resource location.
	void loadResources() {
		Ogre::ConfigFile cf;
		cf.load("resources_d.cfg");	// adjust to your resource config file name
		Ogre::ConfigFile::SectionIterator sectionIter = cf.getSectionIterator();
		Ogre::String sectionName, typeName, dataName;
		while(sectionIter.hasMoreElements()) {
			sectionName = sectionIter.peekNextKey();
			Ogre::ConfigFile::SettingsMultiMap* settings = sectionIter.getNext();
			Ogre::ConfigFile::SettingsMultiMap::iterator i;
			for(i = settings->begin(); i != settings->end(); ++i) {
				typeName = i->first;
				dataName = i->second;
				Ogre::ResourceGroupManager::getSingleton().addResourceLocation(dataName, typeName, sectionName);
			}
		}
		Ogre::ResourceGroupManager::getSingleton().initialiseAllResourceGroups();
	}

	// Build a simple scene: one model, one ground plane and a directional light.
	void createScene() {
		_ogre = _sceneManager->createEntity("Sinbad.mesh");	// any Ogre model will do
		_sceneManager->getRootSceneNode()->attachObject(_ogre);

		Ogre::Plane plane(Ogre::Vector3::UNIT_Y, -5);
		Ogre::MeshManager::getSingleton().createPlane("plane",
			Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, plane, 1500, 1500, 20, 20, true, 1, 5, 5, Ogre::Vector3::UNIT_Z);
		_ground = _sceneManager->createEntity("GroundEntity", "plane");
		_sceneManager->getRootSceneNode()->createChildSceneNode()->attachObject(_ground);

		// HERE YOU SET THE MATERIALS FOR EACH OBJECT
		// e.g. _ogre->setMaterialName("...");
		// e.g. _ground->setMaterialName("...");

		Ogre::Light* light = _sceneManager->createLight("Light1");
		light->setType(Ogre::Light::LT_DIRECTIONAL);
		light->setDirection(Ogre::Vector3(1, -1, 0));
	}

	int startup() {
		_root = new Ogre::Root("plugins_d.cfg");	// adjust to your plugin config file name
		if(!_root->showConfigDialog()) {
			return -1;
		}

		Ogre::RenderWindow* window = _root->initialise(true, "Lab7");
		_sceneManager = _root->createSceneManager(Ogre::ST_GENERIC);

		_camera = _sceneManager->createCamera("Camera");
		_camera->setPosition(Ogre::Vector3(0, 0, 50));
		_camera->lookAt(Ogre::Vector3(0, 0, 0));
		_camera->setNearClipDistance(5);

		Ogre::Viewport* viewport = window->addViewport(_camera);
		viewport->setBackgroundColour(Ogre::ColourValue(0.0, 0.0, 0.0));
		_camera->setAspectRatio(Ogre::Real(viewport->getActualWidth()) / Ogre::Real(viewport->getActualHeight()));

		loadResources();
		createScene();

		_root->startRendering();
		return 0;
	}
};


int main(void) {
	MyApplication app;
	app.startup();
	return 0;
}
</code>
  - **Fixed Diffuse Color Fragment Shader** The first shader program we create is a fragment shader that simply returns a fixed color for each fragment that gets processed. In Ogre you can write shader programs in any of the major high-level shading languages, but we will be using **Cg**. Create a new Cg shader program file in one of your resource folders and add the following fragment program to it: <code cpp>
float4 main_orange_fp(in float3 TexelPos : TEXCOORD0) : COLOR {
	// Ignore the input and return the same fixed colour (orange) for every fragment.
	return float4(1.0, 0.5, 0.0, 1.0);
}
</code>
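To actually see the effect of this shader, the Cg program also has to be declared in an Ogre material script and referenced from a material that you assign to the model or the ground. Below is a minimal sketch of such a script; the file name ''orange.cg'' and the names ''shader/orange_fp'' and ''custom/Orange'' are assumptions here, so substitute whatever names you chose. <code>
// Declare the Cg fragment program (file and program names are assumed; match your own files).
fragment_program shader/orange_fp cg {
	source orange.cg
	entry_point main_orange_fp
	profiles ps_2_0 arbfp1
}

// A material whose single pass uses the fixed-colour fragment program.
material custom/Orange {
	technique {
		pass {
			fragment_program_ref shader/orange_fp {
			}
		}
	}
}
</code>
You can then apply it with, for example, ''_ogre->setMaterialName("custom/Orange");'' in ''createScene()'' and the model should render in a single flat colour.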
+ | |||
===== Bonus =====
<box green 100%>
  - **Create a render texture** For many post-processing and special-effects shaders, we will need to capture the current scene from some angle and draw it into a texture. This is done by telling a certain camera to draw the scene from its point of view into a texture called a RenderTexture.
  - **Create a member variable in MyApplication.h** to hold the render target, for example ''Ogre::RenderTexture* _renderTexture;'' (the name ''_renderTexture'' is used in the code below).
  - **Create a member function** that assigns a camera to the render texture and initialises it. This function must accept an ''Ogre::Camera*'' so you can choose which camera renders into the texture; the texture size below is an arbitrary choice: <code cpp>
void createRenderTarget(Ogre::Camera* camera) {
	// Call this from startup(), passing whichever camera should render into the texture.
	// Create a manual texture that can be rendered into.
	// "RttTex" is the name we will reference from the material script later.
	Ogre::TexturePtr rtt_texture = Ogre::TextureManager::getSingleton().createManual(
		"RttTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
		Ogre::TEX_TYPE_2D, 512, 512, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET);

	_renderTexture = rtt_texture->getBuffer()->getRenderTarget();
	_renderTexture->addViewport(camera);
	_renderTexture->getViewport(0)->setClearEveryFrame(true);
	_renderTexture->getViewport(0)->setBackgroundColour(Ogre::ColourValue::Black);
	_renderTexture->getViewport(0)->setOverlaysEnabled(false);
} </code>
  - **Now we must create a plane** so that we will be able to see the rendered texture. Add a member variable for it if you want to manipulate it later (e.g. an ''Ogre::SceneNode*'' for the board), and build a simple quad with a ManualObject. The material name ''custom/RttMat'' is an assumption here; it must match the material you create in the next step, and the corner positions are placeholders: <code cpp>
Ogre::ManualObject* manual = _sceneManager->createManualObject("Board");
manual->begin("custom/RttMat", Ogre::RenderOperation::OT_TRIANGLE_LIST);

// Four corners of the board, with texture coordinates covering the whole texture.
manual->position(-40.0, 80.0, 0.0);
manual->textureCoord(0.0, 0.0);

manual->position(-40.0, 0.0, 0.0);
manual->textureCoord(0.0, 1.0);

manual->position(40.0, 0.0, 0.0);
manual->textureCoord(1.0, 1.0);

manual->position(40.0, 80.0, 0.0);
manual->textureCoord(1.0, 0.0);

// Two triangles making up the quad.
manual->index(0);
manual->index(1);
manual->index(2);
manual->index(2);
manual->index(3);
manual->index(0);

manual->end();
Ogre::SceneNode* boardNode = _sceneManager->getRootSceneNode()->createChildSceneNode("BoardNode");
boardNode->attachObject(manual);
boardNode->setPosition(0, 0, -100);	// place the board somewhere visible in your scene
</code>
  - **Now we must create the material** that the board refers to (called ''custom/RttMat'' here; the name is an assumption, but it must match the one used in ''manual->begin()''). It simply displays the render texture. Add it to one of your material scripts: <code>
material custom/RttMat
{
	technique
	{
		pass
		{
			// Leaving the vertex and fragment refs blank
			// will make Ogre use fixed function vertex and fragment programs.
			texture_unit {
				// Notice: this is the name we gave the texture in the createRenderTarget method.
				texture RttTex
			}

		}
	}
} </code>
  - **Let's make something practical with the render target.**
    - Let's add a top-down view to the scene that will be displayed on our board. To do that we create another camera, position it above the scene and make it look down (the camera name and position values below are placeholder choices): <code cpp>
Ogre::Camera* camera2 = _sceneManager->createCamera("TopDownCamera");
camera2->setPosition(Ogre::Vector3(0, 100, 0));
camera2->lookAt(Ogre::Vector3(0, 0, 0));
camera2->setNearClipDistance(5);
</code>
    - Now make sure to send the new camera into the function that creates the render target: <code cpp>createRenderTarget(camera2);</code>
  - **Finally, let's create some water effects** This is not an exhaustive method, but it points you in the general direction of distorting what is under the water :)
    - **Let's start by creating a plane for our water surface.** Make it a bit smaller than the floor and a few units higher, and keep ''_water'' as an ''Ogre::Entity*'' member, since we will need it again below (the plane height and the entity/mesh names are placeholder choices): <code cpp> // Add the water surface
Ogre::Plane waterSurface(Ogre::Vector3::UNIT_Y, 0);
Ogre::MeshManager::getSingleton().createPlane("water", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, waterSurface,
	200, 200, 1, 1, true, 1, 5, 5, Ogre::Vector3::UNIT_Z);

_water = _sceneManager->createEntity("WaterEntity", "water");
_sceneManager->getRootSceneNode()->createChildSceneNode()->attachObject(_water);
_water->setCastShadows(false);	// (assumed) the water surface should not cast shadows
</code>
    - **Water surface** If you think about a water surface, you can imagine it has no actual colour of its own. Seen through our simple goggles it generally has three elements: reflection, some colouring from light scattering inside the water volume, and a distorted image of the bottom, since the water is mostly transparent.
    - **For this exercise we will skip the reflection :)** Let's focus on the bottom. The bottom is really just what is behind the water surface in the general view direction, but slightly distorted. So what we want to do is draw what the camera sees, excluding the water surface itself, and save it in a RenderTarget. To do that we send our normal camera to the createRenderTarget function, but we will need to add the following line to that function (the mask value is an arbitrary choice): <code cpp>_renderTexture->getViewport(0)->setVisibilityMask(0x00000001);</code>
    - **Exclude the water from the RenderTarget** Now we have to give our water surface visibility flags that do not overlap with the viewport mask above, so the render texture skips it (again, the value is an arbitrary choice): <code cpp>_water->setVisibilityFlags(0x00000010);</code>
    - **Now we will have to create our own distortion shader** to draw the distorted bottom through the water surface.
      - Create two files: ''distortShader.cg'' for the shader source and a material script to declare the programs and the material (you can also reuse an existing ''.material'' file).
      - In the file ''distortShader.cg'', create both the vertex and the fragment program: <code cpp>
uniform sampler2D RttTex : TEXUNIT0;
uniform sampler2D offsetMap : TEXUNIT1;
uniform float4x4 mat_modelproj;
uniform float myTime;

// Vertex program input
struct VP_input {
	float4 pos : POSITION;
	float4 uv : TEXCOORD0;
};

// Vertex program output / fragment program input.
struct VP_output {
	float4 pos : POSITION;
	float4 uv : TEXCOORD0;
	float4 uvPos : TEXCOORD1;
};


VP_output main_distort_vp( VP_input p_in ) {
	VP_output output;

	// Transform the current vertex into projection space.
	output.pos = mul(mat_modelproj, p_in.pos);
	output.uv = p_in.uv;

	// Since the fragment shader will not be able to read the position semantic,
	// we must pass the position down to the fragment shader via a texture coordinate.
	// Note: in DirectX 10+ fragment shaders have access to the current pixel's position :)
	output.uvPos = output.pos;

	return output;
}

float4 main_distort_fp( VP_output p_in ) : COLOR {
	// Create a variable to hold the screen space uv coordinates.
	float2 screenUV;

	// The divide by w is only done automatically for the POSITION output, so for our
	// copied coordinates we have to do it explicitly ourselves to get viewport coordinates.
	// (The y flip may or may not be needed, depending on your render system.)
	screenUV = float2(p_in.uvPos.x, -p_in.uvPos.y) / p_in.uvPos.w;
	// Viewport coordinates range from -1 to 1, so we must map them to the range 0..1 to be valid UV coordinates.
	screenUV = ( screenUV + 1.0f ) * 0.5f;

	// Read the current offset from the offset map, scrolling it over time.
	float2 offset = tex2D( offsetMap, p_in.uv.xy + myTime*.01 ).xy;

	// Textures can not store negative values, so we must take our offset vectors
	// and map them from the range (0..1) back to (-1..1).
	offset = (offset - 0.5f) * 2.0;

	// Sample the scene texture at a slightly distorted position and darken it a bit.
	float4 oColor = tex2D( RttTex, screenUV.xy + offset * 0.07 ) * 0.5;

	return oColor;
} </code>
      - And into the material script, add declarations for the two programs and a material that uses them (the names ''shader/distort_vp'', ''shader/distort_fp'' and ''custom/Distort'' are assumptions; just keep them consistent): <code>
vertex_program shader/distort_vp cg {
	source distortShader.cg
	entry_point main_distort_vp
	profiles vs_1_1 arbvp1
	default_params {
		param_named_auto mat_modelproj worldviewproj_matrix
	}
}

fragment_program shader/distort_fp cg {
	source distortShader.cg
	entry_point main_distort_fp
	profiles ps_2_0 arbfp1
	default_params {
		param_named_auto myTime time 1
	}
}

material custom/Distort {
	technique {
		pass {
			vertex_program_ref shader/distort_vp {
			}

			fragment_program_ref shader/distort_fp {
			}

			texture_unit 0 {
				tex_address_mode mirror
				texture RttTex
			}
			texture_unit 1 {
				tex_address_mode mirror
				texture offsetMap.png
			}
		}
	}
} </code>
  - **Then finally we will need to put that material on the water surface** and download the offset map texture referenced by the material (''offsetMap.png'') into one of your resource folders (see the sketch right below).
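Assigning the material to the water is a single call on the entity. A minimal sketch, assuming the ''_water'' member created earlier and the assumed material name ''custom/Distort'' from the script above: <code cpp>
// Apply the distortion material to the water surface.
// "custom/Distort" is the assumed name; use whatever you called the material in your script.
_water->setMaterialName("custom/Distort");
</code>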
</box>
===== When You Are Finished =====
Upload your **commented source files** into Lab7 in MySchool (zip them up if more than one). The lab projects will not be graded, but their completion counts towards your participation grade.