Lab 2: Shader Programs & 3D Transformations; GLFW Key & Mouse Events

Lab 2 Objectives

In today's lab, we will discuss the following:
  • Indexed Drawing (beginning of lab; the code is below)
  • 3D Transformations
  • Shader Programs
  • GLFW Events
  • Rendering a cube
  • Tips for project 1


Transformations

Computer graphics would be boring and somewhat useless if all we could do was position objects at the origin. We often want to move and position objects in 3D space. We can do this by using three basic transformations: translation, scaling, and rotation.

In mathematics and in code, we can represent a 3D transformation with a 4x4 matrix. We can create these matrices by using the cs237 math library, which is described in the next section.

Below is an example of translating a 2D square from the origin to another location by multiplying each vertex position by a translation matrix T.

Note: Notice that the matrix is multiplied on the left of each vertex position (T * v1). In this course we will be using column-major order (the default convention in OpenGL).
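
For example, translating one corner of the square, say v1 = (0, 0, 0), by a hypothetical offset of (3, 2, 0) looks like this (the fourth component of each vector is explained in the next section):

    \[
    T\,v_1 =
    \begin{pmatrix}
      1 & 0 & 0 & 3 \\
      0 & 1 & 0 & 2 \\
      0 & 0 & 1 & 0 \\
      0 & 0 & 0 & 1
    \end{pmatrix}
    \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}
    =
    \begin{pmatrix} 3 \\ 2 \\ 0 \\ 1 \end{pmatrix}
    \]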

Points and Directional Vectors

In order to multiply points and directional vectors by 4x4 transformation matrices, we represent them not just by an (x,y,z) triplet, but by a vector with four components (x,y,z,w).

What goes in the w component?

  • For points, w will always equal 1.
  • For directional vectors, w will always equal 0.
For now, we will represent vectors as points. Later in the course, you will see the difference between representing vectors as points and as directions.
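
To see why the w component matters, apply the same hypothetical translation by (3, 2, 0) to a point (w = 1) and to a directional vector (w = 0). The point moves, but the direction is unchanged, because the translation column of the matrix is multiplied by w:

    \[
    T \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
      = \begin{pmatrix} x + 3 \\ y + 2 \\ z \\ 1 \end{pmatrix},
    \qquad
    T \begin{pmatrix} x \\ y \\ z \\ 0 \end{pmatrix}
      = \begin{pmatrix} x \\ y \\ z \\ 0 \end{pmatrix}
    \]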


CS237 Math Library

Inside the cs237 math library, you will find the following types, which you will use regularly in this course:
  • cs237::vec2f - A vector of size two.
  • cs237::vec3f - A vector of size three.
  • cs237::vec4f - A vector of size four.
  • cs237::mat3x3f - A 3x3 matrix
  • cs237::mat4x4f - A 4x4 matrix
  • Along with a few other types.

The library also provides a few helpful functions for creating transformation matrices:

  • cs237::mat4x4f translate (cs237::vec3f v) - Returns a matrix that translates in x, y, and z according to the given vector.

  • cs237::mat4x4f translate (cs237::mat4x4f mat, cs237::vec3f v) - Builds a 4x4 translation matrix from the vector v and multiplies it with the given matrix mat.

  • cs237::mat4x4f scale (cs237::vec3f s) - Builds a 4x4 scale matrix from the three scale factors in the vector s.

  • cs237::mat4x4f scale (cs237::mat4x4f mat, cs237::vec3f s) - Builds a 4x4 scale matrix from the three scale factors in s and multiplies it with the given matrix mat.

  • cs237::mat4x4f rotate (float theta, cs237::vec3f axis) - Returns a matrix that rotates by the given angle about the given axis.

  • Again, along with a few more transformation functions.

You can also apply (+, -, *, /) to vectors of the same dimension and (+, -, *) to matrices of the same dimensions. Below is sample code that translates a point by a transformation matrix.

// Create a point at (1,0,0)
cs237::vec4f p(1.0f,0.0f,0.0f,1.0f); 

// Create a transformation matrix that translates the point by 12 units up the y-axis
cs237::vec3f offSet(0.0f, 12.0f, 0.0f); 
cs237::mat4x4f T = cs237::translate(offSet);  

// Now p should be vec4f(1.0f, 12.0f, 0.0f, 1.0f)
p = T * p;
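
Building on the example above, here is a short sketch (using only the cs237 functions listed earlier) of composing a scale, a rotation, and a translation into one model matrix. The angle units expected by rotate are whatever the cs237 documentation specifies, so treat 45.0f as illustrative.

// Scale by 2 on every axis, rotate about the y-axis, then translate to (5, 0, -3)
cs237::mat4x4f S = cs237::scale (cs237::vec3f(2.0f, 2.0f, 2.0f));
cs237::mat4x4f R = cs237::rotate (45.0f, cs237::vec3f(0.0f, 1.0f, 0.0f));
cs237::mat4x4f T2 = cs237::translate (cs237::vec3f(5.0f, 0.0f, -3.0f));

// Because the matrix multiplies the vertex on the left (M * v), the rightmost
// factor is applied first: scale, then rotate, then translate.
cs237::mat4x4f M = T2 * R * S;

cs237::vec4f q(1.0f, 0.0f, 0.0f, 1.0f);   // a point, so w = 1
q = M * q;                                 // q is now the fully transformed point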

Shaders

Let's say we have a cube at the center of the screen and we want to move it up the y-axis.

A possible solution is to update our vertex positions with new y-values as we move the object up the y-axis. This means that we will need to update the cube's vertex buffer object (VBO) with the new vertex positions as we move the object up the screen. THIS IS A BAD SOLUTION!

Why?

  1. If we were rendering a more complicated mesh (e.g., a sphere), it could be inefficient and costly to recompute all the vertex positions.
  2. Pushing data from the CPU to the GPU is an expensive task. Programs can contain a lot of objects that are constantly moving, so it's important that we avoid updating our VBOs every frame.

Vertex Shaders

A better solution (i.e., the correct solution) is to use the vertex shader. A vertex shader processes each vertex individually before rendering. Remember that shaders run on the GPU, which makes them ideal for performing per-vertex computations. The main job of the vertex shader is to produce the vertex position's clip coordinates. Don't worry about clip coordinates right now; you will learn about them in lecture very soon.

Let's look at the vertex shader that was provided for you in lab 1. You can use this vertex shader for this lab, but with a few modifications. Open the "shader.vert" file inside the shader directory of lab 1 (i.e., "shader/shader.vert"). Let's examine that file:

  1.  #version 410 core 
    Remember that shaders are written in the OpenGL Shading Language (GLSL), which was designed to resemble C. All shaders begin with the version of GLSL being used. We are using GLSL 4.1.
  2.  layout(location = 0) in vec3 position;
     layout(location = 1) in vec4 color;
    This declares a vertex attribute (called an in variable) of type vec3 named position. We can have multiple vertex attributes declared; here we also have one for the color of the vertex. On each execution of the vertex shader, position gets one of the positions that we stored in the VBO. The layout (location = 0) (or layout (location = 1) for color) is an optional part of the declaration that lets us set the location (or index) of the attribute. Remember that in lab 1 we had the following pieces of code:

    tri->posLoc = shader->AttributeLocation("position"); 
    tri->colorLoc = shader->AttributeLocation("color");
    

    and

    CS237_CHECK(glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(vertices[0]), (GLvoid*)0));
    CS237_CHECK(glVertexAttribPointer(colorLoc, 4, GL_FLOAT, GL_FALSE, sizeof(vertices[0]), (GLvoid*)sizeof(vertices[0].pos)));
    
    In the first code section, we were retrieving the locations of the vertex position and color attributes defined in the vertex shader. The second section of code then uses those locations to set the vertex attribute information (i.e., the formatting information about the vertex position and color) at that attribute location. You are telling the shader program: "Hey shader program, go look at your attribute location 0 and it'll tell you that the vertex positions are the first chunk of data inside the vertex struct. Also go look at location 1 and it'll tell you the vertex colors are the second chunk of data inside the vertex struct."
  3.  out vec4 vColor; 
    This line allows us to send data along the pipeline. In particular, this out variable will send an interpolated vertex color to the fragment shader. It is inside the fragment shader that we assign a color, so we pass the vertex's color on to that shader. We will talk more about this next lab.
  4. uniform mat4 modelView;
    uniform mat4 projection;
    The variables modelView and projection are called uniforms. These variables remain the same for all vertices that are sent through the vertex shader. More on this in the next section.
  5. void main(){...}
    All vertex shaders must contain a main function, which is automatically called for each vertex.
  6. gl_Position =  projection * modelView * vec4(position,1.0); 
    The main function is required to set the value of gl_Position, which is a built-in variable that determines the final position of the vertex (i.e., its clip coordinates). Here we are using our model-view and projection matrices to transform the vertex positions from model space all the way to clip space. See the board.

    Note: The code "vec4(position,1.0)" is a quick way of converting a vec3 into a vec4. Remember we set the "w" component of the vector to 1.0 because we are representing vertex positions as points.

  7. vColor = color; 
    Here we are saying that the output of the vertex shader will be the vertex's color. We can also send multiple pieces of data out of the vertex shader to the fragment shader using blocks; we will talk about this in the next lab. The complete shader, assembled from the pieces above, is shown below.
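
Putting pieces 1-7 back together, lab 1's vertex shader reads roughly as follows:

#version 410 core

layout(location = 0) in vec3 position;   // per-vertex position from the VBO
layout(location = 1) in vec4 color;      // per-vertex color from the VBO

out vec4 vColor;                         // color passed along to the fragment shader

uniform mat4 modelView;                  // same value for every vertex in a draw call
uniform mat4 projection;

void main ()
{
    // transform the vertex from model space all the way to clip space
    gl_Position = projection * modelView * vec4(position, 1.0);
    vColor = color;
}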

GLSL has a lot of built-in types and functions, most of which correspond directly to the cs237 library. So a vec4 in GLSL is the same as a cs237::vec4f in the cs237 library.

But before we talk about the fragment shader, we briefly need to revisit how we can update all the vertex positions (e.g., shift their y-values) without rewriting the VBO...

Uniform Variables

In the previous lab, we saw how to pass vertex data to the shader: store it in a VBO and have it show up in an "in" variable. This works for data that varies across vertices (e.g., positions and colors). But we want a transformation matrix to be the same for ALL vertices, so instead we will use a uniform.

To set a uniform, we use the cs237::setUniform function before drawing. On each execution of the vertex shader, it will use the last value that was set using cs237::setUniform.

Here's a chart to summarize the differences between vertex attributes and uniforms:

Vertex Attribute                                   | Uniform
---------------------------------------------------|---------------------------------------------------------
Different values for different vertices            | Same value for all vertices
Defined as an "in" variable in the vertex shader   | Defined as a "uniform" in a shader; uniforms can be
                                                   | accessed from both the vertex and fragment shaders
Values are set using a VBO and a VAO               | Values are set by calling cs237::setUniform

The first argument to cs237::setUniform is the location of the uniform variable, which you can obtain by using UniformLocation on a shader object.

Below is a code example for setting the model-view matrix as a uniform:

/* Assuming we have created a shader program, sh, we first get the
 * location of the modelView matrix defined in the shader program.
 */
int mvLoc = sh->UniformLocation ("modelView");

/* We create the model-view matrix (really just the camera) using 
 * the lookAt function*/ 
cs237::mat4x4f modelViewMat = cs237::lookAt (
  this->camPos,
  this->camAt,
  this->camUp);

// We send the data to the GPU by using the cs237::setUniform function 
 cs237::setUniform (mvLoc, modelViewMat);
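
Setting a uniform of another type works the same way. For example, this week's lab sends the cube's color as a uniform; the sketch below assumes that cs237::setUniform also has an overload for cs237::vec4f values (check the cs237 header to confirm).

// Look up the color uniform declared in the shader program ...
int colorLoc = sh->UniformLocation ("color");

// ... and send the cube's color (yellow here) to the GPU before drawing.
cs237::vec4f cubeColor(1.0f, 1.0f, 0.0f, 1.0f);
cs237::setUniform (colorLoc, cubeColor);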

Fragment Shaders

We won't go into full detail about the fragment shader in this lab. For now, let's walk through the setup of lab 1's fragment shader:
  1.  #version 410 core 
    Again, the fragment shader needs to declare the version of GLSL being used; as before, we are using GLSL 4.1.
  2.  in vec4 vColor;
    This is the interpolated vertex color that was passed as an output variable inside your vertex shader. We are now going to use this color as the output color for the fragment.
    Note: Notice the name of the input variable: "vColor". This is EXACTLY the same name as the output variable defined in the vertex shader. This is not by coincidence! These names need to match exactly. Thus, the name of the "out" variable inside the vertex shader has to be the same name as the "in" variable to the fragment shader.
  3. void main(void)
    {
      color = vColor;
    } 
    This simple program for the fragment shader just assigns the color for a fragment to be the incoming interpolated color. Again, you do not need to fully understand this right now because we will talk more about this in the next lab/lecture.
Note: In this week's lab and project 1, we won't pass the color as a vertex attribute. Instead, we will send it as a uniform to describe the color of a mesh. This means you'll have something like "uniform vec4 color;" inside your fragment shader. This also means you won't need to pass anything "out" of the vertex shader.
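
A minimal sketch of such a fragment shader might look like the following (the output variable name fragColor is just a placeholder; any name declared as an out vec4 will do):

#version 410 core

uniform vec4 color;      // the mesh color set from the C++ side via cs237::setUniform

out vec4 fragColor;      // the fragment's output color

void main ()
{
    fragColor = color;
}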


GLFW Key & Mouse Events

GLFW lets you respond to keyboard and mouse events by registering callback functions that are invoked when those events occur. The GLFW documentation provides a great explanation of how to implement and set these callback functions. A minimal registration sketch is given after the list of links below.

The following are links to short descriptions of the key and mouse events used in GLFW. Click on the links to read the description for each type of event.

  1. Key Input
  2. Cursor position
  3. Mouse button input
  4. Keyboard Key Macros
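
As a quick illustration (a minimal stand-alone sketch, not the lab's seed code), the callbacks are ordinary functions that you register once after creating the window:

#include <GLFW/glfw3.h>
#include <cstdio>

// Called whenever a key is pressed, repeated, or released.
static void Key (GLFWwindow *win, int key, int scancode, int action, int mods)
{
    if ((key == GLFW_KEY_ESCAPE) && (action == GLFW_PRESS)) {
        glfwSetWindowShouldClose (win, GLFW_TRUE);
    }
}

// Called whenever the cursor moves; coordinates are in screen coordinates.
static void CursorPos (GLFWwindow *win, double xpos, double ypos)
{
    std::printf ("cursor at (%.1f, %.1f)\n", xpos, ypos);
}

// Called whenever a mouse button is pressed or released.
static void MouseButton (GLFWwindow *win, int button, int action, int mods)
{
    if ((button == GLFW_MOUSE_BUTTON_LEFT) && (action == GLFW_PRESS)) {
        std::printf ("left mouse button pressed\n");
    }
}

int main ()
{
    if (! glfwInit ()) return 1;
    GLFWwindow *window = glfwCreateWindow (640, 480, "event demo", nullptr, nullptr);
    if (window == nullptr) { glfwTerminate (); return 1; }

    // register the callbacks once; GLFW invokes them when events are processed
    glfwSetKeyCallback (window, Key);
    glfwSetCursorPosCallback (window, CursorPos);
    glfwSetMouseButtonCallback (window, MouseButton);

    while (! glfwWindowShouldClose (window)) {
        glfwWaitEvents ();    // block until an event arrives, then run the callbacks
    }

    glfwDestroyWindow (window);
    glfwTerminate ();
    return 0;
}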

Part 1: Actual Lab (Render an Indexed Cube)

The main goal of the lab is to try to get a yellow cube rendered. Inside the lab2/view.cxx file, I have defined global variables to help generate a cube. The variables are the vertices, indices, and color for the cube. Remember that we will not pass the color as a vertex attribute but as a uniform. This seeded code looks VERY similar to your project 1 seeded code, so if you can get lab 2 completed then you'll have a large chunk of your project 1 completed. Note that you will need to create your shader files yourself. Make sure they have the extensions .vsh (vertex shader) and .fsh (fragment shader).

To help you, I have placed "/** HINT: */" tags to give you some guidance on what you should write inside the functions and the struct definitions. Remember that the cube is drawn with indexed drawing, so use the code below to help you with drawing an indexed mesh.

One last thing: once you have the cube rendered, try updating the camera's position by making it move forward and backward along the z-axis. I gave you some help inside the "Key" callback function in the main.cxx file. A rough sketch of the idea is shown below.
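
The sketch below shows the general idea only; the names View, camPos, and needsRedraw are placeholders for whatever the lab 2 seed code actually provides, and the way the view is reached from the callback (here via glfwGetWindowUserPointer) may also differ in the seed code.

#include <GLFW/glfw3.h>
// (assumes the cs237 header is already included, as in the lab's main.cxx)

// Hypothetical stand-in for the lab's view state.
struct View {
    cs237::vec3f camPos;        // camera position fed to cs237::lookAt each frame
    bool         needsRedraw;   // set when the model-view matrix must be rebuilt
};

static void Key (GLFWwindow *win, int key, int scancode, int action, int mods)
{
    if ((action != GLFW_PRESS) && (action != GLFW_REPEAT))
        return;

    View *view = static_cast<View *>(glfwGetWindowUserPointer (win));

    switch (key) {
      case GLFW_KEY_W:                  // move the camera forward along the z-axis
        view->camPos.z -= 0.5f;
        break;
      case GLFW_KEY_S:                  // move the camera backward along the z-axis
        view->camPos.z += 0.5f;
        break;
      default:
        return;
    }
    view->needsRedraw = true;           // redraw with the updated camera position
}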


Indexed Drawing

Indexed drawing, recalled from the beginning of lab:
  • Code for loading a VBO for indexing:

    CS237_CHECK( glBindVertexArray (this->vaoId) );
    CS237_CHECK( glGenBuffers (1, &this->ebufId) );
    CS237_CHECK( glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, this->ebufId) );
    CS237_CHECK( glBufferData (GL_ELEMENT_ARRAY_BUFFER, n*sizeof(uint32_t), indices, GL_STATIC_DRAW) );
    // Note: unbind the VAO before unbinding GL_ELEMENT_ARRAY_BUFFER, since the
    // VAO records the element-buffer binding as part of its state.
    CS237_CHECK( glBindVertexArray (0) );
    CS237_CHECK( glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, 0) );
    
  • Code for drawing an indexed mesh:

    CS237_CHECK( glDrawElements (this->prim, this->nIndices, GL_UNSIGNED_INT, 0));
    


Tips for Project 1

Here are some tips for working on project 1:
  • You may want to look at the glPolygonMode function.

  • One of the challenging parts of the project is understanding how to retrieve the mesh data out of the scene object. You will need to look at the following functions, which return iterators:

    Iterators for Scene Objects

    • std::vector<SceneObj>::const_iterator beginObjs ()
    • std::vector<SceneObj>::const_iterator endObjs ()

    Iterators for the Models in the scene

    • std::vector<OBJ::Model *>::const_iterator beginModels ()
    • std::vector<OBJ::Model *>::const_iterator endModels ()

    The function types look a little daunting, but really they are saying that the models and scene objects are stored in containers. You can think of an iterator as a way to go through such a collection: an iterator points to an item that is part of the collection. All containers support a function called begin (i.e., beginObjs & beginModels), which returns an iterator pointing to the beginning of the container (the first element), and a function called end (i.e., endObjs & endModels), which returns an iterator corresponding to having reached the end of the container. You can access the element an iterator points to by "dereferencing" it with a *, just as you would dereference a pointer.

    Below is the code that you can use to go through the iterator:

    std::vector<SceneObj>::const_iterator it;
    for (it = scene.beginObjs();  it != scene.endObjs();  it++) {
        SceneObj obj = *it;
        /* ...  */        
    }
    
    The code will be similar for iterating through the models, but the iterator type will be different because the container now holds OBJ::Model * (a pointer to an OBJ::Model) instead of a SceneObj. For example:
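
    A sketch of the corresponding model loop (assuming, as stated above, that the models are returned as OBJ::Model pointers):

    std::vector<OBJ::Model *>::const_iterator mIt;
    for (mIt = scene.beginModels();  mIt != scene.endModels();  mIt++) {
        const OBJ::Model *model = *mIt;   // dereferencing yields a pointer to the model
        /* ... load the model's mesh data into VBOs ... */
    }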

  • You will want to look at the common/obj/object.hxx file. This is where the definition of the object models (i.e., OBJ::Model) lives. A model's data lives in groups inside an object file. You can call the "Group" function (calling it with Group(0) is enough for this project) to get the mesh data. The Group struct holds the vertex and index data that you will need to load into the VBOs for a mesh. A rough sketch follows.
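
    As a very rough sketch only: every name below (the Group reference, the verts/nVerts/indices/nIndices fields, and the Mesh wrapper) is a placeholder, so check common/obj/object.hxx and the lab 2 seed code for the real names and types.

    // Placeholder names for illustration; substitute the actual Group members.
    const auto &grp = model->Group(0);                // the group holding the mesh data

    Mesh *mesh = new Mesh(GL_TRIANGLES);              // your own mesh wrapper from lab 2
    mesh->LoadVertices (grp.nVerts, grp.verts);       // fill the vertex VBO
    mesh->LoadIndices  (grp.nIndices, grp.indices);   // fill the element (index) buffer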

  • Remember that a SceneObj holds the position, color, and model index for a mesh in the scene. The model index tells you which OBJ::Model (i.e., which mesh data) to use for rendering that scene object. Thus, I recommend having containers (or arrays) to hold the mesh data and the scene objects.