# Normal Mapping

Examples without and with normal mapping

## Introduction

In modern games, normal mapping has become a very common graphical effect, and in many cases it's actually expected. Even so, I found it exceptionally difficult to locate decent material covering its implementation. If you have run into a similar roadblock, I hope you find this helpful.

## What does it do?

The goal of normal mapping is simple: render a low-poly (or even high-poly) mesh, and add the illusion of more detail. This effect is achieved by manipulating the normal of each fragment during shading. Since the normal vector is used to compute a fragment's lighting, changing the normal will also alter the shading of the fragment.

## How does it work?

As I mentioned above, all normal mapping does is modify the normal vectors of a mesh. The way it does this is through the use of... a normal map!!! The normal map itself is nothing more than a texture which looks something like this.

As you might have guessed, this particular normal map corresponds to the model shown above. Similarly to a standard color map, a normal map is laid out to match the texture coordinates of the vertices that make up the mesh. In other words, the pixels in the color map that give the wing of this model its color correspond exactly to the wing in the normal map.

So you might be wondering, "Hey, what's with the funky colors?". Well my friend, there's actually a very good reason for them. As you might be aware, most image formats store at least three color channels per pixel (red, green, and blue). These channels are nothing more than numerical values ranging from 0 to some positive upper bound. In the case of 24-bit color, that bound is 255; however, we will assume the range is [0, 1]. Now here's the problem: normal vectors must be able to point in any direction, so their components must be allowed to hold negative values. Hopefully you can see the issue with that; texture colors can only exist in the range [0, 1], whereas normal components exist in the range [-1, 1]. Not to fear! With some math we can convert between the two with ease. The equation below illustrates how:

N = 2C - 1

where C is a color component in [0, 1] and N is the corresponding normal component in [-1, 1].

Do you see how that works? Since the color C in the texture is made up of three scalar components (r, g, b), it can be represented as a vector. Remember that each component lies in the range [0, 1]. So let's say, for example, that the value of r for a pixel in the normal map is 1.0. Running that value through the conversion gives 2(1.0) - 1 = 1.0, the most positive value that can be found in a normal vector. Now assume that r is not 1.0, but 0 instead. The same conversion gives 2(0) - 1 = -1.0, the most negative value that can be found in a normal vector.
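In GLSL this conversion is a one-liner. A quick sketch (the sampler name `uNormal` and coordinate `texCoord` match the names used later in the implementation):

```glsl
// Fetch the encoded normal and remap each channel from [0, 1] to [-1, 1].
vec3 n = texture2D(uNormal, texCoord).rgb * 2.0 - 1.0;
```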

## Implementation

The implementation that I provide here is written in GLSL, specifically for OpenGL ES 2.0. However, the exact same concepts hold in other shading languages such as HLSL and Cg. Only the fragment program is explained in detail, since the vertex program is fairly trivial.

First, I assume that the vertex program provides the following values as output.
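Reconstructed from the description below, those declarations look roughly like this in GLSL ES (the exact ordering is an assumption):

```glsl
varying vec2 vTexCoord;  // UV calculated in the vertex program
varying vec3 vNormal;    // original normal vector for the vertex
varying vec3 vTangent;   // vector tangent to vNormal

uniform sampler2D uTexture;  // color map for the model
uniform sampler2D uNormal;   // normal map
```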

The first three lines are variables passed from the vertex program to the fragment program. vTexCoord is the texture coordinate, or UV, calculated in the vertex program. vNormal is the original normal vector for the vertex. Finally, vTangent is a vector that has been computed to be tangent to the surface, i.e., perpendicular to vNormal. Note: vTexCoord is a 2D vector, whereas the others are 3D. The last two lines are textures. uTexture is the color map for the model, and uNormal is, as you may have guessed, the normal map.

In the snippet below, I set up some initial values. Because of the way my textures are loaded, I have to flip the texture coordinate before using it; this may be unnecessary in your own project. Also keep in mind that I'm setting up a static, immobile light. You would need to add a uniform for the light's position and/or direction if you wished to move it.
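A sketch of that setup (the specific light direction is a placeholder of my own; substitute whatever suits your scene):

```glsl
// Flip the V coordinate -- my textures are loaded upside-down.
vec2 texCoord = vec2(vTexCoord.x, 1.0 - vTexCoord.y);

// Static, hard-coded light direction. Replace with a uniform
// if you want the light to move.
vec3 lightDir = normalize(vec3(0.5, 0.5, 1.0));
```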

Here is where the magic happens. The first thing we do is calculate what is called the bi-normal. The bi-normal is a vector orthogonal to both vNormal and vTangent, which means all three of those vectors are perpendicular to one another. The next thing we do is create a 3 x 3 matrix, constructed from three 3D vectors: vTangent, biNormal, and vNormal. These vectors make up what is called the basis of the matrix. They essentially make the matrix into a rotation matrix, so multiplying a 3D vector by the matrix rotates that vector. In the final statement, a few operations are lumped together. First, the normal is looked up from the normal map and converted using the equation mentioned earlier. Second, it is multiplied by the matrix tanSpace, which actually performs the rotation. Finally, the vector is normalized and assigned to the variable normal, and presto! We have a manipulated normal vector which can be used in our lighting calculations! Not so bad, eh?
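The steps just described can be sketched like this (variable names follow the prose; `texCoord` is the flipped coordinate from the previous snippet):

```glsl
// The bi-normal is orthogonal to both the normal and the tangent.
vec3 biNormal = cross(vNormal, vTangent);

// The three vectors form the basis of a rotation matrix
// that takes vectors out of tangent space.
mat3 tanSpace = mat3(vTangent, biNormal, vNormal);

// Look up the encoded normal, remap it to [-1, 1] (N = 2C - 1),
// rotate it with tanSpace, then normalize.
vec3 normal = normalize(tanSpace * (texture2D(uNormal, texCoord).rgb * 2.0 - 1.0));
```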

Here are the complete vertex and fragment programs.

### Vertex
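A minimal vertex program consistent with the outputs described earlier; it simply passes the per-vertex attributes through and transforms the position. The attribute and matrix names (`aPosition`, `uMvpMatrix`, etc.) are my assumptions:

```glsl
attribute vec3 aPosition;
attribute vec2 aTexCoord;
attribute vec3 aNormal;
attribute vec3 aTangent;

uniform mat4 uMvpMatrix;  // combined model-view-projection matrix

varying vec2 vTexCoord;
varying vec3 vNormal;
varying vec3 vTangent;

void main()
{
    // Pass the values the fragment program expects.
    vTexCoord = aTexCoord;
    vNormal   = aNormal;
    vTangent  = aTangent;

    gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
}
```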

### Fragment
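A fragment program assembled from the snippets above, with a simple diffuse term added so the manipulated normal actually affects shading. The lighting model and light direction are my own placeholder choices:

```glsl
precision mediump float;

varying vec2 vTexCoord;
varying vec3 vNormal;
varying vec3 vTangent;

uniform sampler2D uTexture;
uniform sampler2D uNormal;

void main()
{
    // Flip the V coordinate (see the setup discussion above).
    vec2 texCoord = vec2(vTexCoord.x, 1.0 - vTexCoord.y);

    // Static, hard-coded light direction.
    vec3 lightDir = normalize(vec3(0.5, 0.5, 1.0));

    // Build the tangent-space basis and decode the mapped normal.
    vec3 biNormal = cross(vNormal, vTangent);
    mat3 tanSpace = mat3(vTangent, biNormal, vNormal);
    vec3 normal = normalize(tanSpace * (texture2D(uNormal, texCoord).rgb * 2.0 - 1.0));

    // Simple diffuse lighting using the manipulated normal.
    float diffuse = max(dot(normal, lightDir), 0.0);
    vec4 color = texture2D(uTexture, texCoord);
    gl_FragColor = vec4(color.rgb * diffuse, color.a);
}
```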

As always I hope this helps you in your game programming adventures! If you have any questions, post a comment!