Phong lighting model

This shader example demonstrates the Phong lighting model:

phongV.hlsl :

#define IN_HLSL
#include "shdrConsts.h"
#include "hlslStructs.h"

Uniforms are also called 'dynamic constants'.
They indicate that the value stays constant for the entire execution of the shader.
Uniforms are set into the constant registers (c#).

uniform float4x4 modelview;
uniform float4x4 objTrans;

Each vertex shader has input and output semantics.
They describe the input data coming from the application and the output data going to the pixel shader (PS).
In our case we'll pull in a vertex position and a normal vector.

struct VS_INPUT
{
   float4 pos : POSITION;
   float3 N : NORMAL;
};

We'll pass the normal and light vectors in world space to PS.
Texture coordinates represent the GPU's oT# registers.
Position represents the oPos register.
Various intermediate calculation results live in the temporary registers (r#).

struct VS_OUTPUT
{
   float4 pos : POSITION;
   float3 wL : TEXCOORD0;
   float3 wN : TEXCOORD1;
};

VS_OUTPUT main(VS_INPUT IN)
{
VS_OUTPUT OUT;

The light position:

float3 Lpos = {500,100,1000};

First part - transformation.
A vertex shader must always output a position. This is the minimum.
Position is consumed by the rasterizer so it is not exposed to the pixel shader.
If we need the position there, we can always pass it using texcoords.
When we render something it must end up in screen space - the screen is just a plane.
So this position must be transformed to clip space.
The model itself is in its local 'object' space, so we have to multiply its position by the world, view and projection transforms.
Modelview is hardcoded in T3D.
modelview = world * view * projection
If the object does not show up on screen after the pixel shader runs, then we have a transformation problem.
Any additional VS calculations are interpolated in the rasterization process.
This data will be passed to PS via an interpolator.
Clip space is a geometry space after the projection transformation and before the perspective divide (xyz/w).
For points in front of the camera w > 0; when we divide xyz by w we obtain normalized device coordinates from our clip coordinates.
Normalized device coordinates are scaled and translated by the viewport parameters to produce window coordinates.
We don't have to worry about it, because this division happens automatically between the vertex shader and the pixel shader, so this is safe.
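To make the idea concrete, here is a rough sketch of what the hardware does with the vertex shader output after clipping. This is illustrative only - viewportOrigin and viewportSize are assumed parameters, not T3D names, and the Direct3D y flip is ignored:

float2 clipToWindow(float4 clipPos, float2 viewportOrigin, float2 viewportSize)
{
   // perspective divide: clip space -> normalized device coordinates in [-1, 1]
   float3 ndc = clipPos.xyz / clipPos.w;
   // viewport transform: NDC xy -> window coordinates
   return viewportOrigin + (ndc.xy * 0.5 + 0.5) * viewportSize;
}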
The reason we are using float4(pos, 1) is that in some cases T3D cannot implicitly convert a const float3 to a float4.
Remember something very important - modelview is always your first parameter and position is always the second one.
The reason: in that order the position is treated as a column vector.
If we swap their places, the position will be treated as a row vector.
That vector will receive wrong data, because modelview will then act as a transposed matrix.
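A minimal sketch of that relationship (p1 and p2 are illustrative names, not part of the final shader):

// position treated as a column vector: p1 = modelview * pos
float4 p1 = mul(modelview, float4(IN.pos.xyz, 1));
// position treated as a row vector: p2 = pos * modelview, which equals transpose(modelview) * pos
float4 p2 = mul(float4(IN.pos.xyz, 1), modelview);
// swapping the arguments without transposing the matrix therefore gives the wrong result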
Some of the transformations will need to be transposed, like the TBN matrix.
That is done to move from local object space to texture (tangent) space; without the transpose we would move from tangent space back to local object space.
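A hedged sketch of that idea (T, B and N here stand for illustrative object-space tangent, binormal and normal vectors - they are not inputs of this shader):

// the rows of this matrix are the tangent-space basis vectors expressed in object space
float3x3 objectToTangent = float3x3(T, B, N);
// mul(objectToTangent, v) projects an object-space vector v onto T, B and N,
// i.e. it moves v from object space into tangent (texture) space;
// mul(transpose(objectToTangent), v) goes the other way, from tangent space back to object space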
Modelview is not that kind of matrix, so we continue this way:

OUT.pos = mul(modelview,float4(IN.pos.xyz,1));

We are doing some lighting calculations, therefore we need to move to world space.
objTrans is our model's world transform; it is hardcoded in T3D.
Get our normal and light vectors in world space; the normal only needs the rotation part of objTrans, so we cast it to float3x3.

OUT.wN = normalize(mul((float3x3)objTrans, IN.N));
OUT.wL = normalize(Lpos - mul(objTrans, float4(IN.pos.xyz, 1)).xyz);
return OUT;
}

 

phongP.hlsl :

#define IN_HLSL
#include "shdrConsts.h"
#include "hlslStructs.h"

uniform float3 vEye;

Each pixel shader also has input and output semantics.
The vertex shader outputs are interpolated by the rasterizer and handed to the pixel shader through fast on-chip interpolator registers.
Now we are in the PS and we receive that data.

struct PS_INPUT
{
   float3 wL : TEXCOORD0;
   float3 wN : TEXCOORD1;
};

Our pixel shader will output a color (the oC0 register).

struct PS_OUTPUT
{
   float4 color : COLOR0;
};

PS_OUTPUT main(PS_INPUT IN)
{
PS_OUTPUT OUT;

We define our diffuse and ambient colors.

float4 diffuseColor = {0,1,0,1};
float4 ambientColor = {0,0.2,0,1};


Diffuse light actually works using a dot product - it determines how much illumination reaches a surface.
Just imagine we have a horizontal plane and our light emitter is far up in the sky.
The angle between the light vector and the vertex normal will then be close to 0, so their dot product will be close to cos(0) = 1.
Therefore our surface will be fully lit by the sun.
If our sun moves toward the horizon, the dot product drops below 1, so we get partial lighting.
If our sun is below the surface, the dot product becomes negative; saturate() clamps it to [0, 1] and our diffuse contribution becomes black.
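As a quick numeric example, take a horizontal plane with normal N = (0, 0, 1) and a unit light vector L pointing from the surface toward the sun:

dot(N, L) = 1.0  when L = (0, 0, 1) - sun directly overhead, fully lit
dot(N, L) ~ 0.7  when L = normalize(float3(1, 0, 1)) - sun partway down, partially lit
dot(N, L) = 0.0  when L = (1, 0, 0) - sun at the horizon, no direct light
dot(N, L) = -1.0 when L = (0, 0, -1) - sun below the surface, saturate() clamps it to 0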
Someone may ask: "Why do we compute the dot product for each pixel? Isn't that a more expensive calculation? Couldn't we calculate it per vertex instead?"
Yes, per-pixel operations are more expensive, but they lead to more accuracy.
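For comparison, a minimal sketch of the cheaper per-vertex (Gouraud-style) alternative mentioned above; the extra TEXCOORD2 interpolator is illustrative and not part of this example:

// extra interpolator added to VS_OUTPUT and PS_INPUT:
//    float diffuse : TEXCOORD2;
// computed once per vertex at the end of the vertex shader:
//    OUT.diffuse = saturate(dot(OUT.wL, OUT.wN));
// the pixel shader would then read the interpolated IN.diffuse instead of recomputing the dot product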

float4 diffuseEdgeComponent = saturate(dot(IN.wL, IN.wN));


Specular reflection model.
This is the specular formula of Phong.
Notice something very important -> -normalize(vEye).
vEye is hardcoded in T3D; it represents the eyeNeg vector (the forward vector of our camera) with length 1 / zFar.
It is very convenient to use: we need to invert and normalize the vector anyway, and since it is already set, this saves some calculations.

float3 reflectComponent = normalize(2 * diffuseEdgeComponent * IN.wN - IN.wL);
float4 specularColor = pow(saturate(dot(reflectComponent, -normalize(vEye))), 8);
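As a side note, the same reflection vector can also be obtained with HLSL's reflect() intrinsic - a hedged equivalent of the manual formula above (reflectedL is just an illustrative name, and the two match only when the dot product is positive, since diffuseEdgeComponent is saturated):

// reflect(i, n) returns i - 2 * n * dot(n, i); with i = -L this is 2 * dot(N, L) * N - L
float3 reflectedL = reflect(-IN.wL, IN.wN);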



We get our final color by summing all the light components.

OUT.color = diffuseColor * diffuseEdgeComponent + ambientColor + specularColor;
return OUT;
}


 

materials.cs :

singleton ShaderData(PhongShaderData)
{
   DXVertexShaderFile = "shaders/common/phongV.hlsl";
   DXPixelShaderFile = "shaders/common/phongP.hlsl";

   pixVersion = 3.0;
};

singleton CustomMaterial(PhongMat)
{
   mapTo = "testshapetex";

   shader = PhongShaderData;
   version = 3.0;
};
