
Introduction to the shader programming language

3.3.0. Cg / HLSL vertex input & vertex output

A data type that we will use frequently in the creation of our shaders is “struct”. For those familiar with the C language, a struct is a compound data type declaration that groups a list of variables, possibly of different types, under a single name, allowing access to all of them through a single variable. We will use structs to define both the inputs and the outputs of our shader, and their syntax is the following:

struct name
{
    vector[n] name : SEMANTIC[n];
};

First, we declare the struct and then its name. Inside the struct fields we store the vectors together with their semantics for later use. “name” corresponds to the name of the structure, “vector” corresponds to the type of vector that we will use (e.g. float2, half4) when assigning a semantic, and “SEMANTIC” corresponds to the semantic that we will pass as input or output.

By default, Unity adds two structs which are: appdata and v2f. Appdata corresponds to the “vertex input” and v2f refers to the “vertex output”.

The vertex input will be the place where we store our object properties (e.g. position of vertices, normals, etc) as an “entrance” to take them to the “vertex shader stage”. Whereas, vertex output will be where we store the rasterized properties to take them to the “fragment shader stage”. 

We can think of semantics as “access properties” of an object. According to the official Microsoft documentation:

“A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter”.

We will exemplify using the POSITION[n] semantic.

In previous pages, we have talked about the properties of a primitive. As we already know, a primitive stores its vertex positions, tangents, normals, UV coordinates and color in its vertices. A semantic allows individual access to these properties; that is, if we declare a four-dimensional vector and assign it the POSITION[n] semantic, then that vector will contain the position of the primitive's vertices. Suppose we declare the following vector:

float4 pos : POSITION;

This means that within the four-dimensional vector called “pos” we are storing the vertex position of the object in object-space.

The most common semantics that we use are:

  • POSITION[n]. 
  • TEXCOORD[n]. 
  • TANGENT[n]. 
  • NORMAL[n].
  • COLOR[n].

Declared as structures, these semantics look as follows:

struct vertexInput // e.g. appdata
{
    float4 vertPos : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 normal : NORMAL0;
    float3 tangent : TANGENT0;
    float3 vertColor : COLOR0;
};

struct vertexOutput // e.g. v2f
{
    float4 vertPos : SV_POSITION;
    float2 texCoord : TEXCOORD0;
    float3 tangentWorld : TEXCOORD1;
    float3 binormalWorld : TEXCOORD2;
    float3 normalWorld : TEXCOORD3;
    float3 vertColor : COLOR0;
};

TEXCOORD[n] allows access to the UV coordinates of our primitive and has up to four dimensions (x, y, z, w). 

TANGENT[n] gives access to the tangents of our primitive. If we want to create normal maps it will be necessary to work with a semantic that has up to four dimensions as well. 

Through NORMAL[n] we can access the normals of our primitive and it has up to four dimensions. We must use this semantic if we want to work with lighting within our shader.

Finally COLOR[n] allows us to access the color of the vertices of our primitive and has up to four dimensions like the rest. Generally, the vertex color corresponds to a white color (1, 1, 1, 1).
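
To see how these semantics are used together, below is a sketch of a vertex function that fills the vertexOutput structure shown above. This function is illustrative and not part of USB_simple_color; UnityObjectToClipPos and UnityObjectToWorldNormal are helper functions included in UnityCG.cginc.

vertexOutput vert (vertexInput v)
{
    vertexOutput o;
    // transform the vertex position from object-space to clip-space
    o.vertPos = UnityObjectToClipPos(v.vertPos);
    o.texCoord = v.texCoord;
    // transform the normal and tangent from object-space to world-space
    o.normalWorld = UnityObjectToWorldNormal(v.normal);
    o.tangentWorld = normalize(mul((float3x3)unity_ObjectToWorld, v.tangent));
    // the binormal is perpendicular to both the world-space normal and tangent
    o.binormalWorld = normalize(cross(o.normalWorld, o.tangentWorld));
    o.vertColor = v.vertColor;
    return o;
}

These world-space vectors are precisely what a normal mapping calculation in the fragment shader stage would consume.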

To understand this concept, we are going to look at the structures that have been declared automatically within our USB_simple_color shader. We will start with appdata.

struct appdata
{
    float4 vertex : POSITION;
    float2 uv : TEXCOORD0;
};

As we can see, there are two vectors within the structure: vertex and uv. “Vertex” has the POSITION semantic; this means that inside the vector we are storing the position of the vertices of the object in object-space. These vertices are later transformed to clip-space in the vertex shader stage through the UnityObjectToClipPos(v.vertex) function. 

The vector uv has the semantic TEXCOORD0, which gives access to the UV coordinates of the texture. 

Why does the vertex vector have four dimensions (float4)? Because within the vector we are storing the values XYZW, where W equals “one” since the vertices correspond to a position in space (a W of “zero” would instead describe a direction).
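
A quick way to see why W matters: when a four-dimensional vector is multiplied by a transformation matrix, a W of “one” makes the translation part of the matrix apply, while a W of “zero” ignores it. A small illustrative sketch (the vector values are hypothetical):

// a position (W = 1) is affected by the object's translation
float4 pos = float4(0, 0, 0, 1);
float4 posWorld = mul(unity_ObjectToWorld, pos);

// a direction (W = 0) is only rotated and scaled, never translated
float4 dir = float4(0, 1, 0, 0);
float4 dirWorld = mul(unity_ObjectToWorld, dir);

This is why positions travel through the pipeline as float4 with W equal to one.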

Within the v2f structure we can find the same vectors as in appdata, with a small difference in the SV_POSITION semantic, which fulfils the same function as POSITION[n] but carries the “SV_” (System Value) prefix, marking the vector as the final clip-space position handed to the rasterizer.

struct v2f
{
    float2 uv : TEXCOORD0;
    UNITY_FOG_COORDS(1)
    float4 vertex : SV_POSITION;    
};
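
UNITY_FOG_COORDS(1) is a helper macro from UnityCG.cginc; when fog is enabled in the shader, it declares an additional interpolator using the TEXCOORD index given as its argument, roughly equivalent to the following (a simplified expansion, for illustration only):

// approximate expansion of UNITY_FOG_COORDS(1) with fog enabled
float1 fogCoord : TEXCOORD1;

When fog is disabled, the macro expands to nothing, so no interpolator is wasted.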

Note that these vectors are being connected in the vertex shader stage as follows: 

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    ...
}

“o.vertex” corresponds to the vertex output, that is, the vertex vector that has been declared in the v2f structure, while “v.vertex” corresponds to the vertex input, that is, the vertex vector that has been declared in the appdata structure. This same logic applies to the uv vectors.
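
Putting both structures together, the complete path from appdata to the screen looks like the following sketch (essentially Unity's default unlit pass; the fog macros are omitted here for brevity):

sampler2D _MainTex;
float4 _MainTex_ST;

v2f vert (appdata v)
{
    v2f o;
    // vertex input (v) is transformed and copied into the vertex output (o)
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // the rasterized UV coordinates arrive through the v2f structure
    fixed4 col = tex2D(_MainTex, i.uv);
    return col;
}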


Jettelly Team

We are a team of indie developers with more than 9 years of experience in video games. As an independent studio, we developed Nom Noms, which we published with Hyperbeard in 2019. We are currently developing The Unity Shader Bible.
