Month: January 2010

Terms: GPU, CUDA, CG

For the past few months, I have constantly been hearing certain terms: GPU, CUDA, Cg, and so on. My friends are working on a software project dealing with all this stuff, so they keep telling me about it, and the terms above were the words I picked up first. I could read their satisfaction when they talked about it, and their delight in seeing me confused and, at the same time, attracted to it. And because of that same feeling, I am now setting my eyes on it.

So the first question: what is this all about? But another question came along with the first one: what makes it such a big thing? Let's check.

Graphics Processing Unit

Nvidia GeForce 7950 GTX GPU

The term expands to Graphics Processing Unit: a specialized processor that offloads 3D graphics rendering from the CPU. Its parallel structure makes it more efficient than a CPU for this kind of work. So basically, a GPU is a processor on a graphics card dedicated to calculating floating-point operations. It is mainly used for graphics rendering, as the custom microchips incorporated in the graphics accelerator provide special mathematical operations.


Compute Unified Device Architecture

CUDA (Compute Unified Device Architecture) is a parallel computing architecture developed by NVIDIA. I am actually reading a book about this, so let me tell you my understanding. Since the GPU is a graphics processing unit, its earlier architectures were limited to graphics-related processing. To perform general-purpose computation, one had to know the graphics APIs and use them in a way that served general-purpose computing, but that was not very flexible. CUDA is both a hardware and a software architecture that provides a direct interface for general-purpose computing on the GPU.

C for Graphics

This is a high-level shading language developed by Nvidia, in close collaboration with Microsoft, for programming vertex and pixel shaders. A shader is a set of software instructions used to calculate rendering effects on graphics hardware; shaders are used to program the GPU's programmable rendering pipeline. The rendering pipeline, or graphics pipeline, mainly takes the representation of a 3D scene as input and produces a 2D raster image as its result. A graphics pipeline consists of many stages, some of which, like pixel shaders, are programmable.


There are shaders for programming the GPU. We can write shaders for the two major 3D APIs, Direct3D and OpenGL, using their own shading languages, HLSL and GLSL respectively. But porting between these two languages is very difficult, and here lies the main purpose of Cg. As Cg is developed by Nvidia, the hardware vendor, there is no question about software compatibility: whether the API is Direct3D or OpenGL, we can write the shader in Cg. We can also generate HLSL or GLSL code from Cg code using the Cg compiler, which eases porting when needed.

Consider a case where you want to write a program that displays a triangle as viewed from a particular angle in a virtual 3D space. What will you do? You will call the corresponding Direct3D or OpenGL API to draw a triangle, passing it the three vertices and the angle from which it is to be viewed. Yes, the application is complete. Now consider that we are not looking at the triangle straight from the front. We cannot directly display this 3D information, right? We need to do some processing to make it appear 3D in a 2D space, and that is what shaders help us with. There are three types of shaders.

Vertex shaders are run once for each vertex given to the graphics processor. Their purpose is to transform each vertex's 3D position in virtual space into the 2D coordinate at which it appears on the screen (along with a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline: a geometry shader if one is present, or otherwise the rasterizer, which converts it into discrete pixels.
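To get a feel for it, here is a minimal sketch of such a vertex shader in Cg. The semantics (POSITION, COLOR) and the mul function are standard Cg; the structure and parameter names are just my own illustration:

// Minimal Cg vertex shader sketch: transform a vertex from model
// space to clip space using the model-view-projection matrix.
struct VertexOutput {
    float4 position : POSITION;  // projected position (plus depth for the Z-buffer)
    float4 color    : COLOR;     // passed through unchanged
};

VertexOutput main(float4 position : POSITION,
                  float4 color    : COLOR,
                  uniform float4x4 modelViewProj)
{
    VertexOutput OUT;
    OUT.position = mul(modelViewProj, position);  // the 3D-to-2D transform
    OUT.color    = color;   // properties can be manipulated, but no new vertices created
    return OUT;
}

The application would supply modelViewProj (built from the viewing angle in our triangle example), and the shader runs once per vertex, exactly as described above.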

Geometry shaders can add and remove vertices from a mesh. Geometry shaders can be used to generate geometry procedurally or to add volumetric detail to existing meshes that would be too costly to process on the CPU. If geometry shaders are being used, the output is then sent to the rasterizer which converts it into discrete pixels.

Pixel shaders, also known as fragment shaders, calculate the color of individual pixels. The input to this stage comes from the rasterizer, which fills in the polygons being sent through the graphics pipeline. Pixel shaders are typically used for scene lighting and related effects such as bump mapping and color toning.

We can program these shaders, using HLSL, GLSL, or preferably Cg, to control the way in which shading is applied. In our example, if we want a green square shaded in the middle of the triangle we drew, we don't have to modify our API calls. We just write a pixel/fragment shader program that applies the desired shade to each pixel in the middle of the triangle, and bind that program to the application before the API is called.
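As a rough sketch of that idea in Cg (the "middle" test here is my own illustration, using the interpolated texture coordinate to locate pixels near the center of the primitive):

// Minimal Cg fragment shader sketch: shade pixels near the middle
// green, and leave every other pixel with its interpolated color.
float4 main(float4 color    : COLOR,
            float2 texCoord : TEXCOORD0) : COLOR
{
    // Treat a small square around texCoord (0.5, 0.5) as "the middle".
    if (abs(texCoord.x - 0.5) < 0.1 && abs(texCoord.y - 0.5) < 0.1)
        return float4(0.0, 1.0, 0.0, 1.0);  // green
    return color;                           // other pixels are untouched
}

The rasterizer calls this once per pixel of the filled triangle, so the green square appears without any change to the drawing code itself.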