2D using OpenGL

From Allegro Wiki

Using 3D hardware to draw 2D images offers great speed advantages over using the traditional 2D Allegro functions. Alpha blending, in particular, becomes much faster. In addition, you can add new effects to your application that Allegro never offered, such as linear filtering to your bitmaps to reduce the pixelated look when stretched out.

Often I hear people wish they could use OpenGL to help draw their two dimensional objects, but OpenGL and 3D graphics seem too complicated to be worth the confusion.

Using OpenGL to draw 2D images requires a little knowledge about how OpenGL draws images, but once you understand basics like matrices and screen/texture coordinates, the rest becomes just as easy as using the Allegro library.

In this tutorial, you will learn the basics of using Allegro and AllegroGL to draw bitmaps to the screen. By the end of the tutorial, you should be able not only to display images, but also to emulate many of Allegro's advanced blitting and rendering functions, such as alpha blending and sprite rotation.

Part 1: Setting the Environment

You should have Allegro and AllegroGL installed, and have a decent understanding of 2D drawing using Allegro.

Traditionally, 2D drawing is based on a grid system where an increase in the Y dimension places a pixel lower on the screen. For example:

 (0,0) +----------------> +x
       |
       |
       |
       |
       v
      +y

Each point represents a pixel, and pixel positions are whole numbers. In other words, all pixels are addressed by integers; we cannot have a pixel at 2 and a half on the X or Y dimension.

OpenGL (like all 3D systems) works a bit differently. As you know, you can run Quake and Unreal at whatever resolution you wish. A point defined for a 640x480 screen will show up in the same relative spot at any resolution! Why is this?

OpenGL will let you define the coordinate system of your screen, or where things get put on the screen. Here's a common example:

               +y (0, 1)
                |
                |
  (-1, 0) ------+------ (1, 0) +x
                |
                |
               (0, -1)

This format should remind you of high school geometry. Y goes up, and (0,0) is in the center of the screen. X goes from -1 to 1, as does Y. Screen points are now defined by floats rather than integers, so you have essentially made a resolution-independent drawing surface with sub-pixel accuracy. For example, (-0.5, 0.5) points to a location in the upper left quadrant of the screen.

The second thing you need to know is how a 3D drawing system can use the Z coordinate to draw 3D objects. The most popular way is to use the Z coordinate to scale the X and Y positions toward the center of the screen. For example, "screen_x = x / z" gives us a normal, things-get-smaller-in-the-distance perspective.

The other way uses something called an orthographic projection. With an orthographic projection, we simply throw away the Z value when drawing the object. This is useful in CAD programs and most 3D mesh editors, so you can easily tell when a line or wall is directly behind something, or the same size, without it getting smaller in the distance. It's ESPECIALLY useful for drawing 2D images to the screen because it gives us a drawing surface where Z doesn't matter, only X and Y!

Awesome, so let's look at code to see how we would set up our orthographic projection:

//Starting Allegro and AllegroGL.
  allegro_init();
  install_allegro_gl();

//Setting our.. well... settings.  =)  Read the AllegroGL documentation
//for a more in-depth explanation of what these all do, but they are
//pretty self-explanatory.
//Note: Z_DEPTH isn't mandatory for 2D drawing.  It's only useful for
//sorting 3D objects, so when you draw an object behind another, it
//doesn't get drawn on top of it.  You CAN use it for 2D drawing,
//but it's REALLY not necessary.
  allegro_gl_set(AGL_Z_DEPTH, 8);
  allegro_gl_set(AGL_COLOR_DEPTH, 32);
  allegro_gl_set(AGL_DOUBLEBUFFER, 1);
  allegro_gl_set(AGL_SUGGEST, AGL_Z_DEPTH | AGL_COLOR_DEPTH | AGL_DOUBLEBUFFER);

//Setting the graphics mode:
  set_gfx_mode(GFX_OPENGL_WINDOWED, 640, 480, 0, 0);

//Setting a state where I am editing the projection matrix...
  glMatrixMode(GL_PROJECTION);

//Clearing the projection matrix...
  glLoadIdentity();

//Creating an orthographic view volume going from -1 to 1 in each
//dimension on the screen (x, y, z).
  glOrtho(-1, 1, -1, 1, -1, 1);

//Now editing the model-view matrix.
  glMatrixMode(GL_MODELVIEW);

//Clearing the model-view matrix.
  glLoadIdentity();

//Disabling the depth test (z will not be used to tell which object
//will be shown above another, only the order in which I draw them.)
  glDisable(GL_DEPTH_TEST);

OK, so let's see what I did: First I started Allegro and AllegroGL and set all my options (32bpp, use a back-buffer, etc...) Everything up to setting the graphics mode is pretty self-explanatory.

Now I do some stuff regarding matrices. Matrices are mathematical constructs much like arrays and multidimensional arrays. I'm not going to go into much detail here, but suffice it to say, when you multiply coordinates by a matrix, you get a new set of points back. For example, multiply the coordinates (1, 2, 3) by a translation matrix that moves all points +2 on each axis, and the result is (3, 4, 5). This is done through matrix multiplication, and it IS NOT COMMUTATIVE: MATRIX_M * MATRIX_N does not equal MATRIX_N * MATRIX_M. You can also COMBINE matrices by multiplying them together. Another popular matrix is a rotation matrix, which rotates points. If you multiply a rotation matrix by a translation matrix, you get a single matrix that moves and then rotates an object! :o

Matrices are VERY important in 3D programming and game programming. You can read more about them in the OpenGL Red Book.

So when I call glMatrixMode(GL_PROJECTION), I go into a mode where matrix operations work with the projection matrix. glLoadIdentity() loads an identity matrix, which acts like a 1 in multiplication (multiply any point by it and you get the same point back). The projection matrix is the matrix that sets the clipping volume and so on. Remember that OpenGL is very much a state machine: you basically turn stuff on or off, and set modes such as anti-aliasing or alpha blending. Setting a color will make all objects drawn after that call have that color.

So now we set our orthographic projection matrix with glOrtho. Here's the actual definition of glOrtho from the OpenGL manual:

void glOrtho(GLdouble left,
             GLdouble right,
             GLdouble bottom,
             GLdouble top,
             GLdouble zNear,
             GLdouble zFar);

In other words: this defines our screen coordinates.

I set the zNear and zFar to -1 and 1 respectively since we will always draw our 2D objects at z equal to zero.

Now I switch to the model-view matrix mode. Remember earlier how I said matrices are often used to move objects and the like? That's the model-view matrix!

Think of models as meshes: 2D squares or 3D cubes, for example. We define them around the point (0,0). Then if we want to move (translate) them, we use OpenGL to create a translation matrix which moves the mesh somewhere else. The same is done with scaling and rotation. This is how objects are moved and rotated in games like Unreal. Similarly, the camera is just a set of rotations and translations applied to each object to move it around on the screen; for example, if you want to move the camera to the right, you create a matrix that moves objects to the left. OpenGL also keeps a stack of matrices: glPushMatrix copies the current matrix onto the top of the stack, so you can set up a matrix (like a camera), push a copy, move an object into position, draw it, pop the matrix back off, rinse and repeat.

The order of operations on matrices is kind of reversed. Multiplying an identity matrix by a rotation matrix, then multiplying that by a translation matrix, will first move THEN rotate the object. This is important, since rotations occur around the point (0,0): rotating then translating gives completely different results than translating then rotating. Remember that matrix multiplication is not commutative; A * B does not equal B * A. :D

I'll get back to translation and what-not later, but I want to finish explaining the rest of my example code. The last thing I did was call glDisable(GL_DEPTH_TEST), which turns off depth testing. In other words, what shows up on screen depends purely on draw order: whatever I draw last appears on top. You can enable depth testing instead and use Z values to sort your objects, so you can draw them in any order you like. Remember that in orthographic projection the Z value is thrown away when positioning the object on screen, though, so things won't look bigger or smaller even when you use a Z value.

Part 2: Displaying a Box!

So now we are ready to start drawing. Let's do something really simple, like making a blue box! Here's some code:

//Begin drawing quads
glBegin(GL_QUADS);
  //Define the color (blue)
  glColor3ub(0, 0, 255);

  //Draw our four points, clockwise.
  glVertex3f(-0.5, 0.5, 0);
  glVertex3f(0.5, 0.5, 0);
  glVertex3f(0.5, -0.5, 0);
  glVertex3f(-0.5, -0.5, 0);
glEnd();

All drawing goes between glBegin(...), and glEnd(). We first called glBegin(GL_QUADS), which tells OpenGL that we are going to start drawing quads (rectangles, squares, etc...). You can also say you'll be drawing triangles, or triangle fans, but all we need are quads right now.

glColor3ub(...) sets the red, green, and blue values respectively as unsigned bytes (notice the ub at the end of the function, which stands for unsigned byte). A lot of these OpenGL drawing functions let us use different data types for defining stuff. glColor3f(...) lets us use floats and define our rgb components as values between 0 and 1. Similarly, glColor4ub(red, green, blue, alpha) lets us define all 4 color components (red, green, blue, and alpha). Once again, OpenGL works in states, so once we declare that we'll use a color, we'll keep using it until otherwise stated.

glVertex3f obviously defines a vertex for our quad. So we draw the first vertex, the next vertex, and so on until we have a quad: every four vertices defines a quad. Make sure to draw them in a consistent clockwise or counterclockwise order. =) You can actually speed up OpenGL a bit by culling the back-faces of quads whose back side you'll never see; which face is the front is determined by whether you list the vertices in clockwise or counterclockwise order. You can read about back-face culling in the OpenGL manual later, as it's more advanced than I want to get into for this tutorial.

Part 3: Translation, Rotation, and Scaling Our Quad.

We can move (translate), rotate, and scale (make bigger or smaller) our quad fairly easily by multiplying the model-view matrix by translation, rotation, and scaling matrices. Here's an example:

//Load an identity matrix so we don't multiply the current model-view
//matrix by the last frame's matrix.  AKA reset the matrix.
glLoadIdentity();
//Move 0.2 on the x axis.
glTranslatef(0.2, 0.0, 0.0);
//Halve the size of the object on the x and y axis (leave z alone).
glScaled(0.5, 0.5, 1.0);
//Rotate 30 degrees around the Z axis.
glRotatef(30.0, 0, 0, 1.0);

glBegin(GL_QUADS);
  glColor3ub(0, 0, 255);
  glVertex3f(-0.5, 0.5, 0);
  glVertex3f(0.5, 0.5, 0);
  glVertex3f(0.5, -0.5, 0);
  glVertex3f(-0.5, -0.5, 0);
glEnd();

IMPORTANT: Despite what you're thinking, it will rotate, THEN scale, THEN translate. It's sort of a reverse order.

Anyways, as you can see, no nasty matrix math needed! Just call glTranslatef, glScaled, and glRotatef to multiply the current matrix by a translation, scaling, and rotation matrix respectively.

Part 4: Loading a Texture.

There are two ways of loading a texture. Either manually, or with AllegroGL. Briefly I'd like to explain how OpenGL uses textures.

Each texture is given a number (index / name / handle), and when you bind that number, it tells OpenGL you're going to be using that texture.

So here are the steps to loading a texture by hand:

1) Have an array in an OpenGL format with all your data in it, like a normal bitmap. By OpenGL format, I mean something like: store an unsigned byte of red, then green, then blue, then alpha for each pixel, for the GL_RGBA format.

2) Bind the texture number we are going to be uploading to video memory. If you do not have a number and want OpenGL to give one to you, call glGenTextures to generate those indexes for you. If you know a texture number isn't being used, you don't have to call glGenTextures. For example, if you set the handle to '5' for your first texture, that's fine. =) glGenTextures is optional.

3) Set the texture properties, such as how to sample the texture when scaled (nearest neighbor or linear). Use glTexParameteri.

4) Upload the data to video memory. Use glTexImage2D.

Here's an example:

GLuint texture_ref; //Texture number, reference, handle, etc...
int texture_height; //Height of the texture.
int texture_width; //Width of the texture.
GLubyte *texture_layout; //The texture bitmap, essentially...

//Load bitmap somewhere in here.

//Generate one texture index.
glGenTextures(1, &texture_ref);

//Tell OpenGL we will be working with that texture number.
glBindTexture(GL_TEXTURE_2D, texture_ref);

//Set the parameters for the texture (linear filtering when scaled).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

//Upload the texture to video memory.  Here we specify the format our 
//data is in as well as what we want the format in video memory to be.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture_width,
                texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                texture_layout);

Texture widths and heights must be a power of two (2, 4, 8, 16, 32, 64, 128...). So a good texture size would be 8x128 or 32x32. A texture size of 19x28 WILL NOT WORK.

Luckily there's a way around this. You can either stretch the image to fit a valid size or, preferably, blit the original image into the smallest valid texture size it fits in. Then you can specify the texture coordinates later based on the width and height of the original image. In fact, a single texture can hold many images if you want to get fancy.

Fortunately, loading a standard allegro bitmap into an OpenGL texture is fairly trivial. Use this function to convert an allegro BITMAP into an OpenGL texture:

GLuint allegro_gl_make_texture_ex(int flags,
                                  BITMAP *bmp,
                                  GLint internal_format);

The function returns an OpenGL texture handle to bind to later on to select that texture.

You can delete the original allegro BITMAP after it has been turned into an OpenGL texture.

OpenGL frees all textures automatically when the program exits, since the whole context is destroyed. While the program is running, though, call glDeleteTextures on any texture you no longer need before dropping its handle; otherwise the texture sits in video memory for the life of the program, which is effectively a leak. So my advice would be to preload all the textures you're going to use up front, or convert the allegro BITMAP data into an array of GLubytes and upload it with glTexImage2D yourself so you can pick the texture handle; AllegroGL does not let you specify a texture handle before uploading a texture. ;D

For more info on this function, read the AllegroGL documentation.

Part 5: Putting It All Together.

To use the texture, enable texture mapping with OpenGL, then bind the texture you wish to use. When drawing the texture, for each vertex, give a corresponding texture coordinate.

GLuint texture_number;
BITMAP *my_bitmap;

//Load something into the allegro bitmap here; make sure the dimensions 
//are a power of 2.  If not, scale it so it is, or just blit it into a 
//larger canvas that works.

//Load a texture.  AGL_TEXTURE_MASKED sets the alpha values of the final
//texture to 0 if the color of a pixel is (255,0,255).
texture_number = allegro_gl_make_texture_ex(AGL_TEXTURE_MASKED, my_bitmap, GL_RGBA);

//Enable texturing on all models from now on, and bind our texture.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture_number);

//Define how alpha blending will work and enable alpha blending.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);

glBegin(GL_QUADS);
  //If we set the color to white here, then the textured quad won't be
  //tinted red or half-see-through or something when we draw it based on
  //the last call to glColor*().
  glColor4ub(255, 255, 255, 255);

  //Draw our four points, clockwise.
  glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
  glTexCoord2f(1, 0); glVertex3f(0.5, 0.5, 0);
  glTexCoord2f(1, 1); glVertex3f(0.5, -0.5, 0);
  glTexCoord2f(0, 1); glVertex3f(-0.5, -0.5, 0);
glEnd();

There we go, we're done. If you blitted your allegro bitmap into a bigger bitmap, then turned the bigger bitmap into a texture, the texture coordinates are easy to find: just compute image_width / texture_width and image_height / texture_height (using floating-point division), assuming the upper left corner of your blit is at 0,0 of the new texture. Example:

 float tex_coord_width = (float)image_width / texture_width;
 float tex_coord_height = (float)image_height / texture_height;
 glTexCoord2f(0, 0); glVertex3f(-0.5, 0.5, 0);
 glTexCoord2f(tex_coord_width, 0); glVertex3f(0.5, 0.5, 0);
 glTexCoord2f(tex_coord_width, tex_coord_height); glVertex3f(0.5, -0.5, 0);
 glTexCoord2f(0, tex_coord_height); glVertex3f(-0.5, -0.5, 0);

Now you can use matrices to translate, rotate, and scale your bitmap as a texture.

Using glColor, you can tint your quad a color, or lower the alpha values to make it semi-transparent.

You can now do pretty much everything allegro can do... only with 3D acceleration.

Of course, in the example above, we threw the texture onto a square in the middle of the screen, ignoring the original image dimensions. Targeting specific pixels is pretty simple. First of all, remember that the screen is defined from -1 to 1 on the x and y axes, which gives us 2 units per dimension. So let's say we define our window dimensions as 640x480. Then for x, we map 0->640 to a value between 0 and 2 and subtract 1: coord_x = (pixel_x / 320.0) - 1. For y we also have to flip the axis, since pixel row 0 is the top of the screen but +1 is the top of our coordinate system: coord_y = 1 - (pixel_y / 240.0). Anyways, you should get the idea by now. The great thing is, you can resize the window and the image will just expand with the screen, even though it's designed for 640x480!

Of course, that's complicated. If you want to change the coordinate system to make it easier to draw, that's fine too. Let's say you want to use the old coordinate system where 0,0 is the top left part of the screen and 640,480 is the lower right, then just define the orthographic projection matrix as:

glOrtho(0, 640, 480, 0, -1, 1);

Now, you can define the vertexes of the quads as actual pixels on a 640x480 screen, using the same coordinate system you would use in allegro when drawing to a 640x480 bitmap.

glOrtho(0, 640, 480, 0, -1, 1);
//.. Load and bind a texture here.

  glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
  glTexCoord2f(1, 0); glVertex3f(640, 0, 0); 
  glTexCoord2f(1, 1); glVertex3f(640, 480, 0);
  glTexCoord2f(0, 1); glVertex3f(0, 480, 0);

So now you don't even have to worry about a new coordinate system.

Part 6: Conclusion

Now that you have a basic understanding of how OpenGL can draw your bitmaps on the screen, you can begin exploring on your own.

OpenGL gives you a powerful set of tools. Going to full 3D from here is as simple as defining a frustum with glFrustum instead of glOrtho, creating a viewing volume in which objects farther away from you become smaller.

Also, there are a lot of neat tricks for you to discover on your own when using OpenGL. For example, quads can be any shape as long as they have four sides, so you can shear your bitmap into a parallelogram on the screen.

A good place to start learning more about OpenGL is to read the OpenGL Red Book on-line.

Have fun exploring OpenGL, and for the first time feel free to use transparency liberally!

--NyanKoneko 01:28, 28 May 2005 (EDT)