Day 7: Textures

So far we can draw basic primitives, but primitives alone cannot make a game without pictures to complete them. Today we learn about textures and how they let us display images on our primitives.

What is a texture?

A texture is a 2D representation of the surface of an object. The object itself might be 2D or 3D. In order to understand what textures are, take a glance at the following figure.

Earth With and Without a Texture

The two spheres are identical. The only difference is that the sphere on the right has the image below “wrapped around” it.

The Earth Texture

This picture is called a texture, and the process of wrapping it around the object is called texture mapping. You can see how important textures are: practically every game wraps loads of textures around its objects. For us, textures are particularly important because they are all we need. Our 2D game engine will not create 3-dimensional meshes; it will simply display and animate textures on the screen.

Textures in OpenGL ES

Texture characteristics

A texture in OpenGL is an object that contains one or more images that have the same image format. There are three defining characteristics of a texture:

  • Image Format: How the pixel data is stored in memory (the number of color channels, and the type and size of each channel). All images in the texture must share this format.
  • Texture Size: The dimensions of the images in the texture.
  • Texture Type: The kind of texture, described below.

There are three (main) types of textures that OpenGL supports:

  • 1D Textures: Images in this texture all are 1-dimensional. They have width, but no height or depth.
  • 2D Textures: Images in this texture all are 2-dimensional. They have width and height, but no depth. You probably have a lot of these images on your computer.
  • 3D Textures: Images in this texture all are 3-dimensional. They have width, height, and depth.

When creating a texture in OpenGL, you must specify all of these characteristics. Each texture also has a unique identifier (handle). OpenGL generates this identifier for us when we request it at texture creation time, and all later references to the texture go through this identifier.

Texture mapping

Now we get to the question of how we wrap textures around objects. The figure below shows the OpenGL texture coordinate system. As you can see, regardless of the dimensions of the original image, textures always have unit width and height. You can use floating-point numbers to address points in between.

Texture Coordinate System

We learned about vertex buffers on day 6. A similar concept exists for textures. When we want to instruct OpenGL ES to wrap a texture around an object, we simply tell it where each vertex lies in the texture. So all we need is a list of coordinates in the texture coordinate system that associates each vertex with a point in the texture. In our rectangle example, we simply assign each vertex to one of the corners, so we won’t be using anything other than 0’s and 1’s, as sketched below.
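To make this concrete, here is a minimal, illustrative sketch of a texture coordinate buffer for the four corners of the rectangle. The names are my own, and which image corner maps to which vertex has to match the vertex order we used on day 6.

```java
// Illustrative sketch; uses java.nio.ByteBuffer, ByteOrder and FloatBuffer.
// One (s, t) pair per vertex; the order must match the rectangle's vertices.
private static final float[] TEXTURE_COORDS = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f
};

private static FloatBuffer createTextureCoordBuffer() {
    // OpenGL ES needs a direct buffer in the platform's native byte order.
    FloatBuffer buffer = ByteBuffer
            .allocateDirect(TEXTURE_COORDS.length * 4)  // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    buffer.put(TEXTURE_COORDS).position(0);
    return buffer;
}
```

Just like the vertex buffer from day 6, this has to be a direct buffer in native byte order so OpenGL ES can read it.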

Texture wrap

As we said, texture coordinates are between zero and one. But what happens if we specify a value outside these limits? What if we go negative or greater than one? Is it completely illegal? Not at all. It is completely legal to use any value for texture coordinates, but you will exit the image boundaries if you do so. OpenGL has several ways of handling out-of-boundary coordinates, called wrap methods. In fact, out-of-boundary coordinates are so legal that there is a setting that selects which wrap method OpenGL should employ. Below is a list of wrap methods:

  • Clamp: Clamps the texture coordinate to the [0, 1] range. Near the limits, the edge color of the texture is blended with the border color.
  • Clamp to Border: Everything outside the boundaries is drawn as a solid color (the border color).
  • Clamp to Edge: Everything outside the boundaries is drawn with the pixels on the edge of the exceeded boundary.
  • Repeat: Creates a repeating pattern. For example the pixels in the range (1, 2) are the same as the pixels in the range (0, 1). This is the default value.
  • Mirrored Repeat: Differs from repeat in that each repeated image is the mirror image of the previous one. For example, the pixels in the range (-1, 0) are mirrored compared to (0, 1), but identical to the ones in the range (1, 2).

Repeat and mirrored repeat methods can be used to create tile patterns. The figure below depicts all these methods:

Sample Texture Image
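For reference, here is a minimal sketch of selecting a wrap method, assuming Android’s OpenGL ES 1.x bindings (the GLES10 class) and that the texture is already bound. Repeat and clamp to edge are the two methods available in OpenGL ES 1.x without extensions.

```java
// Minimal sketch; uses android.opengl.GLES10 and assumes the texture is
// already bound to GL_TEXTURE_2D. S is the horizontal axis, T the vertical.
public static void setWrapMethod(float wrapMethod) {
    GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
            GLES10.GL_TEXTURE_WRAP_S, wrapMethod);
    GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
            GLES10.GL_TEXTURE_WRAP_T, wrapMethod);
}

// Usage: setWrapMethod(GLES10.GL_REPEAT) or setWrapMethod(GLES10.GL_CLAMP_TO_EDGE).
```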

Mip maps

When a texture is directly applied to a surface, how many pixels of that texture (commonly called “texels”) are used depends on the angle at which that surface is rendered. A texture mapped to a plane that is almost edge-on with the camera will only use a fraction of the pixels of the texture. Similarly, looking directly down on the texture from far away will show fewer texels than an up-close version.

The problem is that when you slowly zoom out on a texture, you start to see aliasing artifacts appear. These are caused by sampling fewer than all of the texels; the choice of which texels are sampled changes between different frames of the animation. Even with linear filtering (see below), artifacts will appear as the camera zooms out.

To solve this problem, we employ mip maps. These are pre-shrunk versions of the full-sized image. Each mip map in the chain is half the size of the previous one; the length of the chain is determined by the largest dimension of the image. So a 64×16 2D texture can have 6 mip maps: 32×8, 16×4, 8×2, 4×1, 2×1, and 1×1. OpenGL does not require that the entire mip map chain is complete; you can specify what range of mip maps in a texture are available.
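As a quick sanity check of the numbers above, here is a tiny illustrative helper (not part of any API) that counts how many mip map levels sit below a base image:

```java
// Illustrative helper: counts the mip map levels below the base image by
// repeatedly halving the largest dimension until it reaches one.
public static int mipMapCount(int width, int height) {
    int largest = Math.max(width, height);
    int count = 0;
    while (largest > 1) {
        largest /= 2;
        count++;
    }
    return count;
}

// mipMapCount(64, 16) == 6, matching the 64x16 example above.
```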

Dimension constraints

Deciding how big your textures should be depends partly on your specific application, but it also depends on the capabilities of your target device. When it comes to Android and OpenGL ES, these capabilities vary greatly: there are thousands of devices with different sets of capabilities. OpenGL ES has routines to query these capabilities, but there are still some devices that don’t respond to these queries, or respond with wrong information (even among well-known brands).

One of the most important things about texture dimensions is that all dimensions should be powers of two (16, 64, 1024, etc.). This is mainly for mip map generation. Historically, computer hardware was not fast enough to support non-power-of-two (NPOT) textures, so all textures were required to abide by this rule. Nowadays NPOT textures are allowed in OpenGL, although they are not encouraged. But when it comes to mobile devices, there is a different story. Some models still don’t allow NPOT textures, and NPOT textures are sometimes reported to be slower than power-of-two textures. So to make sure your textures behave correctly on all devices, you should always use powers of two for their dimensions.
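If you load images whose sizes you don’t fully control, small illustrative helpers like these (the names are my own) can check and fix the dimensions:

```java
// Illustrative helpers (not part of any API) for the power-of-two rule.
public static boolean isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}

// Rounds a dimension up to the next power of two, e.g. 200 becomes 256.
public static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) {
        p *= 2;
    }
    return p;
}
```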

Another question is: how big is too big? In the earth example at the beginning of this text, you cannot use a full-resolution satellite map where people become visible if you zoom in enough. That is definitely too big an image, and no memory can hold it. But what a reasonable size is depends on the device. Each device has a limit on texture dimensions that may differ from other devices. Normally, anything bigger than 2048 pixels in a dimension is too big. There are devices that support more (normally devices with HD or higher resolutions). But unless your target device is very old, 2048 (or 1024 to be more conservative) is safe to use on most devices.
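The limit itself can be asked from the driver. Here is a minimal sketch, assuming the GLES10 bindings and a current GL context; keep in mind the caveat above that some devices report wrong values.

```java
// Minimal sketch; uses android.opengl.GLES10. Queries the largest texture
// dimension the device allows. Must run on the GL thread with a current context.
public static int maxTextureSize() {
    int[] size = new int[1];
    GLES10.glGetIntegerv(GLES10.GL_MAX_TEXTURE_SIZE, size, 0);
    return size[0];
}
```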

Our texture implementation

We said all this to come to implementing our own texture drawing module. On day 6 we drew a rectangle. Frankly, that’s the only sort of primitive we will ever draw in this engine. Since we are working in 2D, all we need to draw is textures, and we need a “billboard” to map these textures to. The rectangle we made is going to serve as that billboard.

What we are going to do is design a wrapper class for textures that combines the concepts of primitive drawing and texture mapping, and provides an abstract means to simply draw 32-bit bitmap resources on the screen (by bitmap I mean 2D images, not the file format). We do not want the OpenGL ES back-end to be visible to the user of our framework.

Implementation summary

We are going to create an object that draws any texture on the screen with given parameters. We also support transparency, so we can use PNG images to draw arbitrarily-shaped objects. We will be creating textures from Android Resources (res directory of your project). The texture class will have the following routines:

1. Texture identifier generator

Referring to textures in OpenGL is not done through object references or pointers, but rather through texture identifiers. In order to create a texture, we need a unique identifier for it. Fortunately, OpenGL gives us the means to generate this identifier. For ease of use, we will create a static method in our Texture class that simply generates this identifier for us.
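As a rough preview of that static method (the real implementation comes next day), the request boils down to a single glGenTextures call:

```java
// Minimal sketch; uses android.opengl.GLES10.
public static int generateTextureId() {
    int[] ids = new int[1];
    GLES10.glGenTextures(1, ids, 0);  // ask OpenGL ES for one unused texture name
    return ids[0];
}
```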

2. Dimension properties

Our texture class has methods to retrieve the width and height of the texture. We will keep these numbers locally as private fields so we can always access them without any complication. These fields should be read-only and are only modified when the texture is being loaded from the resource. We add get methods so the dimensions can be retrieved from the object.
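A minimal sketch of what these members could look like inside the Texture class (the names are illustrative):

```java
// Illustrative private fields and read-only accessors.
private int width;
private int height;

public int getWidth() {
    return width;
}

public int getHeight() {
    return height;
}
```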

3. Load from resource

This is the most important method in the class. It loads the image from the resource file and creates an OpenGL ES texture. Optionally, we can add different mip map levels, so zooming out will not cause unnatural results.
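Here is a rough sketch of what this loading step could look like, assuming the GLES10 bindings, Android’s GLUtils helper, and a texture identifier already generated as described above; the final engine code may differ.

```java
// Rough sketch; uses android.graphics.BitmapFactory, android.opengl.GLUtils
// and android.opengl.GLES10. Assumes textureId came from glGenTextures.
public void loadFromResource(Resources res, int resourceId, int textureId) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;  // keep the original pixel dimensions
    Bitmap bitmap = BitmapFactory.decodeResource(res, resourceId, options);
    width = bitmap.getWidth();
    height = bitmap.getHeight();

    GLES10.glBindTexture(GLES10.GL_TEXTURE_2D, textureId);
    GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
            GLES10.GL_TEXTURE_MIN_FILTER, GLES10.GL_LINEAR);
    GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
            GLES10.GL_TEXTURE_MAG_FILTER, GLES10.GL_LINEAR);

    // Upload the base image (level 0). Mip map levels could be added by
    // uploading scaled-down copies of the bitmap to levels 1, 2, and so on.
    GLUtils.texImage2D(GLES10.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
}
```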

4. Destroy the texture

When the user navigates away from the app, we pause our renderer. It might happen that the OpenGL ES context is freed in the meantime to save memory for other apps. So we have to keep this in mind, destroy our textures before pausing the app, and reload them when the app is resumed. This ensures that textures are always loaded properly. What the destroy method does is unload the texture and mark it as unloaded. The Stage then loads unloaded textures when needed.
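A minimal sketch of such a destroy method, reusing the illustrative textureId field and an illustrative loaded flag:

```java
// Minimal sketch; uses android.opengl.GLES10.
public void destroy() {
    int[] ids = { textureId };
    GLES10.glDeleteTextures(1, ids, 0);  // free the texture on the GPU
    textureId = 0;
    loaded = false;  // marks the texture as unloaded so it can be reloaded later
}
```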

5. Prepare and draw the texture

All this implementation exists so we can finally draw the texture, and that’s what this routine is for. We will map our texture onto the rectangle we made before, and draw the rectangle onto the screen. We will probably have two methods that do the drawing together. The reason for this split is that we will end up drawing some objects more than once (when we introduce effects). Redrawing the texture once everything is set up is only a small part of the whole drawing procedure. So we move all the drawing initialization code to one method, and the actual drawing to another, so we can redraw with less overhead. We call the initialization method prepare and the drawing method draw.
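As a rough sketch of this split, assuming the GLES10 fixed-function pipeline and the rectangle as a 4-vertex triangle strip (the exact vertex layout must match day 6):

```java
// Rough sketch; uses android.opengl.GLES10 and java.nio.FloatBuffer. Assumes
// vertices holds 4 vertices of 3 floats each, and texCoords is the buffer
// built earlier.
public void prepare(FloatBuffer vertices, FloatBuffer texCoords) {
    GLES10.glEnable(GLES10.GL_TEXTURE_2D);
    GLES10.glBindTexture(GLES10.GL_TEXTURE_2D, textureId);
    GLES10.glEnableClientState(GLES10.GL_VERTEX_ARRAY);
    GLES10.glEnableClientState(GLES10.GL_TEXTURE_COORD_ARRAY);
    GLES10.glVertexPointer(3, GLES10.GL_FLOAT, 0, vertices);
    GLES10.glTexCoordPointer(2, GLES10.GL_FLOAT, 0, texCoords);
}

public void draw() {
    // Draws the rectangle as a triangle strip. After one prepare(), draw()
    // can be called repeatedly with very little overhead.
    GLES10.glDrawArrays(GLES10.GL_TRIANGLE_STRIP, 0, 4);
}
```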

Next steps

Based on the amount of workload we have had for each day so far, implementation of the Texture class will have to wait until the next day. In the meantime, if anything about today’s material is unclear, please don’t hesitate to contact me through my email address (hessan@annahid.com). You might also have suggestions for how I can improve this guide; I am totally open to suggestions, as this is my first time doing this.