 
Okay, I need to know why this isn't working; all OpenGL gurus are welcome to assist. After the window has been created and the context set up, I've called glEnable(GL_TEXTURE_2D) as required and done the other setup work. Here's the part of that setup that follows the glEnable() call:
glGenTextures(1, &texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, texval);

Now, texval is an unsigned char array 3072 bytes long (32x32x3), filled with variations of a color in RGB order. The display loop works something like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, texid);

glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(x,y,z);
glTexCoord2f(0.0, 1.0); glVertex3f(x,y+1.0f,z);
glTexCoord2f(1.0, 1.0); glVertex3f(x+1.0f,y+1.0f,z);
glTexCoord2f(1.0, 0.0); glVertex3f(x+1.0f,y,z);
glEnd();
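
For reference, texval gets filled more or less like this before the upload (just a sketch -- the real values are variations of one color and the exact pattern doesn't matter):

unsigned char texval[32 * 32 * 3]; /* 3072 bytes, tightly packed RGB */
int i;
for (i = 0; i < 32 * 32; i++) {
    texval[i * 3 + 0] = 180;                            /* R */
    texval[i * 3 + 1] = (unsigned char)(90 + (i & 63)); /* G, varied a little */
    texval[i * 3 + 2] = 60;                             /* B */
}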

All this is kosher. Problem is, no texture displays: only a white square. However--and this is the weird part--if I call glTexImage2D() as follows, I can get a solid color to appear:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, texval);

Obviously I have no use for a 1×1 texture, but I don't see why it works while any larger size like 2×1 or 32×32 fails. And it's not even a proper failure, because as far as I can tell the data is being accepted into the texture. No error code is generated when this happens--it simply doesn't draw the texture.
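
The error check is nothing fancy, by the way -- roughly this, right after the glTexImage2D() call (a sketch; the actual reporting code is a bit different):

GLenum err = glGetError(); /* queried right after the upload */
if (err != GL_NO_ERROR)
    fprintf(stderr, "glTexImage2D error: 0x%x\n", err); /* never triggers */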

Note that I'm using no lighting settings, and the projection is orthographic.

So, any idea why OpenGL would simply choke on any 2D texture bigger than 1×1?

Lummox JR
Just to point out, you can only upload bitmaps with power-of-2 dimensions, like 2x2, 32x32, 128x128, 256x256, etc. Also, glVertex3f, glVertex2d, or any of the glVertex*(f/d) calls are in general bad these days; they cause a lot of overhead. Using vertex buffers is a much better/faster solution. Or switching to DirectX 9 =) and using its sprites, which is faster... actually I think OGL 2.0 has point sprites, which could replace your quads altogether (depending on how you're using them). One more note: quads are slow; triangle strips are faster in OGL, while in DirectX triangle lists are faster (depending on how you use them).
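
For example, the same quad as a four-vertex triangle strip, drawn with plain client-side vertex arrays instead of glBegin/glEnd, would look something like this (just a sketch -- a real vertex buffer setup adds the buffer object calls on top of this):

GLfloat verts[] = {
    x,        y + 1.0f, z,  /* top-left */
    x,        y,        z,  /* bottom-left */
    x + 1.0f, y + 1.0f, z,  /* top-right */
    x + 1.0f, y,        z   /* bottom-right */
};
GLfloat texcoords[] = {
    0.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 1.0f,
    1.0f, 0.0f
};

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texcoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);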

I modified a NeHe example and placed a macro

#define ORTHO

in there so that it will run in ortho mode and display the texture. There are 3 textures: 256, 128, and 32. You can check your code against it if you want. I hope it helps, man.
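
Roughly, all the ORTHO path does is swap the perspective projection for an orthographic one -- something like this (not the exact code from the page, just the idea; winWidth/winHeight are whatever your window size is):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, winWidth, 0.0, winHeight, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();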

http://www.ethereal-studios.com/night/ogl/

good luck! :) If this isn't the problem then just tell me and I can try to help some other way.
In response to Goten84
Goten84 wrote:
Just to point out, you can only upload bitmaps with power-of-2 dimensions, like 2x2, 32x32, 128x128, 256x256, etc.

Knew that, though 32x32 is a power of 2, so that shouldn't have been the problem.

Also, glVertex3f, glVertex2d, or any of the glVertex*(f/d) calls are in general bad these days; they cause a lot of overhead. Using vertex buffers is a much better/faster solution.

I'm not sure if that'd make any difference for my goals.

Or switching to DirectX 9 =) and using its sprites, which is faster... actually I think OGL 2.0 has point sprites, which could replace your quads altogether (depending on how you're using them).

I can't really go with DirectX or OpenGL 2.0 because my goal is maximum compatibility.

One more note: quads are slow; triangle strips are faster in OGL, while in DirectX triangle lists are faster (depending on how you use them).

Thanks! I wasn't aware of that, but any little bit of speed I can eke out here is great.

I checked out your example and noticed it had just one thing in common with the texture example I'd been working from: it set the minification and magnification filters. I'd figured that example only did so for its own purposes, since the filters were what it was demonstrating. But just to be safe, I added the relevant calls, and suddenly it worked.
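
In case anyone else hits this, the setup now looks roughly like so (the two glTexParameteri() calls are the new part; the filter modes shown here are just one choice):

glGenTextures(1, &texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, texval);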

So it's all good. Thanks much for the speed tip and the alternate example.

Lummox JR
Lummox JR wrote:
glGenTextures(1, &texid);
glBindTexture(GL_TEXTURE_2D, texid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, texval);


My best guess is that it's because you aren't setting up the texture filters via glTexParameter*().

E.g.:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
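
For what it's worth, the reason that matters: the default GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR, which expects a full mipmap chain. With only level 0 uploaded, the texture is incomplete, and sampling it behaves as if texturing were disabled -- hence the plain white quad. A 1x1 image is the one size whose level 0 already is a complete chain, which is why that case seemed to work. Setting the min filter to GL_LINEAR or GL_NEAREST drops the mipmap requirement; the other route is to actually build the mipmaps, e.g. with GLU (a sketch):

gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 32, 32, GL_RGB, GL_UNSIGNED_BYTE, texval);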
In response to Malver
Malver wrote:
My best guess is that it's because you aren't setting up the texture filters via glTexParameter*().

Yup, that was it.

Lummox JR