ID:185033
 
I would really appreciate it if someone could just compile and run the following code and tell me what the output is. I would, but my C++ compiler is broken, and I can't fix it.
#include <windows.h>
#include <iostream.h>

int main(int argc, char *argv[])
{
    cout << PW_CLIENTONLY;
    Sleep(2000);
    return 0;
}

I'm trying to use the PrintWindow API function, and I want to use the PW_CLIENTONLY constant as a flag for the last argument so that it gets only the client area.

I don't know what value PW_CLIENTONLY is supposed to have; MSDN doesn't say what values the constants have, and APILOAD doesn't include it. I tried Google, but the few examples I found there claim that it should be 1; however, that doesn't work.

Since windows.h comes with all the constants defined for you, I thought this should give a simple and concrete answer.
You could always do a find-in-files in your library directory. =) (Searching in windows.h probably won't work since IIRC it pulls in a whole bunch of other header files.)
Every time you include iostream.h, you kill an angel. Use iostream instead.

Anyways, PW_CLIENTONLY is in fact 1. Actually, 0x00000001. It's defined in the header.
In response to Audeuro
Audeuro wrote:
Every time you include iostream.h, you kill an angel. Use iostream instead.

Why do you say that? Not including the .h just makes it default to .h anyway, or such is my understanding of it from when I learned. Also, there are some header files that don't work properly when you try to just leave them to default.

Anyways, PW_CLIENTONLY is in fact 1. Actually, 0x00000001. It's defined in the header.

Thanks, and I will also say dang it. I wonder why it is not working for me. Maybe it's due to a conflict of data types between VB and the DLL. I do know that VB's integers and long integers default to signed; does anyone know if there's a way to make them unsigned? I don't know what else to try.

I suppose it's not a huge deal right now anyway, as I got my program to work around it. It would be nice though.
In response to Loduwijk
Loduwijk wrote:
Audeuro wrote:
Every time you include iostream.h, you kill an angel. Use iostream instead.

Why do you say that? Not including the .h just makes it default to .h anyway, or such is my understanding of it from when I learned. Also, there are some header files that don't work properly when you try to just leave them to default.

No, it doesn't default to .h. iostream.h was deprecated (or is in the process of being deprecated). IIRC, it was deprecated because its contents were not in the std namespace, but I might be mistaken there.

For further proof:

Try including just plain "windows" (no .h). You'll get errors.

Further further proof:

Look in your include directory. There's iostream, then iostream.h. Two separate files.
In response to Loduwijk
Loduwijk wrote:
I do know that VB's integers and long integers default to signed; does anyone know if there's a way to make them unsigned? I don't know what else to try.

It won't matter. Representations of signed and unsigned integers are exactly the same for low positive values (and zero).

If you're curious, "low" in this case means less than 2^(n-1), where n is the number of bits in the integer. (On the usual 32-bit systems, a standard int has 32 bits and a short has 16; a long is at least 32, and 64-bit longs show up on some platforms. There are other variations too.)

But the number 1 definitely qualifies no matter how many bytes you use. =)
In response to Crispy
Right, that makes sense. I wasn't thinking properly.

Signed integers just use the first bit to determine sign, don't they?
In response to Audeuro
Yet another thing the C++ book I learned from led me wrong on. :(

I have found too many things that are just completely bogus in this ruinous compilation of junk. I should just go out and buy one of those large volumes on C++ that people recommend for accurate and extensive information.
In response to Loduwijk
Loduwijk wrote:
Signed integers just use the first bit to determine sign, don't they?

Yes and no. You can use the first bit to determine whether a signed integer is negative (1 means negative, 0 means zero or positive), but you can't negate a number just by flipping the first bit. Wikipedia has more.
In response to Crispy
So 0 = 0, 1 = 1, 10 = 2, etc. as normal, but 11111111 = -1, 11111110 = -2, 11111101 = -3, etc.? Why not just leave the magnitude the way it was and use the first bit to mark negatives? Why not have 10000001 = -1, 10000010 = -2, etc.?

This seems like one of those systems where someone said "How can we make it as confusing as possible, almost unreadable, and still have it work?"

I don't see how that would be better at all. It is more difficult to read, and it appears as though it would require more calculating on the computer's part to use it.
In response to Loduwijk
It looks odd, but it actually simplifies the hardware a lot. With two's complement, the hardware can just add two integers without having to worry about whether either one is signed; a sign-bit representation needs extra circuitry to figure out how to handle the sign bits, etc. etc., which makes things a lot more complicated.

When using a sign bit, there's also the problem that you have TWO zeroes... 10000000 and 00000000. This means that you need additional logic to do integer comparisons, which kinda sucks. Using two's complement, comparison is really really easy; but with the sign-bit system you have to screw around a lot more.

Also, how does the hardware know when to treat 10000000 and 00000000 as being equal and when to not do so? It doesn't know whether a particular location in memory is a signed integer, an unsigned integer, part of a floating-point number, a pointer, part of a string, or anything like that. It's just a bunch of ones and zeroes, with no semantic information attached. If 10000000 and 00000000 are pointers then they're definitely not equal (that would cause some weeeeird bugs), but if they're signed integers then they are? How is the hardware supposed to know the difference?

Does two's complement make it more difficult for humans to interpret the data? Yes. But internal data representations are not supposed to be human-readable, just machine-usable. If you want to interpret a two's complement integer, you should really be getting a machine to do the base-10 translation for you. =) Manual arithmetic is not encouraged...