One thing I’m a real stickler about is what I call writing good platform code. By platform code, I mean code that shows a full understanding of both the platform it runs on and how readily it can be moved to other platforms.
Windows coders are often especially bad at this because, and this is partially Microsoft’s fault, they have spent years seeing sample code that does things like:
    template <typename T>
    T *some_function(void *base, size_t offset)
    {
        return (T *)((DWORD)base + offset);
    }
This code makes the dangerous assumption that a pointer will always fit inside a 32-bit integer. Nowadays Visual Studio tries to warn about this kind of mistake, and Microsoft introduced INT_PTR and UINT_PTR as replacements for casting pointers to DWORD. Yet this type of mistake persists.
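For comparison, here is a minimal sketch of the same helper written without the truncating cast. It uses uintptr_t, the standard counterpart of Microsoft’s UINT_PTR, which is guaranteed to be wide enough to hold a pointer value:

    #include <cstddef>
    #include <cstdint>

    // uintptr_t (UINT_PTR on Windows) can always hold a pointer value, so the
    // round-trip through an integer is safe on both 32-bit and 64-bit targets.
    template <typename T>
    T *some_function(void *base, size_t offset)
    {
        return reinterpret_cast<T *>(reinterpret_cast<uintptr_t>(base) + offset);
    }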
However, I argue that using Microsoft-specific types is a bad idea in its own right. Back in the Windows 3.1 days, Microsoft designed their structures and APIs around types whose names were hardcoded to that specific platform (16-bit x86). For example, WPARAM (originally a 16-bit WORD) and LPARAM (originally a 32-bit LONG) are both 32-bit on Win32 and both 64-bit on Win64, so the names no longer tell you anything about the actual width.
Another example of this is the long datatype. The C standard only guarantees that long is at least 32 bits; beyond that, its width follows the platform’s data model. On 16-bit Windows it was 32-bit while a plain int was 16-bit, and on 64-bit Unix-like systems (the LP64 model) it is 64-bit while int is 32-bit. Since it varies so wildly, using long is generally a very bad idea unless there is a specific need for it.
Unfortunately, Microsoft used long in many structures and API calls, and ran into the problem that making it 64-bit on Win64 would break a great deal of legacy code. So the 64-bit Visual C++ compiler keeps long as a 32-bit type (the LLP64 model). This makes long an even worse choice, since whatever intuition you had about its width breaks down on Win64.
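A quick sketch like the following makes the difference between the data models visible: a 64-bit Linux/GCC build (LP64) reports long as 8 bytes, while 64-bit MSVC (LLP64) reports 4.

    #include <cstdio>

    int main()
    {
        // LP64 (64-bit Linux, macOS): int = 4, long = 8, pointer = 8
        // LLP64 (Win64):              int = 4, long = 4, pointer = 8
        std::printf("int: %zu  long: %zu  pointer: %zu\n",
                    sizeof(int), sizeof(long), sizeof(void *));
        return 0;
    }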
I have seen countless programs that ignore good typing practices and make themselves unportable to other platforms for no reason other than poor style. For the best portability, be proactive; a short sketch illustrating these choices follows the list:
- Use the fixed-width intN_t/uintN_t types, for example int8_t or uint8_t instead of char or BYTE, when your intention is to store an 8-bit integer.
- Use intptr_t or uintptr_t for pointer math that has to go through an integer.
- Use size_t for sizes and counts that can’t go below 0. Don’t use int or DWORD when the value might not fit in a 32-bit integer.
- Avoid Microsoft typedefs unless you’re directly interacting with the Windows API. They have no place in code that might run on other platforms.
- Avoid long unless it’s part of an external API call.
- Avoid time_t in binary formats (such as files or network protocols). With GCC and VS2k3 its size follows the platform word size, while VS2k5 makes it 64-bit by default even in 32-bit builds.
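As a rough sketch of what these suggestions look like in practice (the Record struct and its fields are invented for illustration, not taken from any real code):

    #include <cstddef>
    #include <cstdint>

    // Hypothetical record laid out with explicitly sized types: the intent of
    // every field is obvious, and the widths are the same on every platform.
    struct Record {
        uint8_t version;     // exactly 8 bits, instead of char or BYTE
        int32_t id;          // exactly 32 bits, instead of int, long, or DWORD
        int64_t timestamp;   // exactly 64 bits, instead of time_t
    };

    // size_t for sizes and counts that cannot go below zero.
    size_t records_that_fit(size_t buffer_bytes)
    {
        return buffer_bytes / sizeof(Record);
    }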
Examples of these mistakes:
long: A developer once gave me code to interact with his network application. The code compiled on my 64-bit server but failed to talk to his application: he had used long in a network structure, assuming it would be 32-bit, and since long can have a different size on each end of the connection, the two sides disagreed on the layout. Using int32_t in the specification would have solved that.
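Here is a hedged sketch of what a safer wire structure might look like; the message layout and names are invented, not his actual protocol. Fixed-width fields plus a compile-time size check remove the guesswork on both ends:

    #include <cstdint>

    // Hypothetical wire structure: every field has an exact width, so a 32-bit
    // client and a 64-bit server agree on the layout. Byte order and padding
    // still have to be pinned down, hence the packing pragma and the size check.
    #pragma pack(push, 1)
    struct LoginRequest {
        uint16_t opcode;
        uint16_t flags;
        int32_t  session_id;   // the field that had been declared as long
    };
    #pragma pack(pop)

    static_assert(sizeof(LoginRequest) == 8, "wire layout must stay fixed");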
time_t: Someone in IRC asked me to write a reader for his binary file format, which involved a time_t field in a struct. With my 32-bit VS2k5 compiler, time_t was 64-bit, and at first I couldn’t read his files. The specification should have either named the compiler, stated that time_t had to be 32-bit, or simply used a fixed-width field (like int64_t).
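One way to sidestep that ambiguity, sketched here under the assumption that the file format defines the field as a 64-bit integer, is to convert time_t explicitly at the file boundary:

    #include <cstdint>
    #include <cstdio>
    #include <ctime>

    // Store the timestamp as an explicit int64_t, whatever width time_t happens
    // to be with this compiler. (These write host byte order; a real format
    // should also pick an endianness and document it.)
    bool write_timestamp(std::FILE *fp, std::time_t when)
    {
        int64_t stamp = static_cast<int64_t>(when);
        return std::fwrite(&stamp, sizeof(stamp), 1, fp) == 1;
    }

    bool read_timestamp(std::FILE *fp, std::time_t *when)
    {
        int64_t stamp = 0;
        if (std::fread(&stamp, sizeof(stamp), 1, fp) != 1) return false;
        *when = static_cast<std::time_t>(stamp);  // may truncate if time_t is 32-bit
        return true;
    }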
All of these are general suggestions toward making applications more portable. As long as you know the exact meaning of your types, and you have fully documented the “indeterminate” cases in your API calls, structures, and binary formats, you’re fine. But if you’re blindly reaching for these loosely defined types out of laziness, take the time to switch to better ones; you won’t regret it. You gain both readability and portability by being explicit.