On Sat, 2004-10-02 at 19:15 -0700, Grocery Clerk wrote:
> I know size_t (used for size of objects) is a 32-bit value on a 32-bit
> system and 64-bit on a 64 bit system. This is explained on Page 29 of
> Unix Network Programming: The Sockets Networking API. Can someone
> please explain to me the reasoning behind pid_t?
This isn't C, of course, but POSIX, so it's really off-topic here.
The thing is that POSIX likes to define a lot of typedefs to hold
datatypes that are supposed to be opaque to the programmer, like pid_t,
uid_t, gid_t, regex_t, socklen_t. These types, while usually defined as
different primitive integer types, are meant to be opaque, and thus only
certain operations are defined to work on them -- usually things like
comparing against zero, or testing whether they are below zero. The
programmer may not assume anything about their size either (except what
sizeof reports). That way,
implementors can deal with it in different ways while not breaking
programs that are compliant with the specified guidelines. For example,
32-bit Solaris may define a socklen_t to be 32 bits, while 64-bit
Solaris may define it to be 64 bits. Furthermore, some systems may have
16-bit UIDs, others 32-bit UIDs, and so on. Some may not even implement
them as integers, although that's probably rare, if it exists at all.
Fredrik Tolf