Bytes | Developer Community
how to define an 8 bit integer

I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to modify
any setting or declare typedefs?

Please guide me. Your answer is greatly appreciated.

Thanks,
Oct 25 '08 #1
DanielJohnson wrote:
I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to modify
any setting or declare typedefs.
By default, gcc compiles for a non-conforming version of C that is
sort of like C90, with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to being fully conforming to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;

If a compiler supports any unsigned 8-bit integer type, unsigned char
will be such a type. If the compiler has no 8-bit integer type,
'unsigned char' is going to be the best approximation possible for that
compiler.
Oct 25 '08 #2
On 25 Oct 2008 at 19:56, DanielJohnson wrote:
I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to modify
any setting or declare typedefs.
All these arcane portability issues have been thought of, solved, and
painfully debugged by the creators of things like the GNU autotools, so
why reinvent the wheel?

Look at the autoconf macros AC_TYPE_INT8_T, AC_TYPE_INT16_T,
AC_TYPE_INT32_T and AC_TYPE_INT64_T.

Oct 25 '08 #3

"DanielJohnson" <di********@gmail.com> wrote in message
I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
Why?

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 25 '08 #4
James Kuyper <ja*********@verizon.net> writes:
DanielJohnson wrote:
>I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.
I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
I tried using uint8_t but gcc doesn't like it. Do I need to
modify any setting or declare typedefs.

By default, gcc compiles for a non-conforming version of C that is
sort of like C90, with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in
C99. Use -std=c99 to turn on support for C99. Add -pedantic to come a
little closer to being fully conforming to C99.
I think the problem is that he doesn't have "#include <stdint.h>".
Add that to the top of the file, and uint8_t becomes visible --
assuming the implementation provides <stdint.h>.

On my system, this works whether you use gcc's partial C99 mode or not
-- which is valid behavior, since it's a standard header in C99 and a
permitted extension in C90.

Incidentally, using uint8_t or uint16_t doesn't override anything.
The predefined types are still there, and their sizes don't change for
a given implementation. uint8_t and uint16_t, if they exist, are
nothing more than typedefs for existing predefined types (typically
unsigned char and unsigned short, respectively).

An implementation note: the <stdint.h> header isn't provided by gcc,
it's provided by the library. On my system, the library is glibc,
which does provide <stdint.h>. On another system, a different library
might not provide this header. I suspect that <stdint.h> will be
available on *most* modern implementations, but it's not guaranteed
unless the implementation claims conformance to C99.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Oct 25 '08 #5
DanielJohnson wrote:
I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.
#include <stdint.h>
Oct 25 '08 #6

"Malcolm McLean" <re*******@btinternet.com> wrote in message
news:pP******************************@bt.com...
>
"DanielJohnson" <di********@gmail.com> wrote in message
>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
Why?
If he has lots of them (as in an array), then it might be useful to only
require a quarter or an eighth of the memory for example.

If he has to talk to some software/hardware that uses specific integer
widths then again it would be handy.

--
Bartc

Oct 25 '08 #7
"Bartc" <bc@freeuk.com> wrote in message news:
>
"Malcolm McLean" <re*******@btinternet.com> wrote in message
news:pP******************************@bt.com...
>>
"DanielJohnson" <di********@gmail.com> wrote in message
>>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
Why?

If he has lots of them (as in an array), then it might be useful to only
require a quarter or an eighth of the memory for example.

If he has to talk to some software/hardware that uses specific integer
widths then again it would be handy.
These are possible reasons.
However let the OP answer for himself.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 25 '08 #8
"Malcolm McLean" <re*******@btinternet.com> writes:
"DanielJohnson" <di********@gmail.com> wrote in message
>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

Why?
Why do you need to know his reasons? Is it that you don't
believe him? Do you treat all posters with equal mistrust,
and do you expect others to treat you with the same mistrust?

He wants the ability to do the above. If he #includes stdint.h,
he'll have his wants most easily satisfied, no matter what
his reasons were. If he grabs a decent book on C, then he'll
probably have his wants satisfied far more quickly in the
future.

Phil
--
The fact that a believer is happier than a sceptic is no more to the
point than the fact that a drunken man is happier than a sober one.
The happiness of credulity is a cheap and dangerous quality.
-- George Bernard Shaw (1856-1950), Preface to Androcles and the Lion
Oct 25 '08 #9

"Phil Carmody" <th*****************@yahoo.co.uk> wrote in message news:
"Malcolm McLean" <re*******@btinternet.com> writes:
>"DanielJohnson" <di********@gmail.com> wrote in message
>>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

Why?

Why do you need to know his reasons? Is it that you don't
believe him? Do you treat all posters with equal mistrust,
and do you expect others to treat you with the same mistrust?
Because the number of people who think they need integers of a certain width
is much greater than the number who actually do.
As Bartc pointed out, there can be good reasons for wanting a guaranteed
8-bit type, but they are rare.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 25 '08 #10
Malcolm McLean wrote:
>
"Phil Carmody" <th*****************@yahoo.co.uk> wrote in message news:
>"Malcolm McLean" <re*******@btinternet.com> writes:
>>"DanielJohnson" <di********@gmail.com> wrote in message
I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

Why?

Why do you need to know his reasons? Is it that you don't
believe him? Do you treat all posters with equal mistrust,
and do you expect others to treat you with the same mistrust?
Because the number of people who think they need integers of a certain
width is much greater than the number who actually do.
As Bartc pointed out, there can be good reasons for wanting a guaranteed
8-bit type, but they are rare.
Not in my world (that of a driver writer) or that of most embedded
programmers. Considering a large proportion of C programmers are
embedded programmers, the need for fixed width types is much greater
than you think.

--
Ian Collins
Oct 25 '08 #11
DanielJohnson wrote:
>
I have seen much legacy code use uint8_t or uint16_t to override
the default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to
modify any setting or declare typedefs.

Please guide me. Your answer is greatly appreciated.
uint8_t etc. are not guaranteed to be available. The guaranteed
integer types are char, short, int, long. C99 adds long long. These
can all be signed or unsigned.

Code using uint8_t is inherently non-portable. Bytes can be larger
than 8 bits. See <limits.h> for the sizes available on your
system, expressed as MAX and MIN macros for the types.

--
[mail]: Chuck F (cbfalconer at maineline dot net)
[page]: <http://cbfalconer.home.att.net>
Try the download section.
Oct 26 '08 #12
DanielJohnson wrote:
I tried using uint8_t but gcc doesn't like it. Do I need to
modify any setting or declare typedefs.
Using GCC, adding

#include <stdint.h>

should do the trick. As the other posters have already said, it
may not be available, so add some checks to your source, and
provide alternative ways to define those types.

In the end it will boil down to some typedefs from primitive
types, which have been exactly matched to the target architecture
and compiler. That's how stdint.h works.

Wolfgang Draxinger
--
E-Mail address works, Jabber: he******@jabber.org, ICQ: 134682867

Oct 26 '08 #13
DanielJohnson wrote:
I have seen many legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but the gcc doesn't like it. Do I need to modify
any setting or declare typdefes.

Please guide me. Your answer is greatly appreciated.
I haven't started using C99 features yet, but I see that everyone here
talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking for.
In addition to including <stdint.h>, <inttypes.h> adds macros and useful
conversion functions.
$ cat use_uint8_t.c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
uint8_t byte;
uint16_t word;

printf("%d\n", (int)sizeof (byte));
printf("%d\n", (int)sizeof (word));

return 0;
}
$ gcc -std=c99 use_uint8_t.c
$ ./a.out
1
2
$ gcc -v
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v
--enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr
--enable-shared --with-system-zlib --libexecdir=/usr/lib
--without-included-gettext --enable-threads=posix --enable-nls
--with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2
--enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc
--enable-mpfr --enable-targets=all --enable-checking=release
--build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
Thread model: posix
gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7)

--
Tor <echo bw****@wvtqvm.vw | tr i-za-h a-z>
Oct 26 '08 #14
Tor Rustad <bw****@wvtqvm.vw> writes:
[...]
I haven't started using C99 features yet, but I see that everyone
here talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking
for. In addition to including <stdint.h>, <inttypes.h> adds macros and
useful conversion functions.
[...]

And if you don't happen to need those macros and conversion functions,
even on a hosted system, why not use <stdint.h>?

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Oct 26 '08 #15
Keith Thompson wrote:
Tor Rustad <bw****@wvtqvm.vw> writes:
[...]
>I haven't started using C99 features yet, but I see that everyone
here talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking
for. In addition to including <stdint.h>, <inttypes.h> adds macros and
useful conversion functions.
[...]

And if you don't happen to need those macros and conversion functions,
even on a hosted system, why not use <stdint.h>?
IIRC, <stdint.h> was primarily intended for free-standing environments,
so for hosted platforms, the recommendation here should IMO be to use
<inttypes.h>.

If not needing those macros/functions at one point in time... you can
speed up the compilation by a tiny fraction. In practice, this speed-up
shouldn't matter, even when targeting embedded Linux.

--
Tor <echo bw****@wvtqvm.vw | tr i-za-h a-z>
Oct 26 '08 #16

"Ian Collins" <ia******@hotmail.com> wrote in message
Malcolm McLean wrote:
>As Bartc pointed out, there can be good reasons for wanting a guaranteed
8-bit type, but they are rare.

Not in my world (that of a driver writer) or that of most embedded
programmers. Considering a large proportion of C programmers are
embedded programmers, the need for fixed width types is much greater
than you think.
Embedded programmers are not an exception to the rule.
Whilst there will be a few places in which you interface directly with
hardware, and so need fixed size types, it is easy to convince yourself that
these types propagate up to higher-level code, creating a need for uint8_t
throughout the system. Normally they don't. What you achieve by demanding
special types is a need for bespoke code, which can be good for programmers'
employment prospects, but bad for efficiency.

--
Free games and programming goodies.
http://www.personal.leeds.ac.uk/~bgy1mm

Oct 26 '08 #17
Tor Rustad <bw****@wvtqvm.vw> writes:
Keith Thompson wrote:
>Tor Rustad <bw****@wvtqvm.vw> writes:
[...]
>>I haven't started using C99 features yet, but I see that everyone
here talks about <stdint.h> for some weird reason. :)

For hosted systems, <inttypes.h> is the header file you are looking
for. In addition to including <stdint.h>, <inttypes.h> adds macros and
useful conversion functions.
[...]
And if you don't happen to need those macros and conversion
functions,
even on a hosted system, why not use <stdint.h>?

IIRC, <stdint.h> was primarily intended for free-standing environments,
so for hosted platforms, the recommendation here should IMO be using
<inttypes.h>.
So? That may have been the intent, but why should a programmer be
bound by that, or even influenced?

One standard header contains a few declarations. Another standard
header contains those same declarations plus some other stuff. If I
don't need the other stuff, what is the disadvantage of using the
first header?
If not needing those macros/functions at one point in time... you can
speed up the compilation by a tiny fraction. In practice, this
speed-up shouldn't matter, even when targeting embedded Linux.
Sure, there's nothing wrong with using <inttypes.h> if you want to.
I'm just saying that there's nothing wrong with using <stdint.h> if
you want to.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Oct 26 '08 #18
>>>>>"PC" == Phil Carmody <th*****************@yahoo.co.uk> writes:

PC> "Malcolm McLean" <re*******@btinternet.com> writes:
>"DanielJohnson" <di********@gmail.com> wrote in message
>>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
>Why?
PC> Why do you need to know his reasons?

Because any experienced programmer who's helped others has run into XY
problems: the person asking for help needs to do X, and thinks
mistakenly that Y is the way to accomplish it. So the querent asks
about Y, and the newsgroup spends a lot of time going around in circles
because Y is really not the right solution to X, but because the querent
is asking about Y and not X, everyone's time is wasted.

Asking "Why do you want to do Y?" allows the respondents to say, "Aha!
That's not the best way to accomplish X -- you'll have a much easier
time of it if you try Z." If Y is the best way to do X -- it happens
occasionally -- then help with Y can proceed apace.

It's not a matter of distrust. Knowing *why* someone wants to do
something allows respondents to offer alternative solutions that may
well be better.

Charlton
--
Charlton Wilbur
cw*****@chromatico.net
Oct 26 '08 #19
In article <sl*******************@nospam.invalid>,
Antoninus Twink <no****@nospam.invalid> wrote:
>All these arcane portability issues have been thought of, solved, and
painfully debugged by the creators of things like the GNU autotools, so
why reinvent the wheel?
I've just switched a project to autoconf/automake, and the result is
not pretty. The support for non-gcc compilers is weak (how do I say I
want the highest level of warnings for whatever compiler turns out to
be available?). The previously short cc commands are now each several
lines long, making it hard to see warning messages before everything
has scrolled off the screen. I no longer get errors for undefined
functions until I run the program. Support for generated files
(except those produced by known programs like yacc) is poor, and
doesn't work properly with VPATH on some platforms. And each time I
try a new platform, I find a bunch of new things that I have to write
autoconf macros for.

On balance, it's probably worthwhile, and I don't mean to criticise
the authors, but prepare yourself for a lot of tedious messing around.

-- Richard
--
Please remember to mention me / in tapes you leave behind.
Oct 27 '08 #20
Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
In article <sl*******************@nospam.invalid>,
Antoninus Twink <no****@nospam.invalid> wrote:
All these arcane portability issues have been thought of, solved, and
painfully debugged by the creators of things like the GNU autotools, so
why reinvent the wheel?
I've just switched a project to autoconf/automake, and the result is
not pretty. The support for non-gcc compilers is weak (how do I say I
want the highest level of warnings for whatever compiler turns out to
be available?). The previously short cc commands are now each several
lines long, making it hard to see warning messages before everything
has scrolled off the screen. I no longer get errors for undefined
functions until I run the program. Support for generated files
(except those produced by known programs like yacc) is poor, and
doesn't work properly with VPATH on some platforms. And each time I
try a new platform, I find a bunch of new things that I have to write
autoconf macros for.

On balance, it's probably worthwhile, and I don't mean to criticise
the authors, but prepare yourself for a lot of tedious messing around.
The problem with autotools is that it doesn't really buy you that much
portability anymore. It requires a bourne shell, M4, and Make. A universe
which meets those requirements isn't particularly difficult to handle
yourself, especially in 2008. That universe is becoming more homogeneous as
ISO and POSIX permeate through. You can reap greater overall dividends by
off-loading a couple of chores to the person compiling the software. For
instance, include a few extra front-end Makefiles, like Makefile.linux,
Makefile.bsd, Makefile.osx, Makefile.hpux. That's not as fancy as autotools,
but it reduces _my_ headaches five-fold, and it's hardly something worth
complaining about on the other end. Autotools isn't so prolific that people
have forgotten how to read the INSTALL file.

Granted, you have to rein in stuff like dynamic modules, etc. Often that's
okay. Anyhow, libtool can be used independently from the rest of the
autotools suite.

Portability these days means something other than "unix-land". It
increasingly involves supporting Win32 and various embedded environments.
That kind of portability can't be reached by simply throwing autotools into
the mix; you have to structure your code more intelligently, and reduce
external dependencies. When you begin to do that, autotools benefits are
reduced significantly.

Oct 27 '08 #21

"James Kuyper" <ja*********@verizon.net> wrote in message
news:Ha***************@nwrddc02.gnilink.net...
DanielJohnson wrote:
>I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to modify
any setting or declare typedefs.

By default, gcc compiles for a non-conforming version of C that is
sort of like C90, with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to being fully conforming to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;
[...]

I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];
shi% happens.

Oct 28 '08 #22

"Chris M. Thomasson" <no@spam.invalid> wrote in message
news:ge**********@aioe.org...
>
"James Kuyper" <ja*********@verizon.net> wrote in message
news:Ha***************@nwrddc02.gnilink.net...
>DanielJohnson wrote:
>>I have seen much legacy code use uint8_t or uint16_t to override the
default compiler setting of an integer's length.

I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.

I tried using uint8_t but gcc doesn't like it. Do I need to modify
any setting or declare typedefs.

By default, gcc compiles for a non-conforming version of C that is
sort of like C90, with many features that are similar, but not always
identical, to features that were added in C99. uint8_t was added in C99.
Use -std=c99 to turn on support for C99. Add -pedantic to come a little
closer to being fully conforming to C99.

If for any reason you can't use C99, use the following:

typedef unsigned char uint8;

[...]

I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];
well, perhaps:

typedef char tester[
(sizeof(char) * 4 == 32 / CHAR_BIT) ? 1 : -1
];

MAN! I am a fuc%ing retard!!!!!!!!!!!!!!!!!
>
shi% happens.
Oct 28 '08 #23
Chris M. Thomasson said:

<snip>
I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];
How does adding a syntax error help on 32-bit systems?

And what's wrong with a simple assertion that CHAR_BIT is 8?

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 28 '08 #24
Richard Heathfield wrote:
Chris M. Thomasson said:

<snip>
>I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

How does adding a syntax error help on 32-bit systems?
I think other-Chris mistyped (see their later posting).
And what's wrong with a simple assertion that CHAR_BIT is 8?
Assertions don't operate at compile-time, but the imploding
array trick does.

--
'Don't be afraid: /Electra City/
there will be minimal destruction.' - Panic Room

Hewlett-Packard Limited registered office: Cain Road, Bracknell,
registered no: 690597 England Berks RG12 1HN

Oct 28 '08 #25
"Chris M. Thomasson" <no@spam.invalid> writes:
I would add the following if you expect 32-bit systems...

typedef char tester[
(sizeof(char) * 4 == 32 / CHAR_BIT) ? 1 : -1
];
sizeof(char) is somewhat redundant - it's defined to be 1.
So that's a check that CHAR_BIT is 7 or 8. (Of course,
7 is impossible.) So for that purpose, if I absolutely had
to have such a trap, I'd just keep it simple (no division,
no ?:)

typedef char tester[CHAR_BIT==8];

Ditto the 32-bit ints condition:

typedef char tester[CHAR_BIT*sizeof(int)==32];

However, it does look like some of the C++ Kool-Aid has
cross-pollinated, and I can't say I particularly like
such bombs.

If you want to limit yourself to only using 32-bit ints,
why not code using exact-width integers. If such a type
can't be found, you'll find out at compile time, without
need for an obfuscation.

Phil
--
Christianity has such a contemptible opinion of human nature that it does
not believe a man can tell the truth unless frightened by a belief in God.
No lower opinion of the human race has ever been expressed.
-- Robert Green Ingersoll (1833-1899), American politician and scientist
Oct 28 '08 #26
Chris M. Thomasson wrote:
....
I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];
I understand what you're trying to do there, but wouldn't a #if/#endif
pair bracketing a #error directive do what you're trying to do in a much
clearer way? In a conforming mode, no compiler can omit the diagnostic
for an array length of 0, but it's perfectly free to accept the program
after issuing the diagnostic - I've used compilers with this "feature".
However, in conforming mode no compiler can accept a translation unit
containing a #error directive that survives conditional compilation.
Oct 28 '08 #27
Chris Dollin said:
Richard Heathfield wrote:
>Chris M. Thomasson said:

<snip>
>>I would add the following if you expect 32-bit systems...
typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

How does adding a syntax error help on 32-bit systems?

I think other-Chris mistyped (see their later posting).
Yes, I saw his correction after I'd posted the above.
>And what's wrong with a simple assertion that CHAR_BIT is 8?

Assertions don't operate at compile-time, but the imploding
array trick does.
You want compile-time? Fine:

#if CHAR_BIT != 8
#error oops
#endif

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Oct 28 '08 #28
On 25 Oct, 22:24, Ian Collins <ian-n...@hotmail.comwrote:
Malcolm McLean wrote:
"Phil Carmody" <thefatphil_demun...@yahoo.co.uk> wrote in message news:
"Malcolm McLean" <regniz...@btinternet.com> writes:
"DanielJohnson" <diffuse...@gmail.com> wrote in message
>>I am using gcc on Linux and a sizeof(int) gives me 4. I want the
ability to define an 8, 16 or 32 bit integer.
>Why?
Why do you need to know his reasons?
because sometimes people aren't describing their actual problem.
They are describing the problem they are having with their chosen
solution instead.

Is it that you don't
believe him? Do you treat all posters with equal mistrust,
and do you expect others to treat you with the same mistrust?
are you always this obnoxious?
Because the number of people who think they need integers of a certain
width is much greater than the number who actually do.
As Bartc pointed out, there can be good reasons for wanting a guaranteed
8-bit type, but they are rare.

Not in my world (that of a driver writer) or that of most embedded
programmers. Considering a large proportion of C programmers are
embedded programmers, the need for fixed width types is much greater
than you think.
I've dabbled with embedded systems and I also think "the number of
people who think they need integers of a certain width is much greater
than the number who actually do".

I suspect the same applies to device drivers as well.
--
Nick Keighley

(Proverbs 17:10) A rebuke works deeper in one having understanding
than striking a stupid one a hundred times.
Oct 28 '08 #29
James Kuyper writes:
>Chris M. Thomasson wrote:
>typedef char test[
sizeof(char * 4) == 32 / CHAR_BIT
];

I understand what you're trying to do there, but wouldn't a #if/#endif
pair bracketing a #error directive do what you're trying to do in a much
clearer way?
Yes, #if/#error/#endif is better when you can use preprocessor constants
for your check. Like in this case, since sizeof(char) can be dropped.

If you want to check e.g. sizeof or enum values at compile time, you
need to force a constraint violation after preprocessing. Or two
violations, since some compilers are sloppy about checking them. I
prefer to macroize that to make the actual assertion readable:

#define static_assert(name, c) \
typedef struct { int Assert_##name: 2-4*(c); } Assert_##name[2-4*(c)]

/* TODO: Port this to saner assumptions */
static_assert(supported_atomic, sizeof(sig_atomic_t) * CHAR_BIT >= 16);

--
Hallvard
Oct 28 '08 #30
I wrote:
#define static_assert(name, c) \
typedef struct { int Assert_##name: 2-4*(c); } Assert_##name[2-4*(c)]
Of course, it would have helped if I didn't invert the test by trying to
make the typedef a one-liner.

BTW, the 2 and 4 are just because I have no idea if compilers can be
relied on to handle 'int foo:1;' well when int bitfields are signed.

--
Hallvard
Oct 28 '08 #31

This discussion thread is closed

Replies have been disabled for this discussion.
