Bytes | Software Development & Data Engineering Community

Sizeof int

Hi,

Is there a way to change the sizeof int type? Is it possible to make it
return 3 or 5 by changing some header file in the compiler?

Thanks in advance.

Jul 22 '06 #1

write2g...@gmail.com wrote:
> Hi,
>
> Is there a way to change the sizeof int type? Is it possible to make it
No, there is no way by which you can change the size of int from
your program.
Well, the size of int depends upon the word size of the machine and
is very much architecture dependent.
> return 3 or 5 by changing some header file in the compiler?
Ya, you can surely do that by making changes to the compiler or by
writing your own custom compiler, but on what machine would you execute
the program? The compiler depends on the underlying machine, so doing
so would not make any sense.
> Thanks in advance.
Cheers,
SandeepKsinha.

Jul 22 '06 #2

wr********@gmail.com wrote:
> Hi,
>
> Is there a way to change the sizeof int type? Is it possible to make it
> return 3 or 5 by changing some header file in the compiler?
Supposing you could, why would you?

Tom

Jul 22 '06 #3
Tom St Denis wrote:
wr********@gmail.com wrote:
>>Hi,

Is there a way to change the sizeof int type? Is it possible to make it
return 3 or 5 by changing some header file in the compiler?


Supposing you could, why would you?
/* The high-precision calculations in this module
* need a 1000-bit signed integer, so ...
*/
#include <limits.h>
sizeof(int) = (1000 + CHAR_BIT - 1) / CHAR_BIT;
/*
* Ain't science wonderful?
*/

It doesn't work that way, though: sizeof just describes
the data types the implementation provides, but does not
control their characteristics. Similarly with CHAR_BIT and
the like: you cannot get "fat integers" by re-#defining
INT_MAX to a larger-than-usual number.

<off-topic>

The very first computer I ever programmed actually had
this ability! Integers were normally five decimal digits
wide, with values between -99999 and +99999, but at the drop
of an option card you could make them wider for more precision
or narrower for better storage economy. I'm pretty sure you
couldn't make them narrower than two digits (-99 to +99), but
I can no longer remember what the upper limit was. If there
was an upper limit, though, it was imposed by the compiler and
not by the underlying hardware: the machine itself was quite
happy to work on thousand-digit integers using the same
instructions as for five-digit values.

That was in the mid-1960's, using FORTRAN II on an IBM 1620.
It is a sign of how far we've advanced that we no longer have
this kind of flexibility and are instead made to lie in whatever
Procrustean bed some machine designer chooses to inflict on us ;-)

</off-topic>

--
Eric Sosman
es*****@acm-dot-org.invalid
Jul 22 '06 #4
Eric Sosman wrote:
<off-topic>

The very first computer I ever programmed actually had
this ability! Integers were normally five decimal digits
wide, with values between -99999 and +99999, but at the drop
of an option card you could make them wider for more precision
or narrower for better storage economy. I'm pretty sure you
couldn't make them narrower than two digits (-99 to +99), but
I can no longer remember what the upper limit was. If there
was an upper limit, though, it was imposed by the compiler and
not by the underlying hardware: the machine itself was quite
happy to work on thousand-digit integers using the same
instructions as for five-digit values.

That was in the mid-1960's, using FORTRAN II on an IBM 1620.
It is a sign of how far we've advanced that we no longer have
this kind of flexibility and are instead made to lie in whatever
Procrustean bed some machine designer chooses to inflict on us ;-)

</off-topic>
This is still used today.

lcc-win32 provides a "bignums" package where you have to set,
before any calculations are done, how big an integer should be,
i.e. set the maximal precision of the package. I think Gnu's
multi-precision package has a similar feature.
Jul 22 '06 #5
jacob navia said:
lcc-win32 provides a "bignums" package where you have to set
before any calculations are done how big an integer should be,
i.e. set the maximal precision of the package.
That's a good reason to avoid it. Thanks for sharing.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Jul 22 '06 #6
jacob navia wrote:
This is still used today.

lcc-win32 provides a "bignums" package where you have to set
before any calculations are done how big an integer should be,
i.e. set the maximal precision of the package. I think Gnu's
multi-precision package has a similar stuff in it.
cough....

http://math.libtomcrypt.com

A *portable* public domain bignum library that doesn't need to know the
maximum precision at build time [well aside from the fact the array
size has to conform to C specs].

[OT boasting...] It also happens to be the bignum provider for the Tcl
scripting language. :-)

No need for non-portable, standards-violating add-ons to compute bignums.

Tom

Jul 22 '06 #7
jacob navia wrote:
Eric Sosman wrote:
> The very first computer I ever programmed actually had
this ability! Integers were normally five decimal digits
wide, with values between -99999 and +99999, but at the drop
of an option card you could make them wider for more precision
or narrower for better storage economy. I'm pretty sure you
couldn't make them narrower than two digits (-99 to +99), but
I can no longer remember what the upper limit was. If there
was an upper limit, though, it was imposed by the compiler and
not by the underlying hardware: the machine itself was quite
happy to work on thousand-digit integers using the same
instructions as for five-digit values.

That was in the mid-1960's, using FORTRAN II on an IBM 1620.
It is a sign of how far we've advanced that we no longer have
this kind of flexibility and are instead made to lie in whatever
Procrustean bed some machine designer chooses to inflict on us ;-)
We have much greater flexibility. We have languages with built-in
bignums; C just isn't one of them.
This is still used today.

lcc-win32 provides a "bignums" package where you have to set
before any calculations are done how big an integer should be,
i.e. set the maximal precision of the package. I think Gnu's
multi-precision package has a similar stuff in it.
GNU MP doesn't require a precision to be set beforehand. It
automatically resizes bignums.
Jul 22 '06 #8
Lars wrote:
jacob navia wrote:
> Eric Sosman wrote:
>> The very first computer I ever programmed actually had
this ability! Integers were normally five decimal digits
wide, with values between -99999 and +99999, but at the drop
of an option card you could make them wider for more precision
or narrower for better storage economy. I'm pretty sure you
couldn't make them narrower than two digits (-99 to +99), but
I can no longer remember what the upper limit was. If there
was an upper limit, though, it was imposed by the compiler and
not by the underlying hardware: the machine itself was quite
happy to work on thousand-digit integers using the same
instructions as for five-digit values.

That was in the mid-1960's, using FORTRAN II on an IBM 1620.
It is a sign of how far we've advanced that we no longer have
this kind of flexibility and are instead made to lie in whatever
Procrustean bed some machine designer chooses to inflict on us ;-)


We have much greater flexibility. We have languages with built-in
bignums; C just isn't one of them.
>This is still used today.

lcc-win32 provides a "bignums" package where you have to set
before any calculations are done how big an integer should be,
i.e. set the maximal precision of the package. I think Gnu's
multi-precision package has a similar stuff in it.


GNU MP doesn't require a precision to be set beforehand. It
automatically resizes bignums.
From the GMP manual:

http://www.swox.com/gmp/manual/Memor...ory-Management

mpf_t variables, in the current implementation, use a fixed amount of
space, determined by the chosen precision and allocated at
initialization, so their size doesn't change.
Jul 22 '06 #9
Tom St Denis wrote:
jacob navia wrote:
>>This is still used today.

lcc-win32 provides a "bignums" package where you have to set
before any calculations are done how big an integer should be,
i.e. set the maximal precision of the package. I think Gnu's
multi-precision package has a similar stuff in it.


cough....

http://math.libtomcrypt.com

A *portable* public domain bignum library that doesn't need to know the
maximum precision at build time [well aside from the fact the array
size has to conform to C specs].
cough cough

You write in your documentation:

/* init to a given number of digits */
int mp_init_size(mp_int *a, int size);
The mp_init_size function will initialize the integer and set the
allocated size to a given value.

The library from lcc-win32 does it in a global way, not on a
per-number basis. That is the only difference.

Jul 22 '06 #10
On 22 Jul 2006 04:26:28 -0700, wr********@gmail.com wrote:
>Hi,

Is there a way to change the sizeof int type? Is it possible to make it
return 3 or 5 by changing some header file in the compiler?
sizeof(int) is determined by the compiler you are using, subject to
the constraints in the standard (the only one that comes to mind is
that it be large enough to hold any integer in the range INT_MIN to
INT_MAX inclusive). It is usually related to some hardware
characteristics of the system the compiled object code is intended to
run on but that is a decision made by the compiler writer, taking into
account whatever factors he desires.

The standard does not describe any method for changing the size of an
int, but it does not prohibit an implementation from providing one. If
it were possible, it would be specific to your implementation
(and hopefully described in its documentation). However, it would be
off-topic here and you would need to ask in a group where your compiler
is topical.

By the way, if you find a compiler that allows this, see if you can
find out how they make the run-time library flexible enough to deal
with it and what impact it has on performance.
Remove del for email
Jul 22 '06 #11
On 22 Jul 2006 04:26:28 -0700, in comp.lang.c , wr********@gmail.com
wrote:
>Hi,

Is there a way to change the sizeof int type? Is it possible to make it
return 3 or 5 by changing some header file in the compiler?
The question doesn't make sense. The size of types depends on your
hardware and operating system.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 22 '06 #12
>>Is there a way to change the sizeof int type? Is it possible to make it
>>return 3 or 5 by changing some header file in the compiler?

The question doesn't make sense. The size of types depends on your
hardware and operating system.
Sometimes you do have a limited choice, and you can select a different
implementation with compiler flags or use a completely different
compiler. For example, Windows can run code with 16-bit ints or
32-bit ints (the choice also affects a number of other things like
pointer sizes. It also interacts with your choice of memory model).
A number of other platforms have choices of 32-bit vs. 64-bit code,
and this may or may not affect the size of int used.

Generally, you can't link code from the two implementations together
without a lot of effort. You probably need different sets of system
libraries for each implementation.

Gordon L. Burditt
Jul 22 '06 #13
jacob navia wrote:
You write in your documentation:

/* init to a given number of digits */
int mp_init_size(mp_int *a, int size);
The mp init size function will initialize the integer and set the
allocated size to a given value.

The library from lcc-win32 does it in a global way, and not in
a per digit basis. That is the only difference.
hehehe ... you're treading into my world now ...

"mp" stands for "multiple precision" as in the algorithms will adjust
the precision as required for you.

The "init_size" function is just for the cases where you know the size
of your numbers and want to avoid the realloc.

Tom

Jul 22 '06 #14
jacob navia wrote:
http://www.swox.com/gmp/manual/Memor...ory-Management

mpf_t variables, in the current implementation, use a fixed amount of
space, determined by the chosen precision and allocated at
initialization, so their size doesn't change.
mpf_t is for floats. mpz_t is for integers and will grow as required
[GMP is one of the three math libs my crypto lib supports]

Tom

Jul 22 '06 #15
On 2006-07-22, sandy <sa***********@gmail.com> wrote:
>
write2g...@gmail.com wrote:
> Hi,
>
> Is there a way to change the sizeof int type? Is it possible to make it
> No, there is no way by which you can change the size of int from
> your program.
> Well, the size of int depends upon the word size of the machine and
> is very much architecture dependent.
C has no concept of "size of word".
> return 3 or 5 by changing some header file in the compiler?
>
> Ya you can surely do that by making changes or writing your own custom
> compiler but on what machine would you execute the program,
"you can surely" ... "make changes [to the compiler]". I have reason to
doubt that that is a true statement.
> The compiler is depending on the underlying machine, and doing so would
> not make any sense.
Hardly. My version of gcc running on an i386 has a "long long" type of 64
bits, even though my underlying machine has no such type.

--
Andrew Poelstra <website down>
My server is down; you can't mail
me, nor can I post convieniently.
Jul 22 '06 #16

"Mark McIntyre" <ma**********@spamcop.net> wrote in message
news:no********************************@4ax.com...
On 22 Jul 2006 04:26:28 -0700, in comp.lang.c , wr********@gmail.com
wrote:
Hi,

Is there a way to change the sizeof int type? Is it possible to make it
return 3 or 5 by changing some header file in the compiler?

The question doesn't make sense. The size of types depends on your
hardware and operating system.
Strictly speaking it depends on the compiler, and on ISO C of course!

With the same hardware and OS I can write a compiler
with 39 bits for long and another compiler with
32 bits, again for the long type.

Of course, compilers listen to the hardware's suggestion. :-)
Giorgio Silvestri


Jul 22 '06 #17
In article <sl**********************@localhost.localdomain> ,
Andrew Poelstra <ap*******@localhost.localdomain> wrote:
>> Well, the size of int depends upon the word size of the machine and
>> is very much architecture dependent.
>
> C has no concept of "size of word".
I often find that to explain something you have to refer to concepts
not defined in ISO standards, don't you?

-- Richard
Jul 22 '06 #18
Tom St Denis wrote:
jacob navia wrote:
>>http://www.swox.com/gmp/manual/Memor...ory-Management

mpf_t variables, in the current implementation, use a fixed amount of
space, determined by the chosen precision and allocated at
initialization, so their size doesn't change.


mpf_t is for floats. mpz_t is for integers and will grow as required
[GMP is one of the three math libs my crypto lib supports]

Tom
Big numbers in lcc-win32 are ratios of two big integers, i.e.
"floats". That is why it needs (as GMP does) a size limit.
Jul 22 '06 #19
Tom St Denis wrote:
jacob navia wrote:

>>You write in your documentation:

/* init to a given number of digits */
int mp_init_size(mp_int *a, int size);
The mp_init_size function will initialize the integer and set the
allocated size to a given value.

The library from lcc-win32 does it in a global way, and not in
a per digit basis. That is the only difference.


hehehe ... you're treading into my world now ...

"mp" stands for "multiple precision" as in the algorithms will adjust
the precision as required for you.

The "init_size" function is just for the cases where you know the size
of your numbers and want to avoid the realloc.

Tom
You do not handle floating point then. Lcc-win32 does.
Jul 22 '06 #20
On Sun, 23 Jul 2006 00:05:40 +0200, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:
> Tom St Denis wrote:
>> The "init_size" function is just for the cases where you know the size
>> of your numbers and want to avoid the realloc.
> You do not handle floating point then. <my stuff> does.
Sorry, since when did CLC become a dick-swinging session for two
off-topic posters?
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 22 '06 #21
On 22 Jul 2006 22:02:21 GMT, in comp.lang.c , ri*****@cogsci.ed.ac.uk
(Richard Tobin) wrote:
>In article <sl**********************@localhost.localdomain> ,
Andrew Poelstra <ap*******@localhost.localdomainwrote:
>>> Well, the size of int depends upon the word size of the machine and
>>> is very much architecture dependent.
>>
>> C has no concept of "size of word".

I often find that to explain something you have to refer to concepts
not defined in ISO standards, don't you?
Sure. Whenever I'm trying to explain the difference between futures
and forwards, I need to explain margin and quite possibly time value
of money.

How is that any more relevant to CLC than the concept of wordsize?
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 22 '06 #22
jacob navia wrote:
You do not handle floating point then. Lcc-win32 does.
I don't get your point. my reply was suggesting there are more
portable ways of achieving bignums than hacking the C language.

Personally, if you want to turn this personal, nobody worth a look
uses lcc-win32. No offense, but it was [at least when I used it a few
years ago] the crappiest development suite out there. The resource
editor would often crash, and working with symbols in headers was a
complete pain in the arse. The compiler doesn't optimize the code, and
your extensions are only applicable to Windows. Smart people would use
cygwin for Windows development if they couldn't afford VS. At least
the GCC extensions in that work on other platforms which use GCC.

I'd say if you wanted to really turn lcc-win32 into a product of value,
you have more to worry about than adding bignum support to the
language.

For instance, are you even working on Win64 support? How about real
code optimizations? Are you remotely close to C99 support? etc...

Tom

Jul 22 '06 #23
Mark McIntyre wrote:
On Sun, 23 Jul 2006 00:05:40 +0200, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:

>> Tom St Denis wrote:

>>> The "init_size" function is just for the cases where you know the size
>>> of your numbers and want to avoid the realloc.

>> You do not handle floating point then. <my stuff> does.


sorry, since when did CLC become a dick swinging session for two
offtopic posters ?
Ah, what a good argument!

It immediately reflects the intellectual capacities of the poster.

Basically there are two types of bignum support:
integer-only support, or some kind of "floating" point
support. This is a design decision for the builders of
the package. Mr. St Denis decided to support only integers,
so there is no point in setting sizeof(bignum int) in
his library.

I decided otherwise, so there must be a limit as to how many
bits will be supported by each number.

All this is obviously way beyond what you can understand.

Since you can't do otherwise you just throw obscenities
around and think you are very clever.

The subject of this thread was precisely if sizeof(int) could
be changed. This leads naturally to bignums implementations
where sizeof(int) can be changed at will.

Reading the subject of the thread is also something too difficult for
you.

If you followed the discussion you could have understood something.

But it is obviously much easier to throw some obscenity.

jacob
Jul 22 '06 #24
Tom St Denis wrote:
jacob navia wrote:
>>You do not handle floating point then. Lcc-win32 does.


I don't get your point.
Basically there are two types of bignum support:

integer-only support, or some kind of "floating" point
support.

This is a design decision for the builders of
the package. You decided to support only integers,
so there is no point in setting sizeof(bignum int) in
your library.

I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers. Then the user must decide
what size those numbers should have. Then (and we come here to the
original thread) there is a concrete sense in setting sizeof(int).

[snip off topic]
Jul 22 '06 #25
jacob navia wrote:
Basically there are two types of bignum support:
integer-only support, or some kind of "floating" point
support. This is a design decision for the builders of
the package. Mr. St Denis decided to support only integers,
so there is no point in setting sizeof(bignum int) in
his library.
You don't need to augment your type to support large integers.
Therefore, breaking your "C" compiler is kind of a bad idea.
The subject of this thread was precisely if sizeof(int) could
be changed. This leads naturally to bignums implementations
where sizeof(int) can be changed at will.
One may assume he wanted to change sizeof(int) to support large
numbers. One may think he wants to make a "faster" int or something.
Who knows.

But if the intent is large numbers, a bignum lib would be better.

Tom

Jul 22 '06 #26
In article <44*********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:
>Basically there are two types of bignums support:
In my experience, "bignum" is used to refer to numbers without fixed
size limits (except memory, address space etc). You can't do reals
(rather than integers or rationals) like that, because almost all
operations can return numbers that can't be represented in any finite
size.
>I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.
That sounds like rationals, not floating point. The point about
floating point is that the point floats...

-- Richard
Jul 22 '06 #27
Richard Tobin wrote:
In article <44*********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.fr> wrote:

>>Basically there are two types of bignums support:


In my experience, "bignum" is used to refer to numbers without fixed
size limits (except memory, address space etc). You can't do reals
(rather than integers or rationals) like that, because almost all
operations can return numbers that can't be represented in any finite
size.
True. But they can be approximated by rationals as closely as
memory constraints allow. That is why you have to set a
size for the numbers, so that there is a limit on the approximation.

GMP uses a per-number size; lcc-win32 uses a global size, and
all numbers are the same size.
>
>>I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.


That sounds like rationals, not floating point. The point about
floating point is that the point floats...

-- Richard
Well, I wrote it in quotes. Obviously they are rational numbers.
Jul 22 '06 #28
On Sat, 22 Jul 2006, Andrew Poelstra wrote:
On 2006-07-22, sandy <sa***********@gmail.com> wrote:
> Well, the size of int depends upon the word size of the machine and
> is very much architecture dependent.

C has no concept of "size of word".
ISO/IEC 9899:1999 (E) used the term ``word length'' in F.1
(normative).

Tak-Shing
Jul 22 '06 #29
In article <44*********************@news.orange.fr> jacob navia <ja***@jacob.remcomp.fr> writes:
Tom St Denis wrote:
jacob navia wrote:
>You do not handle floating point then. Lcc-win32 does.
I don't get your point.

Basically there are two types of bignum support:

integer-only support, or some kind of "floating" point
support.
Perhaps. Or only floating point support (the package from Richard Brent).
This is a design decision for the builders of
the package. You decided to support only integers,
then there is no point in setting sizeof(bignum int) in
your library.
For integers there is no need to set a size. This is independent of
whether you also support floating point or not.
I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.
That is *not* floating point, and in most cases can not be suitably used
as a substitute for floating point, as it does not conform to common
requirements from numerical mathematics about floating point (a minimal
exponent range compared to the number of mantissa digits). It is also
not floating slash (a representation of rationals where the numerator
and denominator together occupy about the same amount of storage). It
is a representation of the rationals with fixed limit on the size of
numerator and denominator.

When you represent real numbers with them you get a terribly wobbling
precision. Moreover you will have problems to correctly define what
the operations on such numbers do when the result does not fit in the
representation. Consider such a representation with a single decimal
digit for numerator and denominator. What is the result of 1/9 + 1/8?
Is it 2/9 or 1/4 after rounding? Does 1/9 + 1/7 properly round to 1/4?
And 1/9 + 1/7 to 1/6? So your package is not suitable for numerical
mathematics. Thanks for the warning.
Then, the user must decide
which size should those numbers have. Then, (and we come here to the
original thread) there is a concrete sense in setting sizeof(int).
And that is your problem, you conflate them both in the same type.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jul 23 '06 #30
jacob navia <ja***@jacob.remcomp.frwrites:
[...]
The subject of this thread was precisely if sizeof(int) could
be changed. This leads naturally to bignums implementations
where sizeof(int) can be changed at will.
No, having a bignum library doesn't mean you can change sizeof(int).
It may mean you can change sizeof(bignum).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 23 '06 #31
ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <44*********************@news.orange.fr>,
jacob navia <ja***@jacob.remcomp.frwrote:
>>Basically there are two types of bignums support:

In my experience, "bignum" is used to refer to numbers without fixed
size limits (except memory, address space etc). You can't do reals
(rather than integers or rationals) like that, because almost all
operations can return numbers that can't be represented in any finite
size.
But all simple arithmetic operations (+, -, *, /, even ** with an
integer right operand) yield only rational numbers given rational
operands, so you could represent all the results as ratios of
dynamically sized integer bignums. If you do it right, there should
be no need to specify the size. Of course, for some iterative
calculations the sizes can grow very large very quickly, which might
be worse than imposing a size limit and losing exactness. And the
whole thing breaks down when you introduce other functions such as
sqrt(). (Unless you create a representation that represents square
roots symbolically, but then implementing operations becomes
difficult.)
>>I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.

That sounds like rationals, not floating point. The point about
floating point is that the point floats...
Right. Floating-point numbers are a subset of real numbers; rational
numbers are another subset of real numbers.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 23 '06 #32
On 2006-07-22, Richard Tobin <ri*****@cogsci.ed.ac.uk> wrote:
In article <sl**********************@localhost.localdomain> ,
Andrew Poelstra <ap*******@localhost.localdomain> wrote:
>>> Well, the size of int depends upon the word size of the machine and
>>> is very much architecture dependent.
>>
>> C has no concept of "size of word".

I often find that to explain something you have to refer to concepts
not defined in ISO standards, don't you?
Absolutely, and normally I wouldn't have called you on that, but in
this case, there are compilers which /don't/ conform to the processor's
register sizes. For example, gcc on an i386 has a "long long" type in
C99 mode which is 64 bits, twice as long as my processor's biggest
register.

Of course, this is all terribly off-topic on clc.

--
Andrew Poelstra <website down>
My server is down; you can't mail
me, nor can I post convieniently.
Jul 23 '06 #33
On 2006-07-22, Tak-Shing Chan <t.****@gold.ac.uk> wrote:
On Sat, 22 Jul 2006, Andrew Poelstra wrote:
> On 2006-07-22, sandy <sa***********@gmail.com> wrote:
>> Well, the size of int depends upon the word size of the machine and
>> is very much architecture dependent.
>
> C has no concept of "size of word".

ISO/IEC 9899:1999 (E) used the term ``word length'' in F.1
(normative).
While I'm not sure of the context, and with my downed server
to worry about I can't go check,

I stand corrected.

--
Andrew Poelstra <website down>
My server is down; you can't mail
me, nor can I post convieniently.
Jul 23 '06 #34
>>> Well, the size of int depends upon the word size of the machine and
>>> is very much architecture dependent.
>>
>> C has no concept of "size of word".

I often find that to explain something you have to refer to concepts
not defined in ISO standards, don't you?
I'd love to see the standard try to define such terms as "buggy code",
"atrocious style", and, when referring to something an application
is supposed to compute, a "dollar".
>Absolutely, and normally I wouldn't have called you on that, but in
this case, there are compilers which /don't/ conform to the processor's
register sizes. For example, gcc on an i386 has a "long long" type in
C99 mode which is 64 bits, twice as long as my processor's biggest
register.
Another much OLDER non-conformance to CPU register sizes: C on a
PDP-11 had a "long" type, which was 32 bits, even though there were
no general registers that long. And no 32-bit add instruction.

Gordon L. Burditt
Jul 23 '06 #35
Dik T. Winter wrote:
In article <44*********************@news.orange.fr> jacob navia <ja***@jacob.remcomp.fr> writes:
Tom St Denis wrote:
jacob navia wrote:

>You do not handle floating point then. Lcc-win32 does.

I don't get your point.
>
Basically there are two types of bignum support:
>
integer-only support, or some kind of "floating" point
support.

Perhaps. Or only floating point support (the package from Richard Brent).
This is a design decision for the builders of
the package. You decided to support only integers,
so there is no point in setting sizeof(bignum int) in
your library.

For integers there is no need to set a size. This is independent of
whether you also support floating point or not.
I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.

That is *not* floating point, and in most cases can not be suitably used
as a substitute for floating point, as it does not conform to common
requirements from numerical mathematics about floating point (a minimal
exponent range compared to the number of mantissa digits). It is also
not floating slash (a representation of rationals where the numerator
and denominator together occupy about the same amount of storage). It
is a representation of the rationals with fixed limit on the size of
numerator and denominator.
No, it is floating slash, and it has been working for several years. I
started working on it for my lisp interpreter, which I developed under
MSDOS around 1989.
>
When you represent real numbers with them you get a terribly wobbling
precision. Moreover you will have problems to correctly define what
the operations on such numbers do when the result does not fit in the
representation. Consider such a representation with a single decimal
digit for numerator and denominator. What is the result of 1/9 + 1/8?
Is it 2/9 or 1/4 after rounding? Does 1/9 + 1/7 properly round to 1/4?
And 1/9 + 1/7 to 1/6? So your package is not suitable for numerical
mathematics. Thanks for the warning.
You are welcome. First you misunderstand, then you start drawing
absurd conclusions. I am speaking about indefinitely long integers,
not just two word-size integers!
>
Then, the user must decide
which size should those numbers have. Then, (and we come here to the
original thread) there is a concrete sense in setting sizeof(int).

And that is your problem, you conflate them both in the same type.
????
I wrote sizeof (bgnum int)
Jul 23 '06 #36
jacob navia <ja***@jacob.remcomp.fr> writes:
Dik T. Winter wrote:
>In article <44*********************@news.orange.fr>, jacob navia
<ja***@jacob.remcomp.fr> writes:
[...]
> Then, the user must decide
which size should those numbers have. Then, (and we come here to the
original thread) there is a concrete sense in setting sizeof(int).
And that is your problem, you conflate them both in the same type.

????
I wrote sizeof (bgnum int)
Are you saying you were misquoted? I just checked your previous
article (3 steps up from this one), and you did write "... a concrete
sense in setting sizeof(int)." Perhaps that's not what you meant to
write, but it's what showed up under your name on my server.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 23 '06 #37
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:

>In my experience, "bignum" is used to refer to numbers without fixed
size limits (except memory, address space etc). You can't do reals
(rather than integers or rationals) like that, because almost all
operations can return numbers that can't be represented in any finite
size.
>But all simple arithmetic operations (+, -, *, /, even ** with an
integer right operand) yield only rational numbers given rational
operands, so you could represent all the results as ratios of
dynamically sized integer bignums. If you do it right, there should
be no need to specify the size.
Yes, this is exactly what, for example, Common Lisp does. I intended
to make that clear when I said "rather than integers or rationals".

What I meant to say is that a floating-point-like representation of
reals can't precisely represent many results even when the
representation can use arbitrarily many digits.

-- Richard
Jul 23 '06 #38
Richard Tobin wrote:
In article <ln************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>>>In my experience, "bignum" is used to refer to numbers without fixed
size limits (except memory, address space etc). You can't do reals
(rather than integers or rationals) like that, because almost all
operations can return numbers that can't be represented in any finite
size.

>>But all simple arithmetic operations (+, -, *, /, even ** with an
integer right operand) yield only rational numbers given rational
operands, so you could represent all the results as ratios of
dynamically sized integer bignums. If you do it right, there should
be no need to specify the size.


Yes, this is exactly what, for example, Common Lisp does. I intended
to make that clear when I said "rather than integers or rationals".

What I meant to say is that a floating-point-like representation of
reals can't precisely represent many results even when the
representation can use arbitrarily many digits.

-- Richard
That is true, as with any other representation of the reals. True
floating point has the same problems, and that representation cannot
represent many reals either. There is no silver bullet.
Jul 23 '06 #39
Keith Thompson wrote:
jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
>>The subject of this thread was precisely if sizeof(int) could
be changed. This leads naturally to bignums implementations
where sizeof(int) can be changed at will.


No, having a bignum library doesn't mean you can change sizeof(int).
It may mean you can change sizeof(bignum).
Obviously. The "int" type is fixed, but in a bignum
package the size of the bignums may vary. I wrote
elsewhere sizeof (bignum int) but not everywhere

Sorry about this confusion
Jul 23 '06 #40
On Sun, 23 Jul 2006 00:39:38 +0200, in comp.lang.c , jacob navia
<ja***@jacob.remcomp.fr> wrote:
>Mark McIntyre wrote:
>>
sorry, since when did CLC become a dick swinging session for two
offtopic posters ?

Ah, what a good argument!
Thanks. I thought it was quite apt too.
>It reflects immediately the intellectual capacities of the poster.
Or perhaps it reflects immediately my boredom with your "mine's bigger
than yours" argument with Tom. Still, at least I didn't choose to
gratuitously insult either of your intelligences.
>Reading the subject of the thread is also something too difficult for
you.
Yeah, whatever. If you choose to consider yourself superior, there's a
word for that, even in French.
>If you followed the discussion you could have understood something.
I followed it right up to the point at which it became a session of
oneupmansship between two offtopic posters.
>But it is obviously much easier to throw some obscenity.
Apparently you don't know what the phrase means. I suggest you read
"Liar's Poker".
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 23 '06 #41
On Sun, 23 Jul 2006 01:54:17 GMT, in comp.lang.c , Andrew Poelstra
<ap*******@localhost.localdomain> wrote:
>On 2006-07-22, Tak-Shing Chan <t.****@gold.ac.uk> wrote:
> ISO/IEC 9899:1999 (E) used the term ``word length'' in F.1
(normative).
While I'm not sure of the context, and with my downed server
to worry about I can't go check,
Floating point. The context is that IEEE 754 /removes/ the dependency on
word length. I strongly suspect it's got nothing to do with C as such.
>I stand corrected.
I don't think so!
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 23 '06 #42
On Sun, 23 Jul 2006, Mark McIntyre wrote:
On Sun, 23 Jul 2006 01:54:17 GMT, in comp.lang.c , Andrew Poelstra
<ap*******@localhost.localdomain> wrote:
>On 2006-07-22, Tak-Shing Chan <t.****@gold.ac.uk> wrote:
>> ISO/IEC 9899:1999 (E) used the term ``word length'' in F.1
(normative).
While I'm not sure of the context, and with my downed server
to worry about I can't go check,

Floating point. The context is that IEEE 754 /removes/ the dependency on
word length. I strongly suspect it's got nothing to do with C as such.
What do you mean by ``C as such''? All I am saying is that
C does have a ``concept'' of word length, even though the
behaviour of the abstract machine does not depend on it.

Tak-Shing
Jul 23 '06 #43
Mark McIntyre <ma**********@spamcop.net> writes:
On Sun, 23 Jul 2006 01:54:17 GMT, in comp.lang.c , Andrew Poelstra
<ap*******@localhost.localdomain> wrote:
>>On 2006-07-22, Tak-Shing Chan <t.****@gold.ac.uk> wrote:
>> ISO/IEC 9899:1999 (E) used the term ``word length'' in F.1
(normative).
While I'm not sure of the context, and with my downed server
to worry about I can't go check,

Floating point. The context is that IEEE 754 /removes/ the dependency on
word length. I strongly suspect it's got nothing to do with C as such.
>>I stand corrected.

I don't think so!
Here's the actual quotation from C99 F.1 (the introduction to Annex F
(normative), "IEC 60559 floating-point arithmetic"):

This annex specifies C language support for the IEC 60559
floating-point standard. The IEC 60559 floating-point standard is
specifically Binary floating-point arithmetic for microprocessor
systems, second edition (IEC 60559:1989), previously designated
IEC 559:1989 and as IEEE Standard for Binary Floating-Point
Arithmetic (ANSI/IEEE 754-1985). IEEE Standard for
Radix-Independent Floating-Point Arithmetic (ANSI/IEEE 854-1987)
generalizes the binary standard to remove dependencies on radix
and word length. IEC 60559 generally refers to the floating-point
standard, as in IEC 60559 operation, IEC 60559 format, etc. An
implementation that defines __STDC_IEC_559__ shall conform to the
specifications in this annex. Where a binding between the C
language and IEC 60559 is indicated, the IEC 60559-specified
behavior is adopted by reference, unless stated otherwise.

I haven't bothered to indicate italics.

So "word length" refers only to the difference between the IEEE 754
and 854 standards. I don't believe this paragraph implies that the C
standard uses the concept of "word length".

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Jul 23 '06 #44
On Sun, 23 Jul 2006, Keith Thompson wrote:
So "word length" refers only to the difference between the IEEE 754
and 854 standards. I don't believe this paragraph implies that the C
standard uses the concept of "word length".
There is a big difference between ``has [the] concept of''
(Poelstra, 2006-07-22) and ``uses the concept of'' (Thompson,
2006-07-23). ISO/IEC 9899:1999 (E) certainly has a concept of
word length (otherwise F.1 would be uninterpretable), but this
concept is never used in the rest of the standard.

Tak-Shing
Jul 23 '06 #45
On Sun, 23 Jul 2006 20:05:55 +0100, in comp.lang.c , Tak-Shing Chan
<t.****@gold.ac.uk> wrote:
>On Sun, 23 Jul 2006, Mark McIntyre wrote:

What do you mean by ``C as such''? All I am saying is that
C does have a ``concept'' of word length, even though the
behaviour of the abstract machine does not depend on it.
I disagree. The very para you quote is a reference to a different
standard; it does NOT refer to any feature of C.

<extreme analogy>
You could as well claim that C has a concept of ISO, because it's
mentioned on several pages.
</end>
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 23 '06 #46
On Sun, 23 Jul 2006 21:18:48 +0100, in comp.lang.c , Tak-Shing Chan
<t.****@gold.ac.uk> wrote:
>On Sun, 23 Jul 2006, Keith Thompson wrote:
>So "word length" refers only to the difference between the IEEE 754
and 854 standards. I don't believe this paragraph implies that the C
standard uses the concept of "word length".

There is a big difference between ``has [the] concept of''
(Poelstra, 2006-07-22) and ``uses the concept of'' (Thompson,
2006-07-23). ISO/IEC 9899:1999 (E) certainly has a concept of
word length (otherwise F.1 would be uninterpretable), but this
concept is never used in the rest of the standard.
I've already said this elsethread but to recap:

in that case, C also has the concept of ISO, and of Annex. I suspect
that is nonsense.

My feeling is that you searched for "word length" and got a hit, but
didn't thoroughly read it. If so, no problem but stop digging.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Jul 23 '06 #47
On Sun, 23 Jul 2006, Mark McIntyre wrote:
On Sun, 23 Jul 2006 20:05:55 +0100, in comp.lang.c , Tak-Shing Chan
<t.****@gold.ac.uk> wrote:
>On Sun, 23 Jul 2006, Mark McIntyre wrote:

What do you mean by ``C as such''? All I am saying is that
C does have a ``concept'' of word length, even though the
behaviour of the abstract machine does not depend on it.

I disagree. The very para you quote is a reference to a different
standard, it does NOT refer to any feature of C.

<extreme analogy>
You could as well claim that C has a concept of ISO, because it's
mentioned on several pages.
</end>
Your analogy is broken. The paragraph I quoted refers to a
normative reference of the standard. According to the standard:

``The following normative documents contain provisions
which, through reference in this text, constitute provisions of
this International Standard'' (ISO/IEC 9899:1999, 2 para. 1).

The standard continues: ``IEC 60559:1989, Binary
floating-point arithmetic for microprocessor systems (previously
designated IEC 559:1989)'' (ISO/IEC 9899:1999, 2 para. 8).

Then we have, in Annex F, an explicit reference to
IEC 60559:1989, making it a provision of C as well.

Tak-Shing
Jul 23 '06 #48
On Sun, 23 Jul 2006, Mark McIntyre wrote:
On Sun, 23 Jul 2006 21:18:48 +0100, in comp.lang.c , Tak-Shing Chan
<t.****@gold.ac.uk> wrote:
>On Sun, 23 Jul 2006, Keith Thompson wrote:
>>So "word length" refers only to the difference between the IEEE 754
and 854 standards. I don't believe this paragraph implies that the C
standard uses the concept of "word length".

There is a big difference between ``has [the] concept of''
(Poelstra, 2006-07-22) and ``uses the concept of'' (Thompson,
2006-07-23). ISO/IEC 9899:1999 (E) certainly has a concept of
word length (otherwise F.1 would be uninterpretable), but this
concept is never used in the rest of the standard.

I've already said this elsethread but to recap:

in that case, C also has the concept of ISO, and of Annex. I suspect
that is nonsense.
No, because the concepts of ``ISO'' and ``Annex'' are not
backed by normative references (ISO/IEC 9899:1999, clause 2).

Tak-Shing
Jul 23 '06 #49
In article <44*********************@news.orange.fr>, jacob navia <ja***@jacob.remcomp.fr> writes:
Dik T. Winter wrote:
....
I decided to support "floating" point, i.e. to represent the bignums
by a ratio of two indefinitely long integers.
That is *not* floating point, and in most cases can not be suitably used
as a substitute for floating point, as it does not conform to common
requirements from numerical mathematics about floating point (a minimal
exponent range compared to the number of mantissa digits). It is also
not floating slash (a representation of rationals where the numerator
and denominator together occupy about the same amount of storage). It
is a representation of the rationals with fixed limit on the size of
numerator and denominator.

No, it is floating slash, and it has been working for several years.
It is not, it is fixed slash. In fixed slash notation you represent
the numerator and denominator with fixed size representation. In
floating slash notation you use a number of bits that say where the
slash is (say the value is m), then the denominator is m+1 bits (or
m bits if there is a hidden most significant bit), the remainder
of the notation is the numerator. (In what way does your slash float?)
Say your slash designator is k bits wide, in that case m ranges from
0 to 2^k - 1. So the size of the denominator ranges from 1 to 2^k bits
and the size of the numerator ranges from n - k down to
n - 2^k bits (where n is the total number of bits allocated, and assuming
hidden bit in the numerator).

Floating slash sees very limited use because the operations are fairly
complex. Fixed slash is used a bit more, but rounding is complicated.
When you represent real numbers with them you get a terribly wobbling
precision. Moreover you will have problems to correctly define what
the operations on such numbers do when the result does not fit in the
representation. Consider such a representation with a single decimal
digit for numerator and denominator. What is the result of 1/9 + 1/8?
Is it 2/9 or 1/4 after rounding? Does 1/9 + 1/7 properly round to 1/4?
And 1/9 + 1/7 to 1/6? So your package is not suitable for numerical
mathematics. Thanks for the warning.

You are welcome. First you misunderstand, then you start drawing
absurd conclusions. I am speaking about indefinitely long integers,
not just two word-size integers!
You were *not* talking about indefinitely long integers. Your package
has integers of a fixed size. My example was to show the difficulty
in rounding operations when using fixed slash notation, and I am not
talking about "word size integers", I am talking about integers with a
fixed size.
>
Then, the user must decide
which size should those numbers have. Then, (and we come here to the
original thread) there is a concrete sense in setting sizeof(int).
And that is your problem, you conflate them both in the same type.

????
I wrote sizeof (bgnum int)
No, you did not write that, see your original article. But what I was
stating was that you use a fixed slash notation type with fixed size
integers for both integers and rationals. There is no reason to do
so. If you had split the two types, there would have been no need to fix
the size of the integers. It makes sense to fix the size of the integers used in
the fixed slash notation for rationals, otherwise the growth would be
uncontrolled (with fixed slash notation the size of the integers needed
can easily double with every operation, even addition).

But my question still stands. How do you do rounding of results in your
notation? Do you always obtain the nearest representable result? Or
what? Those are things a numerical mathematician needs to know. He/she
can even be satisfied when you state that the result is not always the
nearest representable number, as long as you give the error bounds.
One of my gripes with the Cray 1 was that it nowhere gave error bounds
on the operations, you had to read the hardware reference manual to
find that a multiplication of two operands (with 48 bit mantissa) could
give a result that was only precise in 45 bits. If they had stated that
upfront, that would have simplified error analysis.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Jul 24 '06 #50
