Bytes | Developer Community

Natural size: int


On modern 32-Bit PC's, the following setup is common:

char: 8-Bit
short: 16-Bit
int: 32-Bit
long: 32-Bit

"char" is commonly used to store text characters.
"short" is commonly used to store large arrays of numbers, or perhaps wide
text characters (via wchar_t).
"int" is commonly used to store an integer.
"long" is commonly used to store an integer greater than 65535.

Now that 64-Bit machines are coming in, how should the integer types be
distributed? It makes sense that "int" should be 64-Bit... but what should
be done with "char" and "short"? Would the following be a plausible setup?

char: 8-Bit
short: 16-Bit
int: 64-Bit
long: 64-Bit

Or perhaps should "short" be 32-Bit? Or should "char" become 16-Bit (i.e.
16 == CHAR_BIT)?

Another semi-related question:

If we have a variable which shall store the quantity of elements in an
array, then should we use "size_t"? On a system where "size_t" maps to
"long unsigned" rather than "int unsigned", it would seem to be inefficient
most of the time. "int unsigned" guarantees us at least 65535 array
elements -- what percentage of the time do we have an array any bigger than
that? 2% maybe? Therefore would it not make sense to use unsigned rather
than size_t to store array lengths (or the positive result of subtracting
pointers)?

--

Frederick Gotham
Aug 8 '06
Ian Collins wrote:
Ancient_Hacker wrote:
>><soap>
I think it's high time a language has the ability to do the very basic
and simple things programmers need to write portable software: the
ability to specify, unambiguously, what range of values they need to
represent, preferably in decimal, binary, floating, fixed, and even
arbitrary bignum formats. Not to mention hardware-dependent perhaps
bit widths. There's no need for the compiler to be able to actually
*do* any arbitrarily difficult arithmetic, but at least give the
programmer the ability to ASK and if the compiler is capable, and get
DEPENDABLE math. I don't think this is asking too much.

Isn't this a case of if the cap doesn't fit, use another one?

You could achieve what you describe in a language that supports operator
overloading on user defined types.
Yes. That is why lcc-win32 proposes to enhance the language with that
feature. It is needed in MANY situations and it doesn't complexify
the language at all.

operator overloading is a MUST for C.

P.S. flames >/dev/null
Aug 10 '06 #51
Keith Thompson writes:
Here's a thought.

Define a new type declaration syntax:

signed(expr1, expr2) is a signed integer type that can hold values
in the range expr1 to expr2, both of which are integer constant
expressions. This will simply refer to some existing predefined
integer type;
(..snip..)
Go ahead and write up a proposal:-) But check out the SBEIR proposal
first:

Specification-Based Extended Integer Range, Revision 3 (5.1)
(WG14/N459 X3J11/95-060) date 1995-08-25, by Farance & Rugolsky
http://wwwold.dkuug.dk/JTC1/SC22/WG1...nded-integers/

More ambitious - maybe too ambitious, and with a somewhat different
goal. Turned out to be a lot of tricky issues involved. There was a
lot of discussion about it on comp.std.c and (I think) in the C standard
committee, but in the end it was left out. Don't remember why - too
late and too hairy, maybe.

--
Hallvard

Aug 10 '06 #52
av
On 9 Aug 2006 05:32:32 -0700, Ancient_Hacker wrote:
>The funny thing is this issue was partly solved in 1958, 1964, and in
1971.

In 1958 Grace Hopper and Co. designed COBOL so you could actually
declare variables and their allowed range of values! IIRC something
like:

001 DECLARE MYPAY PACKED-DECIMAL PICTURE "999999999999V999"
001 DECLARE MYPAY USAGE IS COMPUTATIONAL-/1/2/3

Miracle! A variable with predictable and reliable bounds! Zounds!
The way I see it, to do all of this:

1) you need a routine to validate the allowed input range
2) you need a float number type with fixed precision (set by the
programmer or user for each program) that can keep calculating as long
as there is memory left in the system (so it never overflows)
Aug 10 '06 #53
jacob navia wrote:
Ian Collins wrote:
>Ancient_Hacker wrote:
>><soap>
I think it's high time a language has the ability to do the very basic
and simple things programmers need to write portable software: the
ability to specify, unambiguously, what range of values they need to
represent, preferably in decimal, binary, floating, fixed, and even
arbitrary bignum formats. Not to mention hardware-dependent perhaps
bit widths. There's no need for the compiler to be able to actually
*do* any arbitrarily difficult arithmetic, but at least give the
programmer the ability to ASK and if the compiler is capable, and get
DEPENDABLE math. I don't think this is asking too much.

Isn't this a case of if the cap doesn't fit, use another one?

You could achieve what you describe in a language that supports operator
overloading on user defined types.

Yes. That is why lcc-win32 proposes to enhance the language with that
feature. It is needed in MANY situations and it doesn't complexify
the language at all.
The canonical word is "complify", by analogy with "simplify".
operator overloading is a MUST for C.
Meh. Discussions on the pros and cons of operator overloading are plenty,
but I hardly see why C of all languages would need it so badly. C has
limited support for user-defined types to begin with; I'd hate to see
operator overloading be introduced as a crutch.

For numerical applications, where operator overloading would come in most
naturally, C has always been a bit of a specialist; fast but requiring some
care. Syntactic sugar is of less concern to those applications than portable
and fast calculations are. You'll effectively achieve that people will be
able to write "d = a * b + c" rather than "d =
xfloat128_add(xfloat128_mult(a, b), c)", which is nice but not spectacular.
Readability of the individual calculations is usually not high on the
priority list.

If you want wins in this area, introduce genericity. It does for semantics
what overloading does for syntax. C currently doesn't support generic
functions well; implementations that use macros or void* are clearly
problematic. Of course, don't go hog-wild crazy with the concept, like C++
did -- templates, overloading and implicit conversions all conspire there to
form name resolution semantics that are full of gotchas.
P.S. flames >/dev/null
Well, there's no point arguing with an opinion given without justification
anyway.

S.
Aug 10 '06 #54

Al Balmer wrote:
On Wed, 09 Aug 2006 20:40:52 +0200, Skarmander
<in*****@dontmailme.com> wrote:
You can use a typedef to abstract away from the actual type and convey the
purpose to humans, but C does not allow true subtyping, so you'd have to do
any range checks yourself. This obviously encourages a style where these
checks are done as little as possible, or possibly never, which is a clear
drawback.

Perhaps, but it also allows a style where such checks are done only
when necessary.
So does Ada, or Cobol, or most any other programming
language. Just an observation.

Aug 10 '06 #55

Keith Thompson wrote:
>
Here's a thought.

Define a new type declaration syntax:

signed(expr1, expr2) is a signed integer type that can hold values
in the range expr1 to expr2, both of which are integer constant
expressions. This will simply refer to some existing predefined
integer type; for example, signed(-32768, 32767) might just mean
short. The resulting type isn't necessarily distinct from any
other type.

unsigned(expr1, expr2): as above, but unsigned.

If an expression of one of these types yields a result outside the
declared bounds, the behavior is undefined. (Or it's
implementation-defined, chosen from some set of possibilities.)

All an implementation *has* to do is map each of these declarations to
some predefined type that meets the requirements. For example, if I
do this:

signed(-10, 10) x;
x = 7;
x *= 2;

I might get the same code as if I had written:

int x;
x = 7;
x *= 2;

But a compiler is *allowed* to perform range-checking.

The effort to implement this correctly would be minimal, but it would
allow for the possibility of full range-checking for integer
operations.

Any thoughts?
If you think about the ramifications of &x and what
it means to have pointers/arrays for these things,
you'll probably find it's a much bigger language
change than it first appears. Not to say it can't
be done, but it is a big change.

Aug 10 '06 #56
Skarmander wrote:
jacob navia wrote:
>operator overloading is a MUST for C.
Meh. Discussions on the pros and cons of operator overloading are
plenty, but I hardly see why C of all languages would need it so badly.
C has limited support for user-defined types to begin with; I'd hate to
see operator overloading be introduced as a crutch.

For numerical applications, where operator overloading would come in
most naturally, C has always been a bit of a specialist; fast but
requiring some care. Syntactic sugar is of less concern to those
applications than portable and fast calculations are. You'll
effectively achieve that people will be able to write "d = a * b + c"
rather than "d = xfloat128_add(xfloat128_mult(a, b), c)", which is nice
but not spectacular. Readability of the individual calculations is
usually not high on the priority list.
Look, if you have
double a,b,c,d;
...
d = a*b+c;

you need more precision? You do:
qfloat a,b,c,d;
...
d = a*b+c;

You see?
Overloaded operators make your code more PORTABLE! You do not have to
rewrite it for the new type but you can use the same algorithms with
the new numeric type WITHOUT REWRITING EVERYTHING.

d = xfloat128_add(xfloat128_mult(a, b), c);

Doing this is HORRIBLE (I have done it several times)

If you want wins in this area, introduce genericity. It does for
semantics what overloading does for syntax. C currently doesn't support
generic functions well; implementations that use macros or void* are
clearly problematic. Of course, don't go hog-wild crazy with the
concept, like C++ did -- templates, overloading and implicit conversions
all conspire there to form name resolution semantics that are full of
gotchas.
>P.S. flames >/dev/null


Well, there's no point arguing with an opinion given without
justification anyway.

S.
Aug 10 '06 #57
jacob navia wrote:
Skarmander wrote:
>jacob navia wrote:
>>operator overloading is a MUST for C.
Meh. Discussions on the pros and cons of operator overloading are
plenty, but I hardly see why C of all languages would need it so
badly. C has limited support for user-defined types to begin with; I'd
hate to see operator overloading be introduced as a crutch.

For numerical applications, where operator overloading would come in
most naturally, C has always been a bit of a specialist; fast but
requiring some care. Syntactic sugar is of less concern to those
applications than portable and fast calculations are. You'll
effectively achieve that people will be able to write "d = a * b + c"
rather than "d = xfloat128_add(xfloat128_mult(a, b), c)", which is
nice but not spectacular. Readability of the individual calculations
is usually not high on the priority list.

Look, if you have
double a,b,c,d;
...
d = a*b+c;

you need more precision? You do:
qfloat a,b,c,d;
...
d = a*b+c;

You see?
Like I said, nice but not spectacular. You won't need search & replace as
much if you decide to switch types. Current C programs achieve this mostly
through typedefs and macros, which is only marginally worse.
Overloaded operators make your code more PORTABLE! You do not have to
rewrite it for the new type but you can use the same algorithms with
the new numeric type WITHOUT REWRITING EVERYTHING.
Poor man's polymorphism. This sort of thing works much better in a language
with a unified type system, where you can actually create new integer subtypes.
d = xfloat128_add(xfloat128_mult(a, b), c);

Doing this is HORRIBLE (I have done it several times)
I'm not saying operator overloading is useless, but I do contend it's not as
impressive as programmers seem to think. It frees you from typing (as in
keypresses, not as in types) but wants you to think carefully about what
function could be invoked with every expression you write down, lest nasty
surprises befall you. The lossage it causes in combination with promotions
is notorious in C++.

S.
Aug 10 '06 #58
Hallvard B Furuseth <h.**********@usit.uio.no> writes:
Keith Thompson writes:
>Here's a thought.

Define a new type declaration syntax:

signed(expr1, expr2) is a signed integer type that can hold values
in the range expr1 to expr2, both of which are integer constant
expressions. This will simply refer to some existing predefined
integer type;
(..snip..)

Go ahead and write up a proposal:-) But check out the SBEIR proposal
first:

Specification-Based Extended Integer Range, Revision 3 (5.1)
(WG14/N459 X3J11/95-060) date 1995-08-25, by Farance & Rugolsky
http://wwwold.dkuug.dk/JTC1/SC22/WG1...nded-integers/

More ambitious - maybe too ambitious, and with a somewhat different
goal. Turned out to be a lot of tricky issues involved. There was a
lot of discussion about it on comp.std.c and (I think) in the C standard
committee, but in the end it was left out. Don't remember why - too
late and too hairy, maybe.
Looks like I don't need to write up a proposal. 8-)}

C99's <stdint.h> is, IMHO, a great improvement over C90's, well, lack
of <stdint.h>, but the SBEIR proposal is much more general. I like
it.

One suggestion I would have made is to allow a specification of the
required range of a type, rather than just the number of bits. Bit
counts, IMHO, place too much emphasis on the representation of a type
rather than what it's used for. Sometimes I just want to know how
many widgets I can count, not how many bits the computer will use to
count them. Of course, sometimes the representation is important; I'd
advocate ranges *in addition to* the number of bits, not as a
replacement.

Another thing: the syntax for suffixed literals is becoming unwieldy
in C, and more so in the SBEIR proposal. For example, SBEIR proposes
that 456P32U specifies an unsigned 32-bit constant with the value 456.
(I would at least allow an underscore: 456_P32U.) The problem is that
there's an extremely terse syntax that parallels the syntax of integer
type names. Instead, I would have suggested using the actual type
name to specify the type of an integer literal. One way to do this is
to use cast syntax: in C99 <stdint.h> terms, (uint32_t)456. In actual
C99, the type of the literal is determined purely by the literal
itself (int in this case), and the cast specifies a conversion to the
specified type. Instead, a special rule could say that when a numeric
literal is the immediate operand of a cast to a numeric type, the
literal has the specified type rather than the default one. Or a
different syntax could have been introduced (Ada, for example,
distinguishes between conversions and qualified expressions).

Of course, the existing suffixes would still have to be supported.

To get back to some semblance of topicality, I wonder how much of this
stuff could be done within standard C. Certainly a compiler could
provide it as an extension, but that doesn't help anyone trying to
write portable code.

Is it possible to write a macro such that
UNSIGNED_TYPE(1000000)
portably expands to either "unsigned int" or "unsigned long",
depending on which type can hold the specified value? I suspect not.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 10 '06 #59
Skarmander posted:
Like I said, nice but not spectacular. You won't need search & replace
as much if you decide to switch types. Current C programs achieve this
mostly through typedefs and macros, which is only marginally worse.

Are you familiar with C++ at all? If so, you'd know that classes and
templates are far more than simple macro and typedef substitution.

It frees you from typing (as in keypresses, not as in types) but wants
you to think carefully about what function could be invoked with every
expression you write down, lest nasty surprises befall you. The lossage
it causes in combination with promotions is notorious in C++.

Operator overloading allows you to interact with user-defined types in the
same way as you would interact with an intrinsic type, e.g.:

string str("Hello");

str += " world";

Templates allow you to write one function (or struct, or class), whose code
the compiler can re-use to make a function which deals with different
types.

Put them together and you've got a very powerful tool. A sample usage might
be to write a class called "Int1024" which is a 1024-Bit integer, and to
use operator overloading to make it behave like a built-in integer, so that
we can simply do:

Int1024 i = 56;

i *= 3;

Then we could write a template function which can work with any kind of
integer type, be it built-in or user-defined:

template<typename T>
void DoSomething(void)
{
T obj = 8;

obj *= 5;
}

int main(void)
{
DoSomething<int>();

DoSomething<Int1024>();
}

--

Frederick Gotham
Aug 10 '06 #60

"Keith Thompson" <ks***@mib.orgha scritto nel messaggio
news:ln************@nuthaus.mib.org...

[...]
Is it possible to write a macro such that
UNSIGNED_TYPE(1000000)
portably expands to either "unsigned int" or "unsigned long",
depending on which type can hold the specified value? I suspect not.
#define UNSIGNED_TYPE(x) x ## u

--
Giorgio Silvestri
DSP/Embedded/Real Time OS Software Engineer

Aug 10 '06 #61

"Giorgio Silvestri" <gi**************@libero.itha scritto nel messaggio
news:gD*********************@twister1.libero.it...
>
"Keith Thompson" <ks***@mib.orgha scritto nel messaggio
news:ln************@nuthaus.mib.org...

[...]
Is it possible to write a macro such that
UNSIGNED_TYPE(1000000)
portably expands to either "unsigned int" or "unsigned long",
depending on which type can hold the specified value? I suspect not.

#define UNSIGNED_TYPE(x) x ## u

Oops. You probably mean textually "unsigned int" or "unsigned long" ...

--
Giorgio Silvestri
DSP/Embedded/Real Time OS Software Engineer

Aug 10 '06 #62
jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
Look, if you have
double a,b,c,d;
...
d = a*b+c;

you need more precision? You do:
qfloat a,b,c,d;
...
d = a*b+c;

You see?
Yes, jacob, we understand. Most of us know what operator overloading
is.

C doesn't have it. We discuss C here. I'm very happy for you that
you've implemented what may or may not be a useful extension, but it's
off-topic here.

If you want to propose a change to the language, take it to
comp.std.c. If you want to discuss some non-standard extension, take
it to comp.compilers.lcc.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 10 '06 #63
"Giorgio Silvestri" <gi**************@libero.itwrites:
"Giorgio Silvestri" <gi**************@libero.itha scritto nel messaggio
news:gD*********************@twister1.libero.it...
>"Keith Thompson" <ks***@mib.orgha scritto nel messaggio
news:ln************@nuthaus.mib.org...
[...]
Is it possible to write a macro such that
UNSIGNED_TYPE(1000000)
portably expands to either "unsigned int" or "unsigned long",
depending on which type can hold the specified value? I suspect not.

#define UNSIGNED_TYPE(x) x ## u

Oops. You probably mean textually "unsigned int" or "unsigned long" ...
Yes. For example:

UNSIGNED_TYPE(1000000) obj;

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 10 '06 #64


Keith Thompson wrote On 08/10/06 17:00,:
"Giorgio Silvestri" <gi**************@libero.itwrites:
>>"Giorgio Silvestri" <gi**************@libero.itha scritto nel messaggio
news:gD*********************@twister1.libero.it. ..
>>>"Keith Thompson" <ks***@mib.orgha scritto nel messaggio
news:ln************@nuthaus.mib.org...
[...]

Is it possible to write a macro such that
UNSIGNED_TYPE(1000000)
portably expands to either "unsigned int" or "unsigned long",
depending on which type can hold the specified value? I suspect not.

#define UNSIGNED_TYPE(x) x ## u

Oops. You probably mean textually "unsigned int" or "unsigned long" ...


Yes. For example:

UNSIGNED_TYPE(1000000) obj;
All I can think of is

#include <limits.h>
#define T unsigned int
typedef T T0;
typedef T T1;
...
typedef T T65535;
#if UINT_MAX == 65535
#undef T
#define T unsigned long
#endif
typedef T T65536;
typedef T T65537;
...
typedef T T131071;
#if UINT_MAX == 131071
#undef T
#define T unsigned long
#endif
typedef T T131072;
...
typedef T T4294967295;
... /* !!! */
#undef T /* for cleanliness' sake */
#define UNSIGNED_TYPE(x) T ## x

... which might be deemed less than elegant.

--
Er*********@sun.com

Aug 10 '06 #65


Keith Thompson wrote On 08/10/06 16:58,:
jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
>>Look, if you have
double a,b,c,d;
...
d = a*b+c;

you need more precision? You do:
qfloat a,b,c,d;
...
d = a*b+c;

You see?


Yes, jacob, we understand. Most of us know what operator overloading
is.

C doesn't have it. [...]
Please explain why binary `-' should not be considered
overloaded, given that its operands can be any of the twelve
basic arithmetic types (after promotion), or any additional
non-promotable types the implementation might provide, or any
object pointer type paired with a promoted integer type, or
any "commensurate" object pointer types.

Off-hand, I cannot think of any C operator that is *not*
overloaded. No, not even the comma operator.

--
Er*********@sun.com

Aug 10 '06 #66
Eric Sosman <Er*********@sun.com> writes:
Keith Thompson wrote On 08/10/06 16:58,:
>jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
>>>Look, if you have
double a,b,c,d;
...
d = a*b+c;

you need more precision? You do:
qfloat a,b,c,d;
...
d = a*b+c;

You see?


Yes, jacob, we understand. Most of us know what operator overloading
is.

C doesn't have it. [...]

Please explain why binary `-' should not be considered
overloaded, given that its operands can be any of the twelve
basic arithmetic types (after promotion), or any additional
non-promotable types the implementation might provide, or any
object pointer type paired with a promoted integer type, or
any "commensurate" object pointer types.

Off-hand, I cannot think of any C operator that is *not*
overloaded. No, not even the comma operator.
I was insufficiently precise.

C doesn't have user-defined operator overloading. Like many
languages, it overloads the predefined operators on its own predefined
types.

If this "qfloat" were part of the C language, presumably there would
be language-defined overloaded operators for it, just as there are for
any other language-defined floating-point type. But it isn't; it's an
extension implemented by a certain compiler that jacob keeps trying to
advertise here.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 10 '06 #67

jacob navia wrote:
Skarmander wrote:
Ancient_Hacker wrote:
case (3): I need an integer that can do exact math with decimal prices
from 0.01 to 999,999,999.99. COBOL and PL/I can do this.
C cannot do this natively, so you'll need libraries. Luckily, C also
makes it possible to implement such libraries efficiently. This is a
good way of highlighting the differences in philosophy.

using a signed 64 bit integer type and working in cents
should be able to handle money quantities up to

92 233 720 368 547 758 US$ and 7 cents.

Enough to accommodate the now considerable total US
debt... :-)

Using native types is quite easy considering the progress in
hardware in the last years. Besides, accounting people that
need those extreme money amounts will not shudder to buy a
latest model PC for a few thousand dollars.

You can work in decimals of cents if you need sensible rounding.

The Cobol folks are requiring 36 decimal digit support in the latest
Cobol standard (up from 18), which in a binary format would imply a 128
bit number. For example, until the Turkish Lira was revalued a few
years ago, the US GDP (to say nothing of the global GDP) represented in
TL would have rather thoroughly overflowed a 64 bit int. Right now
Indonesian Rupiahs are right on the edge, and Colombian Pesos and
Venezuelan Bolivars are not far behind.

Even worse, C does not have extended range intermediates (required by
Cobol), so currency calculations will overflow far short of the nominal
limits implied by a 64 bit int. Just multiply the GDP of the U.S. in
cents by a 99.99 percentage, and you will overflow 64 bit
intermediates.

A bignum package remains the only way to do reliable currency
calculations in C.

Aug 10 '06 #68
ro***********@yahoo.com wrote:
jacob navia wrote:
>>Skarmander wrote:
>>>Ancient_Hacker wrote:

case (3): I need an integer that can do exact math with decimal prices
from 0.01 to 999,999,999.99. COBOL and PL/I can do this.
C cannot do this natively, so you'll need libraries. Luckily, C also
makes it possible to implement such libraries efficiently. This is a
good way of highlighting the differences in philosophy.

using a signed 64 bit integer type and working in cents
should be able to handle money quantities up to

92 233 720 368 547 758 US$ and 7 cents.

Enough to accommodate the now considerable total US
debt... :-)

Using native types is quite easy considering the progress in
hardware in the last years. Besides, accounting people that
need those extreme money amounts will not shudder to buy a
latest model PC for a few thousand dollars.

You can work in decimals of cents if you need sensible rounding.

The Cobol folks are requiring 36 decimal digit support in the latest
Cobol standard (up from 18), which in a binary format would imply a 128
bit number.
The lcc-win32 C compiler provides a 128 bit integer, and it
will be more and more common in the near future as 64 bit
machines become common. A 64 bit machine can do 128 bit
arithmetic quite fast. And that will really finish it since

pow(2,128) = 3.40282366920938463463374607431768211456E38 exactly

If you imagine that the smallest bill of currency X is a 10 million
bill, and its thickness is 0.1mm, then a pile worth 3.4e38 of
that currency would measure around 3.59E11 light years.

The radius of the known universe is only around 1.4E10 light years.

So, we are protected against inflation with 128 bits. :-)

Using lcc-win32:

* a=pow(2,128)
3.4028236692093846346337460743176821145600000000000000000000000000000000E38
* b=a/10000000 // smallest bill is 10 million
3.4028236692093846346337460743176821145600000000000000000000000000000000E31
* c=b/10000 // Thickness of 0.1mm, expressed in meters: divide by 10 000
3.4028236692093846346337460743176821145600000000000000000000000000000000E27
* e=300000*1000*365*24*3600 // meters in a light year
9.4608000000000000000000000000000000000000000000000000000000000000000000E15
* c/e // number of light years necessary for the pile of bills
3.5967610236020047296568430516633710833756130559783527820057500422797226E11
*

I hope I did not make a mistake somewhere :-)
Aug 10 '06 #69
In article <zj*******************@news.indigo.ie> Frederick Gotham <fg*******@SPAM.com> writes:
....
Are you familiar with C++ at all? If so, you'd know that classes and
templates are far more than simple macro and typedef substitution.
I do not know much about C++, but have used other languages that extensively
use overloading of operators (Algol 68 and Ada) and functions (Ada and
Fortran for standard functions). Hence my question:
Put them together and you've got a very powerful tool. A sample usage might
be to write a class called "Int1024" which is a 1024-Bit integer, and to
use operator overloading to make it behave like a built-in integer, so that
we can simply do:
Int1024 i = 56;
i *= 3;
and i * 3 and 3 * i and i * i, I suppose. But doesn't that require the
definition of three multiplication operators?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/
Aug 10 '06 #70
jacob navia <ja***@jacob.remcomp.fr> writes:
[...]
The lcc-win32 C compiler provides a 128 bit integer,
[...]

With what syntax? Is it consistent with the C standard? Do you need
to include some non-standard header to use it?

If it's called "long long", or if it's implemented as an "extended
integer type" (types, actually, if you provide signed and unsigned
variants) as described in C99 6.2.5, and if it's used appropriately in
<stdint.h>, that's terrific. If it's implemented as a non-standard
extension, that's not nearly as interesting. (I think Mathematica
implements very long integers, and it's not C either.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Aug 10 '06 #71
Dik T. Winter posted:
and i * 3 and 3 * i and i * i, I suppose. But doesn't that require the
definition of three multiplication operators?
<OFF TOPIC>

Not really. Firstly you define the *= operator:

class MyInt {
private:

int val[6];

public:

MyInt &operator*=(MyInt const &rhs)
{
int *p = val;
int const *prhs = rhs.val;

*p++ *= *prhs++;
*p++ *= *prhs++;
*p++ *= *prhs++;
*p++ *= *prhs++;
*p++ *= *prhs++;
*p++ *= *prhs++;

return *this;
}
};

Then you simply piggy-back off it:

MyInt operator*(MyInt lhs, MyInt const &rhs)
{
return lhs *= rhs;
}

You can make an "int" convert implicitly to a "MyInt" by supplying a
constructor which takes a sole "int" parameter.

--

Frederick Gotham
Aug 10 '06 #72

jacob navia wrote:
ro***********@yahoo.com wrote:
The lcc-win32 C compiler provides a 128 bit integer, and it
will be more and more common in the near future as 64 bit
machines become common. A 64 bit machine can do 128 bit
arithmetic quite fast. And that will really finish it since

pow(2,128) = 3.40282366920938463463374607431768211456E38 exactly

If you imagine that the smallest bill of currency X is a 10 million
bill, and its thickness is 0.1mm, then a pile worth 3.4e38 of
that currency would measure around 3.59E11 light years.

The radius of the known universe is only around 1.4E10 light years.

So, we are protected against inflation with 128 bits. :-)

As I mentioned, the problem happens first with intermediate results, so
this is the wrong comparison. OTOH, 128 bit intermediate results ought
to be sufficient for most cases, although will require some care at
times.

Aug 11 '06 #73
Frederick Gotham said:

<snip>
Are you familiar with C++ at all?
If I want C++, I know where to find it. Please confine C++ discussions to
newsgroups where they are topical. Thank you.

--
Richard Heathfield
"Usenet is a strange place" - dmr 29/7/1999
http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)
Aug 11 '06 #74
jacob navia <ja***@jacob.remcomp.fr> wrote:
Ian Collins wrote:
Isn't this a case of if the cap doesn't fit, use another one?

You could achieve what you describe in a language that supports operator
overloading on user defined types.

Yes. That is why [spam - ed.] proposes to enhance the language with that
feature. It is needed in MANY situations
No, it isn't.
and it doesn't complexify the language at all.
^^^^^^^^^^
*THWAP* You're not George Pedestrian Bush, so don't imitate him

And yes, it does. Considerably. Most importantly, it makes any code I
read in C-plus-overloading less reliable than code in C.
operator overloading is a MUST for C.
On the contrary, it is greatly to be eschewed.
P.S. flames >/dev/null
P.S. Any future spam >your own toy newsgroup, please.

Richard
Aug 11 '06 #75
Richard Heathfield <in*****@invalid.invalid> writes:
Frederick Gotham said:

<snip>
>Are you familiar with C++ at all?

If I want C++, I know where to find it. Please confine C++ discussions to
newsgroups where they are topical. Thank you.
Yawn. It was in the context of a C discussion.
Aug 11 '06 #76
jacob navia wrote:
The lcc-win32 C compiler provides a 128 bit integer, and it
will be more and more common in the near future as 64 bit
machines become common. A 64 bit machine can do 128 bit
arithmetic quite fast. And that will really finish it since
Because I hate you so very much I wrote

http://math.libtomcrypt.com/ltmpp.zip

Which is a C++ wrapper around my LTM library. It's no more valid C
than your extensions. However, it's also portable C++ and is the
"smart" way to deal with this.

If people really wanted something like

bignum a;
a = 2; a <<= 128;

Then C++ is the way to achieve it. Which is what my C++ code does.
But instead of being tied to some crappy halfbreed compiler like yours,
any conforming C++ compiler will handle my code. It will work in Linux
and BSD just as happily as in Windows, etc. Oh and my C++ class will
support ANY size integer (within reasonable limits) not just compile
time fixed size integers.

This is what separates us real software developers from the script
kiddie hackers like you who just don't get how to work in a team.

Sorry for posting about C++ on clc but Navia really needs to get beaten
back into reality.

Tom

Aug 11 '06 #77


Keith Thompson wrote On 08/10/06 18:54,:
Eric Sosman <Er*********@sun.com> writes:
>>
Off-hand, I cannot think of any C operator that is *not*
overloaded. No, not even the comma operator.


I was insufficiently precise.

C doesn't have user-defined operator overloading. Like many
languages, it overloads the predefined operators on its own predefined
types.
Ah. Okay, but C does support a very limited form of
user-defined overloading for some operators. For example,
the user can apply the () operator to function pointer
types not listed in the Standard.

In other words, "I know what you mean, and you know
what you mean, but we're both having a hard time expressing
it exactly." ;-)
If this "qfloat" were part of the C language, presumably there would
be language-defined overloaded operators for it, just as there are for
any other language-defined floating-point type. But it isn't; it's an
extension implemented by a certain compiler that jacob keeps trying to
advertise here.
From the dribs and drabs of information he's trumpeted
so often, I think Jacob has probably done his "qfloat" in the
form of a Standard-conforming extension, that is, something
that doesn't interfere with a conforming program. I don't
know the specifics of his choices, but if I were going to
add such an extension I'd add a `_qfloat' keyword (in the
implementation's name space), and maybe a <qfloat.h> header
that (among other things) did `typedef _qfloat qfloat;'.

Assuming that qfloat is really _qfloat behind the scenes,
I think the extended compiler is within its rights to define
how this extended type responds to various operators. That
is, "implementor-defined operator overloading" seems legal,
if applied to implementor-defined extended types. The
situation with mixed types is a little murkier because of the
Standard's enumeration of promotion rules; I'm not sure that

_qfloat q = 6;
q *= 7;

... would be a legal way to obtain forty-two, because the
Standard describes all the possible promotions of the int
operand 7, and none of them leads to a _qfloat value. But
there may be a loophole somewhere I haven't spotted.

Hmmm... I wonder what his _qfloat constants look like.
It seems to me that a diagnostic is required for 3.14Q0 or
suchlike, even if the compiler also recognizes it as a
highly-precise poor approximation to pi. Maybe the compiler
needs to reject such constructs until <qfloat.h> provides
the magic `#pragma enable_qfloat'. I suppose such a #pragma
would also offer a way out of the promotion pickle. In fact,
maybe #pragma could take care of the whole business: turning
`qfloat' from an identifier to a keyword (doing away with
the need for `_qfloat'), enabling the recognition of constants,
augmenting the conversion rules, everything.

--
Er*********@sun.com

Aug 11 '06 #78

Eric Sosman wrote:
Keith Thompson wrote On 08/10/06 18:54,:
Eric Sosman <Er*********@sun.com> writes:
>
Off-hand, I cannot think of any C operator that is *not*
overloaded. No, not even the comma operator.

I was insufficiently precise.

C doesn't have user-defined operator overloading. Like many
languages, it overloads the predefined operators on its own predefined
types.

Ah. Okay, but C does support a very limited form of
user-defined overloading for some operators. For example,
the user can apply the () operator to function pointer
types not listed in the Standard.
There's still only one semantic for function call. The
operator is overloaded, but not user-defined overloading,
because the semantics can't be changed. Array indexing is
another overloaded operator; also not user-defined overloading.
In other words, "I know what you mean, and you know
what you mean, but we're both having a hard time expressing
it exactly." ;-)
The key is a user having the ability to choose different
semantics based on operand types. If the language chooses, it
isn't user-defined.

Aug 12 '06 #79
