why still use C?

No, this is no troll posting, and please don't get it wrong, but I am very
curious why people still use C instead of other languages, especially C++.

I heard people say C++ is slower than C, but I can't believe that. In pieces
of the application where speed really matters you can still use "normal"
functions or even static methods, which is basically the same.

In C the simplest things, like constants, aren't present, and each struct and
enum has to be prefixed with "struct" or "enum". I am sure there is much
more.

I don't get why people program in C and fake OOP features (function
pointers in structs...) instead of using C++. Are they simply masochists, or
is there a logical reason?

I don't see what benefit C has over C++.

--
cody

[Freeware, Games and Humor]
www.deutronium.de.vu || www.deutronium.tk
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05
Eric Backus wrote:

(snip)
You also get the best possible approximation to the 'ideal' result if you
use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the inexact
nature of floating-point operations for the majority of calculations.

It seems to me that the *only* thing decimal floating point gets you is
conformance to the conventions used by the financial community for how to
deal with that inexactness. You can do that today with a library, so the
only thing that hardware support gets you is faster computations for those
calculations where this is a requirement.


(snip)

It gets conformance with the results people get on pocket calculators,
or when they do long division by hand on paper.

People learn early that one third is a repeating decimal, and one tenth
is not. In the old days, when floating point hardware was built with
vacuum tubes, it made sense to preserve every bit. (I think even 30
years ago I couldn't imagine floating point hardware built with vacuum
tubes. It is even harder today.)

In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.

-- glen
Nov 13 '05 #601
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dan Pop wrote:
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:
(snip)
PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.

It failed to achieve the goal of its designers: to be the ultimate
programming language. It never achieved the popularity of the languages
it was supposed to render obsolete: FORTRAN and COBOL.


There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.


None of which still applies today. Yet, the usage of PL/I is still
marginal.
It does seem hard to replace an existing language with an improved one.
Not at all. FORTRAN IV had little difficulty replacing FORTRAN II and
F77 had little difficulty replacing F66. C89 replaced K&R C in a couple
of years.
Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.


This is the opposite of what you've been arguing above, i.e. the
difficulty of the improved language to become popular.

Both FORTRAN and COBOL remained popular long after the introduction and
even after the de facto death of the improved PL/I.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 13 '05 #602
Hans-Bernhard Broeker wrote:
....
Wrong citation, I think. Those facts were the rationale for having
FLT_RADIX != 2 in _C90_. C99 didn't change that. ...
I'll take your word for that. I've never had a copy of the C90 standard; it
was too expensive when it was useful to me, and ceased being useful to
me by the time the price dropped. So I'm not well-informed about which
features were introduced in C90 vs C99.
C99 put a much heavier weight on FLT_RADIX==2 architectures by devoting a lot
of appendix and standardized optional parts of the library to IEEE 754
support.
Part of that support included ways of checking to see whether a given
implementation supports IEEE 754.
I.e., if anything, C99 moved *away* from FLT_RADIX != 2, not towards
it. And that's why I'm concerned about it. People looked at the


Adding a feature that makes it possible to check for IEEE 754 support
counts, in my book, as making it clearer that IEEE 754 need not be
supported. Also, there are a few citations in the standard that refer to
IEEE 854, which defines a radix-independent standard for arithmetic of
which IEEE 754 is essentially a special case, for radix=2. I have a copy
of IEEE 854, but not of IEEE 754, so I'm not sure exactly how they
differ.
Nov 13 '05 #603
glen herrmannsfeldt wrote:
Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.


Actually "C with classes", which evolved into C++, wasn't
an attempt to convert anyone. It was a way of getting
object-oriented features on the Unix platform by
exploiting an existing code generation system.
Nov 13 '05 #604
On 05 Dec 2003 07:23:42 GMT, glen herrmannsfeldt <ga*@ugcs.caltech.edu>
wrote:
Maybe more relevant here, C++ was an improved version of C


I'm sure what you meant to say is "C++ was *intended to be* an improved
version of C".
--
#include <standard.disclaimer>
_
Kevin D Quitt USA 91387-4454 96.37% of all statistics are made up
Per the FCA, this address may not be added to any commercial mail list
Nov 13 '05 #605
"Mike Cowlishaw" <mf*****@attglobal.net> wrote in message
news:cl****************@plethora.net...
Eric Backus wrote:
You also get the best possible approximation to the 'ideal' result if
you use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the
inexact nature of floating-point operations for the majority of
calculations.


Indeed, but there is the huge class of 'interesting' calculations
where using base 10 FP will yield exact results where binary
cannot.


My intuition, bad though it may be, says that it is not so huge. Certainly
you can add and subtract many decimal values exactly, though you can also do
this with scaled integers. You can multiply decimal values exactly, at
least until you exhaust the precision of the decimal floating-point format.
As soon as you divide by other decimal values or even by an integer, or take
powers or roots, the results become inexact. So you can add up an itemized
bill exactly, and compute sales tax exactly. But the average daily balance
of a credit card, compounded interest, loan payments, or investment ROIs are
all going to be inexact.

It seems to me that the *only* thing decimal floating point gets you
is conformance to the conventions used by the financial community for
how to deal with that inexactness.

If, by 'financial community' you include everyone who does
calculations on their finances, I agree.


Point taken, though I'm not even sure that all such rounding conventions are
consistent with each other. Do all loan payment calculations use the same
number of guard digits? Do all mutual funds round numbers of shares to the
same number of digits? Does everyone use round-to-nearest, or do some use
round-towards-zero? Can an expression be evaluated to higher precision than
the input variables, and then later rounded back to the input precision, or
must every partial result be rounded to the input precision? Are all
operations and library functions required to be accurate to 1/2 lsb, or is
an error of 1 lsb allowed? And given the inexact nature of floating-point,
answers still vary depending on whether you use "decimal float" vs. "decimal
double", right?

Issues like this make me wonder just how much is really gained by using
decimal floating point.

You can do that today with a
library, so the only thing that hardware support gets you is faster
computations for those calculations where this is a requirement.


It's certainly true that the only thing that hardware support
gives you, over a library of assembler code, is improved
performance. But we are talking orders of magnitude
(remember how slow binary FP emulation was?).


Yes I do. I guess my feeling is that floating-point, in general, is
essential to a wide range of applications, while explicitly decimal
floating-point is essential to a much more limited range of applications. I
have to admit that I'm biased here, because I use floating-point all the
time but I personally don't have an immediate use for decimal
floating-point.

Note that this library could be implemented in assembly on those
platforms that have hardware support, so you don't really even need
native C support for decimal floating point in order to get the
faster computations. Of course, using such a library is inconvenient
compared to having native C support, but that inconvenience is paid
only by those who need this capability. Is that group of people
large enough that inconveniencing them justifies changing the
language?


Of course; C is the 'assembly language' on which the vast
majority of other languages are based and in which they are
implemented. It is the very foundation of software; without
access to the hardware primitive types in C, every other
language has to provide it from scratch, in a platform-
dependent way.


If there were a standard C library interface to decimal floating-point
operations, every other language could use that interface to implement
things. It's only the actual implementation of that C library that would
need to be platform dependent. Remember, there has to be platform
dependency somewhere; my argument is that it can go in this standard
library rather than in the compiler.

I still think it comes down to a question of convenience. A standard C
library interface would certainly work, but is not as convenient for those
that need it. Built-in decimal floating-point is convenient for those that
need it, but unnecessary complication for those that don't, and is extra
effort for compiler writers.

--
Eric Backus
R&D Design Engineer
Agilent Technologies, Inc.
425-356-6010 Tel
Nov 13 '05 #606
Mike Cowlishaw wrote:
Eric Backus wrote:
"Mike Cowlishaw" <mf*****@attglobal.net> wrote in message
news:cl****************@plethora.net...
Dan Pop wrote:

And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic?

This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision.


You also get the best possible approximation to the 'ideal' result if
you use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the
inexact nature of floating-point operations for the majority of
calculations.


Indeed, but there is the huge class of 'interesting' calculations
where using base 10 FP will yield exact results where binary
cannot.


The only time that you get an exact result with decimal and not with binary
is when you are dividing by a power of 5, optionally combined with a power
of 2.

If you have a daily rate and divide by 8 hours to get an hourly rate --
no benefit. If you have a weekly water usage and divide by 7 to get a
daily average -- no benefit. If you evaluate a transcendental, like
sine or take a square root -- no benefit.

When you have an approximation from any of these calculations, then
round to a given precision for presentation, you usually get the same
results or closer for binary, given the same number of bits for
calculation. If you use an IEEE-754 64-bit binary floating point, you
get in excess of 15 digits of precision. Which applications find that
insufficient and why?

Thad
Nov 13 '05 #607
"Eric Backus" <er*********@alum.mit.edu> wrote:
"Mike Cowlishaw" <mf*****@attglobal.net> wrote in message
news:cl****************@plethora.net...
Dan Pop wrote:
> And what is the exact result of dividing 1 penny by 3, when using
> decimal floating point arithmetic? Or is division a prohibited
> operation for this class of applications?
This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.


You also get the best possible approximation to the 'ideal' result if you
use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the inexact
nature of floating-point operations for the majority of calculations.


I have not seen that anyone is saying that it does. It does give
less astonishing results though (as in Law of Least Astonishment).
It seems to me that the *only* thing decimal floating point gets you is
conformance to the conventions used by the financial community for how to
deal with that inexactness. You can do that today with a library, so the
only thing that hardware support gets you is faster computations for those
calculations where this is a requirement.
The same would be true of binary floating point. Some have
raised the point that decimal floating point would raise the bar for
some embedded applications. Let us remove all floating point from the
language and handle it with libraries. If that does not seem too
appetising to you, consider how the people who want decimal float
feel.

I have long felt that the lack of decimal floating point is a
severe lack in many languages.
Note that this library could be implemented in assembly on those platforms
that have hardware support, so you don't really even need native C support
for decimal floating point in order to get the faster computations. Of
                                           ^^^
	Replace "the" with "any".
course, using such a library is inconvenient compared to having native C
support, but that inconvenience is paid only by those who need this
capability. Is that group of people large enough that inconveniencing them
That is quite right. Many of the programs that I write do not
use floating point at all. Let us get rid of the baggage of floating
point. I mean, I could consider others, but a little inconvenience
will, ah, strengthen them.
justifies changing the language?


You will never know for sure until it is available. If I need
decimal floating point in an app and C does not have it, C is out of
the running. I will not make elaborate complaints; I will simply not
consider a language that is inadequate to my requirements.

I understand that some people think that C should be inadequate
in this regard. It might be different if their ox were being gored.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
Nov 13 '05 #608
cody wrote:
Is it feasible to interpose a proxy library whose headers are
conforming C code that's compiled with a C++ compiler and that calls
functions from the C++ library?


when you have a C++ Library you can only call C-Style-Functions from that
Library. You cannot export Classes/Methods from that Library.


He said "call", not "export", and there's a lot more to the C++ Library
than Classes and Methods. For instance, to take a ridiculous (but AFAIK,
legal) case, memcpy() could be implemented as a wrapper for
std::copy<unsigned char>().
Nov 13 '05 #609
On 05 Dec 2003 07:23:42 GMT in comp.lang.c.moderated, glen
herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
Dan Pop wrote:
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:


(snip)
PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.

It failed to achieve the goal of its designers: to be the ultimate
programming language. It never achieved the popularity of the languages
it was supposed to render obsolete: FORTRAN and COBOL.


There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.

It does seem hard to replace an existing language with an improved one.

Not long ago a discussion in another newsgroup related to assembler
programming reminded me of ALP, which is a preprocessor for a certain
assembler that offers free format, structured programming, and some
other features, but it never got popular.

Ratfor, and a few different versions of MORTRAN as Fortran
preprocessors, again with improvements over the original, but never got
very popular.

The original PL/I compiler supplied conversion programs to convert
Fortran, COBOL, and maybe ALGOL to PL/I.

Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.


I think you're just demonstrating programmer inertia -- programmers
want to be able to write the same old code, and *possibly* learn the
best ways to use the new features. IME a lot of "C" programmers were
never very happy using pointers, except to modify function arguments,
preferring Pascal style array indices over C pointers, and Pascal
style I/O processing over C loops with function calls. I suspect a lot
of C/Pascal style code is being written in C++ and Java.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
Nov 14 '05 #610
Kevin D. Quitt wrote:
On 05 Dec 2003 07:23:42 GMT, glen herrmannsfeldt <ga*@ugcs.caltech.edu>
wrote:
Maybe more relevant here, C++ was an improved version of C

I'm sure what you meant to say is "C++ was *intended to be* an improved
version of C".


Yes, that is what I meant to say. I must not have been awake enough at
the time.

-- glen
Nov 14 '05 #611
Eric Backus wrote:
..... snip ...
You also get the best possible approximation to the 'ideal' result
if you use binary floating-point. I'm sure you know this, but it's
worth emphasizing anyway: decimal floating point does *not* get rid
of the inexact nature of floating-point operations for the majority
of calculations.


AFAICT only a binary system has the advantage of 'assumed leading
bit one', allowing replacement by a sign bit. This means a binary
system can always provide better accuracy than any other built on
the same word size.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #612
Dan Pop wrote:

(snip)
(I wrote)
There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.
None of which still applies today. Yet, the usage of PL/I is still
marginal.
In many parts of life, you only get one chance to make it.
It does seem hard to replace an existing language with an improved one.

Not at all. FORTRAN IV had little difficulty replacing FORTRAN II and
F77 had little difficulty replacing F66. C89 replaced K&R C in a couple
of years.
Hmmm, I didn't say that right, though you snipped the part about RATFOR
and MORTRAN. It is hard to replace a language with an improved
language that isn't mostly backward compatible. MORTRAN, and I believe
also RATFOR use free format, semi-colon terminated statements. Other
improvements replace hard to use features in Fortran.

Is FORTRAN IV a new language, or a new version of an old language?

Is Fortran 2000 a new language, or an improved version of Fortran II?

It does seem that Fortran-77 hasn't been replaced yet.
Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.

This is the opposite of what you've been arguing above, i.e. the
difficulty of the improved language to become popular.
I don't know if it is opposite or not. How many C programmers converted
to C++ programmers, how many stayed C only programmers, and how many
started as C++ programmers without learning C first? It doesn't seem
that C++ replaced C, though.

C++ is significantly different, yet allows most C constructs to be used.

Though as someone else pointed out I should have said C++ was intended
to be an improved version of C.
Both FORTRAN and COBOL remained popular long after the introduction and
even after the de facto death of the improved PL/I.


-- glen
Nov 14 '05 #613
In article <cl****************@plethora.net>, Thad Smith
<th**@ionsky.com> writes
The only time that you get an exact result with decimal and not with binary
is when you are dividing by a power of 5, optionally combined with a power
of 2.


True (but the actual proposed form of decimal float has other features
that are useful in some circumstances) but the commercial/financial
world makes extensive use of percentages. Those inherently use division
by powers of five. In addition we have unit pricing that often involves
small fractions of the smallest unit of currency. IOWs we live in a
world where commerce often specifies computations that will be exact if
done in decimal though they will not be in binary.

--
Francis Glassborow ACCU
If you are not using up-to-date virus protection you should not be reading
this. Viruses do not just hurt the infected but the whole community.
Nov 14 '05 #614
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
It gets conformance with the results people get on pocket calculators,
or when they do long division by hand on paper.
Why is such conformance (up to the last digit) important?
People learn early that one third is a repeating decimal, and one tenth
is not.
People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).
In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.


A small percentage that would be better used as an additional binary
floating point execution unit.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #615
Thad Smith wrote:

Mike Cowlishaw wrote:

....
Indeed, but there is the huge class of 'interesting' calculations
where using base 10 FP will yield exact results where binary
cannot.


The only time that you get an exact result with decimal and not with binary
is when you are dividing by a power of 5, optionally combined with a power
of 2.


What about multiplication by integers, addition, and subtraction? Those
are very common operations, especially in the financial world, and
they're all exact (except when they overflow) when performed in decimal
arithmetic, and inexact when performed in binary floating point. I'm
assuming here that the floating point numbers being multiplied, added,
and subtracted are numbers that binary float can't represent exactly,
but which decimal floating point can, such as 1.10. Such numbers are the
rule, not the exception, in financial calculations.
Nov 14 '05 #616
In <cl****************@plethora.net> James Kuyper <ku****@saicmodis.com> writes:
Thad Smith wrote:

Mike Cowlishaw wrote:

...
> Indeed, but there is the huge class of 'interesting' calculations
> where using base 10 FP will yield exact results where binary
> cannot.


The only time that you get an exact result with decimal and not with binary
is when you are dividing by a power of 5, optionally combined with a power
of 2.


What about multiplication by integers, addition, and subtraction? Those
are very common operations, especially in the financial world, and
they're all exact (except when they overflow) when performed in decimal
arithmetic, and inexact when performed in binary floating point. I'm
assuming here that the floating point numbers being multiplied, added,
and subtracted are numbers that binary float can't represent exactly,
but which decimal floating point can, such as 1.10. Such numbers are the
rule, not the exception, in financial calculations.


This has already been rehashed to death. Use an appropriate scaling
factor and all these computations can be performed exactly using binary
floating point, or, even better, long long arithmetic. Before C99, the
usual portable solution was double precision, but 64-bit integer
arithmetic is even more appropriate to this kind of application. Maybe
even more appropriate than decimal floating point arithmetic (depending
on the size of the mantissa and the scaling factor imposed by the
application).

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #617
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dan Pop wrote:

(snip)
(I wrote)
There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.
None of which still applies today. Yet, the usage of PL/I is still
marginal.
In many parts of life, you only get one chance to make it.


If PL/I had any compelling merits, I'm sure it would have caught on,
sooner or later. It was certainly not lack of availability of
implementations that caused its failure.
It does seem hard to replace an existing language with an improved one.

Not at all. FORTRAN IV had little difficulty replacing FORTRAN II and
F77 had little difficulty replacing F66. C89 replaced K&R C in a couple
of years.


Hmmm, I didn't say that right, though you snipped the part about RATFOR
and MORTRAN. It is hard to replace a language with an improved
language that isn't mostly backward compatible.


Yet, C brilliantly succeeded. If the improved language has enough merits
on its own, programmers are always willing to make the effort of learning
it. Another example in the scripting languages category is Perl.

The replaced language(s) will be kept alive only by the legacy
applications that still need maintenance, but their usage for new
applications will be marginal. Non-legacy applications are reimplemented
in the new language, to be easier to maintain and enhance.
It does seem that Fortran-77 hasn't been replaced yet.


What people call F77 today is Fortran-77 with a ton of extensions, mostly
inherited from VAX Fortran. I can't remember ever seeing a (non-trivial)
program written in pure ANSI F77.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
Nov 14 '05 #618
In comp.std.c Brian Inglis <Br**********@systematicsw.ab.ca> wrote:

I think you're just demonstrating programmer inertia -- programmers
want to be able to write the same old code, and *possibly* learn the
best ways to use the new features. IME a lot of "C" programmers were
never very happy using pointers, except to modify function arguments,
preferring Pascal style array indices over C pointers, and Pascal
style I/O processing over C loops with function calls. I suspect a lot
of C/Pascal style code is being written in C++ and Java.


As they say, you can write FORTRAN code in any language.

-Larry Jones

I can do that! It's a free country! I've got my rights! -- Calvin
Nov 14 '05 #619
Brian Inglis wrote:
herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
(snip)

Ratfor, and a few different versions of MORTRAN as Fortran
preprocessors, again with improvements over the original, but
never got very popular.

The original PL/I compiler supplied conversion programs to
convert Fortran, COBOL, and maybe ALGOL to PL/I.

Maybe more relevant here, C++ was an improved version of C,
possibly with the intention of converting C programmers to
C++ programmers, yet C is still reasonably popular.


I think you're just demonstrating programmer inertia --
programmers want to be able to write the same old code, and
*possibly* learn the best ways to use the new features. IME a lot
of "C" programmers were never very happy using pointers, except
to modify function arguments, preferring Pascal style array
indices over C pointers, and Pascal style I/O processing over C
loops with function calls. I suspect a lot of C/Pascal style code
is being written in C++ and Java.


I consider C++ was a marvelous marketing ploy, in that it coaxed C
programmers over by using virtually all their known grammar etc.
The C++ language is probably actually quite competent, but loses
it entirely (IMO) by shoehorning new constructs on top of the
already overly sparse C constructs. In the process it has made
things even more context dependant, which I consider to be bad.

There is no reason for such awkward constructs as "::". The
syntactical structure of a Pascal case statement is much superior
to that of a C (or C++) switch statement. (The existence of fall
through is not part of that structure.)

It (C++) is really a development experiment, implemented with
macros, awaiting a real definition of reserved words, etc. Again,
IMO. Ratfor hid obfuscated Fortran constructs behind a
self-consistent language, while C++ operates in the opposite
direction.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #620
Dan Pop wrote:
In <cl****************@plethora.net> glen herrmannsfeldt writes:
It gets conformance with the results people get on pocket
calculators, or when they do long division by hand on paper.

Why is such conformance (up to the last digit) important?

People learn early that one third is a repeating decimal,
and one tenth is not.

People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).
How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic? Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.
In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.

A small percentage that would be better used as an additional binary
floating point execution unit.


With SMT, processors are going the other way. One floating point unit
used by two processors. The additional logic to generate a decimal ALU
relative to a binary ALU is pretty small. As I understand the
proposals, they store numbers in memory with each three digits stored
into 10 bits, though in registers they are BCD. The memory format is
only slightly less efficient than binary, partly made up since fewer
exponent bits are required for the desired exponent range.

I will guess that it increases the logic between 10.000% and 20.000%.

An additional binary floating point unit requires the logic to schedule
operations between the two, and eliminate conflicts between them.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #621
CBFalconer wrote:

Eric Backus wrote:

.... snip ...

You also get the best possible approximation to the 'ideal' result
if you use binary floating-point. I'm sure you know this, but it's
worth emphasizing anyway: decimal floating point does *not* get rid
of the inexact nature of floating-point operations for the majority
of calculations.


AFAICT only a binary system has the advantage of 'assumed leading
bit one', allowing replacement by a sign bit. This means a binary
system can always provide better accuracy than any other built on
the same word size.

The 'assumed leading bit one' of the mantissa is the low order of the
exponent. The sign bit is the high order bit of the float or double. But
you're right. The assumption gives you an extra virtual bit of
precision. My 32-bit float has 24 bits of mantissa, 8 bits of exponent
and a sign bit. Count 'em. My 64-bit double has 53, 11 and 1. Like a
Baker's dozen (13 instead of 12). :=)
--
Joe Wright http://www.jw-wright.com
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #622
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:

[snip]
How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic? Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.


While we are at it, let us do the same for integer arithmetic.
That way, we need never again see "I think my compiler has a bug. My
program keeps giving wrong answers. It says that 5/3 equals 1."

[snip]

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #623
Dan Pop wrote:
If PL/I had any compelling merits, I'm sure it would have caught on,
sooner or later. It was certainly not lack of availability of
implementations that caused its failure.
PL/I was moderately successful; for example MULTICS was
implemented in PL/I, and a lot of IBM systems programming
was done in PL/I. Just because other languages, designed
using "lessons learned" from previous languages including
PL/I, caught on doesn't make PL/I a failure, any more
than FORTRAN, COBOL, BASIC, LISP, etc. were failures.
Some of them evolved and are still widely used, others
fell into disuse; however, they had a significant impact
for their time and were used to implement many valuable
applications. That's not "failure".
What people call F77 today is Fortran-77 with a ton of extensions, mostly
inherited from VAX Fortran. I can't remember ever seeing a (non-trivial)
program written in pure ANSI F77.


I have seen literally hundreds, many of them written by
scientists and engineers on PDP-11 Unix systems, where
the native Fortran was Fortran-77 with essentially no
extensions. (There was also a port of DEC's F4P that
some used because of faster execution.)
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #624
glen herrmannsfeldt wrote:
..... snip ...
With SMT, processors are going the other way. One floating point
unit used by two processors. The additional logic to generate a
decimal ALU relative to a binary ALU is pretty small. As I
understand the proposals, they store numbers in memory with each
three digits stored into 10 bits, though in registers they are
BCD. The memory format is only slightly less efficient than
binary, partly made up since fewer exponent bits are required for
the desired exponent range.


Think about how you would normalize such a value. That would not
be a decimal format, that would be a base 1000 format. The count
of significant digits would jitter over a range of 3!

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #625
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
Dan Pop wrote:
In <cl****************@plethora.net> glen herrmannsfeldt writes:
It gets conformance with the results people get on pocket
calculators, or when they do long division by hand on paper.
Why is such conformance (up to the last digit) important?

People learn early that one third is a repeating decimal,
and one tenth is not.
People programming computers learn early that this property is neither
true nor relevant when performing floating point computations. Regardless
of the base, floating point numbers are approximations of a subset of the
real numbers (the [-type_MAX, type_MAX] interval).


How about a requirement that only people who have passed a high school
calculus class are allowed to use floating point arithmetic?


No need for that in order to understand floating point arithmetic.
The basic concepts are within the grasp of a junior high school student.
Then issue
licenses to people who have proved that they understand the effects of
rounding and truncation in floating point arithmetic. Only holders of
such a license can purchase and use programs that use floating point.

The user doesn't need any clue, only the programmer. If the programmer
does his job well, the user will get the expected result.

What we see in practice is clueless *programmers* that don't get the
results they expect from their programs. And changing one aspect of
floating point won't make it work as expected by the clueless ones, e.g.
largeval + 1 == largeval will still evaluate to true, baffling the
ignorant. No, decimal floating point is not the right cure for
ignorance.
I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.
Do you *really* think anyone in the industry would care about this
argument? Do you really think it is worth the loss of precision (for the
same bit count)?
In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.

A small percentage that would be better used as an additional binary
floating point execution unit.


With SMT, processors are going the other way. One floating point unit
used by two processors.


Intel's flagship processor in terms of high performance computing has
multiple FP execution units.
The additional logic to generate a decimal ALU
relative to a binary ALU is pretty small. As I understand the
proposals, they store numbers in memory with each three digits stored
into 10 bits, though in registers they are BCD. The memory format is
only slightly less efficient than binary, partly made up since fewer
exponent bits are required for the desired exponent range.

I will guess that it increases the logic between 10.000% and 20.000%.
For *what* redeeming benefits?
An additional binary floating point unit requires the logic to schedule
operations between the two, and eliminate conflicts between them.


Which is paid off by increasing the FP throughput by a factor of 2.
Which explains why most modern architectures have chosen to go this way.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #626
> This has already been rehashed to death. Use an appropriate scaling
factor and all these computations can be performed exactly using
binary floating point, or, even better, long long arithmetic. Before
C99, the
usual portable solution was double precision, but 64-bit integer
arithmetic is even more appropriate to this kind of applications.
Maybe
even more appropriate than decimal floating point arithmetic
(depending
on the size of the mantissa and the scaling factor imposed by the
application).


This is, indeed, how decimal arithmetic has been done in the
past. It is no longer an adequate or acceptable approach.

It's a valid approach for binary FP, too, and 'manual'
scaling of binary calculations is perfectly feasible.
But it is difficult, tedious, and error-prone.

In addition to these problems, the 'scaled binary'
approach for decimal means that one is constantly
carrying out base conversions. With decimal FP
as the foundation, no base conversions occur.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #627
> With SMT, processors are going the other way. One floating point unit
used by two processors. The additional logic to generate a decimal
ALU relative to a binary ALU is pretty small. As I understand the
proposals, they store numbers in memory with each three digits stored
into 10 bits, though in registers they are BCD.
Pretty close ... registers would normally hold the numbers in the
compressed format (so they stay 64-bit, etc., and can
be shared with the BFP unit). These can then be expanded to
BCD within the DFP unit to carry out arithmetic -- this only
costs 2-3 gate delays. [Other approaches are possible.]
I will guess that it increases the logic between 10.000% and 20.000%.


Sounds about right.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #628
On 11 Dec 2003 05:07:54 GMT in comp.lang.c.moderated, Da*****@cern.ch
(Dan Pop) wrote:
In <cl****************@plethora.net> James Kuyper <ku****@saicmodis.com> writes:
Thad Smith wrote:

Mike Cowlishaw wrote:

...
> Indeed, but there is the huge class of 'interesting' calculations
> where using base 10 FP will yield exact results where binary
> cannot.

The only time that you get an exact result with decimal and not with binary
is when you are dividing by a power of 5, optionally combined with a power
of 2.


What about multiplication by integers, addition, and subtraction? Those
are very common operations, especially in the financial world, and
they're all exact (except when they overflow) when performed in decimal
arithmetic, and inexact when performed in binary floating point. I'm
assuming here that the floating point numbers being multiplied, added,
and subtracted are numbers that binary float can't represent exactly,
but which decimal floating point can, such as 1.10. Such numbers are the
rule, not the exception, in financial calculations.


This has already been rehashed to death. Use an appropriate scaling
factor and all these computations can be performed exactly using binary
floating point, or, even better, long long arithmetic. Before C99, the
usual portable solution was double precision, but 64-bit integer
arithmetic is even more appropriate to this kind of applications. Maybe
even more appropriate than decimal floating point arithmetic (depending
on the size of the mantissa and the scaling factor imposed by the
application).


I'm not at all sure that the financial community would be interested
in decimal floating point rather than fixed point arithmetic.
Unless extended precision is available, you can handle less than a
billion to six decimals.
I've done back of envelope calculations to ensure that numbers that
had to add up exactly would not lose precision in intermediate values
used within companies.
A safer range for most transactional work would be billions to six
decimals, requiring sixteen digits, and another three digits for
safety if you're a multinational company or federal government.
If you're converting between currencies with six decimals accuracy,
then you need to add another six decimals onto the end for exactness.
The total range tends not to change by currency; lire, won, yen, yuan
are like cents or pennies in other currencies: the decimal place comes
after them instead of before them; and if you're dealing in larger
aggregates, you often don't need the precision to more than one or a
thousand currency units.
Don't know much about countries' internal economies: would be
interesting to know how big the numbers are required to be for India
and China, whose populations are three to four times bigger than the
EU, Indonesia, US?
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #629
In article <cl****************@plethora.net>, Dan Pop <Da*****@cern.ch>
writes
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu>
writes:
In the days of billions of transistors on a chip, only a small
percentage need be allocated to decimal floating point hardware.


A small percentage that would be better used as an additional binary
floating point execution unit.


I think in real terms, whether we like it or not, this is a pointless
argument.

IBM will produce the CPU with decimal FP
Compiler vendors will produce compilers to support IBM's FP
Other silicon vendors are likely to put decimal FP on some of their CPUs
Other compiler vendors will support them.

I suggest that we stop arguing about whether it is a good idea, as it is
going to happen anyway. What we have to do is make sure that the API is
sensible and in the C and C++ standards, so that the compiler vendors all
implement (at least the core of) FP10 support the same way.

I would suggest that this is one place where it would be sensible to
have, at least the core, the same for C and C++
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #630
On 10 Dec 2003 09:18:09 GMT in comp.lang.c.moderated, Da*****@cern.ch
(Dan Pop) wrote:
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
It gets conformance with the results people get on pocket calculators,
or when they do long division by hand on paper.


Why is such conformance (up to the last digit) important?


Transactional systems have to get the numbers exact: for example, when
companies or banks are settling accounts between themselves, and a
single cheque is written (or EFT generated) to settle each day's or
month's accounts, that cheque must be for exactly the same number as
if each transaction was settled by a separate cheque (or EFT).

Adding machines / calculators are still used to spot check / audit
results from systems and the results are expected to be exactly the
same. The bean counters get very upset if a result from a new system
run on old data is not exactly the same as the result they got
from the old system.

I've been on projects where we worked for a month on a penny
discrepancy -- the final result on the new system was out by a penny
from the old system -- but every intermediate value we checked was
identical, and the final total was being calculated correctly without
any loss of precision (in binary FP) -- the systems project lead gave
the business project lead a penny, and he signed off on the system at
last.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #631
Douglas A. Gwyn wrote:
Dan Pop wrote:
If PL/I had any compelling merits, I'm sure it would have caught on,
sooner or later. It was certainly not lack of availability of
implementations that caused its failure.
PL/I was moderately successful; for example MULTICS was
implemented in PL/I, and a lot of IBM systems programming
was done in PL/I.


PL/I failed to do what IBM wanted, which was to replace Fortran
and COBOL. I believe that there was once a plan not to write
Fortran or COBOL compilers for S/360.

Though around that time it was not obvious that S/360 would be
the success it turned out to be.

I am pretty sure that no-one at the time expected S/360 software
to still run on computers built 40 years later. I have recently
seen reports of testing the S/360 PL/I and Fortran compilers on
OS/390 and z/OS on current machines. They do still run!

PL/I was over ambitious for the machines that existed at the time.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #632
CBFalconer wrote:
glen herrmannsfeldt wrote: (snip)
As I
understand the proposals, they store numbers in memory with each
three digits stored into 10 bits, though in registers they are
BCD. The memory format is only slightly less efficient than
binary, partly made up since fewer exponent bits are required for
the desired exponent range.

Think about how you would normalize such a value. That would not
be a decimal format, that would be a base 1000 format. The count
of significant digits would jitter over a range of 3!


The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

So 50 bits would store 15 decimal digits, and 54 would store 16.
With sign, that would leave nine for an exponent, which would allow
an exponent between -128 and +127. (Some exponent values may be
reserved, though.) 16 decimal digits hold between 49.8 and 53.1 bits.
A decimal 9 bit exponent representing 10**-256 through 10**255 is
equivalent to 10.7 bits of binary exponent.

So, sign+exponent+mantissa it is equivalent to 1+49.8+10.7=61.5 bits,
or 61.5/64.0=96% efficient in the worst case, and 64.8/64.0=100.1% in
the best case. About 98% on average.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #633
In article <cl****************@plethora.net>,
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

So 50 bits would store 15 decimal digits, so 54 would store 16.
With sign, that would leave nine for an exponent, which would allow
an exponent between -128 and +127. (Some exponent values may be
reserved, though.) 16 decimal digits hold between 49.8 and 53.1 bits
A decimal 9 bit exponent representing 10**-256 though 10**255 is
equivalent to 10.7 bits of binary exponent.

So, sign+exponent+mantissa it is equivalent to 1+49.8+10.7=61.5 bits,
or 61.5/64.0=96% efficient in the worst case, and 64.8/64.0=100.1% in
the best case. About 98% on average.


The proposed format for decimal numbers is just very slightly more
efficient: The 64 bit format uses indeed 50 bits for 15 decimal digits.
Then it uses 5 bits which can hold 32 values to encode one decimal
digit, three values (-1, 0, 1) that will become part of the exponent,
and two values encode infinity and NaN.

One bit is used for the sign, 8 more bits for the decimal exponent, so
you have 768 values for the exponent from -384 to +383.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #634
>> With SMT, processors are going the other way. One floating point
unit used by two processors. The additional logic to generate a
decimal ALU relative to a binary ALU is pretty small. As I
understand the proposals, they store numbers in memory with each
three digits stored into 10 bits, though in registers they are
BCD. The memory format is only slightly less efficient than
binary, partly made up since fewer exponent bits are required for
the desired exponent range.


Think about how you would normalize such a value. That would not
be a decimal format, that would be a base 1000 format. The count
of significant digits would jitter over a range of 3!


Well first of all, why would one normalize? [See the FAQ at
http://www2.hursley.ibm.com/decimal/decifaq.html part 4,
for a discussion.]

Also, the base 1000 is used only for storage; all computations,
rounding, etc., are carried out in base 10.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #635
glen herrmannsfeldt wrote:
CBFalconer wrote:
glen herrmannsfeldt wrote:


(snip)
As I understand the proposals, they store numbers in memory
with each three digits stored into 10 bits, though in
registers they are BCD. The memory format is only slightly
less efficient than binary, partly made up since fewer
exponent bits are required for the desired exponent range.

Think about how you would normalize such a value. That would
not be a decimal format, that would be a base 1000 format. The
count of significant digits would jitter over a range of 3!


The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.


If you have ever designed a floating point package you will
realize that normalization takes up the majority of the time. It
needs to be simple, not a major base conversion.

Any reasonable form of decimal FP will be based on some flavor of
bcd, possibly 8421, or excess 3, or 2*421, or even bi-quinary.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #636
ge***@mail.ocis.net (Gene Wirchenko) writes:
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.


While we are at it, let us do the same for integer arithmetic.
That way, we need never again see "I think my compiler has a bug. My
program keeps giving wrong answers. It says that 5/3 equals 1."


Good idea. Many programming languages have adopted this idea,
using a different operator than "/" for truncating integer division.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #637
Fergus Henderson <fj*@cs.mu.oz.au> wrote:
ge***@mail.ocis.net (Gene Wirchenko) writes:
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
I believe that processors should now have float decimal, though I don't
know that I would ever use it. It would almost be worthwhile not to
have to read newsgroup posts from people who don't understand binary
floating point.
While we are at it, let us do the same for integer arithmetic.
That way, we need never again see "I think my compiler has a bug. My
program keeps giving wrong answers. It says that 5/3 equals 1."


Good idea. Many programming languages have adopted this idea,
using a different operator than "/" for truncating integer division.


And here I thought that "/" was the different operator, used
instead of

*

*****

*

Sincerely,

Gene Wirchenko



--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net


Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #638
Christian Bau wrote:
In article <cl****************@plethora.net>,
glen herrmannsfeldt <ga*@ugcs.caltech.edu> wrote:
The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

So 50 bits would store 15 decimal digits, and 54 would store 16.
With sign, that would leave nine for an exponent, which would allow
an exponent between -128 and +127. (Some exponent values may be
reserved, though.) 16 decimal digits hold between 49.8 and 53.1 bits.
A decimal 9 bit exponent representing 10**-256 through 10**255 is
equivalent to 10.7 bits of binary exponent.

So, sign+exponent+mantissa it is equivalent to 1+49.8+10.7=61.5 bits,
or 61.5/64.0=96% efficient in the worst case, and 64.8/64.0=100.1% in
the best case. About 98% on average.

The proposed format for decimal numbers is just very slightly more
efficient: The 64 bit format uses indeed 50 bits for 15 decimal digits.
Then it uses 5 bits which can hold 32 values to encode one decimal
digits, three values (-1, 0, 1) that will become part of the exponent,
and two values encode infinity and NaN.

One bit is used for the sign, 8 more bits for the decimal exponent, so
you have 768 values for the exponent from -384 to +383.


I had remembered the three digits in 10 bits form. The rest was just
a guess as to how it possibly could be done.

OK, so still between 49.8 and 53.1 equivalent bits for the mantissa,
11.3 bits for the exponent, and 1 for the sign, so 62.1 worst case
and 65.4 best case, for 63.75 average case, or 99.6% efficient.

This reminds me of all the work the authors of the original Fortran
compiler did to generate as efficient object code as possible.
They needed to convince people that Fortran could be used in place
of assembly code, without a significant loss of efficiency.

Personally, I would have been happy with ordinary BCD arithmetic,
in floating point form. It seems to have worked well for calculator
users for many years now.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #639
Gene Wirchenko wrote:
Fergus Henderson <fj*@cs.mu.oz.au> wrote:
ge***@mail.ocis.net (Gene Wirchenko) writes: (snip)
While we are at it, let us do the same for integer arithmetic.
That way, we need never again see "I think my compiler has a bug. My
program keeps giving wrong answers. It says that 5/3 equals 1."
Good idea. Many programming languages have adopted this idea,
using a different operator than "/" for truncating integer division.

And here I thought that "/" was the different operator, used
instead of

*

*****

*

The calculator in Microsoft windows has a / on the divide key.

Some time ago, my son was asking how to input fractions into
the calculator, and I showed him the key used to do it.

1 / 3 = and you get one third.

(He already knew that it was the divide key.)

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #640
CBFalconer wrote:
glen herrmannsfeldt wrote:
(snip of base 1000 representation of decimal floating point)
The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

If you have ever designed a floating point package you will
realize that normalization takes up the majority of the time. It
needs to be simple, not a major base conversion. Any reasonable form of decimal FP will be based on some flavor of
bcd, possibly 8421, or excess 3, or 2*421, or even bi-quinary.


I haven't actually checked, but rumors are that it only takes a
few gates to convert between the base 1000 representation and
BCD. It can be done while loading into registers, or even
as part of the ALU itself.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #641
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
Note that this code is not very general; it doesn't work for e.g.

CHECK_EXPR_TYPE(foo, int(*)[5]);


How about this:

#define CHECK_EXPR_TYPE(expr, type) \
((void) sizeof(((int (*)(type)) 0)(expr)))

I prefer writing it without the (void) though, so that I can use
the macro as part of a constant expression that initializes a
structure:

enum type {
T_NULL,
T_INT,
T_FLOAT
};
struct smember {
enum type type;
const char *name;
size_t offset;
};
#define SMEMBER_FLOAT(stype, member) \
{ T_FLOAT, #member, \
offsetof(stype, member) \
+ 0*CHECK_EXPR_TYPE(&((stype *) 0)->member, const float *) }

struct demo {
float girth;
float curvature;
};
const struct smember demo_members[] = {
SMEMBER_FLOAT(struct demo, girth),
SMEMBER_FLOAT(struct demo, curvature),
{ T_NULL }
};

Type checks like this would be easier if the comma operator were
allowed in constant expressions.
Nov 14 '05 #642

On Sun, 21 Dec 2003, Kalle Olavi Niemitalo wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
Note that this code is not very general; it doesn't work for e.g.

CHECK_EXPR_TYPE(foo, int(*)[5]);


How about this:

#define CHECK_EXPR_TYPE(expr, type) \
((void) sizeof(((int (*)(type)) 0)(expr)))


Quite ingenious, I think -- but AFAIK it isn't guaranteed to
be portable. What you're doing, step-by-step, is:

(int (*)(type)) cast something to ptr-to-function-taking-'type'
0 in fact, cast NULL to that ptr-to-func-type
(expr) see if it will accept 'expr' as an argument
sizeof but don't actually evaluate the result
(void) and discard the result of 'sizeof' as well

The problem is that it's not guaranteed possible to "call" a null
pointer, no matter what type it is, no matter what arguments you try
to pass it. So you have undefined behavior *inside* the 'sizeof',
even though it will not be evaluated. As far as I know. Chapter
and verse would be welcome.
Similar problems shoot down this macro:

#define my_broken_offsetof(t, f) ((char *)&((t *)0)->f - (char *)0)

and I'm curious as to what various compilers make of 'sizeof(1/0)',
and whether it's valid C or not. [I think the answer should be
sizeof(int), but I'm not sure it doesn't invoke UB.]

-Arthur

Nov 14 '05 #643
Arthur J. O'Dwyer wrote:
and I'm curious as to what various compilers make of 'sizeof(1/0)',
and whether it's valid C or not.
[I think the answer should be
sizeof(int),
It is.
but I'm not sure it doesn't invoke UB.]


It doesn't.
The division by zero does not occur.

--
pete
Nov 14 '05 #644
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
On Sun, 21 Dec 2003, Kalle Olavi Niemitalo wrote:

"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> writes:
> Note that this code is not very general; it doesn't work for e.g.
>
> CHECK_EXPR_TYPE(foo, int(*)[5]);
How about this:

#define CHECK_EXPR_TYPE(expr, type) \
((void) sizeof(((int (*)(type)) 0)(expr)))


Quite ingenious, I think -- but AFAIK it isn't guaranteed to
be portable. What you're doing, step-by-step, is:

(int (*)(type)) cast something to ptr-to-function-taking-'type'
0 in fact, cast NULL to that ptr-to-func-type
(expr) see if it will accept 'expr' as an argument
sizeof but don't actually evaluate the result
(void) and discard the result of 'sizeof' as well

The problem is that it's not guaranteed possible to "call" a null
pointer, no matter what type it is, no matter what arguments you try
to pass it.


That's OK; this code doesn't call a null pointer. It contains a
subexpression which is a call to a null pointer, but that subexpression
is never evaluated. Undefined behaviour would result only if that
subexpression were evaluated.
Similar problems shoot down this macro:

#define my_broken_offsetof(t, f) ((char *)&((t *)0)->f - (char *)0)


As long as you never invoke that macro, there's no problem.

If however, you invoke it, e.g.

#define my_broken_offsetof(t, f) ((char *)&((t *)0)->f - (char *)0)
struct s { int x; };
int main() {
return my_broken_offsetof(struct s, x);
}

then the subexpression "((t *)0)->f", which in this example becomes
"((struct s *)0)->x" after macro expansion, will get evaluated,
and so the behaviour is undefined.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
Nov 14 '05 #645
glen herrmannsfeldt wrote:
CBFalconer wrote:
glen herrmannsfeldt wrote:


(snip of base 1000 representation of decimal floating point)
The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

If you have ever designed a floating point package you will
realize that normalization takes up the majority of the time. It
needs to be simple, not a major base conversion.

Any reasonable form of decimal FP will be based on some flavor of
bcd, possibly 8421, or excess 3, or 2*421, or even bi-quinary.


I haven't actually checked, but rumor has it that it takes only a
few gates to convert between the base-1000 representation and
BCD. The conversion can be done while loading into registers, or
even as part of the ALU itself.


If you show me (in detail) I will believe you. Not before.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 14 '05 #646
> OK, so still between 49.8 and 53.1 equivalent bits for the mantissa,
11.3 bits for the exponent, and 1 for the sign, so 62.1 worst case
and 65.4 best case, for 63.75 average case, or 99.6% efficient.

This reminds me of all the work the authors of the original Fortran
compiler did to generate as efficient object code as possible.
They needed to convince people that Fortran could be used in place
of assembly code, without a significant loss of efficiency.

Personally, I would have been happy with ordinary BCD arithmetic,
in floating point form. It seems to have worked well for calculator
users for many years now.


There are a few constraints, though. For example, the ISO
COBOL 2002 standard specifies that intermediate calculations
be done using 32 decimal digits of precision. 32 digits of BCD
would fill a 128-bit register exactly (32 x 4 bits), leaving no
room at all for a sign and exponent.

Mike Cowlishaw
Nov 14 '05 #647
> (snip of base 1000 representation of decimal floating point)
The normalization is done in uncompressed (BCD) form, and then
they are converted to the base 1000 form for storage.

If you have ever designed a floating point package you will
realize that normalization takes up the majority of the time. It
needs to be simple, not a major base conversion.

Any reasonable form of decimal FP will be based on some flavor of
bcd, possibly 8421, or excess 3, or 2*421, or even bi-quinary.


I haven't actually checked, but rumor has it that it takes only a
few gates to convert between the base-1000 representation and
BCD. The conversion can be done while loading into registers, or
even as part of the ALU itself.


Correct. For details, see a summary at:

http://www2.hursley.ibm.com/decimal/DPDecimal.html

Mike Cowlishaw
Nov 14 '05 #648
"Kalle Olavi Niemitalo" <ko*@iki.fi> wrote in message
news:87************@Astalo.kon.iki.fi...
Type checks like this would be easier if the comma operator were
allowed in constant expressions.


Yes; that's an aggravating restriction.
The main argument for it seems to be to diagnose
int a[20,30];
but with VLAs that has to be allowed anyway in many contexts,
so it seems pointless to keep the restriction.
Nov 14 '05 #649
"Arthur J. O'Dwyer" <aj*@nospam.andrew.cmu.edu> wrote...
and I'm curious as to what various compilers make of 'sizeof(1/0)',
and whether it's valid C or not. [I think the answer should be
sizeof(int), but I'm not sure it doesn't invoke UB.]


There is no undefined behavior, because the / operator is not
executed (the argument of sizeof is not evaluated).
Nov 14 '05 #650