why still use C?

No, this is no troll posting, and please don't get it wrong, but I am very
curious why people still use C instead of other languages, especially C++.

I have heard people say C++ is slower than C, but I can't believe that. In the
pieces of an application where speed really matters you can still use "normal"
functions or even static methods, which is basically the same thing.

In C the simplest things, like constants, aren't present, and each struct and
enum has to be prefixed with "struct" or "enum". I am sure there is much more.

I don't get why people program in C, faking OOP features (function pointers
in structs...), instead of using C++. Are they simply masochists, or is
there a logical reason?

I feel C must have some benefit over C++.

--
cody

[Freeware, Games and Humor]
www.deutronium.de.vu || www.deutronium.tk
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05
"Eric Backus" <er*********@alum.mit.edu> wrote:
B. Don't forget Moore's law. It'll be at least, say, 5 years before you
could expect to have decimal floating-point in the C standard. By then,
computer hardware will be an order of magnitude faster, so even if decimal
math is the bottleneck in a real application today, it may not be by then.


Alas, Moore's law applies to people's expectations of computers, as
well. If by that time decimal FP is as fast as binary FP now, but slower
than binary FP then, that will still be perceived as too slow.

Richard
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #551
la************@eds.com wrote:
In comp.std.c Morris Dovey <mr*****@iedu.com> wrote:
[1] Assuming the standard were changed to include FP-10, will the
compiler producers consider the new standard non-relevant, given
that it covers hardware not now available anywhere in the world?


The standard could be changed to accommodate FP-10 without requiring it.
I sincerely doubt that the standard would mandate FP-10 without a broad
consensus among both users and implementors that that was the right
thing to do.


Moreover, this does not necessarily have anything to do with the
commonness of decimal FP hardware. The C Standard supported FP in
general at a time when many PCs were sold with no FP processor, and
co-processors, emulators and emulation-code-producing compilers were
common.
The only important thing is that decimal FP is seen as important to
_have_, not necessarily to have at high speed - those computers where
decimal speed is of the essence will already have hardware for it (if
only to accommodate things like Cobol), and the rest of us will catch up
later, as we have with normal FP.

Richard
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #552
To add to Francis Glassborow's excellent reply:
[4] Would the application needs be met by a simpler, more direct
approach? If rounding is a problem, for example, could not logic
be added to current (FP-2,FP-16) FPUs to produce the desired
behaviors?


No, because it is a mathematical impossibility to produce the
desired behavior (other than by using the BFP numbers as
integers, and handling the exponent separately).

See examples at google: decimal arithmetic FAQ

One could, possibly, build a dual unit which can switch the
base for calculation -- but it would be unlikely to be an
optimal design (for example, for BFP it's best if the
coefficient is a binary integer; for decimal it's better to have
a decimal coefficient, e.g., in BCD).

However, it is certainly possible to share the FP register
file, and that is a significant saving.

Mike Cowlishaw
Google: decimal arithmetic
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #553
Thad Smith <Th*******@acm.org> writes:
C has a mechanism to accommodate many innovative hardware features -- a
library function. This can be used without changing the language.


That works fine for most features, but it kinda sucks for numeric types.
The problem is that C doesn't have any support for user-defined operators,
operator overloading, or indeed any user-defined infix notation, so
user-defined numeric types feel decidedly like second-class citizens in
comparison with the built-in ones.

E.g. compare

    decimal_t x, y, z, a, b, c;
    z += x * y * z + a * b * (c - 1);

with

    decimal_t x, y, z, a, b, c;
    decimal_mult_assign(&z, decimal_add(
        decimal_mult(decimal_mult(x, y), z),
        decimal_mult(decimal_mult(a, b),
            decimal_sub(c, int_to_decimal(1)))));

The former is a lot more readable.

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #554
Dan Pop wrote:
That would be nice, too, but what if you need decimal FLOAT?

What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply
do all your computing in pennies and the base used by your floating
point representation no longer matters (only the precision is relevant).


This does not work, because applications are constantly changing.
Items are often costed in sub-penny decimal units; taxes are now
sometimes specified to 4 or 5 decimal places. The scaling factor
you need to apply, to get exact results, is difficult to work out, and
has to be updated often as the data changes. And if the application
designer or maintainer doesn't allow enough fractional digits, all
of a sudden results become Inexact, with no indication.

With decimal floating-point, results remain exact for additions
and multiplications so long as there is sufficient total precision.
And with the IEEE 754R specification, the Inexact flag gets
set, too, if appropriate -- which can be used to detect
unexpected inexactness.

All in all, floating-point decimal is simpler, easier, and safer than
fixed-point. (And you can use it for all your other
calculations, too, so there is no need for any binary <-->
decimal conversions.)
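
[As an illustrative aside: decimal types along these lines were later
specified for C (TR 24732, and optionally in C23), and GCC exposes them
as _Decimal64 on some targets. A minimal sketch, assuming such a
compiler, of the exactness being discussed here:

#include <stdio.h>

int main(void)
{
    _Decimal64 d = 0.1DD;        /* 0.1 is exact in a decimal significand */
    double     b = 0.1;          /* 0.1 is rounded in a binary significand */

    _Decimal64 dsum = d + d + d; /* exactly 0.3 in decimal FP */
    double     bsum = b + b + b; /* accumulates binary rounding error */

    printf("decimal: 0.1+0.1+0.1 == 0.3 ? %s\n",
           dsum == 0.3DD ? "yes" : "no");
    printf("binary:  0.1+0.1+0.1 == 0.3 ? %s\n",
           bsum == 0.3 ? "yes" : "no");
    return 0;
}

The decimal comparison succeeds and the binary one fails, which is the
whole point being argued.]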

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #555
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
I'm all for it, _as_long_as_ it doesn't interfere with existing code.
Scientists will insist that behaviour of existing code that does rely
on FP being done in base 2 doesn't change.
You really think so? Most `scientists' I know are content to have
their FP calculations produce results that look more or less reasonable
to, say, a dozen decimal places. They neither know nor care that the
underlying (double) precision can range from 56 bits on an old Vax
to a wobbling 53-56 bits on an old S/370 to 53 bits on modern IEEE
machines. Indeed, the better `scientists' write code that doesn't
depend on those last few bits being at all reproducible.

Perhaps you're thinking of the constituency who think the `right'
answer is the one that reproduces all the noise digits exactly
from the original test run. Not much help for them.

But otherwise, all the new IEEE 754R really does to existing FP
calculations is add wobbly precision to the last couple of bits,
much like the S/370 hexadecimal FP that generations of programmers
survived, often without noticing the underlying problems for math
library writers.
I.e. there really should
be a set of new types and library support for the guaranteed-decimal
FP. Overlaying the existing ones with base-10 ones isn't a viable
alternative, IMHO.


Certainly the proposal accepted by both the C and C++ committees
calls for a parallel set of FP capabilities, but I confess I'm
a long way from being convinced that such added complexity is
either necessary or desirable. I *am* convinced that fleshing
out IEEE 754R decimal FP arithmetic as an alternative representation
for the existing three FP builtin types is quite viable.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #556
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
It may or may not be relevant, but PL/I has supported FLOAT DECIMAL


It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered an
excellent example of how NOT to do things ;-)

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #557
Dan Pop wrote:

In <cl****************@plethora.net> ge***@mail.ocis.net (Gene Wirchenko) writes:
That would be nice, too, but what if you need decimal FLOAT?


What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply do all
your computing in pennies and the base used by your floating point
representation no longer matters (only the precision is relevant).


I agree with Dan on this point. The primary argument for BCD is really
about specified rounding and precision, not radix. BCD avoids the base
conversion, but this is fairly efficient at the ranges required for
monetary use.

To make it easier on the programmer I think that fixed point with
specified radix point, a la PL/I, would probably be a better answer, if
you really want decimal. For my embedded work, I would like scaled
fixed point binary, as well.

I'm beginning to think that an appropriate implementation for C is a
preprocessor that converts decimal declarations and arithmetic into
suitable library calls. Hmmm... sounds like a C++ class/template.

Thad
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #558
Hans-Bernhard Broeker wrote:

In comp.lang.c.moderated Eric Backus <er*********@alum.mit.edu> wrote:
Is IBM really suggesting that new financial programs be written in C?
Let me answer this with another question: Would you prefer that they
forever keep being written in COBOL, instead?


C++? Java? Ada?
I'm all for it, _as_long_as_ it doesn't interfere with existing code.
While I encourage anyone that wants to extend their own compiler with
decimal arithmetic, I am seeing, in the embedded compiler community, the
increasing resistance to follow the Standard with increasingly complex
compilers to support features not usually required by their customers.
Apparently this resistance applies to other platforms as well. Adding
more special purpose features makes the situation worse, IMO.
If you view C as "portable assembler", then C should support as many
aspects of assembler programming as remotely possible.


There are many aspects of assembly programming that aren't supported in
a direct manner in C: the carry bit and task swapping are two examples. C
provides macros and supports function libraries, which give reasonable
support for most hardware features through customized add-on libraries.

Does the added benefit of compiler-generated inline code for decimal
arithmetic outweigh the costs?

Thad
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #559
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
In comp.lang.c.moderated Eric Backus <er*********@alum.mit.edu> wrote:
Is IBM really suggesting that new financial programs be written in C?
Let me answer this with another question: Would you prefer that they
forever keep being written in COBOL, instead?


OK, good point.

why does a low-level language like C need it?


Because it's the common-ground low-level language many other languages
(more precisely: their runtime libraries) are built on top of.


But you don't need built-in primitives to support other languages - a set of
standard decimal-floating-point library calls would suffice. These could
presumably be implemented in assembler on hardware platforms where there is
significant benefit for doing that. Moreover, in C++, a standard
decimal-floating-point class could wrap those calls to create a class that
looks mostly like a built-in primitive.
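
[A sketch of what such a library interface might look like -- every name
here is hypothetical, made up for illustration, not from any actual
standard or proposal:

#include <stddef.h>

typedef struct { unsigned char enc[8]; } dec64_t;  /* opaque 64-bit encoding */

dec64_t dec_from_string(const char *s);
int     dec_to_string(dec64_t x, char *buf, size_t n);
dec64_t dec_add(dec64_t a, dec64_t b);
dec64_t dec_mul(dec64_t a, dec64_t b);
dec64_t dec_div(dec64_t a, dec64_t b);   /* may raise the Inexact flag */
int     dec_cmp(dec64_t a, dec64_t b);   /* <0, 0, >0 */

A C++ wrapper class could then overload +, *, / and the comparison
operators in terms of these calls, as suggested above.]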

If you view C as "portable assembler", then C should support as many
aspects of assembler programming as remotely possible.


I don't really buy the portable assembler idea. If it were true, why has
there never been support for interrupts, and interrupt handlers, in C?
Almost any assembly language supports these.
--
Eric Backus
R&D Design Engineer
Agilent Technologies, Inc.
425-356-6010 Tel
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #560
In article <cl****************@plethora.net>, Morris Dovey
<mr*****@iedu.com> writes
[1] Assuming the standard were changed to include FP-10, will the
compiler producers consider the new standard non-relevant, given
that it covers hardware not now available anywhere in the world?


Yes... where appropriate. I.e., FP-10 will never become available on many
architectures. However I think it is likely to become common on many
others. Therefore compilers will support it.

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #561
Da*****@cern.ch (Dan Pop) writes:
In <cl****************@plethora.net> ge***@mail.ocis.net (Gene
Wirchenko) writes:
That would be nice, too, but what if you need decimal FLOAT?


What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply do all
your computing in pennies and the base used by your floating point
representation no longer matters (only the precision is relevant).


This doesn't work very well if you want to buy a gallon of gasoline.

(In the US, gasoline is priced in dollars per gallon, with 3 digits
after the decimal point; the third digit is almost always 9. For
example, I recently paid $1.599/gallon. Back when gasoline was a lot
cheaper, I think the idea was that $0.299 looked less expensive than
$0.30; unfortunately, the tradition has continued.)
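
[To make the scaling problem concrete, here is a minimal sketch, with
illustrative numbers, of pricing in "mils" (tenths of a cent), where
the scale factor is 1000 per dollar rather than 100:

#include <stdio.h>

int main(void)
{
    long price_mils = 1599;   /* $1.599 per gallon, in mils */
    long gallons_th = 12500;  /* 12.500 gallons, in thousandths */

    /* total = price * gallons, rounded to the nearest mil */
    long total_mils = (price_mils * gallons_th + 500) / 1000;

    printf("total: $%ld.%03ld\n", total_mils / 1000, total_mils % 1000);
    return 0;
}

Every piece of code that assumed 100 sub-units per dollar has to be
found and changed when prices like these appear.]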

Stock prices are also sometimes expressed in fractional cents.

And compound interest is calculated by a set of rules that I don't
pretend to understand.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://www.sdsc.edu/~kst>
Schroedinger does Shakespeare: "To be *and* not to be"
(Note new e-mail address)
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #562
Da*****@cern.ch (Dan Pop) wrote:
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
It may or may not be relevant, but PL/I has supported FLOAT DECIMAL


It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered an
excellent example of how NOT to do things ;-)


And you wish to lay that all at the feet of decimal floating
point? PL/I also had semicolons to end statements and a number of
other things that C has.

How about we carefully consider any proposed changes instead,
regardless of whether the features have been used before?

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #563
>> It may or may not be relevant, but PL/I has supported FLOAT DECIMAL

It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered
an excellent example of how NOT to do things ;-)


PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #564
Thad Smith wrote:
While I encourage anyone that wants to extend their own compiler with
decimal arithmetic, I am seeing, in the embedded compiler community,
the increasing resistance to follow the Standard with increasingly
complex compilers to support features not usually required by their
customers. Apparently this resistance applies to other platforms as
well. Adding more special purpose features makes the situation
worse, IMO.


I would agree, except for the implication that decimal arithmetic is
a 'special purpose feature'. Decimal arithmetic is far more pervasive
than binary arithmetic. It is the arithmetic used by every numerate
person in the world, and in commercial databases decimal data
columns are 25 times more common than binary.

What's happening here is that hardware is adding a new primitive
data type which is useful for far more applications than binary
floating-point. It's probably more appropriate for 'embedded'
machines, too, as the reduced complexity and simpler conversions
save both time and space. In fact, programming languages
above the assembler level need *only* a decimal type, as
integer (and fixed-point) data types are a subset of the
decimal floating-point type. The latter is where the decimal
754R types differ quite radically from the binary types.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #565
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:
Dan Pop wrote:
That would be nice, too, but what if you need decimal FLOAT?

What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply
do all your computing in pennies and the base used by your floating
point representation no longer matters (only the precision is relevant).


This does not work, because applications are constantly changing.
Items are often costed in sub-penny decimal units; taxes are now
sometimes specified to 4 or 5 decimal places. The scaling factor
you need to apply, to get exact results, is difficult to work out, and
has to be updated often as the data changes. And if the application
designer or maintainer doesn't allow enough fractional digits, all
of a sudden results become Inexact, with no indication.


This is trivially handled by defining the scaling factor as a macro.
The precision of an IEEE 754 double allows enough space for that, without
limiting the range of precise representations too much for the needs of
the usual financial application.
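
[A minimal sketch of that scheme, with illustrative values: amounts are
held in doubles as whole numbers of sub-units, and SCALE is the single
macro to edit when finer resolution is needed.

#include <math.h>
#include <stdio.h>

#define SCALE 10000.0   /* sub-units per dollar: here, 0.0001 dollar */

int main(void)
{
    double price = 31100.0;   /* $3.11, stored as whole sub-units */
    double rate  = 0.07375;   /* a tax rate with 5 decimal places */

    /* keep results integral in sub-units by explicit rounding */
    double tax   = floor(price * rate + 0.5);
    double total = price + tax;

    printf("total: $%.4f\n", total / SCALE);
    return 0;
}

Whether choosing and maintaining SCALE really is trivial is, of course,
exactly what is disputed below.]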

And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited operation
for this class of applications?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #566
Dan Pop wrote:
In <cl****************@plethora.net> ge***@mail.ocis.net (Gene Wirchenko) writes:

That would be nice, too, but what if you need decimal FLOAT?

What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply do all
your computing in pennies and the base used by your floating point
representation no longer matters (only the precision is relevant).


I think I agree 100% with these statements, but not everyone else does,
and not everyone even understands what they mean.

To me, it would almost be worth it for the reduction in posts asking why
some operation comes out 24.299999 when it obviously should be 24.3.
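
[The classic symptom in two lines: 24.3 has no finite binary
representation, so the stored value only looks like 24.3 when rounded
for display.

#include <stdio.h>

int main(void)
{
    double x = 24.3;
    printf("%f\n", x);    /* 24.300000 -- rounded for display */
    printf("%.17f\n", x); /* 24.30000000000000071 -- the stored value */
    return 0;
}
]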

Maybe it would help if a license were required before one was allowed to
use floating point arithmetic, but that isn't likely anytime soon.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #567
In <cl****************@plethora.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
Certainly the proposal accepted by both the C and C++ committees
calls for a parallel set of FP capabilities, but I confess I'm
a long way from being convinced that such added complexity is
either necessary or desirable. I *am* convinced that fleshing
out IEEE 754R decimal FP arithmetic as an alternative representation
for the existing three FP builtin types is quite viable.


Can this be done without changing the current floating point model of the
C standard? IIRC, the IEEE 754R specification goes beyond merely
specifying base 10. Are the other details compatible with the C floating
point model?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #568
In <cl****************@plethora.net> Chris Hills <ch***@phaedsys.org> writes:
In article <cl****************@plethora.net>, Morris Dovey
<mr*****@iedu.com> writes
[1] Assuming the standard were changed to include FP-10, will the
compiler producers consider the new standard non-relevant, given
that it covers hardware not now available anywhere in the world?


Yes... where appropriate. I.e., FP-10 will never become available on many
architectures. However I think it is likely to become common on many
others. Therefore compilers will support it.


There's no question of supporting it in the compiler if the hardware
supports it. The question is about the prospects of a C standard
mandating unconditional support for it, on platforms with no hardware
support for it.

For the time being, the prospects of C99 as an industry standard are
not particularly bright...

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #569
In comp.lang.c.moderated P.J. Plauger <pj*@dinkumware.com> wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
I'm all for it, _as_long_as_ it doesn't interfere with existing code.
Scientists will insist that behaviour of existing code that does rely
on FP being done in base 2 doesn't change.

You really think so?
Yes. Been there, done that. It was a big High-Energy Physics
experiment, with a total project time of more than a decade, and still
counting. Lots of computations go on between the actual raw data
taking and the output of published results. At least 3 generations of
computer hardware were involved over the time the experiment has been
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. Result was that they decided to
re-configure the Intel FPUs to turn of their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.
Most `scientists' I know are content to have their FP calculations
produce results that look more or less reasonable to, say, a dozen
decimal places.


Then maybe you only know `scientists' (including the quotes), but no
actual scientists.

I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.

Actually, if the plan were to just use decimal FP *instead* of the now
common binary FP, there would be nothing for the committee to decide
about, as far as I can see. A platform with FLT_RADIX==10 should be
perfectly compliant right now, as far as I can see. It might enrage
some potential users and steer them away from such a platform, but
that's an economic risk for its vendors to worry about rather than a
concern for the C standardization comittee.

The only thing the current standard(s) doesn't support, and thus
requiring an actual committee decision, would be having more than one
FP base available on the same platform, to be used within the same
program. And once you support more than one, you have to make essentially
3 substantial decisions:

1) Whether to prescribe which of them is used as good old float,
double & surrounding tools, or leave that to the implementors
2) If so, which one to prescribe.
3) Whether to make support for the known-base types optional or mandatory

--
Hans-Bernhard Broeker (br*****@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #570
In message <cl****************@plethora.net>
Fergus Henderson <fj*@cs.mu.oz.au> wrote:
The problem is that C doesn't have any support for user-defined operators,
operator overloading, or indeed any user-defined infix notation, so
user-defined numeric types feel decidedly like second-class citizens in
comparison with.

E.g. compare

    decimal_t x, y, z, a, b, c;
    z += x * y * z + a * b * (c - 1);

with

    decimal_t x, y, z, a, b, c;
    decimal_mult_assign(&z, decimal_add(
        decimal_mult(decimal_mult(x, y), z),
        decimal_mult(decimal_mult(a, b),
            decimal_sub(c, int_to_decimal(1)))));

The former is a lot more readable.


Which strikes me as a very good argument for using C++. I mean, that's
the language with user defined operators, parameterised classes and all
that jazz. If one really wants float<2> and float<10> etc, surely C++
is the way to go.

Create an analogue of Annex F for implementations with FLT_RADIX==10 by all
means, but the added complexity of yet another class of base types? The
complex stuff is hairy enough. I've certainly not been able to tackle
implementing any of that yet, and it's not clear that my users have any
particular interest in it. Any base 10 stuff will be similar.

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd Tel: +44 (0) 1223 503464
182-190 Newmarket Road Fax: +44 (0) 1223 503458
Cambridge, CB5 8HE, United Kingdom WWW: http://www.tematic.com/
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #571
"Dan Pop" <Da*****@cern.ch> wrote in message
news:cl****************@plethora.net...
In <cl****************@plethora.net> "P.J. Plauger" <pj*@dinkumware.com> writes:
Certainly the proposal accepted by both the C and C++ committees
calls for a parallel set of FP capabilities, but I confess I'm
a long way from being convinced that such added complexity is
either necessary or desirable. I *am* convinced that fleshing
out IEEE 754R decimal FP arithmetic as an alternative representation
for the existing three FP builtin types is quite viable.
Can this be done without changing the current floating point model of the
C standard? IIRC, the IEEE 754R specification goes beyond merely
specifying base 10. Are the other details compatible with the C floating
point model?

From what I've seen so far, and I have studied 754R more than casually,
the answer is yes. 754R adds a few functions that are quite useful
for (possibly denormalized) base 10 arithmetic, but none of those
functions are completely silly when applied to (usually normalized)
binary arithmetic either.

Luckily, we put in C89 the possibility that FLT_RADIX could be other
than 2. Mostly that was to accommodate S/370, which is base 16. But
we also had an eye to a few machines that did decimal floating point
in software. So the basic C model is not severely stressed by 754R.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #572
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
In comp.lang.c.moderated P.J. Plauger <pj*@dinkumware.com> wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
I'm all for it, _as_long_as_ it doesn't interfere with existing code.
Scientists will insist that behaviour of existing code that does rely
on FP being done in base 2 doesn't change.
You really think so?


Yes. Been there, done that. It was a big High-Energy Physics
experiment, with a total project time of more than a decade, and still
counting. Lots of computations go on between the actual raw data
taking and the output of published results. At least 3 generations of
computer hardware were involved over the time the experiment has been
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. The result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.


This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.
Most `scientists' I know are content to have their FP calculations
produce results that look more or less reasonable to, say, a dozen
decimal places.


Then maybe you only know `scientists' (including the quotes), but no
actual scientists.


Uh, I spent most of a decade in cyclotron laboratories while earning
an AB and a PhD in nuclear physics. I worked my way through school
writing computer programs for both theoretical calculations and
data reduction. I've spent a good part of the past 40 years writing
and rewriting math functions that only a small fraction of our
clientele cares much about. Many of them consider themselves
`scientists'. I do too, to the extent they favor rational thinking
over dogma and/or officiousness.
I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.
Actually, if the plan were to just use decimal FP *instead* of the now
common binary FP, there would be nothing for the committee to decide
about, as far as I can see. A platform with FLT_RADIX==10 should be
perfectly compliant right now, as far as I can see. It might enrage
some potential users and steer them away from such a platform, but
that's an economic risk for its vendors to worry about rather than a
concern for the C standardization comittee.
You're mostly correct. The one added piece of work would be to add
the handful of functions recommended by IEEE 754R. I believe those
can and should be overhauled to be useful for floating-point of
*any* base.
The only thing the current standard(s) doesn't support, and thus
requiring an actual committee decision, would be having more than one
FP base available on the same platform, to be used within the same
program. And once you support more than one, you have to make essentially
3 substantial decisions:

1) Whether to prescribe which of them is used as good old float,
double & surrounding tools, or leave that to the implementors
2) If so, which one to prescribe.
3) Whether to make support for the known-base types optional or mandatory


Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #573
On Wed, 26 Nov 2003 18:57:09 +0000, Dan Pop wrote:
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:
Dan Pop wrote:
That would be nice, too, but what if you need decimal FLOAT?
What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply
do all your computing in pennies and the base used by your floating
point representation no longer matters (only the precision is relevant).


This does not work, because applications are constantly changing.
Items are often costed in sub-penny decimal units; taxes are now
sometimes specified to 4 or 5 decimal places. The scaling factor
you need to apply, to get exact results, is difficult to work out, and
has to be updated often as the data changes. And if the application
designer or maintainer doesn't allow enough fractional digits, all
of a sudden results become Inexact, with no indication.


This is trivially handled by defining the scaling factor as a macro.
The precision of an IEEE 754 double allows enough space for that, without
limiting the range of precise representations too much for the needs of
the usual financial application.

And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited operation
for this class of applications?


For that particular calculation you'll have to use the new trinary
arithmetic annex being added to support such things.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #574
P.J. Plauger wrote:
... I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.


The main implication would be that applications requiring
the properties of decimal f.p. would not be portable to C
implementations that use binary f.p. as the sole flavor
of f.p. (and if there are any depending on the properties
of binary f.p. they would not be portable in the other
direction). It could be that software emulation of the
other flavor of f.p. would be adequate in many cases, and
requiring support would thus enhance portability. We
need to determine a reliable estimate for the number of
applications in this category.

In past committee discussions, representatives of the
numerical analysis community have assured us that there
are important algorithms in use where the exact behavior
of the lowest-order bits significantly affects the
outcome of f.p. computation. Thus that community would
presumably care whether a binary or a decimal radix was
used, and we should get their feedback also.

I'll also remark that this newsgroup discussion isn't a
very effective way to proceed. In several days of
discussion so far, no point has been made that wasn't
dealt with within a few minutes at the evening session
during the recent Kona C meeting.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #575
In comp.lang.c.moderated P.J. Plauger <pj*@dinkumware.com> wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
[...]
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. The result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.

This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.
Going over your arguments again, I concede your point. Sorry if I
came across as being mule-headed. Germans from the region of the
country I come from _do_ have a tendency to be quite stubborn...
I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...
Well, the new hardware should have offered to silently return infinity
instead ...
I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.

Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary.
You're right of course, C never promised that. It's the hardware
designers who effectively did that. I'm not aware of any architecture
in active use right now that has FLT_RADIX != 2. In fact, there seems
to be hardly anything else but IEEE 754 (with its extensions) out
there these days.

Given that, programmers did start to rely on this de-facto standard,
and would at least somewhat justifiably be opposed to any change if
that caused extra work for them. Since that change would only come to
them with the next generation of hardware, though, users can still
vote against it with their hardware investments, so no harm done.
And it's damned hard to write a program that can determine whether
it is.
Unless it is allowed to just look up FLT_RADIX in <float.h>, that is ;-)
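
[For completeness, that lookup is a one-liner:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_RADIX = %d\n", FLT_RADIX);  /* 2 on IEEE 754 machines */
    return 0;
}
]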

[...]
Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if at
all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats
simultaneously, before we commit to adding all that complexity to C.


I agree. So the issue boils down to this question:

How likely is a single program going to really need both decimal and
binary FP?

If and only if the answer to that is "essentially never", then there's
nothing to do --- let the compiler vendors and platform ABI designers
make the decision on a per-program basis, and thus outside the realm
of standard C.

--
Hans-Bernhard Broeker (br*****@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #576
Dan Pop wrote:
That would be nice, too, but what if you need decimal FLOAT?

What for? Given the intrinsic nature of floating point, if the base is
relevant to your application, then you should review the application.

If the ability of representing $3.11 *exactly* is important, simply
do all your computing in pennies and the base used by your floating
point representation no longer matters (only the precision is relevant).

This does not work, because applications are constantly changing.
Items are often costed in sub-penny decimal units; taxes are now
sometimes specified to 4 or 5 decimal places. The scaling factor
you need to apply, to get exact results, is difficult to work out, and
has to be updated often as the data changes. And if the application
designer or maintainer doesn't allow enough fractional digits, all
of a sudden results become Inexact, with no indication.


This is trivially handled by defining the scaling factor as a macro.
The precision of an IEEE 754 double allows enough space for that,
without limiting the range of precise representations too much for
the needs of the usual financial application.


I suggest you try it; it is not trivial at all.
And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited
operation for this class of applications?


This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #577
In <cl****************@plethora.net> ge***@mail.ocis.net (Gene Wirchenko) writes:
Da*****@cern.ch (Dan Pop) wrote:
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:
It may or may not be relevant, but PL/I has supported FLOAT DECIMAL


It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered an
excellent example of how NOT to do things ;-)


And you wish to lay that all at the feet of decimal floating
point? PL/I also had semicolons to end statements and a number of
other things that C has.

How about we carefully consider any proposed changes instead,
regardless of whether the features have been used before?


How about *you* carefully consider the meaning of the emoticon ending my
previous post?

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #578
P.J. Plauger wrote:
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.


This one should be quite reliable:

#include <stdio.h>
int main(void) {
    double x = 0.1;
    /* In binary, x*8 just scales the exponent (exact), while the
       left-to-right sum rounds at the odd multiples, so they differ;
       in decimal, both are exactly 0.8. */
    if (x*8 != x+x+x+x+x+x+x+x)
        printf("Binary");
    else
        printf("Decimal");
    return 0;
}

(Though it's harder to distinguish if there are more than just these two
possibilities, of course.)

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #579
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:
It may or may not be relevant, but PL/I has supported FLOAT DECIMAL


It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered
an excellent example of how NOT to do things ;-)


PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.


It failed to achieve the goal of its designers: to be the ultimate
programming language. It never achieved the popularity of the languages
it was supposed to render obsolete: FORTRAN and COBOL.

Dan
--
Dan Pop
DESY Zeuthen, RZ group
Email: Da*****@ifh.de
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #580
P.J. Plauger wrote:
Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of
*not* having Standard C support multiple floating-point formats
simultaneously, before we commit to adding all that complexity to C.


This is a very valid, and important, question, which we (IBM)
spent some time considering before proposing the addition of
new types. Here's a slightly edited extract from a note Raymond
Mak wrote describing some of the main points [additional comments
by me are in square brackets]:

... there was a question about using a pragma to switch the
meaning of the floating-point types [between base 10 and base
2].

Yes, in principle it can be done, and on the surface it might
seems it would limit complexity. But after some code
prototyping, and thinking it through more carefully, using pragma
has a number of disadvantages.

The main points are quickly summarized below:

1/ The fact that there are two sets of floating-point types in
itself does not mean the language would become more complex.

The complexity question should be answered from the
perspective of the user's program - that is, do the new data
types add complexity to the user's code? My answer is no,
except for the issues surrounding implicit conversions, which
I will address below. For a program that uses only binary
floating-point [FP] types, or uses only decimal FP types,
the programmer is still working with at most three FP
types. We are not making the program more difficult to
write, understand, or maintain.

2/ Implicit conversions can be handled by simply disallowing them
(except maybe for cases that involve literals). If we do this,
for CUs that have both binary and decimal FP types, the
code is still clean and easy to understand. In a large
source file, with std pragma flipping the meaning of the
types back and forth, the code is actually a field of land
mines for the maintenance programmer, who might not
immediately be aware of the context of the piece of code.

[For example, if a piece of code expected to be doing
'safe' exact decimal calculations were accidentally
switched to use binary, the change could be very hard to
detect, or only cause occasional failure.]

3/ Giving two meanings to one data type hurts type safety. A
program may bind by mistake to the wrong library, causing
runtime errors that are difficult to trace. It is always
preferable to detect errors during compile time. Overloading
the meaning of a data type makes the language more
complicated, not more simple.

4/ A related advantage of using separate types is that it
facilitates the use of source checking/scanning utilities (or
scripts). They can easily detect which FP types are used
in a piece of code with just local processing. If a std
pragma can change the representation of a type, the use of
grep, for example, as an aid to understand and to search
program text would become very difficult.

Comparatively speaking, this is not a technical issue for the
implementation, as it might seem on the surface initially --
i.e., it might seem easier to just tag new meaning to
existing types -- but is an issue about usability for the
programmer. The meaning of a piece of code can become
obscure if we reuse the float/double/long double types.
Also, I feel that we have a chance here to bind the C
behavior directly with [the new] IEEE types, reducing the
number of variations among implementations. This would help
programmer writing portable code, with one source tree
building on multiple platforms. Using a new set of data
types is the cleanest way to achieve this.

To this I would add (at least) a few more problems with
the 'overloading' approach:

5/ There would be no way for a programmer in a 'decimal'
program to invoke routines in existing (binary) libraries.
Every existing routine and library would need to be
rewritten for decimal floating-point, whereas in many
(most?) cases the binary value from an existing library
would have been perfectly adequate.

6/ Similarly, any new routine that was written using decimal FP
would be inaccessible to programmers writing programs which
primarily used binary FP.

7/ There would be no way to modify existing programs (using
binary FP calculation) to cleanly access data in the new
IEEE 754 decimal formats.

8/ There would be no way to have both binary and decimal
FP variables in the same data structure.

9/ Debuggers would have no way of detecting whether a FP number
is decimal or binary and so would be unable to display the
value in a human-readable form. The datatypes need to be
distinguished at the language level and below.

The new decimal types are true primitives, which will exist at
the hardware level. Unlike compound types (such as Complex),
which are built from existing primitives, they are first class
primitives in their own right. As such, they are in the same
category as ints and doubles, and should be treated similarly and
distinctly.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #581
"P.J. Plauger" wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote:
..... snip ...
This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.


And the entire episode reminds me of those who complain(ed) about
Pascal's strong typing and range checking "getting in their way".

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #582
On 27 Nov 2003 07:15:07 GMT in comp.lang.c.moderated, "P.J. Plauger"
<pj*@dinkumware.com> wrote:
"Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
news:cl****************@plethora.net...
In comp.lang.c.moderated P.J. Plauger <pj*@dinkumware.com> wrote:
> "Hans-Bernhard Broeker" <br*****@physik.rwth-aachen.de> wrote in message
> news:cl****************@plethora.net...

> > I'm all for it, _as_long_as_ it doesn't interfere with existing code.
> > Scientists will insist that behaviour of existing code that does rely
> > on FP being done in base 2 doesn't change.

> You really think so?


Yes. Been there, done that. It was a big High-Energy Physics
experiment, with a total project time of more than a decade, and still
counting. Lots of computations go on between the actual raw data
taking and the output of published results. At least 3 generations of
computer hardware were involved over the time the experiment has been
running, and they want to be sure that changing the FPU doesn't affect
the results. Not even minimally. The result was that they decided to
re-configure the Intel FPUs to turn off their "excess" precision.
These guys would be *very* upset if a compiler came out that no longer
supported binary FP.


This is the kind of simplistic thinking I was referring to when I
wrote (and you clipped):

: Perhaps you're thinking of the constituency who think the `right'
: answer is the one that reproduces all the noise digits exactly
: from the original test run. Not much help for them.

I remember when Princeton University upgraded their IBM 7090 to an
IBM 7094, which would trap on a zero divide instead of effectively
dividing by one. After one week of production code bombing regularly,
the user community *demanded* that the divide check trap be left
permanently disabled. They just didn't want to know...

I have trouble stifling the evolution of a programming language
standard to accommodate people who `solve' problems this way.
> Most `scientists' I know are content to have their FP calculations
> produce results that look more or less reasonable to, say, a dozen
> decimal places.


Then maybe you only know `scientists' (including the quotes), but no
actual scientists.


Uh, I spent most of a decade in cyclotron laboratories while earning
an AB and a PhD in nuclear physics. I worked my way through school
writing computer programs for both theoretical calculations and
data reduction. I've spent a good part of the past 40 years writing
and rewriting math functions that only a small fraction of our
clientele cares much about. Many of them consider themselves
`scientists'. I do too, to the extent they favor rational thinking
over dogma and/or officiousness.
I see no problem adding new features to the language. But the day you
start removing features is when you may be causing real trouble for
people out there.


Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.
Actually, if the plan were to just use decimal FP *instead* of the now
common binary FP, there would be nothing for the committee to decide
about, as far as I can see. A platform with FLT_RADIX==10 should be
perfectly compliant right now, as far as I can see. It might enrage
some potential users and steer them away from such a platform, but
that's an economic risk for its vendors to worry about rather than a
concern for the C standardization comittee.


You're mostly correct. The one added piece of work would be to add
the handful of functions recommended by IEEE 754R. I believe those
can and should be overhauled to be useful for floating-point of
*any* base.
The only thing the current standard(s) doesn't support, and thus
requiring an actual committee decision, would be having more than one
FP base available on the same platform, to be used within the same
program. And once you support more than one, you have to make essentially
3 substantial decisions:

1) Whether to prescribe which of them is used as good old float,
double & surrounding tools, or leave that to the implementors
2) If so, which one to prescribe.
3) Whether to make support for the known-base types optional or mandatory


Exactly. That's *so* much more work than simply fleshing out good
support for 754R *if present* that it's really worth avoiding, if
at all possible. I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously,
before we commit to adding all that complexity to C.


Oh good -- "spirit of C" thinking -- and don't the IBM C compilers
currently select between binary and hex FP instructions with options?
Are they proposing to support anything more than a decimal FP
instructions option?
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #583
On 21 Nov 2003 23:48:45 GMT in comp.lang.c.moderated, "Mike Cowlishaw"
<mf*****@attglobal.net> wrote:
Morris Dovey wrote:
Never wanting to miss an opportunity to display my considerable
ignorance, why not leave selection of floating point radix a
compile/link option? I've been thinking back over all the
financial code I've ever written/seen and can't think of any
instance where use of more than a single floating point radix
made any sense at all. Why not simply use the type names we
already have?


This would mean that one could never have both binary and decimal
FP data in the same program/structure.


This is a very good idea -- mixing binary, decimal (and hex) FP
formats in a structure or a set of related modules is a very bad idea
-- unless the radix can be detected at the hardware level.
A pragma which could be used
inside a program would be especially dangerous (consider the
base being switched inside an #include).
Nooooooooo!
The entire existing base
of binary FP functions could not be used from a program which
selected base 10.


Current IBM 390/z compilers and platforms don't appear to support any
mixing of binary and hex FP instructions and functions in the same
code -- Linux supports IEEE functions and requires IEEE instructions
-- native OSes support hex functions and require hex instructions.

I think the FP functions can be adequately handled by the tgmath.h
(generic math function names) additions, with the extra effort to
develop decimal FP functions borne solely on platforms where that is a
compiler option.

Modules either have an explicit radix dependency or not -- those with
one require special coding or compile options; those without either
don't care, or have to take precautions assuming the worst case --
typically the minimum limits documented in float.h.
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #584
Da*****@cern.ch (Dan Pop) wrote:
In <cl****************@plethora.net> ge***@mail.ocis.net (Gene Wirchenko) writes:
Da*****@cern.ch (Dan Pop) wrote:
In <cl****************@plethora.net> glen herrmannsfeldt <ga*@ugcs.caltech.edu> writes:

It may or may not be relevant, but PL/I has supported FLOAT DECIMAL

It is relevant. PL/I was designed to be the ultimate programming
language and it failed big time. Therefore, it should be considered an
excellent example of how NOT to do things ;-)


And you wish to lay that all at the feet of decimal floating
point? PL/I also had semicolons to end statements and a number of
other things that C has.

How about we carefully consider any proposed changes instead,
regardless of whether the features have been used before?


How about *you* carefully consider the meaning of the emoticon ending my
previous post?


Ah, yes, but what about the fact that PL/I did fail big time? I think you
may have been more sincere than you intended.

Sincerely,

Gene Wirchenko

Computerese Irregular Verb Conjugation:
I have preferences.
You have biases.
He/She has prejudices.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #585
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:cl****************@plethora.net...
P.J. Plauger wrote:
... I want to explore thoroughly the implications of *not*
having Standard C support multiple floating-point formats simultaneously, before we commit to adding all that complexity to C.
The main implication would be that applications requiring
the properties of decimal f.p. would not be portable to C
implementations that use binary f.p. as the sole flavor
of f.p. (and if there are any depending on the properties
of binary f.p. they would not be portable in the other
direction). It could be that software emulation of the
other flavor of f.p. would be adequate in many cases, and
requiring support would thus enhance portability. We
need to determine a reliable estimate for the number of
applications in this category.


Agreed.
In past committee discussions, representatives of the
numerical analysis community have assured us that there
are important algorithms in use where the exact behavior
of the lowest-order bits significantly affects the
outcome of f.p. computation. Thus that community would
presumably care whether a binary or a decimal radix was
used, and we should get their feedback also.
Also agreed.
I'll also remark that this newsgroup discussion isn't a
very effective way to proceed. In several days of
discussion so far, no point has been made that wasn't
dealt with within a few minutes at the evening session
during the recent Kona C meeting.


But it has educated a wider audience to some of the issues,
and that's an important part of proceeding.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #586
"Mike Cowlishaw" <mf*****@attglobal.net> writes:
P.J. Plauger wrote:
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.


This one should be quite reliable:

#include <stdio.h>
int main(void) {
    double x = 0.1;
    if (x*8 != x+x+x+x+x+x+x+x)
        printf("Binary");
    else
        printf("Decimal");
    return 0;
}


You are kidding, right?

--
Fergus Henderson <fj*@cs.mu.oz.au> | "I have always known that the pursuit
The University of Melbourne | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh> | -- the last words of T. S. Garp.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #587
In article <cl****************@plethora.net>, Mike Cowlishaw
<mf*****@attglobal.net> writes
2/ Implicit conversions can be handled by simply disallowing them
(except maybe for cases that involve literals). If we do this,
for CUs that have both binary and decimal FP types, the
code is still clean and easy to understand.


I actually think this is important enough that one of my criteria for a
decimal type is that there should be very few allowed implicit
conversions to the existing arithmetic types. Some might like to argue
for conversions to integer types but those are essentially narrowing
conversions and should, IMO, always require an explicit conversion. The
problem with implicit conversions with the current floating point types
is that they would lose exactly the guarantees being offered by the
proposed decimal floats. I would, I think, allow implicit conversion
from an integer type to a decimal floating point type.

I would also require explicit conversions between decimal float types
which were narrowing ones.

Personally I think that the introduction of decimal floating point types
allows us a chance to provide a more robust set of conversion rules than
those we currently have.
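
A sketch of what such rules might look like in code, again borrowing
the hypothetical _Decimal32/_Decimal64 spellings (not in any current
standard; the comments mark what the proposed rules would allow):

int main(void)
{
    _Decimal64 price = 10;                /* integer -> decimal: implicit, fine */
    _Decimal32 small = (_Decimal32)price; /* narrowing decimal: explicit only   */
    double     d     = (double)price;     /* decimal -> binary: explicit only   */
    /* double  bad   = price;      disallowed: no implicit decimal -> binary   */
    (void)small; (void)d;
    return 0;
}
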
--
Francis Glassborow ACCU
If you are not using up-to-date virus protection you should not be reading
this. Viruses do not just hurt the infected but the whole community.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #588
"Mike Cowlishaw" <mf*****@attglobal.net> wrote:
Dan Pop wrote:
And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited
operation for this class of applications?


This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.


Well, there's always rational arithmetic. It probably isn't appropriate
for most applications, but it _is_ guaranteed correct as long as you
stick to ASM&D.
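
A bare-bones sketch in C -- ASM&D being add, subtract, multiply and
divide -- with no overflow or zero-denominator checks:

#include <stdio.h>

typedef struct { long num, den; } rat;   /* invariant: den > 0, fully reduced */

static long gcd(long a, long b)
{
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static rat norm(rat r)
{
    long g = gcd(r.num, r.den);
    if (r.den < 0) { r.num = -r.num; r.den = -r.den; }
    return (rat){ r.num / g, r.den / g };
}

static rat rat_div(rat a, rat b)          /* exact: no rounding, ever */
{
    return norm((rat){ a.num * b.den, a.den * b.num });
}

int main(void)
{
    rat penny = { 1, 100 }, three = { 3, 1 };
    rat r = rat_div(penny, three);        /* Dan Pop's penny divided by 3 */
    printf("%ld/%ld\n", r.num, r.den);    /* prints 1/300 */
    return 0;
}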

Richard
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #589
Gene Wirchenko wrote:
Ah, yes, but what about the fact that PL/I did fail big time?


IBM provided me with some wonderful training so that I could use
PL/I to write a single program (I did; and never used the
language again - wandering onward to write code in PL/S and later
PL/S2 and still later PL/DS, all of which were "sort of"
descendants of PL/I.)

I've given PL/I a lot of thought since those days; and think I
might be able to offer two significant parts of the answer, both
of which have to do with perceptions in the might-have-been user
community:

[1] PL/I was a big, heavy language. It required a substantial
vehicle to carry it. Specifically, it required a more expensive
computer system than prospective users were willing to rent or
(shudder) purchase.

[2] PL/I was generally regarded as a single-source product; and
prospective customers who were unwilling to be locked to IBM
refused to touch it.

I've found it interesting that these two perceptions seem to be a
common thread in nearly all the firms I've worked with since my
IBM days, particularly so since I've always considered that IBM
fielded one of the most effective marketing organizations in the
world. Go figure.
--
Morris Dovey
West Des Moines, Iowa USA
C links at http://www.iedu.com/c
Read my lips: The apple doesn't fall far from the tree.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #590
Hans-Bernhard Broeker wrote:
....
designers who effectively did that. I'm not aware of any architecture
in active use right now that has FLT_RADIX != 2. In fact, there seems


P.J. Plauger has already mentioned the S/370, which had FLT_RADIX==16,
as well as a few less common machines where FLT_RADIX==10, as being the
driving factors behind allowing FLT_RADIX!=2 in C99.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #591
Mike Cowlishaw wrote:
... there was a question about using a pragma to switch the
meaning of the floating-point types [between base 10 and base
2].

Yes, in principle it can be done, and on the surface it might
seems it would limit complexity. But after some code
prototyping, and thinking it through more carefully, using pragma
has a number of disadvantages.

Below quickly summarizes the main points:

1/ The fact that there are two sets of floating-point types in
itself does not mean the language would become more complex.

The complexity question should be answered from the
perspective of the user's program - that is, do the new data
types add complexity to the user's code? My answer is no,
except for the issues surrounding implicit conversions, which
I will address below. For a program that uses only binary
floating-point [FP] types, or uses only decimal FP types,
the programmer is still working with at most three FP
types. We are not making the program more difficult to
write, understand, or maintain.
[wholehearted agreement]
2/ Implicit conversions can be handled by simply disallowing them
(except maybe for cases that involve literals). If we do this,
for CUs that have both binary and decimal FP types, the
code is still clean and easy to understand. In a large
source file, with std pragma flipping the meaning of the
types back and forth, the code is actually a field of land
mines for the maintenance programmer, who might not
be immediately aware of the context of the piece of code.

[For example, if a piece of code expected to be doing
'safe' exact decimal calculations were accidentally
switched to use binary, the change could be very hard to
detect, or only cause occasional failure.]
Not a good idea. This was the kind of rationale exercised by the
Stratus Computer people when they included PL/I-style
variable-length strings into their VOS C implementation. It has
produced nightmare after nightmare for even highly experienced
and sophisticated programming teams. (Stratus' VOS was, at least
originally, written in PL/I.)

I have no difficulty seeing the same kind of confusion and
frustration growing out of this kind of policy.

How does this proposal deal with expressions evaluating both FP-2
and FP-10 variables? What are the proposed rules of promotion for
mixed floating point types and sizes? Are you going to require
explicit promotions and conversions? If the answer to this latter
is "yes", then I'd like to go on record as having warned before
the fact that you're not only creating an ugly monster; but that
you aren't acting in the best interests of the wider community.
3/ Giving two meanings to one data type hurts type safety. A
program may bind by mistake to the wrong library, causing
runtime errors that are difficult to trace. It is always
preferable to detect errors during compile time. Overloading
the meaning of a data type makes the language more
complicated, not more simple.
Unless you can somehow nullify Murphy's Law, mistakes will be
made, regardless. If the proposal obscures the meaning of C code
as you suggest below, then this would seem to suggest that you'd
prefer the errors to be made at coding time - and remain
undetected at compile time. [I don't think this is your actual
intent; but I see it as the natural consequence of implementing
this rationale.]
4/ A related advantage of using separate types is that it
facilitates the use of source checking/scanning utilities (or
scripts). They can easily detect which FP types are used
in a piece of code with just local processing. If a std
pragma can change the representation of a type, the use of
grep, for example, as an aid to understand and to search
program text would become very difficult.
Only if the grep user is too lazy to check the most recent
radix-controlling pragma. The source editors I use most (nedit,
gvim) BTW make this a "no brainer". If you're addressing
/particular/ "source checking/scanning utilities (or scripts)"
then you should probably consider changing/replacing those (dare
I say "brain dead"?) inadequate tools. This is not a /C/ issue.
Comparatively speaking, this is not a technical issue for the
implementation, as it might seem on the surface initially --
i.e., it might seem easier to just tag new meaning to
existing types -- but is an issue about usability for the
programmer. The meaning of a piece of code can become
obscure if we reuse the float/double/long double types.
Also, I feel that we have a chance here to bind the C
behavior directly with [the new] IEEE types, reducing the
number of variations among implementations. This would help
programmer writing portable code, with one source tree
building on multiple platforms. Using a new set of data
types is the cleanest way to achieve this.
If the meaning of the code can be made obscure by changing the
underlying floating point processing, then I'd like to suggest
that we're on a bad path, however good our intentions.
To this I would add (at least) a few more problems with
the 'overloading' approach:

5/ There would be no way for a programmer in a 'decimal'
program to invoke routines in existing (binary) libraries.
Every existing routine and library would need to be
rewritten for decimal floating-point, whereas in many
(most?) cases the binary value from an existing library
would have been perfectly adequate.
Even if the word was "most" (instead of just "many"), what of the
other cases? I'm understanding this 'point' as an invitation to
degrade the language/library.

My clients won't accept "adequate" results "most" of the time.
Whatever we do needs to be a higher quality solution than that.
If this means a whole new library, then let's be honest enough to
face that fact now.
6/ Similarly, any new routine that was written using decimal FP
would be inaccessible to programmers writing programs which
primarily used binary FP.

7/ There would be no way to modify existing programs (using
binary FP calculation) to cleanly access data in the new
IEEE 754 decimal formats.

8/ There would be no way to have both binary and decimal
FP variables in the same data structure.

9/ Debuggers would have no way of detecting whether a FP number
is decimal or binary and so would be unable to display the
value in a human-readable form. The datatypes need to be
distinguished at the language level and below.

The new decimal types are true primitives, which will exist at
the hardware level. Unlike compound types (such as Complex),
which are built from existing primitives, they are first class
primitives in their own right. As such, they are in the same
category as ints and doubles, and should be treated similarly and
distinctly.


To an ordinary C programmer who has no investment (emotional or
otherwise) in FP-10, in the implementation and/or marketing of a
new generation compiler, nor the implementation and/or marketing
of a new generation library, all of this sounds very much like an
argument /against/ a change to the C language standard to
accommodate radix 10 floating point.

In point of fact, as an ordinary C programmer I have no
difficulty appreciating the benefits of FP-10. I don't care what
goes on in the circuitry so long as it produces accurate results
in even (especially?) worst case scenarios. As an ordinary C
programmer I feel that the circuitry is not something I should
need to worry about (at least not in /hosted/ systems) else I'd
still be an assembly language programmer. Finally (and still an
ordinary C programmer), I'll welcome FP-10 benefits when they
produce the accurate results I require in a transparent fashion -
when FP-10 data is properly handled in all arithmetic expressions
(including those containing floating point data in any other
radix representation) as clearly, cleanly, and consistently as
all currently existing arithmetic types. IME, a failure to
provide this kind of type compatibility amounts to a low-quality
arithmetic operator overloading - and I have difficulty seeing
that as an improvement to the language.
--
Morris Dovey
West Des Moines, Iowa USA
C links at http://www.iedu.com/c
Read my lips: The apple doesn't fall far from the tree.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #592
Fergus Henderson wrote:

"Mike Cowlishaw" <mf*****@attglobal.net> writes:
P.J. Plauger wrote:
Who's talking about removing features? Standard C has *never* promised
that floating-point arithmetic will be done in binary. And it's damned
hard to write a program that can determine whether it is.


This one should be quite reliable:

#include <stdio.h>
int main(void) {
    double x = 0.1;
    if (x*8 != x+x+x+x+x+x+x+x)
        printf("Binary");
    else
        printf("Decimal");
    return 0;
}


You are kidding, right?



maybe replace lines 3 and 4 by:

    double x = 1.2;
    if (x/3 != x - 0.8)
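
Folded back into the complete program -- 1.2/3 and 1.2 - 0.8 are both
exactly 0.4 in base 10 but not in base 2, so this is a plausible (if
still heuristic) probe:

#include <stdio.h>
int main(void) {
    double x = 1.2;
    if (x/3 != x - 0.8)
        printf("Binary");
    else
        printf("Decimal");
    return 0;
}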

Wolfgang
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #593
"Mike Cowlishaw" <mf*****@attglobal.net> wrote in message
news:cl****************@plethora.net...
Dan Pop wrote:
And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited
operation for this class of applications?


This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.


You also get the best possible approximation to the 'ideal' result if you
use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the inexact
nature of floating-point operations for the majority of calculations.

It seems to me that the *only* thing decimal floating point gets you is
conformance to the conventions used by the financial community for how to
deal with that inexactness. You can do that today with a library, so the
only thing that hardware support gets you is faster computations for those
calculations where this is a requirement.

Note that this library could be implemented in assembly on those platforms
that have hardware support, so you don't really even need native C support
for decimal floating point in order to get the faster computations. Of
course, using such a library is inconvenient compared to having native C
support, but that inconvenience is paid only by those who need this
capability. Is that group of people large enough that inconveniencing them
justifies changing the language?
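
The core of such a library needs nothing exotic in portable C either;
a toy sketch with a fixed scale of four decimal digits and
round-half-away-from-zero (a real library would track the scale per
value and offer selectable rounding; the printing here handles
positive values only):

#include <stdio.h>

typedef long long dec4;                 /* units of 0.0001 (scale 4) */
#define DEC4_ONE 10000LL

static dec4 dec4_mul(dec4 a, dec4 b)    /* round half away from zero */
{
    long long p = a * b;                /* caution: may overflow     */
    long long half = DEC4_ONE / 2;
    return (p >= 0 ? p + half : p - half) / DEC4_ONE;
}

int main(void)
{
    dec4 price = 1999;                  /* 0.1999 */
    dec4 rate  = 10500;                 /* 1.05   */
    dec4 total = dec4_mul(price, rate); /* exact 0.209895 rounds to 0.2099 */
    printf("%lld.%04lld\n", total / DEC4_ONE, total % DEC4_ONE);
    return 0;
}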

--
Eric Backus
R&D Design Engineer
Agilent Technologies, Inc.
425-356-6010 Tel
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #594
In article <cl****************@plethora.net>, P.J. Plauger
<pj*@dinkumware.com> writes
"Douglas A. Gwyn" <DA****@null.net> wrote in message
news:cl****************@plethora.net...
I'll also remark that this newsgroup discussion isn't a
very effective way to proceed. In several days of
dicussion so far, no point has been made that wasn't
dealt with within a few minutes at the evening session
during the recent Kona C meeting.


But it has educated a wider audience to some of the issues,
and that's an important part of proceeding.


This is true. Not everyone can make the panel meetings. The majority
don't: particularly some of the experts. I recall that out of many who
understood and discussed the problem on our panel only two attended the
Kona meeting.
/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\
/\/\/ ch***@phaedsys.org www.phaedsys.org \/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #595
Eric Backus wrote:
"Mike Cowlishaw" <mf*****@attglobal.net> wrote in message
news:cl****************@plethora.net...
Dan Pop wrote:
And what is the exact result of dividing 1 penny by 3, when using
decimal floating point arithmetic? Or is division a prohibited
operation for this class of applications?
This is exactly why floating-point is the right solution. Whenever
the result is Inexact, you get the best possible approximation to
the 'ideal' result -- that is, full precision. With a fixed scaling
factor you cannot make full use of the precision unless you
know how many digits there will be to the left of the radix
point.


You also get the best possible approximation to the 'ideal' result if
you use binary floating-point. I'm sure you know this, but it's worth
emphasizing anyway: decimal floating point does *not* get rid of the
inexact nature of floating-point operations for the majority of
calculations.


Indeed, but there is the huge class of 'interesting' calculations
where using base 10 FP will yield exact results where binary
cannot.
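
The canonical member of that class (exact in base 10, unrepresentable
in base 2; the output assumes an IEEE 754 binary double):

#include <stdio.h>

int main(void)
{
    double a = 0.10, b = 0.20;
    printf("0.10 + 0.20 = %.17g\n", a + b);    /* 0.30000000000000004 */
    printf("== 0.30? %s\n", a + b == 0.30 ? "yes" : "no");   /* no */
    return 0;
}
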
It seems to me that the *only* thing decimal floating point gets you
is conformance to the conventions used by the financial community for
how to deal with that inexactness.


If, by 'financial community' you include everyone who does
calculations on their finances, I agree.

You can do that today with a
library, so the only thing that hardware support gets you is faster
computations for those calculations where this is a requirement.


It's certainly true that the only thing that hardware support
gives you, over a library of assembler code, is improved
performance. But we are talking orders of magnitude
(remember how slow binary FP emulation was?).
Note that this library could be implemented in assembly on those
platforms that have hardware support, so you don't really even need
native C support for decimal floating point in order to get the
faster computations. Of course, using such a library is inconvenient
compared to having native C support, but that inconvenience is paid
only by those who need this capability. Is that group of people
large enough that inconveniencing them justifies changing the
language?


Of course; C is the 'assembly language' on which the vast
majority of other languages are based and in which they are
implemented. It is the very foundation of software; without
access to the hardware primitive types in C, every other
language has to provide it from scratch, in a platform-
dependent way.

Mike Cowlishaw
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #596
Eric Backus wrote:
....
for decimal floating point in order to get the faster computations. Of
course, using such a library is inconvenient compared to having native C
support, but that inconvenience is payed only by those who need this
capability. Is that group of people large enough that inconveniencing them
justifies changing the language?


Someone has started building hardware solutions to meet that group's
needs. I suspect that means the group is large enough.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #597
In comp.lang.c.moderated James Kuyper <ku****@saicmodis.com> wrote:
Hans-Bernhard Broeker wrote:
...
designers who effectively did that. I'm not aware of any architecture
in active use right now that has FLT_RADIX != 2. In fact, there seems
P.J. Plauger has already mentioned the S/370, which had FLT_RADIX==16,
as well as a few less common machines where FLT_RADIX==10, as being the
driving factors behind allowing FLT_RADIX!=2 in C99.


Wrong citation, I think. Those facts were the rationale for having
FLT_RADIX != 2 in _C90_. C99 didn't change that. Instead it did put
a much heavier weight on FLT_RADIX==2 architectures by devoting a lot
of appendix and standardized optional parts of the library to IEEE 754
support.

I.e., if anything, C99 moved *away* from FLT_RADIX != 2, not towards
it. And that's why I'm concerned about it. People looked at the
current mainstream hardware, and at the impression made by C99, and
are led to think that all floating point is binary these days.
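
Not that the radix is hidden: <float.h> has exposed it since C89, so
the honest version of the 'Binary or Decimal' probe upthread is just:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_RADIX = %d\n", FLT_RADIX);   /* 2, 16, 10, ... */
    return 0;
}
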

Anyone who read the famous paper "What every computer scientist should
know about floating point" should be aware that it's not *always*
foolish to rely on the exact behaviour of the lowermost few digits of
a floating-point number. You have to be very careful working with
them, sure, but dismissing them wholesale as "garbage" doesn't do
justice to numerical analysis.

--
Hans-Bernhard Broeker (br*****@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #598
Dan Pop wrote:
In <cl****************@plethora.net> "Mike Cowlishaw" <mf*****@attglobal.net> writes:
(snip)
PL/I did many things wrong (some of its coercions were/are quite
extraordinary). But given that it was designed in the 1960s it
actually was a significant step forward in many areas. It
certainly wasn't a failure, and it is still widely used today.

It failed to achieve the goal of its designers: to be the ultimate
programming language. It never achieved the popularity of the languages
it was supposed to render obsolete: FORTRAN and COBOL.


There are reasons for that unrelated to the language itself, such as how
long it took to write the compiler, and how slow it ran on machines that
existed at the time.

It does seem hard to replace an existing language with an improved one.

Not long ago a discussion in another newsgroup related to assembler
programming reminded me of ALP, which is a preprocessor for a certain
assembler that offers free format, structured programming, and some
other features, but it never got popular.

Ratfor, and a few different versions of MORTRAN, were Fortran
preprocessors, again with improvements over the original, but they
never got very popular.

The original PL/I compiler supplied conversion programs to convert
Fortran, COBOL, and maybe ALGOL to PL/I.

Maybe more relevant here, C++ was an improved version of C, possibly
with the intention of converting C programmers to C++ programmers, yet C
is still reasonably popular.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #599
Brian Inglis wrote:

(snip)
This is a very good idea -- mixing binary, decimal (and hex) FP
formats in a structure or a set of related modules is a very bad idea
-- unless the radix can be detected at the hardware level.
(snip)
Current IBM 390/z compilers and platforms don't appear to support any
mixing of binary and hex FP instructions and functions in the same
code -- Linux supports IEEE functions and requires IEEE instructions
-- native OSes support hex functions and require hex instructions.


Well, the compilers and libraries supported by Linux use IEEE, so any
programs compiled with them will.

The compilers and libraries traditionally supplied with IBM OS's
supported hex because that is all there used to be.

The OS itself doesn't really support any base. The same floating point
registers are used, which the OS will save and restore as appropriate,
without regard to the base currently stored in them.

-- glen
--
comp.lang.c.moderated - moderation address: cl**@plethora.net
Nov 13 '05 #600
