Bytes | Developer Community

Writing Scalable Software in C++

Hello,

This morning I had an idea for how to write scalable software in general.
Unfortunately it can't be done with Delphi 2007, because Delphi does not
support operator overloading for classes, or record inheritance (records do
have operator overloading).

The idea is to write a generic integer class with derived integer classes
for 8 bit, 16 bit, 32 bit, 64 bit and 64 bit emulated.

Then at runtime the computer program can determine which derived integer
class is needed to perform the necessary calculations.

The necessary integer class is instantiated and assigned to a generic
integer class variable/reference and the generic references/variables are
used to write the actual code that performs the calculations.

Below is a demonstration program; it does not compile completely yet, but
it's getting close.

// TestWritingScalableSoftware.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

class TSkybuckGenericInteger
{
};
class TSkybuckInt32 : public TSkybuckGenericInteger
{
private:
int mInteger;

public:

// constructor with initializer parameter
TSkybuckInt32( int ParaValue );

// add operator overloader
TSkybuckInt32& operator+( const TSkybuckInt32& ParaSkybuckInt32 );

void Display();
};

class TSkybuckInt64 : TSkybuckGenericInteger
{
private:
long long mInteger;

public:
// constructor with initializer parameter
TSkybuckInt64( long long ParaValue );

// add operator overloader
TSkybuckInt64& operator+( const TSkybuckInt64& ParaSkybuckInt64 );

void Display();
};

//
// TSkybuckInt32

// constructor
TSkybuckInt32::TSkybuckInt32( int ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt32& TSkybuckInt32::operator+ ( const TSkybuckInt32& ParaSkybuckInt32 )
{
mInteger = mInteger + ParaSkybuckInt32.mInteger;
return *this;
}

void TSkybuckInt32::Display()
{
printf( "%d \n", mInteger );
}

//
// TSkybuckInt64
//
// constructor
TSkybuckInt64::TSkybuckInt64( long long ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt64& TSkybuckInt64::operator+ ( const TSkybuckInt64& ParaSkybuckInt64 )
{
mInteger = mInteger + ParaSkybuckInt64.mInteger;
return *this;
}

void TSkybuckInt64::Display()
{
printf( "%lld \n", mInteger ); // %lld, not %lu, for a long long
}

int _tmain(int argc, _TCHAR* argv[])
{
long long FileSize;
long long MaxFileSize32bit;

// must write code like this to use the constructor? can't just declare a, b, c?
TSkybuckInt32 A32 = TSkybuckInt32( 30 );
TSkybuckInt32 B32 = TSkybuckInt32( 70 );
TSkybuckInt32 C32 = TSkybuckInt32( 0 );
C32 = A32 + B32;
C32.Display();

TSkybuckInt64 A64 = TSkybuckInt64( 30 );
TSkybuckInt64 B64 = TSkybuckInt64( 70 );
TSkybuckInt64 C64 = TSkybuckInt64( 0 );
C64 = A64 + B64;
C64.Display();

FileSize = 1024; // kilobyte
FileSize = FileSize * 1024; // megabyte
FileSize = FileSize * 1024; // gigabyte
FileSize = FileSize * 1024; // terabyte

MaxFileSize32bit = 1024; // kilobyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // megabyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // gigabyte
MaxFileSize32bit = MaxFileSize32bit * 4; // 4 gigabyte
if (FileSize < MaxFileSize32bit)
{
TSkybuckGenericInteger AGeneric = TSkybuckInt32( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt32( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt32( 0 );
} else
{
TSkybuckGenericInteger AGeneric = TSkybuckInt64( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt64( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt64( 0 );
}

CGeneric = AGeneric + BGeneric;
CGeneric.Display();

while (1)
{
}

return 0;
}

Probably minor compile issues remain:

Error 1 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 '
to 'const TSkybuckGenericInteger &' exists, but is inaccessible
y:\cpp\tests\test writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
152

Error 2 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 '
to 'const TSkybuckGenericInteger &' exists, but is inaccessible
y:\cpp\tests\test writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
153

Error 3 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 '
to 'const TSkybuckGenericInteger &' exists, but is inaccessible
y:\cpp\tests\test writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
154

Error 4 error C2065: 'CGeneric' : undeclared identifier y:\cpp\tests\test
writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
157

Error 5 error C2065: 'AGeneric' : undeclared identifier y:\cpp\tests\test
writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
157

Error 6 error C2065: 'BGeneric' : undeclared identifier y:\cpp\tests\test
writing scalable software generic math\version
0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp
157

How can I solve the remaining issues?

Bye,
Skybuck.
Aug 29 '07
Frederick Williams wrote:
Skybuck Flying wrote:

Hello,

This morning I had an idea ...

I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?

He's a well-known troll in comp.lang.c, looks like he's decided to
expand his business.

Brian
Aug 30 '07 #51
Skybuck Flying wrote:
What I wrote is really simple.

if FileSize < 2^32 then the 32 bit case,
if FileSize >= 2^32 then the 64 bit case.

Of course the compiler doesn't know at compile time, because the files are
opened at runtime.

Not even the programmer knows what the size of the file will be.
Finally you have managed to explain what you want, after all this
absurdly long-winded drivel. Had you said this at the start, someone
would have told you the answer.

When you are reading a file from a disk, any extra time spent by a
32-bit cpu doing 64-bit arithmetic to handle the file offsets will be
totally and utterly irrelevant. Thus if you want to handle such large
files, you use 64-bit integers.

If you really are interested in learning to develop software, you should
first learn what's important.
Aug 30 '07 #52
On Aug 30, 6:02 am, Ron Natalie <r...@spamcop.netwrote:
MooseFET wrote:
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compilation, and allow include files. You don't need to have
two copies of the source code.

Incorrect. C and C++ certainly do not.
You claim the above and then go on to say the below:
You can #define or typedef
something that appears to be a type but they aren't distinct types.
The "typedef" declares a new type. It has a place in the symbol table
of the compiler where it keeps track of types. The compiler knows
that it is equivalent to the simple type it was declared from. This gives
all the ability needed to do what the OP is asking for. If
"typedef" and "#define" didn't exist then he would be right in his
claims. Since they do, he is wrong.
You're just conditionally compiling which type you are using (which
accomplishes what you want). The distinction is an important one.
A typedef isn't separately resolvable from the type it aliases.
I don't think you understand the argument. In Borland Pascal and C
you can do this: (perhaps slightly wrong C.)

#define tfoo int
#include <stuff.inc>
#undef tfoo
#define tfoo long int
#include <stuff.inc>

You can end up with two versions of the same code, one for "int" and
another for "long int". There is nothing useful that the OP is
talking about that can't be done already. He is just making a new way
to do each thing.
Aug 30 '07 #53
On Aug 30, 10:04 am, "Skybuck Flying" <s...@hotmail.comwrote:
"MooseFET" <kensm...@rahul.netwrote in message

news:11*********************@m37g2000prh.googlegro ups.com...
On Aug 30, 2:31 am, "Skybuck Flying" <s...@hotmail.comwrote:
Absolutely nonsense.
If I want I can write a computer program that runs 32 bit when possible
and
64 bit emulated when needed.
My computer program will outperform your "always 64 emulated" program
WITH
EASE.
The only problem is that I have to write each code twice.
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compilation, and allow include files. You don't need to have
two copies of the source code.

{$ifdef bit32}
blablabla
{$endif}
{$ifdef bit64}
blablabla
{$endif}

^^ Still have to write two versions BLEH !
You haven't thought about it. You don't need to make two copies. In
Borland Pascal, the exact same code can be used twice. You don't need
two copies of it.

I wouldn't call that "Scalable Software" :)

It doesn't even scale properly at runtime.
We already told you about virtual methods. They do the scaling at run
time.

Only one can be chosen at compile time.
That is not true.
>
Bye,
Skybuck.

Aug 30 '07 #54
mpm
On Aug 30, 12:58?pm, "Skybuck Flying" <s...@hotmail.comwrote:
"David Brown" <da...@westcontrol.removethisbit.comwrote in message

news:46**********************@news.wineasy.se...


Skybuck Flying wrote:
Which is of course impossible.
The compiler does not know what the program wants at compile time.
Does it want 32 bit or 64 bit ?
Only the program knows at runtime !
Depends on the situation.
Bye,
Skybuck.
If you learned to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.

What I wrote above is pretty clear to me, even my mother could understand
that ! ;)

Bye,
Skybuck.
First of all, if your mother can understand YOU, then she probably
"can" understand everything right down to the subatomic particles!!!
So, I really don't think your proof of clarity really establishes all
that much...

Besides, I have a much better task for you:

Instead of worrying about simple compile time directives, why don't
you think about execution time directives? Say, on-the-fly
recompilation to run (re-configure) programs to operate on AVAILABLE
hardware. (For example, a navy destroyer or aircraft carrier that has
just sustained heavy battle damage).

That way, you're not limiting your brain power to just plain old 32
and/or 64 bits.
You can incorporate a whole host of new variable and other
considerations!!.

Maybe you can write a program to control ship's navigation and run it
on a toaster oven when the going gets tough. Or maybe a universal
iPod-based fire control radar? Or even, get the washer/dryer machine
to play DVDs? Maybe at the same time it's doing the other two jobs
just mentioned. Should be pretty simple.

All you need to do is put in some more conditional
statements, and maybe get the latest DLL and COM libraries, but I
don't know why it wouldn't work.

Bye.
-mpm

Aug 30 '07 #55
Lol,

You losing it ! LOL

Bye,
Skybuck.
Aug 31 '07 #56
Lots of code will have to be 64 bit.

My guess is the performance impact will be noticeable ! ;)

Even if it weren't, that's no reason for sloppy coding :)

Code might be re-used for something else sometime ;)

Bye,
Skybuck.
Aug 31 '07 #57
Using conditionals means one version will be compiled and the other won't
be.

It's that simple.

(You might use a different way of writing the conditionals but the concept
remains the same, if not give an example ;))

Bye,
Skybuck.
Aug 31 '07 #58

"Default User" <de***********@yahoo.comwrote in message
news:5j***********@mid.individual.net...
Frederick Williams wrote:
>Skybuck Flying wrote:
>
Hello,

This morning I had an idea ...

I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?


He's a well-known troll in comp.lang.c, looks like he's decided to
expand his business.
Lol such big statements LOL.

I visited that newsgroup two times.

And I never plan to revisit it again unless I have a really really really
really really strange question.

Bye,
Skybuck ;)
Aug 31 '07 #59
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
The world is not completely 64 bit. The world is not static, it fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.
Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations will hurt performance less than trying to
detect the appropriate choice and then act on that information.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 31 '07 #60
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news2.zwoll1.ov.home.nl...
Lots of code will have to be 64 bit.

My guess is the performance impact will be noticeable ! ;)
Do some actual _measurements_ and find out, rather than guessing. Emulating
64-bit operations even when not required is almost always cheaper in both
programmer and CPU time than trying to detect and handle cases in which not
to use emulation.

"Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet."
- M.A. Jackson

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason - including blind
stupidity."
- W.A. Wulf

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil."
- Donald Knuth
S


Aug 31 '07 #61
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
Absolutely nonsense.

If I want I can write a computer program that runs 32 bit when possible
and 64 bit emulated when needed.
Yes, it's entirely possible to do that.
My computer program will outperform your "always 64 emulated" program WITH
EASE.
No, it won't. Post an actual test program using your method, and I'll
produce a program that does the same thing with my method, and we can
compare runtimes.
The only problem is that I have to write each code twice.

A 32 bit version and a 64 bit version.
No, you can write the code once and compile it twice.
I simply instantiate the necessary object and run it.
First of all, you must pay the cost of determining which type to use. Even
ignoring that, tracking down which code path to execute for that object at
runtime will be slower than simply using 64-bit operations (which may or may
not need to be emulated) all the time.
Absolutely no big deal.

The only undesirable property of this solution is two code bases.
Wrong. You only need one code base, but the poor performance of such a
solution will be a "big deal".
Your lack of programming language knowledge and experience is definitely
showing.
Are you talking to yourself? _Every single person_ commenting on this
thread is telling you you're wrong.

S


Aug 31 '07 #62
In comp.arch Stephen Sprunk <st*****@sprunk.orgwrote:
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
>Absolutely nonsense.

If I want I can write a computer program that runs 32 bit when possible
and 64 bit emulated when needed.

Yes, it's entirely possible to do that.
>My computer program will outperform your "always 64 emulated" program WITH
EASE.

No, it won't. Post an actual test program using your method, and I'll
produce a program that does the same thing with my method, and we can
compare runtimes.
>The only problem is that I have to write each code twice.

A 32 bit version and a 64 bit version.

No, you can write the code once and compile it twice.
>I simply instantiate the necessary object and run it.

First of all, you must pay the cost of determining which type to use. Even
ignoring that, tracking down which code path to execute for that object at
runtime will be slower than simply using 64-bit operations (which may or may
not need to be emulated) all the time.
>Absolutely no big deal.

The only undesirable property of this solution is two code bases.

Wrong. You only need one code base, but the poor performance of such a
solution will be a "big deal".
>Your lack of programming language knowledge and experience is definitely
showing.

Are you talking to yourself? _Every single person_ commenting on this
thread is telling you you're wrong.
And at least one person (me) put him in a kill file after reading the first
3 of his posts.
Aug 31 '07 #63

"Stephen Sprunk" <st*****@sprunk.orgwrote in message
news:46***********************@free.teranews.com.. .
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
>The world is not completely 64 bit. The world is not static, it
fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations will hurt performance less than trying to
detect the appropriate choice and then act on that information.
For addition and subtraction probably.

For multiplication and division some performance could be lost for 32-bit
values, but it would still be faster than emulating it.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear ! ;) :)

Bye,
Skybuck.
Aug 31 '07 #64
Actually what I wrote only applies to doing the check each time.

If the check only has to be done once for large parts of the code, there
will definitely be performance gains achievable ! ;)

(I already wrote that elsewhere but ok ;))

Bye,
Skybuck.
Aug 31 '07 #65
I don't agree with that.

Write large parts of code, do the check once.

Voilà; the only problem is that you will have two code paths.

Bye,
Skybuck.
Aug 31 '07 #66
Now I see what those persons were bitching about.

It's called the instruction prefix which is part of the instruction
encoding.

Which if I interpret the manual correctly means this instruction prefix is
added before each instruction.

That means it's hard coded into the instruction and this means only one mode
can be selected.

So it's not an efficient way to switch modes during runtime, since the
instruction prefixes would need to be changed, which requires changes in
many memory locations.

Finally it might be interesting for compilers that want to generate
multiple code paths; they might just need a few bit switches while
generating the instructions... but why bother... why not use some other
method to specify the operand size which might be more reliable.

From the manual:
"
The operand-size override prefix allows a program to switch between 16- and
32-bit operand
sizes. Either size can be the default; use of the prefix selects the
non-default size. Use of 66H
followed by 0FH is treated as a mandatory prefix by some SSE/SSE2/SSE3
instructions. Other
use of the 66H prefix with MMX/SSE/SSE2/SSE3 instructions is reserved; such
use may cause
unpredictable behavior.
The address-size override prefix (67H) allows programs to switch between 16-
and 32-bit
addressing. Either size can be the default; the prefix selects the
non-default size. Using this
prefix and/or other undefined opcodes when operands for the instruction do
not reside in
memory is reserved; such use may cause unpredictable behavior.
"

So far this was the 16/32 bit mode some people were bitching about ;)

Now I go look for a 64 bit mode switch.

Bye,
Skybuck.
Aug 31 '07 #67
However somebody else is still bitching about something else I think:

"
Intel introduced a bit
in the segment descriptor of the executable code, equivalent to your
BitMode variable, to specify whether the default code size is 16 bits
or 32 bits.
"

Where can I find more information about this ? (Me go search in manuals some
more ! ;))

Bye,
Skybuck.
Aug 31 '07 #68
Hi!

Skybuck Flying wrote:
I wouldn't call that "Scalable Software" :)

It doesn't even scale properly at runtime.

Only one can be chosen at compile time.
So if you use the virtual function approach you can decide at runtime
which of the following you want to support:

- 16bit integer
- 32bit integer
- 32bit integer, emulated
- 64bit integer
- 64bit integer, emulated

combined with any of:

- i286 optimized
- i386 optimized
- i486 optimized
- i586 optimized
- i686 optimized
- 64bit integer instruction set

combined with any of:

- various flavours with different cache sizes

When you got all code paths in one single "über"-library I'll blame you:
it's too big for the 64kB RAM of my i286. On the other hand I'll blame
you if it won't run on:

- 128bit integer instruction set (yet to be invented)

And then we do the same for floating point numbers which come in 32bit,
64bit, 80bit, 128bit already on current CPU. Which can be mixed with or
without the use of ix87, MMX, MMXext, SSE 1 to 3, special addon cards
for math calculation (e.g. physics board for hardcore gamers).

And then I want to use your library on SPARC, PPC, ARM, Alpha, ...

My statements:

- yes, you can do a single check at the start of your program to choose
whether to use 32bit native, 64bit native, or 64bit emulated
- no, you do not have to code three times. you can use the compiler to
generate the code for you by the use of templates
- yes, the code would work without virtual functions
- no, I won't use your library because I only want to pay for what I
need. And I don't need this sort of code bloat.
- no, this approach can not scale to infinity without recompilation of
your library (e.g. using a compiler which can generate 128bit instructions)
- yes, you could use virtual functions and have "plugins" for the
various architectures. the main library would detect the right "plugin",
load it, and use it
- yes, this would be essentially the same as providing distinct
libraries the first place
- yes, for theory it is nice to think about the polymorphic behaviour

Frank
Aug 31 '07 #69
No,

That newsgroup is full of retards, that's all LOL.

Bye,
Skybuck.
Aug 31 '07 #70
Skybuck Flying wrote:
No,

That newsgroup is full of retards, that's all LOL.
Well, this particular one (clc++) isn't.
>
Bye,
Skybuck.
I wish you stuck to that and flew off in the wind.
Aug 31 '07 #71
On Aug 31, 2:33 am, "Skybuck Flying" <s...@hotmail.comwrote:
"Stephen Sprunk" <step...@sprunk.orgwrote in message

news:46***********************@free.teranews.com.. .
"Skybuck Flying" <s...@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
The world is not completely 64 bit. The world is not static, it
fluctuates.
Sometimes the program only needs 32 bits, sometimes 64 bits.
Always choosing 64 bits would hurt performance LOL.
Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations s will hurts performance less than trying to
detect the appropriate choice and then act on that information.

For addition and subtraction probably.

For multiplication and division some performance could be lost for 32-bit
values, but it would still be faster than emulating it.
Multiply doesn't take very long to do. The instructions for it can be
easily inlined. For a divide, finding 2^N/X and then multiplying is
sometimes quicker. There are tricks for doing 2^N/X quickly.

A 256-bit divided by a 64-bit yielding a 64-bit result can be done this way
in about 1/3rd the time of the actual divide on an 8-bit machine.

Whatever the case may be.

The point is that the detection is the overhead; if the CPU can do the
detection, that overhead might disappear ! ;) :)
It wouldn't go away. You are adding parts and logic and choices to be
made to every instruction in the CPU. This uses up transistors and
time. Doing stuff takes time.
>
Bye,
Skybuck.

Aug 31 '07 #72
On Aug 30, 9:32 pm, "Stephen Sprunk" <step...@sprunk.orgwrote:
"Skybuck Flying" <s...@hotmail.comwrote in message

news:fb**********@news2.zwoll1.ov.home.nl...
Lots of code will have to be 64 bit.
My guess is the performance impact will be noticeable ! ;)

Do some actual _measurements_ and find out, rather than guessing. Emulating
64-bit operations even when not required is almost always cheaper in both
programmer and CPU time than trying to detect and handle cases in which not
to use emulation.

"Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet."
- M.A. Jackson

"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason - including blind
stupidity."
- W.A. Wulf

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil."
- Donald Knuth
I forgot who said these:

No amount of optimizing the implementation of the slow algorithm will
turn it into the fast one.

Optimize after there are zero bugs. There never are zero bugs.

Aug 31 '07 #73
On 30 Aug., 13:08, "Skybuck Flying" <s...@hotmail.comwrote:
Yes you missed the other threads, I shall explain again lol:

I want:

1. One code base which adapts at runtime:

2. Uses 32 bit instructions when possible.

3. Switches to 64 bit instructions when necessary (true or emulated).

4. No extra overhead.

As far as I can tell the cpu's for pc's are inflexible:

32 bit data types require 32 bit instructions.

64 bit data types require 64 bit instructions or alternatively:

64 bit data types require multiple 32 bit instructions.

This means it's necessary to code 3 code paths !

I do not want to write code 3 times !

I want to express my formulas and algorithms just one time !

I want the program/code base to adapt to the optimal instruction sequences
without actually having to code those three times !

I suggested a "feature extension" to processors: "Flexible Instruction Set".

The idea is to use a BitMode variable to specify to the cpu how it is
supposed to interpret the coded instructions sequences.

So that I can write simple one instruction sequence and only need to change
a single variable.

Many people started bitching that the current cpu's can already do this for
16/32/64.

I have seen no proof whatsoever.

Can you provide proof ?

Bye,
Skybuck.
OK, I guess what you want is in a way similar to the D flag in Intel
cpus: depending on whether it is set or not, the string assembly
instructions behave differently:
LODSB = move byte from (esi) to al, then increment/decrement esi
depending on D
STOSB = move byte from al to (edi), then increment/decrement edi
depending on D
By this, the same code
1: LODSB
STOSB
LOOP 1b
will either copy a string from the location pointed to by esi to edi
forward (pointers point to the beginning of the string) or backward
(pointers point to the end).
Similarly, the FPU has a rounding-mode setting that allows one to choose,
for all subsequent FPU instructions, between
- round to 0
- round down
- round up

Both these flags tend to cause more problems than advantages in many
situations. Some code in some library may either
a) hope that the flags are set in a way to make its code work
correctly
or
b) save the flag somewhere on entry, set it to the desired value, and
restore the flag on exit.
In case a) you had better keep your hands off the flags (this is the usual
best practice for the direction flag).
In case b) you lose performance (this is why most compilers don't use
the otherwise efficient instruction to convert floats to ints - to work
properly they would have to implement a lot of overhead just to not
interfere with the user's current rounding configuration).

I'm afraid that your suggestion might suffer from similar problems.
Of course you may say that you want to add new instructions,
e.g. ADDG (add generic) next to ADDB (add byte), ADDW (add word), ADDL
(add long), where ADDG is in effect equivalent to one of the others
depending on some control flags.
Firstly, such code would have to produce different microcode depending
on the flag setting, and this might turn out somewhat problematic - well,
not too much, you just have to dump the complete instruction cache when
the flags are changed.
The next problem is storage. When running through an array of generic
(but consistently so) integers, your step size must vary depending on the
flag setting.
If the code is really generic, you cannot know the size of your data
in the higher-level language (e.g. C++-like).
Note that I'm not talking about classes - a generic integer would be a
primitive type, since it is implemented in the cpu!
But still sizeof(generic int) could not be a compile-time constant.

IMHO, the resulting code and programming technique would become
too awful for me to like it.
Aug 31 '07 #74
I have investigated the segment selectors and descriptors.

Currently there seems to be no way to specify a default operand size of 64
bits ?!?!?

However some bits are reserved for future use and I think these might be
used to implement a default mode for operand size 64 bits.

CS.L = 1 and CS.D = 1 are reserved for future use.

I think this combination of bits could be used to implement a default
operand size of 64 bits !

However I think this would be related to 64-bit compatibility mode.

I am not sure how useful that would be.

It might be insanely useful or not at all ?!? I don't know ;)

Here are some texts I found from the Intel Manual Volume 3A:

"
4.2.1 Code Segment Descriptor in 64-bit Mode

Code segments continue to exist in 64-bit mode even though, for address
calculations, the
segment base is treated as zero. Some code-segment (CS) descriptor content
(the base address
and limit fields) is ignored; the remaining fields function normally (except
for the readable bit
in the type field).

Code segment descriptors and selectors are needed in IA-32e mode to
establish the processor's
operating mode and execution privilege-level. The usage is as follows:

.. IA-32e mode uses a previously unused bit in the CS descriptor. Bit 53 is
defined as the
64-bit (L) flag and is used to select between 64-bit mode and compatibility
mode when
IA-32e mode is active (IA32_EFER.LMA = 1). See Figure 4-2.

- If CS.L = 0 and IA-32e mode is active, the processor is running in
compatibility mode.
In this case, CS.D selects the default size for data and addresses. If CS.D
= 0, the
default data and address size is 16 bits. If CS.D = 1, the default data and
address size is
32 bits.

- If CS.L = 1 and IA-32e mode is active, the only valid setting is CS.D = 0.
This setting
indicates a default operand size of 32 bits and a default address size of 64
bits. The
CS.L = 1 and CS.D = 1 bit combination is reserved for future use and a #GP
fault will
be generated on an attempt to use a code segment with these bits set in
IA-32e mode.

.. In IA-32e mode, the CS descriptor's DPL is used for execution privilege
checks (as in
legacy 32-bit mode).
"

Specifically:

"
If CS.L = 1 and IA-32e mode is active, the only valid setting is CS.D = 0.
This setting indicates a default operand size of 32 bits and a default
address size of 64 bits.
"

As you can see from the text above, the default operand size remains 32
bits.

There is no way to specify a default operand size of 64 bits.

This makes it impossible to use a segment descriptor to quickly change
operand size from 32 bits to 64 bits or vice versa AT RUNTIME !

Otherwise it might have been possible to change the operand size at runtime
by simply changing the segment descriptor ?

^^^ Only one place for a change to occur ^^^ <- Could be a really nice
feature to convert existing binary code from 32 bit to 64 bit or vice versa
with a single change ! ;) preferably all at runtime ! <-- Nice idea.

Bye,
Skybuck.
Aug 31 '07 #75
It's the other people that started the insults not me !

They think they know everything, well that's definitely not the case !

Bye,
Skybuck.
Aug 31 '07 #76
Give me wings bitch ! =D

Bye,
Skybuck.

"Miguel Guedes" <ze*******@newsgroups.userwrote in message
news:ED*****************@newsfet01.ams...
Skybuck Flying wrote:
>No,

That newsgroup is full of retards, that's all LOL.

Well, this particular one (clc++) isn't.
>>
Bye,
Skybuck.

I wish you stuck to that and flew off in the wind.

Aug 31 '07 #77
Stephen Sprunk wrote:
"More computing sins are committed in the name of efficiency (without
necessarily achieving it) than for any other single reason -
including blind stupidity." - W.A. Wulf
Skybuck is very good in the "blind stupidity" department, though. <g>

--
Rudy Velthuis http://rvelthuis.de

"'Everything you say is boring and incomprehensible', she said,
'but that alone doesn't make it true.'" -- Franz Kafka
Aug 31 '07 #78
Skybuck Flying wrote:
Give me wings bitch ! =D
Time to ban you.

Frank
Aug 31 '07 #79
"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news4.zwoll1.ov.home.nl...
>
"Stephen Sprunk" <st*****@sprunk.orgwrote in message
news:46***********************@free.teranews.com.. .
>"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
>>The world is not completely 64 bit. The world is not static, it
fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

Not if you have a 64-bit machine; even if you're using a 32-bit machine,
emulating 64-bit operations will hurt performance less than trying to
detect the appropriate choice and then act on that information.

For addition and subtraction probably.

For multiplication and division some performance could be lost for 32 bits,
but it would still be faster than simulating it.
Not likely. On common 64-bit machines, all operations take the same amount
of time regardless of whether they're 32- or 64-bit, so there's no potential
speedup. So, you're only talking about potential benefits of not using
emulation on older 32-bit machines. The performance of detecting the 32-bit
case and then branching to either the 32- or 64-bit code paths (or, in the
16/32-bit equivalent, setting a bit in the segment descriptor) will usually
outweigh the savings you'll get from not needing to emulate 64-bit
operations. Even if it's not a certain victory, the programmer cost of the
code complexity will likely decide things in such a case -- especially since
it only benefits people with outdated machines.
Whatever the case may be.

The point is the detection is the overhead; if the CPU can do the detection,
that overhead might disappear ! ;) :)
Adding that detection logic into the CPU will just change where the overhead
is paid for; that cost has to be paid _somewhere_.

You seem to think that counting instructions is how to measure speed. That
hasn't been true on x86 since the days of the 486, or possibly even earlier.
Memory latency, cache (both instruction and data) hit rates, BPU and BHT
misses, utilization of varying types of functional units, parallelism, OOE,
and various other things mean the _only_ way to determine what's fastest is
to actually write the code and test it -- and the answers may be different
depending on the chips being used.

You are postulating chips that do not exist (this mythical BitMode) and that
the makers have shown no interest in making. You also ignore the cost of
figuring out what to set the BitMode to, as if that were free. You further
ignore how width-independent instructions are supposed to know how much data
to load/store, or how the compiler is supposed to efficiently reserve space
for such when the data types are not known at compile time.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 31 '07 #80
MooseFET wrote:
On Aug 30, 6:02 am, Ron Natalie <r...@spamcop.net> wrote:
>MooseFET wrote:
>>This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.
Incorrect. C and C++ certainly do not.

You claim the above and then go on to say the below:
> You can #define or typedef
something that appears to be a type but they aren't distinct types.

The "typedef" declares a new type.
No it does not. It makes an alias for an existing type. You can't
distinguish between the typedef and the original type either through
overloading or typeid or anything else.

Aug 31 '07 #81
On Aug 31, 3:13 pm, Ron Natalie <r...@spamcop.net> wrote:
MooseFET wrote:
On Aug 30, 6:02 am, Ron Natalie <r...@spamcop.net> wrote:
MooseFET wrote:
>This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.
Incorrect. C and C++ certainly do not.
You claim the above and then go on to say the below:
You can #define or typedef
something that appears to be a type but they aren't distinct types.
The "typedef" declares a new type.

No it does not.
Yes it does in all the ways that matter to the argument with Skybuck.
It causes a new name to be associated with a type. This makes it a
declaration of a type. Just because C doesn't do as strict of type
checking as some other languages doesn't make it not a declaration of
a type. After the typedef has been done there is a new symbol that is
a type.

Aug 31 '07 #82

"David Brown" <da***@westcontrol.removethisbit.comskrev i en meddelelse
news:46**********************@news.wineasy.se...

If you learn to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.
It's this particular troll's mode of operation. News2020 was more fun.
>

Sep 1 '07 #83

"Frederick Williams" <"Frederick Williams"@antispamhotmail.co.uk.invalid>
skrev i en meddelelse
news:46***************@antispamhotmail.co.uk.inval id...
Skybuck Flying wrote:
>>
Hello,

This morning I had an idea ...

I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?
....because he knows that there are always a few people in
'sci.electronics.design' that will take the bait!
Sep 1 '07 #84
MooseFET wrote:
:: On Aug 31, 3:13 pm, Ron Natalie <r...@spamcop.net> wrote:
::: MooseFET wrote:
:::: On Aug 30, 6:02 am, Ron Natalie <r...@spamcop.net> wrote:
::::: MooseFET wrote:
:::
:::::: This statement is incorrect. C, C++, Borland Pascal and its
:::::: descendants, and just about every other language I can think of
:::::: allow you to declare a new type to be the same as a simple
:::::: type, allow conditional compiles, and allow include files.
:::::: You don't need to have two copies of the source code.
::::: Incorrect. C and C++ certainly do not.
:::
:::: You claim the above and then go on to say the below:
:::
::::: You can #define or typedef
::::: something that appears to be a type but they aren't distinct
::::: types.
:::
:::: The "typedef" declares a new type.
:::
::: No it does not.
::
:: Yes it does in all the ways that matter to the argument with
:: Skybuck. It causes a new name to be associated with a type. This
:: makes it a declaration of a type. Just because C doesn't do as
:: strict of type checking as some other languages doesn't make it
:: not a declaration of a type. After the typedef has been done there
:: is a new symbol that is a type.

We don't care much about how Skybuck defines a "new type".

After the typedef there is a new symbol that is the name of the type.
The type, however, is exactly the same as it was before the typedef.
It has just got a "nickname", or an alias.
typedef Skybuck Bucky;

doesn't create a new person!

Bo Persson
Sep 1 '07 #85
Skybuck Flying sp**@hotmail.com posted to sci.electronics.design:
>
"David Brown" <da***@westcontrol.removethisbit.comwrote in message
news:46**********************@news.wineasy.se...
>Skybuck Flying wrote:
>>"David Brown" <da***@westcontrol.removethisbit.comwrote in
message news:46***********************@news.wineasy.se...
Skybuck Flying wrote:
There is definetly a speed difference especially for mul and div
for the modes I described.
>
Why do I have to choose the data type ?
>
Why can't the program choose the data type at runtime ?
>
If *you* are writing the program, *you* should know what sort of
data is stored in each variable. *You* can then tell the compiler by
choosing an appropriate data type. Is that so hard to grasp? It is up
to *you* to figure out what limits there will be on the size of the
data you are using, and therefore pick 32-bit or 64-bit (or whatever)
integers for your program. If you think there could be large
variations in the sizes, then either use a data type that will
certainly be big enough, or pick one with no arbitrary limit (there
are multiple precision integer libraries available for most
languages), or use a dynamically typed language.

Well that clearly sucks.

The world is not completely 64 bit. The world is not static, it
fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

So if your program needs 32 bits, use 32 bits. If it needs 64
bits, use 64 bits.

Yes, a very simple statement.

Achieving this in a scalable way is what this thread is all about.

Re-writing code, or writing double code, or even using multiple
libraries is not really what this is about.

It's nearly impossible to achieve without hurting performance. The only
solutions might be C++ templates or generics; I'm not even sure how easy
it would be to switch between two generated classes at runtime.

Bye,
Skybuck.
Can't speak for other libraries, but the integer and floating point
routines in the GCC C++ libraries are written as templates. Just don't
expect to change the CPU ALU mode on the fly (at least not on
x86_64, PPC, MIPS or SPARC architectures).

Sep 1 '07 #86
Skybuck Flying sp**@hotmail.com posted to sci.electronics.design:
>
"Stephen Sprunk" <st*****@sprunk.orgwrote in message
news:46***********************@free.teranews.com.. .
>"Skybuck Flying" <sp**@hotmail.comwrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
>>The world is not completely 64 bit, The world is not statis it
fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

Not if you have a 64-bit machine; even if you're using a 32-bit
machine, emulating 64-bit operations will hurt performance less
than trying to detect the appropriate choice and then act on that
information.

For addition and subtraction probably.

For multiplication and division some performance could be lost for 32
bits, but it would still be faster than simulating it.

Whatever the case may be.

The point is the detection is the overhead; if the CPU can do the
detection, that overhead might disappear ! ;) :)

Bye,
Skybuck.
Precisely why we use typing and let the compiler do it. It moves the
overhead of the detection clear out of the running program.
Sep 1 '07 #87
You have valid points.

Just because an if statement/switch to 32-bit code proved faster on my
system with a simple test program doesn't mean it will always be faster,
or always be faster on other chips.

I am pretty much done investigating this issue.

I am now convinced extending the code with int64's is ok.

And I will do it only at those places where it's absolutely necessary, the
rest will remain 32 bits.

So the current code base is being converted/upgraded to a mix of 32-bit
and 64-bit numbers.

Haven't looked at the compare statements for 64 bits and copy statements;
there's some more overhead there.

Some new algorithm parts are needed to cope with 64 bits as well, so the
program will probably be a bit slower anyway.

It's the price to pay lol ;)

Bye,
Skybuck.

Sep 1 '07 #88
Good get lost.

Bye,
Skybuck.

"Frank Birbacher" <bl************@gmx.netwrote in message
news:5j***********@mid.dfncis.de...
Skybuck Flying wrote:
>Give me wings bitch ! =D

Time to ban you.

Frank

Sep 1 '07 #89
ChairmanOfTheBored wrote:
Gee... Look! The SkyTard is back, and he even answered his own post
seven times!
What are you on? He wouldn't be the SkyTard otherwise!
Sep 2 '07 #90
