Bytes IT Community

Writing Scalable Software in C++

Hello,

This morning I had an idea for how to write scalable software in general.
Unfortunately with Delphi 2007 it can't be done, because Delphi does not support
operator overloading for classes, or record inheritance (records do have
operator overloading).

The idea is to write a generic integer class with derived integer classes
for 8 bit, 16 bit, 32 bit, 64 bit and 64 bit emulated.

Then at runtime the computer program can determine which derived integer
class is needed to perform the necessary calculations.

The necessary integer class is instantiated and assigned to a generic
integer class variable/reference and the generic references/variables are
used to write the actual code that performs the calculations.

Below is a demonstration program; it doesn't compile completely yet, but
it's getting close.

// TestWritingScalableSoftware.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

class TSkybuckGenericInteger
{
};
class TSkybuckInt32 : public TSkybuckGenericInteger
{
private:
int mInteger;

public:

// constructor with initializer parameter
TSkybuckInt32( int ParaValue );

// add operator overloader
TSkybuckInt32& operator+( const TSkybuckInt32& ParaSkybuckInt32 );

void Display();
};

class TSkybuckInt64 : TSkybuckGenericInteger
{
private:
long long mInteger;

public:
// constructor with initializer parameter
TSkybuckInt64( long long ParaValue );

// add operator overloader
TSkybuckInt64& operator+( const TSkybuckInt64& ParaSkybuckInt64 );

void Display();
};

//
// TSkybuckInt32

// constructor
TSkybuckInt32::TSkybuckInt32( int ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt32& TSkybuckInt32::operator+ ( const TSkybuckInt32& ParaSkybuckInt32 )
{
mInteger = mInteger + ParaSkybuckInt32.mInteger;
return *this;
}

void TSkybuckInt32::Display()
{
printf( "%d \n", mInteger );
}

//
// TSkybuckInt64
//
// constructor
TSkybuckInt64::TSkybuckInt64( long long ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt64& TSkybuckInt64::operator+ ( const TSkybuckInt64& ParaSkybuckInt64 )
{
mInteger = mInteger + ParaSkybuckInt64.mInteger;
return *this;
}

void TSkybuckInt64::Display()
{
printf( "%lld \n", mInteger ); // %lld, not %lu: mInteger is a signed long long
}

int _tmain(int argc, _TCHAR* argv[])
{
long long FileSize;
long long MaxFileSize32bit;

// must write code like this to use constructor ? can't just declare a,b,c ?
TSkybuckInt32 A32 = TSkybuckInt32( 30 );
TSkybuckInt32 B32 = TSkybuckInt32( 70 );
TSkybuckInt32 C32 = TSkybuckInt32( 0 );
C32 = A32 + B32;
C32.Display();

TSkybuckInt64 A64 = TSkybuckInt64( 30 );
TSkybuckInt64 B64 = TSkybuckInt64( 70 );
TSkybuckInt64 C64 = TSkybuckInt64( 0 );
C64 = A64 + B64;
C64.Display();

FileSize = 1024; // kilobyte
FileSize = FileSize * 1024; // megabyte
FileSize = FileSize * 1024; // gigabyte
FileSize = FileSize * 1024; // terabyte

MaxFileSize32bit = 1024; // kilobyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // megabyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // gigabyte
MaxFileSize32bit = MaxFileSize32bit * 4; // 4 gigabyte
if (FileSize < MaxFileSize32bit)
{
TSkybuckGenericInteger AGeneric = TSkybuckInt32( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt32( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt32( 0 );
} else
{
TSkybuckGenericInteger AGeneric = TSkybuckInt64( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt64( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt64( 0 );
}

CGeneric = AGeneric + BGeneric;
CGeneric.Display();

while (1)
{
}

return 0;
}

Probably some minor compile issues remain:

Error 1 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 ' to 'const TSkybuckGenericInteger &' exists, but is inaccessible y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 152

Error 2 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 ' to 'const TSkybuckGenericInteger &' exists, but is inaccessible y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 153

Error 3 error C2243: 'type cast' : conversion from 'TSkybuckInt64 *__w64 ' to 'const TSkybuckGenericInteger &' exists, but is inaccessible y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 154

Error 4 error C2065: 'CGeneric' : undeclared identifier y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 157

Error 5 error C2065: 'AGeneric' : undeclared identifier y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 157

Error 6 error C2065: 'BGeneric' : undeclared identifier y:\cpp\tests\test writing scalable software generic math\version 0.01\testwritingscalablesoftware\testwritingscalablesoftware\testwritingscalablesoftware.cpp 157

How can I solve the remaining issues?

Bye,
Skybuck.
Aug 29 '07 #1
89 Replies


For those that missed the other threads, here is the explanation of why I want
something like this:

For 32 bit compilers:

int (32 bit signed integer) is fast; it's translated to a single 32 bit CPU
instruction.

long long (64 bit signed integer) is slow; it's translated to multiple 32
bit CPU instructions.

For 64 bit compilers

long long (64 bit signed integer) should be fast; it's translated to a
single 64 bit CPU instruction.

I want to write code just once, not three times! And I want maximum speed!

So I need a generic integer class which will use the appropriate derived
class; the program must decide what's necessary at runtime, and still give
good performance!

I believe/hope the provided example, after some minor fixes, should be able to
do what I want ;) !

Bye,
Skybuck.

Aug 29 '07 #2

Well, since the code doesn't compile yet I can't look at the asm generated
but now that I think about it...

Maybe C++ needs polymorphic/virtual function machinery for this code to
compile, and that might introduce more overhead than it's worth.

Remains to be seen.

Bye,
Skybuck.
Aug 29 '07 #3

On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,

This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not support
operator overloading for classes, or record inheritance (records do have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.

Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.
On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.

Aug 29 '07 #4

Yeah and possibly:

TdoubleGenericInteger = int128; // implemented manually.

These doubles could be used to detect generic integer overflows, range
check errors and other kinds of problems.

Bye,
Skybuck.
Aug 29 '07 #5


"MooseFET" <ke******@rahul.net> wrote in message
news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>Hello,

This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operator overloading for classes, or record inheritance (records do have
operator overloading)

This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.

Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.
Without the mentioned features, writing scalable software, including writing
scalable math routines, becomes impractical.

Even with virtual methods it would become slow.
On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.
It does exactly what I want it to do, but it does it slowly, so it's not what
I want.

Bye,
Skybuck.
Aug 29 '07 #6

Somebody else also had an interesting idea:

It comes down to this:

1. Generate multiple libraries, for example:

32 bit version
true 64 bit version
emulated 64 bit version

2. It might have some problems:

Problem 1: parameters for routines are different.
Problem 2: calls to routines are different because of the parameters.
Problem 3: debugging problem, different libraries same source <- can't be;
the source was slightly modified for each generated library.

These problems could make finding a solution more complex.

It does solve another problem:

Different parts of the application can have different versions.

This idea is definitely worth exploring.

Bye,
Skybuck.
Aug 29 '07 #7

Problem 4:

Distribution size grows considerably.

3 Different libraries must be supplied.

Only one library has to be loaded.

Maybe two, if different parts require it.

Biggest problem:

The debugging problem.

That's what I don't like about it.

Debugging is very important to me.

How can different libraries be debugged with the same code, where only one
declaration is different because it was modified during the build?

Strange.

Bye,
Skybuck.

Aug 29 '07 #8

On 2007-08-29 16:02, Skybuck Flying wrote:
Hello,
Why the crossposting to all those different groups, especially when this
subject is off-topic in most of them (what does sci.electronics.design
have to do with anything)? Please read the FAQ before posting any
more messages, http://www.parashift.com/c++-faq-lite/ , starting with
section 5. I've replied only to comp.lang.c++.
This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not support
operating overloading for classes, or record inheritance (records do have
operator overloading)
It's not a very good scheme; using virtual functions is not very
performance efficient, and you'd be using them all over the place. If
it's not known at compile time what size is needed, then a library for
arbitrary precision is probably better than this solution.
The idea is to write a generic integer class with derived integer classess
for 8 bit, 16 bit, 32 bit, 64 bit and 64 bit emulated.

Then at runtime the computer program can determine which derived integer
class is needed to perform the necessary calculations.

The necessary integer class is instantiated and assigned to a generic
integer class variable/reference and the generic references/variables are
used to write the actual code that performs the calculations.

Below is a demonstration program, it's not yet completely compiling, but
it's getting close.

// TestWritingScalableSoftware.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

class TSkybuckGenericInteger
{
};
class TSkybuckInt32 : public TSkybuckGenericInteger
{
private:
int mInteger;
You are wrong to assume that an int is always 32 bits.
>
public:

// constructor with initializer parameter
TSkybuckInt32( int ParaValue );

// add operator overloader
TSkybuckInt32& operator+( const TSkybuckInt32& ParaSkybuckInt32 );

void Display();
Instead of a Display() function, overload the << operator.
};

class TSkybuckInt64 : TSkybuckGenericInteger
{
private:
long long mInteger;
Same goes for long, it's not guaranteed to be 64 bits.
>
public:
// constructor with initializer parameter
TSkybuckInt64( long long ParaValue );

// add operator overloader
TSkybuckInt64& operator+( const TSkybuckInt64& ParaSkybuckInt64 );

void Display();
};

//
// TSkybuckInt32

// constructor
TSkybuckInt32::TSkybuckInt32( int ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt32& TSkybuckInt32::operator+ ( const TSkybuckInt32& ParaSkybuckInt32 )
{
mInteger = mInteger + ParaSkybuckInt32.mInteger;
return *this;
}

void TSkybuckInt32::Display()
{
printf( "%d \n", mInteger );
}

//
// TSkybuckInt64
//
// constructor
TSkybuckInt64::TSkybuckInt64( long long ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
TSkybuckInt64& TSkybuckInt64::operator+ ( const TSkybuckInt64& ParaSkybuckInt64 )
{
mInteger = mInteger + ParaSkybuckInt64.mInteger;
return *this;
}

void TSkybuckInt64::Display()
{
printf( "%lld \n", mInteger );
}

int _tmain(int argc, _TCHAR* argv[])
Non-standard main
{
long long FileSize;
long long MaxFileSize32bit;

// must write code like this to use constructor ? can't just declare a,b,c ?
TSkybuckInt32 A32 = TSkybuckInt32( 30 );
TSkybuckInt32 A32(30);
TSkybuckInt32 B32 = TSkybuckInt32( 70 );
TSkybuckInt32 C32 = TSkybuckInt32( 0 );
C32 = A32 + B32;
C32.Display();

TSkybuckInt64 A64 = TSkybuckInt64( 30 );
TSkybuckInt64 B64 = TSkybuckInt64( 70 );
TSkybuckInt64 C64 = TSkybuckInt64( 0 );
C64 = A64 + B64;
C64.Display();

FileSize = 1024; // kilobyte
FileSize = FileSize * 1024; // megabyte
FileSize = FileSize * 1024; // gigabyte
FileSize = FileSize * 1024; // terabyte

MaxFileSize32bit = 1024; // kilobyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // megabyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // gigabyte
MaxFileSize32bit = MaxFileSize32bit * 4; // 4 gigabyte
if (FileSize < MaxFileSize32bit)
{
TSkybuckGenericInteger AGeneric = TSkybuckInt32( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt32( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt32( 0 );
} else
{
TSkybuckGenericInteger AGeneric = TSkybuckInt64( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt64( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt64( 0 );
}

CGeneric = AGeneric + BGeneric;
Those variables are all out of scope.

I'm not very impressed with the idea so far, I think you can make
something much more useful with templates, that should also give a lot
more efficiency than you can get with virtual functions.

--
Erik Wikström
Aug 29 '07 #9

"Skybuck Flying" <sp**@hotmail.com> wrote in message
news:fb**********@news2.zwoll1.ov.home.nl...
>
"MooseFET" <ke******@rahul.net> wrote in message
news:11**********************@q5g2000prf.googlegroups.com...
>On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>>Hello,

This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operator overloading for classes, or record inheritance (records do
have
operator overloading)

This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.

Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.

Without the mentioned features writing scalable software, including
writing scalable math routines becomes impractical.

Even with virtual methods it would become slow.
Indeed.
>On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.

It does exactly what I want it to do, it does it slowly, so it's not what
I want it to do.
That's why I told you that checking for whether emulation is needed and
picking a code path will be slower than just using it all the time. The
cost of emulation, unless you're writing incredibly math-intensive code, is
trivial. If you care about raw math performance and the code is just too
slow to use, buying a faster CPU will be cheaper than trying to figure out
how to make slow machines perform better. It's certainly cheaper than
modifying the ISA.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 29 '07 #10

"Skybuck Flying" <sp**@hotmail.com> wrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
For those that missed the other threads here is the explanation why I want
something like this:

For 32 bit compilers:

int (32 bit signed integer) is fast, it's translated to single 32 bit cpu
instructions.

long long (64 bit signed integer) is slow, it's translated to multiple 32
bit cpu instructions.
What benchmark are you using that shows it's "slow"? How much is the
supposed performance hit vs 32-bit, compared to the overall program
execution time?
For 64 bit compilers

long long (64 bit signed integer) should be fast, it's translated to a
single 64 bit cpu instruction.

I want to write code just once ! not three times ! and I want maximum
speed !
Those things rarely go together. You can have good, fast, and cheap -- but
only two at a time.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 29 '07 #11

mpm
On Aug 29, 10:39 am, MooseFET <kensm...@rahul.net> wrote:
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not support
operator overloading for classes, or record inheritance (records do have
operator overloading)

This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.

Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.

On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.
IMO, this Skybuck poster is whacked. Mentally so.

I'll wager that if you just parse his code and remove all occurrences
of the letters "skybuck" you'll discover it's someone else's work and
he's just inserted his gibberish to make himself feel more
important. Probably right out of a help file or compiler manual or
something.

I can practically guarantee you he did not write any of this code
- which of course would explain why it doesn't even do what he claims
it's supposed to do.

And I've never met the guy (girl? it? whatever Skybuck is).

Aug 29 '07 #12

Skybuck Flying wrote:
:: For those that missed the other threads here is the explanation
:: why I want something like this:
::
:: For 32 bit compilers:
::
:: int (32 bit signed integer) is fast, it's translated to single 32
:: bit cpu instructions.
::
:: long long (64 bit signed integer) is slow, it's translated to
:: multiple 32 bit cpu instructions.
::
:: For 64 bit compilers
::
:: long long (64 bit signed integer) should be fast, it's translated
:: to a single 64 bit cpu instruction.

int (which might be 32 bit) is also fast; it's translated to a single 32
bit CPU instruction.
Bo Persson
Aug 29 '07 #13

Well we can pretty safely forget about this "solution".

It's not really a solution.

The problem is with the data.

Different data types are needed.

32 bit data and 64 bit data.

Trying to cram those into one data type is not possible.

Not with classes, not with records; maybe with variants, but those are too slow.

If you do try you will run into all kinds of problems, code problems.

It was an interesting experience though.

I played around with DLLs, then packages: LoadPackage, UnloadPackage,
Tpersistent (Delphi stuff). Then I realized: let's just copy & paste the code
and try to use unit_someversion.TsomeClass, but nope.

The problem with the data remains.

I really do want one data type to perform operations on, and this data type
should scale when necessary.

I want one piece of code operating on this data type, and it should change when necessary.

It looks simple to do, but currently in Delphi it's probably impossible to do
it fast; even the slow versions create problems.

The best solution is probably my own solution:

TgenericInteger = record
mInteger : int64;
end;

Overloaded add operator:
if BitMode = 32 then
begin
int32(mInteger) := int32(mInteger) + etc;
end else
if BitMode = 64 then
begin
int64(mInteger) := int64(mInteger) + etc;
end;

Something like that.

Introduces a few if statements... which is overhead...

Question remains, how much overhead is it really ?

Yeah good question:

Which one is faster:

add
adc

Or:
mov al, bitmode
cmp al, 32
jne lab1
add eax, ecx
lab1:
cmp al, 64
jne lab2
add eax, ecx
adc eax, ecx
lab2:

Something like that...

Well I think always executing add, adc is faster than the compares and jumps
:) LOL.

End of story? Not yet... this is a simple example... what about mul and div?
<- those are complex for emulated int64.

Maybe using an if statement to switch to 32 bit when possible would be much
faster after all?!

Bye,
Skybuck.
Aug 29 '07 #14

Well I just had an idea which might be interesting after all:

64 bit emulated mul and div are probably slow.

So if it's possible to switch to 32 bit maybe some speed gains can be
achieved !

So for addition and subtraction the 64 bit emulated versions are always
called.

But for multiplication and division the 32 bit version might be called when
possible, and the 64 bit emulated version only when absolutely necessary.

I shall inspect what Delphi does for 64 bit (<-emulated) multiplication and
division ;)

Bye,
Skybuck.
Aug 29 '07 #15

Lol, you funny.

Bye,
Skybuck.
Aug 29 '07 #16

Oh shit.

I was wondering who added alt.math.

But it was me LOL.

I added the wrong newsgroup.

I wanted to add alt.lang.asm.

Oh well

I'll start a new thread there and post a question about this ;)

Bye,
Skybuck.

"Skybuck Flying" <sp**@hotmail.com> wrote in message
news:fb**********@news5.zwoll1.ov.home.nl...
Well I just had an idea which might be interesting after all:

64 bit emulated mul and div are probably slow.

So if it's possible to switch to 32 bit maybe some speed gains can be
achieved !

So for addition and subtraction the 64 bit emulated versions are always
called.

But for multiplication and division the 32 bit version might be called
when possible and the 64 bit emulated version when absolutely necessary.

I shall inspect what Delphi does for 64 bit (<-emulated) multiplication
and division ;)

Bye,
Skybuck.

Aug 29 '07 #17

Ok,

I fixed the code somewhat.

I don't completely understand all the syntax and such; I simply followed
some other code examples.

The 3 lines which are commented probably need a type conversion or
something.

I changed the operators to be virtual, now they can be overloaded.

That's kinda interesting/cool... virtual overloaded operators.

However: IT'S HELLISH SLOW when looking at the assembler.

It might not be as slow as Delphi's dynamic array reference counting and such,
but still... (however it's not fair to compare, because this C++ example ain't
dynamically, infinitely scalable ;))

Way too slow to be usable for my purposes.

It was interesting to see C++ virtual overloaded operators in action...
maybe making the operators virtual would not be necessary if other tricks
were used? I am not good enough in C++ to try other tricks.

I also explored some other ideas, none of which are satisfying.

Really, really sad.

I will probably have to convert my code to 64 bit emulated ints and say
goodbye to performance for the 32 bit cases!

Here is the fixed up code:

// TestWritingScalableSoftware.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"

class TSkybuckGenericInteger
{
public:
virtual TSkybuckGenericInteger& operator+( const TSkybuckGenericInteger& ParaSkybuckGenericInteger );
virtual void Display();
};
class TSkybuckInt32 : public TSkybuckGenericInteger
{
private:
int mInteger;
public:
// constructor with initializer parameter
TSkybuckInt32( int ParaValue );
// first solution:
virtual TSkybuckInt32& operator+( const TSkybuckInt32& ParaSkybuckInt32 );
virtual void Display();
};
class TSkybuckInt64 : TSkybuckGenericInteger
{
private:
long long mInteger;
public:
// constructor with initializer parameter
TSkybuckInt64( long long ParaValue );
// first solution:
virtual TSkybuckInt64& operator+( const TSkybuckInt64& ParaSkybuckInt64 );
virtual void Display();
};
//
// TSkybuckGenericInteger
//
TSkybuckGenericInteger& TSkybuckGenericInteger::operator+( const TSkybuckGenericInteger& ParaSkybuckGenericInteger )
{
return *this;
}
void TSkybuckGenericInteger::Display()
{
printf("nothing \n");
}

//
// TSkybuckInt32
//
// binary arithmetic add operator overloader
// adds A and B together and returns a new C
// this might not be what I want disabled for now
/*
TSkybuckInt32 operator + ( const TSkybuckInt32 &A, const TSkybuckInt32 &B );
*/
// constructor
TSkybuckInt32::TSkybuckInt32( int ParaValue )
{
mInteger = ParaValue;
}

// add operator overloader
/*
TSkybuckInt32 operator + ( const TSkybuckInt32& A, const TSkybuckInt32& B)
{
TSkybuckInt32 C = TSkybuckInt32( 0 );
C.mInteger = A.mInteger + B.mInteger;
return C.mInteger;
}
*/
// add operator overloader
TSkybuckInt32& TSkybuckInt32::operator+ ( const TSkybuckInt32& ParaSkybuckInt32 )
{
mInteger = mInteger + ParaSkybuckInt32.mInteger;
return *this;
}

void TSkybuckInt32::Display()
{
printf( "%d \n", mInteger );
}
//
// TSkybuckInt64
//
// constructor
TSkybuckInt64::TSkybuckInt64( long long ParaValue )
{
mInteger = ParaValue;
}
// add operator overloader
TSkybuckInt64& TSkybuckInt64::operator+ ( const TSkybuckInt64& ParaSkybuckInt64 )
{
mInteger = mInteger + ParaSkybuckInt64.mInteger;
return *this;
}

void TSkybuckInt64::Display()
{
printf( "%lld \n", mInteger ); // %lld, not %lu: mInteger is a signed long long
}

int _tmain(int argc, _TCHAR* argv[])
{
long long FileSize;
long long MaxFileSize32bit;
// must write code like this to use constructor ? can't just declare a,b,c ?
TSkybuckInt32 A32 = TSkybuckInt32( 30 );
TSkybuckInt32 B32 = TSkybuckInt32( 70 );
TSkybuckInt32 C32 = TSkybuckInt32( 0 );
C32 = A32 + B32;
C32.Display();
TSkybuckInt64 A64 = TSkybuckInt64( 30 );
TSkybuckInt64 B64 = TSkybuckInt64( 70 );
TSkybuckInt64 C64 = TSkybuckInt64( 0 );
C64 = A64 + B64;
C64.Display();
FileSize = 1024; // kilobyte
FileSize = FileSize * 1024; // megabyte
FileSize = FileSize * 1024; // gigabyte
FileSize = FileSize * 1024; // terabyte
MaxFileSize32bit = 1024; // kilobyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // megabyte
MaxFileSize32bit = MaxFileSize32bit * 1024; // gigabyte
MaxFileSize32bit = MaxFileSize32bit * 4; // 4 gigabyte
TSkybuckGenericInteger AGeneric = TSkybuckGenericInteger();
TSkybuckGenericInteger BGeneric = TSkybuckGenericInteger();
TSkybuckGenericInteger CGeneric = TSkybuckGenericInteger();
if (FileSize < MaxFileSize32bit)
{
TSkybuckGenericInteger AGeneric = TSkybuckInt32( 30 );
TSkybuckGenericInteger BGeneric = TSkybuckInt32( 70 );
TSkybuckGenericInteger CGeneric = TSkybuckInt32( 0 );
} else
{
// TSkybuckGenericInteger AGeneric = TSkybuckInt64( 30 );
// TSkybuckGenericInteger BGeneric = TSkybuckInt64( 70 );
// TSkybuckGenericInteger CGeneric = TSkybuckInt64( 0 );
}
CGeneric = AGeneric + BGeneric;
CGeneric.Display();
while (1)
{
}
return 0;
}

Bye,
Skybuck.
Aug 29 '07 #18

On Aug 29, 8:16 am, "Skybuck Flying" <s...@hotmail.com> wrote:
"MooseFET" <kensm...@rahul.net> wrote in message

news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operator overloading for classes, or record inheritance (records do have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.

Without the mentioned features writing scalable software, including writing
scalable math routines becomes impractical.
That is simply false. Why do you think it can't be done with virtual
methods?
Even with virtual methods it would become slow.
Virtual methods when done the way that Borland Pascal, Delphi and a
good implementation of C++ add very little extra time to the total run
time. The virtual method dispatch code takes less instructions than
the entry code of most routines.
On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.

It does exactly what I want it to do, it does it slowly, so it's not what I
want it to do.
No, it doesn't. It only gives the appearance of doing what you want.
There is nothing scalable going on.

>
Bye,
Skybuck.

Aug 30 '07 #19

"Skybuck Flying" <sp**@hotmail.com> wrote in message
news:fb**********@news3.zwoll1.ov.home.nl...
That's kinda interesting/cool... virtual overloaded operators.

However: IT'S HELLISH SLOW when looking at the assembler.

It might not be as slow as Delphi's dynamic array reference counting and
such, but still... (however it's not fair to compare, because this C++
example ain't dynamically, infinitely scalable ;))

Way too slow to be usable for my purposes.

It was interesting to see C++ virtual overloaded operators in action...
maybe making the operators virtual would not be necessary if other tricks
were used? I am not good enough in C++ to try other tricks.

I also explored some other ideas, none of which are satisfying.

Really, really sad.

I will probably have to convert my code to 64 bit emulated ints and say
goodbye to performance for 32 bit cases !
And, if you'd bothered to read my posts, you'd see that's exactly what I
told you you'd see: "Simply running in 64-bit mode (even emulated) all the
time will be faster on modern CPUs than trying to decide at runtime which is
better."

I didn't have to run any tests to know that, merely understanding how the
CPUs and compilers actually work. You might try investigating those things
before posting, as it'll save you (and us) a lot of time and effort.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 30 '07 #20

"MooseFET" <ke******@rahul.net> wrote in message
news:11**********************@x40g2000prg.googlegroups.com...
On Aug 29, 8:16 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>"MooseFET" <kensm...@rahul.net> wrote in message
news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
This morning I had an idea how to write Scalable Software in
general. Unfortunately with Delphi 2007 it can't be done
because it does not support operator overloading for
classes, or record inheritance (records do have operator
overloading)
This argument is wrong in two ways. It assumes things that
are not true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any
language that has these features is able to operate on values
where the type is not known at compile time.

Without the mentioned features writing scalable software,
including writing scalable math routines becomes impractical.

That is simply false. Why do you think it can't be done with
virtual methods?
It can, of course.
>Even with virtual methods it would become slow.

Virtual methods when done the way that Borland Pascal, Delphi
and a good implementation of C++ add very little extra time to
the total run time. The virtual method dispatch code takes less
instructions than the entry code of most routines.
True. However, in this particular example, we're comparing the cost of
using virtual methods to select 32- and 64-bit code paths vs. the cost of
emulating 64-bit all the time.

* You have to do a vtable lookup
* You have to get the parameters into the right registers or, worse, in the
right places on the stack
* You have to call the function
* You have to do a function prolog
* Do the work
* You have to do a function epilog
* You have to return from the function
* You have to get the results from the return register or stack to where you
want it.

All of those steps need to be done in series, because they depend on each
other. You also lose the ability to schedule multiple such operations in
parallel or one operation in parallel with other code, greatly increasing
latency and reducing performance. Finally, there's significant additional
costs if you have L1-I misses, BHT misses, stack-based arguments, etc.

Compare all of that vs. just emulating a 64-bit type (assuming a 32-bit CPU)
for all math. It's obvious to anyone who understands CPU architecture which
will win. Skybuck's the only one who doesn't get it, for obvious reasons.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
--
Posted via a free Usenet account from http://www.teranews.com

Aug 30 '07 #21

P: n/a
Skybuck Flying wrote:
>
Hello,

This morning I had an idea ...
I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?

--
Remove "antispam" and ".invalid" for e-mail address.
"He that giveth to the poor lendeth to the Lord, and shall be repaid,"
said Mrs Fairchild, hastily slipping a shilling into the poor woman's
hand.
Aug 30 '07 #22

P: n/a
Skybuck Flying wrote:
For those that missed the other threads here is the explanation why I want
something like this:

For 32 bit compilers:

int (32 bit signed integer) is fast, it's translated to single 32 bit cpu
instructions.

long long (64 bit signed integer) is slow, it's translated to multiple 32
bit cpu instructions.

For 64 bit compilers

long long (64 bit signed integer) should be fast, it's translated to a
single 64 bit cpu instruction.

I want to write code just once ! not three times ! and I want maximum speed
!
Look, it's quite simple - if you want 32-bit data, use 32-bit integers.
If you want 64-bit data, use 64-bit integers. There is virtually no
situation where 64-bit integers are faster than 32-bit integers on a
64-bit processor. On such rare occasions when there *is* a difference,
coding specifically for the algorithm in question will make more difference.
Aug 30 '07 #23

P: n/a

"MooseFET" <ke******@rahul.net> wrote in message
news:11**********************@x40g2000prg.googlegroups.com...
On Aug 29, 8:16 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>"MooseFET" <kensm...@rahul.net> wrote in message

news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
>This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operator overloading for classes, or record inheritance (records do
have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.

Without the mentioned features writing scalable software, including
writing
scalable math routines becomes impractical.

That is simply false. Why do you think it can't be done with virtual
methods?
Because virtual methods are not operator overloading.

Writing code such as:

Div( Multiply( Add( A, B ), C ), D )

is impractical and of course slow, ^^^ call overhead.
>
>Even with virtual methods it would become slow.

Virtual methods when done the way that Borland Pascal, Delphi and a
good implementation of C++ add very little extra time to the total run
time. The virtual method dispatch code takes less instructions than
the entry code of most routines.
A good operator overloading implementation does not even have call overhead.

I think that's how Delphi's operator overloading for records work.

^^^ No overhead ^^^

(Not sure though, but I thought that's how it works)

On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.

It does exactly what I want it to do, it does it slowly, so it's not what
I
want it to do.

No, it doesn't. It only gives the appearance of doing what you want.
There is nothing scalable going on.
Apparently you see it differently.

I already used these techniques to scale to infinity.

So you're simply wrong about that one.

Bye,
Skybuck.
Aug 30 '07 #24

P: n/a
On 29 Aug., 20:10, "Skybuck Flying" <s...@hotmail.com> wrote:
Well we can pretty safely forget about this "solution".

It's not really a solution.

The problem is with the data.

Different data types are needed.

32 bit data and 64 bit data.

Trying to cram those into one data type is not possible.

Not with classes, not with records; maybe with variants, but those are too slow.

If you do try you will run into all kinds of problems, code problems.

It was an interesting experience though.

I played around with DLL's then Packages. LoadPackage, UnloadPackage,
Tpersistent (Delphi stuff) then I realized let's just copy & paste the code
and try to use unit_someversion.TsomeClass but nope.

The problem with the data remains.

I really do want one data type to perform operations on, and this data type
should scale when necessary.

I want one code on this data type and should change when necessary.

It looks simple to do, but currently in Delphi it's probably impossible to do
it fast; even slow versions create problems.

The best solution is probably my own solution:

TgenericInteger = record
mInteger : int64;
end;

Overloaded add operator:
if BitMode = 32 then
begin
int32(mInteger) := int32(mInteger) + etc;
end else
if BitMode = 64 then
begin
int64(mInteger) := int64(mInteger) + etc;
end;

Something like that.

Introduces a few if statements... which is overhead...

Question remains, how much overhead is it really ?

Yeah good question:

Which one is faster:

add
adc

Or:
mov al, bitmode
cmp al, 32
jne lab1
add eax, ecx
lab1:
cmp al, 64
jne lab2
add eax, ecx
adc edx, ebx ; high dwords: the carry must go into the second register pair
lab2:

Something like that...

Well I think always executing add, adc is faster than the compares and jumps
:) LOL.

End of story? Not yet... this is a simple example... what about mul and div?
<- those are complex for emulated int64.

Maybe using if statement to switch to 32 bit when possible would be much
faster after all ?!

Bye,
Skybuck.
I think I still don't get what you want.
If you want 32 bits, use "int".
If you want 64 bits, use "long long".
If you want the biggest type that the target CPU can mul/div in a
single instruction, use "long".
At least this seems to work out correctly with gcc and Intel 32/64-bit
machines.
At least you don't save *memory* by adding a 4-byte vtable pointer just
to distinguish between an additional 4 bytes of int and 8 bytes
of long long (not to mention alignment), and I doubt you save much
*time* either.

Moreover, I have the impression that you don't treat mixed cases like
int32 + int64 well

Aug 30 '07 #25

P: n/a
What makes you believe I don't get it ?

Please stop your unnecessary insults.

Bye,
Skybuck.
Aug 30 '07 #26

P: n/a
Absolute nonsense.

If I want I can write a computer program that runs 32 bit when possible and
64 bit emulated when needed.

My computer program will outperform your "always 64 emulated" program WITH
EASE.

The only problem is that I have to write each code twice.

A 32 bit version and a 64 bit version.

I simply instantiate the necessary object and run it.

Absolutely no big deal.

The only undesirable property of this solution is two code bases.

Your lack of programming language knowledge and experience is definitely
showing.

Bye,
Skybuck.

Aug 30 '07 #27

P: n/a
Math was an accident, probably related anyway.

Electronics.design might be related as well ;)

Bye,
Skybuck.
Aug 30 '07 #28

P: n/a
pan
In article <fb**********@news5.zwoll1.ov.home.nl>, "Skybuck
Flying" <sp**@hotmail.com> wrote:
> That is simply false. Why do you think it can't be done with
virtual methods?
Because virtual methods are not operator overloading.
Writing code such as:
Div( Multiply( Add( A, B ), C ), D )
Is unpractical and ofcourse slow, ^^^ call overhead.
A user-defined overloaded operator call is as fast as a user-defined
function call, simply because they're both the same thing.
A good operator overloading implementation does not even have call
overhead.
That is usually called "inlining", and it can be applied both to functions
and overloaded operators.
Inlining is unlikely to happen for a virtual function or operator, though.

--
Marco

--
I'm trying a new usenet client for Mac, Nemo OS X.
You can download it at http://www.malcom-mac.com/nemo

Aug 30 '07 #29

P: n/a
There is definitely a speed difference, especially for mul and div, in the
modes I described.

Why do I have to choose the data type ?

Why can't the program choose the data type at runtime ?

Bye,
Skybuck.
Aug 30 '07 #30

P: n/a
Yes you missed the other threads, I shall explain again lol:

I want:

1. One code base which adapts at runtime:

2. Uses 32 bit instructions when possible.

3. Switches to 64 bit instructions when necessary (true or emulated).

4. No extra overhead.

As far as I can tell the cpu's for pc's are inflexible:

32 bit data types require 32 bit instructions.

64 bit data types require 64 bit instructions or alternatively:

64 bit data types require multiple 32 bit instructions.

This means it's necessary to code 3 code paths !

I do not want to write code 3 times !

I want to express my formula's and algorithms just one time !

I want the program/code base to adapt to the optimal instruction sequences
without actually having to code those three times !

I suggested a "feature extension" to processors: "Flexible Instruction Set".

The idea is to use a BitMode variable to specify to the cpu how it is
supposed to interpret the coded instructions sequences.

So that I can write a single instruction sequence and only need to change
a single variable.

Many people started bitching that current CPUs can already do this for
16/32/64.

I have seen no proof whatsoever.

Can you provide proof?

Bye,
Skybuck.
Aug 30 '07 #31

P: n/a
Skybuck Flying wrote:
There is definitely a speed difference, especially for mul and div, in the
modes I described.

Why do I have to choose the data type ?

Why can't the program choose the data type at runtime ?
If *you* are writing the program, *you* should know what sort of data is
stored in each variable. *You* can then tell the compiler by choosing
an appropriate data type. Is that so hard to grasp? It is up to *you*
to figure out what limits there will be on the size of the data you
are using, and therefore pick 32-bit or 64-bit (or whatever) integers
for your program. If you think there could be large variations in the
sizes, then either use a data type that will certainly be big enough, or
pick one with no arbitrary limit (there are multiple precision integer
libraries available for most languages), or use a dynamically typed
language.
Aug 30 '07 #32

P: n/a
Skybuck Flying wrote:
"MooseFET" <ke******@rahul.net> wrote in message
news:11**********************@x40g2000prg.googlegroups.com...
>On Aug 29, 8:16 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>>"MooseFET" <kensm...@rahul.net> wrote in message

news:11**********************@q5g2000prf.googlegroups.com...

On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operating overloading for classes, or record inheritance (records do
have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.
Without the mentioned features writing scalable software, including
writing
scalable math routines becomes impractical.
That is simply false. Why do you think it can't be done with virtual
methods?

Because virtual methods are not operator overloading.

Writing code such as:

Div( Multiply( Add( A, B ), C ), D )

is impractical and of course slow, ^^^ call overhead.
>>Even with virtual methods it would become slow.
Virtual methods when done the way that Borland Pascal, Delphi and a
good implementation of C++ add very little extra time to the total run
time. The virtual method dispatch code takes less instructions than
the entry code of most routines.

A good operator overloading implementation does not even have call overhead.

I think that's how Delphi's operator overloading for records work.

^^^ No overhead ^^^

(Not sure though, but I thought that's how it works)

>>>On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.
It does exactly what I want it to do, it does it slowly, so it's not what
I
want it to do.
No, it doesn't. It only gives the appearance of doing what you want.
There is nothing scalable going on.

Apparently you see it differently.

I already used these techniques to scale to infinity.

So you're simply wrong about that one.

Bye,
Skybuck.

With operator overloading, when the compiler sees an
expression such as "a * b", it considers it *exactly* the same as a
function call "multiply(a, b)". There is absolutely no difference to the
compiler, and you can use virtual methods, overloading, inlining, and
any other tricks to get the effect you want.

I presume you also know that compilers do not have to use virtual calls
just because a function is a virtual method of a class? If the compiler
knows what class an object is, then it can short-circuit the virtual
method table and call the method directly. And if it knows the
definition of the method in question, it can automatically inline the
call - resulting in zero overhead.
Aug 30 '07 #33

P: n/a

"David Brown" <da***@westcontrol.removethisbit.com> wrote in message
news:46***********************@news.wineasy.se...
Skybuck Flying wrote:
>There is definetly a speed difference especially for mul and div for the
modes I described.

Why do I have to choose the data type ?

Why can't the program choose the data type at runtime ?

If *you* are writing the program, *you* should know what sort of data is
stored in each variable. *You* can then tell the compiler by choosing an
appropriate data type. Is that so hard to grasp? It is up to *you* to
figure out that what limits there will be on the size of the data you are
using, and therefore pick 32-bit or 64-bit (or whatever) integers for your
program. If you think there could be large variations in the sizes, then
either use a data type that will certainly be big enough, or pick one with
no arbitrary limit (there are multiple precision integer libraries
available for most languages), or use a dynamically typed language.
Well that clearly sucks.

The world is not completely 64-bit. The world is not static; it fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

Bye,
Skybuck.
Aug 30 '07 #34

P: n/a
Which is of course impossible.

The compiler does not know what the program wants at compile time.

Does it want 32 bit or 64 bit ?

Only the program knows at runtime !

Depends on the situation.

Bye,
Skybuck.
Aug 30 '07 #35

P: n/a
On Aug 30, 2:23 am, "Skybuck Flying" <s...@hotmail.com> wrote:
"MooseFET" <kensm...@rahul.net> wrote in message

news:11**********************@x40g2000prg.googlegroups.com...
On Aug 29, 8:16 am, "Skybuck Flying" <s...@hotmail.com> wrote:
"MooseFET" <kensm...@rahul.net> wrote in message
>news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operator overloading for classes, or record inheritance (records do
have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.
Without the mentioned features writing scalable software, including
writing
scalable math routines becomes impractical.
That is simply false. Why do you think it can't be done with virtual
methods?

Because virtual methods are not operator overloading.
Any operator overloading that allows the type of the variable to be
determined at run time most certainly is virtual methods. You need to
look into how it is done.

Operator overloading that is not virtual (i.e., where the variable type can't
be changed at run time) can be inlined. A smart compiler will do this
for small functions.
Writing code such as:

Div( Multiply( Add( A, B ), C ), D )

is impractical and of course slow, ^^^ call overhead.
There is nothing impractical about what you coded. People write code
like that all the time.

>

Even with virtual methods it would become slow.
Virtual methods when done the way that Borland Pascal, Delphi and a
good implementation of C++ add very little extra time to the total run
time. The virtual method dispatch code takes less instructions than
the entry code of most routines.

A good operator overloading implementation does not even have call overhead.
A non-virtual function can be inlined.
>
I think that's how Delphi's operator overloading for records work.

^^^ No overhead ^^^

(Not sure though, but I thought that's how it works)
Why don't you know? Go read up on it. You will find out that a lot
of what you are assuming is wrong.

>
On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.
It does exactly what I want it to do, it does it slowly, so it's not what
I
want it to do.
No, it don't. It only gives the appearance of doing what you want.
There is nothing scalable going on.

Apparently you see it differently.

I already used these techniques to scale to infinity.

So you're simply wrong about that one.
No you are the one who is wrong. You are suggesting that there is run
time type determination that is different from the virtual method
dispatch. This is simply false. Get your compiler to spit out the
assembly and look at it. You will see what it really does.

Aug 30 '07 #36

P: n/a
>If you want 32 bits, use "int".
>If you want 64 bits, use "long long".

If one specifically wants 32 bits, use "int32_t"/"uint32_t". If one
specifically wants 64 bits, use "int64_t"/"uint64_t".

Aug 30 '07 #37

P: n/a
On Aug 30, 2:31 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Absolutely nonsense.

If I want I can write a computer program that runs 32 bit when possible and
64 bit emulated when needed.

My computer program will outperform your "always 64 emulated" program WITH
EASE.

The only problem is that I have to write each code twice.
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.
>
A 32 bit version and a 64 bit version.

I simply instantiate the necessary object and run it.

Absolutely no big deal.

The only undesirable property of this solution is two code bases.

Your lack of programming language knowledge and experience is definitely
showing.
Right back at you.
>
Bye,
Skybuck.

Aug 30 '07 #38

P: n/a
Skybuck Flying wrote:
"David Brown" <da***@westcontrol.removethisbit.com> wrote in message
news:46***********************@news.wineasy.se...
>Skybuck Flying wrote:
>>There is definitely a speed difference, especially for mul and div, in the
modes I described.

Why do I have to choose the data type ?

Why can't the program choose the data type at runtime ?
If *you* are writing the program, *you* should know what sort of data is
stored in each variable. *You* can then tell the compiler by choosing an
appropriate data type. Is that so hard to grasp? It is up to *you* to
figure out that what limits there will be on the size of the data you are
using, and therefore pick 32-bit or 64-bit (or whatever) integers for your
program. If you think there could be large variations in the sizes, then
either use a data type that will certainly be big enough, or pick one with
no arbitrary limit (there are multiple precision integer libraries
available for most languages), or use a dynamically typed language.

Well that clearly sucks.

The world is not completely 64-bit. The world is not static; it fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.
So if your program needs 32 bits, use 32 bits. If it needs 64 bits, use
64 bits.

I work in the world of embedded systems - it can often make a huge
difference whether you pick 8 bits, 16 bits, or 32 bits for your data.
People sometimes prefer 24 bits or 40 bits - whatever makes sense for
the task in question. This all makes far more of a difference than a
choice of 32-bit or 64-bit integers (on a 32-bit or 64-bit processor),
yet programmers have no trouble dealing with it.

If you are trying to get performance, figure out how to use libraries
written by experts, rather than trying to roll your own code at this
level - you haven't a chance of getting optimal code until you first
understand what you want your program to do, and then understand the
issues that actually make a difference in real life programming rather
than some little test snippet of code.
Aug 30 '07 #39

P: n/a
Skybuck Flying wrote:
Which is ofcourse impossible.

The compiler does not know what the program wants at compile time.

Does it want 32 bit or 64 bit ?

Only the program knows at runtime !

Depends on the situation.

Bye,
Skybuck.

If you learn to use Usenet properly before trying to post this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.
Aug 30 '07 #40

P: n/a
MooseFET wrote:
>
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.
Incorrect. C and C++ certainly do not. You can #define or typedef
something that appears to be a type but they aren't distinct types.
You're just conditionally compiling which type you are using (which
accomplishes what you want). The distinction is an important one.
A typedef isn't separately resolvable from the type it aliases.

All that being said, we have produced versions of our product for
a wide variety of machines in C++ and C and to this day provide
win 32 and 64 versions. The difference in code is a handful of
conditional compiles and typedefs. We spend more time dealing
with interfaces to other people's products (ESPECIALLY FREAKING
MICROSOFT) who haven't bothered to provide 64-bit versions of
all their interfaces.

>
Aug 30 '07 #41

P: n/a
Ron Natalie wrote:
MooseFET wrote:
>>
This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.

Incorrect. C and C++ certainly do not. You can #define or typedef
something that appears to be a type but they aren't distinct types.
You are correct that C and C++ do not distinguish between two types that
are typedef'ed the same (and obviously not if they are #define'd). In
other words, if typeA and typeB are both typedef'ed to "int", then it is
perfectly legal to assign data of typeA to a variable of typeB.

However, it is possible to use C++ classes to get much of the effect of
this. You would first have to make a base class type such as
"baseInt32" with a single 32-bit integer data member, and a full range
of operators and constructors to allow it to act like a normal integer -
if these are all inlined simple functions, then code should be optimised
properly (although you might not get as good constant folding as with a
normal int). If you then have two types that directly inherit from
"baseInt32", you'll have two types that are identical in
function to each other, pretty close to identical in function to the
original int, and which are mutually incompatible for things like
assignment. It's far from perfect, and more than a little messy, but it
could give you much the same effect as proper strongly-typed subtyping.
You're just conditionally compiling which type you are using (which
accomplishes what you want). The distinction is an important one.
A typedef isn't seperately resolvable from the type it aliases.

All that being said, we have produced versions of our product for
a wide variety of machines in C++ and C and to this day provide
win 32 and 64 versions. The difference in code is a handful of
conditional compiles and typedefs. We spend more time dealing
with interfaces to other people's products (ESPECIALLY FREAKING
MICROSOFT) who haven't bothered to provide 64-bit versions of
all their interfaces.

>>
Aug 30 '07 #42

P: n/a
LR
Skybuck Flying wrote:
Which is ofcourse impossible.
I think you snipped a little too much context. Just MHO, but leaving the
context, or at least a little of it, would be much better. TIA.
The compiler does not know what the program wants at compile time.
I think it would be safe to say that I don't understand the above.
Besides, it's not really the compiler's job to know what the "program"
wants. The "program" doesn't really want anything. It's what the
programmer wants. Or what whomever is paying the programmer wants. Isn't
it? Or am I mistaken about that?

Does it want 32 bit or 64 bit ?
Only the programmer knows for sure.

Only the program knows at runtime !
Please tell us how the program "knows" this at run time or any other
time for that matter.
Depends on the situation.
How?
I suspect that what I think you want, if I understand correctly, is
possible, but the overhead at run time makes it uneconomical.
Just consider the situation where a user will input an array of numbers.
You, the programmer, will not know in advance what the magnitude of
the numbers will be. Nor will the program "know" in advance. But
suppose you or the program could know (for some meaning of the word
know) that.

How would you store the array?

All elements the same size or each element a different size depending on
the magnitude of the number stored in that element?

Of course, if you want, you could make your array some sort of
polymorphic container. Sounds expensive.

Of course, there are some implementations of code for numbers that can
have a huge amount of precision and/or magnitude where the amount of
data used for each number can vary quite a bit, but these are for fairly
specialized cases and usually not very efficient in using time or space.

The general moral here is, if your data set requires information in
units of light years don't input your data in angstroms.

OTOH, I once heard someone suggest that a particular implementation of
an interpreted language I once used was implemented this way: numbers
that could fit into a 16-bit integer were stored as 16-bit integers, and
everything else as a 64-bit real type.

Maybe, if you're really interested, you might think about how to do that
simple case.

LR

Aug 30 '07 #43

P: n/a
Frederick Williams <"Frederick Williams"@antispamhotmail.co.uk.invalid>
wrote:
>Skybuck Flying wrote:
>>
Hello,

This morning I had an idea ...

I hope that this doesn't sound impolite, but why are you posting to
sci.electronics.design and alt.math?
Because he is a complete loon; and don't worry about being impolite, his
previous widely cross-posted rantings have already earned him many impolite
suggestions. Kill-file him and ignore this thread - job done.
--
Aug 30 '07 #44

P: n/a
On Aug 29, 10:09 am, "Stephen Sprunk" <step...@sprunk.org> wrote:
"Skybuck Flying" <s...@hotmail.com> wrote in message

news:fb**********@news2.zwoll1.ov.home.nl...


"MooseFET" <kensm...@rahul.net> wrote in message
news:11**********************@q5g2000prf.googlegroups.com...
On Aug 29, 7:02 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Hello,
>This morning I had an idea how to write Scalable Software in general.
Unfortunately with Delphi 2007 it can't be done because it does not
support
operating overloading for classes, or record inheritance (records do
have
operator overloading)
This argument is wrong in two ways. It assumes things that are not
true and then draws conclusions that don't follow.
Delphi implements objects, and virtual methods. Any language that has
these features is able to operate on values where the type is not
known at compile time.
Without the mentioned features writing scalable software, including
writing scalable math routines becomes impractical.
Even with virtual methods it would become slow.

Indeed.
On the other hand neither this nor what you included below will do
what you started off trying to suggest. They are just methods by
which different instructions can be used at a point in the logic flow
depending on the sort of variable under consideration.
It does exactly what I want it to do, it does it slowly, so it's not what
I want it to do.

That's why I told you that checking for whether emulation is needed and
picking a code path will be slower than just using it all the time. The
cost of emulation, unless you're writing incredibly math-intensive code, is
trivial. If you care about raw math performance and the code is just too
slow to use, buying a faster CPU will be cheaper than trying to figure out
how to make slow machines perform better. It's certainly cheaper than
modifying the ISA.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

--
Posted via a free Usenet account from http://www.teranews.com
Wow, wonderful discussion. Learned so much.

Ken
Opportunities are never lost. The other fellow takes those you miss.
| Torrey Hills Technologies, LLC |
| www.threerollmill.com |
| www.torreyhillstech.com |

Aug 30 '07 #45

P: n/a
On Aug 29, 11:13 am, "Skybuck Flying" <s...@hotmail.com> wrote:
Well I just had an idea which might be interesting after all:

64 bit emulated mul and div are probably slow.

So if it's possible to switch to 32 bit maybe some speed gains can be
achieved !

So for addition and subtraction the 64 bit emulated versions are always
called.

But for multiplication and division the 32 bit version might be called when
possible and the 64 bit emulated version when absolutely necessary.

I shall inspect what Delphi does for 64 bit (<-emulated) multiplication and
division ;)

Bye,
Skybuck.


What are you multiplying and dividing by?

If you're multiplying by or dividing by powers of 2, bit shifts are
much faster than multiplications or divisions.

Aug 30 '07 #46

P: n/a

"David Brown" <da***@westcontrol.removethisbit.com> wrote in message
news:46**********************@news.wineasy.se...
Skybuck Flying wrote:
>"David Brown" <da***@westcontrol.removethisbit.com> wrote in message
news:46***********************@news.wineasy.se...
>>Skybuck Flying wrote:
There is definitely a speed difference especially for mul and div for
the modes I described.

Why do I have to choose the data type ?

Why can't the program choose the data type at runtime ?

If *you* are writing the program, *you* should know what sort of data is
stored in each variable. *You* can then tell the compiler by choosing
an appropriate data type. Is that so hard to grasp? It is up to *you*
to figure out what limits there will be on the size of the data you
are using, and therefore pick 32-bit or 64-bit (or whatever) integers
for your program. If you think there could be large variations in the
sizes, then either use a data type that will certainly be big enough, or
pick one with no arbitrary limit (there are multiple precision integer
libraries available for most languages), or use a dynamically typed
language.

Well that clearly sucks.

The world is not completely 64 bit. The world is not static; it
fluctuates.

Sometimes the program only needs 32 bits, sometimes 64 bits.

Always choosing 64 bits would hurt performance LOL.

So if your program needs 32 bits, use 32 bits. If it needs 64 bits, use
64 bits.
Yes, a very simple statement.

Achieving this in a scalable way is what this thread is all about.

Rewriting code, duplicating code, or even using multiple libraries is
not really what this is about.

It's nearly impossible to achieve without hurting performance. The only
solutions might be C++ templates or generics, and I'm not even sure how
easy it would be to switch between two generated classes at runtime.

Bye,
Skybuck.

Aug 30 '07 #47

P: n/a

"David Brown" <da***@westcontrol.removethisbit.com> wrote in message
news:46**********************@news.wineasy.se...
Skybuck Flying wrote:
>Which is of course impossible.

The compiler does not know what the program wants at compile time.

Does it want 32 bit or 64 bit ?

Only the program knows at runtime !

Depends on the situation.

Bye,
Skybuck.

If you learned to use Usenet properly before posting this stuff, it
would be a lot easier to get you back on the path of sane software
development. It's not worth spending time answering you if you can't
write questions or comments that make sense.
What I wrote above is pretty clear to me, even my mother could understand
that ! ;)

Bye,
Skybuck.
Aug 30 '07 #48

P: n/a
What I wrote is really simple.

if FileSize < 2^32 bits then 32 bit case
if FileSize >= 2^32 bits then 64 bit case.

Of course the compiler doesn't know at compile time, because the files are
opened at runtime.

Not even the programmer knows what the size of the file will be.

Bye,
Skybuck.
Aug 30 '07 #49

P: n/a

"MooseFET" <ke******@rahul.net> wrote in message
news:11*********************@m37g2000prh.googlegroups.com...
On Aug 30, 2:31 am, "Skybuck Flying" <s...@hotmail.com> wrote:
>Absolutely nonsense.

If I want I can write a computer program that runs 32 bit when possible
and
64 bit emulated when needed.

My computer program will outperform your "always 64 emulated" program
WITH
EASE.

The only problem is that I have to write each code twice.

This statement is incorrect. C, C++, Borland Pascal and its
descendants, and just about every other language I can think of allow
you to declare a new type to be the same as a simple type, allow
conditional compiles, and allow include files. You don't need to have
two copies of the source code.

{$IFDEF bit32}
blablabla
{$ELSE}
  {$IFDEF bit64}
blablabla
  {$ENDIF}
{$ENDIF}

^^ Still have to write two versions BLEH !

I wouldn't call that "Scalable Software" :)

It doesn't even scale properly at runtime.

Only one can be chosen at compile time.

Bye,
Skybuck.
Aug 30 '07 #50

89 Replies

This discussion thread is closed

Replies have been disabled for this discussion.