Making a std::string a member of a union ???

Is there any way of doing this besides making my own string from scratch?

union AnyType {
std::string String;
double Number;
};
Jan 9 '07
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:nN******************************@bt.com...
>>
* What do you mean by "a single set of memory locations"? Is that from the
perspective of your interpreted language?

It will be implemented as a std::vector<AnyType> Any;
Doesn't really answer the question I asked. I was asking about /what it
is/ you're implementing, not /how you're implementing/ it.
>* When you say, "A language lacking the capability of C++ must interface with
this data", what do you mean by "interface"?

Read and write based on subscript.
Which doesn't tell me whether that's /in the language/ you're
implementing, or /in the implementation/ of your language, or whatever.

But what I do gather is that you want an array-like container of things
(the std::vector<AnyType>), where those things are all "elemental" types
of data, but are various, different "elemental" types of data. It still
sounds very much like runtime polymorphism.
>If you're trying to do what I /think/ you're trying to do, then I think you're
probably failing to properly separate your interpreted language from your
implementation of its interpreter. But as you're not at all

I have no choice in this, the interpreted language is provided by a third party.
I am hooking this third party interpreted language into my system and then
exposing another different interpreted language implemented in terms of the
third party language.
Oh, /two/ interpreted languages.

Well, you still haven't really said why runtime polymorphism wouldn't
work. And, as you're still not being at all clear on this stuff, on
what you're actually trying to do, I'm giving up.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 11 '07 #51
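
For reference, the runtime-polymorphism approach being suggested here would look roughly like the sketch below (pre-C++11 style; the class and member names are illustrative, not taken from the thread):

#include <iostream>
#include <string>
#include <vector>

// Abstract base class for the "elemental" values held in the container.
class Value {
public:
    virtual ~Value() {}
    virtual void print(std::ostream& os) const = 0;
};

class NumberValue : public Value {
public:
    explicit NumberValue(double v) : value(v) {}
    void print(std::ostream& os) const { os << value; }
private:
    double value;
};

class StringValue : public Value {
public:
    explicit StringValue(const std::string& v) : value(v) {}
    void print(std::ostream& os) const { os << value; }
private:
    std::string value;
};

int main() {
    // An array-like container of heterogeneous values, read and written by subscript.
    std::vector<Value*> data;
    data.push_back(new NumberValue(3.14));
    data.push_back(new StringValue("hello"));

    for (std::vector<Value*>::size_type i = 0; i < data.size(); ++i) {
        data[i]->print(std::cout);
        std::cout << '\n';
    }

    for (std::vector<Value*>::size_type i = 0; i < data.size(); ++i)
        delete data[i];
    return 0;
}

The objection raised later in the thread is that plain C code cannot call these virtual functions, which is why the discussion keeps circling back to unions.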

"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:nN******************************@bt.com...
>>>
* What do you mean by "a single set of memory locations"? Is that from the
perspective of your interpreted language?

It will be implemented as a std::vector<AnyType> Any;

Doesn't really answer the question I asked. I was asking about /what it is/
you're implementing, not /how you're implementing/ it.
>>* When you say, "A language lacking the capability of C++ must interface
with this data", what do you mean by "interface"?

Read and write based on subscript.

Which doesn't tell me whether that's /in the language/ you're implementing, or
/in the implementation/ of your language, or whatever.

But what I do gather is that you want an array-like container of things (the
std::vector<AnyType>), where those things are all "elemental" types of data,
but are various, different "elemental" types of data. It still sounds very
much like runtime polymorphism.
>>If you're trying to do what I /think/ you're trying to do, then I think
you're probably failing to properly separate your interpreted language from
your implementation of its interpreter. But as you're not at all

I have no choice in this, the interpreted language is provided by a third
party. I am hooking this third party interpreted language into my system and
then exposing another different interpreted language implemented in terms of
the third party language.

Oh, /two/ interpreted languages.

Well, you still haven't really said why runtime polymorphism wouldn't work.
And, as you're still not being at all clear on this stuff, on what you're
actually trying to do, I'm giving up.
I did say why run-time polymorphism wouldn't work and you did not pay attention
to this answer. Maybe I should have been more specific. Runtime polymorphism
will not work because a language that is incapable of accessing polymorphic
functions must have direct access to the underlying data. The language can not
even call polymorphic functions. For all practical purposes this language is
"C". What can "C" do with polymorphism?
>
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 11 '07 #52

"Jim Langston" <ta*******@rocketmail.comwrote in message
news:Qy***********@newsfe03.lga...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:5K******************@newsfe19.lga...
>>
"Simon G Best" <si**********@btinternet.comwrote in message
news:We******************************@bt.com...
>>Peter Olcott wrote:

Its like I am making my own VARIANT record. I need some features that
VARIANT does not have.

Is that "VARIANT record" in the Pascal sense? If it is, then I really do
think you probably want runtime polymorphism, as that is the C++ way of
doing that kind of thing.

One reason that I can't use run-time polymorphism is that the interpreted
language that I am constructing can not directly interface with polymorphic
class members. It will only have the capabilities of "C" and not C++.
>>>
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Okay, I think the easiest way would be to have a class that can store any of
the possible types of data. You'll need to overload operator= and operator
type for each type, then you'll want to store in this class what type is
actually being used. You'll have problems, however, when the type is
arbitrary.

class AnyType
{
public:
operator std::string() { return StringVal; }
operator int() { return IntVal; }
// etc...
};
I need more details to see how this would work. What does the private data look
like?
>
int main()
{
AnyType Foo;
// yada yada

std::cout << Foo << "\n";
// ooops, what type is it supposed to output? std::string? int? float?
double? char? etc..
}

Assignments and constructors would be easier, because there will be a parm
that says what type it is.

AnyType Foo;
Foo = 12;
12 is an integer, and so would use operator=( const int ); no ambiguity there.

AnyType Foo( 12.5 );
12.5 is a double, and so would use the constructor accepting double, so no
ambiguity there.

You will have problems when the type isn't known because no parameter is given. What
would be acceptable in its use?

std::cout << Foo.val(INT) << "\n";
is something like that acceptable?

Jan 11 '07 #53
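
A minimal sketch of the kind of tagged class Jim describes, with the private data Peter asks about filled in; the member and enum names are illustrative, and the string is simply held alongside the numeric members rather than inside a union:

#include <iostream>
#include <string>

class AnyType {
public:
    enum Type { NONE, INT, DOUBLE, STRING };

    AnyType() : WhichType(NONE), IntVal(0), DoubleVal(0.0) {}
    AnyType(int v) : WhichType(INT), IntVal(v), DoubleVal(0.0) {}
    AnyType(double v) : WhichType(DOUBLE), IntVal(0), DoubleVal(v) {}
    AnyType(const std::string& v)
        : WhichType(STRING), IntVal(0), DoubleVal(0.0), StringVal(v) {}

    AnyType& operator=(int v) { WhichType = INT; IntVal = v; return *this; }
    AnyType& operator=(double v) { WhichType = DOUBLE; DoubleVal = v; return *this; }
    AnyType& operator=(const std::string& v) { WhichType = STRING; StringVal = v; return *this; }

    operator int() const { return IntVal; }
    operator double() const { return DoubleVal; }
    operator std::string() const { return StringVal; }

    Type type() const { return WhichType; }  // which member is currently meaningful

private:
    Type        WhichType;
    int         IntVal;
    double      DoubleVal;
    std::string StringVal;   // held alongside the others, not inside a union
};

int main() {
    AnyType Foo;
    Foo = 12;                                        // picks operator=(int)
    std::cout << static_cast<int>(Foo) << "\n";

    Foo = std::string("hello");                      // picks operator=(const std::string&)
    std::cout << static_cast<std::string>(Foo) << "\n";
    return 0;
}

As Jim notes, without templates or polymorphism each supported type costs one constructor, one operator=, and one conversion operator, and arithmetic operators multiply that further.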
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com...

I did say why run-time polymorphism wouldn't work and you did not pay attention
to this answer. Maybe I should have been more specific. Runtime polymorphism
will not work because a language that is incapable of accessing polymorphic
functions must have direct access to the underlying data. The language can not
even call polymorphic functions. For all practical purposes this language is
"C". What can "C" do with polymorphism?
This just seems wrong and confused. It's certainly very unclear.
You're not clear on /which/ of the two interpreted languages needs to
directly access the data. You haven't said /why/ stuff in the
interpreted language would need such direct access. You haven't been at
all clear on what the actual, specific problem is that the union is
supposed to solve. It's because of this persistent lack of clarity that
I'm giving up.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 11 '07 #54

"Simon G Best" <si**********@btinternet.comwrote in message
news:L6*********************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com...

I did say why run-time polymorphism wouldn't work and you did not pay
attention to this answer. Maybe I should have been more specific. Runtime
polymorphism will not work because a language that is incapable of accessing
polymorphic functions must have direct access to the underlying data. The
language can not even call polymorphic functions. For all practical purposes
this language is "C". What can "C" do with polymorphism?

This just seems wrong and confused. It's certainly very unclear.
How can providing a "C" interface to C++ data possibly be either wrong or
confused?
You're not clear on /which/ of the two interpreted languages needs to directly
access the data. You haven't said /why/ stuff in the
I did not say that there are two interpreted languages. There are two different
abstractions of the same interpreted language. For all practical purposes these
details can be abstracted out of the problem. For all practical purposes the
problem is simply providing "C" access to C++ data.

It can often be quite annoying when people insist on having me provide all of
the irrelevant details before they are willing to answer the question, and they
then still refuse to answer the question because they have become confused by
all the irrelevant details.

I wish that people would stop trying to second guess my questions, and just
answer them.
interpreted language would need such direct access. You haven't been at all
clear on what the actual, specific problem is that the union is supposed to
solve. It's because of this persistent lack of clarity that I'm giving up.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 11 '07 #55
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:VO***********@newsfe07.phx...
>
"Jim Langston" <ta*******@rocketmail.comwrote in message
news:Qy***********@newsfe03.lga...
>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:5K******************@newsfe19.lga...
>>>
"Simon G Best" <si**********@btinternet.comwrote in message
news:We******************************@bt.com.. .
Peter Olcott wrote:

Its like I am making my own VARIANT record. I need some features that
VARIANT does not have.

Is that "VARIANT record" in the Pascal sense? If it is, then I really
do think you probably want runtime polymorphism, as that is the C++ way
of doing that kind of thing.

One reason that I can't use run-time polymorphism is that the
interpreted language that I am constructing can not directly interface
with polymorphic class members. It will only have the capabilities of
"C" and not C++.
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Okay, I think the easiest way would be to have a class that can store any
of the possible types of data. You'll need to overload operator= and
operator type for each type, then you'll want to store in this class what
type is actually being used. You'll have problems, however, when the
type is arbitrary.

class AnyType
{
public:
operator std::string() { return StringVal; }
operator int() { return IntVal; }
// etc...
};

I need more details to see how this would work. What does the private data
look like?
I was doing a sample, and this class is anything but trivial. With
templates it would become a lot simpler. With polymorphism, it would
become a lot simpler. Without either tool it's a lot of coding. A
separate constructor for each type. A separate operator= for each type. A
separate operator type for each type. At least one operator+ for each type,
probably more (AnyType int + AnyType int. AnyType int + AnyType short.
AnyType int + int, etc...)

I think if you want a variant class you should find the source code for one
(gcc provides source) and modify it to do the things you need it to do,
otherwise you'll spend a long time reinventing the wheel.
>int main();
{
AnyType Foo;
// yada yada

std::cout << AnyType << "\n";
// ooops, what type is it supposed to output? std::string? int?
float? double? char? etc..
}

Assignments and constructors would be easier, because there will be a
parm that says what type it is.

AnyType Foo;
Foo = 12;
12 is an integer, and so would use operator=( const int ); no ambiguity
there.

AnyType Foo( 12.5 );
12.5 is a double, and so would use the constructor accepting double, so
no ambiguity there.

You will have problems when the type isn't known because of no parameter.
What is accepted in it's use?

std::cout << AnyType.val(INT) << "\n";
is something like that acceptable?

Jan 11 '07 #56
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:ob*********************@newsfe16.lga...
>
"Simon G Best" <si**********@btinternet.comwrote in message
news:L6*********************@bt.com...
>Peter Olcott wrote:
>>"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com.. .

I did say why run-time polymorphism wouldn't work and you did not pay
attention to this answer. Maybe I should have been more specific.
Runtime polymorphism will not work because a language that is incapable
of accessing polymorphic functions must have direct access to the
underlying data. The language can not even call polymorphic functions.
For all practical purposes this language is "C". What can "C" do with
polymorphism?

This just seems wrong and confused. It's certainly very unclear.

How can providing a "C" interface to C++ data possibly be either wrong or
confused?
>You're not clear on /which/ of the two interpreted languages needs to
directly access the data. You haven't said /why/ stuff in the

I did not say that there are two interpreted languages. There are two
different abstractions of the same interpreted language. For all practical
purposes these details can be abstracted out of the problem. For all
practical purposes the problem is simply providing "C" access to C++ data.

It can often be quite annoying when people insist on having me provide all
of the irrelevant details before they are willing to answer the question,
and they then still refuse to answer the question because they have become
confused by all the irrelevant details.

I wish that people would stop trying to second guess my questions, and
just answer them.
Sorry, it doesn't work this way. If you've paid any attention to this group
for any length of time (or to any of the C++ groups I've seen so far), you'll
know that for any non-trivial algorithm question (which this actually is) the
use to which it's going to be put usually has to be known, because different
uses call for different algorithms.

A lot of the time the reason is simply that the person asking doesn't realize
there may be something already in the C++ language that will do it for them. An
example: someone asks about making pointers to nodes. Someone asks them what
for; they say it's so they can make a linked list. Well, have you tried
std::list? Oh, didn't know that existed (I've seen this actual exchange).

Another good reason is that the person asking for a clarification can
think of more than one way to solve the problem, but they don't know which
solution would fit the answer, so they ask for clarification.

Your original question is a good case in point: you were asking if a
std::string can be a member of a union, and it turns out that, no, it can't
be, at least not the way you want it to. But people asked what you needed it
for, which brought on this branch of the thread.

If you don't want anyone to ask you any more questions, fine. Your answer
is no, you can't make a std::string part of a union. Thread over.
>interpreted language would need such direct access. You haven't been at
all clear on what the actual, specific problem is that the union is
supposed to solve. It's because of this persistent lack of clarity that
I'm giving up.

Jan 11 '07 #57

"Jim Langston" <ta*******@rocketmail.comwrote in message
news:df*************@newsfe05.lga...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:VO***********@newsfe07.phx...
>>
"Jim Langston" <ta*******@rocketmail.comwrote in message
news:Qy***********@newsfe03.lga...
>>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:5K******************@newsfe19.lga...

"Simon G Best" <si**********@btinternet.comwrote in message
news:We******************************@bt.com. ..
Peter Olcott wrote:

>Its like I am making my own VARIANT record. I need some features that
>VARIANT does not have.
>
Is that "VARIANT record" in the Pascal sense? If it is, then I really do
think you probably want runtime polymorphism, as that is the C++ way of
doing that kind of thing.

One reason that I can't use run-time polymorphism is that the interpreted
language that I am constructing can not directly interface with polymorphic
class members. It will only have the capabilities of "C" and not C++.

>
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Okay, I think the easiest way would be to have a class that can store any of
the possible types of data. You'll need to overload operator= and operator
type for each type, then you'll want to store in this class what type is
actually being used. You'll have problems, however, when the type is
arbitrary.

class AnyType
{
public:
operator std::string() { return StringVal; }
operator int() { return IntVal; }
// etc...
};

I need more details to see how this would work. What does the private data
look like?

I was doing a sample, and this class is anything but trivial. With
templates it would become a lot simpler. With polymorphism, it would
become a lot simpler. Without either tool it's a lot of coding. A
separate constructor for each type. A separate operator= for each type. A
separate operator type for each type. At least one operator+ for each type,
probably more (AnyType int + AnyType int. AnyType int + AnyType short.
AnyType int + int, etc...)
I could already infer those details. The detail that I am missing is how the
data itself is declared.

union AnyType {
std::string String;
int Integer;
};

Will not compile. The best that I could figure so far is this:

union AnyType {
std::string* String;
int Integer;
};
>
I think if you want a variant class you should find the source code for one
(gcc provides source) and modify it to do the things you need it to do,
otherwise you'll spend a long time reinventing the wheel.
>>int main();
{
AnyType Foo;
// yada yada

std::cout << AnyType << "\n";
// ooops, what type is it supposed to output? std::string? int? float?
double? char? etc..
}

Assignments and constructors would be easier, because there will be a parm
that says what type it is.

AnyType Foo;
Foo = 12;
12 is an integer, and so would use operator=( const int ); no ambiguity
there.

AnyType Foo( 12.5 );
12.5 is a double, and so would use the constructor accepting double, so no
ambiguity there.

You will have problems when the type isn't known because of no parameter.
What is accepted in it's use?

std::cout << AnyType.val(INT) << "\n";
is something like that acceptable?


Jan 11 '07 #58
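
If the std::string*-in-a-union route is taken, something has to own the pointed-to string. One possible discipline, sketched with invented names (the union is renamed AnyValue here so a type tag can travel beside it):

#include <iostream>
#include <string>

// The union holds only members that are legal in a C++03 union; the string
// is owned through a pointer, and a tag records which member is live.
union AnyValue {
    std::string* String;   // allocated with new, released by Destroy()
    int          Integer;
};

enum AnyTag { TAG_INT, TAG_STRING };

struct AnyType {
    AnyTag   Tag;
    AnyValue Value;
};

AnyType MakeInt(int n) {
    AnyType a;
    a.Tag = TAG_INT;
    a.Value.Integer = n;
    return a;
}

AnyType MakeString(const std::string& s) {
    AnyType a;
    a.Tag = TAG_STRING;
    a.Value.String = new std::string(s);
    return a;
}

void Destroy(AnyType& a) {
    if (a.Tag == TAG_STRING) {
        delete a.Value.String;
        a.Value.String = 0;
    }
}

int main() {
    AnyType a = MakeString("hello");
    AnyType b = MakeInt(42);

    if (a.Tag == TAG_STRING)
        std::cout << *a.Value.String << "\n";
    std::cout << b.Value.Integer << "\n";

    Destroy(a);
    Destroy(b);
    return 0;
}

The price of the pointer workaround is exactly this manual new/delete bookkeeping, which std::string normally hides.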

"Jim Langston" <ta*******@rocketmail.comwrote in message
news:Pp*************@newsfe05.lga...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:ob*********************@newsfe16.lga...
>>
"Simon G Best" <si**********@btinternet.comwrote in message
news:L6*********************@bt.com...
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com. ..

I did say why run-time polymorphism wouldn't work and you did not pay
attention to this answer. Maybe I should have been more specific. Runtime
polymorphism will not work because a language that is incapable of
accessing polymorphic functions must have direct access to the underlying
data. The language can not even call polymorphic functions. For all
practical purposes this language is "C". What can "C" do with polymorphism?

This just seems wrong and confused. It's certainly very unclear.

How can providing a "C" interface to C++ data possibly be either wrong or
confused?
>>You're not clear on /which/ of the two interpreted languages needs to
directly access the data. You haven't said /why/ stuff in the

I did not say that there are two interpreted languages. There are two
different abstractions of the same interpreted language. For all practical
purposes these details can be abstracted out of the problem. For all
practical purposes the problem is simply providing "C" access to C++ data.

It can often be quite annoying when people insist on having me provide all of
the irrelevant details before they are willing to answer the question, and
they then still refuse to answer the question because they have become
confused by all the irrelevant details.

I wish that people would stop trying to second guess my questions, and just
answer them.

Sorry, it doesn't work this way. If you've paid any attention to this group
for any length of time (or all the C++ groups I've seen so far) for any non
trivial algorithm question (which this actually is) the use for which it's
going to be put usually has to be known, because different uses make different
algorithms.

A lot of the time the reason is simply because the person is asking, they
don't understand that maybe there's something already in the C++ language that
will do it for them. An example being, someone asks about making pointers to
nodes. Someone asks them for what, they say so they can make a linked list.
Well, have you tried std::list? Oh, didn't know that existed (I've seen this
actual exchange).

Another good reason is that the person asking for a clarification can think
of more than one way to solve the problem, but they don't know which solution
would fit the answer, so they ask for clarification.

Your original question is a good case in point, you were asking if a
std::string can be a member of a union, and it turns out that, no, it can't be
the way you want it to. But people asked what you needed it for which brought
on this branch of the thread.

If you don't want anyone to ask you any more questions, fine. Your answer is
no, you can't make a std::string part of a union. Thread over.
So the next best thing is a union including std::string*
>
>>interpreted language would need such direct access. You haven't been at all
clear on what the actual, specific problem is that the union is supposed to
solve. It's because of this persistent lack of clarity that I'm giving up.


Jan 11 '07 #59
Peter Olcott wrote:
"Jim Langston" <ta*******@rocketmail.com> wrote in message
[snip] Your answer is
>no, you can't make a std::string part of a union. Thread over.

So the next best thing is a union including std::string*
Or a struct with a union of unionable types:
struct MyAny
{
std::string str;
union
{
long int num;
float big;
double huge;
...
} tohubohu;
};

This costs only a string object and doesn't require dynamic allocation,
overloading of the copy operator, etc.

If you can afford it.
Jan 11 '07 #60
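
Filled out with a discriminating tag, that suggestion might look like the sketch below (assuming the elided "..." covers only POD types; the names are illustrative):

#include <iostream>
#include <string>

struct MyAny {
    enum Kind { IS_STRING, IS_LONG, IS_FLOAT, IS_DOUBLE };

    Kind kind;             // records which member is currently in use
    std::string str;       // lives outside the union, so it is always valid
    union {
        long int num;
        float    big;
        double   huge;
    } tohubohu;
};

int main() {
    MyAny a;
    a.kind = MyAny::IS_DOUBLE;
    a.tohubohu.huge = 2.5;

    MyAny b;
    b.kind = MyAny::IS_STRING;
    b.str = "hello";       // no dynamic allocation or hand-written copying needed

    std::cout << a.tohubohu.huge << " " << b.str << "\n";
    return 0;
}

As the post says, the cost is one std::string object per value even when a number is stored, but copying and destruction then just work.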
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:WB*******************@newsfe17.lga...
>
"Jim Langston" <ta*******@rocketmail.comwrote in message
news:df*************@newsfe05.lga...
>"Peter Olcott" <No****@SeeScreen.comwrote in message
news:VO***********@newsfe07.phx...
>>>
"Jim Langston" <ta*******@rocketmail.comwrote in message
news:Qy***********@newsfe03.lga...
"Peter Olcott" <No****@SeeScreen.comwrote in message
news:5K******************@newsfe19.lga...
>
"Simon G Best" <si**********@btinternet.comwrote in message
news:We******************************@bt.com.. .
>Peter Olcott wrote:
>
>>Its like I am making my own VARIANT record. I need some features
>>that VARIANT does not have.
>>
>Is that "VARIANT record" in the Pascal sense? If it is, then I
>really do think you probably want runtime polymorphism, as that is
>the C++ way of doing that kind of thing.
>
One reason that I can't use run-time polymorphism is that the
interpreted language that I am constructing can not directly interface
with polymorphic class members. It will only have the capabilities of
"C" and not C++.
>
>>
>--
>Simon G Best
>What happens if I mention Leader Kibo in my .signature?

Okay, I think the easiest way would be to have a class that can store
any of the possible types of data. You'll need to overload operator=
and operator type for each type, then you'll want to store in this
class what type is actually being used. You'll have problems, however,
when the type is arbitary.

class AnyType
{
public:
operator std::string() { return StringVal; }
operator int() { return IntVal; }
// etc...
};

I need more details to see how this would work. What does the private
data look like?

I was doing a sample, and this class is anything but trivial. With
templates it would become a lot simpler. With polymorphism, it
would become a lot simpler. Without either tool it's a lot of
coding. A separate constructor for each type. A separate operator= for
each type. A separate operator type for each type. At least one
operator+ for each type, probably more (AnyType int + AnyType int.
AnyType int + AnyType short. AnyType int + int, etc...)

I could already infer those details. The detail that I am missing is how
the data itself is declared.

union AnyType {
std::string String;
int Integer;
};

Will not compile. The best that I could figure so far is this:

union AnyType {
std::string* String;
int Integer;
};
I would go with:
class AnyType
{
std::string String;
union {
int Integer;
double Double;
float Float;
// etc...
}; // anonymous union
}; // class AnyType

In VC 2003 I get this error if I try to put the std::string inside the union:
error C2621: member 'AnyType::String' of union 'AnyType' has copy
constructor

which makes sense, because a union is *only* for POD types. Non-POD types cannot
exist inside a union (according to MS, anyway).
>I think if you want a variant class you should find the source code for
one (gcc provides source) and modify it to do the things you need it to
do, otherwise you'll spend a long time reinventing the wheel.
>>>int main();
{
AnyType Foo;
// yada yada

std::cout << AnyType << "\n";
// ooops, what type is it supposed to output? std::string? int?
float? double? char? etc..
}

Assignments and constructors would be easier, because there will be a
parm that says what type it is.

AnyType Foo;
Foo = 12;
12 is an integer, and so would use operator=( const int ); no ambiguity
there.

AnyType Foo( 12.5 );
12.5 is a double, and so would use the constructor accepting double, so
no ambiguity there.

You will have problems when the type isn't known because of no
parameter. What is accepted in it's use?

std::cout << AnyType.val(INT) << "\n";
is something like that acceptable?



Jan 11 '07 #61
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:L6*********************@bt.com...
>Peter Olcott wrote:
>>"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com.. .

I did say why run-time polymorphism wouldn't work and you did not pay
attention to this answer. Maybe I should have been more specific. Runtime
polymorphism will not work because a language that is incapable of accessing
polymorphic functions must have direct access to the underlying data. The
language can not even call polymorphic functions. For all practical purposes
this language is "C". What can "C" do with polymorphism?
This just seems wrong and confused. It's certainly very unclear.

How can providing a "C" interface to C++ data possibly be either wrong or
confused?
It isn't. But your assertion that "a language that is incapable of
accessing polymorphic functions must have direct access to the
underlying data" certainly seems wrong and confused.
>You're not clear on /which/ of the two interpreted languages needs to directly
access the data. You haven't said /why/ stuff in the

I did not say that there are two interpreted languages.
In one post, you said, "I am creating my own computer language and I
need a simple way to store the various elemental data types." In
another, you said, "the interpreted language is provided by a third
party. I am hooking this third party interpreted language into my system
and then exposing another different interpreted language implemented in
terms of the third party language." Clearly, you /have/ said that there
are two, "different", interpreted languages.
There are two different
abstractions of the same interpreted language.
See what I mean about your lack of clarity? Sometimes it's an
interpreted language you're creating yourself. Sometimes there are two,
different, interpreted languages, one of which is from a third party.
Sometimes they're not different languages after all, and are actually
the same language. Since you don't seem to actually know yourself what
you're doing, it's hardly surprising that I don't, either!
For all practical purposes these
details can be abstracted out of the problem. For all practical purposes the
problem is simply providing "C" access to C++ data.
Oh! Well, why didn't you say so?! *If* I understand you correctly on
this (which is a big 'if'), you basically want your C++ data to be
accessible from within C. Is that it?

If so, then you're doing it really wrongly. Instead of properly
encapsulating your data in C++, you're trying to expose it directly to
the C stuff. What you /should/ be doing is properly encapsulating it as
normal, and then providing a suitable extern "C" interface to wrap up
your C++ stuff. No need to muck about with unions.

Or, of course, you could just have it in C to begin with.
It can often be quite annoying when people insist on having me provide all of
the irrelevant details before they are willing to answer the question, and they
then still refuse to answer the question because they have become confused by
all the irrelevant details.
No one asked for irrelevant details. Well, I certainly didn't. What I
wanted to know was what *specific* problem the union was supposed to
solve. Just as with The Halting Problem, much of the confusion is of
your own making.
I wish that people would stop trying to second guess my questions, and just
answer them.
:-(

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #62
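
A minimal sketch of the extern "C" wrapping being described, assuming the data lives in a std::vector<AnyType> on the C++ side; the header and function names are invented for illustration:

// any_data.h -- the interface both C and C++ can include (hypothetical file)
#ifdef __cplusplus
extern "C" {
#endif

double      any_get_number(unsigned int index);
void        any_set_number(unsigned int index, double value);
const char* any_get_string(unsigned int index);   /* borrowed pointer; do not free */

#ifdef __cplusplus
}
#endif

// any_data.cpp -- the C++ implementation behind that interface
#include <string>
#include <vector>

namespace {
    struct AnyType {
        double      Number;
        std::string String;
        AnyType() : Number(0.0) {}
    };

    std::vector<AnyType> table(100);   // the data stays encapsulated in C++
}

extern "C" double any_get_number(unsigned int index) {
    return table[index].Number;
}

extern "C" void any_set_number(unsigned int index, double value) {
    table[index].Number = value;
}

extern "C" const char* any_get_string(unsigned int index) {
    return table[index].String.c_str();
}

C code includes the header and reads or writes by subscript through these calls; whether that indirection is measurable overhead is what the rest of the thread argues about.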

"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:L6*********************@bt.com...
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:be******************************@bt.com. ..

I did say why run-time polymorphism wouldn't work and you did not pay
attention to this answer. Maybe I should have been more specific. Runtime
polymorphism will not work because a language that is incapable of
accessing polymorphic functions must have direct access to the underlying
data. The language can not even call polymorphic functions. For all
practical purposes this language is "C". What can "C" do with polymorphism?
This just seems wrong and confused. It's certainly very unclear.

How can providing a "C" interface to C++ data possibly be either wrong or
confused?

It isn't. But your assertion that "a language that is incapable of accessing
polymorphic functions must have direct access to the underlying data"
certainly seems wrong and confused.
I want to minimize the unnecessary overhead so that the resulting interpreted
language is as close as possible to the speed of compiled native code.
Alternatives that do things the "right" way are twenty-five fold slower than an
optimally designed interpreter.
>
>>You're not clear on /which/ of the two interpreted languages needs to
directly access the data. You haven't said /why/ stuff in the

I did not say that there are two interpreted languages.

In one post, you said, "I am creating my own computer language and I need a
simple way to store the various elemental data types." In another, you said,
"the interpreted language is provided by a third party. I am hooking this
third party interpreted language into my system and then exposing another
different interpreted language implemented in terms of the third party
language." Clearly, you /have/ said that there are two, "different",
interpreted languages.
I am simultaneously exploring several different alternatives. I want the
resulting design to be optimal for both of these alternatives.

Sometimes it's an interpreted language you're creating yourself. Sometimes there are two,
different, interpreted languages, one of which is from a third party.
Sometimes they're not different languages after all, and are actually the same
language. Since you don't seem to actually know yourself what you're doing,
it's hardly surprising that I don't, either!
>For all practical purposes these details can be abstracted out of the
problem. For all practical purposes the problem is simply providing "C"
access to C++ data.

Oh! Well, why didn't you say so?! *If* I understand you correctly on this
(which is a big 'if'), you basically want your C++ data to be accessible from
within C. Is that it?

If so, then you're doing it really wrongly. Instead of properly encapsulating
your data in C++, you're trying to expose it directly to the C stuff. What
you /should/ be doing is properly encapsulating it as normal, and then
providing a suitable extern "C" interface to wrap up your C++ stuff. No need
to muck about with unions.
I don't want the overhead.
>
Or, of course, you could just have it in C to begin with.
I do want OOP and OOD and std::vector.
>
>It can often be quite annoying when people insist on having me provide all of
the irrelevant details before they are willing to answer the question, and
they then still refuse to answer the question because they have become
confused by all the irrelevant details.

No one asked for irrelevant details. Well, I certainly didn't. What I wanted
to know was what *specific* problem the union was supposed to solve.
I told you this from the very beginning. It has to be able to hold a set of
types including {double, int, std::string}. It is a form of VARIANT that can be
directly accessed from "C". These are GIVEN, and thus immutable requirements.
Sometimes exploring alternatives that I have not considered is helpful. This does
not seem to be one of these times.
Just as with The Halting Problem, much of the confusion is of your own making.
>I wish that people would stop trying to second guess my questions, and just
answer them.

:-(

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #63
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com...
....
I want to minimize the unnecessary overhead so that the resulting interpreted
language is as close as possible to the speed of compiled native code.
Alternatives that do things the "right" way are twenty-five fold slower than an
optimally designed interpreter.
Where did you get the "twenty-five fold slower" figure from?
>In one post, you said, "I am creating my own computer language and I need a
simple way to store the various elemental data types." In another, you said,
"the interpreted language is provided by a third party. I am hooking this
third party interpreted language into my system and then exposing another
different interpreted language implemented in terms of the third party
language." Clearly, you /have/ said that there are two, "different",
interpreted languages.

I am simultaneously exploring several different alternatives. I want the
resulting design to be optimal for both of these alternatives.
Well, if you're going to jump about from alternative to alternative
without being clear about it, it's hardly surprising that confusion results.
>If so, then you're doing it really wrongly. Instead of properly encapsulating
your data in C++, you're trying to expose it directly to the C stuff. What
you /should/ be doing is properly encapsulating it as normal, and then
providing a suitable extern "C" interface to wrap up your C++ stuff. No need
to muck about with unions.

I don't want the overhead.
Sounds like you might not understand that famous Knuth quote: "Premature
optimization is the root of all evil."

Anyway, if it's going to be accessible from within C, but itself is
going to be in C++, then there is no alternative but to provide an
interface with C linkage. There is no alternative. That means using
extern "C". (It may well be that your compilers (and whatever) happen
to use compatible linkage conventions for both C and C++, in which case
the extern "C" won't actually introduce any overheads. Otherwise, if
your compilers (etc) use incompatible linkage conventions for C and C++,
you *will* have to use extern "C" anyway. Either way, there's no good
reason not to use extern "C".)
I told you this from the very beginning. It has to be able to hold a set of
types including {double, int, std::string}. It is a form of VARIANT that can be
directly accessed from "C". These are GIVEN, and thus immutable requirements.
They are contradictory requirements. std::strings are *not* accessible
from within C (except indirectly, when you provide an appropriate,
extern "C" interface).

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #64

"Simon G Best" <si**********@btinternet.comwrote in message
news:W8******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com...
...
>I want to minimize the unnecessary overhead so that the resulting interpreted
language is as close as possible to the speed of compiled native code.
Alternatives that do things the "right" way are twenty-five fold slower than
an optimally designed interpreter.

Where did you get the "twenty-five fold slower" figure from?
http://www.softintegration.com/ -
The best C/C++ Interpreter in terms of overall quality and reliability. The
documentation is fabulous. It is 250-fold slower than native code on loops. My
own carefully designed virtual machine-code interpreter is 10-fold slower than
native machine code on the same loops. 250/10 = twenty-five fold slower.
>
>>In one post, you said, "I am creating my own computer language and I need a
simple way to store the various elemental data types." In another, you
said, "the interpreted language is provided by a third party. I am hooking
this third party interpreted language into my system and then exposing
another different interpreted language implemented in terms of the third
party language." Clearly, you /have/ said that there are two, "different",
interpreted languages.

I am simultaneously exploring several different alternatives. I want the
resulting design to be optimal for both of these alternatives.

Well, if you're going to jump about from alternative to alternative without
being clear about it, it's hardly surprising that confusion results.
It was all details that you didn't need to know anyway.
>
>>If so, then you're doing it really wrongly. Instead of properly
encapsulating your data in C++, you're trying to expose it directly to the C
stuff. What you /should/ be doing is properly encapsulating it as normal,
and then providing a suitable extern "C" interface to wrap up your C++
stuff. No need to muck about with unions.

I don't want the overhead.

Sounds like you might not understand that famous Knuth quote: "Premature
optimization is the root of all evil."
Although this is an error that I may sometimes make, my interpreter design does
beat all other alternatives by a wide margin.
>
Anyway, if it's going to be accessible from within C, but itself is going to
be in C++, then there is no alternative than to provide an interface with C
linkage. There is no alternative. That means using
Ah yes, but then that still ignores my direct access.
extern "C". (It may well be that your compilers (and whatever) happen to use
compatible linkage conventions for both C and C++, in which case the extern
"C" won't actually introduce any overheads. Otherwise, if your compilers
(etc) use incompatible linkage conventions for C and C++, you *will* have to
use extern "C" anyway. Either way, there's no good reason not to use extern
"C".)
I might do it this way if I have to, but, I don't think that I have to.
>
>I told you this from the very beginning. It has to be able to hold a set of
types including {double, int, std::string}. It is a form of VARIANT that can
be directly accessed from "C". These are GIVEN, and thus immutable
requirements.

They are contradictory requirements. std::strings are *not* accessible from
within C (except indirectly, when you provide an appropriate, extern "C"
interface).
It's not actually going to be a std::string anyway. I just used that as a
simplifying example. It is probably going to be a Unicode string. Maybe I can
translate my FastString to "C". I would lose a few things, but, it could be
stored in a union.
>
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #65
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:W8******************************@bt.com...
>Peter Olcott wrote:
>>"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com.. .
...
>>I want to minimize the unnecessary overhead so that the resulting interpreted
language is as close as possible to the speed of compiled native code.
Alternatives that do things the "right" way are twenty-five fold slower than
an optimally designed interpreter.
Where did you get the "twenty-five fold slower" figure from?

http://www.softintegration.com/ -
The best C/C++ Interpreter in terms of overall quality and reliability. The
documentation is fabulous. It is 250-fold slower than native code on loops. My
own carefully designed virtual machine-code interpreter is 10-fold slower than
native machine code on the same loops. 250/10 = twenty-five fold slower.
Even if those were relevant, appropriate figures (which I very much
doubt), on what basis do you justify dividing the 250 by 10? Your
interpreter is still going to be interpreting your C-like language, just
as the C/C++ interpreter does. The speed of your virtual machine code
interpreter seems irrelevant.
>Anyway, if it's going to be accessible from within C, but itself is going to
be in C++, then there is no alternative than to provide an interface with C
linkage. There is no alternative. That means using

Ah yes, but then that still ignores my direct access.
That /includes/ your direct access. You can't do your direct access
from within C unless the data you're directly accessing has C linkage.
>extern "C". (It may well be that your compilers (and whatever) happen to use
compatible linkage conventions for both C and C++, in which case the extern
"C" won't actually introduce any overheads. Otherwise, if your compilers
(etc) use incompatible linkage conventions for C and C++, you *will* have to
use extern "C" anyway. Either way, there's no good reason not to use extern
"C".)

I might do it this way if I have to, but, I don't think that I have to.
If it's going to be directly accessed from within C, it *must* have C
linkage.
Its not actually going to be a std::string anyway. I just used that as a
simplifying example. It is probably going to be a Unicode string. Maybe I can
translate my FastString to "C". I would lose a few things, but, it could be
stored in a union.
Well, whatever it is, if it's going to be directly accessed from within
C, it's going to have to have C linkage.

By the sounds of it, it would make sense for you to actually do the
union within C to begin with. But then it's not really a C++ question.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #66

"Simon G Best" <si**********@btinternet.comwrote in message
news:YN*********************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:W8******************************@bt.com...
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com. ..
...
I want to minimize the unnecessary overhead so that the resulting
interpreted language is as close as possible to the speed of compiled
native code. Alternatives that do things the "right" way are twenty-five
fold slower than an optimally designed interpreter.
Where did you get the "twenty-five fold slower" figure from?

http://www.softintegration.com/ -
The best C/C++ Interpreter in terms of overall quality and reliability. The
documentation is fabulous. It is 250-fold slower than native code on loops.
My own carefully designed virtual machine-code interpreter is 10-fold slower
than native machine code on the same loops. 250/10 = twenty-five fold
slower.

Even if those were relevant, appropriate figures (which I very much doubt), on
what basis do you justify dividing the 250 by 10? Your
I just showed you the basis, my interpreter benchmarks at TEN (that's the basis)
fold slower than native code. The other interpreter benchmarks at 250-fold
slower, therefore my interpreter is 25-fold faster.
interpreter is still going to be interpreting your C-like language, just as
the C/C++ interpreter does. The speed of your virtual machine code
interpreter seems irrelevant.
>>Anyway, if it's going to be accessible from within C, but itself is going to
be in C++, then there is no alternative than to provide an interface with C
linkage. There is no alternative. That means using

Ah yes, but then that still ignores my direct access.

That /includes/ your direct access. You can't do your direct access from
within C unless the data you're directly accessing has C linkage.
So it is impossible for a "C" function to access an array without calling a
function?

int Array[100];
int Num;
Num = Array[10];

What did you think that I meant by direct access?
>
>>extern "C". (It may well be that your compilers (and whatever) happen to
use compatible linkage conventions for both C and C++, in which case the
extern "C" won't actually introduce any overheads. Otherwise, if your
compilers (etc) use incompatible linkage conventions for C and C++, you
*will* have to use extern "C" anyway. Either way, there's no good reason
not to use extern "C".)

I might do it this way if I have to, but, I don't think that I have to.

If it's going to be directly accessed from within C, it *must* have C linkage.
>Its not actually going to be a std::string anyway. I just used that as a
simplifying example. It is probably going to be a Unicode string. Maybe I can
translate my FastString to "C". I would lose a few things, but, it could be
stored in a union.

Well, whatever it is, if it's going to be directly accessed from within C,
it's going to have to have C linkage.
The StringType will probably have to be written in "C" to be interfaced by "C"
and that entails "C" linkage.
>
By the sounds of it, it would make sense for you to actually do the union
within C to begin with. But then it's not really a C++ question.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #67
Simon G Best <si**********@btinternet.com> writes:
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com...
...
>I want to minimize the unnecessary overhead so that the resulting
interpreted language is as close as possible to the speed of
compiled native code. Alternatives that do things the "right" way
are twenty-five fold slower than an optimally designed interpreter.

Where did you get the "twenty-five fold slower" figure from?
>>In one post, you said, "I am creating my own computer language and
I need a simple way to store the various elemental data types." In
another, you said, "the interpreted language is provided by a third
party. I am hooking this third party interpreted language into my
system and then exposing another different interpreted language
implemented in terms of the third party language." Clearly, you
/have/ said that there are two, "different", interpreted languages.

I am simultaneously exploring several different alternatives. I want
the resulting design to be optimal for both of these alternatives.

Well, if you're going to jump about from alternative to alternative
without being clear about it, it's hardly surprising that confusion
results.
>>If so, then you're doing it really wrongly. Instead of properly
encapsulating your data in C++, you're trying to expose it directly
to the C stuff. What you /should/ be doing is properly
encapsulating it as normal, and then providing a suitable extern
"C" interface to wrap up your C++ stuff. No need to muck about
with unions.

I don't want the overhead.

Sounds like you might not understand that famous Knuth quote:
"Premature optimization is the root of all evil."
And with all due respect to Donald Knuth, who is certainly a lot cleverer
than most people here, NOT considering optimizing at an early stage can
lead to horrendously bad design and a framework which can be extremely
difficult and even impossible to alter in an economic time frame and
budget in order to process the data in a realistic time frame.

Like that awful quote about debugging being twice as hard as the
programming itself, this quote about premature optimization probably has
more validity in the dusty corridors of a university than it does in a
real development environment.
>
Anyway, if it's going to be accessible from within C, but itself is
going to be in C++, then there is no alternative than to provide an
interface with C linkage. There is no alternative. That means using
extern "C". (It may well be that your compilers (and whatever) happen
to use compatible linkage conventions for both C and C++, in which
case the extern "C" won't actually introduce any overheads.
Otherwise, if your compilers (etc) use incompatible linkage
conventions for C and C++, you *will* have to use extern "C" anyway.
Either way, there's no good reason not to use extern "C".)
>I told you this from the very beginning. It has to be able to hold a
set of types including {double, int, std::string}. It is a form of
VARIANT that can be directly accessed from "C". These are GIVEN,
and thus immutable requirements.

They are contradictory requirements. std::strings are *not*
accessible from within C (except indirectly, when you provide an
appropriate, extern "C" interface).
--
Jan 12 '07 #68

"Richard" <rg****@gmail.comwrote in message news:87************@gmail.com...
Simon G Best <si**********@btinternet.comwrites:
>Peter Olcott wrote:
>>"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com.. .
I don't want the overhead.

Sounds like you might not understand that famous Knuth quote:
"Premature optimization is the root of all evil."

And with all due respect to Donald Knuth, who is certainly a lot cleverer
than most people here, NOT considering optimizing at an early stage can
lead to horrendously bad design and a framework which can be extremely
difficult and even impossible to alter in an economic time frame and
budget in order to process the data in a realistic time frame.

Like that awful quote about debugging being twice as hard as the
programming itself, this quote about premature optimization probably has
more validity in the dusty corridors of a university than it does in a
real development environment.
You are right, and Knuth is right, the trick is finding the perfect balance
between not optimizing enough and optimizing too much. I tend to err on the
optimizing too much side. If you optimize too much development costs can
increase ten-fold or more with little increased performance.

If you don't put reasonable optimization in the design from the beginning we
have the problem that you stated, a bad design that can not be cost-effectively
improved.

I also err on the side of over design. I spend at least half the total project
time on design. It seems that the more time is spent on design, the
disproportionately less time is required for debugging.
Jan 12 '07 #69
"Peter Olcott" <No****@SeeScreen.comwrites:
"Richard" <rg****@gmail.comwrote in message news:87************@gmail.com...
>Simon G Best <si**********@btinternet.comwrites:
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:8o******************************@bt.com. ..
I don't want the overhead.

Sounds like you might not understand that famous Knuth quote:
"Premature optimization is the root of all evil."

And with all due respect to Donald Knuth, who is certainly a lot cleverer
than most people here, NOT considering optimizing at an early stage can
lead to horrendously bad design and a framework which can be extremely
difficult and even impossible to alter in an economic time frame and
budget in order to process the data in a realistic time frame.

Like that awful quote about debugging being twice as hard as the
programming itself, this quote about premature optimization probably has
more validity in the dusty corridors of a university than it does in a
real development environment.

You are right, and Knuth is right, the trick is finding the perfect balance
between not optimizing enough and optimizing too much. I tend to err on the
optimizing too much side. If you optimize too much development costs can
increase ten-fold or more with little increased performance.

If you don't put reasonable optimization in the design from the beginning we
have the problem that you stated, a bad design that can not be cost-effectively
improved.

I also err on the side of over design. I spend at least half the total project
time on design. It seems that the more time is spent on design, the
disproportionately less time is required for debugging.

Heading a little OT, but I am very "hands on" with design : and
invariably knock up a framework quickly and use the debugger at the
earliest possible stage in order to step through the program flow -
having the program "debugger" friendly is a very crucial point in any
system design IMO - possibly because I have spent a LOT of time in huge
multiprogrammer systems which incorporate a huge legacy as well as newer
modules. Yes I know there are "geniuses" out there who maintain a
debugger is only for people who don't know how to design properly
(although how that relates to programmers coming onto a legacy design is
beyond me) but I am not one of them and find the debugger to be one of
the best tools for new programmers to learn the data flow and program
structure while at the same time using strategic parameter manipulation
in the debugger to pull up unusual cases in an easy and effort-free
manner. Part of this has always led me to ban multi-statement lines - a
nightmare to debug.
Jan 12 '07 #70
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:YN*********************@bt.com...
....
>Even if those were relevant, appropriate figures (which I very much doubt), on
what basis do you justify dividing the 250 by 10? Your

I just showed you the basis, my interpreter benchmarks at TEN (that's the basis)
fold slower than native code. The other interpreter benchmarks at 250-fold
slower, therefore my interpreter is 25-fold faster.
You're comparing someone else's interpreter *for C and C++* with your
interpreter *for virtual machine code!* That's not a sensible
comparison. You're kidding yourself if you think it means your code
itself is 25 times faster.
>That /includes/ your direct access. You can't do your direct access from
within C unless the data you're directly accessing has C linkage.

So it is impossible for a "C" function to access an array without calling a
function?
Linkage isn't just about function calling. It's about data, too.
What did you think that I meant by direct access?
Access without going via intermediate functions.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #71
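
The point that linkage applies to data as well as to functions can be shown directly: an object declared with C language linkage can be named from C with ordinary subscripting, no intermediate function involved (sketch, invented names):

// shared.h -- seen by both C and C++ translation units (hypothetical file)
#ifdef __cplusplus
extern "C" {
#endif

extern int shared_array[100];   /* a plain, C-compatible object */

#ifdef __cplusplus
}
#endif

// data.cpp -- the C++ side defines the object with C language linkage
extern "C" {
    int shared_array[100];
}

// use.c -- the C side then reads it with ordinary subscripting:
//
//     #include "shared.h"
//     int read_tenth(void) { return shared_array[10]; }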
Richard wrote:
Simon G Best <si**********@btinternet.com> writes:
>Sounds like you might not understand that famous Knuth quote:
"Premature optimization is the root of all evil."

And with all due respect to Donald Knuth, who is certainly a lot cleverer
than most people here, NOT considering optimizing at an early stage can
lead to horrendously bad design and a framework which can be extremely
difficult and even impossible to alter in an economic time frame and
budget in order to process the data in a realistic time frame.
"*Premature* optimization". "*Premature* optimization".

For example, efficiency resulting from good design *is not premature.*

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #72
Peter Olcott wrote:
Is there any way of doing this besides making my own string from scratch?

union AnyType {
std::string String;
double Number;
};
Quite apart from *what* you can put in a union, I'm thinking,
*why* would you put stuff in a union?

I'm sitting here trying to think of a case where a union would
be the preferred choice over polymorphism of some kind.
I suppose it's probably my limited imagination, but I don't
tend to use unions. Or it may be that I've found people
using unions to do stuff that they really ought not to,
and so I'm shy of them.

So, when is a union the preferred way to put multiple formats
of data into a block?
Socks

Jan 12 '07 #73
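
For reference, here is a minimal sketch of the "polymorphism of some kind"
alternative being contrasted with a union here (the class and function names
are illustrative only): each value type derives from a common abstract base,
and the heterogeneous collection holds base-class pointers.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

class AnyValue {                       // abstract interface
public:
    virtual ~AnyValue() {}
    virtual void print(std::ostream& os) const = 0;
};

class StringValue : public AnyValue {
    std::string s_;
public:
    explicit StringValue(const std::string& s) : s_(s) {}
    void print(std::ostream& os) const { os << s_; }
};

class NumberValue : public AnyValue {
    double d_;
public:
    explicit NumberValue(double d) : d_(d) {}
    void print(std::ostream& os) const { os << d_; }
};

int main()
{
    std::vector<AnyValue*> values;     // heterogeneous container
    values.push_back(new StringValue("hello"));
    values.push_back(new NumberValue(3.14));

    for (std::size_t i = 0; i < values.size(); ++i) {
        values[i]->print(std::cout);
        std::cout << '\n';
    }
    for (std::size_t i = 0; i < values.size(); ++i)
        delete values[i];              // the container owns the elements here
    return 0;
}

The price is a level of indirection and per-element allocation; the benefit is
that std::string (and any other non-trivial type) can be stored without the
restrictions a union imposes.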

"Puppet_Sock" <pu*********@hotmail.comwrote in message
news:11**********************@l53g2000cwa.googlegroups.com...
Peter Olcott wrote:
>Is there anyway of doing this besides making my own string from scratch?

union AnyType {
std::string String;
double Number;
};

Quite apart from *what* you can put in a union, I'm thinking,
*why* would you put stuff in a union?

I'm sitting here trying to think of a case where a union would
be the preferred case over polymorphism of some kind.
It must be able to be directly accessed from a "C" (not C++) program, yet be
written in C++. It probably won't be a std::string; I just provided this example
to abstract out most of the irrelevant details. What I really need is a Unicode
string, accessible from "C", that has as many of the capabilities of std::string
as possible.

I also need a String (or dynamic array) of a user-defined type. This type will
store hardware input actions from the mouse and keyboard. The polymorphism that
keeps being suggested can't work with "C".
I suppose it's probably my limited imagination, but I don't
tend to use unions. Or it may be that I've found people
using unions to do stuff that they really ought not to,
and so I'm shy of them.

So, when is a union the preferred way to put multiple format
data into a block?
Socks

Jan 12 '07 #74
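
One way to reconcile "directly accessible from C" with "implemented in C++" is
sketched below; the names (AnyType, any_table, any_push_number) are
hypothetical and not from this thread. The element type is a plain C-style
struct holding a tag and a union of POD members, the storage is owned by a
std::vector on the C++ side, and a pointer to that contiguous storage is
exported with C linkage so C code can subscript it directly.

#include <cstddef>
#include <vector>

extern "C" {

/* C-compatible element: no constructors, no std::string, only POD members.
   In a real project this part would live in a header shared with the C code. */
enum AnyTag { ANY_NUMBER, ANY_STRING };

struct AnyType {
    enum AnyTag tag;
    union {
        double        number;
        unsigned int* text;     /* e.g. UTF-32 code units owned by the C++ side */
    } u;
};

/* A C file declares these as extern and then uses any_table[i] directly. */
struct AnyType* any_table = 0;
unsigned long   any_count = 0;

}   /* extern "C" */

static std::vector<AnyType> storage;               // the C++ side owns the memory

extern "C" void any_push_number(double d)
{
    AnyType a;
    a.tag = ANY_NUMBER;
    a.u.number = d;
    storage.push_back(a);
    any_table = storage.empty() ? 0 : &storage[0]; // re-sync after possible growth
    any_count = storage.size();
}

int main()
{
    any_push_number(3.14);
    /* What the C side would do: plain subscripting, no wrapper calls. */
    return (any_table[0].tag == ANY_NUMBER && any_table[0].u.number == 3.14) ? 0 : 1;
}

The exported pointer is only valid until the vector reallocates, which is why
it is refreshed after every insertion; a real design would also have to settle
who owns and frees the Unicode buffers behind the text member.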

"Simon G Best" <si**********@btinternet.comwrote in message
news:Wc*********************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:YN*********************@bt.com...
...
>>Even if those were relevant, appropriate figures (which I very much doubt),
on what basis do you justify dividing the 250 by 10? Your

I just showed you the basis, my interpreter benchmarks at TEN (that's the
basis) fold slower than native code. The other interpreter benchmarks at
250-fold slower, therefore my interpreter is 25-fold faster.

You're comparing someone else's interpreter *for C and C++* with your
interpreter *for virtual machine code!* That's not a sensible comparison.
You're kidding yourself if you think it means your code itself is 25 times
faster.
Actual benchmark timings indicated that it was 25-fold faster at accomplishing
exactly the same end result. It was also much faster than another interpreter
that precompiled to virtual machine code. I think that a 10-fold degradation
from the speed of native code (which is what my interpreter achieves) is the
upper limit of performance for an interpreter on loop constructs; anything
faster than that probably would not meet the definition of an interpreter.
>
>>That /includes/ your direct access. You can't do your direct access from
within C unless the data you're directly accessing has C linkage.

So it is impossible for a "C" function to access an array without calling a
function?

Linkage isn't just about function calling. It's about data, too.
>What did you think that I meant by direct access?

Access without going via intermediate functions.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #75
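
To make the "interpreter for virtual machine code" terminology concrete, here
is a hedged, minimal sketch of a byte-code dispatch loop executing a counting
loop (the opcode set and layout are invented for illustration and are not taken
from any of the products mentioned). The per-iteration fetch, decode and branch
overhead in the switch is the kind of cost that keeps such an interpreter
within roughly an order of magnitude of native code on tight loops.

#include <cstddef>
#include <iostream>
#include <vector>

// Invented opcode set, just for illustration.
enum Op { OP_PUSH, OP_DEC, OP_JNZ, OP_HALT };

// Executes a tiny byte-code program on a stack machine.
static long run(const std::vector<long>& code)
{
    std::vector<long> stack;
    std::size_t pc = 0;
    for (;;) {
        switch (code[pc]) {            // fetch/decode cost paid on every opcode
        case OP_PUSH: stack.push_back(code[pc + 1]); pc += 2; break;
        case OP_DEC:  --stack.back(); ++pc; break;
        case OP_JNZ:  pc = (stack.back() != 0)
                           ? static_cast<std::size_t>(code[pc + 1]) : pc + 2; break;
        case OP_HALT: return stack.back();
        }
    }
}

int main()
{
    // Equivalent to: long n = 1000000; while (n != 0) --n;
    std::vector<long> program;
    program.push_back(OP_PUSH); program.push_back(1000000);
    program.push_back(OP_DEC);                    // loop body starts at pc == 2
    program.push_back(OP_JNZ);  program.push_back(2);
    program.push_back(OP_HALT);

    std::cout << run(program) << '\n';            // prints 0
    return 0;
}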
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:Wc*********************@bt.com...
....
>You're comparing someone else's interpreter *for C and C++* with your
interpreter *for virtual machine code!* That's not a sensible comparison.
You're kidding yourself if you think it means your code itself is 25 times
faster.

Actual benchmark timings indicated that it was 25-fold faster accomplishing
exactly the same end-result.
But you were still comparing a C/C++ interpreter with a virtual machine
code interpreter. It's still not a sensible comparison.
It was also much faster than another interpreter
that precompiled to virtual machine code. I think that a 10-fold degradation
from the speed of native code (which is what my interpreter achieves) is the
upper limit of performance for an interpreter on loop constructs, anything
faster than this probably would not meet the definition of an interpreter.
Did you compile from the same source in both cases? Did you use the
same compiler for both? Or did you use different compilers? Did you
compile from source code for one, but write it directly in virtual
machine code for the other?

The fact that your virtual machine code interpreter is an order of
magnitude slower than native machine code does not strike me as being at
all remarkable. I've seen nothing here to justify a lack of proper
design and proper encapsulation. But I do think you've been fooling
yourself with clearly inappropriate speed comparisons.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #76

"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:Wc*********************@bt.com...
...
>>You're comparing someone else's interpreter *for C and C++* with your
interpreter *for virtual machine code!* That's not a sensible comparison.
You're kidding yourself if you think it means your code itself is 25 times
faster.

Actual benchmark timings indicated that it was 25-fold faster accomplishing
exactly the same end-result.

But you were still comparing a C/C++ interpreter with a virtual machine code
interpreter. It's still not a sensible comparison.
It's one C/C++ Virtual Machine code interpreter to another.
>
>It was also much faster than another interpreter that precompiled to virtual
machine code. I think that a 10-fold degradation from the speed of native
code (which is what my interpreter achieves) is the upper limit of
performance for an interpreter on loop constructs, anything faster than this
probably would not meet the definition of an interpreter.

Did you compile from the same source in both cases? Did you use the same
compiler for both? Or did you use different compilers? Did you compile from
source code for one, but write it directly in virtual machine code for the
other?

The fact that your virtual machine code interpreter is an order of magnitude
slower than native machine code does not strike me as being at all remarkable.
I've seen nothing here to justify a lack of proper design and proper
encapsulation. But I do think you've been fooling yourself with clearly
inappropriate speed comparisons.
Try and find another one that is this fast!

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #77
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
....
>But you were still comparing a C/C++ interpreter with a virtual machine code
interpreter. It's still not a sensible comparison.
Its one C/C++ Virtual Machine code interpreter to another.
According to http://www.softintegration.com/products/, Ch "parses and
executes C code directly without intermediate code or byte code."
Calling C and C++ "Virtual Machine code" really is stretching it. If
you're having to stretch things that far to try to justify your claims,
then you must already know that your claims are bogus.

You're only kidding yourself.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #78

"Simon G Best" <si**********@btinternet.comwrote in message
news:EN******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
...
>>But you were still comparing a C/C++ interpreter with a virtual machine code
interpreter. It's still not a sensible comparison.
Its one C/C++ Virtual Machine code interpreter to another.

According to http://www.softintegration.com/products/, Ch "parses and executes
C code directly without intermediate code or byte code." Calling C and C++
"Virtual Machine code" really is stretching it. If you're having to stretch
things that far to try to justify your claims, then you must already know that
your claims are bogus.

You're only kidding yourself.
http://root.cern.ch/root/Cint.html
I was not referring to their interpreter as using virtual machine code; this is
the interpreter that uses virtual machine byte codes. My implementation is much
faster than this one, too. Do you really have to be so disagreeable?
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #79
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:EN******************************@bt.com...
>Peter Olcott wrote:
>>"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
...
>>>But you were still comparing a C/C++ interpreter with a virtual machine code
interpreter. It's still not a sensible comparison.
Here you say:-
>>Its one C/C++ Virtual Machine code interpreter to another.
*"C/C++ Virtual Machine code".*

But then you try to change which interpreter you're referring to:-
http://root.cern.ch/root/Cint.html
and say:-
I was not referring to their interpreter as using virtual machine code,
Too late.
this is
the interpreter that uses virtual machine byte codes. My implementation is much
faster than this one too. Do you really have to be so disagreeable?
This is silly. I'm done with this.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #80

"Simon G Best" <si**********@btinternet.comwrote in message
news:vq******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:EN******************************@bt.com...
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
...
But you were still comparing a C/C++ interpreter with a virtual machine
code interpreter. It's still not a sensible comparison.

Here you say:-
>>>Its one C/C++ Virtual Machine code interpreter to another.

*"C/C++ Virtual Machine code".*

But then you try to change which interpreter you're referring to:-
> http://root.cern.ch/root/Cint.html

and say:-
>I was not referring to their interpreter as using virtual machine code,

Too late.
>this is the interpreter that uses virtual machine byte codes. My
implementation is much faster than this one too. Do you really have to be so
disagreeable?

This is silly. I'm done with this.
It would seem to me that you might be a little dense, but then it is clearly my
fault, to some degree, for not being reasonably clear enough, as you have
pointed out.
>
--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #81

"Simon G Best" <si**********@btinternet.comwrote in message
news:vq******************************@bt.com...
Peter Olcott wrote:
>"Simon G Best" <si**********@btinternet.comwrote in message
news:EN******************************@bt.com...
>>Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:Hv*********************@bt.com...
...
But you were still comparing a C/C++ interpreter with a virtual machine
code interpreter. It's still not a sensible comparison.

Here you say:-
>>>Its one C/C++ Virtual Machine code interpreter to another.

*"C/C++ Virtual Machine code".*

But then you try to change which interpreter you're referring to:-
> http://root.cern.ch/root/Cint.html
I referred to this other interpreter in another message, when I said that my
interpreter is much faster than two other interpreters. I can't find that
message now. My ISP has been having problems posting to newsgroups since they
switched to a contractor last summer.
>
and say:-
>I was not referring to their interpreter as using virtual machine code,

Too late.
>this is the interpreter that uses virtual machine byte codes. My
implementation is much faster than this one too. Do you really have to be so
disagreeable?

This is silly. I'm done with this.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?

Jan 12 '07 #82
Peter Olcott wrote:
>
I referred to this other interpreter in another message, when I said that my
interpreter is much faster than two other interpreters.
Too late.

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 12 '07 #83

"Simon G Best" <si**********@btinternet.comwrote in message
news:Sp*********************@bt.com...
Peter Olcott wrote:
>>
I referred to this other interpreter in another message, when I said that my
interpreter is much faster than two other interpreters.

Too late.
Just for the record, and for future reference, I will acknowledge that our
communication difficulties do appear to be mostly my fault. They may also be
partly the fault of my ISP dropping posted messages. In any case, they do not
appear to be very much your fault. I apologize for my comment to the contrary.

I state this apology in light of a deeper insight into how a particular
component object technology works. From the point of view of this new insight,
it looks like your advice was much better than I gave it credit for.
Jan 13 '07 #84
Peter Olcott wrote:
"Simon G Best" <si**********@btinternet.comwrote in message
news:Sp*********************@bt.com...
>Peter Olcott wrote:
>>I referred to this other interpreter in another message, when I said that my
interpreter is much faster than two other interpreters.
Too late.

Just for the record, and for future reference, I will acknowledge that our
communication difficulties do appear to be mostly my fault. They may also be
partly the fault of my ISP dropping posted messages. In any case, they do not
appear to be very much your fault. I apologize for my comment to the contrary.

I state this apology in light of a deeper insight into how a particular
component object technology works. From the point of view of this new insight,
it looks like your advice was much better than I gave it credit for.
:-)

--
Simon G Best
What happens if I mention Leader Kibo in my .signature?
Jan 14 '07 #85
