Bytes IT Community

A simple class

Hi:

Recently I wrote this code:

#include <cstdio>   // printf
#include <cstdlib>  // rand

class Simple
{
private:
    int value;
public:
    int GiveMeARandom(void);
    int GiveMeValue(void);
};  // a class definition needs this closing semicolon

int Simple::GiveMeARandom(void)
{
    return rand()%100;
}

int Simple::GiveMeValue(void)
{
    return this->value;
}

....

int main()
{
    Simple * Object = NULL;
    printf("%d",Object->GiveMeARandom());
    return 0;
}
Well, this code compiles OK, and when I tried to use it... it works!
But my question is: how can I access a method of an object that does
not exist in memory? On the other hand, if you try to access the other
method like this:

printf("%d",Object->GiveMeValue());

it crashes! And obviously it crashes because the object does not exist
in memory, and when you attempt to access its variable "value" it
reads from address 0x00000000.

But my question is about the methods: do the methods of a C++ class
exist without any object being produced in the code? I have thought
about it and have a possible explanation, but what do you think?

Thanks
Giancarlo Berenz

Mar 6 '07 #1
14 Replies


Giancarlo Berenz wrote:
Simple * Object = NULL;
printf("%d",Object->GiveMeARandom());
Well, this code compiles OK, and when I tried to use it... it works!
But my question is: how can I access a method of an object that does
not exist in memory?
Because C++ is very efficient and very mechanical. It will not check many
details of your output code, including whether a pointer is correctly
seated. That's because C++ must compete with assembler (where you can get
away with much worse things), while compiling huge programs in reasonable
time.

So, at compile time, C++ inserted no opcodes into the output program that checked
where Object points. The previous line could have pointed it to a legitimate
object, or dangled it, or NULLed it, like you did.

When you write a broken program like that, you get "undefined behavior".
That means the program could work the way you expect, or work another way,
or crash, or the program could explode the nearest toilet.
printf("%d",Object->GiveMeValue());

it crash!, and obviusly it crash because the object not exist in
memory and when you attempt to access to it variable "value" it read's
the 0x00000 direction.
And the other function had no reason to touch the presumed object as it ran,
so it accidentally appeared to work correctly.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!
Mar 6 '07 #2

Phlip wrote:
Giancarlo Berenz wrote:
>Simple * Object = NULL;
>printf("%d",Object->GiveMeARandom());
>Well, this code compiles OK, and when I tried to use it... it works!
>But my question is: how can I access a method of an object that does
>not exist in memory?

Because C++ is very efficient and very mechanical. It will not check many
details of your output code, including whether a pointer is correctly
seated. That's because C++ must compete with assembler (where you can get
away with much worse things), while compiling huge programs in reasonable
time.

[snip]
It has nothing to do with assembly.
IMO the compiler has no way of knowing whether the pointer is correct
or not. It doesn't know about the memory layout of the target, for
example, and I can perfectly well design a platform with memory
starting at 0x00000000, in which case dereferencing NULL is valid.

The only thing the compiler can do is check the type, and it is
already a good thing that C++ is strongly typed.

Michael
Mar 6 '07 #3

On 06.03.2007 03:59, Giancarlo Berenz wrote:
[original code snipped]
Well, this code compiles OK, and when I tried to use it... it works!
But my question is: how can I access a method of an object that does
not exist in memory? On the other hand, if you try to access the other
method like this:

printf("%d",Object->GiveMeValue());

it crashes! And obviously it crashes because the object does not exist
in memory, and when you attempt to access its variable "value" it
reads from address 0x00000000.

But my question is about the methods: do the methods of a C++ class
exist without any object being produced in the code? I have thought
about it and have a possible explanation, but what do you think?
Since you wrote a definition for your functions, they can be compiled
and their code can exist in memory. The object they "belong to" is just
a pointer passed as a hidden parameter, and if that parameter is not
used, the call may work on some implementations, like yours.

In other words, your code is *very roughly* similar to the following:

class Simple
{
private:
    int value;

    friend int Simple_GiveMeARandom(Simple *);
    friend int Simple_GiveMeValue(Simple *);
};

int Simple_GiveMeARandom(Simple * /*unused*/)
{
return rand() % 100;
}

int Simple_GiveMeValue(Simple *that)
{
return that->value;
}

.....

int main()
{
Simple * Object = NULL;
printf("%d",Simple_GiveMeARandom(Object));
return 0;
}


--
Serge Paccalin
<se************@easyvisio.net>
Mar 6 '07 #4

Michael DOUBEZ wrote:
It has nothing to do with assembly.
C++ doesn't compete with Assembler? Then why is everything and its
uncle "undefined"?
IMO the compiler has no way of knowing whether the pointer is correct or
not. It doesn't know about register memory layout by example and I can
perfectly design a plateform with memory starting at 0x0000000 in which
case deferencing NULL is valid.
The compiler could know that if it added extra opcodes. That would
slow down all the systems that use C++ in speed-sensitive contexts. So
those systems would have to use the next faster language, Assembler.
So, to compete with Assembler, C++ permits undefined behavior.

--
Phlip

Mar 6 '07 #5

On 6 Mar, 08:16, Michael DOUBEZ <michael.dou...@free.fr> wrote:
It has nothing to do with assembly.
IMO the compiler has no way of knowing whether the pointer is correct
or not. It doesn't know about the memory layout of the target, for
example, and I can perfectly well design a platform with memory
starting at 0x00000000, in which case dereferencing NULL is valid.
It's a bit academic, but I don't believe that's true. As I understand
it, dereferencing a null pointer *always* leads to undefined
behaviour. If the target hardware understands 0 to be a valid memory
address, it is the compiler's job to manage the magic that allows the
programmer to use null pointer constants and null pointer values as
special entities as defined in the standard while the underlying
hardware can still use the piece of memory it understands to be at
address 0.

Gavin Deane

Mar 6 '07 #6

Giancarlo Berenz wrote:
[original code snipped]
Well, this code compiles OK, and when I tried to use it... it works!
It just seems to work. :-)

There is an implicit contract between the programmer and the compiler:
you provide legal code, and the compiler translates it correctly.

When you dereference a null pointer, you have broken the contract with
the compiler. It is then free to do anything at all, like print a value
anyway. Or crash.
Technically, it is "undefined behaviour". Anything can happen.
Bo Persson
Mar 6 '07 #7

Phlip wrote:
Michael DOUBEZ wrote:
>It has nothing to do with assembly.

C++ doesn't compete with Assembler? Then why is everything and its
uncle "undefined"?
It is undefined because it is compiler/platform/optimisation dependent.

>IMO the compiler has no way of knowing whether the pointer is correct
>or not. It doesn't know about the memory layout of the target, for
>example, and I can perfectly well design a platform with memory
>starting at 0x00000000, in which case dereferencing NULL is valid.

The compiler could know that if it added extra opcodes. That would
slow down all the systems that use C++ in speed-sensitive contexts. So
those systems would have to use the next faster language, Assembler.
So, to compete with Assembler, C++ permits undefined behavior.
What opcode would it add to know whether a pointer is valid or not?
That would mean the processor is aware of the platform it is on and of
the layout of its various components.

Those opcodes would be related to the BIOS or some other
meta-configuration, and I have never seen such a thing. Though
components are communicating more and more, so that will perhaps be
the case one day.

C++ permits undefined behavior sometimes to allow optimisation, as you
say, but also because some things are platform dependent, and some
checks would unnecessarily constrain the compiler or could not be
portable.
Michael
Mar 7 '07 #8

Gavin Deane wrote:
On 6 Mar, 08:16, Michael DOUBEZ <michael.dou...@free.fr> wrote:
>It has nothing to do with assembly.
>IMO the compiler has no way of knowing whether the pointer is correct
>or not. It doesn't know about the memory layout of the target, for
>example, and I can perfectly well design a platform with memory
>starting at 0x00000000, in which case dereferencing NULL is valid.

It's a bit academic, but I don't believe that's true. As I understand
it, dereferencing a null pointer *always* leads to undefined
behaviour. If the target hardware understands 0 to be a valid memory
address, it is the compiler's job to manage the magic that allows the
programmer to use null pointer constants and null pointer values as
special entities as defined in the standard while the underlying
hardware can still use the piece of memory it understands to be at
address 0.
True. It was just a (bad) example.
Dereferencing NULL is undefined behavior, and NULL should evaluate to
integer 0, so using hardware at 0 would be a problem.

Let's say I can put my device at address 0xDEAD0000 :)
Who is to say that pointer value is invalid?

Michael
Mar 7 '07 #9

On 6 Mar, 17:05, "Gavin Deane" <deane_ga...@hotmail.com> wrote:
On 6 Mar, 08:16, Michael DOUBEZ <michael.dou...@free.fr> wrote:
[snip]

It's a bit academic, but I don't believe that's true. As I understand
it, dereferencing a null pointer *always* leads to undefined
behaviour. If the target hardware understands 0 to be a valid memory
address, it is the compiler's job to manage the magic that allows the
programmer to use null pointer constants and null pointer values as
special entities as defined in the standard while the underlying
hardware can still use the piece of memory it understands to be at
address 0.
Yes, it's because the integer value 0 can be cast to a pointer, but
that pointer does not necessarily have to have the value 0; the
requirement is that its value is distinguishable from all other
pointer values.

--
Erik Wikström

Mar 7 '07 #10

The compiler could know that if it added extra opcodes. That would
slow down all the systems that use C++ in speed-sensitive contexts. So
It wouldn't be just a few extra opcodes. First, there is no mechanism
in C++ to validate a memory address, but let's assume there was. It
would somehow interact with the memory manager. This would, at
minimum, require a map lookup. More likely it would involve a range
search over a map or similar data structure. It wouldn't be very
efficient at all.

The idea here is that the pointer is assumed to be correct and the
code is generated to take this for granted. It is the client's
responsibility to pass correct pointers to the methods, or to invoke
member methods on correctly initialized objects.

It's a fairly sound trade-off. I don't think the idea here is to
"compete" with assembler at all; the idea with both an assembler and a
C++ compiler is to generate binary code which can be executed by the
host platform. C++ takes a higher-level approach: it frees the
programmer from repetitive tasks and brings new repetitive tasks to
replace the ones that are deprecated.

To combat the new repetitive tasks utility / library code is written,
which brings new repetitive tasks into play. =)

those systems would have to use the next faster language, Assembler.
So, to compete with Assembler, C++ permits undefined behavior.
I don't see how undefined behavior is 'permitted'; it is possible to
invoke it, and after that you're not dealing with C++ any longer. Now
it's an implementation-specific issue and not interesting from the C++
point of view.

Mar 7 '07 #11

Hi:

It's true that C++ doesn't have the ability to know whether a pointer
is a valid reference to an object in memory, but my explanation of
this weird thing is about the way the compiler implements a class in
binary code.

When you define a function inside a class, the compiler creates a
normal function (like in C), but it is hidden from code that does not
belong to the class. The class adds a reference (the name of the
function) to the method (the function). The class creates the
variables in the normal way.

When you attempt to do this:

Simple * Object = NULL;
printf("%d",Object->GiveMeARandom());

Object holds an invalid reference, but the pointers to the methods of
class Simple are valid, and if you execute GiveMeARandom():

int Simple::GiveMeARandom(void)
{
    return rand()%100;
}

The GiveMeARandom() function doesn't access the internal variables of
class Simple and simply returns rand()%100.

But when you do this:

printf("%d",Object->GiveMeValue());

....

int Simple::GiveMeValue(void)
{
    return this->value;
}

The GiveMeValue() function accesses the internal variables of class
Simple and dereferences the pointer to the Object, and if Object is
NULL it crashes.

It's strange, but interesting. Thanks for your answers.

Giancarlo Berenz

Mar 7 '07 #12

"Michael DOUBEZ" <mi************@free.fr> wrote in message
news:45**********************@news.free.fr...
[snip]

What opcode would it add to know whether a pointer is valid or not?
That would mean the processor is aware of the platform it is on and of
the layout of its various components.
One way would be to record each pointer and its size. Prior to use, a
check could be made to see if the pointer is in the range pointer to
pointer+size.
Of course, that won't solve all problems. It won't catch cases where
pointer2+bigoffset happens to fall in the range pointer to
pointer+size.
Most likely it's an error, but the check would pass.

Dennis
Mar 17 '07 #13

One way would be to record each pointer and its size. Prior to use, a
check could be made to see if the pointer is in the range pointer to
pointer+size.
If there are, and there ARE, thousands of memory allocations, how
would you prefer to do this check? No matter how you do it, it will be
expensive. It would happen EVERY TIME you read or write a memory
location (!)

Most architectures have this in *hardware*; you will get some sort of
hardware response, which is out of the scope of C++.

One way which *would* be feasible is to have a memory block object.
This object would have a header. The header would contain the address
of the allocated memory region and the size of the allocated memory
region. Possibly also the "raw size of the allocated memory region",
which would take into consideration padding for alignment and so on.
The header could also contain a "magic" value for validation purposes;
it could be a product of the memory address of the allocated region
and some nice constant value.

Memory access could always follow the base+offset syntax, because the
"base" would be the memoryObject.address field in the memory region
header. This would also reflect the fairly common syntax for memory
addressing in virtually all hardware.

This would be fairly trivial to implement by overloading the []
operator. It would just be syntactic sugar and cruft on top of C++ and
not very interesting, IMHO. If this were implemented as a core
language feature, it wouldn't be C++ anymore, and it would always be
assumed that a pointer is a pointer to an array. :D

*ptr++; // this wouldn't be legal anymore... how is that supposed to
be "C++"?

Either it's just a lame extension, or we throw away a lot of what
makes C++ the C++ it is.. _or_ the implementation would have to check
allocated memory ranges for *every* memory read and write. Each of
these possibilities is lame.

std::vector "kind of" already does what the memory allocation object
would be doing; AFAIK it doesn't range-check [], but at() does, unless
I remember incorrectly. I'm not feeling arsed to open the reference
manual just to avoid humiliation; feel free to check, I'm just making
a point.

Mar 17 '07 #14

"persenaama" <ju***@liimatta.org> wrote in message
news:11**********************@b75g2000hsg.googlegroups.com...
One way would be to record each pointer and its size. Prior to use, a
check could be made to see if the pointer is in the range pointer to
pointer+size.

If there are, and there ARE, thousands of memory allocations, how
would you prefer to do this check? No matter how you do it, it will be
expensive. It would happen EVERY TIME you read or write a memory
location (!)
Yes indeed. Instead of pointer access being constant time, it's now a
log(n) lookup.
[snip]

*ptr++; // this wouldn't be legal anymore... how is that supposed to
be "C++"?
So how was ptr initialized? Placement new could be used to set the
memory to any given point. The size could be given as well.
Either it's just a lame extension, or we throw away a lot of what
makes C++ the C++ it is.. _or_ the implementation would have to check
allocated memory ranges for *every* memory read and write. Each of
these possibilities is lame.
If I may suggest another possibility:

char *ptr = new char[45];
char *cursor = ptr;
*(ptr + 4500);   // out of range
*(4500 + ptr);   // the same access, written the other way around
*cursor++;

Now, *ptr is checked: sure, it's a char pointer, dynamically
allocated, size 45. The pointer-arithmetic + would have to check that
the offset is within the range as well.
And since the pointer may not point exactly at the start of the block,
another comparison would have to occur.
std::vector "kind of" already does what the memory allocation object
would be doing; AFAIK it doesn't range-check [], but at() does, unless
I remember incorrectly. I'm not feeling arsed to open the reference
manual just to avoid humiliation; feel free to check, I'm just making
a point.
And my point was that it would be possible to check pointer validity.
:-)

Whether the cost is worth it is another question.

Dennis
Mar 18 '07 #15
