Frederic Mayot wrote:
...
I just wondered why
(char*&)p += l;
could be differently compiled than
p = reinterpret_cast<A*>(reinterpret_cast<char*>(p) + l);
...
Let's just take a different example first. Let's assume we are using a platform
with sizeof(float) == sizeof(int) and identical alignment requirements for both
types. Consider the following code
float f = 5.0;
int i1 = (int) f; // 1
int i2 = *(int*) &f; // 2
Take a close look at lines labeled 1 and 2. Both lines will be accepted by the
compiler (there are issues here from the very pedantic point of view, but let's
just assume that the compiler accepted both). But do you understand the
difference between the two?
In line 1 we have a cast that converts a 'float' into an 'int'. The behavior
of this conversion is defined by the language. Variable 'i1' will be initialized
with value '5' and that does not depend on the implementation.
In line 2 we actually create a pointer of type 'float*' that points to
'f', forcefully convert it to type 'int*' (this is implementation-defined, but
let's assume that it still points to the same spot) and then dereference it as
if there were an 'int' object in the memory occupied by 'f'. In other words, in this
case we just take the raw memory occupied by 'float f' and read it as an 'int'
object. What do you think the result of this is going to be? The truth is,
there's absolutely no way to predict it from the language point of view. The
physical representation of 'float' value '5.0' will normally have no relation to
the physical representation of 'int' value '5', so there's virtually no chance
that 'i2' will be initialized with '5'. In practice, if it doesn't crash, you
will see what looks like garbage in 'i2'.
I'm saying all this in order to emphasize the difference between _converting_ an
object of one type to another type (which is what we have in the first case),
and _reinterpreting_ the raw memory occupied by an object of one type as an
object of another type (which is what we have in the second case). The former
makes perfect sense when defined by the language or implementation, while an
attempt to do the latter makes no sense at all in the majority of cases.
Now, back to your code.
Expression '(char*&) p' is equivalent to '*(char*) &p', which attempts to do
pretty much the same thing as line 2 in my float-int example. It is nothing
more than an attempt to _reinterpret_ the memory occupied by 'p' as an object
of type 'char*'. Object 'p' has type 'A*'. The language gives no guarantee that
the physical representation of an 'A*' has any relation to the physical
representation of a 'char*'. Which means that this reinterpretation attempt
makes no sense for the very same reason line 2 in my float-int example made no
sense.
Expression 'p = (A*) ((char*) p + l)' (rewritten for brevity) is something
completely different. It is similar to line 1 in my float-int example.
It actually converts an 'A*' value into a 'char*' value, performs the increment
and then converts it back to 'A*' type. These conversions are
implementation-defined but, assuming that your implementation defines them the
way you want it to, they produce meaningful results.
--
Best regards,
Andrey Tarasevich