
make this snippet efficient

KK
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
return ((byte[0]<<24)|(byte[1]<<16)|(byte[2]<<8)|(byte[3]));
}
/* part of main function */

ifstream fp("in.bin",ios::binary);
char buff[4];
fp.read(buff,4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK

Jun 29 '06 #1
14 Replies


KK wrote:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
return ((byte[0]<<24)|(byte[1]<<16)|(byte[2]<<8)|(byte[3]));
}
/* part of main function */

ifstream fp("in.bin",ios::binary);
char buff[4];
fp.read(buff,4);
unsigned int loadSize = Byte2Int(buff);


What's *INefficient* about it?

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
Jun 29 '06 #2


Victor Bazarov wrote:
KK wrote:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
return ((byte[0]<<24)|(byte[1]<<16)|(byte[2]<<8)|(byte[3]));
}
/* part of main function */

ifstream fp("in.bin",ios::binary);
char buff[4];
fp.read(buff,4);
unsigned int loadSize = Byte2Int(buff);


What's *INefficient* about it?

V
--

Must I use the reinterpret_cast operator? How can I avoid it?

Jun 29 '06 #3

KK posted:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
return ((byte[0]<<24)|(byte[1]<<16)|(byte[2]<<8)|(byte[3]));
}
/* part of main function */

ifstream fp("in.bin",ios::binary);
char buff[4];
fp.read(buff,4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK

You don't specify the number of bits in a byte; however, looking at your
code, we can make an educated guess of 8.

You don't specify the number of bytes in an int; however, looking at your
code, we can make an educated guess of 4.

You don't specify the byte order of the integer stored in the file, so we
can only hope that it's the same as the system's.

You don't specify the negative number system used to represent the number
in the file, so we can only hope that it's the same as the system's.

You don't specify whether the integer in the file contains padding bits, or
where they're located, nor do you specify whether the system stores
integers with padding bits, or where they're located.

Working with the scraps being given, try this:

unsigned Func( char (&array)[4] )
{
return reinterpret_cast<int&>( array );
}
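
A hedged usage sketch (not from the original post): note that Func returns
the value in the host's native byte order, whereas the Byte2Int above
interprets the four bytes as big-endian regardless of host.

char buff[4];
fp.read(buff, 4);                 // fp is the ifstream from the question
unsigned loadSize = Func(buff);   // the array reference binds to buff directly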
--

Frederick Gotham
Jun 29 '06 #4

> > > unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
Must I use the reinterpret_cast operator? How can I avoid it?


use: unsigned char* byte = (unsigned char*)buff;
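
Another way to sidestep the cast entirely (a hedged sketch, not from the
thread) is to copy the bytes with std::memcpy; note this yields the value
in the host's byte order, unlike the big-endian shifts in the original:

#include <cstring>   // std::memcpy

unsigned int Byte2IntNative(const char* buff)
{
    unsigned int value;
    std::memcpy(&value, buff, sizeof value);   // no reinterpret_cast needed
    return value;
}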

Jun 29 '06 #5

Frederick Gotham posted:

unsigned Func( char (&array)[4] )
{
return reinterpret_cast<int&>( array );
}

Should have cast to unsigned&, rather than int&.
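
For reference, the corrected sketch would read:

unsigned Func( char (&array)[4] )
{
    return reinterpret_cast<unsigned&>( array );
}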
--

Frederick Gotham
Jun 29 '06 #6

chandu wrote:
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);

Must I use the reinterpret_cast operator? How can I avoid it?


use: unsigned char* byte = (unsigned char*)buff;


Which means exactly the same thing without the keyword. How is it
any better?
Jun 29 '06 #7


Frederick Gotham wrote:
KK posted:
/* Target - read an integer from a binary file */
unsigned int Byte2Int(char *buff)
{
unsigned char* byte = reinterpret_cast<unsigned char*> (buff);
return ((byte[0]<<24)|(byte[1]<<16)|(byte[2]<<8)|(byte[3]));
}
/* part of main function */

ifstream fp("in.bin",ios::binary);
char buff[4];
fp.read(buff,4);
unsigned int loadSize = Byte2Int(buff);

Thank you.
KK

You don't specify the number of bits in a byte; however, looking at your
code, we can make an educated guess of 8.

When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

Markus.

Jun 30 '06 #8

Markus Svilans posted:

When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

You're on a Standard C++ newsgroup, and people here like to be pedantic. It
pays off in the long run: you end up with code that will run perfectly for
eons.

Here's a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.

Either you take all these things into account, and write FULLY-portable and
Standard-compliant code, or you don't.

If it ever got to a point where an old-fashioned constraint was hindering
efficiency or functionality, the constraint would be lifted. But until
then, you use the following macro to tell you how many bits you have in a
byte:
#define CHAR_BIT \
(((unsigned char)-1)/(((unsigned char)-1)%0x3fffffffL+1) \
/0x3fffffffL%0x3fffffffL*30+((unsigned char)-1)%0x3fffffffL \
/(((unsigned char)-1)%31+1)/31%31*5 + 4-12/(((unsigned char)\
-1)%31+3))

--

Frederick Gotham
Jun 30 '06 #9

Frederick Gotham wrote:
use the following macro to tell you how many bits you have in a
byte:
#define CHAR_BIT \
(((unsigned char)-1)/(((unsigned char)-1)%0x3fffffffL+1) \
/0x3fffffffL%0x3fffffffL*30+((unsigned char)-1)%0x3fffffffL \
/(((unsigned char)-1)%31+1)/31%31*5 + 4-12/(((unsigned char)\
-1)%31+3))


Why provide an implementation (especially one so... urk), rather than
just explain that this macro is available in the standard header
<climits>?
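
For illustration, a minimal sketch using the standard header instead
(assuming a hosted implementation):

#include <climits>    // defines CHAR_BIT
#include <iostream>

int main()
{
    std::cout << "bits per byte: " << CHAR_BIT << '\n';
}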

Luke

Jun 30 '06 #10

In article <11**********************@m73g2000cwd.googlegroups.com>,
ms******@gmail.com says...

[ ... ]
When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?


Yes. Under Windows CE, the smallest available type is 16 bits. A
number of DSPs don't have any 8-bit types either.

--
Later,
Jerry.

The universe is a figment of its own imagination.
Jun 30 '06 #11

Frederick Gotham wrote:
Markus Svilans posted:

When programming in C++, could one realistically expect to encounter a
system that does not have 8 bits in a byte?

You're on a Standard C++ newsgroup, and people here like to be pedantic. It
pays off in the long run: you end up with code that will run perfectly for
eons.


I can see your point. But in the last 10-15 years, has there been a new
CPU or microprocessor produced that does not have 8 bits in a byte? Are
there any C++ compilers that compile code for non-8-bit-byte systems?

I'm not arguing about the C++ standard, I'm just surprised that
variable byte sizes are something that people worry about enough to
include in the standard.

On second thought... 16-bit character sets could be considered to be
the harbingers of future non-8-bit bytes, could they not?
Here's a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.
I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought you
had set to null?
From what you say, would the truly portable way to do that be to #define NULL depending on what system you're compiling for?
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.


I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?

Regards,
Markus.

Jun 30 '06 #12

Markus Svilans schrieb:
Here's a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.


I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought you
had set to null?
From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?


An integer constant with value zero (e.g. 0, 7+1-8, 0x0) is magically
converted to the system's null pointer value if assigned to a pointer type.
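
A minimal illustration of that rule (not from the original post):

#include <iostream>

int main()
{
    int* p = 0;      // 0 is a null pointer constant; p gets the system's
                     // null pointer value, whatever its bit pattern
    if (p == 0)      // this also compares against the null pointer value,
        std::cout << "p is null\n";   // not against "all bits zero"
}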
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.


I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?


Don't know. But it would be a valid C++ system.

But only if the byte had 8 or more bits. A 7-bit byte is not allowed.

Thomas
Jun 30 '06 #13

Markus Svilans posted:
You're on a Standard C++ newsgroup, and people here like to be
pedantic. It pays off in the long run: you end up with code that will
run perfectly for eons.
I can see your point. But in the last 10-15 years, has there been a
new CPU or microprocessor produced that does not have 8 bits in a
byte? Are there any C++ compilers that compile code for non-8-bit-byte
systems?

I'm not arguing about the C++ standard, I'm just surprised that
variable byte sizes are something that people worry about enough to
include in the standard.

On second thought... 16-bit character sets could be considered to be
the harbingers of future non-8-bit bytes, could they not?

Very possible. I think there's one certainty in life: twenty years from
now, the world will have progressed more than we expected, and in
unexpected ways.

Who knows what the computers of tomorrow will bring?

Here's a few things that the Standard allows:

(1) Machines need not use two's complement.
(2) Null pointers need not be all-bits-zero.


I'm confused. To set a pointer to null in C++, isn't the standard way
to do that to assign zero to the pointer? If you're on a system where
null pointers are non-zero, what happens to the pointer you thought
you had set to null?

A "compile-time constant" is an expression whose value can be evaluated
at compile-time. Here's a few examples:

7

56 * 5 / 2 + 3

8 == 2 ? 1 : 6
If you have a compile-time constant which evaluates to zero, whether it
be:

0
5 - 5
2 * 6 - 12

Then it gets special treatment in C++, and qualifies as a null pointer
constant. A null pointer constant can be used to set a pointer to its
null pointer value, like so:

char *p = 0;

Because 0 qualifies as a null pointer constant, it gets special treatment
in the above statement (note how we'd normally have a type mismatch from
int to char*). Anyway, what the above statement does is set the pointer
to its null pointer value, whether that be:

0000 0000 0000 0000 0000 0000 0000 0000

or:

1111 1111 1111 1111 1111 1111 1111 1111

or:

1000 0000 0000 0000 0000 0000 0000 0000

or:

0000 0000 0000 0000 0000 0000 0000 0001

or:

1010 0101 1010 0101 1010 0101 1010 0101
From what you say, would the truly portable way to do that be to
#define NULL depending on what system you're compiling for?

No, all you do is:

char *p = 0;

And let your compiler deal with the rest.

(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.


I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits.

In actual fact, it would make more sense to have a 35-bit integer type
instead of a 32-bit one with padding.
But are there any cases in practice where primitive types actually
contain padding bits?

Mostly on supercomputers, I think.

Here's a quotation from a recent post on comp.lang.c:

For example, I'm currently logged into a system with the following
characteristics:

CHAR_BIT = 8
sizeof(short) = 8 (64 bits)
sizeof(int) = 8 (64 bits)
sizeof(long) = 8 (64 bits)

SHRT_MAX = 2147483647 (32 padding bits)
USHRT_MAX = 4294967295 (32 padding bits)

INT_MAX = 35184372088831 (18 padding bits)
UINT_MAX = 18446744073709551615 (no padding bits)

LONG_MAX = 9223372036854775807 (no padding bits)
ULONG_MAX = 18446744073709551615 (no padding bits)

(It's a Cray Y/MP EL running Unicos 9.0, basically an obsolete
supercomputer.)
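
A hedged sketch of how one could count padding bits on a given
implementation, comparing the object size with the value bits reported by
std::numeric_limits:

#include <climits>   // CHAR_BIT
#include <iostream>
#include <limits>    // std::numeric_limits

int main()
{
    const int object_bits = sizeof(unsigned) * CHAR_BIT;
    const int value_bits  = std::numeric_limits<unsigned>::digits;
    std::cout << "padding bits in unsigned: "
              << object_bits - value_bits << '\n';
}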

--

Frederick Gotham
Jun 30 '06 #14


"Markus Svilans" <ms******@gmail.com> skrev i meddelandet
news:11*********************@75g2000cwc.googlegrou ps.com...
Frederick Gotham wrote:
(3) Bytes need not be eight bits.
(4) Primitive types may contain padding bits.


I can see where padding bits would be necessary, for example
representing 32-bit integers on a 7-bit-per-byte system would require
five 7-bit bytes, with 3 padding bits. But are there any cases in
practice where primitive types actually contain padding bits?


No, but what we do have is machines with 36-bit integers and 9 bits
per byte.

http://www.unisys.com/products/clear...vers/index.htm

Should we not allow C++ to be implemented on such a machine?
A much more common problem is DSPs having 16- or 32-bit words as the
smallest unit. Then that will be the byte size, making sizeof(char) ==
sizeof(short) == sizeof(int). Quite possible!
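
A trivial sketch (not from the original post) for checking a given
implementation; on the DSPs described above, all three lines could print 1:

#include <iostream>

int main()
{
    std::cout << "sizeof(char)  = " << sizeof(char)  << '\n'
              << "sizeof(short) = " << sizeof(short) << '\n'
              << "sizeof(int)   = " << sizeof(int)   << '\n';
}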
Bo Persson
Jun 30 '06 #15
