converting char to float (reading binary data from file)

Hi,
I'm trying to convert some char data I read from a binary file (using
ifstream) to a float type. I've managed to convert the int types but
now I need to do the float types as well but it doesn't seem to work.
The code below is what I'm trying to use. Anyone see any obvious
errors? or have any hints/pointers?
regards,
Igor

float floatRead = 0;

UINT32* ptr = (UINT32 *) (&floatRead);

int offset = 0;

for (int ii=startInd; ii<=endInd; ii++){
*ptr |= (charBuf[ii] << offset);
offset += 8;
};
Jun 27 '08 #1
On May 21, 11:29 am, Michael DOUBEZ <michael.dou...@free.fr> wrote:
itdevries wrote:
I'm trying to convert some char data I read from a binary file (using
ifstream) to a float type. I've managed to convert the int types but
now I need to do the float types as well but it doesn't seem to work.
The code below is what I'm trying to use. Anyone see any obvious
errors? or have any hints/pointers?

I suppose charBuf is of type char[]? In this case 'charBuf[ii] <<
offset' is 0 as soon as offset>=8, so you get only 0s.
float floatRead = 0;
UINT32* ptr = (UINT32 *) (&floatRead);
int offset = 0;
for (int ii=startInd; ii<=endInd; ii++){
*ptr |= (charBuf[ii] << offset);
offset += 8;
};

Your use of bitwise operators looks clumsy unless you have some logic to
handle different byte ordering.

What's wrong with
memcpy(&floatRead, charBuf+startInd, sizeof(floatRead));
?

--
Michael
thanks, that works... however, I don't understand what's wrong with my
original code. any ideas?
Igor
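
For reference, a minimal sketch of the memcpy approach in context. The file name is illustrative, and it assumes the file was written on a machine with the same float representation and byte order as the one reading it (that assumption is exactly what the rest of this thread is about):

#include <cstring>
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("data.bin", std::ios::binary); // illustrative name
    char charBuf[sizeof(float)];                    // assumes 4-byte float
    if (!in.read(charBuf, sizeof charBuf)) return 1;

    float floatRead = 0;
    // Copying the raw bytes is well defined; dereferencing a UINT32*
    // that actually points at a float object is not.
    std::memcpy(&floatRead, charBuf, sizeof floatRead);
    std::cout << floatRead << '\n';
    return 0;
}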

Jun 27 '08 #2
On May 21, 12:23 pm, Michael DOUBEZ <michael.dou...@free.fr> wrote:
itdevries wrote:
On May 21, 11:29 am, Michael DOUBEZ <michael.dou...@free.fr> wrote:
itdevries wrote:
float floatRead = 0;
UINT32* ptr = (UINT32 *) (&floatRead);
int offset = 0;
for (int ii=startInd; ii<=endInd; ii++){
*ptr |= (charBuf[ii] << offset);
offset += 8;
};
I don't understand what's wrong with my
original code. any ideas?

Unrolling your loop:
*ptr |= (charBuf[startInd+0] << 0);
*ptr |= (charBuf[startInd+1] << 8);
*ptr |= (charBuf[startInd+2] << 16);
*ptr |= (charBuf[startInd+3] << 24);

Becomes (as I have mentioned):
*ptr |= charBuf[startInd];
*ptr |= 0;
*ptr |= 0;
*ptr |= 0;

Because you shift a char left by more than its size in bits (usually
8), so it becomes 0.

Example:
char x=0xBF;
assert( (x<< 8) == 0);
assert( (x<<12) == 0);
assert( (x<<42) == 0);

--
Michael
Thanks so much for taking the time to respond. I understand the logic
you're using, and it was one of the initial concerns I had with the
code; however, it seemed to work fine for int types (maybe by
coincidence), so I thought it would work for float types as well. One
thing I don't understand, however, is that when I step through the loop
with the debugger I see the value of the float change, even though from
step 2 onward I thought I'd be doing "*ptr |= 0", which I thought
shouldn't alter the value of the float. That led me to the conclusion
that the bitwise shift worked differently from what I expected.
Igor
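
A note on what's actually happening here: in C++ a char operand is promoted to int before a shift, so charBuf[ii] << 8 is not 0, and the asserts above would typically fire on common implementations (left-shifting a negative value, or shifting a 32-bit int by 42, is formally undefined in any case). The real culprit is sign extension: if plain char is signed, a byte such as 0xBF promotes to the int 0xFFFFFFBF, and ORing that in sets every higher byte, which is exactly why the debugger shows the float changing. It also explains why the int conversions appeared to work whenever no byte had its high bit set. A minimal sketch of a corrected loop (assuming 4-byte floats and little-endian data; std::uint32_t stands in for the UINT32 used above):

#include <cstdint>
#include <cstring>

// Assemble a 32-bit pattern from four little-endian bytes, going through
// unsigned char so that sign extension cannot pollute the high bits.
float bytesToFloat(const char* charBuf) {
    std::uint32_t bits = 0;
    for (int i = 0; i < 4; ++i) {
        bits |= static_cast<std::uint32_t>(
                    static_cast<unsigned char>(charBuf[i])) << (8 * i);
    }
    float f;
    std::memcpy(&f, &bits, sizeof f); // move the bits into the float
    return f;
}

The memcpy at the end serves the same purpose as Michael's suggestion: it gets the assembled bits into the float without an aliasing cast.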
Jun 27 '08 #3
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:
[...]
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:
Note however:

[...]
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of obects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORM APART FROM THE ONE THEY ARE CREATE ON
Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.

In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
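
For files that need to outlive a single build, a common alternative is to write a canonical representation by hand. A minimal sketch, assuming float is a 4-byte IEEE-754 type and picking big-endian as the canonical order:

#include <cstdint>
#include <cstring>
#include <ostream>

// Write a float as four big-endian bytes so the file layout does not
// depend on the host's byte order, compiler version, or build options.
void writeFloatBE(std::ostream& out, float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    char buf[4] = { static_cast<char>(bits >> 24),
                    static_cast<char>(bits >> 16),
                    static_cast<char>(bits >> 8),
                    static_cast<char>(bits) };
    out.write(buf, sizeof buf);
}

A matching reader would reassemble the four bytes in the same order, along the lines of the masking loop shown earlier in the thread.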

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #6
On May 27, 10:04 am, James Kanze <james.ka...@gmail.com> wrote:
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:

[...]
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:

Note however:

[...]
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of obects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORM APART FROM THE ONE THEY ARE CREATE ON

Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.

In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
Modulo what is guaranteed by the compiler/platform ABI, I guess.

In particular, the Boost.Serialization binary format is primarily used
by Boost.MPI (which obviously is a wrapper around MPI) for inter
process communication. I think that the idea is that the MPI layer
will take care of marshaling between peers and thus resolve any
representation difference. I think that in practice most (but not all)
MPI implementations just assume that peers use the same layout format
(i.e. same CPU/compiler/OS) and just network copy bytes back and
forward.

In a sense the distributed program is a logical single run of the same
program, even if in practice it consists of different processes running
on different machines, so your observation is still valid.

--
Giovanni P. Deretta
Jun 27 '08 #7
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:04 am, James Kanze <james.ka...@gmail.com> wrote:
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:
[...]
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:
Note however:
[...]
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of obects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORM APART FROM THE ONE THEY ARE CREATE ON
Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.
In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
Modulo what is guaranteed by the compiler/platform ABI, I guess.
Supposing you can trust them to be stable:-). In actual
practice, I've seen plenty of size changes, and I've seen long
and the floating point types change their representation, just
between different versions of the compiler. Not to mention
changes in padding which, at least in some cases depend on
compiler options. (For that matter, on most of the machines I
use, the size of a long depends on compiler options. And is the
sort of option that someone is likely to change in the makefile,
because e.g. they suddenly have to deal with big files.)
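
As an illustrative aside, one common mitigation is to pin everything that reaches disk to fixed-width types and write it field by field; that sidesteps both sizeof(long) changing with build options and padding differences between builds, though byte order still has to be handled separately. A sketch:

#include <cstdint>
#include <ostream>

struct Record {
    std::int32_t id;      // stays 32 bits whatever sizeof(long) is
    std::int64_t offset;  // stays 64 bits even if off_t options change
};

// Writing field by field, rather than dumping the whole struct,
// keeps compiler-inserted padding out of the file entirely.
void write(std::ostream& out, const Record& r) {
    out.write(reinterpret_cast<const char*>(&r.id), sizeof r.id);
    out.write(reinterpret_cast<const char*>(&r.offset), sizeof r.offset);
}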
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for inter process communication. I think that the
idea is that the MPI layer will take care of marshaling
between peers and thus resolve any representation difference.
I think that in practice most (but not all) MPI
implementations just assume that peers use the same layout
format (i.e. same CPU/compiler/OS) and just network copy bytes
back and forward.
In a sense the distributed program is a logical single run of
the same program even if in practice are different processes
running on different machines, so your observation is still
valid
If the programs are not running on different machines, what's
the point of marshalling? Just put the objects in shared
memory. Marshalling is only necessary if the data is to be used
in a different place or time (networking or persistency). And a
different place or time means a different machine (sooner or
later, in the case of time).

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #8
On May 28, 10:30 am, James Kanze <james.ka...@gmail.com> wrote:
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:04 am, James Kanze <james.ka...@gmail.com> wrote:
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:
[...]
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:
Note however:
[...]
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of obects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORM APART FROM THE ONE THEY ARE CREATE ON
Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.
In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
Modulo what is guaranteed by the compiler/platform ABI, I guess.

Supposing you can trust them to be stable:-). In actual
practice, I've seen plenty of size changes, and I've seen long
and the floating point types change their representation, just
between different versions of the compiler. Not to mention
changes in padding which, at least in some cases depend on
compiler options. (For that matter, on most of the machines I
use, the size of a long depends on compiler options. And is the
sort of option that someone is likely to change in the makefile,
because e.g. they suddenly have to deal with big files.)
The size of long or that of off_t?
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for inter process communication. I think that the
idea is that the MPI layer will take care of marshaling
between peers and thus resolve any representation difference.
I think that in practice most (but not all) MPI
implementations just assume that peers use the same layout
format (i.e. same CPU/compiler/OS) and just network copy bytes
back and forward.
In a sense the distributed program is a logical single run of
the same program even if in practice are different processes
running on different machines, so your observation is still
valid

If the programs are not running on different machines, what's
the point of marshalling? Just put the objects in shared
memory. Marshalling is only necessary if the data is to be used
in a different place or time (networking or persistency). And a
different place or time means a different machine (sooner or
later, in the case of time).
Well, MPI programs run on large clusters of, usually, homogeneous
machines, connected via LAN. The same program will spawn
multiple copies of itself on every machine in the cluster, and every
copy communicates via message passing.
So you have one logical program which is partitioned across multiple
machines. I guess that most MPI implementations do not bother (in fact
I do not even know if it is required by the standard) to convert
messages to a machine-agnostic format before sending them to another
peer.
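
For what it's worth, the MPI interface itself carries element types, precisely so that an implementation could convert representations between peers; whether one actually does is implementation-dependent, as noted above. A minimal sketch, assuming an MPI installation and a run with at least two ranks:

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float value = 3.14f;
    if (rank == 0) {
        // MPI_FLOAT tells the library the element type; sending raw
        // MPI_BYTEs instead would forgo any representation conversion.
        MPI_Send(&value, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}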

--
Giovanni P. Deretta
Jun 27 '08 #9
On May 28, 12:11 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 28, 10:30 am, James Kanze <james.ka...@gmail.com> wrote:
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 27, 10:04 am, James Kanze <james.ka...@gmail.com> wrote:
On May 26, 8:15 pm, c...@mailvault.com wrote:
On May 22, 1:58 am, James Kanze <james.ka...@gmail.com> wrote:
[...]
In Boost 1.35 they've added an optimization to take advantage of
contiguous collections of primitive data types. Here is a copy
of a file that is involved:
Note however:
[...]
// archives stored as native binary - this should be the fastest way
// to archive the state of a group of obects. It makes no attempt to
// convert to any canonical form.
// IN GENERAL, ARCHIVES CREATED WITH THIS CLASS WILL NOT BE READABLE
// ON PLATFORM APART FROM THE ONE THEY ARE CREATE ON
Where "same platform" here means compiled on the same hardware,
using the same version of the same compiler, and the same
compiler options. If you ever recompile your executable with a
more recent version of the compiler, or with different options,
you may no longer be able to read the data.
In sum, it's an acceptable solution for temporary files within a
single run of the executable, but not for much else.
Modulo what is guaranteed by the compiler/platform ABI, I guess.
Supposing you can trust them to be stable:-). In actual
practice, I've seen plenty of size changes, and I've seen long
and the floating point types change their representation, just
between different versions of the compiler. Not to mention
changes in padding which, at least in some cases depend on
compiler options. (For that matter, on most of the machines I
use, the size of a long depends on compiler options. And is the
sort of option that someone is likely to change in the makefile,
because e.g. they suddenly have to deal with big files.)
The size of long or that of off_t?
No matter. The point is that they have to compile with
different options, and suddenly long has changed its size.
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for inter process communication. I think that
the idea is that the MPI layer will take care of
marshaling between peers and thus resolve any
representation difference. I think that in practice most
(but not all) MPI implementations just assume that peers
use the same layout format (i.e. same CPU/compiler/OS) and
just network copy bytes back and forward. In a sense the
distributed program is a logical single run of the same
program even if in practice are different processes
running on different machines, so your observation is
still valid
If the programs are not running on different machines,
what's the point of marshalling. Just put the objects in
shared memory. Marshalling is only necessary if the data is
to be used in a different place or time (networking or
persistency). And a different place or time means a
different machine (sooner or later, in the case of time).
Well, MPI programs run on large clusters of, usually,
homogeneous machines, connected via LAN.
That's original. I don't think I've ever seen a cluster of
machines where every system in the cluster was identical. At
the very least, you'll have different versions of Sparc, or PC.
Some of which are 32 bit, and others 64. The cluster may start
out homogeneous, but one of the machines breaks down, and is
replaced with a newer model...

The real question, however, doesn't concern just the machines.
If all of the machines are running a single executable, loaded
from the same shared disk, it will probably work. If not, then
sooner or later, some of the machines will have different
compiles of the program, which may or may not be binary
compatible. In practice, the old rule always holds: identical
copies aren't. (Remember, binary compatibility can be lost just
by changing options, or using a newer version of the compiler.)
The same program will spawn multiple copies of itself on every
machine in the cluster, and every copy communicates via
message passing. So you have one logical program which is
partitioned on multiple machines. I guess that most MPI
implementations do not bother (in fact I do not even know if
it is required by the standard) to convert messages to a
machine agnostic format before sending it to another peer.
Well, I don't know much about that context. In my work, we have
a heterogeneous network, with PC's under Windows as clients, and
either PC's under Linux or Sparcs under Solaris as servers (and
high level clients). And that more or less corresponds to what
I've seen elsewhere as well.

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #10
On May 28, 10:06 pm, James Kanze <james.ka...@gmail.com> wrote:
On May 28, 12:11 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 28, 10:30 am, James Kanze <james.ka...@gmail.com> wrote:
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for inter process communication. I think that
the idea is that the MPI layer will take care of
marshaling between peers and thus resolve any
representation difference. I think that in practice most
(but not all) MPI implementations just assume that peers
use the same layout format (i.e. same CPU/compiler/OS) and
just network copy bytes back and forward. In a sense the
distributed program is a logical single run of the same
program even if in practice are different processes
running on different machines, so your observation is
still valid
If the programs are not running on different machines,
what's the point of marshalling? Just put the objects in
shared memory. Marshalling is only necessary if the data is
to be used in a different place or time (networking or
persistency). And a different place or time means a
different machine (sooner or later, in the case of time).
Well, MPI programs run on large clusters of, usually,
homogeneous machines, connected via LAN.

That's original. I don't think I've ever seen a cluster of
machines where every system in the cluster was identical.
I think that for MPI it is common. Some vendors even sell shrink
wrapped clusters in a box (something like a big closet with thousands
of different computers-on-a-board, each running a different OS image).
Even custom built MPI clusters are fairly homogeneous (i.e. at least
same architecture and OS version).

I think that you work mostly on services applications, while MPI is
more common on high performance computing.
[...]
The real question, however, doesn't concern just the machines.
If all of the machines are running a single executable, loaded
from the same shared disk, it will probably work. If not, then
sooner or later, some of the machines will have different
compiles of the program, which may or may not be binary
compatible. In practice, the old rule always holds: identical
copies aren't. (Remember, binary compatibility can be lost just
by changing options, or using a newer version of the compiler.)
Yep, one needs to be careful, but at least with the compiler I use,
options that change the ABI are explicitly documented as such.
Probably a much bigger problem are differences in third party
libraries between machines (i.e. do not expect the layout of objects
you do not control to stay stable).
The same program will spawn multiple copies of itself on every
machine in the cluster, and every copy communicates via
message passing. So you have one logical program which is
partitioned on multiple machines. I guess that most MPI
implementations do not bother (in fact I do not even know if
it is required by the standard) to convert messages to a
machine agnostic format before sending it to another peer.

Well, I don't know much about that context. In my work, we have
a heterogeneous network, with PC's under Windows as clients, and
either PC's under Linux or Sparcs under Solaris as servers (and
high level clients). And that more or less corresponds to what
I've seen elsewhere as well.
Where I work, clusters are composed of hundreds of very different
machines, but all use the same architecture and exact same OS version
(so that we can copy binaries around and not have to worry about
library incompatibilities). We do not use MPI though, but have an in-
house communication framework which does take care of marshaling in a
(mostly) system agnostic format.

--
Giovanni P. Deretta

Jun 27 '08 #11
On May 30, 6:38 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 28, 10:06 pm, James Kanze <james.ka...@gmail.com> wrote:
On May 28, 12:11 pm, gpderetta <gpdere...@gmail.com> wrote:
On May 28, 10:30 am, James Kanze <james.ka...@gmail.com> wrote:
On May 27, 12:07 pm, gpderetta <gpdere...@gmail.com> wrote:
In particular, the Boost.Serialization binary format is
primarily used by Boost.MPI (which obviously is a wrapper
around MPI) for inter process communication. I think that
the idea is that the MPI layer will take care of
marshaling between peers and thus resolve any
representation difference. I think that in practice most
(but not all) MPI implementations just assume that peers
use the same layout format (i.e. same CPU/compiler/OS) and
just network copy bytes back and forward. In a sense the
distributed program is a logical single run of the same
program even if in practice are different processes
running on different machines, so your observation is
still valid
If the programs are not running on different machines,
what's the point of marshalling? Just put the objects in
shared memory. Marshalling is only necessary if the data is
to be used in a different place or time (networking or
persistency). And a different place or time means a
different machine (sooner or later, in the case of time).
Well, MPI programs run on large clusters of, usually,
homogeneous machines, connected via LAN.
That's original. I don't think I've ever seen a cluster of
machines where every system in the cluster was identical.
I think that for MPI it is common. Some vendors even sell
shrink wrapped clusters in a box (something like a big closet
with thousands of different computers-on-a-board, each running
a different OS image). Even custom built MPI clusters are
fairly homogeneous (i.e. at least same architecture and OS
version).
I think that you work mostly on services applications, while
MPI is more common on high performance computing.
I realized that much, but I wasn't aware that it was that common
even on high performance computing. The high performance
computing solutions I've seen have mostly involved a lot of
CPU's using the same memory, so marshalling wasn't an issue.
(But I'm not much of an expert in the domain, and I've not seen
that many systems, so what I've seen doesn't mean much.)
[...]
The real question, however, doesn't concern just the machines.
If all of the machines are running a single executable, loaded
from the same shared disk, it will probably work. If not, then
sooner or later, some of the machines will have different
compiles of the program, which may or may not be binary
compatible. In practice, the old rule always holds: identical
copies aren't. (Remember, binary compatibility can be lost just
by changing options, or using a newer version of the compiler.)
Yep, one needs to be careful, but at least with the compiler I use,
options that change the ABI are explicitly documented as such.
Lucky guy:-). For the most part, what the options actually do
is well documented, and if you understand a bit about what it
means at the hardware level, you can figure out which ones are
safe, and which aren't. But it's far from explicit.

Note that this can be a problem just trying to statically link
libraries; you don't need marshalling at all to get into
trouble. (Or rather: you don't want to have to marshall every
time you pass an std::vector to a function in another module.)
Probably a much bigger problem are differences in third party
libraries between machines (i.e. do not expect the layout of
objects you do not control to stay stable).
That's another problem entirely, and affects linking more than
marshalling. The problem is that compilers may change
representation between versions, etc.
The same program will spawn multiple copies of itself on every
machine in the cluster, and every copy communicates via
message passing. So you have one logical program which is
partitioned on multiple machines. I guess that most MPI
implementations do not bother (in fact I do not even know if
it is required by the standard) to convert messages to a
machine agnostic format before sending it to another peer.
Well, I don't know much about that context. In my work, we have
a heterogeneous network, with PC's under Windows as clients, and
either PC's under Linux or Sparcs under Solaris as servers (and
high level clients). And that more or less corresponds to what
I've seen elsewhere as well.
Where I work, clusters are composed of hundreds of very
different machines, but all use the same architecture and
exact same OS version (so that we can copy binaries around and
not have to worry about library incompatibilities). We do not
use MPI though, but have an in-house communication framework
which does take care of marshaling in a (mostly) system
agnostic format.
Yes. We do something more or less like this for the clients:
they're all PC's under Windows, and we use a lowest common
denominator which should work for all Windows systems. Our
machines are geographically distributed, however, so
realistically, ensuring exactly the same version of the OS
isn't possible.

For the servers, economic considerations result in a decision to
move from Solaris on Sparc to Linux on PC, at least for all but
the most critical systems. Similarly, economic considerations
mean that the entire park won't be upgraded at the same instant.
Are you saying that if a decision comes to upgrade the
architecture, you change all of the machines in a cluster at
once? (But maybe... I can imagine that all of the machines in a
cluster still cost less than one supercomputer. And if you were
using a supercomputer, and wanted to upgrade, you'd change it
all at once. I guess it's just a different mindset.)

--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
Jun 27 '08 #12
