Bytes | Software Development & Data Engineering Community
Pointers to bit fields?

Is it possible to have pointers to bit fields?
I read somewhere that you can't.

but what if we do this

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

u_int3_t Array[100];

Will this give me an array of bitfields?

Or a double array:

u_int3_t Array[100][100];

I'm designing a big struct that is going to end up with thousands of bit
fields. I need them to be in arrays or organized somehow, otherwise it
will be a @#$%$. Space is at a premium, so I have to save as many bits
as possible.

Thanks,

Nov 2 '06 #1

On Wed, 1 Nov 2006, ma*****@gmail.com wrote:
> is it possible to have pointers to bit fields?
> I read somewhere that you can't.

Right. You can't.

> but what if we do this
>
> typedef struct u_int3_t {
>     unsigned int u_int3_t:3;
> } CMI;
>
> u_int3_t Array[100];
>
> will this give me an array of bitfields?
No, of course not. It will give you an array of 100 objects of type
'u_int3_t', whatever that is. If you added the line

typedef struct u_int3_t u_int3_t;

to your program, above that declaration, then you'd have an array of
100 objects of type 'struct u_int3_t', and each of those objects would
contain a bitfield; but that's not really the same thing as an "array
of bitfields."
> I'm designing a big struct that is going to end up with thousands of bit
> fields. I need them to be in arrays or organized somehow, otherwise it
> will be a @#$%$. Space is at a premium, so I have to save as many bits
> as possible.
Read your compiler's documentation to see whether it provides
useful non-standard extensions such as __attribute__((packed)) or
a __packed keyword. Standard C provides no such attribute, though.

With most compilers, your best bet will be something like

/* Hope this takes exactly 24 bits! */
__packed struct u_8int3_t {   /* __packed is not standard */
    int f0: 3;
    int f1: 3;
    int f2: 3;
    int f3: 3;
    int f4: 3;
    int f5: 3;
    int f6: 3;
    int f7: 3;
};

struct u_8int3_t arr[13];   /* 13 >= 100/8 */

However, this whole thing smells of premature optimization. You
should write the code in the most natural way first, and then test
it to see whether you really /need/ to go through all this pain.

HTH,
-Arthur
Nov 2 '06 #2
On 1 Nov 2006 16:53:07 -0800, ma*****@gmail.com wrote:
> is it possible to have pointers to bit fields?
> I read somewhere that you can't.
>
> but what if we do this
>
> typedef struct u_int3_t {
>     unsigned int u_int3_t:3;
> } CMI;
>
> u_int3_t Array[100];
That should not compile. Change it to:

struct u_int3_t Array[100];

or:

CMI Array[100];

Here is an excellent URL that you should read:

http://web.torek.net/torek/c/types2.html
> will this give me an array of bitfields?

Yes. But it may not be what you expected.
> I'm designing a big struct that is going to end up with thousands of bit
> fields. I need them to be in arrays or organized somehow, otherwise it
> will be a @#$%$. Space is at a premium, so I have to save as many bits
> as possible.
What exactly are you trying to accomplish? I think we need further
information to really help you.

--
jay
Nov 2 '06 #3
ma*****@gmail.com wrote:
> is it possible to have pointers to bit fields?
> I read somewhere that you can't.
>
> but what if we do this
>
> typedef struct u_int3_t {
>     unsigned int u_int3_t:3;
> } CMI;
>
> u_int3_t Array[100];
>
> will this give me an array of bitfields?
>
> or double array
> u_int3_t Array[100][100];
>
> I'm designing a big struct that is going to end up with thousands of bit
> fields. I need them to be in arrays or organized somehow, otherwise it
> will be a @#$%$. Space is at a premium, so I have to save as many bits
> as possible.
Abstraction. Write a library (API) that implements an array of
bitfields:

void set_field (unsigned value, int index);
unsigned get_field (int index);

The implementation details can be hidden from the rest of your
application, and the library can be optimised (IF NECESSARY)
independently of the app.

Say all the bit fields are of length 3. Then multiply the index by 3 to
get a bit offset, and divide that by 8 to find the appropriate byte
(can't be bothered to work out the remaining details). Can your
bitfields cross byte boundaries? Your choice. If no, it's faster; if
yes, it's smaller.

BTW if you use a whole byte for each field you are only using a few K...
What's this running in, a washing machine controller?
--
Nick Keighley

Programming should never be boring, because anything
mundane and repetitive should be done by the computer.
~Alan Turing

Nov 2 '06 #4
Thank you for the advice.

I think I should forget about the struct with arrays of structs with
bitfields (what I was trying to do before), and go with one massive
struct of bitfields, because they will pack even closer, even though it
will be ugly.

I'm using gcc 3.2.3 on RHES3 to compile the code. It doesn't need to be
portable to any other OS or arch, so I will go with
__attribute__((__packed__)).

This isn't going in a washing machine microcontroller, but it needs to
pack tightly because we're looking at storing 500,000,000 of those
little monsters in a distributed Berkeley DB, so 1 bit = 60 MB and the
entire db is now looking like 5 TB.

Having one giant struct with bitfields seems like the most efficient
way because it can pack everything down to the bit.

I'm still weary of using __attribute__((__packed__)) on the struct.
Will I be able to access/update the struct's fields like normal as long
as I don't put too large a number in there?

Nov 2 '06 #5
ma*****@gmail.com writes:
[...]
> I'm still weary of using __attribute__((__packed__)) on the struct.
> will i be able to access/update the struct's fields like normal as long
> as i don't put too large a number in there?

(I assume you mean "wary", not "weary".)

gnu.gcc.help is a good place to ask about gcc-specific extensions.

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Nov 2 '06 #6
ma*****@gmail.com wrote:
> I think I should forget about the struct with arrays of structs with
> bitfields (what I was trying to do before), and go with one massive
> struct of bitfields, because they will pack even closer, even though it
> will be ugly.
>
> I'm using gcc 3.2.3 on RHES3 to compile the code. It doesn't need to be
> portable to any other OS or arch, so I will go with
> __attribute__((__packed__)).

but one day you *will* need to port it...

> This isn't going in a washing machine microcontroller, but it needs to
> pack tightly because we're looking at storing 500,000,000 of those
> little monsters in a distributed Berkeley DB, so 1 bit = 60 MB and the
> entire db is now looking like 5 TB.

so why did you say thousands when you meant hundreds of millions?
That's *5* orders of magnitude difference! I tended not to get good
marks on my physics assignments when I was that far out...

> Having one giant struct with bitfields seems like the most efficient
> way because it can pack everything down to the bit.
>
> I'm still weary of using __attribute__((__packed__)) on the struct.
> Will I be able to access/update the struct's fields like normal as long
> as I don't put too large a number in there?

Why not write code to pack and unpack the items yourself? The compiler
is going to have to generate pretty similar code to access the bit
fields anyway.
--
Nick Keighley

We recommend, rather, that users take advantage of the extensions of
GNU C and disregard the limitations of other compilers. Aside from
certain supercomputers and obsolete small machines, there is less
and less reason ever to use any other C compiler other than for
bootstrapping GNU CC.
(Using and Porting GNU CC)

Nov 3 '06 #7
> > This isn't going in a washing machine microcontroller, but it needs to
> > pack tightly because we're looking at storing 500,000,000 of those
> > little monsters in a distributed Berkeley DB, so 1 bit = 60 MB and the
> > entire db is now looking like 5 TB.
>
> so why did you say thousands when you meant hundreds of millions?
> That's *5* orders of magnitude difference! I tended not to get good
> marks on my physics assignments when I was that far out...

There will be hundreds of millions of the structs, each with thousands
of bitfields.

How about something that will compress the data before I store it into
Berkeley DB? That way I can use normal non-bitfield variables (like
normal people). Any suggestions?

Nov 3 '06 #8
In article <11**********************@f16g2000cwb.googlegroups.com>,
<ma*****@gmail.com> wrote:
> > > This isn't going in a washing machine microcontroller, but it needs to
> > > pack tightly because we're looking at storing 500,000,000 of those
> > > little monsters in a distributed Berkeley DB, so 1 bit = 60 MB and the
> > > entire db is now looking like 5 TB.
> >
> > so why did you say thousands when you meant hundreds of millions?
> > That's *5* orders of magnitude difference! I tended not to get good
> > marks on my physics assignments when I was that far out...
>
> There will be hundreds of millions of the structs, each with thousands
> of bitfields.
>
> How about something that will compress the data before I store it into
> Berkeley DB? That way I can use normal non-bitfield variables (like
> normal people). Any suggestions?
At this point we start getting into algorithms questions rather than
C-specific questions.

In order to provide you with the best advice about compressing the
data, we would have to know the relative number of times that the
data will be written and read, and some kind of information about the
relative amount of time "just being stored" against the time being
read. We would also need some idea about probability distributions
of the bits.

- If data is written once and then re-read many many times,
then it becomes cost effective to have the packing code "work hard",
to make the data compact and easy to read quickly even if computing
it the first time is a pain

- If data is mostly sitting around stored and not looked at very often,
then compactness becomes more important than read time

- If most bits are set the same way or if there would tend to be
large blocks with similar properties, then we might choose different
compression algorithms

But these are all matters of algorithms, and are probably best dealt
with in comp.compression.
I don't know what the data represents, so I'll go ahead and ask:
What is the importance that a particular bit retrieval be correct?
I ask because for -some- applications, efficiencies in time and
storage can be achieved by using probabilistic storage rather than
guaranteed storage.

One example of this is the storage of words for spell checking
purposes: you don't need to store the words themselves, you only need
to store information -about- the words, and it does not have to
be fully accurate because the consequences of getting a spell
check slightly wrong are usually not extremely high. So one
approach used in the spell-checking example is to use several
(e.g., 6) different hash algorithms applied to the word, with each
hash algorithm outputting a bit position in the same array.
On storage of a new word, you set the bit at each of those positions.
To check a word you check each of those bit positions; if any of the
bits are *not* set then the word is not in the original list; if all
of the bits -are- set, then either the word was in the original list,
or else there was a probabilistic clash with a set of other words.
The larger the bit array you use, and the more hash functions you use,
the lower the chance of a "false positive" -- but that means
you can trade-off time (number of hashes processed) and space
(size of the bit array) against probability of accuracy (when the
consequences of inaccuracy are not severe.)

But that's an algorithm, and algorithms in general belong in
comp.algorithms
--
"It is important to remember that when it comes to law, computers
never make copies, only human beings make copies. Computers are given
commands, not permission. Only people can be given permission."
-- Brad Templeton
Nov 3 '06 #9
Thanks for the advice.

Right now I'm experimenting with zlib. Is that a good general-purpose
compression library?

I was trying to come up with a new scheme where I just have numbered
states and then just store the number of the state in Berkeley DB, but
I ended up with too many states, so that won't work.

> In order to provide you with the best advice about compressing the
> data, we would have to know the relative number of times that the
> data will be written and read, and some kind of information about the
> relative amount of time "just being stored" against the time being
> read. We would also need some idea about probability distributions
> of the bits.

The written:read ratio will be 9:10, so almost as many writes as reads,
but space is more of a premium than CPU.

> - If data is mostly sitting around stored and not looked at very often,
> then compactness becomes more important than read time

Some of the data will become inactive and later cleaned up. About 10%
of the data will be accessed several times daily. (Please don't
recommend caching.)

> - If most bits are set the same way or if there would tend to be
> large blocks with similar properties, then we might choose different
> compression algorithms

I think there will be a lot of 0's, so I believe any general compression
algorithm will perform well.

> I don't know what the data represents, so I'll go ahead and ask:
> What is the importance that a particular bit retrieval be correct?
> I ask because for -some- applications, efficiencies in time and
> storage can be achieved by using probabilistic storage rather than
> guaranteed storage.

95+% accuracy would be acceptable. I'd have to go negotiate that :p

So far it's looking like a giant struct with bitfields that will be
compressed with zlib, then stored in Berkeley DB.

Nov 3 '06 #10
