
Variadic functions calling variadic functions with the argument list, HLL bit shifts on LE processors


Hi,

I hope you can help me understand the varargs facility.

Say I am programming in ISO C including stdarg.h and I declare a
function as so:

void log_printf(const char* logfilename, const char* formatter, ...);

Then, I want to call it as so:

int i = 1;
const char* s = "string";

log_printf("log file.txt", "Int: %d String %s", i, s);

Then, in the definition of log_printf I have something along the lines
of this:

void log_printf(const char* logfilename, const char* formatter, ...){

va_list ap;

FILE* logfile;
if(logfilename == NULL) { return; }
if(formatter == NULL) { return; }
logfile = fopen(logfilename, "a");
if( logfile == NULL ) { return; }

va_start(ap, formatter);

fprintf(logfile, formatter, ap);

va_end(ap);

fprintf(logfile, "\n");
fclose(logfile);
return;

}

As you can see I want to call fprintf with its variable argument list
that is the same list of arguments as passed to log_printf.

Please explain a correct implementation of this function. I
looked in the C Reference Manual and Schildt's handy C/C++ Programmer's
Reference and various platforms' man pages and documentation with
regard to this, and don't quite get it. I searched for va_start(ap,
fmt).
Then, also I have some questions about shift on little-endian
processors. As an obligatory topicality justification, performance
issues of C are on-topic on comp.lang.c. Anyways, in the HLL C when
you shift the 32 bit unsigned value 0x00010000 one bit right the result
is equal to 0x00008000. Now, on little-endian architectures that is
represented in memory as

00 00 01 00

and then

00 80 00 00

On the LE register as well, its representation is

00 00 01 00

and then shifting it one bit right in the HLL leads to

00 80 00 00

but one would hope that the shr for shift right instruction would
instead leave:

00 00 00 80

so I am wondering if besides using assembler instructions there are
other well known methods in the high level language for shifting,
particularly in terms of one bit shift and multiples of eight bit
shifts.

That's basically about the conflict between msb->lsb bit fill order but
LSB->MSB, LE or Little-Endian byte order, and correspondingly between
lsb->msb and MSB->LSB, BE or Big-Endian, in terms of high level shift
and instruction level shift. I have not decompiled the output of the
compiler on the LE machine, maybe that would be the most direct way to
see what the compiler does to accomplish bit shifts across byte
boundaries that preserve numeric values, but if you happen to know
offhand, that would be interesting and appreciated. I'm basically
looking at the basic necessity of assembler for a bit scanner. It is
acceptably easily done in the HLL, I'm just concerned about trivial
issues of compiler instruction generation.

Here's another question, but it's about the linker. Say, a unit
contains multiple symbols, that have by default the extern storage
class. I guess it's up to the linker to either draw in all symbols of
the module or pull the used symbols out of the compiled module. I look
at the output with gcc, and it draws in symbols that are unused, just
because some other symbol in the same compilation unit is used. for
example:

[space:~/Desktop/fc_test] nm Build/test
....
00003c7c T _fc_mem_free
00003c38 T _fc_mem_malloc
00003cc4 T _fc_mem_realloc
00003d10 T _fc_mem_stackalloc
00003d6c T _fc_mem_stackfree
....

All of those definitions are in the same compilation or translation
unit, but only fc_mem_stackalloc and fc_mem_stackfree are actually
used, and there is no reason for that code to be in the output. I
change the optimization to -O3 and the unnecessary symbols are still
there.

[space:~/Desktop/fc_test] space% cc -v
Reading specs from /usr/libexec/gcc/darwin/ppc/2.95.2/specs
Apple Computer, Inc. version gcc-926, based on gcc version 2.95.2
19991024 (release)

It's one consideration to divide the compilation units so that each
compilation unit contains only one function, but that gets very bulky,
as some of the translation units in this project have dozens of nearly
identical functions, and I would rather not have hundreds of source
code files if I could hint to the linker to remove unnecessary code.
Some programs linking to the static library only need one of those
hundreds of generated functions.

So:

1. variadic function arguments going to called variadic function?
2. HLL shift on Little-Endian?
3. linker hints for unneeded symbols besides compiling into separate
units

Then I have some questions about best practices with signals, Microsoft
SEH or Structured Exception Handling, portability and runtime
dependency of longjmp, and stuff.

Thank you!

Ross F.

Nov 14 '05 #1
Ross A. Finlayson wrote:
Hi,

I hope you can help me understand the varargs facility.

Say I am programming in ISO C including stdarg.h and I declare a
function as so:

void log_printf(const char* logfilename, const char* formatter, ...);

Then, I want to call it as so:

int i = 1;
const char* s = "string";

log_printf("log file.txt", "Int: %d String %s", i, s);

Then, in the definition of log_printf I have something along the lines
of this:

void log_printf(const char* logfilename, const char* formatter, ...){

va_list ap;

FILE* logfile;
if(logfilename == NULL) { return; }
if(formatter == NULL) { return; }
logfile = fopen(logfilename, "a");
if( logfile == NULL ) { return; }

va_start(ap, formatter);

fprintf(logfile, formatter, ap);

va_end(ap);

fprintf(logfile, "\n");
fclose(logfile);
return;

}

As you can see I want to call fprintf with its variable argument list
that is the same list of arguments as passed to log_printf.

You want vfprintf(); that's exactly what it's for.
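
For instance, a minimal sketch of the corrected function, with
vfprintf() substituted for the fprintf(..., ap) call and the rest of
the body as you posted it:

#include <stdarg.h>
#include <stdio.h>

void log_printf(const char* logfilename, const char* formatter, ...)
{
    va_list ap;
    FILE* logfile;

    if (logfilename == NULL || formatter == NULL) { return; }
    logfile = fopen(logfilename, "a");
    if (logfile == NULL) { return; }

    va_start(ap, formatter);
    vfprintf(logfile, formatter, ap); /* consumes the variable arguments */
    va_end(ap);

    fprintf(logfile, "\n");
    fclose(logfile);
}
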
Then, also I have some questions about shift on little-endian
processors. As an obligatory topicality justification, performance
issues of C are on-topic on comp.lang.c. Anyways, in the HLL C when
Erm, no -- at least not at the implementation level.

[snip]

1. variadic function arguments going to called variadic function?
2. HLL shift on Little-Endian?
3. linker hints for unneeded symbols besides compiling into separate
units
The operation of particular linkers is not topical here.
Then I have some questions about best practices with signals, Microsoft
SEH or Structured Exception Handling, portability and runtime
dependency of longjmp, and stuff.

Thank you!

Ross F.


HTH,
--ag
--
Artie Gold -- Austin, Texas
http://it-matters.blogspot.com (new post 12/5)
http://www.cafepress.com/goldsays
Nov 14 '05 #2
Artie Gold wrote:
The operation of particular linkers is not topical here.

Then I have some questions about best practices with signals, Microsoft SEH or Structured Exception Handling, portability and runtime
dependency of longjmp, and stuff.

Thank you!

Ross F.


HTH,
--ag
--
Artie Gold -- Austin, Texas
http://it-matters.blogspot.com (new post 12/5)
http://www.cafepress.com/goldsays


Thanks!

Yeah that vfprintf is just what I needed.

http://www.cppreference.com/stdio/vp..._vsprintf.html

About the linker, some linkers do carefully extract only the needed
symbols, and it is a linker- and linker-option-dependent issue. While
that is so, it affects the design of the C program, because to get
reduced code size without myriad linker configurations, there ends up
being the clutter of hundreds of compilation units.

About the HLL shift on LE architecture, it generally doesn't matter;
still, I want to consider good practices for designing the HLL C source
code so it can be used, without reconfiguration, whatever processor the
generated instructions execute on.

About the signals and stuff, that's basically about a shim, and other
considerations of, say, having a library component allocate heap
memory, report errors, and other runtime aspects.

Hey thanks again.

Ross F.

Nov 14 '05 #3
>I hope you can help me understand the varargs facility.

Say I am programming in ISO C including stdarg.h and I declare a
function as so:

void log_printf(const char* logfilename, const char* formatter, ...);

Then, I want to call it as so:

int i = 1;
const char* s = "string";

log_printf("lo gfile.txt", "Int: %d String %s", i, s);

Then, in the definition of log_printf I have something along the lines
of this:

void log_printf(const char* logfilename, const char* formatter, ...){

va_list ap;

FILE* logfile;
if(logfilename == NULL) { return; }
if(formatter == NULL) { return; }
logfile = fopen(logfilename, "a");
if( logfile == NULL ) { return; }

va_start(ap, formatter);

fprintf(logfile, formatter, ap);
The only reason for the existence of the vfprintf() function is to
deal with this kind of problem.

va_end(ap);

fprintf(logfile, "\n");
fclose(logfile);
return;

}
Then, also I have some questions about shift on little-endian
processors. As an obligatory topicality justification, performance
issues of C are on-topic on comp.lang.c. Anyways, in the HLL C when
you shift the 32 bit unsigned value 0x00010000 one bit right the result
is equal to 0x00008000.
Yes, regardless of endianness. You are supposed to write programs
so they don't care about endianness.
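
For instance, a sketch: picking the bytes out of a 32-bit value with
shifts and masks gives the same bytes on any host, using the value from
above, with no knowledge of the storage order needed:

unsigned long v = 0x00008000UL;
unsigned char b0 = (unsigned char)(v & 0xFF); /* 0x00, least significant */
unsigned char b1 = (unsigned char)((v >> 8) & 0xFF); /* 0x80 */
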
Now, on little-endian architectures that is
represented in memory as

00 00 01 00

and then

00 80 00 00

On the LE register as well, its representation is
There's no such thing as a "LE register" on most popular CPU
architectures. Registers have whatever width they have, and the
shift is done on the value. A key feature of registers on many
CPUs is that they do not have a memory address and therefore do not
have endianness.

00 00 01 00

and then shifting it one bit right in the HLL leads to

00 80 00 00

but one would hope that the shr for shift right instruction would
instead leave:

00 00 00 80

so I am wondering if besides using assembler instructions there are
other well known methods in the high level language for shifting,
particularly in terms of one bit shift and multiples of eight bit
shifts.
The compiler needs to use assembler instructions. There
isn't much else that it CAN do. You were expecting magic?
Transmute Endianness incantations?
That's basically about the conflict between msb->lsb bit fill order but
LSB->MSB, LE or Little-Endian byte order, and correspondingly between
lsb->msb and MSB->LSB, BE or Big-Endian, in terms of high level shift
and instruction level shift.
What conflict? A shift operation generally maps directly to an
assembly-language shift instruction. Sometimes a shift for a variable
amount might have to use a loop.

It's shifting a VALUE.
I have not decompiled the output of the
compiler on the LE machine, maybe that would be the most direct way to
see what the compiler does to accomplish bit shifts across byte
boundaries that preserve numeric values, but if you happen to know
offhand, that would be interesting and appreciated.
A value gets shifted in a register. Registers don't HAVE byte boundaries,
or at least not ones that make any difference, performance wise.
I'm basically
looking at the basic necessity of assembler for a bit scanner. It is
acceptably easily done in the HLL, I'm just concerned about trivial
issues of compiler instruction generation.

Here's another question, but it's about the linker. Say, a unit
contains multiple symbols, that have by default the extern storage
class. I guess it's up to the linker to either draw in all symbols of
the module or pull the used symbols out of the compiled module. I look
at the output with gcc, and it draws in symbols that are unused, just
because some other symbol in the same compilation unit is used. for
Can you suggest a way for gcc to determine what part of the object
to leave out based on which symbols are wanted and which aren't?
It's not easy.
example:

[space:~/Desktop/fc_test] nm Build/test
...
00003c7c T _fc_mem_free
00003c38 T _fc_mem_malloc
00003cc4 T _fc_mem_realloc
00003d10 T _fc_mem_stackalloc
00003d6c T _fc_mem_stackfree
...

All of those definitions are in the same compilation or translation
unit, but only fc_mem_stackalloc and fc_mem_stackfree are actually
used, and there is no reason for that code to be in the output. I
change the optimization to -O3 and the unnecessary symbols are still
there.

[space:~/Desktop/fc_test] space% cc -v
Reading specs from /usr/libexec/gcc/darwin/ppc/2.95.2/specs
Apple Computer, Inc. version gcc-926, based on gcc version 2.95.2
19991024 (release)

It's one consideration to divide the compilation units so that each
compilation unit contains only one function, but that gets very bulky,
as some of the translation units in this project have dozens of nearly
identical functions, and I would rather not have hundreds of source
code files if I could hint to the linker to remove unnecessary code.
Some programs linking to the static library only need one of those
hundreds of generated functions.

So:

1. variadic function arguments going to called variadic function?
2. HLL shift on Little-Endian?
3. linker hints for unneeded symbols besides compiling into separate
units

Then I have some questions about best practices with signals, Microsoft
SEH or Structured Exception Handling, portability and runtime
dependency of longjmp, and stuff.

Thank you!

Ross F.

Gordon L. Burditt
Nov 14 '05 #4
Gordon Burditt wrote:
...
Then, also I have some questions about shift on little-endian
processors. As an obligatory topicality justification, performance
issues of C are on-topic on comp.lang.c. Anyways, in the HLL C when
you shift the 32 bit unsigned value 0x00010000 one bit right the
result is equal to 0x00008000.
Yes, regardless of endianness. You are supposed to write programs
so they don't care about endianness.
Now, on little-endian architectures that is
represented in memory as

00 00 01 00

and then

00 80 00 00

On the LE register as well, its representation is


There's no such thing as a "LE register" on most popular CPU
architectures. Registers have whatever width they have, and the
shift is done on the value. A key feature of registers on many
CPUs is that they do not have a memory address and therefore do not
have endianness.

00 00 01 00

and then shifting it one bit right in the HLL leads to

00 80 00 00

but one would hope that the shr for shift right instruction would
instead leave:

00 00 00 80

so I am wondering if besides using assembler instructions there are
other well known methods in the high level language for shifting,
particularly in terms of one bit shift and multiples of eight bit
shifts.


The compiler needs to use assembler instructions. There
isn't much else that it CAN do. You were expecting magic?
Transmute Endianness incantations?
That's basically about the conflict between msb->lsb bit fill order
but LSB->MSB, LE or Little-Endian byte order, and correspondingly
between lsb->msb and MSB->LSB, BE or Big-Endian, in terms of high level
shift and instruction level shift.


What conflict? A shift operation generally maps directly to an
assembly-language shift instruction. Sometimes a shift for a variable
amount might have to use a loop.

It's shifting a VALUE.
I have not decompiled the output of the
compiler on the LE machine, maybe that would be the most direct way to
see what the compiler does to accomplish bit shifts across byte
boundaries that preserve numeric values, but if you happen to know
offhand, that would be interesting and appreciated.
A value gets shifted in a register. Registers don't HAVE byte
boundaries, or at least not ones that make any difference, performance
wise.
I'm basically
looking at the basic necessity of assembler for a bit scanner. It is
acceptably easily done in the HLL, I'm just concerned about trivial
issues of compiler instruction generation.

Here's another question, but it's about the linker. Say, a unit
contains multiple symbols, that have by default the extern storage
class. I guess it's up to the linker to either draw in all symbols of
the module or pull the used symbols out of the compiled module. I look
at the output with gcc, and it draws in symbols that are unused, just
because some other symbol in the same compilation unit is used. for


Can you suggest a way for gcc to determine what part of the object
to leave out based on which symbols are wanted and which aren't?
It's not easy.
example:

[space:~/Desktop/fc_test] nm Build/test
...
00003c7c T _fc_mem_free
00003c38 T _fc_mem_malloc
00003cc4 T _fc_mem_realloc
00003d10 T _fc_mem_stackalloc
00003d6c T _fc_mem_stackfree
...

All of those definitions are in the same compilation or translation
unit, but only fc_mem_stackalloc and fc_mem_stackfree are actually
used, and there is no reason for that code to be in the output. I
change the optimization to -O3 and the unnecessary symbols are still
there.

[space:~/Desktop/fc_test] space% cc -v
Reading specs from /usr/libexec/gcc/darwin/ppc/2.95.2/specs
Apple Computer, Inc. version gcc-926, based on gcc version 2.95.2
19991024 (release)

It's one consideration to divide the compilation units so that each
compilation unit contains only one function, but that gets very bulky,
as some of the translation units in this project have dozens of nearly
identical functions, and I would rather not have hundreds of source
code files if I could hint to the linker to remove unnecessary code.
Some programs linking to the static library only need one of those
hundreds of generated functions.

So:

1. variadic function arguments going to called variadic function?
2. HLL shift on Little-Endian?
3. linker hints for unneeded symbols besides compiling into separate
units

Then I have some questions about best practices with signals, Microsoft
SEH or Structured Exception Handling, portability and runtime
dependency of longjmp, and stuff.

Thank you!

Ross F.

Gordon L. Burditt


Hi,

Not really, no.

About Little-Endian, I'm aware many chips have big and little-endian
modes, eg SPARC, PowerPC, Itanium, but the Intel x86 chips are
little-endian, and as well the low order bytes of the word are aliased
to specific sections of the register, and there are zillions of those
things sitting around.

About the code size handling, that's a good question, I don't offhand
know the binary format of the object files, ELF, COFF, PE, a.out, with
DWARF or STABS or ???

I guess there's basically static and dynamic analysis. That's
different than static and extern storage, with the auto and the hey hey
and the I-don't-know. It's relocatable code, mostly: if you can tell
the compiler that you aren't calling any functions by fixed literal
addresses in the objects, or even if you are, then it should be able to
resolve that, subject to limitations in the intermediate object format;
it must, or otherwise it couldn't satisfy the symbols in unused
compilation units, ie, linking unused objects.

Some compilers do that, reduced size, they have to do it somehow. If I
have 45 compilation objects in the build, I'd rather not have 145, but
if I have a compilation unit with 20 nearly identical functions with
widely varying runtime, and the program only uses one of those
functions, then I definitely want the least amount of stuff there, from
compile time.

About Little-Endian and shift, I'm trying to think of an example. Say
I have a 32 bit unsigned word with each byte being an encoded symbol.

0x AA BB CC DD

If a certain bit in another register is set, I want to shift that
right.

0x 00 AA BB CC

Then, regardless of whether CC or DD is in the low byte, I test that
byte for a bit. If it's set, there's this processing, and then what's
left is conditionally shifted 16 bits.

0x 00 00 00 AA
or
0x 00 00 AA BB

Now, all I'm interested in is the low byte there. So, on the x86
register eax, say, it is as so

------------- eax -------------
               ------ ax ------
               - ah -    - al -
AA     00     00     00

then, if I want to move that byte off then I think it has to go onto ah
or al to be moved into a memory location at a byte offset. So, I'm
causing myself grief about flipping the original number to 0x DD CC BB
AA and shifting it the other way, for the Little-Endian case, because
those bit tests (and + jnz, I suppose) or moves work off of the byte
size register aliases off of the x86.

Then I get to thinking about my near complete lack of knowledge of how
the Big-Endian processor best or most happily works with moving bytes
off of the register into memory, using C. What are some good examples
of useful knowledge from C of the byte order of the processor, and how
it loads bytes to and from memory, or casting int & 0xFF to byte?

Besides this kind of stuff, all the non-fixed width or standard char*
or wchar_t* function integer variables have the size of
sizeof(register_t) from machine/types.h, but without actually including
that file or using that type, partially because it's signed and thus
right shifts have sign extension, which is unworkable for sequentially
testing each bit of the register: the right shift would sign-extend
what starts out as a single-bit mask. That's not pedantically about the
C language specification, but it is about proper usage.
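
A sketch of the sign-extension hazard, assuming 32-bit types:

unsigned umask = 0x80000000U;
umask >>= 1; /* 0x40000000: unsigned right shift brings in zeros */

int smask = (int)0x80000000U; /* negative on a 32-bit int, by assumption */
smask >>= 1; /* implementation-defined; commonly 0xC0000000 */

so only an unsigned type keeps a single-bit mask a single bit as it
moves right.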

So, I should probably go compile some shifts and step through their
disassembly on the Pentium, to perhaps enhance but hopefully diminish
my deep personal confusion about low-voltage electronics.

Then, I'm wondering about signal-friendly uses of basically heap memory
or I/O functions. That's about installing a handler to clean up the
custom library junk, for reliability, but also to some extent about the
existence, on one single-core processor, of one thread of execution.

Also, when I'm accessing this buffer full of bytes, I assume the buffer
itself is word aligned and its length is an integral number of words,
and I am concerned that bounds checking will complain. So, that is not
really a problem, it just needs some code to treat the beginning and
end of the memory buffer as bytes and the rest as words.

I was playing with the vprintf and getting undefined behavior for not
calling va_start, it was printing FEEDFACE. Now, CAFEBABE I have seen
before, DEADBEEF, but FEEDFACE was a new one to me. It works great,
I'm really happy about that.

Hey, I compile with -pedantic -ansi, and besides complaining about long
long not being ANSI from stdlib.h, it says "ANSI C does not support
const or volatile functions" for something along the lines of const
char* name(int number);.

Anyways that's a lot of unrelated garbage to your question. So, no, I
don't offhand know how the linker works.

Thanks! Warm regards,

Ross F.

Nov 14 '05 #5
>About Little-Endian, I'm aware many chips have big and little-endian
modes, eg SPARC, PowerPC, Itanium, but the Intel x86 chips are
little-endian, and as well the low order bytes of the word are aliased
to specific sections of the register, and there are zillions of those
things sitting around.
So what? Write your code so it doesn't matter what endian the
processor is.
About Little-Endian and shift, I'm trying to think of an example. Say
I have a 32 bit unsigned word with each byte being an encoded symbol.

0x AA BB CC DD
It's a VALUE. Quit putting those stupid spaces in there.
If a certain bit in another register is set, I want to shift that
right.

0x 00 AA BB CC

Then, regardless of whether CC or DD is in the low byte, I test that
byte for a bit. If it's set, there's this processing, and then what's
left is conditionally shifted 16 bits.

0x 00 00 00 AA
or
0x 00 00 AA BB

Now, all I'm interested in is the low byte there. So, on the x86
register eax, say, it is as so

------------- eax -------------
               ------ ax ------
               - ah -    - al -
AA     00     00     00

then, if I want to move that byte off then I think it has to go onto ah
or al to be moved into a memory location at a byte offset.
A one-byte value does not have byte order.
So, I'm
causing myself grief about flipping the original number to 0x DD CC BB
AA and shifting it the other way, for the Little-Endian case, because
those bit tests (and + jnz, I suppose) or moves work off of the byte
size register aliases off of the x86.
What the heck are you talking about? You think flipping the order
is FASTER? It's not. You think flipping the order and shifting gives
you the same answer? It doesn't. Reversing the order of BYTES does not
reverse the order of BITS.

If you're doing a 32-bit shift, IT DOES A 32-BIT SHIFT. Stupid register
aliases have nothing to do with it. By definition, AL refers to the low-order
byte of EAX on an Intel processor, and has nothing to do with whether the
processor is big-endian or little-endian. If you're doing a 32-bit bitwise
AND, it does a full 32-bit bitwise AND. byte size register aliases have
nothing to do with it.
Then I get to thinking about my near complete lack of knowledge of how
the Big-Endian processor best or most happily works with moving bytes
off of the register into memory, using C. What are some good examples
of useful knowledge from C of the byte order of the processor, and how
it loads bytes to and from memory, or casting int & 0xFF to byte?
When you use C, you're not supposed to CARE about the endianness, or lack
thereof, of the processor.

If it wants to load a 32-bit value out of memory, IT LOADS A 32-bit VALUE
OUT OF MEMORY. There are no speed issues about which end is done first,
so you shouldn't care. The bytes wind up in the right places in the register
based on endianness or lack thereof.
So, I should probably go compile some shifts and step through their
disassembly on the Pentium, to perhaps enhance but hopefully diminish
my deep personal confusion about low-voltage electronics.


The specification of the software interface has nothing to do with whether
the Pentium uses low-voltage electronics or quantum warp fields or orbital
mind-control lasers. It still works the same.

Gordon L. Burditt
Nov 14 '05 #6
Gordon Burditt wrote:
About Little-Endian, I'm aware many chips have big and little-endian
modes, eg SPARC, PowerPC, Itanium, but the Intel x86 chips are
little-endian, and as well the low order bytes of the word are aliased
to specific sections of the register, and there are zillions of those
things sitting around.
So what? Write your code so it doesn't matter what endian the
processor is.
About Little-Endian and shift, I'm trying to think of an example. Say
I have a 32 bit unsigned word with each byte being an encoded symbol.
0x AA BB CC DD


It's a VALUE. Quit putting those stupid spaces in there.
If a certain bit in another register is set, I want to shift that
right.

0x 00 AA BB CC

Then, regardless of whether CC or DD is in the low byte, I test that
byte for a bit. If it's set, there's this processing, and then what's
left is conditionally shifted 16 bits.

0x 00 00 00 AA
or
0x 00 00 AA BB

Now, all I'm interested in is the low byte there. So, on the x86
register eax, say, it is as so

-------eax -----
--eah--- ---eal---
-ah- -al-
AA 00 00 00

then, if I want to move that byte off then I think it has to go onto ah
or al to be moved into a memory location at a byte offset.


A one-byte value does not have byte order.
So, I'm
causing myself grief about flipping the original number to 0x DD CC BB
AA and shifting it the other way, for the Little-Endian case, because
those bit tests (and + jnz, I suppose) or moves work off of the byte
size register aliases off of the x86.


What the heck are you talking about? You think flipping the order
is FASTER? It's not. You think flipping the order and shifting gives
you the same answer? It doesn't. Reversing the order of BYTES does not
reverse the order of BITS.

If you're doing a 32-bit shift, IT DOES A 32-BIT SHIFT. Stupid register
aliases have nothing to do with it. By definition, AL refers to the
low-order byte of EAX on an Intel processor, and has nothing to do with
whether the processor is big-endian or little-endian. If you're doing a
32-bit bitwise AND, it does a full 32-bit bitwise AND. Byte size
register aliases have nothing to do with it.
Then I get to thinking about my near complete lack of knowledge of how
the Big-Endian processor best or most happily works with moving bytes
off of the register into memory, using C. What are some good examples
of useful knowledge from C of the byte order of the processor, and how
it loads bytes to and from memory, or casting int & 0xFF to byte?
When you use C, you're not supposed to CARE about the endianness, or
lack thereof, of the processor.

If it wants to load a 32-bit value out of memory, IT LOADS A 32-BIT
VALUE OUT OF MEMORY. There are no speed issues about which end is done
first, so you shouldn't care. The bytes wind up in the right places in
the register based on endianness or lack thereof.
So, I should probably go compile some shifts and step through their
disassembly on the Pentium, to perhaps enhance but hopefully diminish
my deep personal confusion about low-voltage electronics.

The specification of the software interface has nothing to do with
whether the Pentium uses low-voltage electronics or quantum warp fields
or orbital mind-control lasers. It still works the same.

Gordon L. Burditt


Well, sure, C has no concept of endian-ness, and the code might be
compiled on a PDP- or Middle-Endian system, but it's a fact that for
all intents and purposes the byte order of the platform is Big-Endian,
or Little-Endian.

Now, with dealing with an int, it is true that there is no reason to
care what the byte order is, unless you have to deal with ints written
to file or stream as a sequential sequence of bytes.

Basically I'm talking processing data that is a sequence of bytes,
8-bit octets. It can be processed by loading a byte at a time, where
if CHAR_BIT does not equal 8, for example on some strange processor
with 36 bit words and 9 bit bytes that is basically mythical, the data
in question is still stored as a sequence of 8-bit octets of binary
digits, or bytes.

As well, the sequence of the bytes increases as the memory address
increases. That is to say, the most significant byte of the data
stream is at the lower memory address, similarly to how the most
significant byte of a Big-Endian integer is stored at the lower memory
address.

So, to be portable and byte-order agnostic, load a byte at a time of
the input data stream and process it.
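
For example, a sketch that assembles a 32-bit big-endian quantity one
byte at a time, where p is assumed to be an unsigned char* at the four
input bytes; this yields the same value on any host byte order:

unsigned long v = ((unsigned long)p[0] << 24) |
                  ((unsigned long)p[1] << 16) |
                  ((unsigned long)p[2] << 8) |
                  ((unsigned long)p[3]);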

To be slightly more pragmatic, load as many bytes as fit on a register
at once onto the register. It's important from the processor's
perspective that the memory address from which those BYTES_PER_WORD
many bytes are loaded is aligned to the size of the word, that
sizeof(word_t) divides evenly into the address. On some platforms,
where that is not the case, the program gets a SIGBUS Bus Error
signal, and on others the processor grumpily stops what it's doing and
loads the unaligned word. Here, a word means the generic register
size, and not necessarily the octet pair, where the DWORD type from
windows.h is the type of the register word on 32-bit systems, or a type
defined as project_word_t, or project_sword_t, where the default word
type is unsigned.

So anyways, a point here is that on the little-endian machine, when the
word from the data sequence is loaded, say 0xAABBCCDD, then the number
on the register is 0xDDCCBBAA. Now, let's say for some bizarre reason
the idea is to test each bit in data sequence in order, even if a table
lookup might _seem_ more efficient, because the loop fits on a cache
line and there are more aligned accesses. Anyways, on the
Little-Endian processor, then there is the consideration of scanning
what are bits 31 to 0 of the data sequence, on the register they are
bits 7 to 0, 15 to 8, 23 to 16, and 31 to 24.

If the fill order of the bits is reversed, then the idea is to scan in
order bit 7 to bit 0 of each byte, then on the little-endian processor
the register word can be processed as bit 0 to bit 31 on the register.
I digress.

http://groups-beta.google.com/group/...29d6567da68957

There are very real considerations of how to organize the data within a
word where only pieces of the word have meaning, that is, instead of
being an integer it is basically a char[4], or char[8] on 64-bit
processor words, and on different endian architectures, those arrays
will essentially be in reversed order when loaded onto the register by
loading that word from memory.

One of those considerations is in taking one of those char values and
extracting it from the int value. When that's done, then if the value
happens to be on one of those aliased byte registers on the x86, then
it is moved off into memory with one instruction, otherwise it has to
be copied to the register and then moved. In the high level language
C, it's possible to design the organization of the embedded codes
within the integer so that they naturally fall upon these boundaries,
making it easier for the compiler to not generate what would be
unneeded operations in light of design and compile time decisions.
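
For instance, extracting the low byte is just a mask and a cast, as a
sketch:

unsigned char c = (unsigned char)(word & 0xFF);

and when the design puts the wanted code in the low byte, the compiler
can map that onto a byte-register move where the processor has one.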

The same holds true, in an opposite kind of way, for big-endian
processors and their methods of moving part of the contents of the
register to a byte location.

The point in making these kinds of design decisions, and wasting the
time to come to these useful conclusions instead of just using getc and
putc on a file pointer, which is great, is that they allow some
rational accommodation of the processor features without totally
specializing for each processor: whether the processor is Big- or
Little-Endian, or not, makes a difference in the correct algorithms in
terms of words and sequences of bytes, and in specializing the
functions for what are basically high-level characteristics, in terms
of their immediate consequence, the word as an array or vector of
smaller sized scalar integers.

The data streams are coming in network order, that means big-endian.
Data is most efficiently loaded onto the processor register a processor
register word at a time.

Anyways, I guess I'm asking not to hear "no, program about bytes" but
rather, yes, "C is a useful high level programming language meant for
generation that closely approximates the machine features and here are
some good practices for working with sequences of data organized in
bytes."

Thanks again, I appreciate your insight. Warm regards,

Ross F.
Is this wrong?

#ifndef fc_typedefs_h
#define fc_typedefs_h

#include <stddef.h> /* wchar_t */

#include <limits.h>
#if CHAR_BIT != 8
#error This application is supported only on systems with CHAR_BIT==8
#endif

#define PLATFORM_BYTE_ORDER be

/** This is the type of the generic data pointer. */
typedef void* fc_ptr_t;

/** This is the scalar unsigned integer type of the generic data
pointer. */
typedef unsigned int fc_ptr_int_t;

/** This is the size of memory blocks in bytes, sizeof's size_t. */
typedef unsigned int fc_size_t;

/** This should be the unsigned word size of the processor, 32, 64,
128, ..., eg register_t, and not subject to right shift sign extension.
*/
// typedef unsigned int fc_word_t
typedef unsigned int fc_word_t;

/** This should be the signed word size of the processor, 32, 64, 128,
..., eg register_t. */
typedef int fc_sword_t;

/** This is sizeof(fc_word_t) * 8, the processor register word width in
bits. */
#define fc_word_width 32

/** This macro suffixes the function identifier with the word width
type size identifier. */
#define fc_word_width_suffix(x) x##_32

/** This is the number of bits in a byte, eg CHAR_BIT, and must be 8.
*/
#define fc_bits_in_byte 8

/** This is fc_word_width / fc_bits_in_byte -1, eg 32/8 -1 = 3, 64/8 -
1 = 7. */
#define fc_align_word_mask (0x00000003)

/** This is fc_word_width - 1. */
#define fc_word_max_shift 31

/** This is the literal suffix for literals 0x12345678 of fc_word_t, eg
empty or UL, then ULL, ..., and should probably be unchanged. */
#define UL UL

/** This is the default unsigned scalar integer type. */
typedef unsigned int fc_uint_t;
/** This is the default signed scalar integer type. */
typedef int fc_int_t;

/** This type is returned from functions as error or status code. */
typedef fc_int_t fc_ret_t;

/** This is the type of the symbol from the coding alphabet, >= 8 bits
wide, sizeof == 1. */
typedef unsigned char fc_symbol_t;

/** This is a character type for use with standard functions. */
typedef char fc_char;
/** This is a wide character type for use with standard functions. */
typedef wchar_t fc_wchar_t;

/** This is a fixed-width scalar type. */
typedef char fc_int8;
/** This is a fixed-width scalar type. */
typedef unsigned char fc_uint8;
/** This is a fixed-width scalar type. */
typedef short fc_int16;
/** This is a fixed-width scalar type. */
typedef unsigned short fc_uint16;
/** This is a fixed-width scalar type. */
typedef int fc_int32;
/** This is a fixed-width scalar type. */
typedef unsigned int fc_uint32;

#if fc_word_width == 64

/** This is a fixed-width scalar type. */
typedef long long fc_int64;
/** This is a fixed-width scalar type. */
typedef unsigned long long fc_uint64;

#endif /* fc_word_width == 64 */

/** This is a Boolean type. */
typedef fc_word_t fc_bool_t;

#ifndef NULL
#define NULL ((void*)0)
#endif

#endif /* fc_typedefs_h */

Nov 14 '05 #7
In article <11**********************@l41g2000cwc.googlegroups.com>,
Ross A. Finlayson <ra*@tiki-lounge.com> wrote:
:Basically I'm talking processing data that is a sequence of bytes,
:8-bit octets. It can be processed by loading a byte at a time, where
:if CHAR_BIT does not equal 8, for example on some strange processor
:with 36 bit words and 9 bit bytes that is basically mythical, the data
:in question is still stored as a sequence of 8-bit octets of binary
:digits, or bytes.

Mythical? The Sigma Xerox 5, 7, and 9; the Honeywell L6 and L66;
The PDP-6 and PDP-10 (used for TOPS-20). And probably others.

For awhile in the early 80's, it looked like 36 bit words were going
to replace 32 bit words.

Now, if you'd said "legendary" instead of "mythical"...
--
Reviewers should be required to produce a certain number of
negative reviews - like police given quotas for handing out
speeding tickets. -- The Audio Anarchist
Nov 14 '05 #8
>Well, sure, C has no concept of endian-ness, and the code might be
compiled on a PDP- or Middle-Endian system, but it's a fact that for
all intents and purposes the byte order of the platform is Big-Endian,
or Little-Endian.
And the processor's price tag might be denominated in dollars or
it might be in Euros, but that isn't relevant either.
Now, with dealing with an int, it is true that there is no reason to
care what the byte order is, unless you have to deal with ints written
to file or stream as a sequential sequence of bytes.
In that case, you have to deal with the bytes in the ENDIAN ORDER THEY
WERE WRITTEN IN, not the endian order of the native processor.
Basically I'm talking processing data that is a sequence of bytes,
8-bit octets. It can be processed by loading a byte at a time, where
if CHAR_BIT does not equal 8, for example on some strange processor
with 36 bit words and 9 bit bytes that is basically mythical, the data
in question is still stored as a sequence of 8-bit octets of binary
digits, or bytes.

As well, the sequence of the bytes increases as the memory address
increases. That is to say, the most significant byte of the data
stream is at the lower memory address, similarly to how the most
significant byte of a Big-Endian integer is stored at the lower memory
address.

So, to be portable and byte-order agnostic, load a byte at a time of
the input data stream and process it.

To be slightly more pragmatic, load as many bytes as fit on a register
at once onto the register. It's important from the processor's
perspective that the memory address from which those BYTES_PER_WORD
many bytes are loaded is aligned to the size of the word, that
sizeof(word_t) divides evenly into the address.
You are often NOT guaranteed this in network packets.
The required alignment is NOT necessarily that the address
is a multiple of the word size of that type. For example, an
8-byte quantity might have to be aligned on a multiple of *4*,
not 8.
On some platforms,
where that is not the case, the program results SIGBUS Bus Error
signal, and on others the processor grumpily stops what it's doing and
loads the unaligned word.
Or sometimes it loads part of the *WRONG* word.
Here, a word means the generic register
size, and not necessarily the octet pair, where the DWORD type from
windows.h is the type of the register word on 32-bit systems, or a type
defined as project_word_t, or project_sword_t , where the default word
type is unsigned.
There is no <windows.h> in C, nor any of the *-crap-word types.

So anyways, a point here is that on the little-endian machine, when the
word from the data sequence is loaded, say 0xAABBCCDD, then the number
on the register is 0xDDCCBBAA. Now, let's say for some bizarre reason
the idea is to test each bit in data sequence in order, even if a table
lookup might _seem_ more efficient, because the loop fits on a cache
line and there are more aligned accesses. Anyways, on the
Little-Endian processor, then there is the consideration of scanning
what are bits 31 to 0 of the data sequence, on the register they are
bits 7 to 0, 15 to 8, 23 to 16, and 31 to 24.
The bits are numbered 0x00000001, 0x00000002, 0x00000004, 0x00000008,
0x00000010, ... 0x80000000. You can use a value of 1 advanced by shifting
to sequence through them. C does *NOT* define a bit numbering.
There is no universal agreement on little-endian machines whether bit 31
refers to the least-significant or most-significant bit of a 32-bit word.
There is no universal agreement on big-endian machines either.
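
A sketch of sequencing through the bits that way, most significant
first, assuming the value is 32 bits in an unsigned long:

unsigned long mask;
for (mask = 0x80000000UL; mask != 0; mask >>= 1) {
    if (value & mask) {
        /* this bit of value is set */
    }
}

Only significance is assumed, not any bit numbering.
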
If the fill order of the bits is reversed, then the idea is to scan in
order bit 7 to bit 0 of each byte, then on the little-endian processor
the register word can be processed as bit 0 to bit 31 on the register.
I digress.
There is no such bit numbering. If you want to define your own, don't
present it as being universal, and you have to define what it is first.

http://groups-beta.google.com/group/...29d6567da68957

There are very real considerations of how to organize the data within a
word where only pieces of the word have meaning, that is, instead of
being an integer it is basically a char[4], or char[8] on 64-bit
processor words, and on different endian architectures, those arrays
will essentially be in reversed order when loaded onto the register by
loading that word from memory.
There's a reason why char arrays and multi-byte integers are
treated differently.
One of those considerations is in taking one of those char values and
extracting it from the int value. When that's done, then if the value
happens to be on one of those aliased byte registers on the x86, then
it is moved off into memory with one instruction, otherwise it has to
be copied to the register and then moved. In the high level language
C, it's possible to design the organization of the embedded codes
within the integer so that they naturally fall upon these boundaries,
making it easier for the compiler to not generate what would be
unneeded operations in light of design and compile time decisions.
You're trying to save amounts of CPU time that may take years to
recover if you have to recompile the code once to get the savings.
The same holds true, in an opposite kind of way, for big-endian
processors and their methods of moving part of the contents of the
register to a byte location.

The point in making these kinds of design decisions and wasting the
time to come to these useful conclusions instead of just using getc and
putc on a file pointer, which is great, is that they allow some
rational accomodation of the processor features without totally
specializing for each processor, where whether the processor is Big- or
Little-Endian, or not, makes a difference in the correct algorithms in
terms of words and sequences of bytes, and specializing the functions
for those what are basically high level characteristics, in terms of
their immediate consequence the word as an array or vector of smaller
sized scalar integers.
Such micro-optimization is a waste of time, and encourages such
things as the carefully hand-tuned bubble sort.
The data streams are coming in network order, that means big-endian.
Data is most efficiently loaded onto the processor register a processor
register word at a time.

Anyways, I guess I'm asking not to hear "no, program about bytes" but
rather, yes, "C is a useful high level programming language meant for
generation that closely approximates the machine features and here are
some good practices for working with sequences of data organized in
bytes."
Unless you have benchmarks that demonstrate that the section of code
you are discussing is a bottleneck, I think you should be IGNORING
the endianness of the machine and write portable code.
Thanks again, I appreciate your insight. Warm regards,

Ross F.
Is this wrong?

#ifndef fc_typedefs_h
#define fc_typedefs_h

#include <stddef.h> /* wchar_t */

#include <limits.h>
#if CHAR_BIT != 8
#error This application is supported only on systems with CHAR_BIT==8
#endif

#define PLATFORM_BYTE_ORDER be

/** This is the type of the generic data pointer. */
typedef void* fc_ptr_t;

/** This is the scalar unsigned integer type of the generic data
pointer. */
typedef unsigned int fc_ptr_int_t;

/** This is the size of memory blocks in bytes, sizeof's size_t. */
typedef unsigned int fc_size_t;

/** This should be the unsigned word size of the processor, 32, 64,
128, ..., eg register_t, and not subject to right shift sign extension.
*/
// typedef unsigned int fc_word_t
typedef unsigned int fc_word_t;

/** This should be the signed word size of the processor, 32, 64, 128,
..., eg register_t. */
typedef int fc_sword_t;

/** This is sizeof(fc_word_t) * 8, the processor register word width in
bits. */
#define fc_word_width 32

/** This macro suffixes the function identifier with the word width
type size identifier. */
#define fc_word_width_suffix(x) x##_32

/** This is the number of bits in a byte, eg CHAR_BIT, and must be 8.
*/
#define fc_bits_in_byte 8

/** This is fc_word_width / fc_bits_in_byte -1, eg 32/8 -1 = 3, 64/8 -
1 = 7. */
#define fc_align_word_mask (0x00000003)

/** This is fc_word_width - 1. */
#define fc_word_max_shift 31

/** This is the literal suffix for literals 0x12345678 of fc_word_t, eg
empty or UL, then ULL, ..., and should probably be unchanged. */
#define UL UL

/** This is the default unsigned scalar integer type. */
typedef unsigned int fc_uint_t;
/** This is the default signed scalar integer type. */
typedef int fc_int_t;

/** This type is returned from functions as error or status code. */
typedef fc_int_t fc_ret_t;

/** This is the type of the symbol from the coding alphabet, >= 8 bits
wide, sizeof == 1. */
typedef unsigned char fc_symbol_t;

/** This is a character type for use with standard functions. */
typedef char fc_char;
/** This is a wide character type for use with standard functions. */
typedef wchar_t fc_wchar_t;

/** This is a fixed-width scalar type. */
typedef char fc_int8;
/** This is a fixed-width scalar type. */
typedef unsigned char fc_uint8;
/** This is a fixed-width scalar type. */
typedef short fc_int16;
/** This is a fixed-width scalar type. */
typedef unsigned short fc_uint16;
/** This is a fixed-width scalar type. */
typedef int fc_int32;
/** This is a fixed-width scalar type. */
typedef unsigned int fc_uint32;

#if fc_word_width == 64

/** This is a fixed-width scalar type. */
typedef long long fc_int64;
/** This is a fixed-width scalar type. */
typedef unsigned long long fc_uint64;

#endif /* fc_word_width == 64 */

/** This is a Boolean type. */
typedef fc_word_t fc_bool_t;

#ifndef NULL
#define NULL ((void*)0)
#endif

#endif /* fc_typedefs_h */

Nov 14 '05 #9
Hi,

I was wrong about the deal with having the literal 0xAABBCCDD, shifting
it right 24 bits, and having it not be on the low byte of the register.
The little-endian processor would write it to memory in order as DD CC
BB AA, but that is irrelevant, and casting the word to byte is as
simple as using the aliased byte register. For that I am glad.

There's still the notion of interpreting vectors of bytes in a data
stream. Generally they're not byte-swapped in register words, so when
four bytes of the "Big-Endian" data stream are moved onto a
Little-Endian register, then the low byte of the register contains the
first of those bytes instead of the fourth.

That is vaguely troubling. In terms of testing each bit of the input
sequence in order, for example with entropy-coded data, besides fill
bytes and mixed-mode coding, the order of the bits in the
sequence isn't 31-0 on the LE register, it's 7-0, 15-8, 23-16, and
31-24.

A simple consideration with that is to just swap the order of the bytes
on the register. In C, that's a function, because it uses temp
variables, but there are instructions to accomplish that effect.
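
A sketch of such a byte swap in C, as a hypothetical helper using the
fc_uint32 type from the header earlier:

fc_uint32 fc_swapbytes_32(fc_uint32 x){
    return ((x & 0x000000FFUL) << 24) |
           ((x & 0x0000FF00UL) << 8) |
           ((x & 0x00FF0000UL) >> 8) |
           ((x & 0xFF000000UL) >> 24);
}

A compiler may or may not reduce this to a single byte-swap instruction
such as the x86 BSWAP.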

Say for example, and I know this is slow, but the idea is to test bit
31 to bit 0 and process based upon that.

word_t bitmask = 0x80000000;

Now, on big or little endian processors, that has bit 31 set, where the
bits are numbered from the right starting at zero.

Then, say there's a word aligned pointer to the data.

word_t* inputdata;

Then, a variable to hold the current word of input data.

word_t currentinput;

Then, get the currentinput from the inputdata.

currentinput = *inputdata++;

That dereferences the input pointer and sets the value of the current
input to the that value, and then it increments the inputdata pointer.
When I say increment the input pointer, pointer arithmetic is scaled
by the pointed-to type, so the increment advances the address by
sizeof(word_t) bytes, ie that's the same as

currentinput = *inputdata;
inputdata = inputdata + 1;

Adding an integer to a pointer is legal and scales by the element size
the same way; postfix increment: OK, comparison of pointers with < and
>: OK. What needs care is mixing pointers with raw byte counts, which
leads to casting the pointer to an integer sufficiently large to
contain the pointer, on those platforms with such an integer type, or
otherwise in general keeping pointers and integers separate.
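
A sketch of the scaling, with word_t as above and buffer just for
illustration:

word_t buffer[4];
word_t* p = buffer;
p++; /* the address advances by sizeof(word_t) bytes, one element */
p = p + 2; /* legal: advances by two elements, not two bytes */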

So anyways then currentinput has the first four bytes of data. Let's
say the data in the stream was

0x11 0x22 0x44 0x88

or

00010001 00100010 01000100 10001000

So anyways on the little endian processor testing against the bitmask
yields true, because on the register that data was loaded right to left
from the increasing memory:

0x88442211

And then although the first bit of the data stream was zero, the first
bit on the register from loading the initial subsequence of the data is
one.

So, on little-endian architecture, after loading the data, swap the
bytes on that register.

#if PLATFORM_BYTE_ORDER == le
reverse_bytes(currentinput);
#endif

Otherwise, the bitmask starts as

unsigned bitmask = 0x00000080;

and then after every test against current input, the bitmask is tested
against the constant 0x01010101, if nonzero then shift the bitmask left
15 bits, else, shift right 1 bit. If it's nonzero, then before
shifting left, test against 0x01000000 and if that's nonzero break the
loop and go to the current input.
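
Collected into a sketch, with process() standing in as a hypothetical
handler for each bit:

unsigned long bitmask = 0x00000080UL;
for (;;) {
    process((currentinput & bitmask) != 0);
    if (bitmask & 0x01010101UL) { /* at bit 0 of some byte */
        if (bitmask & 0x01000000UL) /* bit 0 of the last byte */
            break;
        bitmask <<= 15; /* up to bit 7 of the next byte */
    } else {
        bitmask >>= 1;
    }
}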

Now, for each of the 32 bits there is a test against the byte boundary
mask, and so swapping the bytes beforehand leads to the same number of
tests or slightly fewer, and the branch is more likely to be predicted
as it is taken only with P=1/32 instead of P=1/8.

An opposite case exists in the rare condition where the bits are filled
in each byte with lsb-to-msb bit order instead of that msb-to-lsb or
7-0. Then, start the bitmask on 0x00000001 and shift it left, swapping
the bytes of input on big-endian instead.

BE/msb:
BE/lsb: swap, reverse
LE/msb: swap
LE/lsb: reverse

The reverse is compiled in with different constants and shifts, the
swap is an extra function.

Luckily, most data is plain byte organized and msb-to-lsb, but some
data formats, for example Deflate, have both msb and lsb filled codes
of varying sizes.

Anyways I was wrong about the literal in the C and the shift and that.
That is to say

int num = 0xAABBCCDD;

is on the Little-Endian register

------------- eax -------------
               ------ ax ------
               - ah -    - al -

AABBCCDD

So, that concern was unjustified.

While that is so, processing the data in word width increments at a
time is more efficient than processing it a byte at a time, except in
the case of implementations using type indexed lookup tables, where a
lookup table with 2^32 entries is excessive.

There are other considerations with the lookup tables' suitability and
word width. For example, to reverse the bits in a byte, it is
convenient to have a 256 entry table of bytes and select from the input
byte its reversed output byte. I think that was the way to do it.
Yet, say the notion is to reverse all the bits in a word, or all the
bits in each byte of a word. Then, you don't want a table to reverse
all the 2^32 possible values, and it comes down to something like
this, which I copied from somebody:

fc_uint32 fc_reversebits_bytes_32(fc_uint32 x){

fc_uint32 left;
fc_uint32 right;

left = x & 0xAAAAAAAAUL;
right = x & 0x55555555UL;
left = left >> 1;
right = right << 1;
x = left | right;

left = x & 0xCCCCCCCCUL;
right = x & 0x33333333UL;
left = left >> 2;
right = right << 2;
x = left | right;

left = x & 0xF0F0F0F0UL;
right = x & 0x0F0F0F0FUL;
left = left >> 4;
right = right << 4;
x = left | right;

return x;

}

is probably faster than something like this, which might be faster:

fc_uint32 fc_reversebits_bytes_32_b(fc_uint32 in){

fc_uint32 offset;
fc_uint32 reversed = 0;

offset = in >> 24;
reversed = reversed | fc_reversebits_bytes_table_32[offset];

in = in & 0x00FFFFFF;
in = in | 0x01000000;
offset = in >> 16;
reversed = reversed | fc_reversebits_bytes_table_32[offset];

in = in & 0x0000FFFF;
in = in | 0x00020000;
offset = in >> 8;
reversed = reversed | fc_reversebits_bytes_table_32[offset];

in = in & 0x000000FF;
in = in | 0x00000300;
reversed = reversed | fc_reversebits_bytes_table_32[in];

return reversed;

}

where that's a table 256*4 = 1024 entries of four bytes each, three
zero. Partially that's so because the implementation for the 64 bit
word register is:

fc_uint64 fc_reversebits_bytes_64(fc_uint64 x){

fc_uint64 left;
fc_uint64 right;

left = x & 0xAAAAAAAAAAAAA AAAULL;
right = x & 0x5555555555555 555ULL;
left = left >> 1;
right = right << 1;
x = left | right;

left = x & 0xCCCCCCCCCCCCC CCCULL;
right = x & 0x3333333333333 333ULL;
left = left >> 2;
right = right << 2;
x = left | right;

left = x & 0xF0F0F0F0F0F0F 0F0ULL;
right = x & 0x0F0F0F0F0F0F0 F0FULL;
left = left >> 4;
right = right << 4;
x = left | right;

return x;
}

with the same number of instructions, perhaps, as the 32-bit
implementation.
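
As an assumed quick check of the 32-bit version, with <assert.h>
included: 0x80 and 0x01 are bit reversals of each other, so

assert(fc_reversebits_bytes_32(0x80018001UL) == 0x01800180UL);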

So, as the word width increases, hopefully in powers of two, but maybe
to 24 or 36 or whatever, a rethink of some of these fundamental
bit-twiddling algorithms ensues.

I guess that's so for a lot of the vector operations, for example as
there are with the MMX or SSE or AltiVec or SPARC graphics extensions.
Does anyone know a cross-platform C library or code and style to access
some subset of the vector operations on those things? As it is, each
is quite separate.

Yeah, so anyways, my bad about the misinterpretation of the C language
integer literal in terms of the LE processor register.

You had a good example about the consideration of cycle shaving and the
fact that an "archaic" computer that costs fifteen dollars at a yard
sale runs faster than the 40 million dollar supercomputer from only
twenty-five years ago. While that is so, cutting the cycle count, or
approximation in terms of processor register renaming and instruction
pipelining, in half, does double the speed.

Thank you,

Ross F.

Nov 14 '05 #10
