>I am developing some code in C which first I compile on my linux
>machine and afterwards I test on another special hardware which has
>almost no debug capabilities at all.
Just about everything you are asking for is system-specific.
Do the host system and the target system at least have the same
CPU architecture? Code size can be radically different for the
same program on, say, the PDP-8 vs. the x86 architecture.
>Usually I get a lot of errors in the latter, because I have memory
>size limitations. So, I wonder what are the best practices to know
>for a given program, once it is compiled and before its execution:
>- what will be the stack size available for the program
The stack size *available* depends on your hardware and on how you
lay out the address space. If space is really, really tight you may
have to move the split between, say, stack and data for every
recompilation to make it fit.
You probably want the stack size *required*, which is harder to figure.
Each function call requires a certain amount of stack, generally
the size of all auto variables for that function plus some overhead
for linkage, unless you're using variable-length arrays or alloca().
For [3456]86 architecture, I'll guess about 32 bytes of linkage
overhead. Look at generated code for a more accurate answer. You
may also need some initial stack overhead for whatever calls main(),
like command-line arguments if these are used.
The requirement of the whole program is the worst case over all
call paths, which depends on which functions call which other
functions. If the program is recursive, the requirement can be
unbounded, in which case your limited-memory system should likely
be using a different, non-recursive algorithm.
For example:
main() uses 100 bytes and calls A, B, and C.
A uses 200 bytes and calls C.
B uses 800 bytes and doesn't call anything.
C uses 300 bytes and doesn't call anything.
D uses 15000 bytes (but nobody calls it).
The worst case paths are:
main+A+C = 100+200+300 = 600.
main+B = 100+800 = 900.
main+C = 100+300 = 400.
The worst case here is main calling B, 900 bytes.
>- what will be the size of the code
>- what will be the size of the allocated data (global variables, etc)
If the executable produced has headers like those produced by standard
Linux tools (and the version you run *on Linux* will almost certainly
have them), the size(1) command will give you the size of code,
(initialized) data, and uninitialized data. (This does not, however,
include the size of any shared libraries (on Linux) or the BIOS or
OS on the target system.)
Other tools such as objdump may give you finer detail on portions of
the object code.
>- if it is also possible to estimate the size of the heap used
This is a runtime issue. If possible, put monitoring in the program
that runs on Linux to track the maximum amount of simultaneously
allocated memory from malloc(). Otherwise, estimate it by hand.
The result may depend on input to the program. It might even depend
on *timing* of input to the program (e.g. if this thing is acting
as a router and it buffers packets if it can't handle them fast
enough, up to a limit.)
malloc() has overhead. On a 32-bit machine, rounding the requested
amount up to a multiple of 4 and adding 4 is typical of a couple of
malloc() implementations.