Bytes IT Community

should the program call stack grow upwards or downwards?

I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.

That means: should the stack pointer go upwards on "push"
operations, and downwards on "pop" operations?

If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?

Please advise. Thanks!

Sep 21 '06 #1
24 Replies


John wrote:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.
I don't see why you consider it a fundamental question
as opposed to a mere technicality. In any case, the C
standard doesn't even mention a stack, much less which
direction it grows. So in the context of portable
C programming your question is meaningless.
>
That means: should the stack pointer go upwards on "push"
operations, and downwards on "pop" operations?

If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?
Perhaps there is some way to detect it on your platform,
assuming it uses a stack, but there is no portable way. And of
course the way you phrase the question suggests that you
think that the stack pointer is 32 bits and that all memory
addresses are available to it. The latter assumption is highly
doubtful on a modern operating system.

By the way, a quick Google search produced at least 2 more
topics which ask more or less the same question. Does anyone
have any insights as to why people think this is important and
why they think it is relevant to C programming ? I suspect that
there may be some misunderstanding hidden somewhere.

Sep 21 '06 #2

acd

John wrote:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.

That means: should the stack pointer go upwards on "push"
operations, and downwards on "pop" operations?

If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?
Note: I cannot imagine that this is a homework question; it is too
stupid.

It does not matter; it only has to be done consistently.
Many processors (including x86, 68K, PowerPC, and most 8-bit
microcontrollers) implement the stack such that it grows downward.
In a typical application, the heap and the stack share a memory
region: the heap grows upward from the bottom, the stack downward
from the top.

Andreas

Sep 21 '06 #3

John wrote:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.
There's no guarantee in C that there is a stack, and there are
platforms (with C compilers) out there that don't have one. On the
(much more common) platforms that have a stack, it can grow up or down.

Sep 21 '06 #4


John wrote:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.

That means: should the stack pointer go upwards on "push"
operations, and downwards on "pop" operations?

If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?

Please advise. Thanks!
The call stack direction is dictated not by how a standard C program
works but by the underlying hardware. If you're programming on an
Intel Architecture (PC-type) platform, for example, the stack will grow
downwards. On the other hand, if you are programming on an 8051
architecture microcontroller, the stack will grow upwards.

Sep 21 '06 #5

"John" <ja*****@gmail.com> writes:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.

That means: should the stack pointer go upwards on "push"
operations, and downwards on "pop" operations?

If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?
Most processors have a procedure call standard (so code from different
compilers can be linked together). If this is the case for your
processor, you should follow that.

If not, or if it is a hypothetical question, it does, as you suspect,
depend on the processor architecture, specifically the set of
instructions that can manipulate the stack.

If you have dedicated stack instructions, these will usually dictate
which way the stack grows and whether the stack pointer points to the
top element of the stack or to the first empty slot above it (where
"top" and "above" do not imply a specific direction of growth).

If you don't have dedicated stack instructions, the available
memory-access instructions may influence the choice. If, for example,
offsets in load/store instructions are positive only, it makes sense
to either let the stack grow upwards and address relative to a frame
pointer (FP) that points to the bottom of the frame or let the stack
grow downwards and address relative to a top-of-stack pointer (SP).
You can also have both FP and SP, which allows the frame to be
extended at run-time. If you have both positive and negative offsets,
but in a very limited range, the FP may point into the middle of the
frame instead of the end, so you can use both positive and negative
offsets when accessing the frame.

If you can address sub-word units on the stack, byte order may matter,
so you would prefer an upwards-growing stack for little-endian
architectures and a downwards-growing stack for big-endian ones.

In other cases, the choice of stack orientation can be arbitrary or
based on tradition.

If you have two stacks in your language, you would normally want one
to grow upwards and the other downwards, starting from opposite ends
of a common memory area, as this gives maximal utilisation of memory.

Torben

Sep 21 '06 #6

"John" <ja*****@gmail.com> writes:
I know this is a very fundamental question. I am still quite confused
about whether the program call stack should always grow upwards from the
bottom, or the opposite, or whether it doesn't matter.
That's definitely not a fundamental question as far as writing
portable C programs is concerned but you can read that in parallel
answers.
If so, should addresses go upwards (increasing) or
downwards (decreasing)? i.e. should the top of the stack have
address 00000000, or FFFFFFFF? How do we decide that?
You can try running the following code:

#v+
#include <stdio.h>

void foo(int num) {
printf("%x\n", &num);
if (!num) foo(1);
}

int main(void) {
foo(0);
return 0;
}
#v-

Be sure to disable all optimizations, as some fancy compiler may try
to optimize the recursive call into some kind of jump which does not
use the stack, or even inline the second call.

--
Best regards, _ _
.o. | Liege of Serenly Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michal "mina86" Nazarewicz (o o)
ooo +--<mina86*tlen.pl>--<jid:mina86*jabber.org>--ooO--(_)--Ooo--
Sep 21 '06 #7

On Thu, 21 Sep 2006 09:56:34 +0200, Torben Ægidius Mogensen wrote:

<snip>
If, for example,
offsets in load/store instructions are positive only, it makes sense
to either let the stack grow upwards and address relative to a frame
You may have meant downwards, see the MC 6800 (not 68000)
--
Coos
Sep 21 '06 #8

John wrote:
I know this is a very fundamental question.
It is an entirely architecture-specific question, related more to the
memory model of a given platform than to any compiler design. Compilers
are at liberty to manage function calls without necessarily using a stack
at all.
Sep 21 '06 #9

Michal Nazarewicz <mi****@tlen.pl> writes:
[...]
You can try running the following code:

#v+
#include <stdio.h>

void foo(int num) {
printf("%x\n", &num);
if (!num) foo(1);
}

int main(void) {
foo(0);
return 0;
}
#v-
Yes, you can do that if you don't mind a little undefined behavior.

The "%x" format expects an unsigned int. You're passing it an int*.
Apart from the fact that the types don't match, they may not even be
the same size.

Use "%p" to print pointer values; that's what it's for:

printf("%p\n", (void*)&num);

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Sep 21 '06 #10

>I know this is a very fundamental question. I am still quite confused
>about whether the program call stack should always grow upwards from the
>bottom, or the opposite, or whether it doesn't matter.
Yes. The important point is that YOU DON'T GET TO CHOOSE.
Also, it either DOES or it DOESN'T. There is no SHOULD DO or
SHOULDN'T DO.
>That means: should the stack pointer go upwards on "push"
>operations, and downwards on "pop" operations?
"UP" means towards the address at the top of a page in a textbook.
"DOWN" means towards the address at the bottom of a page in a textbook.
Neither has any relevance to the numerical value of the address, which
may be 00000000 at the equator and 7FFFFFFF at the North Pole.
>If so, should addresses go upwards (increasing) or
>downwards (decreasing)?
There is no SHOULD. And upwards does not mean increasing.
>i.e. should the top of the stack have
>address 00000000, or FFFFFFFF? How do we decide that?
*YOU* don't get to decide. Accept what you get. Don't write programs
that care about stack growth direction.
Sep 21 '06 #11

Coos Haak <ch*****@hccnet.nl> writes:
On Thu, 21 Sep 2006 09:56:34 +0200, Torben Ægidius Mogensen wrote:

<snip>
>If, for example,
offsets in load/store instructions are positive only, it makes sense
to either let the stack grow upwards and address relative to a frame
You may have meant downwards, see the MC 6800 (not 68000)
That was in the "or" part you didn't cite.

Torben
Sep 22 '06 #12

On Fri, 22 Sep 2006 09:35:53 +0200, Torben Ægidius Mogensen wrote:
Coos Haak <ch*****@hccnet.nl> writes:
>On Thu, 21 Sep 2006 09:56:34 +0200, Torben Ægidius Mogensen wrote:

<snip>
>>If, for example,
offsets in load/store instructions are positive only, it makes sense
to either let the stack grow upwards and address relative to a frame
>You may have meant downwards, see the MC 6800 (not 68000)

That was in the "or" part you didn't cite.

Torben
I missed the word bottom (the FP) and read it as top (the SP). Yes, thank
you.
--
Coos
Sep 22 '06 #13

In article <11**********************@h48g2000cwc.googlegroups.com>,
<sj*******@yahoo.com> wrote:
>John wrote:
>I know this is a very fundamental question. I am still quite confused
>about whether the program call stack should always grow upwards from the
>bottom, or the opposite, or whether it doesn't matter.

There's no guarantee in C that there is a stack, and there are
platforms (with C compilers) out there that don't have one. On the
(much more common) platforms that have a stack, it can grow up or down.
C does in fact require a stack, unless by "stack" you actually
mean "contiguous region of memory used to store function invocation
records"[1]. Function calls and returns are required to do The Right
Thing, and multiple calls to the same function aren't allowed to interfere
with each other; if you can figure out how to do that without something
that looks, acts, and smells like a stack, I would really like to
know how.

Of course, there are several reasonable ways to implement such a stack;
you could allocate a contiguous region of memory and keep your activation
records (with where-to-return information and storage for automatic
variables) there, in which the OP's question would actually make sense
(and the correct answer would be "it depends"), or you could dynamically
allocate each invocation record and free them when the function returns
(this would be isomorphic to a linked-list implementation of a stack),
or you could combine the two (dynamically allocate memory in blocks;
when you call a function and don't have room in the current block,
allocate another one; when you return and empty a block, deallocate and
return to the previous one).

(I see this is crossposted to comp.arch; if anybody there is still
reading, this would probably be a good place to jump in with some more
esoteric ones.)

There are even more unreasonable ways to implement it as well; if an
implementation has enough tame rhinodaemons that are not being used for
other purposes, it's free to ask them to hold onto the last function's
call information, as long as they give it back in the right order when
it's needed.
dave

[1] And if a C implementation uses that, the discussion elsethread
indicates that the most correct answer to the OP's question is
probably neither "up" nor "down", but "in".

--
Dave Vandervies dj******@csclub.uwaterloo.ca
[P]lease continue to follow this thread. It's helpful to have the dissenting
viewpoint coming from someone who's not a flaming asshole.
--Steve Gravrock in comp.lang.c
Sep 24 '06 #14

>C does in fact require a stack, unless by "stack" you actually
>mean "contiguous region of memory used to store function invocation
records"[1]. Function calls and returns are required to do The Right
Thing, and multiple calls to the same function aren't allowed to interfere
with each other; if you can figure out how to do that without something
that looks, acts, and smells like a stack, I would really like to
know how.
OS/360 used a linked list of "save areas" containing saved registers,
return addresses, and if desired, local variables. (Now, granted,
when I was working with it, C didn't exist yet, or at least it
wasn't available outside Bell Labs.) Reentrant functions (required
in C unless the compiler could prove it wasn't necessary) would
allocate a new save area with GETMAIN and free it with FREEMAIN.
Non-reentrant functions would allocate a single static save area.
>Of course, there are several reasonable ways to implement such a stack;
you could allocate a contiguous region of memory and keep your activation
records (with where-to-return information and storage for automatic
variables) there, in which the OP's question would actually make sense
(and the correct answer would be "it depends"), or you could dynamically
allocate each invocation record and free them when the function returns
(this would be isomorphic to a linked-list implementation of a stack),
I'm not so sure this meets the definition of a "stack". In any
case, such an implementation has no sensible definition of "growth
direction" for a stack.
Sep 24 '06 #15


go***********@burditt.org (Gordon Burditt) writes:
OS/360 used a linked list of "save areas" containing saved registers,
return addresses, and if desired, local variables. (Now, granted,
when I was working with it, C didn't exist yet, or at least it
wasn't available outside Bell Labs.) Reentrant functions (required
in C unless the compiler could prove it wasn't necessary) would
allocate a new save area with GETMAIN and free it with FREEMAIN.
Non-reentrant functions would allocate a single static save area.
minor note ... the savearea allocation was the responsibility of
the calling program ... but the saving of registers was the
responsibility of the called program ... i.e. on program entry,
the instruction sequence was typically:

STM 14,12,12(13)

i.e. "store multiple" registers 14,15,0,...,12 ... starting at
(decimal) 12 offset from location pointed to by register 13.

for more detailed discussion ... i've done a q&d conversion of the old
ios3270 green card to html ... and more detailed discussion of
call/save/return conventions can be found at:
http://www.garlic.com/~lynn/gcard.html#50
the called program only needed a new save area if it would in turn
call some other program. non-reentrant programs (that called other
programs) could allocate a single static savearea. only when you had
reentrant programs that also called other programs ... was there an
issue regarding dynamic save area allocations.

the original cp67 kernel had a convention that was somewhat more like
a stack. it had a contiguous subpool of 100 save areas. all module
call/return linkages were via supervisor call. it was the
responsibility of the supervisor call routine to allocate/deallocate
savearea for the call.

per aside, cp67 and unix can trace somewhat common heritage back to
ctss, i.e. cp67 work was done at the science center on the 4th flr
of 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech

including some people that had worked on ctss. multics was on the 5th
flr of 545 tech sq ... and also included some people that had worked
on ctss.

as i was doing various performance and scale-up work ... i made a
number of changes to the cp67 calling conventions.

for some number of high-use non-reentrant routines (that didn't call
any other modules), i changed the calling sequence from supervisor
call to simple "branch and link register" ... and then used a static
area for saving registers. for some number of high-use common library
routines ... the supervisor call linkage scenario had higher
pathlength than the function being called ... so the switch to BALR call
convention for these routines significantly improved performance.

the other problem found with increasing load ... was that it became
more and more frequent that the system would exhaust the pool of 100
save areas (which caused it to abort). i redid the logic so that it
could dynamically increase and decrease the pool of save areas
.... significantly reducing system failures under heavy load.

there was subsequent generalized subpool enhancement for cp67
kernel dynamic storage management ... which also significantly
contributed to decreasing kernel overhead.

article from that work
Analysis of Free-storage Algorithms, B. Margolin et al., IBM Systems
Journal v10n4, 283-304, 1971

and from the citation site:
http://citeseer.ist.psu.edu/context/418230/0

misc. past postings mentioning cp67 kernel generalized subpool work:
http://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
http://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
http://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux
http://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
http://www.garlic.com/~lynn/2006e.html#40 transputers again was: The demise of Commodore
http://www.garlic.com/~lynn/2006j.html#21 virtual memory
http://www.garlic.com/~lynn/2006p.html#11 What part of z/OS is the OS?
Sep 24 '06 #16

thanks

regards

Sep 25 '06 #17

On Sun, 24 Sep 2006 15:24:41 -0600 in alt.folklore.computers, Anne &
Lynn Wheeler <ly**@garlic.com> wrote:
>there was subsequent generalized subpool enhancement for cp67
kernel dynamic storage management ... which also significantly
contributed to decreasing kernel overhead.

article from that work
Analysis of Free-storage Algorithms, B. Margolin et al., IBM Systems
Journal v10n4, 283-304, 1971
Image scan available (1.1MB) at:
http://www.research.ibm.com/journal/...ibmsj1004C.pdf

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Br**********@CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
Sep 25 '06 #18


In article <ef**********@rumours.uwaterloo.ca>,
dj******@caffeine.csclub.uwaterloo.ca (Dave Vandervies) writes:
|>
|(I see this is crossposted to comp.arch; if anybody there is still
|reading, this would probably be a good place to jump in with some more
|esoteric ones.)

You called?

Further to the Wheelers' postings, a lot of the customer and third-party
compilers (and some system interfaces!) used non-standard conventions,
because the standard linkage was so slow. I repeatedly (over 20 years)
pointed out to them and to IBM Santa Teresa that the problem was not the
linkage but its poor implementations; I demonstrated that I could speed
up the Fortran call THREE times just by reordering the linkage data and
linkage instructions. But the message went largely unheard ....

PL/I supported GETMAIN R for linkage, but it was a damn-fool way of
providing reentrancy (though it was not PL/I's fault, but the linkage
editor's). Almost all other languages allocated a register to the
stack at startup and put up with the constraint that they couldn't be
called reentrantly from modules that used standard linkage.

Several compilers (PL/I, if I recall, and several of my libraries) used
a basically rising/falling stack, but would extend on overflow by GETMAIN.
This means that the stack segments need not be contiguous, and may be in
an unpredictable order. This was not limited to System/370.

And some used a secondary stack, which often went the opposite direction
to the main stack. As I have posted before, I don't know why this has
been forgotten, as it provides a significant performance enhancement for
free.

|There are even more unreasonable ways to implement it as well; if an
|implementation has enough tame rhinodaemons that are not being used for
|other purposes, it's free to ask them to hold onto the last function's
|call information, as long as they give it back in the right order when
|it's needed.

Yup. Seen that. But old age now means that I can't remember where.
Regards,
Nick Maclaren.
Sep 25 '06 #19

Anne & Lynn Wheeler wrote:
<snip full repost of the OS/360 save-area and cp67 linkage discussion from #16>
Moving to minis & micros, they all grow the stack downward. I assume
this is because most of the early ones (the 8086 being the exception)
booted code from zero, so there was usually ROM at low addresses. So
grow the stack toward it.
However, there's a case to be made for growing it upward, namely that
buffer overflows cannot mess with the saved register image or other
important areas: an overflow will only trash that function's own data,
then walk into unallocated space. Might make things a lot more robust.
Sep 25 '06 #20

In article <45***********************@per-qv1-newsreader-01.iinet.net.au>,
David R Brooks <da******@iinet.net.au> writes:
|>
|Moving to mini & micros, they all grow the stack downward: I assume
|because most of the early ones (8086 being the exception) booted code
|from zero, so there was usually ROM at low addresses. So grow the stack
|toward it.

Nope. Most do. I can't remember which did what, but there were both
directions and almost certainly still are.

The actual reason was nowhere near so logical :-)
Regards,
Nick Maclaren.
Sep 25 '06 #21

In article <45***********************@per-qv1-newsreader-01.iinet.net.au>,
David R Brooks <da******@iinet.net.au> wrote:
>Moving to mini & micros, they all grow the stack downward:
*all* ?? That's a pretty big territory to cover!
>I assume
because most of the early ones (8086 being the exception) booted code
from zero, so there was usually ROM at low addresses. So grow the stack
toward it.
However, there's a case to be made for growing it upward, namely that
buffer overflows cannot mess with the saved register image or other
important areas: an overflow will only trash that function's own data,
then walk into unallocated space. Might make things a lot more robust.
The buffer being overflowed could be one whose address came from
the calling routine.
--
Programming is what happens while you're busy making other plans.
Sep 25 '06 #22

David R Brooks wrote:
However, there's a case to be made for growing it upward, namely that
buffer overflows cannot mess with the saved register image or other
important areas: an overflow will only trash that function's own data,
then walk into unallocated space. Might make things a lot more robust.
Sorry, that's wrong.
(I've made the same mistake myself, so don't feel too bad! :-)

The problem with a buffer overflow in an upward-growing stack is that if
the buffer is allocated on the stack (a prerequisite for all bugs of
this type), and your function calls a library function to copy
something, without a boundary check, then the overflow will overwrite
the library function's stack area!

I.e. when returning from gets() or strcpy(), you pop an overwritten
return address, and end up either with a crash, or in the case of an
exploit, you return to a carefully-planned piece of trampoline code
somewhere in memory. :-(

This is identical to what happens in a downward growing stack, except
that the buffer you overflow is inside your parent's stack frame instead
of your own.

Terje

--
- <Te************@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"
Sep 25 '06 #23

On Mon, 25 Sep 2006 01:23:42 -0800, David R Brooks wrote:
Moving to mini & micros, they all grow the stack downward: I assume
because most of the early ones (8086 being the exception) booted code
from zero, so there was usually ROM at low addresses. So grow the stack
toward it.
However, there's a case to be made for growing it upward, namely that
buffer overflows cannot mess with the saved register image or other
important areas: an overflow will only trash that function's own data,
then walk into unallocated space. Might make things a lot more robust.
I'm pretty sure that the PA-RISC stack grew upwards, in the standard
convention. As Terje pointed out, that probably didn't make it
significantly "safer".

Most modern processors don't have special stack pointers, so direction
comes down to convention, or comfort for the API specifiers. On the bare
metal, I've coded both directions on several processors. Made no great
difference. If anything, I'd say that post-increment address update is
probably a little bit more common than pre-decrement, so pushing things
onto upward-growing stacks is a bit easier than pushing downwards, at the
expense of popping. Many modern processors don't have any update
addressing modes, so that probably doesn't matter at all.

Cheers,

--
Andrew

Sep 25 '06 #24

In article <12*************@corp.supernews.com>,
Gordon Burditt <go***********@burditt.org> wrote:

[I wrote:]
>>Of course, there are several reasonable ways to implement such a stack;
you could allocate a contiguous region of memory and keep your activation
records (with where-to-return information and storage for automatic
variables) there, in which the OP's question would actually make sense
(and the correct answer would be "it depends"), or you could dynamically
allocate each invocation record and free them when the function returns
(this would be isomorphic to a linked-list implementation of a stack),

I'm not so sure this meets the definition of a "stack".
Push: Allocate new frame
Link new frame back to current frame
(We need to be able to find it again)
Populate new frame and set it as current

Pop: Follow current frame's link back to previous frame and set it
as current
Deallocate old frame

Walks like a stack, quacks like a stack...

In any
case, such an implementation has no sensible definition of "growth
direction" for a stack.
Which was precisely my point.
dave

--
Dave Vandervies dj******@csclub.uwaterloo.ca
Pragmatism leads you to omit NULL pointer tests. Pragmatism leads me to
include them. Pragmatism, then, is a somewhat Zen religion, is it not?
--Richard Heathfield in comp.lang.c
Sep 26 '06 #25

This discussion thread is closed
