
malloc inside while loop

P: 4
Hi All,

I tried to execute the following program, which got killed by the OS.

#include <stdlib.h>

int main()
{
    /* Allocate 10000 bytes per iteration and never free them,
       so the process leaks memory as fast as it can. */
    while (1) { char *c = (char *)malloc(10000); }
}

I got a message stating "Killed" at the command prompt.
Can anyone please explain to me why it got killed by the OS?
Note:
OS: Linux
Compiler: g++
Jul 22 '09 #1
9 Replies


Expert 10K+
P: 11,448
You simply allocated too much memory and your OS didn't like it.

kind regards,

Jos
Jul 22 '09 #2

P: 4
Thanks, Jos, for your reply.
I need details about how the memory is allocated and under what conditions the OS will kill the process.
Jul 22 '09 #3

gpraghuram
Expert 100+
P: 1,275
Hi,
Usually the OS allocates some memory for every process. When you start grabbing more and more memory, at some point the virtual memory available to the OS gets exhausted and the process finally gets killed.
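For example, a quick test (just a sketch, not production code) that keeps asking malloc for 10000-byte blocks and reports how far it gets before failing might look like this:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t blocks = 0;

    /* Keep allocating 10000-byte blocks until malloc itself reports
       failure.  Under Linux's default overcommit settings the process
       may be killed by the OOM killer before NULL is ever returned. */
    while (malloc(10000) != NULL) {
        blocks++;
        if (blocks % 100000 == 0)
            printf("allocated %zu blocks (~%zu MB)\n",
                   blocks, blocks * 10000 / (1024 * 1024));
    }

    printf("malloc returned NULL after %zu blocks\n", blocks);
    return 0;
}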
Raghu
Jul 23 '09 #4

Expert 100+
P: 2,398
What is the purpose of this program? All it does is repeatedly allocate memory without ever using it or freeing it. Nothing is accomplished.

Let me guess .. this is a test question from your programming course.
I would prefer for you to tell us what you think this program does.
Then our responses can correct any errors and fill in any gaps.
Jul 23 '09 #5

P: 4
It is a test program to understand memory management on a Linux machine.
Can anyone give me a clear picture of memory management in Linux?
Something like how memory is allocated and when/how it is deallocated.
What happens when a program tries to allocate more and more, as this program does?
I want a detailed picture of how memory is managed by the OS.

Thanks in advance.
Jul 23 '09 #6

ashitpro
Expert 100+
P: 542
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like

# echo 2 > /proc/sys/vm/overcommit_memory

(Taken from malloc man page)...

This is the link for more info on OOM killer....
http://linux-mm.org/OOM_Killer
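To see the overcommit in action, here is a small sketch of my own (not from the man page): the malloc() calls themselves can succeed far beyond what the machine actually has, and it is usually only when the pages are written to that the OOM killer steps in.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t mb = 0;

    for (;;) {
        char *p = (char *)malloc(1024 * 1024);   /* 1 MB at a time */
        if (p == NULL) {
            printf("malloc failed after %zu MB\n", mb);
            return 1;
        }
        /* Writing to the pages forces the kernel to back them with real
           memory; with default overcommit this is typically where the
           OOM killer strikes, not at the malloc() call itself. */
        memset(p, 0xAA, 1024 * 1024);
        mb++;
        if (mb % 256 == 0)
            printf("touched %zu MB\n", mb);
    }
}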
Jul 23 '09 #7

Expert 10K+
P: 11,448
@ashitpro
I didn't know that; I experienced that bug on an old AIX version years ago, but I can't reproduce it on my Linux box. I'll check it out ...

kind regards,

Jos
Jul 23 '09 #8

Expert 100+
P: 2,398
Follow the link to Why Linux has an "OOM killer" and ... for an easy-to-follow presentation of the design tradeoffs behind the optimistic memory allocation strategy in Linux. I don't agree with everything in the post, but it is a good nontechnical introduction to the topic.

Saravanan82: please lay out for us your current understanding of memory allocation in Linux. That way nobody will waste time explaining things you already know.
Jul 23 '09 #9

P: 4
First, I would like to thank you for the replies.

I don't know how memory is allocated by the OS, but I have some idea about the process memory layout in main memory.
That's the reason I wanted to understand how the OS allocates memory.
In one of my projects we decided to use a wrapper for malloc in order to avoid system calls, because each malloc call hits the OS to get the memory for us. We want to avoid that by allocating one bulk block up front and managing it with our own code.
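A minimal sketch of that idea (the names pool_t, pool_init, pool_alloc and pool_destroy are made up here, not from any real project): grab one large block up front, hand out aligned slices of it, and release everything at once at the end.

#include <stddef.h>
#include <stdlib.h>

/* Hypothetical bump allocator: one big malloc up front, then
   pointer-bumping for individual requests.  There is no per-block
   free; the whole pool is released at once with pool_destroy(). */
typedef struct {
    char   *base;
    size_t  size;
    size_t  used;
} pool_t;

static int pool_init(pool_t *p, size_t size)
{
    p->base = (char *)malloc(size);   /* the only "real" allocation */
    p->size = size;
    p->used = 0;
    return p->base != NULL ? 0 : -1;
}

static void *pool_alloc(pool_t *p, size_t n)
{
    /* Round the request up to 8 bytes to keep returned pointers aligned. */
    n = (n + 7) & ~(size_t)7;
    if (p->used + n > p->size)
        return NULL;                  /* pool exhausted */
    void *mem = p->base + p->used;
    p->used += n;
    return mem;
}

static void pool_destroy(pool_t *p)
{
    free(p->base);
    p->base = NULL;
    p->size = p->used = 0;
}

Note that glibc's malloc already avoids a kernel call on most requests; it only goes to the kernel (via brk or mmap) when its own free lists run dry, so it is worth measuring before replacing it.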
Jul 23 '09 #10
