
Problem with malloc / calloc. Size not LARGE enough!

I have had this problem for months now and it has been nagging me.

I have a large project that has several C++ DLLs; one of them uses malloc / calloc / free for the other DLLs.
I process very large data files (100 MB to 300 MB) that have more than 524,000 lines with 5 columns of numbers in text. I allocate 8 arrays of 524,000 or more doubles, and 10 arrays of 32,768 doubles.

Is there a min size for malloc / calloc required to succeed?

I found that the allocation fails on the smaller 32K arrays. If I make them larger, the allocation succeeds. It doesn't seem like a pointer problem in my code could cause this failure, since I can make the buffer bigger and it is OK.

Here's the code that I use in the DLL:

int _cdecl mmDBase_dataAlloc
    ( OUT void **ppdData
    , IN int nNumPts
    , IN int nElementSize)
{
    // \todo: The allocator doesn't work well if there is less than 2^19 bytes
    const int knMinBytes = 524288;
    if ((nNumPts * nElementSize) < knMinBytes) {
        /* DEBUG -> */ nNumPts = (int)(((double)knMinBytes / nElementSize) + 0.5);
    }
    (*ppdData) = calloc((size_t)nNumPts, (size_t)nElementSize);
    if (NULL == *ppdData) {
        char acMsg[120];
        _snprintf(acMsg, sizeof(acMsg), MM_THISFUNC ": Memory allocation failed: Points:%d, SizeOf:%d, Pointer:0x%p"
            , nNumPts, nElementSize, ppdData);
        return mmVisa_errUpdateErrorMsg(VI_ERROR_ALLOC, acMsg);
    }
    return 0;
} // mmDBase_dataAlloc()

I set a breakpoint at /* DEBUG -> */ and try the allocation with the desired
num bytes and it always fails. If I set the num bytes to the min size (512K
bytes) it always succeeds.

I just don't get it. Any suggestions?
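As an aside on the code above: the size check computes nNumPts * nElementSize in plain int arithmetic, which can overflow for large inputs before the comparison ever happens. A minimal portable sketch of an overflow-safe version (the safe_alloc name and its error convention are hypothetical, not from the project):

```c
#include <limits.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical helper: validate the counts and detect int overflow in the
 * product before handing the sizes to calloc(). */
static int safe_alloc(void **ppData, int nNumPts, int nElementSize)
{
    if (nNumPts <= 0 || nElementSize <= 0)
        return -1;                        /* reject nonsensical sizes    */
    if (nNumPts > INT_MAX / nElementSize)
        return -1;                        /* product would overflow int  */
    *ppData = calloc((size_t)nNumPts, (size_t)nElementSize);
    return (*ppData != NULL) ? 0 : -1;
}
```

This is unrelated to the 512K failure described in the thread, but it is cheap insurance with 300 MB input files.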
Nov 17 '05 #1
7 Replies


mef526 wrote:
I have had this problem for months now and it has been nagging me.

I have a large project that has several C++ DLLs; one of them uses malloc / calloc / free for the other DLLs.
What do you mean, "for the other DLLs"? Do you mean that this one DLL
handles memory for the others?
I process very large data files (100 MB to 300 MB) that have more than 524,000 lines with 5 columns of numbers in text. I allocate 8 arrays of 524,000 or more doubles, and 10 arrays of 32,768 doubles.

Is there a min size for malloc / calloc required to succeed?
0 is the minimum, but malloc/calloc are never required to succeed. A
conforming (but useless) implementation might always return NULL.
I found that the allocation fails on the smaller 32K arrays.
The first one? Or do some succeed first?

If I make them larger, the allocation succeeds. It doesn't seem like a pointer problem in my code could cause this failure, since I can make the buffer bigger and it is OK.
It's hard to know what is causing the problem, but a corrupt heap is one possibility.

Here's the code that I use in the DLL:

int _cdecl mmDBase_dataAlloc
    ( OUT void **ppdData
    , IN int nNumPts
    , IN int nElementSize)
{
    // \todo: The allocator doesn't work well if there is less than 2^19 bytes
    const int knMinBytes = 524288;
    if ((nNumPts * nElementSize) < knMinBytes) {
        /* DEBUG -> */ nNumPts = (int)(((double)knMinBytes / nElementSize) + 0.5);
    }
    (*ppdData) = calloc((size_t)nNumPts, (size_t)nElementSize);
    if (NULL == *ppdData) {
        char acMsg[120];
        _snprintf(acMsg, sizeof(acMsg), MM_THISFUNC ": Memory allocation failed: Points:%d, SizeOf:%d, Pointer:0x%p"
            , nNumPts, nElementSize, ppdData);
        return mmVisa_errUpdateErrorMsg(VI_ERROR_ALLOC, acMsg);
    }
    return 0;
} // mmDBase_dataAlloc()

I set a breakpoint at /* DEBUG -> */ and try the allocation with the desired
num bytes and it always fails. If I set the num bytes to the min size (512K
bytes) it always succeeds.

I just don't get it. Any suggestions?


The behaviour you describe is quite strange. Do you get the exact same
behaviour with the debug version of the CRT (which uses an error
checking memory allocator)? What if you use Win32 memory allocation
functions directly, rather than the CRT ones?

Tom
Nov 17 '05 #2


"Tom Widmer" <to********@hotmail.com> wrote in message
news:%2***************@TK2MSFTNGP15.phx.gbl...
mef526 wrote:
I have had this problem for months now and it has been nagging me. I have a large project that has several C++ DLLs; one of them uses malloc / calloc / free for the other DLLs.
What do you mean, "for the other DLLs"? Do you mean that this one DLL
handles memory for the others?

Yes - MS recommends that all memory allocation / deallocation take place in a single DLL so as to avoid heap problems.
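That single-allocation-DLL pattern boils down to exporting a matched pair of functions and making every module use them. A minimal sketch (the names are hypothetical; a real Windows DLL would also add the appropriate export decorations):

```c
#include <stdlib.h>

/* Hypothetical exported pair: all modules call these instead of touching
 * calloc()/free() directly, so every block is allocated and released by
 * the same CRT heap. */
void *mm_alloc(size_t nNumPts, size_t nElementSize)
{
    return calloc(nNumPts, nElementSize);
}

void mm_free(void *pData)
{
    free(pData);   /* runs in the same module, hence the same heap */
}
```

The point is purely that the free() lives in the same module as the calloc(); any allocation done outside this pair (strdup, for instance) defeats the scheme.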
I process very large data files (100 MB to 300 MB) that have more than 524,000 lines with 5 columns of numbers in text. I allocate 8 arrays of 524,000 or more doubles, and 10 arrays of 32,768 doubles. Is there a min size for malloc / calloc required to succeed?
0 is the minimum, but malloc/calloc are never required to succeed. A
conforming (but useless) implementation might always return NULL.
I found that the allocation fails on the smaller 32K arrays.


The first one? Or do some succeed first?

It never did work. I noticed the problem several months ago when using test
files that had only 12K lines instead of the 600K line monsters that I have
to deal with every day. I put in this hack code back then.

If I make them larger, the allocation succeeds. It doesn't seem like a pointer problem in my code could cause this failure, since I can make the buffer bigger and it is OK.
It's hard to know what is causing the problem, but a corrupt heap is one possibility.

You know, that's what I thought too, but I have linted my code and haven't found an error. I have been running with this hack for nearly 8 months now and I have never seen any strange behaviour. Plus, what would make the heap start to work by using a bigger buffer? The reason I looked at it again was that new development caused another problem. In researching it I came across this one again. (I solved the other problem.)


Here's the code that I use in the DLL:

int _cdecl mmDBase_dataAlloc
    ( OUT void **ppdData
    , IN int nNumPts
    , IN int nElementSize)
{
    // \todo: The allocator doesn't work well if there is less than 2^19 bytes
    const int knMinBytes = 524288;
    if ((nNumPts * nElementSize) < knMinBytes) {
        /* DEBUG -> */ nNumPts = (int)(((double)knMinBytes / nElementSize) + 0.5);
    }
    (*ppdData) = calloc((size_t)nNumPts, (size_t)nElementSize);
    if (NULL == *ppdData) {
        char acMsg[120];
        _snprintf(acMsg, sizeof(acMsg), MM_THISFUNC ": Memory allocation failed: Points:%d, SizeOf:%d, Pointer:0x%p"
            , nNumPts, nElementSize, ppdData);
        return mmVisa_errUpdateErrorMsg(VI_ERROR_ALLOC, acMsg);
    }
    return 0;
} // mmDBase_dataAlloc()

I set a breakpoint at /* DEBUG -> */ and try the allocation with the desired num bytes and it always fails. If I set the num bytes to the min size (512K bytes) it always succeeds. I just don't get it. Any suggestions?

The behaviour you describe is quite strange. Do you get the exact same behaviour with the debug version of the CRT (which uses an error checking memory allocator)? What if you use Win32 memory allocation functions directly, rather than the CRT ones?

I am using the debug CRT - From MSDN:
Multi-threaded Debug DLL (/MDd)
Defines _DEBUG, _MT, and _DLL so that debug multithread- and DLL-specific versions of the run-time routines are selected from the standard .h files. It also causes the compiler to place the library name MSVCRTD.lib into the .obj file.

However, I have had trouble using the heap debug functions like _malloc_dbg etc. I am using GSL (GNU Scientific Library) in the code and it seems like its stdlib.h conflicts with CRTDBG.H.


Tom

Nov 17 '05 #3


mef526 wrote:
"Tom Widmer" <to********@hotmail.com> wrote in message
news:%2***************@TK2MSFTNGP15.phx.gbl...
mef526 wrote:
I have had this problem for months now and it has been nagging me. I have a large project that has several C++ DLLs; one of them uses malloc / calloc / free for the other DLLs.
What do you mean, "for the other DLLs"? Do you mean that this one DLL
handles memory for the others?


Yes - MS recommends that all memory allocation / deallocation take place in a single DLL so as to avoid heap problems.


Yes, this is the best way to avoid problems.
It's hard to know what is causing the problem, but a corrupt heap is one
possibility.


You know, that's what I thought too, but I have linted my code and haven't found an error. I have been running with this hack for nearly 8 months now and I have never seen any strange behaviour. Plus, what would make the heap start to work by using a bigger buffer? The reason I looked at it again was that new development caused another problem. In researching it I came across it again. (I solved the other problem.)


Hmmm. What happens if you allocate the small arrays before the large
ones? Have you done a lot of allocations prior to the first failure?
The behaviour you describe is quite strange. Do you get the exact same
behaviour with the debug version of the CRT (which uses an error checking
memory allocator)? What if you use Win32 memory allocation functions
directly, rather than the CRT ones?


I am using the debug CRT - From MSDN:
Multi-threaded Debug DLL (/MDd)
Defines _DEBUG, _MT, and _DLL so that debug multithread- and DLL-specific
versions of the run-time routines are selected from the standard .h files.
It also causes the compiler to place the library name MSVCRTD.lib into the
.obj file.


Have you tried the release version then? It is possible you're suffering
from a strange artefact of the allocator implementation. Smaller blocks
might be allocated using one technique and larger ones another, and the
smaller block allocation technique is failing for some reason. Stepping
into the allocation calls might shed some light on this, if you
understand the basics of writing allocators.
However, I have had trouble using the heap debug functions like _malloc_dbg etc. I am using GSL (GNU Scientific Library) in the code and it seems like its stdlib.h conflicts with CRTDBG.H.


A solution to your problem might be to use HeapCreate and HeapAlloc for
memory allocation, which will only involve modifying a few lines of code.

Tom
Nov 17 '05 #5

I have spent more time with this issue now and it looks like a heap corruption problem. I have traced it down to a call to strdup(). I recoded it to use malloc instead of strdup and the problem has gone away, AFAICT. In the strdup I do not pass the pointer across the DLL boundary; it is just for scratch within a single function. It seems like strdup uses a different allocator and heap than the free() that is done later in the function. Here are some helpful links that I found so far:

http://mail.python.org/pipermail/pyt...er/256723.html

and also

http://support.microsoft.com/kb/q190799/

http://www.tech-archive.net/Archive/...5-02/0564.html

MS won't cop to the problem occurring within the same DLL, though. One thread I was reading recommended turning off intrinsics.
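For reference, the strdup replacement described above might look roughly like this (a minimal sketch, not the actual project code; the helper name is hypothetical). It keeps the duplicate on the same malloc/free heap as the rest of the module:

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a string with the module's own malloc(), so the later free()
 * in the same function releases it on the same heap. */
static char *dup_on_local_heap(const char *pszSrc)
{
    size_t nLen = strlen(pszSrc) + 1;        /* include the terminator */
    char *pszCopy = (char *)malloc(nLen);
    if (pszCopy != NULL)
        memcpy(pszCopy, pszSrc, nLen);
    return pszCopy;
}
```

Functionally it is identical to strdup; the only difference is that the malloc here is guaranteed to come from the same CRT instance as the matching free.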


"Tom Widmer" <to********@hotmail.com> wrote in message
news:%2****************@TK2MSFTNGP15.phx.gbl...
mef526 wrote:
"Tom Widmer" <to********@hotmail.com> wrote in message
news:%2***************@TK2MSFTNGP15.phx.gbl...
mef526 wrote:

I have had this problem for months now and it has been nagging me. I have a large project that has several C++ DLLs; one of them uses malloc / calloc / free for the other DLLs.

What do you mean, "for the other DLLs"? Do you mean that this one DLL
handles memory for the others?


Yes - MS recommends that all memory allocation / deallocation take place in a single DLL so as to avoid heap problems.


Yes, this is the best way to avoid problems.
It's hard to know what is causing the problem, but a corrupt heap is one
possibility.


You know, that's what I thought too, but I have linted my code and haven't found an error. I have been running with this hack for nearly 8 months now and I have never seen any strange behaviour. Plus, what would make the heap start to work by using a bigger buffer? The reason I looked at it again was that new development caused another problem. In researching it I came across it again. (I solved the other problem.)


Hmmm. What happens if you allocate the small arrays before the large ones?
Have you done a lot of allocations prior to the first failure?
The behaviour you describe is quite strange. Do you get the exact same
behaviour with the debug version of the CRT (which uses an error checking
memory allocator)? What if you use Win32 memory allocation functions
directly, rather than the CRT ones?


I am using the debug CRT - From MSDN:
Multi-threaded Debug DLL (/MDd)
Defines _DEBUG, _MT, and _DLL so that debug multithread- and DLL-specific
versions of the run-time routines are selected from the standard .h
files. It also causes the compiler to place the library name MSVCRTD.lib
into the .obj file.


Have you tried the release version then? It is possible you're suffering
from a strange artefact of the allocator implementation. Smaller blocks
might be allocated using one technique and larger ones another, and the
smaller block allocation technique is failing for some reason. Stepping
into the allocation calls might shed some light on this, if you understand
the basics of writing allocators.
However, I have had trouble using the heap debug functions like _malloc_dbg etc. I am using GSL (GNU Scientific Library) in the code and it seems like its stdlib.h conflicts with CRTDBG.H.


A solution to your problem might be to use HeapCreate and HeapAlloc for
memory allocation, which will only involve modifying a few lines of code.

Tom

Nov 17 '05 #6

mef526 wrote:
I have spent more time with this issue now and it looks like a heap corruption problem. I have traced it down to a call to strdup(). I recoded it to use malloc instead of strdup and the problem has gone away, AFAICT. In the strdup I do not pass the pointer across the DLL boundary; it is just for scratch within a single function. It seems like strdup uses a different allocator and heap than the free() that is done later in the function. Here's a helpful link that I found so far:

http://mail.python.org/pipermail/pyt...er/256723.html

MS won't cop to the problem occurring within the same DLL, though. One thread I was reading recommended turning off intrinsics.


I don't think intrinsics are part of the problem since strdup, malloc
and free aren't intrinsics AFAIK. I'm also not quite sure how strdup is
resolved to be the same as _strdup.

My best guess is that you have some messed up linker settings. Glad
you've solved it in another way, anyway.

Tom
Nov 17 '05 #7

Tom Widmer wrote:
mef526 wrote:
I have spent more time with this issue now and it looks like a heap
corruption problem. I have traced it down to a call to strdup(). I
recoded it to use malloc instead of strdup and the problem has gone
away AFAICT. In the strdup I do not pass the pointer across the DLL
boundary, it is just for scratch within a single function. It seems
like the strdup uses a different allocator and heap than the free()
that is done later in the function. Here's a helpful link that I found
so far-

http://mail.python.org/pipermail/pyt...er/256723.html


MS won't cop to the problem occurring within the same DLL, though. One thread I was reading recommended turning off intrinsics.

I don't think intrinsics are part of the problem since strdup, malloc
and free aren't intrinsics AFAIK. I'm also not quite sure how strdup is
resolved to be the same as _strdup.

My best guess is that you have some messed up linker settings. Glad
you've solved it in another way, anyway.

Tom

To track down memory corruption errors in code, Application Verifier is a great tool. See the following link:
<http://msdn.microsoft.com/library/default.asp?url=/library/en-us/appverif/appverif/overview_of_application_verifier.asp>

Ronald Laeremans
Visual C++ team
Nov 17 '05 #8
