On Mar 1, 1:04 pm, "Peter Nilsson" <a...@acay.com.au> wrote:
"user923005" <dcor...@connx.com> wrote:
"ababeel" <farooq.o...@gmail.com> wrote:
Yes...
I changed that, removed the if(newHashes) bit...still the same
though...
Please don't top post in comp.lang.c.
On Mar 1, 12:17 pm, Ben Pfaff <b...@cs.stanford.edu> wrote:
"ababeel" <farooq.o...@gmail.com> writes:
char **newHashes;
if(newHashes)
free(newHashes);
That's a bug. You can't use the value of a variable you
haven't initialized. You certainly can't free a pointer
that has an indeterminate value.
If the routine is not too long (a couple hundred lines at most)
then post it.
We can't guess what you are doing wrong without seeing the code
in full.
We don't need the code in full, just the smallest compilable
snippet that exhibits the problem. Getting that snippet means
work for the OP, but what most OPs don't realise is that the
exercise is not just a usenet courtesy, it's an extremely
useful debugging technique in its own right.
--
Peter
char *hash_lookup(t_hash_ctrl *p_hash_ctrl, char *hash_key,
                  size_t createSize, boolean *p_bCreated)
{
size_t i;
long malloc_size = 0; // size to malloc for the data buffer (when used_limit == used_items)
t_dataStruct * tempPtr = 0;
*p_bCreated = FALSE;
char **newHashes = 0;
unsigned long h = hash(hash_key, p_hash_ctrl->key_size);
for(i = h % (p_hash_ctrl->hash_size); p_hash_ctrl->p_hashes[i];
    i == 0 ? i = p_hash_ctrl->hash_size - 1 : --i)
{
if(!memcmp(hash_key, p_hash_ctrl->p_hashes[i] + p_hash_ctrl->key_offset - 1,
           p_hash_ctrl->key_size))
return p_hash_ctrl->p_hashes[i];
else
p_hash_ctrl->search_steps++;
}
// If we are only looking for an existing
// item then return that we haven't found it.
if(!createSize)
{
//printf("NO CREATE SIZE\n");
return NULL;
}
// If we have hit the hash table size limit
// then extend the hash table by a factor of 2.
ASSERT(p_hash_ctrl->used_limit < p_hash_ctrl->hash_size);
if(p_hash_ctrl->used_items == p_hash_ctrl->used_limit)
{
size_t new_hash_size = p_hash_ctrl->hash_size << 1; // new_hash_size = hash_size * 2;
size_t new_used_limit = new_hash_size >> 1; // new_used_limit = new_hash_size / 2;
newHashes = calloc(new_hash_size, sizeof(char *));
ASSERT(new_hash_size > p_hash_ctrl->hash_size);
if(!newHashes)
syntax("Unable to allocate memory for newHashes");
for(i=0; i < p_hash_ctrl->hash_size; i++)
if(p_hash_ctrl->p_hashes[i])
{
size_t j;
for(j = hash(p_hash_ctrl->p_hashes[i] + p_hash_ctrl->key_offset - 1,
             p_hash_ctrl->key_size) % new_hash_size;
    newHashes[j];
    j == 0 ? j = new_hash_size - 1 : --j)
{}
ASSERT(j>=0 && j<new_hash_size);
newHashes[j] = p_hash_ctrl->p_hashes[i];
}
// create a new data buffer
// and copy the address of the previous
// in the first prevAdd variable in hdr
malloc_size = ((new_used_limit - p_hash_ctrl->used_limit) * p_hash_ctrl->data_size);
tempPtr = p_hash_ctrl->dataStruct;
p_hash_ctrl->dataStruct = malloc(sizeof(t_header) + malloc_size);
if(!p_hash_ctrl->dataStruct)
syntax("Could not allocate memory for the data buffer");
p_hash_ctrl->dataStruct->hdr.prevAdd = tempPtr;
p_hash_ctrl->dataStruct->hdr.data_items = 0;
p_hash_ctrl->dataStruct->p_data = (char *)p_hash_ctrl->dataStruct + sizeof(t_dataStruct);
free(p_hash_ctrl->p_hashes);
p_hash_ctrl->p_hashes = newHashes;
p_hash_ctrl->hash_size = new_hash_size;
p_hash_ctrl->used_limit = new_used_limit;
free(newHashes);
for(i = h % (p_hash_ctrl->hash_size); p_hash_ctrl->p_hashes[i];
i == 0 ? i = p_hash_ctrl->hash_size - 1 : --i)
{}
}
// Add new item to hash table.
*p_bCreated = TRUE;
p_hash_ctrl->p_hashes[i] = &p_hash_ctrl->dataStruct->
    p_data[p_hash_ctrl->dataStruct->hdr.data_items++ * p_hash_ctrl->data_size];
p_hash_ctrl->used_items++;
return p_hash_ctrl->p_hashes[i];
}
Apologies for the earlier "top post"; I am still new to using groups
and not too familiar with the etiquette yet, will get there :). The
code above is the routine in which the crash occurs. Reproducing the
problem with a snippet of code is a bit difficult because the problem
occurs in an update run that is part of a big system; basically this
is where I narrowed the access violation down to. And as I said in my
earlier post, the hash table routine itself executes fine, but it
causes some memory corruption which later on causes the fopen error.
It only happens if we get into the newHashes calloc....