stdext::hash_map

I have a problem deleting a large stdext::hash_map. I defined a comparator (traits class) in which I set min_bucket_size for a speed improvement, since I know I need to store a large amount of data (~1 million entries). Everything seems fine and works fast, but the problems start when I try to .clear() or delete the hash_map: it seems to take an enormous amount of time to complete. Any suggestions for solving this problem?
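
For reference, the setup described sounds roughly like the following sketch (the member name, key type, and sizes are assumptions; MSVC's stdext::hash_map takes a traits class whose min_buckets constant fixes the initial bucket count):

#include <hash_map>   // MSVC-specific header; provides stdext::hash_map

// Sketch of a traits class with a raised minimum bucket count;
// hashing and ordering are inherited from the default hash_compare.
struct big_traits : public stdext::hash_compare< int >
{
    enum { min_buckets = 1 << 20 };   // start big, for ~1M entries
};

stdext::hash_map< int, int, big_traits > table;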

Jun 6 '06 #1
Dymus wrote:
I have a problem deleting a large stdext::hash_map. I defined a comparator (traits class) in which I set min_bucket_size for a speed improvement, since I know I need to store a large amount of data (~1 million entries). Everything seems fine and works fast, but the problems start when I try to .clear() or delete the hash_map: it seems to take an enormous amount of time to complete. Any suggestions for solving this problem?


Define "enormous amount of time." 2 seconds, 30 minutes, a week? What
happens if you allocate 1 million separate chunks of memory the same
size as one key/value pair in your hash table and then try to delete
them all? Here's a simpler case: How long does the last line of the
following take?

#include <list>

// Substitute your value and key types here
struct MyData { int key; int value[ 3 ]; };

void Foo()
{
    std::list< MyData > lst;
    for( unsigned i = 0; i < 1000000U; ++i )
    {
        lst.push_back( MyData() );
    }

    // Swap trick to clear the list and free its nodes, too
    std::list<MyData>().swap( lst );
}

It might just be that it takes a while to free that much memory when
allocated separately since it is O( N ) to delete.
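
For what it's worth, a minimal way to time that test might be (a sketch, using std::clock from <ctime>):

#include <cstdio>
#include <ctime>

int main()
{
    std::clock_t start = std::clock();
    Foo();   // build and destroy the million-node list above
    std::printf( "Foo() took %.2f s\n",
                 double( std::clock() - start ) / CLOCKS_PER_SEC );
    return 0;
}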

Cheers! --M

Jun 6 '06 #2
> [test code snipped; see the previous post]
>
> It might just be that it takes a while to free that much memory when
> allocated separately since it is O( N ) to delete.


1: "ennormous time ... I left programm running for more than an hour,
and it still was working, however, there was no endless loop, since it
was stopped during the night (unfortunatelly log of running isn't
availabl, to see how much time it took.
2: this code works good, less than minute to create empty list, and
destroy it later, but does it solve the problem of quick search of
unique keys???
3: big thanks :)

Jun 7 '06 #3
Dymus wrote:
[quoted test code snipped; see post #2]

1: "ennormous time ... I left programm running for more than an hour,
and it still was working, however, there was no endless loop, since it
was stopped during the night (unfortunatelly log of running isn't
availabl, to see how much time it took.
2: this code works good, less than minute to create empty list, and
destroy it later, but does it solve the problem of quick search of
unique keys???
3: big thanks :)


1. Well, it sounds like you have other problems, then. Reduce your
program to a *minimal* but *complete* program that demonstrates the
problem, and post it here. I'm guessing that you'll either figure out
the problem while doing that or we'll be able to help you figure it
out.

2. That code was just to test how long it took to free chunks of memory
allocated separately; it does nothing for the quick search problem
(quite the contrary!).
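
For the search itself, it's the hash_map that provides fast lookup of unique keys. A minimal sketch, with placeholder types and values:

#include <hash_map>   // MSVC-specific header; provides stdext::hash_map
#include <iostream>

int main()
{
    stdext::hash_map< int, int > table;
    table[ 42 ] = 7;

    // find() is an average O(1) hash lookup, unlike scanning a std::list
    stdext::hash_map< int, int >::iterator it = table.find( 42 );
    if( it != table.end() )
        std::cout << it->second << '\n';
    return 0;
}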

Cheers! --M

Jun 7 '06 #4

Dymus wrote:
I have a problem deleting a large stdext::hash_map. I defined a comparator (traits class) in which I set min_bucket_size for a speed improvement, since I know I need to store a large amount of data (~1 million entries). Everything seems fine and works fast, but the problems start when I try to .clear() or delete the hash_map: it seems to take an enormous amount of time to complete. Any suggestions for solving this problem?


Possible memory corruption?

I've seen this with gcc: after calling pop_back() on an empty vector, destruction took several seconds when the vector went out of scope, and after that the program continued to run correctly.
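
For reference, a minimal sketch of the kind of bug described: pop_back() on an empty vector is undefined behavior and can silently corrupt the heap, which only shows up later:

#include <vector>

int main()
{
    std::vector< int > v;
    v.pop_back();   // undefined behavior: v is empty; this can corrupt
                    // heap bookkeeping so that later destruction (of this
                    // vector, or of a big hash_map) becomes pathologically slow
    return 0;
}   // v is destroyed here; with a corrupted heap this may stall or crash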

Jun 7 '06 #5
