strncmp performance

Hi,
I have an application where I use the strncmp function extensively to
compare two strings to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memcmp function
which is very fast but it doesn't work the same way as strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1, const char* s2){
    while(*s1 && *s2 && *s1 == *s2){
        s1++; s2++;
    }
    return *s1 == *s2;
}

but I don't get much performance gain. Any suggestions?

Thanks!
Nov 14 '05 #1
26 replies, 12,914 views
The problem is probably not in strncmp(), but elsewhere.

Comparing strings linearly is an expensive proposition
when there are many strings to be compared. If there's
only a handful (10 or so) it's not bad, if there are
thousands, then strncmp() without an intelligent
algorithm (e.g. binary search) behind it is probably
not the way to go.
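
For illustration only (the key list and helper below are invented, not from the post): if the set of keys can be kept sorted, the standard library's bsearch() brings each lookup down to O(log n) string comparisons instead of a linear scan.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Comparator for an array of string pointers: bsearch() passes
 * pointers to the elements, i.e. pointers to char pointers. */
static int cmp_keys(const void *a, const void *b)
{
    return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
    /* The key array must already be sorted (e.g. with qsort). */
    const char *keys[] = { "apple", "auto", "banana", "zebra" };
    const char *wanted = "auto";

    const char **hit = bsearch(&wanted, keys,
                               sizeof keys / sizeof keys[0],
                               sizeof keys[0], cmp_keys);
    printf("%s\n", hit ? "found" : "not found");
    return 0;
}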

Try a newsgroup dealing with algorithms. Maybe
comp.programming?

pembed2003 wrote:
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!


--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #2

"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Probably because the native strncmp is highly optimized code? E.g. doing
word-comparisons instead of char comparisons can easily give you up to a
2x/4x/8x/etc speedup [based on your machine register size]. It also makes
the code a bit more complex, but for strings longer than a dozen or so chars
it is faster.
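
A rough sketch of what "word comparison" means here -- this is not any particular library's strncmp, and it assumes (a genuine assumption, not something portable C guarantees) that both buffers are padded so they can safely be read in 8-byte chunks; a real implementation also has to deal with alignment and page boundaries:

#include <stdint.h>
#include <string.h>

static int words_equal(const char *s1, const char *s2)
{
    const uint64_t ones  = 0x0101010101010101ULL;
    const uint64_t highs = 0x8080808080808080ULL;

    for (;;) {
        uint64_t w1, w2;
        memcpy(&w1, s1, sizeof w1);   /* memcpy sidesteps alignment traps */
        memcpy(&w2, s2, sizeof w2);

        /* If this chunk contains the terminating '\0', or the chunks
         * differ, settle the question byte by byte. */
        if (((w1 - ones) & ~w1 & highs) || w1 != w2) {
            size_t i;
            for (i = 0; i < sizeof w1; i++) {
                if (s1[i] != s2[i]) return 0;
                if (s1[i] == '\0')  return 1;
            }
        }
        s1 += sizeof w1;
        s2 += sizeof w2;
    }
}

The "does this word contain a zero byte" test is the well-known bit trick; everything else is an ordinary equality loop taken eight characters at a time.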

Tom
Nov 14 '05 #3

"Tom St Denis" <to********@iahu.ca> wrote in message
news:RN*******************@news04.bloor.is.net.cab le.rogers.com...

"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?
Probably because the native strncmp is highly optimized code? E.g. doing
word-comparisons instead of char comparisons can easily give you upto
2x/4x/8x/etc speedup [based on your machine register size]. It also makes
the code a bit more complex but for strings longer than a dozen or so

chars is faster.

Tom

Combining Tom's suggestion with unrolling appropriate to the target
architecture should give additional performance on longer strings. Note
that the multi-byte comparisons won't gain speed without adjustments for
alignment, even on architectures which have alignment fixup. There's no
portable C way to optimize this for a given architecture, so doing it
takes you beyond standard C on the most popular architectures.
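
For what it's worth, "unrolling" here just means handling several characters per loop iteration. A sketch (whether this actually wins is compiler- and machine-dependent, and many compilers already unroll such loops themselves):

/* Equality test, four characters per iteration; it never reads past
 * the terminating '\0' because the terminator is checked right after
 * each matching pair. */
static int str_equal_unrolled(const char *a, const char *b)
{
    for (;;) {
        if (a[0] != b[0]) return 0;
        if (a[0] == '\0') return 1;
        if (a[1] != b[1]) return 0;
        if (a[1] == '\0') return 1;
        if (a[2] != b[2]) return 0;
        if (a[2] == '\0') return 1;
        if (a[3] != b[3]) return 0;
        if (a[3] == '\0') return 1;
        a += 4;
        b += 4;
    }
}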
Nov 14 '05 #4
Nick Landsberg <hu*****@att.net> wrote in message news:<fA**********************@bgtnsc05-news.ops.worldnet.att.net>...
The problem is probably not in strncmp(), but elsewhere.
No, I timed the function and found that strcmp is the slowest
part of all. Without it (of course, I can't really remove it), the
whole function is much, much faster.

Comparing strings linearly is an expensive proposition
when there are many strings to be compared. If there's
only a handful (10 or so) it's not bad, if there are
thousands, then strncmp() without an intelligent
algorithm (e.g. binary search) behind it is probably
not the way to go.
The function will eventually be moved to a server to do key/value pair
lookup so it will be called tens of thousands of times and that's why
I want to optimize it.

Try a newsgroup dealing with algorithms. Maybe
comp.programming?

If I can't get a good answer here, I will try there. Thanks!


pembed2003 wrote:
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!

Nov 14 '05 #5
"Tom St Denis" <to********@iahu.ca> wrote in message news:<RN*******************@news04.bloor.is.net.ca ble.rogers.com>...
"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Probably because the native strncmp is highly optimized code? E.g. doing
word-comparisons instead of char comparisons can easily give you upto
2x/4x/8x/etc speedup [based on your machine register size]. It also makes
the code a bit more complex but for strings longer than a dozen or so chars
is faster.

Tom


Thanks but I can't use word-comparisons because I need to know if the
strings match exactly or not, regardless of case.
Nov 14 '05 #6
In article <db*************************@posting.google.com> ,
pe********@yahoo.com (pembed2003) wrote:
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
This definitely does too many comparisons. If *s1 == *s2 then *s1 and *s2
are either both zero, or neither of them is zero, so it is absolutely
pointless to check both of them.

s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Instead of worrying about the performance of strncmp, you should ask
yourself why you make so many calls to it that it matters at all.

What are you doing that requires millions of calls to strncmp?

By the way, why strncmp and not strcmp?
Nov 14 '05 #7
In article <rl*******************@newssvr27.news.prodigy.com> ,
"Tim Prince" <tp*****@computer.org> wrote:
Combining Tom's suggestion with unrolling appropriate to the target
architecture should give additional performance on longer strings. Note
that the multi-byte comparisons won't gain speed without adjustments for
alignment, even on architectures which have alignment fixup. There's no
portable C way to optimize this for a given architecture, and then this
takes you beyond standard C on the most popular architectures.


User code has to be portable. strncmp implementations don't.
Nov 14 '05 #8
In article <db**************************@posting.google.com >,
pe********@yahoo.com (pembed2003) wrote:
The function will eventually be moved to a server to do key/value pair
lookup so it will be called tens of thousands of times and that's why
I want to optimize it.


Oh shit.

If I do a key/value pair lookup, no comparison function will be called
tens of thousands of times. Less than two compares on the average. I
suggest you buy yourself a nice book about data structures.
Nov 14 '05 #9

"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!


Have you tried forcing your compiler to use inline code rather than a
function call (either by implementing this as a macro or by using the
'inline' keyword)? You might find that setting up the stack frame (or
whatever) prior to making each call to your string_equal() function takes
just as long as the function itself.

Also, it appears that you are implementing (a simplified version of) strcmp()
here, NOT strncmp(), and you are evaluating an extra condition in your loop
(drop the check on s2 for '\0'):

int string_equal(const char * s1, const char * s2){
    while(*s1 && *s1 == *s2){
        s1++; s2++;
    }
    return *s1 == *s2;
}
Regarding memcmp(), this is likely to be highly optimised and much faster
than strcmp(), but the problem is that you need to pass to it the length of
the strings that you are comparing. There might be a way you can make
use of memcmp() if you know that the strings are the same length. Do you
get this information somewhere earlier in your code (e.g. when you first read
in the key value)? Otherwise try:

memcpy(s1,s2,strlen(s1));

It might still be faster than doing strcmp() even with the extra strlen()
call...

Sean

Nov 14 '05 #10
pembed2003 wrote:

I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:


String comparison can be inherently slow, because it has to
examine each character. It would be better to detail your
application and ask for advice on algorithms. comp.programming is
a suitable place for that.

For example, strings might be hashed, so that very few actual
comparisons need be made.

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!
Nov 14 '05 #11
pembed2003 wrote:
"Tom St Denis" <to********@iahu.ca> wrote in message news:<RN*******************@news04.bloor.is.net.ca ble.rogers.com>...
"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google. com...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Probably because the native strncmp is highly optimized code? E.g. doing
word-comparisons instead of char comparisons can easily give you upto
2x/4x/8x/etc speedup [based on your machine register size]. It also makes
the code a bit more complex but for strings longer than a dozen or so chars
is faster.

Tom

Thanks but I can't use word-comparisons because I need to know if the
strings match exactly or not, regardless of case.


Alas, you misunderstand here. ;-(
"Word", in the respondents case refers not to words like, well "word"
but "machine word", i.e. the number of bytes-at-a-time that's native to
your machine's architecture.

HTH,
--ag

--
Artie Gold -- Austin, Texas
Nov 14 '05 #12
"Sean Kenwrick" <sk*******@hotmail.com> wrote in message news:<bv**********@sparta.btinternet.com>...
"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!
Have you tried forcing your compiler to use inline code rather than a
function call (either by implementing this as a Macro or by using the
'inline' keyword. You might find that setting up the stack frame (or
whatever) prior to making each call to your string_equal() function takes
just as long as the function itself.

Also it appears that you are implementing (a simplified version of) strcmp()
here NOT strncmp() and you are evaluating an extra condition in your loop
(drop the check on s2 for '\0'):.

int string_equal(const char * s1, const char * s2){
while(*s1 && * s1==*s2){
s1++; s2++;
}
return *s1==*s2;
}
Regarding memcmp() this is likely to be highly optimised and much faster
than strcmp(), but the problem is that you need to pass to it the length of
the strings that you are comparing. There might be a way you can make
use of memcmp() if you know that the strings are the same length. Do you
get this information somewhere earlier in your code (e.g when you first read
in the key value?). Otherwise try:

memcpy(s1,s2,strlen(s1));


Do you mean memcmp() here instead? Yes, I think memcmp is faster than
strcmp or strncmp, but I need to find out the longer string and pass in
its length. Otherwise, something like:

char* s1 = "auto";
char* s2 = "auto insurance";

memcmp(s1,s2,strlen(s1));

will return 0, which isn't right. I will need to do the extra work like:

int l1 = strlen(s1);
int l2 = strlen(s2);

memcmp(s1,s2,l1 > l2 ? l1 : l2);

Do you think that will be faster than a strcmp or strncmp?

It might still be faster than doing strcmp() even with the extra strlen()
call...

Sean

Nov 14 '05 #13
Christian Bau <ch***********@cbau.freeserve.co.uk> wrote in message news:<ch*********************************@slb-newsm1.svr.pol.co.uk>...
In article <db*************************@posting.google.com> ,
pe********@yahoo.com (pembed2003) wrote:
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){


This definitely does to many comparisons. If *s1 == *s2 then *s1 and *s2
are either both zero, or non of them is zero, so it is absolutely
pointless to check both of them.

s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Instead of worrying about the performance of strncmp, you should ask
yourself why you make so many calls to it that it matters at all.


I have a very simple hash table where the keys are strings and the values
are also strings. What I want to let people do is:

hash_insert("key","value");

and then later retrieve "value" by saying:

hash_lookup("key");

The hash algorithm is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
    unsigned long int c = 0;
    while(*s){
        c = 131 * c + *s;
        s++;
    }
    return c % MAX_HASH_SIZE;
}

It's possible for 2 different strings to have the same hash code, so
when I am doing a lookup, I am doing something like:

char* hash_lookup(const char* s){
    unsigned long c = hash_code(s);
    // Note I can't simply return the slot at c because
    // it's possible that another different string is at
    // this spot because of the hash algr. So I need to...
    if(strcmp(s,(hash+c)->value) == 0){
        // Found it...
    }else{
        // Do something else
    }
}

So because of my hash algorithm, an extra strcmp is needed.
Nov 14 '05 #14
pe********@yahoo.com (pembed2003) wrote in message news:<db**************************@posting.google. com>...
"Tom St Denis" <to********@iahu.ca> wrote in message news:<RN*******************@news04.bloor.is.net.ca ble.rogers.com>...
"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Probably because the native strncmp is highly optimized code? E.g. doing
word-comparisons instead of char comparisons can easily give you upto
2x/4x/8x/etc speedup [based on your machine register size]. It also makes
the code a bit more complex but for strings longer than a dozen or so chars
is faster.

Tom


Thanks but I can't use word-comparisons because I need to know if the
strings match exactly or not, regardless of case.


Your usage from above doesn't explain strncmp, no length is passed in.
Nor does it work "regardless of case". You should describe your
problem more exactly, as just not doing the strncmp (or strcmps) may
be the correct answer. e.g. if you are looking up key/value pairs as
you say elsethread, the problem may be solved using a hashtable/hash
algorithm that minimizes collisions. You won't do many strcmps
per lookup under those circumstances.

If you are already using a nice algorithm, and still DO need to do
lots of strcmps, a few things might be useful to save time. For
example, you could compute the length of the keys of the key/value
pairs once and save that length with the key. Figure out the length
of the query. If those lengths aren't identical, skip the strcmp
call. If the lengths are known to be identical, your string_equal (a
name that is not well advised, as things starting str followed by a
lower case letter are reserved for the implementation) could be as
below.

#define STRN_EQUAL(s1,s2,len) (memcmp(s1,s2,len) == 0)

If all your keys are of the same length, well, you probably lose on
what I said above. If their lengths vary greatly, you probably win.
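
A minimal sketch of that length-precheck idea (the struct and function names below are invented for illustration, not part of the thread):

#include <string.h>

/* Hypothetical table entry: the key's length is computed once, at
 * insertion time, and stored next to the key. */
struct entry {
    const char *key;
    size_t      keylen;
    const char *value;
};

/* A length mismatch rules the entry out without touching a single
 * character; only equal-length candidates reach memcmp(). */
static int entry_matches(const struct entry *e,
                         const char *query, size_t querylen)
{
    return e->keylen == querylen && memcmp(e->key, query, querylen) == 0;
}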

-David
Nov 14 '05 #15


Christian Bau wrote:
In article <db*************************@posting.google.com> ,
pe********@yahoo.com (pembed2003) wrote:

Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){

This definitely does to many comparisons. If *s1 == *s2 then *s1 and *s2
are either both zero, or non of them is zero, so it is absolutely
pointless to check both of them.
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Instead of worrying about the performance of strncmp, you should ask
yourself why you make so many calls to it that it matters at all.

What are you doing that requires millions of calls to strncmp?


This is indeed the key question. I have had a case where I had a large
number of names and had to repeatedly look for names in that list and
came up with a very different solution: I made an index over the list
and "matched" the names I was looking for. The point is, I didn't simply
iterate over the list or make some tree search or hash function, but a
finite state machine (FSM). For fixed strings such as names it is very easy
to make a 'deterministic finite automaton' (DFA), a class of FSMs. What you
do is, you create an FSM of which each state represents the substring matched
so far from the beginning, the start state representing nothing matched yet.
Each state that represents a complete string of the original list is marked
an 'accept' state. The matching algorithm is very simple: you start
from the start state and for each character of the name you are looking
for you find a transition to another state. If no transition is found the
name is not in the list and the search terminates. Otherwise, you follow
the transition to the next state and the process repeats until you don't
find a transition for the current character or you are at the end of the
name you are looking for. If the latter happens and you are in an accept
state, you have matched the name.

The advantage of this algorithm: the time complexity is O(n) where n is
the length of the name you are looking for. You might be tempted to say
it's O(n * t) where t is the number of transitions you have to follow at
each state, but t is limited by the size |T| of the character set and thus
we get O(n * |T|) = O(n).

If you have to, say, find x names in a list of y names the total complexity
becomes O(x * n) where n is now the average length of the x names, compared
with O(x * y * n) for linear search (the factor n then stems from strcmp) or
O(x * log y * n) for tree search. Hashing should have O(x * n) in the best
case. You probably don't get any faster than an FSM.

Building the FSM is just about as simple: for each name that you add to the
FSM you follow the FSM built so far, similarly to the matching algorithm
described above. When you don't find a transition, you add a new state plus
a transition to it and move to that state, and the algorithm continues from
there. The state you are in at the end of the added string is marked
'accept'. You can probably figure out for yourself how you would
remove names from the DFA; it's not a big deal.
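
A bare-bones version of such an automaton is just a trie keyed on character values. The sketch below illustrates the idea and is not the poster's code; error handling is omitted, and the 256-pointer transition array per state is exactly where the extra memory mentioned at the end of this post goes.

#include <stdlib.h>

/* One state: a transition per possible character value, plus an
 * "accept" flag marking the end of a stored name. */
struct state {
    struct state *next[256];
    int accept;
};

static struct state *state_new(void)
{
    return calloc(1, sizeof(struct state));   /* all transitions NULL */
}

/* Follow existing transitions, creating states as needed, and mark
 * the final state as accepting.  (A real version would check the
 * result of calloc.) */
static void dfa_add(struct state *start, const char *name)
{
    struct state *s = start;
    for (; *name; name++) {
        unsigned char c = (unsigned char)*name;
        if (s->next[c] == NULL)
            s->next[c] = state_new();
        s = s->next[c];
    }
    s->accept = 1;
}

/* One transition per character: O(length of the name). */
static int dfa_match(const struct state *start, const char *name)
{
    const struct state *s = start;
    for (; *name; name++) {
        s = s->next[(unsigned char)*name];
        if (s == NULL)
            return 0;            /* no transition: name not in the set */
    }
    return s->accept;
}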

Note that this is a very simplistic form of DFA: in general they can be much
more powerful and match patterns with repetitions (think of * in Unix file
name patterns). These are called regular expressions. To learn how to construct
such DFAs, read for instance 'Compilers: Principles, Techniques, and Tools' by Aho,
Sethi and Ullman (the 'dragon book'). Or you can generate them with e.g. lex.

Another note: you can use such an FSM as real index, pointing back to your
list of names by turning the 'accept' field into a pointer to the corresponding
list item (and NULL in states that are not accept states).

Finally, if it has advantages, it probably has disadvantages as well. That's
right: it consumes more memory and you had better take some care to do your
memory allocation efficiently: allocate states and transitions 1024 at a time
or some such.

--
ir. H.J.H.N. Kenter ^^
Electronic Design & Tools oo ) Philips Research Labs
Building WAY 3.23 =x= \ ar**********@philips.com
Prof. Holstlaan 4 (WAY31) | \ tel. +31 40 27 45334
5656 AA Eindhoven /|__ \ tfx. +31 40 27 44626
The Netherlands (____)_/ http://www.kenter.demon.nl/

Famous last words: Segmentation Fault (core dumped)

Nov 14 '05 #16

You probably missed my point -

Comparing strings using any flavor of strcmp(), strncmp(),
requires that both strings be "walked" until a mismatch
occurs or until the end of the string or until "n" is reached.
This is true whether it is written in C or assembler.

If you have to call strncmp() thousands of times,
of *course* you will be seeing that strncmp() takes the
bulk of your time when you profile the code.

My point was rather than trying to optimize strncmp(),
you should try to minimize the calls to strncmp() by being
more clever in your use of data structures. For example,
if the list of KEYS in the KEY/value pairs can be sorted
a priori, then the use of a binary search algorithm
within the list, rather than a linear search, can drastically
reduce the number of times strncmp() will be called.
Similarly, a hashing algorithm of some kind might be used.
That's why I recommended going to comp.programming, or
(if there is such a thing) comp.algorithmes or something like
that.

pembed2003 wrote:
Nick Landsberg <hu*****@att.net> wrote in message news:<fA**********************@bgtnsc05-news.ops.worldnet.att.net>...
The problem is probably not in strncmp(), but elsewhere.

No, I time the function and found out that the strcmp is the slowest
function of all. Without it (of course, I can't really remove it), the
whole function is much much faster.

Comparing strings linearly is an expensive proposition
when there are many strings to be compared. If there's
only a handful (10 or so) it's not bad, if there are
thousands, then strncmp() without an intelligent
algorithm (e.g. binary search) behind it is probably
not the way to go.

The function will eventually be moved to a server to do key/value pair
lookup so it will be called tens of thousands of times and that's why
I want to optimize it.

Try a newsgroup dealing with algorithms. Maybe
comp.programming?

If I can't get a good answer here, I will try there. Thanks!


pembed2003 wrote:

Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!


--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #17


pembed2003 wrote:

[ snip ]

I have a very simple hash table where the keys are string and values
are also string. What I want to let people do is:

hash_insert("key","value");

and then later retrive "value" by saying:

hash_lookup("key");

The hash algr. is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
unsigned long int c = 0;
while(*s){
c = 131 * c + *s;
s++;
}
return c % MAX_HASH_SIZE;
}

At first glance, that does not look like a very robust
hash algorithm and it MAY be why you are doing so many
strncmp() calls. Have you any data on how many strings
hash into the same hash code? If it's more than 2 or 3
on average, then you should revise the hash algorithm
rather than trying to optimize strcmp().

A good hash algorithm can get down
to about 1.5 probes/search. Try the CRC algorithm
for starters.

It's possible for 2 different string to have the same hash code,
Very, very true. See above about a better hash algorithm.

so when I am doing lookup, I am doing something like:

char* hash_lookup(const char* s){
unsigned long c = hash_code(s);
// Note I can't simply return the slot at c because
// it's possible that another different string is at
// this spot because of the hash algr. So I need to...
if(strcmp(s,(hash+c)->value) == 0){
// Found it...
}else{
// Do something else
}
}

So because of my hash algr., an extra strcmp is needed.


--
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

Nov 14 '05 #18
pe********@yahoo.com (pembed2003) wrote in message
<stuff snipped>

Do you mean memcmp() here instead? Yes I think memcpy is faster than
strcmp or strncmp but I need to find out the longer string and pass in
the lenght of that. Otherwise, something like:

char* s1 = "auto";
char* s2 = "auto insurance";

memcmp(s1,s2,strlen(s1));

will return 0 which isn't. I will need to do the extra work like:

int l1 = strlen(s1);
int l2 = strlen(s2);

memcmp(s1,s2,l1 > l2 ? l1 : l2);

Do you think that will be faster than a strcmp or strncmp?


As I mentioned in another part of the thread, if the lengths aren't the
same the strings aren't identical, in which case you must skip the memcmp.
Your memcmp will read memory past the end of one of the strings if the
lengths aren't equal, since you selected the longer of the two. But simply
not comparing at all is the better solution.

Having seen the rest of the thread, your best bet seems to me
to get a better hashing algorithm and a large
enough (growing if needed) hashtable that you have few collisions.
A little profiling of your hashtable should show how well/poorly
your hash function is doing.
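
One way to do that profiling, sketched purely as an illustration -- hash_code() and MAX_HASH_SIZE are the ones posted earlier in the thread, and report_collisions() is a made-up diagnostic helper:

#include <stdio.h>
#include <string.h>

/* Tally how many keys land in each slot and report the occupancy and
 * the worst slot.  Diagnostic code only. */
static void report_collisions(const char **keys, size_t nkeys)
{
    static size_t count[MAX_HASH_SIZE];
    size_t i, used = 0, worst = 0;

    memset(count, 0, sizeof count);
    for (i = 0; i < nkeys; i++)
        count[hash_code(keys[i])]++;

    for (i = 0; i < MAX_HASH_SIZE; i++) {
        if (count[i] > 0) used++;
        if (count[i] > worst) worst = count[i];
    }
    printf("%zu keys, %zu slots occupied, worst slot holds %zu keys\n",
           nkeys, used, worst);
}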

-David
Nov 14 '05 #19
In article <db*************************@posting.google.com> ,
pe********@yahoo.com (pembed2003) wrote:
It's possible for 2 different string to have the same hash code, so
when I am doing lookup, I am doing something like:

char* hash_lookup(const char* s){
unsigned long c = hash_code(s);
// Note I can't simply return the slot at c because
// it's possible that another different string is at
// this spot because of the hash algr. So I need to...
if(strcmp(s,(hash+c)->value) == 0){
// Found it...
}else{
// Do something else
}
}

So because of my hash algr., an extra strcmp is needed.


So you are worrying about the performance of not ten thousands of
strcmp's per key, but ten thousands of strcmp's in total? Please tell
us, what does the "Giga" in "Gigahertz" mean? How many strcmp's per
seconds can your machine do? Did you measure how long things take? Will
anybody notice if the code runs faster?
Nov 14 '05 #20

"pembed2003" <pe********@yahoo.com> wrote in message
news:db**************************@posting.google.c om...
"Sean Kenwrick" <sk*******@hotmail.com> wrote in message

news:<bv**********@sparta.btinternet.com>...
"pembed2003" <pe********@yahoo.com> wrote in message
news:db*************************@posting.google.co m...
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){
s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?

Thanks!


Have you tried forcing your compiler to use inline code rather than a
function call (either by implementing this as a Macro or by using the
'inline' keyword. You might find that setting up the stack frame (or whatever) prior to making each call to your string_equal() function takes just as long as the function itself.

Also it appears that you are implementing (a simplified version of) strcmp() here NOT strncmp() and you are evaluating an extra condition in your loop (drop the check on s2 for '\0'):.

int string_equal(const char * s1, const char * s2){
while(*s1 && * s1==*s2){
s1++; s2++;
}
return *s1==*s2;
}
Regarding memcmp() this is likely to be highly optimised and much faster
than strcmp(), but the problem is that you need to pass to it the length of the strings that you are comparing. There might be a way you can make use of memcmp() if you know that the strings are the same length. Do you get this information somewhere earlier in your code (e.g when you first read in the key value?). Otherwise try:

memcpy(s1,s2,strlen(s1));


Do you mean memcmp() here instead? Yes I think memcpy is faster than
strcmp or strncmp but I need to find out the longer string and pass in
the lenght of that. Otherwise, something like:

char* s1 = "auto";
char* s2 = "auto insurance";

memcmp(s1,s2,strlen(s1));

will return 0 which isn't. I will need to do the extra work like:

int l1 = strlen(s1);
int l2 = strlen(s2);

memcmp(s1,s2,l1 > l2 ? l1 : l2);

Do you think that will be faster than a strcmp or strncmp?


You need to examine your code for data-caching possibilities. What I mean
by this is that you evaluate a value at some stage which you keep and use
multiple times later on. In this case the important value is the length
of the strings. From a previous post it looks like you are evaluating
hash_keys() prior to posting keys into your lookup table - it seems that
this function is a likely candidate for calculating the length of the string
with little overhead (e.g save the original pointer and use pointer
arithmetic at the end to calculate the strlen()). You could then store
the length along with the other information in your lookup table.
Then you only need to do a memcmp() if the strings are of equal length, and
you already have the string lengths calculated if you do need to call
memcmp()...

Sean
Nov 14 '05 #21
Nick Landsberg wrote:
pembed2003 wrote:

[ snip ]

I have a very simple hash table where the keys are string and
values are also string. What I want to let people do is:

hash_insert("key","value");

and then later retrive "value" by saying:

hash_lookup("key");

The hash algr. is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
unsigned long int c = 0;
while(*s){
c = 131 * c + *s;
s++;
}
return c % MAX_HASH_SIZE;
}


At first glance, that does not look like a very robust
hash algorithm and it MAY be why you are doing so many
strncmp() calls. Have you any data on how many strings
hash into the same hash code? If it's more than 2 or 3
on average, then you should revise the hash algorithm
rather than trying to optimize strcmp().

A good hash algorithm can get down to about 1.5 probes/search.
Try the CRC algorithm for starters.


His function sounds much too dependent on the low order bits of
the last character hashed.

To experiment with hash functions and immediately see the
probes/search and other statistics, the OP could try using the
hashlib package. It was born out of an investigation into hash
functions. There are some sample string hashing routines, and
references to other methods.

<http://cbfalconer.home.att.net/download/hashlib.zip>

--
Chuck F (cb********@yahoo.com) (cb********@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Nov 14 '05 #22
pe********@yahoo.com (pembed2003) wrote in message news:<db*************************@posting.google.c om>...
Christian Bau <ch***********@cbau.freeserve.co.uk> wrote in message news:<ch*********************************@slb-newsm1.svr.pol.co.uk>...
In article <db*************************@posting.google.com> ,
pe********@yahoo.com (pembed2003) wrote:
Hi,
I have an application where I use the strncmp function extensively to
compare 2 string to see if they are the same or not. Does anyone know
a faster replacement for strncmp? I notice there is a memncmp function
which is very fast but it doesn't work the same like strncmp so I
don't think I can use it. I also tried to write the string_equal
function myself like:

int string_equal(const char* s1,const char* s2){
while(*s1 && *s2 && *s1 == *s2){


This definitely does to many comparisons. If *s1 == *s2 then *s1 and *s2
are either both zero, or non of them is zero, so it is absolutely
pointless to check both of them.

s1++; s2++;
}
return *s1 == *s2;
}

but I don't get much performance gain. Any suggestion?


Instead of worrying about the performance of strncmp, you should ask
yourself why you make so many calls to it that it matters at all.


I have a very simple hash table where the keys are string and values
are also string. What I want to let people do is:

hash_insert("key","value");

and then later retrive "value" by saying:

hash_lookup("key");

The hash algr. is very simple. It calculates the hash code like:

unsigned long hash_code(const char* s){
unsigned long int c = 0;
while(*s){
c = 131 * c + *s;
s++;
}
return c % MAX_HASH_SIZE;
}

It's possible for 2 different string to have the same hash code, so
when I am doing lookup, I am doing something like:

char* hash_lookup(const char* s){
unsigned long c = hash_code(s);
// Note I can't simply return the slot at c because
// it's possible that another different string is at
// this spot because of the hash algr. So I need to...
if(strcmp(s,(hash+c)->value) == 0){
// Found it...
}else{
// Do something else
}
}

So because of my hash algr., an extra strcmp is needed.


Since it is a hashing algorithm I would assume that the chance of the
first character of one string being the same as that of the other is very high.
Only if the hash table becomes extremely full does this change.

If this is the case replace:

if (strcmp (s, (hash + c)->value) == 0) {
// Found it...
}

with

if (s[0] == *((hash + c)->value))   /* Compare the first characters. */
{
    if (strcmp(s, (hash + c)->value) == 0)   /* Only now compare strings. */
    {
        // Found it...
    }
    else
    {
        // Do something else ..
    }
}
else
{
    // Do something else ..
}

This will kick out the cases where the strings don't match rather
faster. It removes the overhead of a function call and the beginning
of a loop (though neither amounts to much these days).

You may want to consider using a flag that labels a bucket as full
rather than comparing the string in the bucket to the key. This way
the first time the program looks up a label no comparison is done -
this will be insertion.

You may also want to consider:

* Using a better hash function (read
http://burtleburtle.net/bob/hash/index.html#lookup )

* Resizing the hash when it's nearly full

* Using linked lists as buckets (a rough sketch follows below).
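
A rough sketch of the "linked lists as buckets" point, reusing hash_code() and MAX_HASH_SIZE from the earlier post; the node type and function names are invented, and error handling is abbreviated (strdup is POSIX, not ISO C):

#include <stdlib.h>
#include <string.h>

/* Each slot holds a chain of nodes, so colliding keys coexist on the
 * chain instead of fighting over a single slot. */
struct node {
    char        *key;
    char        *value;
    struct node *next;
};

static struct node *bucket[MAX_HASH_SIZE];    /* all NULL initially */

static const char *chained_lookup(const char *key)
{
    const struct node *n;
    for (n = bucket[hash_code(key)]; n != NULL; n = n->next)
        if (strcmp(key, n->key) == 0)
            return n->value;                  /* found */
    return NULL;                              /* not present */
}

static int chained_insert(const char *key, const char *value)
{
    unsigned long h = hash_code(key);
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return -1;
    n->key   = strdup(key);
    n->value = strdup(value);
    n->next  = bucket[h];                     /* push onto the chain */
    bucket[h] = n;
    return 0;
}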
Nov 14 '05 #23
"Sean Kenwrick" <sk*******@hotmail.com> wrote in message news:<bv6r33

[snip]

Do you mean memcmp() here instead? Yes I think memcpy is faster than
strcmp or strncmp but I need to find out the longer string and pass in
the lenght of that. Otherwise, something like:

char* s1 = "auto";
char* s2 = "auto insurance";

memcmp(s1,s2,strlen(s1));

will return 0 which isn't. I will need to do the extra work like:

int l1 = strlen(s1);
int l2 = strlen(s2);

memcmp(s1,s2,l1 > l2 ? l1 : l2);

Do you think that will be faster than a strcmp or strncmp?


You need to examine your code for data-caching possibilities. What I mean
by this is that you evaluate a value at some stage which you keep and use
multiple times later on. In this case the important value is the length
of the strings. From a previous post it looks like you are evaluating
hash_keys() prior to posting keys into your lookup table - it seems that
this function is a likely candidate for calculating the length of the string
with little overhead (e.g save the original pointer and use pointer
arithmetic at the end to calculate the strlen()). You could then store
the length along with the other information in your lookup table.
Then you only need to do a memcmp() if the strings are of equal length, and
you already have the string lengths calculated if you do need to call
memcmp()...

Sean


In fact, that's exactly what I did now. If the lengths aren't the same,
there is no chance for the strings to be the same. If they are the
same length, memcmp can be used. So instead of doing strcmp (or
strncmp) I am doing either strlen and/or memcmp, which should be
faster. Another problem I have now encountered is that the string
passed in to my function is not from a C program; it's from a PHP
extension (which is written in C). Because of this, I sometimes get a
segfault, which I think is related to PHP not terminating the string with
the NUL character. My question is: does memcmp care whether there is a
NUL terminator somewhere or not? Is there any circumstance where memcmp
might segfault?

Thanks!
Nov 14 '05 #24
I've just been reading this thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/
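
For reference, a 32-bit FNV-1a variant looks roughly like this (a sketch using the published constants; it would simply replace the body of the hash_code() posted earlier, keeping the final modulo):

#include <stdint.h>

static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;            /* FNV offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;        /* xor in the next byte */
        h *= 16777619u;                  /* multiply by the FNV prime */
    }
    return h;
}

/* e.g.  slot = fnv1a(key) % MAX_HASH_SIZE; */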

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/

In this library, the length of each string is predetermined as they
are created or modified (this is very cheap, while leading to massive
performance improvements in some string functionality.) In this way
you can use memcmp() (which has the potential of being implemented to
use block comparisons) directly without incurring the string traversal
costs (in general.) The Better String library also includes its own
string comparison functions, of course, which additionally capture
trivial cases like strings having different lengths and aliased string
pointers in O(1) time.

Additionally, calling strlen(), or using strcmp, or strncmp, or
whatever based on the assumption of using raw char * buffers will all
incur an additional O(n) cost no matter how you slice it, which may
be a big factor in what is showing up on your bottom line. Using
libraries like Bstrlib (which essentially has an O(1) strlen) as
described above is really the only way to avoid this cost.

As a point of disclosure, I am the author of Bstrlib -- other
libraries like Vstr (www.and.org/vstr) have comparable mechanisms.

--
Paul Hsieh
http://www.pobox.com/~qed/
http://bstring.sf.net/
Nov 14 '05 #25
In article <79**************************@posting.google.com >,
qe*@pobox.com (Paul Hsieh) wrote:
I've just been reading thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/

In this library, the length of each string is predetermined as they
are created or modified (this is very cheap, while leading to massive
performance improvements in some string functionality.) In this way
you can use memcmp() (which has the potential of being implemented to
use block comparisons) directly without incurring the string traversal
costs (in general.) The Better String library also includes its own
string comparison functions, of course, which additionally capture
trivial cases like strings having different lengths and aliased string
pointers in O(1) time.


If the data is completely under your control, you could make different
changes: Store all the strings in arrays of unsigned long instead of
unsigned char. In the table, end every string with at least one zero and
a 1, with the 1 being the last byte in an unsigned long. In the strings
that you pass in, end every string with at least two zeroes, with the
last zero being the last byte in an unsigned long.

You can now compare one unsigned long at a time. You don't need to
check for the end of the strings because the data you pass in and the
data in your table will be different. After finding the first unsigned
long that is different, the strings are equal if the difference between
the two strings is 1 and the last byte of the unsigned long that you
took from the table is 1.
Nov 14 '05 #26
qe*@pobox.com (Paul Hsieh) wrote in message news:<79**************************@posting.google. com>...
I've just been reading thread and two things pop to mind. First of
all, the hash function you have chosen looks a little bit questionable
in terms of collisions. The FNV hash is well known to behave quite
well and will have performance identical to your hash function:

http://www.isthe.com/chongo/tech/comp/fnv/

Second, if your program still boils down to string comparison no
matter what, then you should consider converting your program over to
a library like The Better String library:

http://bstring.sf.net/


Thanks for pointing out a better hash algorithm and string library. I will
consider using both in my application. As I have pointed out elsewhere in
this thread, my problem now seems to be PHP not terminating the string
with \0, which results in a segfault.
Nov 14 '05 #27
