
Hi there,
There's a classic hash function to hash strings, where MULT is defined
as "31":
//from programming pearls
unsigned int hash(char *ptr)
{   unsigned int h = 0;
    unsigned char *p = ptr;
    int n;
    for (n = k; n > 0; p++) {
        h = MULT * h + *p;
        if (*p == 0)
            n--;
    }
    return h % NHASH;
}
Why MULT defined as 31? ( How about 29? 24? or 26? )
Thanks,
Wenjie  
 go****@yahoo.com said:
Hi there,
There's a classic hash function to hash strings, where MULT is defined
as "31":
<snip>
>
Why MULT defined as 31? ( How about 29? 24? or 26? )
Why not find out yourself? Generic hashing routines are intended to get you
a decent spread of hashes for arbitrary data. So cook up a few million
records of arbitrary data, and see what kind of spreads you get for various
multipliers.
If you call it research, maybe you can even fool someone into paying you.

Richard Heathfield
"Usenet is a strange place"  dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)  
 go****@yahoo.com wrote:
Hi there,
There's a classic hash function to hash strings, where MULT is defined
as "31":
//from programming pearls
unsigned int hash(char *ptr)
{   unsigned int h = 0;
    unsigned char *p = ptr;
    int n;
    for (n = k; n > 0; p++) {
        h = MULT * h + *p;
        if (*p == 0)
            n--;
    }
    return h % NHASH;
}
It looks like this code was probably mistranscribed.
There's this undefined quantity `k' floating around, and
the behavior at end-of-string can only be called broken.
But that doesn't seem central to your question:
Why MULT defined as 31? ( How about 29? 24? or 26? )
It's a mixture of superstition and good sense.
First, the superstition: People who write code having
to do with hash tables apparently recall that prime numbers
are particularly "good" for them. It seems they don't always
remember just what the "goodness" was or in what connection,
but they'll throw prime numbers into the mix whenever they
can. They'll throw in prime numbers even if they're not too
sure what a prime number is! A colleague of mine once ran
across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
Second, the good sense: Suppose MULT were 26, and consider
hashing a hundred-character string. How much influence does
the string's first character have on the final value of `h',
just before the mod operation? The first character's value
will have been multiplied by MULT 99 times, so if the arithmetic
were done in infinite precision the value would consist of some
jumble of bits followed by 99 low-order zero bits -- each time
you multiply by MULT you introduce another low-order zero, right?
The computer's finite arithmetic just chops away all the excess
high-order bits, so the first character's actual contribution to
`h' is ... precisely zero! The `h' value depends only on the
rightmost 32 string characters (assuming a 32-bit int), and even
then things are not wonderful: the first of those final 32 bytes
influences only the leftmost bit of `h' and has no effect on
the remaining 31. Clearly, an even-valued MULT is a poor idea.
Need MULT be prime? Not as far as I know (I don't know
everything); any odd value ought to suffice. 31 may be attractive
because it is close to a power of two, and it may be easier for
the compiler to replace a possibly slow multiply instruction with
a shift and subtract (31*x == (x << 5) - x) on machines where it
makes a difference. Setting MULT one greater than a power of two
(e.g., 33) would also be easy to optimize, but might produce too
"simple" an arrangement: mostly a juxtaposition of two copies
of the original set of bits, with a little mixing in the middle.
So you want an odd MULT that has plenty of one-bits.
You'd also like a MULT that "smears" its operand bits across
as much of `h' as you can manage, so MULT shouldn't be too small
(consider the degenerate case of MULT==1). Also, MULT shouldn't
be too large -- to put it differently, UINT_MAX - MULT shouldn't be
too small. How small is "too small" depends to some extent on
the expected lengths of the strings; I suspect 31 is "too small"
if the strings are short (the values won't have time to invade
the high-order part of `h'). I think it would be wiser to choose
MULT somewhere between sqrt(UINT_MAX) and UINT_MAX - sqrt(UINT_MAX),
making sure it's odd and has lots of one-bits. Primality doesn't
seem important here -- but perhaps someone else may offer a good
reason to choose a prime. Sometimes superstition has valid roots.

Eric Sosman es*****@acmdotorg.invalid  

Thanks, Eric, for a good rundown of hash multipliers. A lot of us don't
have a good feel for the trade-offs in this area.
FWIW: many of the Borland compilers (wickedly fast, if you don't already
know) used a multiplier of "13" for hashing.

Eric Sosman said:
A colleague of mine once ran
across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...

Richard Heathfield
"Usenet is a strange place"  dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)  

Eric Sosman wrote:
It looks like this code was probably mistranscribed.
There's this undefined quantity `k' floating around, and
the behavior at end-of-string can only be called broken.
But that doesn't seem central to your question:
int k = 2; /* defined before the function, from the book Programming Pearls */
Why MULT defined as 31? ( How about 29? 24? or 26? )
It's a mixture of superstition and good sense.
<snip>
Thanks for your comments, Eric; they are very helpful. Complementary to
your comments on prime hash-table sizes, I just found a
web page: http://www.concentric.net/~Ttwang/tech/primehash.htm

Richard Heathfield wrote:
go****@yahoo.com said:
Hi there,
There's a classic hash function to hash strings, where MULT is defined
as "31":
<snip>
Why MULT defined as 31? ( How about 29? 24? or 26? )
Why not find out yourself? Generic hashing routines are intended to get you
a decent spread of hashes for arbitrary data. So cook up a few million
records of arbitrary data, and see what kind of spreads you get for various
multipliers.
There's an alternative way: to discuss it on Usenet.
>
If you call it research, maybe you can even fool someone into paying you.
Sometimes "Usenet is a strange place."  

Eric Sosman wrote:
You'd also like a MULT that "smears" its operand bits across
as much of `h' as you can manage, so MULT shouldn't be too small
(consider the degenerate case of MULT==1). Also, MULT shouldn't
be too large -- to put it differently, UINT_MAX - MULT shouldn't be
too small. How small is "too small" depends to some extent on
the expected lengths of the strings; I suspect 31 is "too small"
if the strings are short (the values won't have time to invade
the high-order part of `h').
I have had some success using the multiplier from some "approved"
linear-congruential PRNG before -- even with "difficult" input data such
as words which are all 3-5 letters and all capitals. I think of it as
a "perturbed" random number generator, in that the input values (chars,
or whatever) replace the additive step in the PRNG. I don't know how
much justification there really is for that way of thinking, but I find
it reassuring...
E.g. (digging out some old C code)
static HASH
lc_hash(uchar *ptr)
{
    HASH hash;

    for (hash = 0; *ptr; ptr++)
    {
        hash += *ptr;
        hash *= 597485621;
    }
    return hash;
}
(HASH is some kind of unsigned integer)
 chris  

I wrote:
I have had some success using the multiplier from some "approved"
linear-congruential PRNG before -- even with "difficult" input data
such as words which are all 3-5 letters and all capitals.
I forgot to add that by "some success" I meant that I measured the
distributions of final values for a range of table sizes and found (a)
no apparent significant divergence from randomness and (b) no apparent
tendency to "resonate" with any particular sizes. That was using
real-world-sized samples of real-world data.
So I used power-of-two-sized hashtables, which I'm sure that Eric, as a
founder member and leading light of the Campaign To Stamp Out Primes,
would find pleasing -- were it not that he is also active in their sister
organisation: the Campaign To Stamp Out Powers Of Two. ;)
 chris  

Richard Heathfield wrote:
Eric Sosman said:
A colleague of mine once ran
across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
When picking a prime for a middling-size table, I just recall that
Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also
denotes a prime in roman numerals. If that's not big enough, the
International Motherhood of Multiple Marine Mammal Machinists is also a
prime.

Richard Heathfield wrote:
> go****@yahoo.com said:
Hi there,
There's a classic hash function to hash strings, where MULT is defined
as "31":
<snip>
Why MULT defined as 31? ( How about 29? 24? or 26? )
Why not find out yourself? Generic hashing routines are intended to get you
a decent spread of hashes for arbitrary data. So cook up a few million
records of arbitrary data, and see what kind of spreads you get for various
multipliers.
If you call it research, maybe you can even fool someone into paying you.
Hey, Richard, you are the one who said:
"Give me a couple of years and a large research grant,
and I'll give you a receipt."
;)

++
 Charles and Francis Richmond richmond at plano dot net 
++  

"Ancient_Hacker" <gr**@comcast.net> writes:
When picking a prime for a middling-size table, I just recall that
Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also
denotes a prime in roman numerals.
I just use a decent hash algorithm. Then I can use a
power-of-two size table, saving an expensive remaindering
operation on every table search.

Ben Pfaff
email: bl*@cs.stanford.edu
web: http://benpfaff.org  

Ancient_Hacker wrote:
>
Richard Heathfield wrote:
Eric Sosman said:
A colleague of mine once ran
across this little coding gem:
>
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
When picking a prime for a middling-size table, I just recall that
Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also
denotes a prime in roman numerals. If that's not big enough, the
International Motherhood of Multiple Marine Mammal Machinists is also a
prime.
"If 91 were prime, it would be a counterexample to your conjecture."
 Bruce Wheeler

++
 Charles and Francis Richmond richmond at plano dot net 
++  

Chris Uppal wrote:
I wrote:
>>I have had some success using the multiplier from some "approved" linear-congruential PRNG before -- even with "difficult" input data such as words which are all 3-5 letters and all capitals.
I forgot to add that by "some success" I meant that I measured the
distributions of final values for a range of table sizes and found (a)
no apparent significant divergence from randomness and (b) no apparent
tendency to "resonate" with any particular sizes. That was using
real-world-sized samples of real-world data.
So I used power-of-two-sized hashtables, which I'm sure that Eric, as a
founder member and leading light of the Campaign To Stamp Out Primes,
would find pleasing -- were it not that he is also active in their sister
organisation: the Campaign To Stamp Out Powers Of Two. ;)
Superstition! Fight it everywhere! Join the Society to
Suppress Multiples of Unity!

Eric Sosman es*****@acmdotorg.invalid  

Ben Pfaff wrote:
"Ancient_Hacker" <gr**@comcast.net> writes:
>When picking a prime for a middling-size table, I just recall that Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also denotes a prime in roman numerals.
I just use a decent hash algorithm. Then I can use a
power-of-two size table, saving an expensive remaindering
operation on every table search.
Computing the remainder is expensive? Yeah, sure, true on 8086.
But on anything recent? I thought on any pipelined processor
(which means most processors in use today) essentially all
register-to-register integer operations could be completed in
only one cycle.
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"? I agree the former takes more silicon, but it seems
like most chips have gone ahead and devoted the silicon to it
to make it fast.
 Logan  

Richard Heathfield wrote:
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
<OT>1 is not prime; the first prime number is 2, the only even prime. The
definition of a prime number is: "has exactly 2 distinct divisors, 1 and
itself".</OT>

Roland Csaszar  \\\ ///  +43 316 495 2129
Software Development  \\\ ///  http://www.knapp.com
KNAPP Logistics Automation  \\V//  mailto:ro************@knapp.com  

Logan Shaw wrote:
Ben Pfaff wrote:
"Ancient_Hacker" <gr**@comcast.net> writes:
When picking a prime for a middling-size table, I just recall that
Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also
denotes a prime in roman numerals.
I just use a decent hash algorithm. Then I can use a
power-of-two size table, saving an expensive remaindering
operation on every table search.
Computing the remainder is expensive? Yeah, sure, true on 8086.
But on anything recent? I thought on any pipelined processor
(which means most processors in use today) essentially all
register-to-register integer operations could be completed in
only one cycle.
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"? I agree the former takes more silicon, but it seems
like most chips have gone ahead and devoted the silicon to it
to make it fast.
 Logan
I made a quick measurement of "modulus" vs. "and" or "&". It shows
that modulus is more expensive than "and" and "&" on my "modern" PC
generally (there are cases where the output is similar):
%: 1.09143201161e-006
&: 4.56217200789e-007
and: 4.06694146882e-007
That's not significant though (IMHO).

def testMOD():
    """Modulus Timing"""
    aha = 100 % 3

def testAND():
    """AND Timing"""
    aha = 100 & 3

def testAND2():
    """AND Timing"""
    aha = 100 and 3

if __name__ == '__main__':
    from timeit import Timer
    t = Timer("testMOD()", "from __main__ import testMOD")
    print "%: ", t.timeit(number=100000)/100000
    t = Timer("testAND()", "from __main__ import testAND")
    print "&: ", t.timeit(number=100000)/100000
    t = Timer("testAND2()", "from __main__ import testAND2")
    print "and: ", t.timeit(number=100000)/100000

Logan Shaw wrote:
Ben Pfaff wrote:
>Ancient_Hacker wrote:
>>When picking a prime for a middling-size table, I just recall that Knuth's MIX language (a horribly ancient IBM 650 emu-alike) also denotes a prime in roman numerals.
I just use a decent hash algorithm. Then I can use a power-of-two size table, saving an expensive remaindering operation on every table search.
Computing the remainder is expensive? Yeah, sure, true on 8086.
But on anything recent? I thought on any pipelined processor
(which means most processors in use today) essentially all
register-to-register integer operations could be completed in
only one cycle.
Seems you thought wrong. Division is expensive.
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"? I agree the former takes more silicon, but it seems
like most chips have gone ahead and devoted the silicon to it
to make it fast.
Is NetBurst modern enough? http://en.wikipedia.org/wiki/NetBurst
DIV takes 66-80 cycles on Northwood, 56-70 cycles on Prescott.
AND takes only half a cycle (!) on Prescott. http://www.intel.com/design/pentium4/manuals/248966.htm
As far as AMD is concerned, 32-bit DIV takes ~40 cycles.
 go****@yahoo.com wrote:
Eric Sosman wrote:
> It looks like this code was probably mistranscribed. There's this undefined quantity `k' floating around, and the behavior at end-of-string can only be called broken. But that doesn't seem central to your question:
int k = 2; /* defined before the function, from the book Programming Pearls */
I still think it's mistranscribed. Either that, or it's
not intended as a hash function for strings, but for groups of
k strings stored consecutively (first character of second string
right after the '\0' of the first, and so on).
Next time you post a code snippet, consider including a
brief description of what it's supposed to do. If you don't,
people will have to guess about the intent of the code and
may guess wrong. "//from programming pearls" does not qualify
as a specification ...

Eric Sosman es*****@acmdotorg.invalid  

In article <NJ******************************@comcast.com>, Eric Sosman <es*****@acmdotorg.invalid> writes:
A colleague of mine once ran
across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
Well, at least it's smallish, for suitable definitions of "ish".
How small is "too small" depends to some extent on
the expected lengths of the strings; I suspect 31 is "too small"
if the strings are short (the values won't have time to invade
the high-order part of `h').
Depends on the size of the hash value, of course. If this is used
to calculate a 16-bit hash, then any string of at least two printable
ASCII characters affects both bytes of the value, any string of
at least three printable ASCII characters affects at least the lower
15 bits, and any string of at least three ASCII non-whitespace
printable characters affects all 16 bits.
If it's used to calculate a 32-bit hash, then even relatively short
strings like "abcdef" (in ASCII) affect all 32 bits.
I think it would be wiser to choose
MULT somewhere between sqrt(UINT_MAX) and UINT_MAX - sqrt(UINT_MAX),
making sure it's odd and has lots of one-bits. Primality doesn't
seem important here -- but perhaps someone else may offer a good
reason to choose a prime.
Well, if you're not using the whole range of hash values  if you're
taking the value modulo hash table size  then you want MULT to be
relatively prime to the modulus. For example, say MULT is 6 and your
table size is 15; you'll tend to get clumping around every third
bucket, because 3 is a common factor.
Sometimes the hash table size is fixed, but if it's variable, then
you'll want to choose MULT so that it's unlikely to share factors
with the modulus. Making MULT a prime that's not too small will do
the trick; for most applications, it seems unlikely that the hash
table size will be a multiple of 31.
That's my guess, anyway.
That said, I suspect you're right, and using a prime in this sort
of application may often owe as much to cargo-cult programming as
to real analysis. I note that this construct is basically a
linear-congruential PRNG with a variable increment, and primality
isn't a requirement for the multiplier in a good LC-PRNG. Most of
the multipliers listed in the table of "good" LC-PRNG parameters
in Schneier's _Applied Cryptography_ are composite.

Michael Wojcik mi************@microfocus.com
Even though there may be some misguided critics of what we're trying
to do, I think we're on the wrong path.  Reagan  

Eric Sosman wrote:
go****@yahoo.com wrote:
Eric Sosman wrote:
It looks like this code was probably mistranscribed. There's this undefined quantity `k' floating around, and the behavior at end-of-string can only be called broken. But that doesn't seem central to your question:
int k = 2; /* defined before the function, from the book Programming Pearls */
I still think it's mistranscribed. Either that, or it's
not intended as a hash function for strings, but for groups of
k strings stored consecutively (first character of second string
right after the '\0' of the first, and so on).
Next time you post a code snippet, consider including a
brief description of what it's supposed to do. If you don't,
people will have to guess about the intent of the code and
may guess wrong. "//from programming pearls" does not qualify
as a specification ...
Thanks for your suggestion. I was careless when copying the code
snippet, but it seems it is not an independent function for ordinary string
hashing: http://netlib.bell-labs.com/cm/cs/pearls/markovhash.c

Logan Shaw wrote:
>
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"? I agree the former takes more silicon, but it seems
like most chips have gone ahead and devoted the silicon to it
to make it fast.
 Logan
most CPUs nowadays have at least two integer and fp divide units, and
can do a divide in two clock cycles, overlapped with other
operations as well. So divide and mod are not the 48-cycle monsters
they were just a few silicon generations ago.

Logan Shaw <ls**********@austin.rr.com> writes:
[...]
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"?
ARM (at least up to ARM9 / v5). The following code
unsigned x(unsigned y, unsigned z)
{
    return y % z;
}

unsigned x1(unsigned y, unsigned z)
{
    return y & z;
}
compiles to
x:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, uses_anonymous_args = 0
        str     lr, [sp, #-4]!
        bl      __umodsi3
        ldr     pc, [sp], #4
.Lfe1:
        .size   x,.Lfe1-x
        .align  2
        .global x1
        .type   x1,function
x1:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, uses_anonymous_args = 0
        @ link register save eliminated.
        and     r0, r0, r1
        @ lr needed for prologue
        mov     pc, lr
i.e., it calls a library routine for mod. With a constant second
operand, the compiler can eventually avoid the 'generic' division
routine, but that will still be several instructions instead of
one.

"Ancient_Hacker" <gr**@comcast.net> writes:
Logan Shaw wrote:
>To put it another way, is there any modern processor (even an embedded processor) where "mod" is actually more expensive than "and"? I agree the former takes more silicon, but it seems like most chips have gone ahead and devoted the silicon to it to make it fast.
 Logan
most CPUs nowadays have at least two integer and fp divide units,
"Most CPUs nowadays" *are* *not* 'recent' x86-compatible PC CPUs.

Logan Shaw <ls**********@austin.rr.com> writes:
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"?
Please name a processor on which "mod" and "and" have the same
cost.

"...In the UNIX world, people tend to interpret `nontechnical user'
as meaning someone who's only ever written one device driver."
Daniel Pead  

Ancient_Hacker wrote:
Logan Shaw wrote:
To put it another way, is there any modern processor (even an
embedded processor) where "mod" is actually more expensive than
"and"?
I would suggest that there is basically no processor anywhere where
that isn't true. From Cray vector processors to digital watches -- it's
the same deal.
[...] I agree the former takes more silicon, but it seems
like most chips have gone ahead and devoted the silicon to it
to make it fast.
 Logan
most CPUs nowadays have at least two integer and fp divide units,
Excuse me? The Alpha processor had a single multi-staged integer
divide unit (i.e., it could compute about 4 bits of integer result per
clock), because those guys were macho. I'm not sure if there are any
others that you could classify as a high performance CPU.
In general CPUs reuse many of their other functional units
(specifically their multipliers and adders) to simulate a divide over
multiple clocks. In fact, most CPUs only provide a floating point
division state machine and use it to generate the integer divide. It's
just a matter of making the most use out of your transistors. The
moment you try to make a divider in your CPU microarchitecture, it
becomes a better idea to turn those transistors into extra multipliers
and adders (on average there will just be a bigger performance impact
for doing so). I.e., it never has been, and never will be, a
particularly good idea to make a custom divider, and as a consequence
it will never come down to a single clock (or 2); which I don't think
is possible anyway.
In the Itanium, for example, rather than making a division unit, they
decided to make two parallel multiply and add units instead. It's
clearly better to do things that way.
[...] and
can do a divide in two clock cycles, and overlapped with other
operations as well.
CPU manufacturers have, at various times, done clever things with their
divide mechanisms -- but escaping their inevitable slowness is not one
of them. About the fastest dividers in existence are the SIMD
reciprocal dividers in the latest x86 processors that can compute 4
parallel 32-bit floating point approximate results in about 12 clocks.
But you can't shrink the 12 clocks, you can't get fully IEEE accurate
results, and you can't get fewer than 4 of them done at a time (except
by ignoring the extra ones you compute.) It certainly isn't going to
help you compute an isolated integer modulo.
The AMD Athlon did a neat thing where they would allow other FP
instructions to use the few dead slots in the FPU while it executed the
microcode for its division. But this does nothing for serial
calculations such as computing the offset into a hash (the software
just has to wait for the result before it can proceed.)
[....] So divide and mod are not the 48-cycle monsters
they were just a few silicon generations ago.
Well, good ones are about 20 clocks (the Goldschmidt formula is an
ingenious way of reducing a divide to a critical path of about 5 serial
floating point operations). But it's *really* hard (and not worth the
effort) to get them to go any faster.

Paul Hsieh http://www.pobox.com/~qed/ http://bstring.sf.net/  
 we******@gmail.com wrote:
Ancient_Hacker wrote:
Logan Shaw wrote:
most CPUs nowadays have at least two integer and fp divide units,
Excuse me?
You are correct, there's a scarcity of multiple divide units. Also, the
times I gave can be complicated by a minimum latency.
The AMD processors do have separate floating add and multiply units.
They also have 128-bit SIMD instructions that do four FP ops at a gulp.
Also there's reciprocal divide, with three variations of precision,
and in very few cycles.
You're spot on: it's usually better to devote the silicon to more
multipliers. Division is a hard nut to crack.

Richard Heathfield wrote:
Eric Sosman said:
>A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime.
9 isn't prime. Did I miss something?

Joe Wright
"Everything should be made as simple as possible, but not simpler."
 Albert Einstein   

Joe Wright <jo********@comcast.net> writes:
Richard Heathfield wrote:
>Eric Sosman said:
>>A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl> 1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime.
9 isn't prime. Did I miss something?
The joke.

"Ho ho ho. I _so_ enjoy making a fool out of myself."
Linus, on Christmas Day, 2000  

Joe Wright said:
Richard Heathfield wrote:
>Eric Sosman said:
>>A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No.
Really?
A prime is evenly divisible only by itself or unity (1).
That definition is inadequate, since it fails to exclude negative numbers
and fractions from the universe of discourse. (3 is divisible by -1 and -3,
so by your definition it is not prime. If you then say "well, not
negatives, obviously", you still have to face 3/2, which fits your
definition nicely, and yet is not prime.)
Also, your definition is insufficient to the necessary task of excluding 1
from primality.
2 is prime. 9 isn't prime. Did I miss something?
Yes.

Richard Heathfield
"Usenet is a strange place"  dmr 29/7/1999 http://www.cpax.org.uk
email: rjh at above domain (but drop the www, obviously)  

Joe Wright <jo********@comcast.net> writes:
Richard Heathfield wrote:
>Eric Sosman said:
>>A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl> 1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is
prime. 9 isn't prime. Did I miss something?
It's the punchline to a joke, if I'm not mistaken.

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.  
 go****@yahoo.com wrote:
>
There's a classic hash function to hash strings, where MULT is
defined as "31":
//from programming pearls
unsigned int hash(char *ptr)
{   unsigned int h = 0;
    unsigned char *p = ptr;
    int n;
    for (n = k; n > 0; p++) {
        h = MULT * h + *p;
        if (*p == 0)
            n--;
    }
    return h % NHASH;
}
Why MULT defined as 31? ( How about 29? 24? or 26? )
The primes 31 and 37 have been experimentally found to be
efficacious. 31 has the advantage that it consists of a string of
1 bits, and multiplication can be easily effected by shifting in
systems without a fast integer multiply.
That function as written above is faulty. Try:
/* Hash a string quantity */
unsigned long hshstrhash(const char * string)
{
unsigned long h;
h = 0;
while (*string) {
h = h * 31UL + (unsigned char) *string++;
}
return h;
} /* hshstrhash */
and leave the reduction modulo NHASH to the caller.

"The most amazing achievement of the computer software industry
is its continuing cancellation of the steady and staggering
gains made by the computer hardware industry..."  Petroski

Posted via a free Usenet account from http://www.teranews.com  

Joe Wright wrote:
Richard Heathfield wrote:
>Eric Sosman said:
>>A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime.
9 isn't prime. Did I miss something?
Mathematician: A prime is a positive integer with
two and only two distinct exact divisors, namely, itself
and unity.
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is an
observational error, 11 is prime, ...
Programmer: 3 is prime, 5 is prime, 7 is prime, 9 is
prime, 11 is prime, ...
(Oh, sorry: I forgot the part about "A mathematician,
an engineer, and a programmer walk into a bar ...")
Working the professions in the opposite direction, it seems
these three about-to-be-plastered savants had been asked to find
the aerodynamic drag on a new railway locomotive. The programmer
offered to write a gigantic nonlinear partial differential
equation solver and run it for eight months (plus debugging time)
on a massive bank of linked supercomputers doing somewhat over
three petaflops. The engineer pointed out that this would be
hideously expensive, and instead proposed building a full-scale
model of the locomotive and testing it in a giant wind tunnel.
The mathematician tut-tutted both of them for their pathetic
naïveté and said "Consider a spherical train ..."

Eric Sosman es*****@acmdotorg.invalid  

WARNING: This article contains no topical material.
Eric Sosman <es*****@acmdotorg.invalidwrites:
Joe Wright wrote:
>Richard Heathfield wrote:
[...]
>>1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime. 9 isn't prime. Did I miss something?
Mathematician: A prime is a positive integer with
two and only two distinct exact divisors, namely, itself
and unity.
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is an
observational error, 11 is prime, ...
Programmer: 3 is prime, 5 is prime, 7 is prime, 9 is
prime, 11 is prime, ...
(Oh, sorry: I forgot the part about "A mathematician,
an engineer, and a programmer walk into a bar ...")
<WAY_WAY_OT>
Here's the version I'm familiar with (potentially offensive to
engineers):
A mathematician, a physicist, an engineer, and a computer scientist
are asked to test the hypothesis that all odd numbers greater than 1
are prime.
Mathematician:
3 is prime, 5 is prime, 7 is prime, 9 is composite.
The hypothesis is false.
Physicist:
3 is prime, 5 is prime, 7 is prime, 9 is composite, 11 is prime,
13 is prime, ...
The hypothesis is true within the bounds of experimental error.
Engineer:
3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime, 13 is
prime, 15 is prime, ...
Computer scientist:
3 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is
prime, 5 is prime, ...
(The computer scientist part is my own original contribution to the
joke.)
</WAY_WAY_OT>

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.  
On Wed, 20 Sep 2006 23:23:05 -0400, Eric Sosman
<es*****@acmdotorg.invalid> wrote:
>Joe Wright wrote:
>Richard Heathfield wrote:
>>Eric Sosman said:
A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */
<snorfl>
1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime. 9 isn't prime. Did I miss something?
Mathematician: A prime is a positive integer with two and only two distinct exact divisors, namely, itself and unity.
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is an observational error, 11 is prime, ...
(should be a Physicist instead of an Engineer)
>
Programmer: 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime, ...
This is the Engineer; here's the Programmer:
Programmer: 3 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is
prime, ...
>
(Oh, sorry: I forgot the part about "A mathematician, an engineer, and a programmer walk into a bar ...")
Working the professions in the opposite direction, it seems these three about-to-be-plastered savants had been asked to find the aerodynamic drag on a new railway locomotive. The programmer offered to write a gigantic nonlinear partial differential equation solver and run it for eight months (plus debugging time) on a massive bank of linked supercomputers doing somewhat over three petaflops. The engineer pointed out that this would be hideously expensive, and instead proposed building a full-scale model of the locomotive and testing it in a giant wind tunnel. The mathematician tut-tutted both of them for their pathetic naïveté and said "Consider a spherical train ..."
 
Ben Pfaff wrote:
Logan Shaw <ls**********@austin.rr.com> writes:
>To put it another way, is there any modern processor (even an embedded processor) where "mod" is actually more expensive than "and"?
Please name a processor on which "mod" and "and" have the same
cost.
Having read much of this discussion, it would appear that I made
a bit of a leap in assuming the two were comparable in speed.
I guess this was all based on what I know about pipelined processors,
which is that many integer operations go through the pipeline without
stalling (otherwise the pipeline isn't that useful). Apparently it
wasn't valid to go from "many" to "most, including division".
Perhaps if we had been comparing left-shifts with multiplies, the
kind of leap I made would've been more reasonable, or at least closer
to reality. But I guess "and" and "mod" are more different than
left-shift and multiply are...
 Logan  
Keith Thompson wrote:
Mathematician:
3 is prime, 5 is prime, 7 is prime, 9 is composite.
The hypothesis is false.
My recollection of the Mathematician's take:
3 is prime, 5 is prime, 7 is prime, the rest follow by induction.

Paul Hsieh http://www.pobox.com/~qed/ http://bstring.sf.net/  
In article <ln***************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>Computer scientist:
3 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is
prime, 5 is prime, ...
Typical programmer:
3 is prime, 5 is prime, 6.9999999998 is prime, ...
 Richard  
Ben Pfaff wrote:
Joe Wright <jo********@comcast.net> writes:
>Richard Heathfield wrote:
>>Eric Sosman said:
A colleague of mine once ran across this little coding gem:
#define HASHSIZE 51 /* a smallish prime */ <snorfl> 1 is prime, 3 is prime, 5 is prime, 7 is prime, 9 is prime, 11 is prime...
No. A prime is evenly divisible only by itself or unity (1). 2 is prime. 9 isn't prime. Did I miss something?
The joke.
I see it now. 3 times 17 is 51.

Joe Wright
"Everything should be made as simple as possible, but not simpler."
 Albert Einstein   
 ri*****@cogsci.ed.ac.uk (Richard Tobin) writes:
In article <ln***************@nuthaus.mib.org>,
Keith Thompson <ks***@mib.org> wrote:
>>Computer scientist: 3 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, ...
Typical programmer:
3 is prime, 5 is prime, 6.9999999998 is prime, ...
Student with slide rule:
"Two times three is five point nine nine nine ... aw heck, call it six."
(A very *old* joke.)

Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <* <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.  
In article <_K******************************@comcast.com>, Eric Sosman <es*****@acmdotorg.invalid> writes:
>
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is an
observational error, 11 is prime, ...
Like Richard Harter, I think the version I heard ascribes this one
to a Physicist.
The Engineer version I know goes something like: 3 is prime, 5 is
prime, 7 is prime, 9 is ... OK, we have a 25% defect rate.
Programmer: 3 is prime, 5 is prime, 7 is prime, 9 is
prime, 11 is prime, ...
I've seen Programmer given the "looping" answer, though I think it
looped on 7 ("7 is prime, 7 is prime, ..."), which IMO makes more
sense  the solution breaks when it reaches the first input that it
doesn't handle correctly.
Another Programmer answer might be: 3 is prime, 5 is prime, 7 is
prime. All the test cases pass  let's ship.
Like the various humorous versions of Dining Philosophers, lightbulb
jokes, and so forth, this seems to be a generative structure. Here
are some other possibilities, off the top of my head:
Executive: 3 is prime, 5 is prime, 7 is prime, 9 is prime (for
accounting purposes), ...
Marketer: 3 is prime, 5 is prime, 7 is prime, 9 is Prime 2.0!, ...
Rhetorician: 3 is prime, 5 is prime, 7 is prime. Let us pass over
9 in silence, and dwell no more on its merits or faults. 11 is
prime...
Multiculturalist: 3 is prime, 5 is prime, 7 is prime. Excluding 9
from the primes would recapitulate the inequal relations inherent
in patriarchal-imperialist hegemony. Let us instead recognize that
9 is differently-primal.
Dadaist: 3 is prime, 5 is prime, 7 is prime, banana, 11 is prime...

Michael Wojcik mi************@microfocus.com
"We are facing a dire shortage of clowns," said Erickson, also known as
Jingles.  
 ri*****@cogsci.ed.ac.uk (Richard Tobin) wrote:
# In article <ln***************@nuthaus.mib.org>,
# Keith Thompson <ks***@mib.org> wrote:
#
# >Computer scientist:
# 3 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is prime, 5 is
# prime, 5 is prime, ...
#
# Typical programmer:
#
# 3 is prime, 5 is prime, 6.9999999998 is prime, ...
Prove that all odd integers higher than 2 are prime:
Chemist: 3 is prime, 5 is prime, 7 is prime... hey, let's publish!
Computer programmer: "3 is prime, 5 is prime, 7 is prime, 9 is... 3 is
prime, 5 is prime, 7 is prime, 9 is... 3 is..."
Um, right. Okay, how about this:
"3 is not prime, 5 is not prime, 7 is not prime, 9 is not prime..."
So much for the beta releases. Ship this:
"3 is prime, 5 is prime, 7 is prime, 9 is a feature, 11 is prime..." and
put on the cover "More prime numbers than anyone else in the industry!"
Drunk: 3 is an odd prime, 5 is an odd prime, 7 is an odd prime,
9 is a very odd prime...
Economist: 3 is prime, 5 is prime, 7 is prime, 9 is not prime. Look,
the prime rate is dropping!
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is... 9 is... well,
if you approximate, 9 is prime, 11 is prime, 13 is prime...
Engineer: 3 is prime, 5 is prime, 7 is prime, 9 is not working. Hand
me the pliers.
New Yorker: 3 is prime, 5 is prime, 7 is prime, 9 is... NONE OF YOUR
DAMN BUSINESS!
Physicist: 3 is prime, 5 is prime, 7 is prime, 9 is... uh, 9 is an
experimental error, 11 is prime, 13 is prime...
Theologist: 3 is prime and that's good enough for me!

SM Ryan http://www.rawbw.com/~wyrmwif/
Quit killing people. That's high profile.  
>go****@yahoo.com wrote:
>Why MULT defined as 31? ( How about 29? 24? or 26? )
In article <NJ******************************@comcast.com>
Eric Sosman <es*****@acmdotorg.invalid> wrote:
[much snippage]
>... 31 may be attractive because it is close to a power of two, and it may be easier for the compiler to replace a possibly slow multiply instruction with a shift and subtract (31*x == (x << 5) - x) on machines where it makes a difference. Setting MULT one greater than a power of two (e.g., 33) would also be easy to optimize, but might produce too "simple" an arrangement: mostly a juxtaposition of two copies of the original set of bits, with a little mixing in the middle. So you want an odd MULT that has plenty of one-bits.
I thought the same, some (ok, now "many") years ago, and suggested
trying both 31 and 33 back when the original Berkeley DB was being
written and tested.
On a number of sample data sets, it turns out that 33 performed
slightly better, given the way the hash value was used.
Because of the optimization issue (multiply into shift-and-add),
"multiply by 33 and add *p++" performed better than some other
"better" (in terms of bit distribution) algorithms. That is, the
other algorithms resulted in slightly fewer hash collisions, but
the time saved resolving such collisions was insufficient to make
up for the extra CPU time used calculating the more-complex hash
function.
Of course, this was in the 1990s, when 300 MHz was an amazingly
fast CPU. :)

In-Real-Life: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.  
Chris Torek wrote:
Because of the optimization issue (multiply into shiftandadd),
"multiply by 33 and add *p++" performed better than some other
"better" (in terms of bit distribution) algorithms. That is, the
other algorithms resulted in slightly fewer hash collisions, but
the time saved resolving such collisions was insufficient to make
up for the extra CPU time used calculating the morecomplex hash
function.
So what CPU is this that can shift by 5 then add (multiply by 33) faster
than it can shift by 5 and subtract (multiply by 31)?
Regarding the performance of mod operations:
On the PowerPCs I use at work division takes 19 cycles.  
>Chris Torek wrote:
>Because of the optimization issue (multiply into shift-and-add), "multiply by 33 and add *p++" performed better than some other "better" (in terms of bit distribution) algorithms. That is, the other algorithms resulted in slightly fewer hash collisions, but the time saved resolving such collisions was insufficient to make up for the extra CPU time used calculating the more-complex hash function.
In article <11**********************@m73g2000cwd.googlegroups.com>,
Samuel Stearley <ny****@gmail.com> wrote:
>So what cpu is this that can shift by 5 then add (mult by 33) faster than it can shift by 5 and subtract (mult by 31) ?
Hmm, I did realize, after I posted that, that what I wrote could
be misread that way. 33 produced *better* distributions (but only
very slightly) than 31, for the test data. So "h = (h*33) + c"
was better than "h = (h*31) + c". Other hash functions existed
that produced even-better distributions than either of those
(for instance, doing CRCs on the data) but took sufficiently
longer, etc.
>Regarding the performance of mod operations: On the powerPCs I use at work division takes 19 cycles.
If the division instruction (and/or registers; see, e.g., MIPS)
can "free run" while the CPU does other things, *and* the C
compiler is able to insert "useful" instructions between the
start of the "divide" operation and the use of its result, the
cycles required for division can sometimes be hidden. For most
hash algorithms, however, there is just not enough insertable
work:
hash = compute_hash(key);
entry_ptr = table[hash % tablesize];
... do stuff with *entry_ptr ...
Here, there are usually no more than two or three instructions'
worth of work to move, which will take only a cycle or two, leaving
most of those 19 as "not executed in parallel with anything".
It is worth adding that the Berkeley DB code uses (used?) a
"variable-size power-of-two mod via mask" on the result of the hash
in this case, via code much (but not exactly) like this:
SOMETYPE lookup(SOMETYPE2 *table, const char *key) {
    const char *p;
    unsigned char c;
    unsigned int h;
    SOMETYPE3 *entry_ptr;

    /* hash the key first; keep the original pointer for strcmp() */
    for (h = 0, p = key; (c = *p) != 0; p++)
        h = h * 33 + c;
    entry_ptr = table->entries[h & table->mask];
    while (entry_ptr != NULL)
        if (entry_ptr->hash == h && strcmp(entry_ptr->key, key) == 0)
            ... found, stop looping ...
        else
            entry_ptr = entry_ptr->next;
}
(This is an "open chain" hash that is only suitable for in-memory
storage; Berkeley DB includes a bunch of methods that work better
for on-disk storage. The main advantage to open chain hashing is
that it is quite simple. In this case, the adjustable mask allows
the table size to be doubled whenever some desired "maximum load
factor" is exceeded.)

InRealLife: Chris Torek, Wind River Systems
Salt Lake City, UT, USA (40°39.22'N, 111°50.29'W) +1 801 277 2603
email: forget about it http://web.torek.net/torek/index.html
Reading email is like searching for food in the garbage, thanks to spammers.

This discussion thread is closed. Replies have been disabled for this discussion.

Question stats
 viewed: 4275
 replies: 44
 date asked: Sep 19 '06