
Efficient String Lookup?

I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?
Jul 18 '05 #1
Chris S. wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?


If it were me, I would store them as compiled regular expressions.

See the re module documentation and use re.compile().
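
For instance, a minimal sketch (assuming '#' stands for exactly one character, so it maps to re's '.', and that the literal parts of the pattern contain no regexp metacharacters):

import re

# '$' forces the whole test string to be consumed, not just a prefix.
pattern = re.compile('abc#e#'.replace('#', '.') + '$')
print(pattern.match('abcdef') is not None)    # True
print(pattern.match('abcdefg') is not None)   # False: trailing character left over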

If you want a better solution, it might help if you supply a little more
information about your problem and why this solution is unsuitable
(maybe it is :]).
--
Michael Hoffman
Jul 18 '05 #2
Chris S. wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?


As a compiled regular expression, I guess - you don't give much info here,
so maybe there is a better way. But to me it looks like a classic regexp
thing. Maybe if your wildcards are equivalent to .*, then using subsequent
string searches like this helps you:

pattern = 'abc#e#'.split('#')
s = 'abcdef'
found = True
pos = 0
for p in pattern:
    # look for the next literal chunk at or after the current position
    h = s.find(p, pos)
    if h != -1:
        pos = h + len(p)
    else:
        found = False
        break
That might be faster, if the string.find operation uses something smarter than
simple brute-force linear searching - but I don't know enough about the
internals of Python's string implementation to give a definite answer here.

But to be honest: I don't think regexps are easy to beat, unless your
use case is modeled in a way that lends itself to other approaches.

--
Regards,

Diez B. Roggisch
Jul 18 '05 #3
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?


Start with a Trie, and virtually merge branches as necessary.
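
A rough sketch of one way to read that (patterns stored in a nested-dict trie; at each position the lookup follows both the literal branch and the '#' branch; names and the second pattern here are illustrative, and the test string is assumed not to contain '#' itself):

def trie_insert(root, pattern, value):
    # Store a wildcard pattern in a nested-dict trie; the None key holds the value.
    node = root
    for ch in pattern:
        node = node.setdefault(ch, {})
    node[None] = value

def trie_match(node, s, i=0):
    # Return the values of all patterns matching s exactly; at each position,
    # follow both the literal child and the '#' (wildcard) child.
    if i == len(s):
        if None in node:
            return [node[None]]
        return []
    results = []
    for key in (s[i], '#'):
        if key in node:
            results.extend(trie_match(node[key], s, i + 1))
    return results

root = {}
trie_insert(root, 'abc#e#', 'first rule')
trie_insert(root, 'ab####', 'second rule')
print(trie_match(root, 'abcdef'))   # ['first rule', 'second rule']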

- Josiah

Jul 18 '05 #4
Josiah Carlson wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Start with a Trie, and virtually merge branches as necessary.

- Josiah


Yup, you might also have a look at the Aho-Corasick algorithm, which can
match a test string against a big number of strings quite efficiently:

http://www-sr.informatik.uni-tuebing...ler/AC/AC.html

You'll have to adapt the algorithm so that it can handle your wildcard,
though. I found it easy to implement the '.' (one character wildcard),
but the '.*' (zero or more character wildcard) forces you to have
backtracking.

But if you can use the regexp engine, do not hesitate, it will save you
a lot of headaches. Unless of course you're a student and your teacher
asked you this question ;).

Regards,

Nicolas
Jul 18 '05 #5
Michael Hoffman wrote:
Chris S. wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where
# is anything), which I want to match with a test string (e.g
'abcdef'). What would be the best way for me to store my strings so
lookup is as fast as possible?

If it were me, I would store them as compiled regular expressions.

See the re module documentation and use re.compile().

If you want a better solution, it might help if you supply a little more
information about your problem and why this solution is unsuitable
(maybe it is :]).


The problem is I want to associate some data with each pattern, as in a
dictionary. Basically, my application consists of a number of
conditions, represented as strings with wildcards. Associated with each
condition is arbitrary data explaining "what I must do". My task is to
find this data by matching a state string against these condition strings.
Of course, the brute-force approach is to just add each pattern to a
dictionary and linearly search every key for a match. To improve on this,
I considered a trie, implemented as a special dictionary:

class Trie(dict):
    '''Implements a traditional Patricia-style Trie.
    Keys must be sequence types. The None key holds the stored value.'''

    def __init__(self):
        dict.__init__(self)

    def __setitem__(self, key, value):
        assert key, 'invalid key '+str(key)
        d = self
        last = None
        for n in key:
            if n not in d:
                dict.__setitem__(d, n, {})
            last = (d,n)
            d = dict.__getitem__(d, n)
        (d,n) = last
        dict.__getitem__(d, n)[None] = value

    def __getitem__(self, key):
        d = self
        for n in key:
            assert n in d, 'invalid key '+str(key)
            d = dict.__getitem__(d, n)
        assert None in d, 'missing value for key '+str(key)
        return dict.__getitem__(d, None)

    def __delitem__(self, key):
        previous = []
        d = self
        for n in key:
            assert n in d, 'invalid key '+str(key)
            previous.append((d,n))
            d = dict.__getitem__(d, n)
        assert None in d, 'missing value for key '+str(key)
        # remove value
        dict.__delitem__(d, None)
        # find and remove empty keys
        while len(previous):
            (d,n) = previous.pop()
            if not len(dict.__getitem__(d, n)):
                dict.__delitem__(d, n)
            else:
                break
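
For reference, a quick usage sketch of the class above (exact keys only; a wildcard-aware lookup would still need a traversal that follows both the literal and the '#' branches):

t = Trie()
t['abc#e#'] = 'do this'
t['ab##'] = 'do that'
print(t['ab##'])     # 'do that'
del t['ab##']        # emptied '#' branches are pruned as well
print(t['abc#e#'])   # 'do this' - the other entry is untouched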

However, I'm uncertain about the efficiency of this approach. I'd like
to use regexps, but how would I associate data with each pattern?
Jul 18 '05 #6
Diez B. Roggisch wrote:
That might be faster, if the string.find operation uses something smarter than
simple brute-force linear searching - but I don't know enough about the
internals of Python's string implementation to give a definite answer here.

But to be honest: I don't think regexps are easy to beat, unless your
use case is modeled in a way that lends itself to other approaches.


The problem is, I want to find all patterns that match my test string,
so even with re I'd still have to search through every pattern, which is
what I'm trying to avoid. Something like a trie might be better, but
they don't seem very efficient when implemented in Python.
Jul 18 '05 #7
Chris S. wrote:
The problem is I want to associate some data with my pattern, as in a
dictionary. Basically, my application consists of a number of
conditions, represented as strings with wildcards. Associated to each
condition is arbitrary data explaining "what I must do". ... However, I'm uncertain about the efficiency of this approach. I'd like
to use regexps, but how would I associate data with each pattern?


One way is with groups. Make each pattern into a regexp
pattern then concatenate them as
(pat1)|(pat2)|(pat3)| ... |(patN)

Do the match and find which group has the non-None value.

You may need to tack a "$" on the end of the string (in which
case remember to enclose everything in a () so the $ doesn't
affect only the last pattern).

One thing to worry about is that you can only have 99 groups
in a pattern.

Here's example code.
import re

config_data = [
    ("abc#e#", "Reactor meltdown imminent"),
    ("ab##", "Antimatter containment field breach"),
    ("b####f", "Coffee too strong"),
    ]

as_regexps = ["(%s)" % pattern.replace("#", ".")
                  for (pattern, text) in config_data]

full_regexp = "|".join(as_regexps) + "$"
pat = re.compile(full_regexp)

input_data = [
    "abadb",
    "abcdef",
    "zxc",
    "abcq",
    "b1234f",
    ]

for text in input_data:
    m = pat.match(text)
    if not m:
        print "%s? That's okay." % (text,)
    else:
        for i, val in enumerate(m.groups()):
            if val is not None:
                print "%s? We've got a %r warning!" % (text,
                                                       config_data[i][1],)

Here's the output I got when I ran it
abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!
Andrew
da***@dalkescientific.com
Jul 18 '05 #8
On Sat, 16 Oct 2004 09:11:37 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?


Insufficient info. But 'fast as possible' suggests putting your strings in
a flex grammar and generating a parser in c. See
http://www.gnu.org/software/flex/
Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

If you want to do it in python, you still need to be more precise...

- is # a single character? any number of characters?
- if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
or match abcdef twice?
- if one wild card string matches, does that "use up" the test string so other wild card strings
mustn't match? If so, what has priority? Longest? shortest? Other criterion?
- etc etc

Regards,
Bengt Richter
Jul 18 '05 #9
Bengt Richter wrote:
On Sat, 16 Oct 2004 09:11:37 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:

I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Insufficient info. But 'fast as possible' suggests putting your strings in
a flex grammar and generating a parser in c. See
http://www.gnu.org/software/flex/
Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

If you want to do it in python, you still need to be more precise...

- is # a single character? any number of characters?
- if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
or match abcdef twice?
- if one wild card string matches, does that "use up" the test string so other wild card strings
mustn't match? If so, what has priority? Longest? shortest? Other criterion?
- etc etc


Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead in the Python
implementation would be too inefficient.
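
Spelled out as code, those semantics amount to a simple position-by-position check (a throwaway sketch):

def matches(pattern, s):
    # '#' matches exactly one arbitrary character; lengths must be equal.
    if len(pattern) != len(s):
        return False
    for p, c in zip(pattern, s):
        if p != '#' and p != c:
            return False
    return True

print(matches('##', '01'))    # True
print(matches('##', '111'))   # False: lengths differ
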
Jul 18 '05 #10
Andrew Dalke wrote:
One way is with groups. Make each pattern into a regexp
pattern then concatenate them as
(pat1)|(pat2)|(pat3)| ... |(patN)

Do the match and find which group has the non-None value.

You may need to tack a "$" on the end of string (in which
case remember to enclose everything in a () so the $ doesn't
affect only the last pattern).

One things to worry about is you can only have 99 groups
in a pattern.

Here's example code.
import re

config_data = [
    ("abc#e#", "Reactor meltdown imminent"),
    ("ab##", "Antimatter containment field breach"),
    ("b####f", "Coffee too strong"),
    ]

as_regexps = ["(%s)" % pattern.replace("#", ".")
                  for (pattern, text) in config_data]

full_regexp = "|".join(as_regexps) + "$"
pat = re.compile(full_regexp)

input_data = [
    "abadb",
    "abcdef",
    "zxc",
    "abcq",
    "b1234f",
    ]

for text in input_data:
    m = pat.match(text)
    if not m:
        print "%s? That's okay." % (text,)
    else:
        for i, val in enumerate(m.groups()):
            if val is not None:
                print "%s? We've got a %r warning!" % (text,
                                                       config_data[i][1],)

Here's the output I got when I ran it
abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!


Thanks, that's almost exactly what I'm looking for. The only downside I
see is that I still need to add and remove patterns, so continually
recompiling the expression might be expensive.
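
One workaround I could imagine (just a sketch of an assumption on my part, not something suggested so far in the thread) is to mark the combined pattern dirty on every add/remove and only rebuild it lazily on the next lookup:

import re

class PatternSet:
    # Hypothetical helper: rebuilds the combined regexp only when a lookup
    # actually needs it, so bursts of adds/removes cost one re.compile.
    # (Literal parts are assumed to contain no regexp metacharacters.)
    def __init__(self):
        self._patterns = {}    # pattern string -> associated data
        self._compiled = None
        self._order = []

    def add(self, pattern, data):
        self._patterns[pattern] = data
        self._compiled = None          # mark dirty

    def remove(self, pattern):
        del self._patterns[pattern]
        self._compiled = None          # mark dirty

    def lookup(self, text):
        if not self._patterns:
            return None
        if self._compiled is None:
            self._order = list(self._patterns)
            parts = ["(%s$)" % p.replace('#', '.') for p in self._order]
            self._compiled = re.compile("|".join(parts))
        m = self._compiled.match(text)
        if m is None:
            return None
        # m.lastindex is the number of the alternative (group) that matched
        return self._patterns[self._order[m.lastindex - 1]]

Patterns can then be added or removed freely, and the cost of re.compile is paid at most once per batch of changes.
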
Jul 18 '05 #11
Andrew Dalke wrote:
Here's the output I got when I ran it
abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!


Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' against the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?
Jul 18 '05 #12
On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:
Bengt Richter wrote:
On Sat, 16 Oct 2004 09:11:37 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:

I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Insufficient info. But 'fast as possible' suggests putting your strings in
a flex grammar and generating a parser in c. See
http://www.gnu.org/software/flex/
Defining a grammar is a good exercise in precise definition of the problem anyway ;-)

If you want to do it in python, you still need to be more precise...

- is # a single character? any number of characters?
- if your test string were 'abcdefabcdef' would you want 'abc#e#' to match the whole thing?
or match abcdef twice?
- if one wild card string matches, does that "use up" the test string so other wild card strings
mustn't match? If so, what has priority? Longest? shortest? Other criterion?
- etc etc


Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead in the Python
implementation would be too inefficient.


So is the set of patterns static and you want to find which pattern(s!)
match dynamic input? How many patterns vs. input strings? What max
length patterns, input strings? Total volume?

Regards,
Bengt Richter
Jul 18 '05 #13
Chris S. wrote:
Andrew Dalke wrote:
Here's the output I got when I ran it
abadb? We've got a 'Antimatter containment field breach' warning!
abcdef? We've got a 'Reactor meltdown imminent' warning!
zxc? That's okay.
abcq? We've got a 'Antimatter containment field breach' warning!
b1234f? We've got a 'Coffee too strong' warning!

Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' against the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?


Spoke too soon. It seems all you needed was to change:

full_regexp = "|".join(as_regexps) + "$"

to:

full_regexp = "$|".join(as_regexps) + "$"

However, I noticed re still doesn't return multiple matches. For
instance, matching 'abc' against the patterns '#bc', 'a#c', and
'ab#', your code only returns a match to the first pattern '#bc'. Is
this standard behavior or is it possible to change this?
Jul 18 '05 #14
Chris S. wrote:
'abaxile', 'abaze', and 'abbacomes' to the pattern 'ab##'. I'm not an
expert with rex, but your expression looks correct. What could be
causing this?


To avoid this, one would have to add \b to the patterns, AFAIR, so that they
match whole words only.

Regards,

--
* Piotr (pitkali) Kalinowski * mailto: pitkali (at) o2 (dot) pl *
* Registered Linux User No. 282090 * Powered by Gentoo Linux *
* Fingerprint: D5BB 27C7 9993 50BB A1D2 33F5 961E FE1E D049 4FCD *
Jul 18 '05 #15
Bengt Richter wrote:
On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:
Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead in the Python
implementation would be too inefficient.

So is the set of patterns static and you want to find which pattern(s!)
match dynamic input? How many patterns vs. input strings? What max
length patterns, input strings? Total volume?


Patterns and inputs are dynamic, input more so than patterns. The
number, length, and volume of patterns and strings should be arbitrary.
Jul 18 '05 #16
On Sun, 17 Oct 2004 07:18:07 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:
Bengt Richter wrote:
On Sun, 17 Oct 2004 05:50:49 GMT, "Chris S." <ch*****@NOSPAM.udel.edu> wrote:
Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead in the Python
implementation would be too inefficient.

So is the set of patterns static and you want to find which pattern(s!)
match dynamic input? How many patterns vs. input strings? What max
length patterns, input strings? Total volume?


Patterns and inputs are dynamic, input more so than patterns. The
number, length, and volume of patterns and strings should be arbitrary.

Strategies for performance will vary according to volume (small => anything ok)
and the relative sizes of the strings and their respective sets, assuming there's
enough work to make you notice a difference. If patterns >> inputs, walk tables
built from the patterns using the inputs; if patterns << inputs, walk tables built
from the input sets using strategic walks through the patterns. The maximum length
of the strings and patterns also matters: some tricks are worthwhile if the strings
are small and dumb if they are thousands of characters. Who can tell what performance
tricks you may or may not need? Why don't you just try (^pattern1$|^pattern2$|...),
rescanning the whole input for each pattern, and we'll worry about performance later ;-)
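
For instance, a brute-force variation of that (illustrative names only) that compiles each anchored pattern separately, so every matching pattern is reported rather than just the first alternative:

import re

patterns = ['#bc', 'a#c', 'ab#']
compiled = [(p, re.compile(p.replace('#', '.') + '$')) for p in patterns]

def all_matches(text):
    # Anchor each pattern with '$' and test them one at a time.
    return [p for (p, rx) in compiled if rx.match(text)]

print(all_matches('abc'))   # ['#bc', 'a#c', 'ab#']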

Regards,
Bengt Richter
Jul 18 '05 #17
Sorry for the ambiguity. My case is actually pretty simple. '#'
represents any single character, so it's essentially the same as re's
'.'. The match must be exact, so the string and pattern must be of equal
lengths. Each wildcard is independent of other wildcards. For example,
suppose we restricted the possible characters to 1 and 0, then the
pattern '##' would only match '00', '01', '10', and '11'. This pattern
would not match '0', '111', etc. I feel that a trie would work well, but
I'm concerned that for large patterns, the overhead in the Python
implementation would be too inefficient.


Having implemented what is known as a burst trie (a trie where you don't
expand a branch until it has more than 'k' entries) in Python, I can say it
ends up taking much more space, but that isn't really an issue unless you
have large numbers of strings (millions), or the strings are long
(kilobytes).

If you want to make it more efficient (space-wise), write the algorithm
and structures in pure C, then wrap it with SWIG. Add options for
inserting and deleting strings, and also querying for strings that match
a certain pattern.

Thinking about it, if your dictionary is very restricted, you could just
toss all the strings into a balanced search tree, doing a tree traversal
similar to the trie solution. Much less overhead, most of the
benefits.

- Josiah

Jul 18 '05 #18
Chris S. wrote:
Actually, I've noticed some strange behavior. It seems to match more
than one character per wild card. For instance, your code matches
'abaxile', 'abaze', and 'abbacomes' against the pattern 'ab##'. I'm not an
expert with regexps, but your expression looks correct. What could be
causing this?


It's matching the prefix. To make it match the string and
only the string you need a $. Either do

(pat1$)|(pat2$)| ... |(patN$)

or do

((pat1)|(pat2)| ... |(patN))$

If you do the latter, don't forget to omit group(1) in
the list of results, or use the non-capturing group
notation, which I believe is (?: ... ) as in

(?:(pat1)|(pat2)| ... |(patN))$

Andrew
da***@dalkescientific.com
Jul 18 '05 #19
Chris S. wrote:
Spoke too soon.
As did I. :)
However, I noticed re still doesn't return multiple matches. For
instance, matching 'abc' against the patterns '#bc', 'a#c', and
'ab#', your code only returns a match to the first pattern '#bc'. Is
this standard behavior or is it possible to change this?


This is standard behavior. You can't change it. The
only easy solution along these lines is to have a triangular
table of

(pat1)|(pat2)| .... |(patN)
(pat2)| .... |(patN)
...
(patN)

and if group i matched at a point x then do another
search using the (i+1)th entry in that table at that
point. Repeat until no more matches at x.

I don't know of any off-the-shelf solution for Python
for what you want to do, other than the usual "try
each pattern individually." You'll need to make some
sort of state table (or trie in your case) and do it
that way.

You *can* use Perl's regexps for this sort of thing. That
regexp language allowed embedded Perl code, so this will
get you an answer
% perl -ne 'while (/((.bc)(?{print "Match 1 at ", length($`),
"\n"})^)|((a.c)(?{print "Match 2 at ", length($`), "\n"})^)|./g){}'
This is abc acbcb
Match 1 at 8
Match 2 at 8
Match 1 at 13

Breaking the pattern down I'm using "while (/ ... /g) {}" to
match everything in the input string ($_), which comes from
each line of stdin (because of the '-n' command-line flag).

The pattern is

((.bc)(?{print "Match 1 at ", length($`), "\n"})^)
|((a.c)(?{print "Match 2 at ", length($`), "\n"})^)
|.

That is, match ".bc" then execute the corresponding piece
of embedded Perl code. This prints "Match 1 at " followed
by the length of the text before the current match, which
corresponds to the position of the match.

(If you only need to know that there is a match, you don't need the
position from length($`); using $` in Perl gives a performance hit.)

After it executes, the subgroup matches (embedded executable
code always passes, and it consumes no characters). But
then it gets to the '^' test which fails because this is
never at the beginning of the string.

So the regexp engine tries the next option, which is the
"Match 2 at .." test and print. After the print (if
indeed there is a match 2) it also fails.

This takes the engine to the last option which is the "."
character. And that always passes.

Hence this pattern always consumes one and only one character.
I could put it inside a (...)* to match all characters,
but decided instead to use the while(/.../g){} to do the looping.
Why? Old habits, for no well-determined reason.

(The while loop works because of the 'g' flag on the pattern.)

You talk about needing to eke all the performance you can
out of the system. Have you tried the brute force approach
of just doing N regexp tests?

If you need the performance, it's rather easy to convert
a simple trie into C code, save the result on the fly
to a file, compile that into a Python shared library, and
import that library, to get a function that does the
tests given a string. Remember to give a new name to
each shared library as otherwise the importing gets confused.

Andrew
da***@dalkescientific.com
Jul 18 '05 #20
Chris S. wrote:
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?


A very flexible and fast tool is mxTextTools:
http://www.egenix.com/files/python/mxTextTools.html
See also http://simpleparse.sourceforge.net, which contains a more recent
(non-recursive) version of the pattern-matching machine. It's faster than
regexps (at least sometimes) but much more flexible. Each pattern is stored
as (high-level) "assembler" for a pattern-matching machine written in C. A
pattern is just a Python tuple, which can be stored (pickled).

--
Helmut Jarausch

Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
Jul 18 '05 #21
"Chris S." <ch*****@NOSPAM.udel.edu> wrote in message news:<dB5cd.387$B34.355@trndny02>...
I have a number of strings, containing wildcards (e.g. 'abc#e#' where #
is anything), which I want to match with a test string (e.g 'abcdef').
What would be the best way for me to store my strings so lookup is as
fast as possible?

Regular expressions are probably not the simplest or most
straight-forward option here.

A rather simpler and more straight-forward approach is (a) pre-compute
all the possible strings (i.e., interpolate all possible wildcard
values) along with their associated values; and (b) use a dictionary.
Pre-computation in your case is simple because you only have
single-character wildcards. This has four benefits: it uses a standard
and highly optimized native Python data structure (the dictionary);
it's very fast; it's very simple to update or modify keys and values
("cargo"); and it identifies pattern collisions (i.e, patterns that
have two or more actions associated with them).
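
A sketch of that pre-computation step, assuming a small, known alphabet (the names and alphabet here are illustrative only):

def expand(pattern, alphabet):
    # Recursively interpolate every possible character for each '#'.
    if not pattern:
        return ['']
    rest = expand(pattern[1:], alphabet)
    if pattern[0] == '#':
        return [ch + tail for ch in alphabet for tail in rest]
    return [pattern[0] + tail for tail in rest]

lookup = {}
for concrete in expand('ab##', 'abcdef'):
    lookup[concrete] = 'what I must do'

print(len(lookup))           # 36 concrete keys: two wildcards over 6 characters
print(lookup.get('abba'))    # 'what I must do'
print(lookup.get('abc'))     # None - no pattern covers it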

On the other hand, if the number of pre-computed strings is "too
large" and there is a lot of prefix and/or suffix similarity in the
patterns, you could consider using an automaton, trie, or a DAG
(directed acyclic graph). Any string-matching automaton or trie will
do automatic prefix pattern compression and will allow you to attach
arbitrary data to the matching state. If there is considerable suffix
similarity, you could also consider a DAG since it ties together
similar suffixes with identical cargo.

BTW: There is no reason to use an Aho-Corasick automaton for this
problem since you don't need its
find-all-matches-in-a-given-sequence-in-a-single-pass functionality
(and that functionality adds substantial complexity to automaton
generation).
Jul 18 '05 #22
