
some kind of detector, need advice...

Joh
Hello,

(sorry for the long post)

I think I have missed something in the code below. I would like to
design some kind of detector with Python, but I feel totally stuck
now and need some advice on how to move forward :(

data = "it is an <atag> example of the kind of </atag> data it must
handle and another kind of data".split(" ")
(actually data are splitted line by line in a file, and contained
other than simple words so using ' '<space> is just to post here)

I would like to be able to write some kind of easy rule like:
detect1 = """th.* kind of data"""
or better:
detect2 = """th.* * data""" ### the second '*' could be seen as a
joker, as in re, some sort of "skip zero or more lines"
which would give me the spans where it matched, here:
[(6, 11), (15, 19)]

I have written the code below, which handles detect1, but I am still
unable to adapt it to detect2. I think I may be missing some kind of
step back (backtracking) in case of a failed match.
def ignore(s): if s.startswith("<"):
return True
return False
class Rule: def __init__(self, rule, separator = " "):
self.rule = tuple(rule.split(separator))
self.length = len(self.rule)
self.compiled = []
self.filled = 0
for i in range(self.length):
current = self.rule[i]
if current == '*':
### special case, one may advance...
self.compiled.append('*')
else:
self.filled += 1
self.compiled.append(re.compile(current))
self.compiled = tuple(self.compiled)
###
    def match(self, lines, ignore=None):
        spans = []
        i, current, memorized, matched = 0, 0, None, None
        while 1:
            if i == len(lines):
                break
            line = lines[i]
            i += 1
            print "%3d: %s (%s)" % (i, line, current),
            if ignore and ignore(line):
                print ' - ignored'
                continue
            regexp = self.compiled[current]
            if regexp == '*':
                ### HERE I NEED SOME ADVICE...
                pass
            elif hasattr(regexp, 'search') and regexp.search(line):
                ### matched current pattern
                print ' + matched',
                matched = True
            else:
                current, memorized, matched = 0, None, None
            if matched:
                if memorized is None:
                    memorized = i - 1
                if current == self.filled - 1:
                    print " + detected!",
                    spans.append((memorized, i))
                    current, memorized = 0, None
                current += 1
            print
        return spans

data = "it is an <atag> example of the kind of </atag> data it must handle and another kind of data".split(" ") detect = """th.* kind of data"""
r = Rule(detect, ' ') ; r.match(data, ignore)

1: it (0)
2: is (0)
3: an (0)
4: <atag> (0) - ignored
5: example (0)
6: of (0)
7: the (0) + matched
8: kind (1) + matched
9: of (2) + matched
10: </atag> (3) - ignored
11: data (3) + matched + detected!
12: it (1)
13: must (0)
14: handle (0)
15: and (0)
16: another (0) + matched
17: kind (1) + matched
18: of (2) + matched
19: data (3) + matched + detected!
[(6, 11), (15, 19)] ### actually these are indexes into the list; add
1 to get line numbers
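
A minimal sketch of one way the '*' joker could behave, written as a
standalone function rather than a patch to Rule.match (it assumes '*'
only ever sits between two concrete patterns, reuses the ignore
function from above, and returns the same 0-based, half-open spans):

import re

def match_rule(rule, tokens, ignore=None):
    # '*' entries skip tokens until one matches the next concrete
    # pattern (a non-greedy skip); assumes '*' is never first or last
    pats = [p if p == '*' else re.compile(p) for p in rule.split(' ')]
    spans = []
    i = 0
    while i < len(tokens):
        j, k, start = i, 0, None
        while j < len(tokens) and k < len(pats):
            if ignore and ignore(tokens[j]):
                j += 1
                continue
            if pats[k] == '*':
                if pats[k + 1].search(tokens[j]):
                    if start is None:
                        start = j
                    k += 2          # consume '*' and the next pattern
                j += 1
                continue
            if pats[k].search(tokens[j]):
                if start is None:
                    start = j
                k += 1
                j += 1
            else:
                break               # no backtracking: restart at i + 1
        if k == len(pats):
            spans.append((start, j))   # 0-based, half-open
            i = j
        else:
            i += 1
    return spans

data = ("it is an <atag> example of the kind of </atag> data "
        "it must handle and another kind of data").split(" ")
print match_rule("th.* * data", data, ignore)
# -> [(6, 11), (15, 19)]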
Jul 18 '05 #1
On 14 Jul 2004 23:37:37 -0700, Joh <jo******@yahoo.fr> wrote:
Hello,

(sorry long)

[snip]


Am I the only one who didn't understand this?

--
John Lenton (jl*****@gmail.com) -- Random fortune:
bash: fortune: command not found
Jul 18 '05 #2
I'm not really sure I followed that either, but here's a restatement
of the problem according to my feeble understanding:

Joh wants to break a string into tokens (in this case, just separated
by whitespace) and then perform pattern matching based on the tokens.
He then wants to find the span of matched patterns, in terms of token
numbers.

For example, given the input "this is a test string" and the pattern
"is .* test", the program will first tokenize the string:

tokens = ['this', 'is', 'a', 'test', 'string']

...and then will match ['is', 'a', 'test'] and return the indices of
the matched interval [(1, 3)]. (I don't know if that interval is
supposed to be inclusive.)

Note that .* would match zero or more tokens, not characters.
Additionally, it seems that xml-ish tags ("<atag>" in the example)
would be ignored.

Implementing this from scratch would require hacking together some
kind of LR parser. Blah. Fortunately, because tokenization is
trivial, it is possible to translate all of the "detectors" (aka
token-matching patterns) directly into regular expressions; then, all
you have to do is correlate the match object intervals (indexed by
character) into token intervals (indexed by token).

To translate a "detector" into a proper regular expression, just make
a few substitutions:

def detector_to_re(detector):
    """ translate a token pattern into a regular expression """
    # could be more efficient, but this is good for readability

    # "." => "(\S+)" (match one token)
    detector = re.sub(r'\.', r'(\S+)', detector)

    # whitespace block => "(\s+)" (match a stretch of whitespace)
    detector = re.sub(r'\s+', r'(\s+)', detector)

    # caveat: ".*" ends up as "(\S+)*", which, sandwiched between two
    # "(\s+)" groups, matches exactly one token (or none, only when
    # the whitespace run is 2+ characters) -- not truly zero or more
    return detector
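
For instance (a quick hypothetical check of the translation):

print detector_to_re("is .* test")
# -> is(\s+)(\S+)*(\s+)test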

def apply_detector(detector, data):

    # strip tags first, so the character offsets below line up
    data = re.sub(r'<.*?>', '', data)

    # compile mapping of character offsets -> token indices
    i = 0
    token_indices = {}
    for match in re.finditer(r'\S+', data):
        token_indices[match.start()] = i
        token_indices[match.end()] = i
        i += 1

    detector_re = re.compile(detector_to_re(detector))

    intervals = []

    # assumes every detector match starts and ends exactly on token
    # boundaries; anything else would raise a KeyError here
    for match in detector_re.finditer(data):
        intervals.append(
            (
                token_indices[match.start()],
                token_indices[match.end()]
            )
        )

    return intervals
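
A quick check of the above (hypothetical session; per the earlier
note, the interval is inclusive):

data = "this is a <tag> test string"
print apply_detector("is .* test", data)
# -> [(1, 3)]   'is' is token 1, 'test' is token 3; <tag> is ignored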
On second thought, it's probably best to throw out this whole scheme
and just use regular expressions directly, even if it's less
convenient. That way you don't mix the syntaxes of regexps and token
expressions (like "th.* *" would do...).
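
For example, staying at the character level (a hypothetical rewrite
of detect1 as a plain regexp; the spans are character offsets, not
token numbers, and the pattern happily matches inside 'another'):

import re

data = ("it is an <atag> example of the kind of </atag> data "
        "it must handle and another kind of data")
text = re.sub(r'<.*?>', '', data)   # strip the tags first
for m in re.finditer(r'th\S*\s+kind\s+of\s+data', text):
    print m.span(), repr(m.group())
# -> matches 'the kind of  data' and 'ther kind of data'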
Jul 18 '05 #3
