I've got a list of word substrings (the "tokens") which I need to align
to a string of text (the "sentence"). The sentence is basically the
concatenation of the token list, with spaces sometimes inserted between
tokens. I need to determine the start and end offsets of each token in
the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
.... She's gonna write
.... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Here's my current definition of the offsets function::
py> def offsets(tokens, text):
....     start = 0
....     for token in tokens:
....         while text[start].isspace():
....             start += 1
....         text_token = text[start:start+len(token)]
....         assert text_token == token, (text_token, token)
....         yield start, start + len(token)
....         start += len(token)
....
I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?
STeVe
Steven Bethard wrote: I feel like there should be a simpler solution (maybe with the re module?) but I can't figure one out. Any suggestions?
using the finditer pattern I just posted in another thread:
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
import re
tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)
I get
print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
which seems to match your version pretty well.
hope this helps!
</F>
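A quick aside on why the sort/reverse step above matters: alternation in Python's `re` module is first-match, not longest-match, so a short token listed before a longer one would shadow it. A minimal illustration (the overlapping pair "gon"/"gonna" here is a hypothetical example, not from the thread):

```python
import re

# Alternation tries alternatives left to right and takes the first that
# matches, so ordering the alternatives longest-first is essential.
unsorted = re.compile("|".join(map(re.escape, ["gon", "gonna"])))
longest_first = re.compile("|".join(map(re.escape, ["gonna", "gon"])))

print(unsorted.match("gonna").group())       # matches only 'gon'
print(longest_first.match("gonna").group())  # matches all of 'gonna'
```

Reverse lexical order does the job because any string sorts after its own prefixes.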
Fredrik Lundh wrote: Steven Bethard wrote: I feel like there should be a simpler solution (maybe with the re module?) but I can't figure one out. Any suggestions?
using the finditer pattern I just posted in another thread:
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
import re
tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)
I get
print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
which seems to match your version pretty well.
That's what I was looking for. Thanks!
STeVe
"Steven Bethard" <st************@gmail.com> wrote in message
news:dp********************@comcast.com...
I've got a list of word substrings (the "tokens") which I need to align to a string of text (the "sentence"). The sentence is basically the concatenation of the token list, with spaces sometimes inserted between tokens. I need to determine the start and end offsets of each token in the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Hey, I get the same answer with this:
===================
from pyparsing import oneOf
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
tokenlist = oneOf( " ".join(tokens) )
offsets = [(start,end) for token,start,end in tokenlist.scanString(text) ]
print offsets
===================
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Of course, pyparsing may be a bit heavyweight to drag into a simple function
like this, and certainly not nearly as fast as a regexp. But it was such a nice
way to show how scanString works.
Pyparsing's "oneOf" helper function takes care of the same longest match
issues that Fredrik Lundh handles using sort, reverse, etc. Just so long as
none of the tokens has an embedded space character.
-- Paul
Paul McGuire wrote: "Steven Bethard" <st************@gmail.com> wrote in message news:dp********************@comcast.com...
I've got a list of word substrings (the "tokens") which I need to align to a string of text (the "sentence"). The sentence is basically the concatenation of the token list, with spaces sometimes inserted between tokens. I need to determine the start and end offsets of each token in the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
===================
from pyparsing import oneOf
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
tokenlist = oneOf( " ".join(tokens) )
offsets = [(start,end) for token,start,end in tokenlist.scanString(text) ]
print offsets
===================
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Now that's a pretty solution. Three cheers for pyparsing! :)
STeVe
Steven Bethard wrote: I've got a list of word substrings (the "tokens") which I need to align to a string of text (the "sentence"). The sentence is basically the concatenation of the token list, with spaces sometimes inserted between tokens. I need to determine the start and end offsets of each token in the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Here's my current definition of the offsets function::
py> def offsets(tokens, text):
....     start = 0
....     for token in tokens:
....         while text[start].isspace():
....             start += 1
....         text_token = text[start:start+len(token)]
....         assert text_token == token, (text_token, token)
....         yield start, start + len(token)
....         start += len(token)
....
I feel like there should be a simpler solution (maybe with the re module?) but I can't figure one out. Any suggestions?
STeVe
Hi Steve:
Any reason you can't simply use str.find in your offsets function?

>>> def offsets(tokens, text):
...     ptr = 0
...     for token in tokens:
...         fpos = text.find(token, ptr)
...         if fpos != -1:
...             end = fpos + len(token)
...             yield (fpos, end)
...             ptr = end
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):
...     from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         if length:
...             yield start, start + length
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
cheers
Michael
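One caveat worth noting about the str.find version above: if a token can't be found past the current position, the `if fpos != -1` guard silently drops it from the output. A variant that fails loudly instead (my sketch, not from the thread):

```python
def offsets(tokens, text):
    # Like the str.find version above, but raise on a missing token
    # instead of silently skipping it.
    ptr = 0
    for token in tokens:
        fpos = text.find(token, ptr)
        if fpos == -1:
            raise ValueError("token %r not found after offset %d" % (token, ptr))
        end = fpos + len(token)
        yield fpos, end
        ptr = end

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"
print(list(offsets(tokens, text)))
# [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```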
Steven Bethard wrote: I feel like there should be a simpler solution (maybe with the re module?) but I can't figure one out. Any suggestions?
using the finditer pattern I just posted in another thread:
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
import re
tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)
I get
print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
which seems to match your version pretty well.
That's what I was looking for. Thanks!
except that I misread your problem statement; the RE solution above allows the
tokens to be specified in arbitrary order. If they're always ordered, you can
replace the code with something like:
# match tokens plus optional whitespace between each token
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
result = (m.span(i+1) for i in range(len(tokens)))
which is 6-7 times faster than the previous solution, on my machine.
</F>
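Putting the faster, anchored-match variant above together as a self-contained sketch (the text literal is inlined here for convenience):

```python
import re

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"

# Match the tokens in order, allowing optional whitespace between them;
# each token gets its own capturing group, so m.span(i+1) is its offset.
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
offsets = [m.span(i + 1) for i in range(len(tokens))]
print(offsets)
# [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```

This is faster because the whole sentence is matched in a single anchored pass instead of restarting the scanner at every token boundary.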
Fredrik Lundh wrote: Steven Bethard wrote:
I feel like there should be a simpler solution (maybe with the re module?) but I can't figure one out. Any suggestions?
using the finditer pattern I just posted in another thread:
tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''
import re
tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)
I get
print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
which seems to match your version pretty well.
That's what I was looking for. Thanks!
except that I misread your problem statement; the RE solution above allows the tokens to be specified in arbitrary order. If they're always ordered, you can replace the code with something like:
# match tokens plus optional whitespace between each token
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
result = (m.span(i+1) for i in range(len(tokens)))
which is 6-7 times faster than the previous solution, on my machine.
Ahh yes, that's faster for me too. Thanks again!
STeVe
Michael Spencer wrote: Steven Bethard wrote:
I've got a list of word substrings (the "tokens") which I need to align to a string of text (the "sentence"). The sentence is basically the concatenation of the token list, with spaces sometimes inserted between tokens. I need to determine the start and end offsets of each token in the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
[snip] and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):
...     from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         if length:
...             yield start, start + length
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
That's cool, I've never seen that before. If you pass in str.isspace,
you can even drop the "if length:" line::
py> def offsets(tokens, text):
....     s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
....     for start, _, length in s.get_matching_blocks():
....         yield start, start + length
....
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25), (25, 25)]
I think I'm going to have to take a closer look at
difflib.SequenceMatcher; I have to do things similar to this pretty often...
STeVe
Steven Bethard wrote: Michael Spencer wrote:
Steven Bethard wrote:
I've got a list of word substrings (the "tokens") which I need to align to a string of text (the "sentence"). The sentence is basically the concatenation of the token list, with spaces sometimes inserted between tokens. I need to determine the start and end offsets of each token in the sentence. For example::
py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
[snip]
and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):
...     from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         if length:
...             yield start, start + length
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
That's cool, I've never seen that before. If you pass in str.isspace, you can even drop the "if length:" line::
py> def offsets(tokens, text):
....     s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
....     for start, _, length in s.get_matching_blocks():
....         yield start, start + length
....
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25), (25, 25)]
Sorry, that should have been::
list(offsets(tokens, text))[:-1]
since the last item is always the zero-length one. Which means you
don't really need str.isspace either.
STeVe
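Pulling the difflib approach together, here is a sketch of the corrected final version, which slices off the trailing zero-length sentinel block instead of filtering on length:

```python
from difflib import SequenceMatcher

def offsets(tokens, text):
    # get_matching_blocks() always ends with a zero-length dummy block,
    # so slice it off rather than testing each block's length.
    s = SequenceMatcher(None, text, "\t".join(tokens))
    return [(start, start + size)
            for start, _, size in s.get_matching_blocks()][:-1]

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"
print(offsets(tokens, text))
# [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```

Joining the tokens with "\t" works because tab never appears inside a token here; any separator character absent from the tokens would do.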