Bytes | Software Development & Data Engineering Community

aligning a set of word substrings to sentence

I've got a list of word substrings (the "tokens") which I need to align
to a string of text (the "sentence"). The sentence is basically the
concatenation of the token list, with spaces sometimes inserted between
tokens. I need to determine the start and end offsets of each token in
the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
.... She's gonna write
.... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

Here's my current definition of the offsets function::

py> def offsets(tokens, text):
.... start = 0
.... for token in tokens:
.... while text[start].isspace():
.... start += 1
....         text_token = text[start:start+len(token)]
.... assert text_token == token, (text_token, token)
.... yield start, start + len(token)
.... start += len(token)
....
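For reference, the same generator as a plain script rather than an interpreter session (prompts stripped, a Python 3 print call added; nothing else changed):

```python
def offsets(tokens, text):
    # walk through the text, skipping whitespace between tokens
    start = 0
    for token in tokens:
        while text[start].isspace():
            start += 1
        text_token = text[start:start + len(token)]
        assert text_token == token, (text_token, token)
        yield start, start + len(token)
        start += len(token)

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"
print(list(offsets(tokens, text)))
# -> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```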

I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?

STeVe
Dec 1 '05 #1
Steven Bethard wrote:
I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?


using the finditer pattern I just posted in another thread:

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

import re

tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)

I get

print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

which seems to match your version pretty well.
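A self-contained version of this alternation approach; note it sorts by token length instead of reverse-lexical order (a variation, not Fredrik's exact code), which serves the same longest-match-first purpose more directly:

```python
import re

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"

# put longer alternatives first so a short token can't shadow a longer one
ordered = sorted(tokens, key=len, reverse=True)
pattern = re.compile("|".join(map(re.escape, ordered)))

spans = [m.span() for m in pattern.finditer(text)]
print(spans)
# -> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```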

hope this helps!

</F>

Dec 1 '05 #2
Fredrik Lundh wrote:
Steven Bethard wrote:
I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?


using the finditer pattern I just posted in another thread:

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

import re

tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)

I get

print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

which seems to match your version pretty well.


That's what I was looking for. Thanks!

STeVe
Dec 1 '05 #3
"Steven Bethard" <st************ @gmail.com> wrote in message
news:dp******** ************@co mcast.com...
I've got a list of word substrings (the "tokens") which I need to align
to a string of text (the "sentence") . The sentence is basically the
concatenation of the token list, with spaces sometimes inserted between
tokens. I need to determine the start and end offsets of each token in
the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]


Hey, I get the same answer with this:

===================
from pyparsing import oneOf

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

tokenlist = oneOf( " ".join(tokens) )
offsets = [(start,end) for token,start,end in tokenlist.scanString(text) ]

print offsets
===================
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
Of course, pyparsing may be a bit heavyweight to drag into a simple function
like this, and certainly not nearly as fast as a regexp. But it was such a nice
way to show how scanString works.

Pyparsing's "oneOf" helper function takes care of the same longest match
issues that Fredrik Lundh handles using sort, reverse, etc. Just so long as
none of the tokens has an embedded space character.

-- Paul
Dec 1 '05 #4
Paul McGuire wrote:
"Steven Bethard" <st************ @gmail.com> wrote in message
news:dp******** ************@co mcast.com...
I've got a list of word substrings (the "tokens") which I need to align
to a string of text (the "sentence") . The sentence is basically the
concatenation of the token list, with spaces sometimes inserted between
tokens. I need to determine the start and end offsets of each token in
the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]


===================
from pyparsing import oneOf

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

tokenlist = oneOf( " ".join(tokens) )
offsets = [(start,end) for token,start,end in tokenlist.scanString(text) ]

print offsets
===================
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]


Now that's a pretty solution. Three cheers for pyparsing! :)

STeVe
Dec 2 '05 #5
Steven Bethard wrote:
I've got a list of word substrings (the "tokens") which I need to align
to a string of text (the "sentence") . The sentence is basically the
concatenation of the token list, with spaces sometimes inserted between
tokens. I need to determine the start and end offsets of each token in
the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

Here's my current definition of the offsets function::

py> def offsets(tokens, text):
... start = 0
... for token in tokens:
... while text[start].isspace():
... start += 1
...         text_token = text[start:start+len(token)]
... assert text_token == token, (text_token, token)
... yield start, start + len(token)
... start += len(token)
...

I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?

STeVe


Hi Steve:

Any reason you can't simply use str.find in your offsets function?
>>> def offsets(tokens, text):
...     ptr = 0
...     for token in tokens:
...         fpos = text.find(token, ptr)
...         if fpos != -1:
...             end = fpos + len(token)
...             yield (fpos, end)
...             ptr = end
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
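The str.find approach as a stand-alone script. One caveat (my note, not part of the original post): find() locates the next occurrence of the token from the pointer onward, so it relies on each token actually appearing next in the text; for whitespace-separated concatenations like this one, that always holds:

```python
def offsets(tokens, text):
    # advance a pointer and locate each token from there onward
    ptr = 0
    for token in tokens:
        fpos = text.find(token, ptr)
        if fpos != -1:
            end = fpos + len(token)
            yield (fpos, end)
            ptr = end

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"
print(list(offsets(tokens, text)))
# -> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```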
and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):
...     from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         if length:
...             yield start, start + length
...
>>> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
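The difflib trick as a runnable script (same code, just with the session prompts stripped): joining the tokens with a character that never appears in the text means the matching blocks between the two strings are exactly the tokens, and each block's index into the first string is the token's offset.

```python
from difflib import SequenceMatcher

def offsets(tokens, text):
    # align the text against the tab-joined tokens; each matching
    # block's position in `text` is a token's (start, end) span
    s = SequenceMatcher(None, text, "\t".join(tokens))
    for start, _, length in s.get_matching_blocks():
        if length:  # skip the zero-length terminal block
            yield start, start + length

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"
print(list(offsets(tokens, text)))
# -> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```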


cheers
Michael

Dec 2 '05 #6
Steven Bethard wrote:
I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?


using the finditer pattern I just posted in another thread:

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

import re

tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)

I get

print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

which seems to match your version pretty well.


That's what I was looking for. Thanks!


except that I misread your problem statement; the RE solution above allows the
tokens to be specified in arbitrary order. If they're always ordered, you can
replace the code with something like:

# match tokens plus optional whitespace between each token
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
result = (m.span(i+1) for i in range(len(tokens)))

which is 6-7 times faster than the previous solution, on my machine.
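Spelled out as a complete snippet (using a raw string for the \s* separator, which current Python versions require to avoid an invalid-escape warning): each token becomes a capturing group, so group i+1's span is token i's offsets.

```python
import re

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = "She's gonna write\na book?"

# match tokens in order, with optional whitespace between each pair;
# each token is a capturing group, so its span is recoverable from the match
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
spans = [m.span(i + 1) for i in range(len(tokens))]
print(spans)
# -> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
```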

</F>

Dec 2 '05 #7
Fredrik Lundh wrote:
Steven Bethard wrote:

I feel like there should be a simpler solution (maybe with the re
module?) but I can't figure one out. Any suggestions?

using the finditer pattern I just posted in another thread:

tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
text = '''\
She's gonna write
a book?'''

import re

tokens.sort() # lexical order
tokens.reverse() # look for longest match first
pattern = "|".join(map(re.escape, tokens))
pattern = re.compile(pattern)

I get

print [m.span() for m in pattern.finditer(text)]
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

which seems to match your version pretty well.


That's what I was looking for. Thanks!

except that I misread your problem statement; the RE solution above allows the
tokens to be specified in arbitrary order. If they're always ordered, you can
replace the code with something like:

# match tokens plus optional whitespace between each token
pattern = r"\s*".join("(" + re.escape(token) + ")" for token in tokens)
m = re.match(pattern, text)
result = (m.span(i+1) for i in range(len(tokens)))

which is 6-7 times faster than the previous solution, on my machine.


Ahh yes, that's faster for me too. Thanks again!

STeVe
Dec 2 '05 #8
Michael Spencer wrote:
Steven Bethard wrote:
I've got a list of word substrings (the "tokens") which I need to
align to a string of text (the "sentence") . The sentence is basically
the concatenation of the token list, with spaces sometimes inserted
between tokens. I need to determine the start and end offsets of
each token in the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

[snip]
and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):
...     from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         if length:
...             yield start, start + length
...
>>> list(offsets(tokens, text))

[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]


That's cool, I've never seen that before. If you pass in str.isspace,
you can even drop the "if length:" line::

py> def offsets(tokens, text):
....     s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
....     for start, _, length in s.get_matching_blocks():
....         yield start, start + length
....
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24,
25), (25, 25)]

I think I'm going to have to take a closer look at
difflib.SequenceMatcher; I have to do things similar to this pretty often...

STeVe
Dec 2 '05 #9
Steven Bethard wrote:
Michael Spencer wrote:
Steven Bethard wrote:
I've got a list of word substrings (the "tokens") which I need to
align to a string of text (the "sentence") . The sentence is
basically the concatenation of the token list, with spaces sometimes
inserted between tokens. I need to determine the start and end
offsets of each token in the sentence. For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24,
25)]

[snip]

and then, for an entry in the wacky category, a difflib solution:
>>> def offsets(tokens, text):

... from difflib import SequenceMatcher
...     s = SequenceMatcher(None, text, "\t".join(tokens))
...     for start, _, length in s.get_matching_blocks():
... if length:
... yield start, start + length
...
>>> list(offsets(tokens, text))

[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24,
25)]

That's cool, I've never seen that before. If you pass in str.isspace,
you can even drop the "if length:" line::

py> def offsets(tokens, text):
...     s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
...     for start, _, length in s.get_matching_blocks():
... yield start, start + length
...
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24,
25), (25, 25)]


Sorry, that should have been::
list(offsets(tokens, text))[:-1]
since the last item is always the zero-length one. Which means you
don't really need str.isspace either.

STeVe
Dec 2 '05 #10
