Removing comments... tokenize error

While analysing a very big application (pysol), made of almost
100 source files, I needed to remove the comments.

Removing comments that take up a whole line is straightforward...

For the embedded (end-of-line) comments I used the tokenize module instead.

To my surprise the analysed output is different from the input
(the last element of each token tuple should exactly replicate the input line).
The error shows up at a triple-quoted string.
I don't know whether this has already been corrected (I use Python 2.3)
or whether it is a mistake on my part...

Here is the script I use to reproduce the strange behaviour:

import tokenize

Input = "pippo1"
Output = "pippo2"

f = open(Input)
fOut = open(Output, "w")

nLastLine = 0
for i in tokenize.generate_tokens(f.readline):
    if nLastLine != (i[2])[0]:      # the 3rd element of the tuple is
        nLastLine = (i[2])[0]       # (startingRow, startingCol)
        fOut.write(i[4])

f.close()
fOut.close()

The file to be used (pippo1) contains an extract:

class SelectDialogTreeData:
    img = None
    def __init__(self):
        self.tree_xview = (0.0, 1.0)
        self.tree_yview = (0.0, 1.0)
        if self.img is None:
            SelectDialogTreeData.img = (makeImage(dither=0, data="""
R0lGODlhEAAOAPIFAAAAAICAgMDAwP//AP///4AAAAAAAAAAACH5BAEAAAUALAAAAAAQAA4AAAOL
WLrcGxA6FoYYYoRZwhCDMAhDFCkBoa6sGgBFQAzCIAzCIAzCEACFAEEwEAwEA8FAMBAEAIUAYSAY
CAaCgWAgGAQAhQBBMBAMBAPBQDAQBACFAGEgGAgGgoFgIBgEAAUBBAIDAgMCAwIDAgMCAQAFAQQD
AgMCAwIDAgMCAwEABSaiogAKAKeoqakFCQA7"""), makeImage(dither=0, data="""
R0lGODlhEAAOAPIFAAAAAICAgMDAwP//AP///4AAAAAAAAAAACH5BAEAAAUALAAAAAAQAA4AAAN3
WLrcHBA6Foi1YZZAxBCDQESREhCDMAiDcFkBUASEMAiDMAiDMAgBAGlIGgQAgZeSEAAIAoAAQTAQ
DAQDwUAwAEAAhQBBMBAMBAPBQBAABACFAGEgGAgGgoFgIAAEAAoBBAMCAwIDAgMCAwEAAApERI4L
jpWWlgkAOw=="""), makeImage(dither=0, data="""
R0lGODdhEAAOAPIAAAAAAAAAgICAgMDAwP///wAAAAAAAAAAACwAAAAAEAAOAAADTii63DowyiiA
GCHrnQUQAxcQAAEQgAAIg+MCwkDMdD0LgDDUQG8LAMGg1gPYBADBgFbs1QQAwYDWBNQEAMHABrAR
BADBwOsVAFzoqlqdAAA7"""), makeImage(dither=0, data="""
R0lGODdhEAAOAPIAAAAAAAAAgICAgMDAwP8AAP///wAAAAAAACwAAAAAEAAOAAADVCi63DowyiiA
GCHrnQUQAxcUQAEUgAAIg+MCwlDMdD0LgDDQBE3UAoBgUCMUCDYBQDCwEWwFAUAwqBEKBJsAIBjQ
CDRCTQAQDKBQAcDFBrjf8Lg7AQA7"""))

The output of tokenize (pippo2) gives instead:

class SelectDialogTreeData:
    img = None
    def __init__(self):
        self.tree_xview = (0.0, 1.0)
        self.tree_yview = (0.0, 1.0)
        if self.img is None:
            SelectDialogTreeData.img = (makeImage(dither=0, data="""
AgMCAwIDAgMCAwEABSaiogAKAKeoqakFCQA7"""), makeImage(dither=0, data="""
jpWWlgkAOw=="""), makeImage(dither=0, data="""
BADBwOsVAFzoqlqdAAA7"""), makeImage(dither=0, data="""
CDRCTQAQDKBQAcDFBrjf8Lg7AQA7"""))

.... with a big difference! Why?
Jul 18 '05 #1
"qwweeeit" <qw******@yahoo .it> wrote:
I don't know if this has already been corrected (I use Python 2.3)
or perhaps is a mistake on my part...
it's a mistake on your part. adding a print statement to the for-loop
might help you figure it out:

nLastLine = 0
for i in tokenize.generate_tokens(f.readline):
    print i
    if nLastLine != (i[2])[0]:      # the 3rd element of the tuple is
        nLastLine = (i[2])[0]       # (startingRow, startingCol)
        fOut.write(i[4])


(hints: what happens if a token spans multiple lines? and how does
the tokenize module deal with comments?)
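To make the first hint concrete, here is a minimal sketch (assuming Python 2.x,
as used throughout this thread) that prints the token name and start/end rows
for a snippet containing a triple-quoted string:

import StringIO, tokenize

src = 's = """one\ntwo\nthree"""\nx = 1\n'
for tok in tokenize.generate_tokens(StringIO.StringIO(src).readline):
    toktype, toktext, (srow, scol), (erow, ecol), line = tok
    print tokenize.tok_name.get(toktype, toktype), srow, erow
# the STRING token starts on row 1 and ends on row 3; the lines between the
# first and last line of the string never start a token of their own, so a
# loop that writes i[4] only when the start row changes will skip them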

</F>

Jul 18 '05 #2
Thanks! If you answer my posts one more time I could consider you my tutor...

It would have been strange to have really found a bug...! In any case I will not
go deeper into the matter, because your explanation is enough for me.
I corrected the problem by hand, removing the tokens spanning multiple lines
(there were only 8 cases...).

I still haven't understood your hint about comments, though...
I succeeded in writing a Python script which removes comments.

Here it is (in all its cumbersome and cryptic appearance!...):

# removeCommentsTok.py
import tokenize
Input = "pippo1"
Output = "pippo2"
f = open(Input)
fOut = open(Output, "w")

nLastLine = 0
for i in tokenize.generate_tokens(f.readline):
    if i[0] == 52 and nLastLine != (i[2])[0]:
        fOut.write((i[4].replace(i[1], '')).rstrip() + '\n')
        nLastLine = (i[2])[0]
    elif i[0] == 4 and nLastLine != (i[2])[0]:
        fOut.write(i[4])
        nLastLine = (i[2])[0]
f.close()
fOut.close()

Some explanations for guys like me...:
- 52 and 4 are the numeric codes for the COMMENT and NEWLINE tokens respectively
- the comment removal is obtained by clearing the comment text (i[1]) in the
  input line (i[4])
- I also right-trimmed the line to get rid of the remaining blanks.
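A slightly more readable variant of the same loop, using the named constants
that tokenize exports instead of the numeric codes (a sketch only, assuming 52
and 4 do correspond to COMMENT and NEWLINE on this interpreter; the hypothetical
pippo1/pippo2 file names are kept):

# removeCommentsTok2.py -- same logic, with named token constants
import tokenize

f = open("pippo1")
fOut = open("pippo2", "w")

nLastLine = 0
for toktype, toktext, (srow, scol), (erow, ecol), line in \
        tokenize.generate_tokens(f.readline):
    if toktype == tokenize.COMMENT and nLastLine != srow:
        # drop the comment text from the physical line, keep the code
        fOut.write(line.replace(toktext, '').rstrip() + '\n')
        nLastLine = srow
    elif toktype == tokenize.NEWLINE and nLastLine != srow:
        fOut.write(line)
        nLastLine = srow

f.close()
fOut.close()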
Jul 18 '05 #3
qwweeeit wrote:
Thanks! If you answer my posts one more time I could consider you my tutor...

It would have been strange to have really found a bug...! In any case I will not
go deeper into the matter, because your explanation is enough for me.
I corrected the problem by hand, removing the tokens spanning multiple lines
(there were only 8 cases...).

I still haven't understood your hint about comments, though...
I succeeded in writing a Python script which removes comments.

Here it is (in all its cumbersome and cryptic appearance!...):

# removeCommentsTok.py
import tokenize
Input = "pippo1"
Output = "pippo2"
f = open(Input)
fOut = open(Output, "w")

nLastLine = 0
for i in tokenize.generate_tokens(f.readline):
    if i[0] == 52 and nLastLine != (i[2])[0]:
        fOut.write((i[4].replace(i[1], '')).rstrip() + '\n')
        nLastLine = (i[2])[0]
    elif i[0] == 4 and nLastLine != (i[2])[0]:
        fOut.write(i[4])
        nLastLine = (i[2])[0]
f.close()
fOut.close()

Some explanations for guys like me...:
- 52 and 4 are the numeric codes for the COMMENT and NEWLINE tokens respectively
- the comment removal is obtained by clearing the comment text (i[1]) in the
  input line (i[4])
- I also right-trimmed the line to get rid of the remaining blanks.

The tokenizer sends multiline strings and comments as a single token.
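For the comment case, a tiny check shows this (a sketch, Python 2.x assumed):

import StringIO, tokenize

src = 'x = 1  # a trailing comment\n'
for tok in tokenize.generate_tokens(StringIO.StringIO(src).readline):
    print tokenize.tok_name.get(tok[0], tok[0]), repr(tok[1])
# the trailing comment arrives as one COMMENT token, '#' included,
# which is what the stripper below relies on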

########################################################################
# python comment and whitespace stripper :)
########################################################################

import keyword, os, sys, traceback
import StringIO
import token, tokenize
__credits__ = 'just another tool that I needed'
__version__ = '.7'
__author__ = 'M.E.Farmer'
__date__ = 'Jan 15 2005, Oct 24 2004'

########################################################################

class Stripper:
    """python comment and whitespace stripper :)
    """
    def __init__(self, raw):
        self.raw = raw

    def format(self, out=sys.stdout, comments=0, spaces=1,
               untabify=1, eol='unix'):
        ''' strip comments, strip extra whitespace,
            convert EOL's from Python code.
        '''
        # Store line offsets in self.lines
        self.lines = [0, 0]
        pos = 0
        # Strips the first blank line if 1
        self.lasttoken = 1
        self.temp = StringIO.StringIO()
        self.spaces = spaces
        self.comments = comments

        if untabify:
            self.raw = self.raw.expandtabs()
        self.raw = self.raw.rstrip()+' '
        self.out = out

        self.raw = self.raw.replace('\r\n', '\n')
        self.raw = self.raw.replace('\r', '\n')
        self.lineend = '\n'

        # Gather lines
        while 1:
            pos = self.raw.find(self.lineend, pos) + 1
            if not pos: break
            self.lines.append(pos)

        self.lines.append(len(self.raw))
        # Wrap text in a filelike object
        self.pos = 0

        text = StringIO.StringIO(self.raw)

        # Parse the source.
        ## Tokenize calls the __call__
        ## function for each token till done.
        try:
            tokenize.tokenize(text.readline, self)
        except tokenize.TokenError, ex:
            traceback.print_exc()

        # Ok now we write it to a file
        # but we also need to clean the whitespace
        # between the lines and at the ends.
        self.temp.seek(0)

        # Mac CR
        if eol == 'mac':
            self.lineend = '\r'
        # Windows CR LF
        elif eol == 'win':
            self.lineend = '\r\n'
        # Unix LF
        else:
            self.lineend = '\n'

        for line in self.temp.readlines():
            if spaces == -1:
                self.out.write(line.rstrip()+self.lineend)
            else:
                if not line.isspace():
                    self.lasttoken=0
                    self.out.write(line.rstrip()+self.lineend)
                else:
                    self.lasttoken+=1
                    if self.lasttoken<=self.spaces and self.spaces:
                        self.out.write(self.lineend)

    def __call__(self, toktype, toktext,
                 (srow,scol), (erow,ecol), line):
        ''' Token handler.
        '''
        # calculate new positions
        oldpos = self.pos
        newpos = self.lines[srow] + scol
        self.pos = newpos + len(toktext)

        # kill the comments
        if not self.comments:
            # Kill the comments ?
            if toktype == tokenize.COMMENT:
                return

        # handle newlines
        if toktype in [token.NEWLINE, tokenize.NL]:
            self.temp.write(self.lineend)
            return

        # send the original whitespace, if needed
        if newpos > oldpos:
            self.temp.write(self.raw[oldpos:newpos])

        # skip indenting tokens
        if toktype in [token.INDENT, token.DEDENT]:
            self.pos = newpos
            return

        # send text to the temp file
        self.temp.write(toktext)
        return

########################################################################

def Main():
    import sys
    if sys.argv[1]:
        filein = open(sys.argv[1]).read()
        Stripper(filein).format(out=sys.stdout, comments=1, untabify=1,
                                eol='win')

########################################################################

if __name__ == '__main__':
    Main()
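If the script above is saved as, say, stripper.py (the name is chosen here just
for illustration), it can be run from a shell and redirected to a file:

python stripper.py somefile.py > somefile_stripped.py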

M.E.Farmer

Jul 18 '05 #4
My code, besides being cumbersome and cryptic, has another quality:
it is buggy!
I apologize for that; obviously I discovered it after posting (in the
best tradition of Murphy's law!).
When I find the solution I will let you know, although the problem is
made harder by the fact that the for loop is indexed in terms of
a 5-element tuple, which is not very easy (at least for me!...).
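For what it's worth, the 5-element tuple gets easier to handle if it is
unpacked directly in the for statement. A sketch of the same loop with names
instead of indexes (it keeps the multi-line-token quirk discussed above):

import tokenize

f = open("pippo1")            # same hypothetical input file as before
nLastLine = 0
for toktype, toktext, (srow, scol), (erow, ecol), line in \
        tokenize.generate_tokens(f.readline):
    if nLastLine != srow:     # srow is what (i[2])[0] was in the indexed form
        nLastLine = srow
        print line,           # trailing comma: 'line' already ends in '\n'
f.close()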
Jul 18 '05 #5
Hi,
I no longer need to correct my code's bugs and send a working application
to the clp group (I don't think there was an eager expectation...).
Your code works perfectly (as you can expect from a guru...).
Thank you and bye.
Jul 19 '05 #6
Hi,

At last I have succeeded in implementing a cross-reference tool!
(with your help and that of other gurus...).
Now I can face the problem (for me...) of understanding your
code (I have not grasped classes and objects yet...).

I give you a brief example of the xref output (taken from your code,
even though the line numbers don't match, because I modified your code,
not being interested in EOLs other than Linux's).

and          076  if self.lasttoken<=self.spaces and self.spaces:
append       046  self.lines.append(pos)
append       048  self.lines.append(len(self.raw))
argv         116  if sys.argv[1]:
argv         117  filein = open(sys.argv[1]).read()
__author__   010  __author__ = s_
break        045  if not pos: break
__call__     080  def __call__(self, toktype, toktext, (srow,scol), (erow,ecol), line):
class        015  class Stripper:
COMMENT      092  if toktype == tokenize.COMMENT:
comments     021  def format(self, out=sys.stdout, comments=0, spaces=1, untabify=1):
comments     033  self.comments = comments
comments     090  if not self.comments:
comments     118  Stripper(filein).format(out=sys.stdout, comments=0, untabify=1)
__credits__  008  __credits__ = s_
__date__     011  __date__ = s_
DEDENT       105  if toktype in [token.INDENT, token.DEDENT]:
def          018  def __init__(self, raw):
def          021  def format(self, out=sys.stdout, comments=0, spaces=1, untabify=1):
def          080  def __call__(self, toktype, toktext, (srow,scol), (erow,ecol), line):
def          114  def Main():
ecol         080  def __call__(self, toktype, toktext, (srow,scol), (erow,ecol), line):
erow         080  def __call__(self, toktype, toktext, (srow,scol), (erow,ecol), line):
ex           059  except tokenize.TokenError, ex:
except       059  except tokenize.TokenError, ex:
expandtabs   036  self.raw = self.raw.expandtabs()
filein       117  filein = open(sys.argv[1]).read()
filein       118  Stripper(filein).format(out=sys.stdout, comments=0, untabify=1)
find         044  pos = self.raw.find(self.lineend, pos) + 1
format       021  def format(self, out=sys.stdout, comments=0, spaces=1, untabify=1):
format       118  Stripper(filein).format(out=sys.stdout, comments=0, untabify=1)
import       005  import keyword, os, sys, traceback
import       006  import StringIO
import       007  import token, tokenize
import       115  import sys
INDENT       105  if toktype in [token.INDENT, token.DEDENT]:
__init__     018  def __init__(self, raw):
isspace      071  if not line.isspace():
keyword      005  import keyword, os, sys, traceback
lasttoken    030  self.lasttoken = 1
lasttoken    072  self.lasttoken=0
lasttoken    075  self.lasttoken+=1
lasttoken    076  if self.lasttoken<=self.spaces and self.spaces:
....

To obtain this output, you must remove comments and empty lines and move the
strings into a db file, leaving s_ as a placeholder for normal strings and m_
for triple-quoted strings. See an example:

m_ """python comment and whitespace stripper :)"""  #016
m_ ''' strip comments, strip extra whitespace, convert EOL's from Python code.'''  #023
m_ ''' Token handler.'''  #082

s_ 'just another tool that I needed'  |008 __credits__ = 'just another tool that I needed'
s_ '.7'                               |009 __version__ = '.7'
s_ 'M.E.Farmer'                       |010 __author__ = 'M.E.Farmer'
s_ 'Jan 15 2005, Oct 24 2004'         |011 __date__ = 'Jan 15 2005, Oct 24 2004'
s_ ' '                                |037 self.raw = self.raw.rstrip()+' '
s_ '\n'                               |040 self.lineend = '\n'
s_ '__main__'                         |122 if __name__ == '__main__':

I think that this tool is very useful.

Bye
Jul 19 '05 #7
Glad you are making progress ;)
I give you a brief example of the xref output (taken from your code,
even though the line numbers don't match, because I modified your code,
not being interested in EOLs other than Linux's).


What happens when you try to analyze a script from a different OS? It
usually looks like a skewed mess; that is why I have added EOL
conversion, so it is painless for you to convert to your EOL of choice.
The code I posted consists of a class and a Main function.
The class has three methods.
__init__ is called by Python when you create an instance of the class
Stripper. All __init__ does here is just set a class variable, self.raw.
format is called explicitly, with a few arguments, to start the
tokenizer.
__call__ is special; it is not easy to grasp how this even works... at
first.
In Python, when you treat an instance like a function, Python invokes
the __call__ method of that instance, if present and if it is callable().
example:

try:
    tokenize.tokenize(text.readline, self)
except tokenize.TokenError, ex:
    traceback.print_exc()

The snippet above is from the Stripper class.
Notice that tokenize.tokenize is being fed a reference to self (if
this code is running, self is an instance of Stripper).
tokenize.tokenize is really a hidden loop.
Each token generated is sent to self as five parts: toktype, toktext,
(startrow, startcol), (endrow, endcol), and line. Self is callable and
has a __call__ method, so tokenize really sends the five-part
info to __call__ for every token.
If this was obvious then ignore it ;)
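A stripped-down illustration of that mechanism, with made-up names and
assuming Python 2.x:

import StringIO, tokenize

class TokenPrinter:
    def __call__(self, toktype, toktext, (srow, scol), (erow, ecol), line):
        # tokenize.tokenize() calls this once for every token it finds
        print tokenize.tok_name.get(toktype, toktype), repr(toktext)

src = StringIO.StringIO("x = 1  # hello\n")
tokenize.tokenize(src.readline, TokenPrinter())   # the instance is the handler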

M.E.Farmer

Jul 19 '05 #8

Great tool, indeed! But doc strings stay in the source text.

If you do need to remove doc strings as well, add the following to
the __call__ method:

        # kill doc strings
        if not self.docstrings:
            if toktype == tokenize.STRING and len(toktext) >= 6:
                t = toktext.lstrip('rRuU')
                if ((t.startswith("'''") and t.endswith("'''")) or
                    (t.startswith('"""') and t.endswith('"""'))):
                    return

as shown in the original post below. Also, set self.docstrings in the
format method, similar to self.comments, as shown below in the lines
starting with '....'.
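As a standalone illustration of that test (a sketch; note that, as written,
it drops any triple-quoted string of six characters or more, not only real
doc strings, e.g. the data= strings in pippo1):

def looks_like_docstring(toktext):
    # same check as above, pulled out as a function for testing
    if len(toktext) < 6:
        return False
    t = toktext.lstrip('rRuU')
    return ((t.startswith("'''") and t.endswith("'''")) or
            (t.startswith('"""') and t.endswith('"""')))

print looks_like_docstring('"""a doc string"""')   # True
print looks_like_docstring("'just a short one'")   # False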
/Jean Brouwers

M.E.Farmer wrote:
qwweeeit wrote: ...

Tokenizer sends multiline strings and comments as a single token.

########################################################################
# python comment and whitespace stripper :)
########################################################################
import keyword, os, sys, traceback
import StringIO
import token, tokenize
__credits__ = 'just another tool that I needed'
__version__ = '.7'
__author__ = 'M.E.Farmer'
__date__ = 'Jan 15 2005, Oct 24 2004'

########################################################################
class Stripper:
    """python comment and whitespace stripper :)
    """
    def __init__(self, raw):
        self.raw = raw

....    def format(self, out=sys.stdout, comments=0, docstrings=0,
....               spaces=1, untabify=1, eol='unix'):
        ''' strip comments, strip extra whitespace,
            convert EOL's from Python code.
        '''
        # Store line offsets in self.lines
        self.lines = [0, 0]
        pos = 0
        # Strips the first blank line if 1
        self.lasttoken = 1
        self.temp = StringIO.StringIO()
        self.spaces = spaces
        self.comments = comments
....    self.docstrings = docstrings
        if untabify:
            self.raw = self.raw.expandtabs()
        self.raw = self.raw.rstrip()+' '
        self.out = out

        self.raw = self.raw.replace('\r\n', '\n')
        self.raw = self.raw.replace('\r', '\n')
        self.lineend = '\n'

        # Gather lines
        while 1:
            pos = self.raw.find(self.lineend, pos) + 1
            if not pos: break
            self.lines.append(pos)

        self.lines.append(len(self.raw))
        # Wrap text in a filelike object
        self.pos = 0

        text = StringIO.StringIO(self.raw)

        # Parse the source.
        ## Tokenize calls the __call__
        ## function for each token till done.
        try:
            tokenize.tokenize(text.readline, self)
        except tokenize.TokenError, ex:
            traceback.print_exc()

        # Ok now we write it to a file
        # but we also need to clean the whitespace
        # between the lines and at the ends.
        self.temp.seek(0)

        # Mac CR
        if eol == 'mac':
            self.lineend = '\r'
        # Windows CR LF
        elif eol == 'win':
            self.lineend = '\r\n'
        # Unix LF
        else:
            self.lineend = '\n'

        for line in self.temp.readlines():
            if spaces == -1:
                self.out.write(line.rstrip()+self.lineend)
            else:
                if not line.isspace():
                    self.lasttoken=0
                    self.out.write(line.rstrip()+self.lineend)
                else:
                    self.lasttoken+=1
                    if self.lasttoken<=self.spaces and self.spaces:
                        self.out.write(self.lineend)

    def __call__(self, toktype, toktext,
                 (srow,scol), (erow,ecol), line):
        ''' Token handler.
        '''
        # calculate new positions
        oldpos = self.pos
        newpos = self.lines[srow] + scol
        self.pos = newpos + len(toktext)

        # kill the comments
        if not self.comments:
            # Kill the comments ?
            if toktype == tokenize.COMMENT:
                return

....    # kill doc strings
....    if not self.docstrings:
....        if toktype == tokenize.STRING and len(toktext) >= 6:
....            t = toktext.lstrip('rRuU')
....            if ((t.startswith("'''") and t.endswith("'''")) or
....                (t.startswith('"""') and t.endswith('"""'))):
....                return

        # handle newlines
        if toktype in [token.NEWLINE, tokenize.NL]:
            self.temp.write(self.lineend)
            return

        # send the original whitespace, if needed
        if newpos > oldpos:
            self.temp.write(self.raw[oldpos:newpos])

        # skip indenting tokens
        if toktype in [token.INDENT, token.DEDENT]:
            self.pos = newpos
            return

        # send text to the temp file
        self.temp.write(toktext)
        return

########################################################################
def Main():
    import sys
    if sys.argv[1]:
        filein = open(sys.argv[1]).read()
        Stripper(filein).format(out=sys.stdout, comments=1, untabify=1,
                                eol='win')

########################################################################
if __name__ == '__main__':
    Main()

M.E.Farmer


Jul 19 '05 #9
Hi,

Importing a text file from another OS is not a problem: I convert
it immediately using the powerful shell functions of Linux (and Unix).

Thank you for the explanation about classes, but I am rather dumb
and so far I have solved all my problems without them...
Speaking of problems..., I still have a parsing error for literal
strings when there is more than one literal string per source line.

Perhaps it's time to use classes...
Jul 19 '05 #10
