According to the Python documentation:
18.5 tokenize -- Tokenizer for Python source
....
The primary entry point is a generator:
generate_tokens(readline)
....
An older entry point is retained for backward compatibility:
tokenize(readline[, tokeneater])
====
Does this mean that generate_tokens should be preferred? If so,
what are its advantages over the older tokenize entry point?
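
For concreteness, here is how I understand the two entry points would
be called -- a minimal sketch under Python 2.x, where both exist; the
source string and the token-printing bodies are placeholders of my
own, not anything taken from the docs:

import tokenize
import token
from StringIO import StringIO

source = "x = 1 + 2\n"  # placeholder input

# Generator style: generate_tokens yields 5-tuples of
# (type, string, (srow, scol), (erow, ecol), line),
# consumed with an ordinary for loop.
for tok_type, tok_string, start, end, line in \
        tokenize.generate_tokens(StringIO(source).readline):
    print token.tok_name[tok_type], repr(tok_string)

# Callback style: the older tokenize() pushes each 5-tuple
# into a tokeneater function instead of yielding it.
def tokeneater(tok_type, tok_string, start, end, line):
    print token.tok_name[tok_type], repr(tok_string)

tokenize.tokenize(StringIO(source).readline, tokeneater)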
André Roberge