On Mar 17, 9:27 am, Iain King <iaink...@gmail.com> wrote:
On Mar 17, 6:56 am, Dan Bishop <danb...@yahoo.com> wrote:
On Mar 17, 1:15 am, Girish <girish....@gmail.com> wrote:
I have a string a = "['xyz', 'abc']". I would like to convert it to a
list with elements 'xyz' and 'abc'. Is there any simple solution for
this??
Thanks for the help...
eval(a) will do the job, but you have to be very careful about using
that function. An alternative is
[s.strip('\'"') for s in a.strip('[]').split(', ')]
This will fall over if xyz or abc include any of the characters you're
stripping/splitting on (e.g. if xyz is actually "To be or not to be,
that is the question"). Unless you can guarantee they won't, you'll
need to write (or rather use) a parser that understands the syntax.
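For what it's worth, if you're on a recent enough Python (2.6+, if I remember right), the stdlib already ships such a parser for the literal case: ast.literal_eval only accepts Python literal syntax, so it gets you eval's convenience without the code-execution risk:

```python
import ast

a = "['xyz', 'abc']"
# literal_eval parses literals (strings, numbers, lists, dicts, ...)
# but refuses anything with side effects, unlike eval()
l = ast.literal_eval(a)
print(l)  # ['xyz', 'abc']
```

It also copes fine with commas and quotes inside the strings themselves, which the strip/split approach doesn't.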
Iain
Thinking about this some more; could the string module not use a
simple tokenizer method? I know that relentlessly adding features to
built-ins is a bad idea, so I'm not sure if this falls within
batteries-included, or is actually just adding bulk. On the one hand,
it's not difficult to write a simple state-based token parser
yourself, but on the other it is also quite easy to include a pile of
bugs when you do. By simple I mean something like:
def tokenize(string, delim, closing_delim=None, escape_char=None)
which would return a list (or a generator) of all the parts of the
string enclosed by delim (or which begin with delim and end with
closing_delim if closing_delim is set), ignoring any delimiters which
have been escaped by escape_char. Throw an exception if the string
is malformed? (odd number of delimiters, or opening/closing delims
don't match)
In the OP's case, he could get what he wants with a simple: l =
a.tokenize("'")
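To make the idea concrete, here's a rough sketch of what I mean, written as a plain function rather than a str method (the name and signature are just the ones proposed above, nothing standard):

```python
def tokenize(string, delim, closing_delim=None, escape_char=None):
    """Return the substrings of `string` enclosed by `delim` (or by
    delim/closing_delim pairs), skipping delimiters escaped with
    escape_char.  Raises ValueError on an unclosed delimiter."""
    if closing_delim is None:
        closing_delim = delim
    tokens = []
    current = None          # None = outside a token, list = inside one
    escaped = False
    for ch in string:
        if escaped:
            # previous char was the escape char: take this one literally
            current.append(ch)
            escaped = False
        elif (escape_char is not None and ch == escape_char
              and current is not None):
            escaped = True
        elif current is None:
            if ch == delim:
                current = []            # open a new token
        elif ch == closing_delim:
            tokens.append(''.join(current))
            current = None              # close the token
        else:
            current.append(ch)
    if current is not None or escaped:
        raise ValueError("malformed string: unclosed delimiter")
    return tokens

print(tokenize("['xyz', 'abc']", "'"))   # ['xyz', 'abc']
print(tokenize("[ab][cd]", "[", "]"))    # ['ab', 'cd']
```

It's only a dozen-odd lines of state machine, which is sort of my point: easy to write, but also easy to get the escape/unclosed cases subtly wrong, which is why having one blessed version might be worthwhile.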
The point of this ramble being not how to solve the
OP's question, but to wonder whether something like this would be a
good inclusion to the language in general. Or there's actually a
module which already does it that I couldn't find and I'm an idiot...
Iain