I want to convert a string into a Python object..
For example, I have a string:
a="['1']"
I want to turn this into the list it represents..
How can I do it?
On Oct 18, 10:23 am, Abandoned <best...@gmail.com> wrote:
> I want to convert a string into a Python object.. For example, I have a
> string: a="['1']" ... How can I do it?
Use the builtin function "eval".
Abandoned wrote:
> I want to convert a string into a Python object..
> a="['1']"
> How can I do it?
The correct wording here would be expression. To evaluate expressions, there
is the function eval:
a = eval("['1']")
But beware: if the expression contains potentially harmful code, it
will be executed. So it is generally considered bad style to use eval.
An example that fails would be
a = eval("10**2000**2000")
or some such expression.
Another alternative is to use a parser such as simplejson to extract the
information. This of course only works if your expressions are valid JSON.
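A minimal sketch of that route, using the json module from the standard
library in place of the third-party simplejson (one caveat: JSON object keys
are always strings, so integer dict keys have to be converted back after
parsing):

```python
import json

# JSON parses the list example directly (JSON requires double quotes):
a = json.loads('["1"]')
print(a)   # ['1']

# For a dict with integer keys, JSON delivers the keys as strings,
# so convert them back after parsing:
text = '{"2": 3, "4": 5, "6": 19}'
d = {int(k): v for k, v in json.loads(text).items()}
print(d)   # {2: 3, 4: 5, 6: 19}
```

Unlike eval, this never executes arbitrary code from the string.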
Diez
Thank you all for your answers..
But "eval" is very slow on a very big dictionary {2:3,4:5,6:19,...}
(100,000 elements).
Is there any easy alternative?
Abandoned wrote:
> But "eval" is very slow on a very big dictionary (100,000 elements).
> Is there any easy alternative?
How big? How slow? For me, a 10000-element list takes 0.04 seconds to be
parsed. Which I find fast.
Diez
On Oct 18, 6:14 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> How big? How slow? For me, a 10000-element list takes 0.04 seconds to be
> parsed. Which I find fast.
173,000 dict elements, and it took 2.2 seconds; that is a very big time for
my project.
Abandoned <be*****@gmail.com> writes:
> 173,000 dict elements, and it took 2.2 seconds; that is a very big time
> for my project.
If you're generating the string from Python, use cPickle instead.
Much faster:
>>> import time
>>> d = dict((i, i+1) for i in xrange(170000))
>>> len(d)
170000
>>> s = repr(d)
>>> t0 = time.time(); d2 = eval(s); t1 = time.time()
>>> t1 - t0
1.5457899570465088
>>> import cPickle as pickle
>>> s = pickle.dumps(d, -1)
>>> len(s)
1437693
>>> t0 = time.time(); d2 = pickle.loads(s); t1 = time.time()
>>> t1 - t0
0.060307979583740234
>>> len(d2)
170000
That is 25x speedup. Note that cPickle's format is binary. Using the
textual format makes for more readable pickles, but reduces the
speedup to "only" 9.5x on my machine.
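For reference, here is a self-contained version of that benchmark (written
for modern Python, where the plain pickle module uses the C implementation
that Python 2 exposed as cPickle; absolute timings will of course differ
from those above):

```python
import pickle
import time

d = {i: i + 1 for i in range(170000)}

# eval of the repr - the approach being replaced
s = repr(d)
t0 = time.time()
d2 = eval(s)
eval_time = time.time() - t0

# binary pickle round-trip
p = pickle.dumps(d, -1)
t0 = time.time()
d3 = pickle.loads(p)
pickle_time = time.time() - t0

assert d2 == d3 == d
print("eval:   %.4fs" % eval_time)
print("pickle: %.4fs" % pickle_time)
```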
P.S.
Before someone says that using pickle is unsafe, remember that he is
considering *eval* as the alternative. :-)
Abandoned wrote:
> 173,000 dict elements, and it took 2.2 seconds; that is a very big time
> for my project.
Where does the data come from?
Diez
On Oct 18, 6:26 pm, Hrvoje Niksic <hnik...@xemacs.org> wrote:
> If you're generating the string from Python, use cPickle instead.
> Much faster: [benchmark snipped]
import cPickle as pickle
a="{2:3,4:6,2:7}"
s=pickle.dumps(a, -1)
g=pickle.loads(s);
print g
'{2:3,4:6,2:7}'
Thank you very much for your answer, but the result is a string??
On Oct 18, 6:35 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> Where does the data come from?
The data comes from a database..
I want a cache to speed up my system, so I save the dictionary to the
database, but eval is very slow at turning it back into a dict.
Note: the 2.2 seconds is the eval operation alone.
On Thu, 18 Oct 2007 08:41:30 -0700, Abandoned wrote:
> [pickles the string "{2:3,4:6,2:7}" and gets a string back]
> Thank you very much for your answer, but the result is a string??
In Python terms, yes; strings in Python can contain any byte value. If you
want to put this into a database you need a BLOB column, or you must encode
the value as base64 or something similarly more ASCII-safe.
Ciao,
Marc 'BlackJack' Rintsch
On Oct 18, 6:51 pm, Marc 'BlackJack' Rintsch <bj_...@gmx.net> wrote:
> If you want to put this into a database you need a BLOB column, or you
> must encode the value as base64 or something similarly more ASCII-safe.
'{2:3,4:6,2:7}' is already in the database; I select it and want to convert
it to a real dictionary..
Abandoned wrote:
> The data comes from a database.. I want a cache to speed up my system, so
> I save the dictionary to the database, but eval is very slow at turning it
> back into a dict.
> Note: the 2.2 seconds is the eval operation alone.
Does the dictionary change often?
You should store a pickle in the database then. Besides, making a
database query of that size (after all, we're talking a few megs here) will
take a while as well - so are you SURE the 2.2 seconds are a problem? Or is
it just that you think they are?
Diez
On Oct 18, 6:57 pm, "Diez B. Roggisch" <de...@nospam.web.de> wrote:
> Does the dictionary change often?
> And you should store a pickle in the database then. [...] so are you SURE
> the 2.2 seconds are a problem? Or is it just that you think they are?
I'm very confused :(
Let me try to explain the main problem...
I have a table like this:
id-1 | id-2 | value
  23 |   24 |    34
  56 |   68 |    66
  56 |   98 | 32455
  55 |   62 |   655
  56 |   28 |   123
..... (3 million rows)
When I select where id=56, about 100,000 rows are returned, but this takes 2
seconds (very big for my project).
I am trying a cache to speed up this select operation,
and created a cache table:
id-1 | all
  56 | {68:66, 98:32455, 62:655}
When I select where id=56 from that, I select 1 row and it takes 0.09
seconds, but I must convert the text to a dictionary..
Have you got any idea how I can do this convert operation?
Or:
Have you got any idea how I can cache this table?
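For what it's worth, the caching step itself can be sketched like this (the
rows are made-up stand-ins for the database result; the point is to build
the per-id dict once and store its binary pickle rather than its text form):

```python
import pickle

# hypothetical result of "SELECT id2, value FROM mytable WHERE id1 = 56"
rows = [(68, 66), (98, 32455), (62, 655)]

# build the dictionary once...
cache = dict(rows)                 # {68: 66, 98: 32455, 62: 655}

# ...and store the binary pickle, not repr(cache), in the cache table
blob = pickle.dumps(cache, -1)

# later, loading the cached row is a fast loads() instead of eval()
restored = pickle.loads(blob)
print(restored[98])                # 32455
```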
Abandoned <be*****@gmail.com> writes:
> [pickles the string "{2:3,4:6,2:7}" and gets a string back]
> Thank you very much for your answer, but the result is a string??
Because you gave it a string. If you give it a dict, you'll get a
dict:
>>> import cPickle as pickle
>>> a = {1:2, 3:4}
>>> s = pickle.dumps(a, -1)
>>> g = pickle.loads(s)
>>> g
{1: 2, 3: 4}
If your existing database already has data in the "{...}" format, then
eval it only the first time. Then you'll get the dict, which you can
cache through the use of dumps/loads.
On Oct 18, 7:02 pm, Hrvoje Niksic <hnik...@xemacs.org> wrote:
> Because you gave it a string. If you give it a dict, you'll get a
> dict. [...] If your existing database already has data in the "{...}"
> format, then eval it only the first time. Then you'll get the dict, which
> you can cache through the use of dumps/loads.
Sorry, I can't understand :(
Yes, my database already has data in the "{..}" format, and I select
it and want to use it as a dictionary..
In your example:
the first data is a string,
and the final data is still a string.
I want a command like eval (eval is no good because it is too slow for
my project).
Abandoned <be*****@gmail.com> writes:
> When I select where id=56, about 100,000 rows are returned, but this
> takes 2 seconds (very big for my project).
> I am trying a cache to speed up this select operation,
> and created a cache table:
> id-1 | all
>   56 | {68:66, 98:32455, 62:655}
If you use Python to create this cache table, then simply don't dump
it as a dictionary, but as a pickle:
id-1 | all
56 <some weird string produced by cPickle.dumps>
When you load it, convert the string to dict with cPickle.loads
instead of with eval.
On Oct 18, 9:09 am, Abandoned <best...@gmail.com> wrote:
> When I select where id=56 from that, I select 1 row and it takes 0.09
> seconds, but I must convert the text to a dictionary..
> Have you got any idea how I can do this convert operation?
I think several people have given you the correct answer, but for some
reason you aren't getting it. Instead of saving the string
representation of a dictionary to the database, pickle the dictionary
(not the string representation, but the actual dictionary) and save
the pickled object as a BLOB in the database. Using pickle to re-create
the dictionary should be much faster than evaluating a string
representation of it.
Matt
Abandoned <be*****@gmail.com> writes:
> Sorry, I can't understand :(
> Yes, my database already has data in the "{..}" format, and I select
> it and want to use it as a dictionary..
But, do you use Python to create that data? If so, simply convert it
to pickle binary format instead of "{...}". As explained in my other
post:
id-1 | all
56 {68:66, 98:32455, 62:655}
If you use Python to create this cache table, then simply don't dump
it as a dictionary, but as a pickle:
id-1 | all
56 <some weird string produced by cPickle.dumps>
When you load it, convert the string to dict with cPickle.loads
instead of with eval.
On Oct 18, 7:40 pm, Hrvoje Niksic <hnik...@xemacs.org> wrote:
> If you use Python to create this cache table, then simply don't dump
> it as a dictionary, but as a pickle. [...] When you load it, convert
> the string to dict with cPickle.loads instead of with eval.
Yes, I understand, and this is very, very good ;)
But I have a problem..
a = eval(a)
a = pickle.dumps(a, -1)
cursor.execute("INSERT INTO cache2 VALUES ('%s')" % (a))
conn.commit()
and it gives me this error:
psycopg2.ProgrammingError: invalid byte sequence for encoding "UTF8":
0x80
HINT: This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
On 10/18/07, Adam Atlas <ad**@atlas.st> wrote:
> Use the builtin function "eval".
What is the difference with os.system()?
--
Sebastián Bassi (セバスティアン). Diplomado en Ciencia y Tecnología.
Curso Biologia molecular para programadores: http://tinyurl.com/2vv8w6
GPG Fingerprint: 9470 0980 620D ABFC BE63 A4A4 A3DE C97D 8422 D43D
"Matimus" <mc******@gmail.com> wrote in message
news:11**********************@q5g2000prf.googlegroups.com...
I think several people have given you the correct answer, but for some
reason you aren't getting it. Instead of saving the string
representation of a dictionary to the database...
Mind you, if this were Jeopardy, "Store a binary pickle
of a denormalized table back in the database" would
be a tough one.
Abandoned a écrit :
> '{2:3,4:6,2:7}' is already in the database; I select it and want to
> convert it to a real dictionary..
MVHO is that whoever uses an RDBMS to store language-specific serialized
collections should be shot down without warning.
Abandoned a écrit :
(snip)
> Thank you very much for your answer, but the result is a string??
Of course it's a string. That's what you pickled. What did you expect? If
you want a dict back, then pickle a dict.
Richard Brodie a écrit :
> Mind you, if this were Jeopardy, "Store a binary pickle
> of a denormalized table back in the database" would
> be a tough one.
Indeed.
Abandoned a écrit :
(snip)
> When I select where id=56, about 100,000 rows are returned, but this
> takes 2 seconds (very big for my project).
Not too bad in the absolute.
> I am trying a cache to speed up this select operation,
> and created a cache table:
> id-1 | all
>   56 | {68:66, 98:32455, 62:655}
I really doubt this is the right way to go.
> When I select where id=56 from that, I select 1 row and it takes 0.09
> seconds, but I must convert the text to a dictionary..
> Have you got any idea how I can do this convert operation?
Others already answered that.
> Have you got any idea how I can cache this table?
Depends on your RDBMS. And as far as I'm concerned, I'd start by trying
to find out how to optimize this query within the RDBMS - good ones are
usually highly optimized software with provision for quite a lot of
performance tuning.
Abandoned <be*****@gmail.com> writes:
> Yes, I understand, and this is very, very good ;)
Good! :-)
psycopg2.ProgrammingError: invalid byte sequence for encoding "UTF8":
0x80
HINT: This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
Use a different column type for cache2's column, one more appropriate
for storing binary characters (perhaps BYTEA for Postgres). Don't
forget to also use a bind variable, something like:
cursor.execute("INSERT INTO cache2 VALUES (?)", a)
Using "INSERT ... ('%s')" % (a) won't work, since the huge binary
string in a can contain arbitrary characters, including the single
quote.
On Thu, 2007-10-18 at 19:53 +0200, Hrvoje Niksic wrote:
Don't
forget to also use a bind variable, something like:
cursor.execute("INSERT INTO cache2 VALUES (?)", a)
I second the advice, but that code won't work. The bind parameters must
be a sequence, and psycopg2 (unfortunately) uses %s for parameter
markers, instead of the SQL standard question mark. So the actual code
would be
cursor.execute("INSERT INTO cache2 VALUES (%s)", (a,) )
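As a sketch of the whole pattern, here is the equivalent with sqlite3 from
the standard library (a stand-in for PostgreSQL/psycopg2 here: sqlite3 uses
? placeholders and a BLOB column where Postgres would use %s and BYTEA, and
the table and column names are made up):

```python
import pickle
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache2 (id1 INTEGER, data BLOB)")

d = {68: 66, 98: 32455, 62: 655}
blob = pickle.dumps(d, -1)

# bind variable: the driver handles the arbitrary bytes,
# so no string formatting and no encoding errors
conn.execute("INSERT INTO cache2 VALUES (?, ?)", (56, blob))

row = conn.execute("SELECT data FROM cache2 WHERE id1 = 56").fetchone()
restored = pickle.loads(row[0])
print(restored)                    # {68: 66, 98: 32455, 62: 655}
```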
HTH,
--
Carsten Haese http://informixdb.sourceforge.net
On Oct 18, 1:38 pm, Bruno Desthuilliers
<bruno.42.desthuilli...@wtf.websiteburo.oops.com> wrote:
> Depends on your RDBMS. And as far as I'm concerned, I'd start by trying
> to find out how to optimize this query within the RDBMS [...]
Just the overhead of the query is a killer compared to a dictionary
lookup in Python, even if all you're doing is selecting an integer
from a 1-row, 1-column table.
Usually you can get around that by making a single query to return all
of your results (or a handful of queries), but occasionally it just
doesn't matter how fast the DB can get to the data--the simple act of
asking it is slow enough on its own.
On Thu, 18 Oct 2007 14:05:34 -0300, Sebastian Bassi wrote:
> On 10/18/07, Adam Atlas <ad**@atlas.st> wrote:
>> Use the builtin function "eval".
> What is the difference with os.system()?
Everything.
eval() evaluates Python expressions like "x.append(2+3)".
os.system() calls your operating system's shell with a command.
--
Steven.
On Oct 18, 8:53 pm, Hrvoje Niksic <hnik...@xemacs.org> wrote:
> Use a different column type for cache2's column, one more appropriate
> for storing binary characters (perhaps BYTEA for Postgres). Don't
> forget to also use a bind variable [...]
I tried:
cursor.execute("INSERT INTO cache2 VALUES (?)", a)
and
cursor.execute("INSERT INTO cache2 VALUES (%s)", (a,) )
but the result is the same:
psycopg2.ProgrammingError: invalid byte sequence for encoding "UTF8":
0x80
HINT: This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
Abandoned <be*****@gmail.com> writes:
> I tried:
> cursor.execute("INSERT INTO cache2 VALUES (?)", a)
Why are you ignoring the first sentence: "Use a different column type
for cache2's column, ..."? The use of bind variables in INSERT will
work only *after* you do the rest of the work.
Abandoned wrote:
> I'm very confused :(
> Let me try to explain the main problem...
That's always a good first step; try to remember that when you start
your next thread.
> When I select where id=56 from that, I select 1 row and it takes 0.09
> seconds, but I must convert the text to a dictionary..
Before you go on with your odd caching schemes -- is the database properly
indexed? Something like
CREATE UNIQUE INDEX mytable_id1_id2 ON mytable (id-1, id-2);
(actual syntax may differ) might speed up the lookup operation
enough that you can do without caching.
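To see the effect of such an index in a quick, self-contained way, here is a
sketch with sqlite3 (a hypothetical table with made-up data; actual
PostgreSQL syntax and planner behavior differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id1 INTEGER, id2 INTEGER, value INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(i % 100, i, i * 2) for i in range(10000)])

# without an index this query scans the whole table;
# with it, the lookup goes straight to the matching rows
conn.execute("CREATE INDEX mytable_id1_id2 ON mytable (id1, id2)")

rows = conn.execute("SELECT id2, value FROM mytable WHERE id1 = 56").fetchall()
print(len(rows))                   # 100
```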
Peter

sj*******@yahoo.com a écrit :
> Just the overhead of the query is a killer compared to a dictionary
> lookup in Python, even if all you're doing is selecting an integer
> from a 1-row, 1-column table.
Indeed. But then why use an RDBMS at all? Please understand that I'm not
saying that an RDBMS will beat a plain dict lookup, nor that an RDBMS will
solve the world's problems, but storing pickled Python dicts in an
RDBMS is certainly not the best thing to do. It will *still* have the db
connection overhead anyway, and it will be a nightmare to keep in sync
with the real state of the db. Which is why I suggest *first* looking
for RDBMS-side tuning and optimization - which may include third-party
cache systems, FWIW.
Peter Otten a écrit :
(snip)
> Before you go on with your odd caching schemes -- is the database
> properly indexed?
In my arms(tm)! At last, some sensible advice...
Hrvoje Niksic <hn*****@xemacs.org> writes:
If you're generating the string from Python, use cPickle instead.
Much faster:
[...]
>>> t0 = time.time(); d2 = eval(s); t1 = time.time(); t1-t0
1.5457899570465088
>>> t0 = time.time(); d2 = pickle.loads(s); t1 = time.time(); t1-t0
0.060307979583740234
It just occurred to me, for simple data structures like the ones we're
discussing here (dicts of ints), marshal should also be considered.
marshal is the module used for generating and loading .pyc files and,
while it doesn't support all the bells and whistles of pickle, it's
very fast:
>>> t0 = time.time(); d2 = marshal.loads(s); t1 = time.time(); t1-t0
0.029728889465332031
Benchmarks made with the timeit module confirm this difference.
Marshal has the added advantage of using much less space than the
(binary) pickle -- the example dictionary provided above pickled to a
string of 2667791 bytes, while marshal produced a string of 1700002
bytes.
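A self-contained sketch of the marshal route (written for modern Python;
note that marshal's format is version-specific, so it is only appropriate
when the same interpreter both writes and reads the data):

```python
import marshal
import pickle

d = {i: i + 1 for i in range(170000)}

pickled = pickle.dumps(d, -1)
marshalled = marshal.dumps(d)

# marshal round-trips simple built-in types like this dict of ints
assert marshal.loads(marshalled) == d

# relative output sizes vary with the Python version and pickle protocol
print(len(pickled), len(marshalled))
```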