
postgresql modules and performance

Hello,

I started to write my PostgreSQL layer. I tried pyPgSQL and PyGreSQL. I
made a *very minimal* performance test and comparison with the same
thing in PHP. Table "movie" has 129 records and many fields.

I found PyGreSQL / DB-API / fetchall horribly slow (32 secs in my test).
PHP did it in 13 secs and returned the result as an associative array.
Maybe I did something wrong.

pyPgSQL / DB-API raised a FutureWarning and didn't work.
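
Roughly, the DB-API attempt looked like this (a sketch from memory, not
the exact code; the connect() keyword arguments follow the usual DB-API
convention):

from pyPgSQL import PgSQL

con = PgSQL.connect(database='test', host='localhost')
cur = con.cursor()
cur.execute('select * from movie')
rows = cur.fetchall()   # this path raised the FutureWarning in my test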

pyPgSQL / libpq worked well. I could build a PHP-like dictionary result
in 14 secs.

Here is the code. It's only a test...

Mage

------ test.py --------
import mpypg
import datetime

print 'This is a Python PostgreSQL module'

db = mpypg.mpypgdb('dbname=test host=localhost')

def test():
    res = db.query('select * from movie')
    #print res['fields']
    #print res['rows']

def test2():
    res = db.query('select * from movie')
    #print res['fields']
    # query() returns {'fields': ..., 'rows': ...}, so index into 'rows'
    print len(res['rows'])
    print res['rows'][1]

start = datetime.datetime.now()

for i in range(100):
    test()

end = datetime.datetime.now()

print end - start

test2()

------ mpypg.py --------
from pyPgSQL import libpq
import sys
import pgdb

mpypg_connect_error_message = 'Could not connect to the database'
mpypg_query_error_message = 'Could not run the query'

class mpypgdb:
    'my database class (pyPgSQL libpq interface)'
    def __init__(self, conninfo):
        try:
            self.database = libpq.PQconnectdb(conninfo)
        except:
            mpypg_error_msg(mpypg_connect_error_message)

    def query(self, query):
        try:
            res = self.database.query(query)
            fields = tuple([res.fname(i) for i in range(res.nfields)])
            rows = []

            # Earlier attempt: one list per column, keyed by field name.
            '''
            rows = {}
            for name in fields:
                rows[name] = []
            for i in range(res.ntuples):
                for j in range(len(fields)):
                    rows[fields[j]].append(res.getvalue(i, j))
            '''
            # Earlier attempt: one tuple per column.
            '''
            rows = {}
            for j in range(len(fields)):
                rows[fields[j]] = tuple([res.getvalue(i, j)
                                         for i in range(res.ntuples)])
            '''

            # Current version: one dict per row, PHP-style.
            for i in range(res.ntuples):
                rows.append(dict([(fields[j], res.getvalue(i, j))
                                  for j in range(len(fields))]))

            res.clear()
            return {'fields': fields, 'rows': rows}
        except:
            mpypg_error_msg(mpypg_query_error_message)

class mpgdb:
    'my database class (PyGreSQL DB-API interface)'
    def __init__(self, conninfo):
        try:
            # The connection is hardwired for this test; conninfo is unused.
            self.database = pgdb.connect(database='test')
        except:
            mpypg_error_msg(mpypg_connect_error_message)

    def query(self, query):
        try:
            cursor = self.database.cursor()
            cursor.execute(query)

            #result = cursor.fetchall()
            result = [cursor.fetchone() for i in range(cursor.rowcount)]

            return result
        except:
            mpypg_error_msg(mpypg_query_error_message)

def mpypg_error_msg(message):
    'Database error handler'
    sys.exit(message)
Jul 19 '05 #1
2 Replies


Mage wrote:
Hello,

I started to write my PostgreSQL layer. I tried pyPgSQL and PyGreSQL. I
made a *very minimal* performance test and comparison with the same
thing in PHP. Table "movie" has 129 records and many fields.

I found PyGreSQL / DB-API / fetchall horribly slow (32 secs in my test).
PHP did it in 13 secs and returned the result as an associative array.
Maybe I did something wrong.


Have you tried psycopg?
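
A minimal sketch of your benchmark with it, assuming psycopg's standard
DB-API interface and your dsn/table (untested here):

import psycopg

conn = psycopg.connect('dbname=test host=localhost')
cur = conn.cursor()
cur.execute('select * from movie')
rows = cur.fetchall()   # list of row tuples
print len(rows)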

Reinhold
Jul 19 '05 #2

Reinhold Birkenfeld wrote:

Have you tried psycopg?

Thank you. It did the fetchall() in 11 secs, and 13 secs with the
dictionary creation. It's the fastest so far.
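
The dictionary creation itself is plain DB-API: cursor.description
carries the column names, so they can be zipped onto each row. A sketch
of the approach (assuming psycopg's standard cursor; untested as
pasted):

import psycopg

conn = psycopg.connect('dbname=test host=localhost')
cur = conn.cursor()
cur.execute('select * from movie')
fields = [d[0] for d in cur.description]   # column names per DB-API 2.0
rows = [dict(zip(fields, row)) for row in cur.fetchall()]
print len(rows), rows[1]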

Mage

Jul 19 '05 #3
