Parallel insert to postgresql with thread

Hi..
I use the threading module for the fast operation, but I have some
problems. This is my code sample:
=================
conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')
cursor = conn.cursor()

class paralel(Thread):
    def __init__(self, veriler, sayii):
        Thread.__init__(self)
    def run(self):
        save(a, b, c)

def save(a, b, c):
    cursor.execute("INSERT INTO keywords (keyword) VALUES ('%s')" % a)
    conn.commit()
    cursor.execute("SELECT CURRVAL('keywords_keyword_id_seq')")
    idd = cursor.fetchall()
    return idd[0][0]

def start(hiz):
    datas = [........]
    for a in datas:
        current = paralel(a, sayii)
        current.start()
==================
It gives me various errors when I try to insert in parallel. My
queries work in normal (sequential) operation, but in parallel they
don't. How can I insert data into PostgreSQL at the same moment?
errors:
no results to fetch
cursor already closed
Oct 25 '07 #1
Abandoned wrote:
> Hi..
> I use the threading module for the fast operation, but I have some
> problems. This is my code sample:
> [code sample snipped]
> How can I insert data into PostgreSQL at the same moment?
> errors:
> no results to fetch
> cursor already closed
DB modules aren't necessarily thread-safe. Most of the time, a
connection (and of course its cursors) can't be shared between threads.

So open a connection for each thread.
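For illustration, a minimal sketch of that per-thread-connection
approach, using the keywords table from the original post (connection
parameters and data are placeholders, not the poster's real values):
=================
import threading
import psycopg2

def save(keyword):
    # Each thread opens, uses, and closes its own connection + cursor.
    conn = psycopg2.connect(user='postgres', password='postgres',
                            database='postgres')
    try:
        cursor = conn.cursor()
        # Parameter binding instead of % interpolation avoids quoting
        # bugs and SQL injection.
        cursor.execute("INSERT INTO keywords (keyword) VALUES (%s)",
                       (keyword,))
        conn.commit()
        # CURRVAL is per-session, so this sees this thread's insert.
        cursor.execute("SELECT CURRVAL('keywords_keyword_id_seq')")
        return cursor.fetchone()[0]
    finally:
        conn.close()

threads = [threading.Thread(target=save, args=(kw,))
           for kw in ['spam', 'eggs']]
for t in threads:
    t.start()
for t in threads:
    t.join()
=================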

Diez
Oct 25 '07 #2
Diez B. Roggisch wrote:
> Abandoned wrote:
>> Hi..
>> I use the threading module for the fast operation. But ....
>> [in each thread]
>> def save(a,b,c):
>>     cursor.execute("INSERT INTO ...
>>     conn.commit()
>>     cursor.execute(...)
>> How can I insert data into PostgreSQL at the same moment?...
>
> DB modules aren't necessarily thread-safe. Most of the time, a
> connection (and of course its cursors) can't be shared between threads.
>
> So open a connection for each thread.
Note that your DB server will have to "serialize" your inserts, so
unless there is some other reason for the threads, a single thread
through a single connection to the DB is the way to go. Of course
it may be clever enough to behave "as if" they are serialized, but
most of your work parallelizing at your end simply creates new work
at the DB server end.
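For comparison, a minimal sketch of that single-threaded alternative,
again using the table from the original post (connection parameters and
data are placeholders):
=================
import psycopg2

conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')
cursor = conn.cursor()
for keyword in ['spam', 'eggs']:  # stand-in for the real data list
    # One connection, one cursor, sequential inserts.
    cursor.execute("INSERT INTO keywords (keyword) VALUES (%s)",
                   (keyword,))
conn.commit()  # a single commit for the whole batch
conn.close()
=================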

-Scott David Daniels
Sc***********@Acm.Org
Oct 25 '07 #3

On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
> Diez B. Roggisch wrote:
>> Abandoned wrote:
>>> Hi..
>>> I use the threading module for the fast operation. But ....
>>> [in each thread]
>>> def save(a,b,c):
>>>     cursor.execute("INSERT INTO ...
>>>     conn.commit()
>>>     cursor.execute(...)
>>> How can I insert data into PostgreSQL at the same moment?...
>>
>> DB modules aren't necessarily thread-safe. Most of the time, a
>> connection (and of course its cursors) can't be shared between
>> threads.
>>
>> So open a connection for each thread.
>
> Note that your DB server will have to "serialize" your inserts, so
> unless there is some other reason for the threads, a single thread
> through a single connection to the DB is the way to go. Of course
> it may be clever enough to behave "as if" they are serialized, but
> most of your work parallelizing at your end simply creates new work
> at the DB server end.
Fortunately, in his case, that's not necessarily true. If they do
all their work with the same connection then, yes, but there are
other problems with that, as mentioned, with respect to thread safety
and psycopg2. If he goes the recommended route with a separate
connection for each thread, then Postgres will not serialize multiple
inserts coming from separate connections unless there is something
like an ALTER TABLE or REINDEX concurrently happening on the table.
The whole serialized-inserts thing is strictly something popularized
by MySQL and is by no means necessary or standard (as with a lot of
MySQL).

Erik Jones

Software Developer | Emma®
er**@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
Oct 25 '07 #4
Erik Jones wrote:
> On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
>> Diez B. Roggisch wrote:
>>> Abandoned wrote:
>>>> Hi..
>>>> I use the threading module for the fast operation. But ....
>>>> [in each thread]
>>>> def save(a,b,c):
>>>>     cursor.execute("INSERT INTO ...
>>>>     conn.commit()
>>>>     cursor.execute(...)
>>>> How can I insert data into PostgreSQL at the same moment?...
>>>
>>> DB modules aren't necessarily thread-safe. Most of the time, a
>>> connection (and ... cursor) can't be shared between threads.
>>> So open a connection for each thread.
>>
>> Note that your DB server will have to "serialize" your inserts, so
>> ... a single thread through a single connection to the DB is the way
>> to go. Of course it (the DB server) may be clever enough to behave
>> "as if" they are serialized, but most of your work parallelizing at
>> your end simply creates new work at the DB server end.
>
> Fortunately, in his case, that's not necessarily true.... If he
> goes the recommended route with a separate connection for each thread,
> then Postgres will not serialize multiple inserts coming from separate
> connections unless there is something like an ALTER TABLE or REINDEX
> concurrently happening on the table. The whole serialized-inserts
> thing is strictly something popularized by MySQL and is by no means
> necessary or standard (as with a lot of MySQL).
But he commits after every insert, which _does_ force serialization (if
only to provide safe transaction boundaries). I understand you can get
clever at how to do it, _but_ preserving ACID properties is exactly what
I mean by "serialize," and while I like to bash MySQL as well as the
next person, I most certainly am not under the evil sway of the vile
MySQL cabal.

The server will have to be able to abort each transaction
_independently_ of the others, and so must serialize any index
updates that share a page by, for example, landing in the same node
of a B-Tree.

-Scott David Daniels
Sc***********@Acm.Org
Oct 26 '07 #5
If you're not Scott Daniels, beware that this conversation has gone
horribly off topic and, unless you have an interest in PostgreSQL, you
may not want to bother reading on...

On Oct 25, 2007, at 9:46 PM, Scott David Daniels wrote:
> Erik Jones wrote:
>> On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
>>> Diez B. Roggisch wrote:
>>>> Abandoned wrote:
>>>>> Hi..
>>>>> I use the threading module for the fast operation. But ....
>>>>> [in each thread]
>>>>> def save(a,b,c):
>>>>>     cursor.execute("INSERT INTO ...
>>>>>     conn.commit()
>>>>>     cursor.execute(...)
>>>>> How can I insert data into PostgreSQL at the same moment?...
>>>>
>>>> DB modules aren't necessarily thread-safe. Most of the time, a
>>>> connection (and ... cursor) can't be shared between threads.
>>>> So open a connection for each thread.
>>>
>>> Note that your DB server will have to "serialize" your inserts, so
>>> ... a single thread through a single connection to the DB is the
>>> way to go. Of course it (the DB server) may be clever enough to
>>> behave "as if" they are serialized, but most of your work
>>> parallelizing at your end simply creates new work at the DB server
>>> end.
>>
>> Fortunately, in his case, that's not necessarily true.... If he
>> goes the recommended route with a separate connection for each
>> thread, then Postgres will not serialize multiple inserts coming
>> from separate connections unless there is something like an ALTER
>> TABLE or REINDEX concurrently happening on the table. The whole
>> serialized-inserts thing is strictly something popularized by MySQL
>> and is by no means necessary or standard (as with a lot of MySQL).
>
> But he commits after every insert, which _does_ force serialization
> (if only to provide safe transaction boundaries). I understand you
> can get clever at how to do it, _but_ preserving ACID properties is
> exactly what I mean by "serialize,"
First, it's a bad idea to work with your own definition of a very
domain-specific and standardized term, especially when Postgres's
Multi-Version Concurrency Control (MVCC) mechanisms are designed
specifically to preserve ACID compliance without forcing serialized
transactions on the user.

Second, unless he specifically sets his transaction isolation level to
serializable, he will be working in read-committed mode. What this
specifically means is that two (or more) transactions writing to the
same table will not block each other. Let's say the user has two
concurrent inserts to run on the same table that, for whatever reason,
take a while to run (for example, they insert the results of some
horribly complex or inefficient select). If either is run in
serializable mode, then whichever one starts a fraction of a second
sooner will run to completion before the second is even allowed to
begin. In (the default) read-committed mode they will both begin
executing as soon as they are called and will write their data
regardless of conflicts. Commit time (which may be some time later
when transactions with multiple statements are used) is when conflicts
are resolved. So, if between the two example transactions there does
turn out to be a conflict between their results, whichever commits
second will roll back; since the data written by the second
transaction will not be marked as committed, it will never be visible
to any other transactions, and the space will remain available for
future transactions.

Here's the relevant portion of the Postgres docs on all of this:
http://www.postgresql.org/docs/8.2/i...tive/mvcc.html
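As a concrete illustration, here is a minimal sketch of how the
isolation level can be selected per connection with psycopg2 (the
connection parameters are placeholders; read committed is already the
default, so the first call is shown only for contrast):
=================
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')

# Default mode: concurrent writers do not block each other, and
# conflicts are resolved at commit time, as described above.
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED)

# Opt-in stricter mode for when a transaction really must run against
# a single consistent snapshot.
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
=================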
> and while I like to bash MySQL as well as the next person, I most
> certainly am not under the evil sway of the vile MySQL cabal.

Good to hear ;)

> The server will have to be able to abort each transaction
> _independently_ of the others, and so must serialize any index
> updates that share a page by, for example, landing in the same node
> of a B-Tree.

There is nothing inherent in B-trees that prevents identical data from
being written to them. If there were, the only thing they'd be good
for would be unique indexes. Even if you do use a unique index, as
noted above, constraints and conflicts are only enforced at commit
time.

Erik Jones

Software Developer | Emma®
er**@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
Oct 26 '07 #6
On Thu, 25 Oct 2007 13:27:40 +0200, Diez B. Roggisch wrote:
> DB modules aren't necessarily thread-safe. Most of the time, a
> connection (and of course its cursors) can't be shared between
> threads.
>
> So open a connection for each thread.
>
> Diez
DB modules following DBAPI2 must define the following attribute:

"""
threadsafety

    Integer constant stating the level of thread safety the
    interface supports. Possible values are:

        0  Threads may not share the module.
        1  Threads may share the module, but not connections.
        2  Threads may share the module and connections.
        3  Threads may share the module, connections and cursors.
"""

http://www.python.org/dev/peps/pep-0249/
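For example, the advertised level can be checked at runtime; psycopg2
reports level 2 (module and connections may be shared, cursors may
not), which matches the advice given earlier in this thread:
=================
import psycopg2

# PEP 249 threadsafety attribute: 2 means threads may share the
# module and connections, but not cursors.
print(psycopg2.threadsafety)
=================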

--
Laurent POINTAL - la*************@laposte.net
Oct 26 '07 #7
