Parallel insert to postgresql with thread

Hi..
I'm using the threading module to speed things up, but I have some
problems.
This is my code sample:
=================
conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')
cursor = conn.cursor()

class paralel(Thread):
    def __init__(self, veriler, sayii):
        Thread.__init__(self)
    def run(self):
        save(a, b, c)

def save(a, b, c):
    cursor.execute("INSERT INTO keywords (keyword) VALUES ('%s')" % a)
    conn.commit()
    cursor.execute("SELECT CURRVAL('keywords_keyword_id_seq')")
    idd = cursor.fetchall()
    return idd[0][0]

def start(hiz):
    datas = [........]
    for a in datas:
        current = paralel(a, sayii)
        current.start()
==================
It gives me different errors when I try to insert in parallel. My
queries work fine in normal operation, but not in parallel. How can I
insert data into PostgreSQL at the same moment?
Errors:
no results to fetch
cursor already closed

Oct 25 '07 #1
Abandoned wrote:
> Hi..
> I'm using the threading module to speed things up, but I have some
> problems.
> [code sample snipped]
> It gives me different errors when I try to insert in parallel.
> How can I insert data into PostgreSQL at the same moment?
> Errors:
> no results to fetch
> cursor already closed
DB modules aren't necessarily thread-safe. Most of the time, a connection
(and of course its cursors) can't be shared between threads.

So open a connection for each thread.
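
A minimal sketch of that approach (untested; it reuses the table and
credentials from the original post, and the data list is made up):

=================
import threading
import psycopg2

def save(keyword):
    # Each thread opens its own connection, since a connection
    # (and its cursors) must not be shared between threads.
    conn = psycopg2.connect(user='postgres', password='postgres',
                            database='postgres')
    try:
        cursor = conn.cursor()
        # Parameterized query instead of string interpolation.
        cursor.execute("INSERT INTO keywords (keyword) VALUES (%s)",
                       (keyword,))
        conn.commit()
        # currval() is per-session, so each connection sees the id
        # of its own insert.
        cursor.execute("SELECT CURRVAL('keywords_keyword_id_seq')")
        return cursor.fetchone()[0]
    finally:
        conn.close()

datas = ['foo', 'bar', 'baz']  # placeholder data
threads = [threading.Thread(target=save, args=(k,)) for k in datas]
for t in threads:
    t.start()
for t in threads:
    t.join()
=================

(The return value is discarded here; push it onto a Queue if you need
the generated ids back.)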

Diez
Oct 25 '07 #2
Diez B. Roggisch wrote:
> Abandoned wrote:
>> Hi..
>> I use the threading module for the fast operation. But ....
>> [in each thread]
>> def save(a,b,c):
>>     cursor.execute("INSERT INTO ...
>>     conn.commit()
>>     cursor.execute(...)
>> How can i insert data to postgresql the same moment ?...
>
> DB modules aren't necessarily thread-safe. Most of the time, a connection
> (and of course its cursors) can't be shared between threads.
>
> So open a connection for each thread.
Note that your DB server will have to "serialize" your inserts, so
unless there is some other reason for the threads, a single thread
through a single connection to the DB is the way to go. Of course
it may be clever enough to behave "as if" they are serialized, but
most of your work parallelizing at your end simply creates new
work at the DB server end.
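
For comparison, here's the single-thread, single-connection version as
a rough sketch (made-up data; one batch, one commit):

=================
import psycopg2

conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')
cursor = conn.cursor()
keywords = [('foo',), ('bar',), ('baz',)]  # placeholder data
# One transaction, one commit: the server processes a single batch
# instead of many tiny independent transactions.
cursor.executemany("INSERT INTO keywords (keyword) VALUES (%s)",
                   keywords)
conn.commit()
conn.close()
=================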

-Scott David Daniels
Sc***********@Acm.Org
Oct 25 '07 #3

On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
> Diez B. Roggisch wrote:
>> Abandoned wrote:
>>> Hi..
>>> I use the threading module for the fast operation. But ....
>>> [in each thread]
>>> def save(a,b,c):
>>>     cursor.execute("INSERT INTO ...
>>>     conn.commit()
>>>     cursor.execute(...)
>>> How can i insert data to postgresql the same moment ?...
>>
>> DB modules aren't necessarily thread-safe. Most of the time, a
>> connection (and of course its cursors) can't be shared between threads.
>>
>> So open a connection for each thread.
>
> Note that your DB server will have to "serialize" your inserts, so
> unless there is some other reason for the threads, a single thread
> through a single connection to the DB is the way to go. Of course
> it may be clever enough to behave "as if" they are serialized, but
> most of your work parallelizing at your end simply creates new
> work at the DB server end.
Fortunately, in his case, that's not necessarily true. If they do
all their work with the same connection then, yes, but there are
other problems with that, as mentioned, with regard to thread safety
and psycopg2. If he goes the recommended route with a separate
connection for each thread, then Postgres will not serialize multiple
inserts coming from separate connections unless there is something
like an ALTER TABLE or REINDEX concurrently happening on the table.
The whole serialized-inserts thing is strictly something popularized
by MySQL and is by no means necessary or standard (as with a lot of
MySQL).
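
To illustrate the point, a rough sketch (not something I've run)
against the table from the original post:

=================
import psycopg2

params = dict(user='postgres', password='postgres',
              database='postgres')
# Two independent connections, two independent transactions.
c1 = psycopg2.connect(**params)
c2 = psycopg2.connect(**params)
c1.cursor().execute("INSERT INTO keywords (keyword) VALUES (%s)",
                    ('foo',))
# The second insert does not block waiting on the first.
c2.cursor().execute("INSERT INTO keywords (keyword) VALUES (%s)",
                    ('bar',))
c1.commit()
c2.commit()
c1.close()
c2.close()
=================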

Erik Jones

Software Developer | Emma®
er**@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
Oct 25 '07 #4
Erik Jones wrote:
> On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
>> Diez B. Roggisch wrote:
>>> Abandoned wrote:
>>>> Hi..
>>>> I use the threading module for the fast operation. But ....
>>>> [in each thread]
>>>> def save(a,b,c):
>>>>     cursor.execute("INSERT INTO ...
>>>>     conn.commit()
>>>>     cursor.execute(...)
>>>> How can i insert data to postgresql the same moment ?...
>>>
>>> DB modules aren't necessarily thread-safe. Most of the time, a
>>> connection (and ... cursors) can't be shared between threads.
>>> So open a connection for each thread.
>>
>> Note that your DB server will have to "serialize" your inserts, so
>> ... a single thread through a single connection to the DB is the way
>> to go. Of course it (the DB server) may be clever enough to behave
>> "as if" they are serialized, but most of your work parallelizing at
>> your end simply creates new work at the DB server end.
>
> Fortunately, in his case, that's not necessarily true.... If he
> goes the recommended route with a separate connection for each thread,
> then Postgres will not serialize multiple inserts coming from separate
> connections unless there is something like an ALTER TABLE or REINDEX
> concurrently happening on the table.
> The whole serialized-inserts thing is strictly something popularized
> by MySQL and is by no means necessary or standard (as with a lot of
> MySQL).
But he commits after every insert, which _does_ force serialization (if
only to provide safe transaction boundaries). I understand you can get
clever at how to do it, _but_ preserving ACID properties is exactly what
I mean by "serialize," and while I like to bash MySQL as well as the
next person, I most certainly am not under the evil sway of the vile
MySQL cabal.

The server will have to be able to abort each transaction
_independently_ of the others, and so must serialize any index
updates that share a page by, for example, landing in the same node
of a B-Tree.

-Scott David Daniels
Sc***********@Acm.Org
Oct 26 '07 #5
If you're not Scott Daniels, beware that this conversation has gone
horribly off topic and, unless you have an interest in PostgreSQL, you
may not want to bother reading on...

On Oct 25, 2007, at 9:46 PM, Scott David Daniels wrote:
> Erik Jones wrote:
>> On Oct 25, 2007, at 7:28 AM, Scott David Daniels wrote:
>>> Diez B. Roggisch wrote:
>>>> Abandoned wrote:
>>>>> Hi..
>>>>> I use the threading module for the fast operation. But ....
>>>>> [in each thread]
>>>>> def save(a,b,c):
>>>>>     cursor.execute("INSERT INTO ...
>>>>>     conn.commit()
>>>>>     cursor.execute(...)
>>>>> How can i insert data to postgresql the same moment ?...
>>>>
>>>> DB modules aren't necessarily thread-safe. Most of the time, a
>>>> connection (and ... cursors) can't be shared between threads.
>>>> So open a connection for each thread.
>>>
>>> Note that your DB server will have to "serialize" your inserts, so
>>> ... a single thread through a single connection to the DB is the way
>>> to go. Of course it (the DB server) may be clever enough to behave
>>> "as if" they are serialized, but most of your work parallelizing at
>>> your end simply creates new work at the DB server end.
>>
>> Fortunately, in his case, that's not necessarily true.... If he
>> goes the recommended route with a separate connection for each thread,
>> then Postgres will not serialize multiple inserts coming from separate
>> connections unless there is something like an ALTER TABLE or REINDEX
>> concurrently happening on the table.
>> The whole serialized-inserts thing is strictly something popularized
>> by MySQL and is by no means necessary or standard (as with a lot of
>> MySQL).
>
> But he commits after every insert, which _does_ force serialization (if
> only to provide safe transaction boundaries). I understand you can get
> clever at how to do it, _but_ preserving ACID properties is exactly
> what I mean by "serialize,"
First, it's a bad idea to work with your own definition of a very
domain-specific and standardized term, especially when Postgres's
Multi-Version Concurrency Control mechanisms are designed specifically
to preserve ACID compliance without forcing serialized transactions on
the user.

Second, unless he specifically sets his transaction isolation level to
serializable, he will be working in read-committed mode. What this
specifically means is that two (or more) transactions writing to the
same table will not block each other. Let's say the user has
two concurrent inserts to run on the same table that, for whatever
reason, take a while to run (for example, they insert the results of
some horribly complex or inefficient select). If either is run in
serializable mode, then whichever one starts a fraction of a second
sooner will run to completion before the second is even allowed to
begin. In (the default) read-committed mode, they will both begin
executing as soon as they are called and will write their data
regardless of conflicts. Commit time (which may be some time later
when transactions with multiple statements are used) is when conflicts
are resolved. So, if between the two example transactions there does
turn out to be a conflict between their results, whichever commits
second will roll back and, since the data written by the second
transaction will not be marked as committed, it will never be visible
to any other transactions and the space will remain available for
future transactions.

Here's the relevant portion of the Postgres docs on all of this:
http://www.postgresql.org/docs/8.2/i...tive/mvcc.html
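
In psycopg2 terms, switching between the two modes looks like this (a
sketch; read-committed is already the default, so you only need this
if you actually want serializable transactions):

=================
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect(user='postgres', password='postgres',
                        database='postgres')
# The default is READ COMMITTED; SERIALIZABLE forces conflicting
# transactions to fail and retry instead of interleaving.
conn.set_isolation_level(
    psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE)
=================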
> and while I like to bash MySQL as well as the
> next person, I most certainly am not under the evil sway of the vile
> MySQL cabal.

Good to hear ;)
> The server will have to be able to abort each transaction
> _independently_ of the others, and so must serialize any index
> updates that share a page by, for example, landing in the same node
> of a B-Tree.
There is nothing inherent in B-Trees that prevents identical datums
from being written to them. If there were, the only thing they'd be
good for would be unique indexes. And even if you do use a unique
index, as noted above, constraints and conflicts are only enforced at
commit time.

Erik Jones

Software Developer | Emma®
er**@myemma.com
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
Oct 26 '07 #6
On Thu, 25 Oct 2007 13:27:40 +0200, Diez B. Roggisch wrote:
> DB modules aren't necessarily thread-safe. Most of the time, a
> connection (and of course its cursors) can't be shared between threads.
>
> So open a connection for each thread.
>
> Diez
DB modules following DBAPI2 must define the following attribute:

"""
threadsafety

Integer constant stating the level of thread safety the
interface supports. Possible values are:

0 Threads may not share the module.
1 Threads may share the module, but not connections.
2 Threads may share the module and connections.
3 Threads may share the module, connections and
cursors.
"""

http://www.python.org/dev/peps/pep-0249/
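
So you can check what your driver claims at runtime. psycopg2, for
instance, reports level 2 (module and connections may be shared,
cursors may not):

=================
import psycopg2
print(psycopg2.threadsafety)  # 2: share the module and connections,
                              # but not cursors
=================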

--
Laurent POINTAL - la*************@laposte.net
Oct 26 '07 #7
