Bytes | Software Development & Data Engineering Community
A Question About Insertions -- Performance

I am doing some large dataset performance tests with 7.3.4b2 today, and I noticed an interesting phenomenon. My shared memory buffers are set at 128MB. Peak postmaster usage appears to be around 90MB.

My test app performs inserts across 4 related tables, each set of 4 inserts representing a single theoretical "device" object. I report how many "devices" I have inserted per second, for example:

[...]
41509 devices inserted, 36/sec
[1 second later]
41544 devices inserted, 35/sec
[...]

(to be clear, 41509 devices inserted equals 166036 actual, related rows in the db)

Performance follows an odd "peak and valley" pattern. It will start out with a high insertion rate (commits are performed after each "device set"), then after a few thousand device sets, performance will drop to 1 device/second for about 5 seconds. Then it will slowly ramp up over the next 10 seconds to /just below/ the previous high water mark. A few thousand inserts later, it will drop to 1 device/second again for 5 seconds, then slowly ramp up to just below the last high water mark.
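For reference, the insert pattern described above looks roughly like the sketch below. The table and column names are hypothetical (the post doesn't give the schema), and sqlite3 stands in for PostgreSQL so the sketch is self-contained; the point is the shape of the workload: four related inserts per "device", with a commit after every set.

```python
import sqlite3

# Sketch of the workload: each "device" = 4 related rows, one per table,
# committed as a unit. Schema and names are illustrative, not from the post.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE device  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE config  (device_id INTEGER REFERENCES device(id), payload TEXT);
    CREATE TABLE network (device_id INTEGER REFERENCES device(id), addr TEXT);
    CREATE TABLE status  (device_id INTEGER REFERENCES device(id), state TEXT);
""")

def insert_device_set(n):
    # One theoretical "device" object spread across 4 related tables.
    cur.execute("INSERT INTO device (id, name) VALUES (?, ?)", (n, f"dev{n}"))
    cur.execute("INSERT INTO config (device_id, payload) VALUES (?, ?)", (n, "{}"))
    cur.execute("INSERT INTO network (device_id, addr) VALUES (?, ?)", (n, f"10.0.0.{n % 254}"))
    cur.execute("INSERT INTO status (device_id, state) VALUES (?, ?)", (n, "up"))
    conn.commit()  # commit after each device set, as in the test app

for n in range(1, 101):
    insert_device_set(n)

print(cur.execute("SELECT COUNT(*) FROM device").fetchone()[0])  # 100 devices
```

With this pattern, 41509 reported devices really does mean 166036 committed rows, and every device costs a full transaction commit.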

Ad infinitum.

I am wondering:

1) What am I seeing here? This is on a 4-processor machine and postmaster has a CPU all to itself, so I ruled out processor contention.

2) Is there more performance tuning I could perform to flatten this out, or is this just completely normal? Postmaster never busts over 100MB out of the 128MB shared memory I've allocated to it, and according to <mumble mumble webpage mumble>, this is just about perfect for shared memory settings (100 to 120% high water mark).

Thanks.

---
Clay
Cisco Systems, Inc.
cl*****@cisco.com
(972) 813-5004

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #1
"Clay Luther" <cl*****@cisco.com> writes:
Performance follows an odd "peak and valley" pattern. It will start
out with a high insertion rate (commits are performed after each
"device set"), then after a few thousand device sets, performance will
drop to 1 device/second for about 5 seconds. Then it will slowly ramp
up over the next 10 seconds to /just below/ the previous high water
mark. A few thousand inserts later, it will drop to 1 device/second
again for 5 seconds, then slowly ramp up to just below the last high
water mark.


My best guess is that the dropoffs occur because of background checkpoint
operations, but there's not enough info here to prove it. Four inserts
per second seems horrendously slow in any case.

What are the table schemas (in particular, are there any foreign-key
constraints to check)?

Are you doing any vacuuming in this sequence? If so where?

What's the disk hardware like? Do you have WAL on its own disk drive?

regards, tom lane

PS: pgsql-performance would be a better list for this sort of issue.


Nov 11 '05 #2
>>>>> "TL" == Tom Lane <tg*@sss.pgh.pa.us> writes:

TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.
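The settings Vivek compared map to a couple of lines in postgresql.conf. The values below are the ones from his test (50 vs. the default 3), not a recommendation; checkpoint_timeout is shown at its 7.3 default for context:

```
# postgresql.conf -- WAL/checkpoint settings compared in the test above
checkpoint_segments = 50    # default is 3; each WAL segment is 16MB
checkpoint_timeout  = 300   # seconds between forced checkpoints (default)
```

More segments means checkpoints happen further apart (at the cost of more disk for WAL and longer crash recovery), which is exactly what smooths out the periodic stalls.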

Another thing that *really* picks up speed is to batch your inserts in
transactions. I just altered an application yesterday that had a loop
like this:

foreach row fetched from table c:
    update table a where id=row.id
    update table b where id2=row.id2
    send notice to id
end

there were several such loops going on for distinct sets of rows in
the same tables.

Changing it to run inside a transaction, committing every 100 times
through the loop, cut the runtime of one large loop from 2.5 hours
down to 1 hour, and of another from 2 hours down to 40 minutes.

I had to put in a bunch of additional error checking and rollback
logic, but in the last two years none of those error conditions have
ever triggered, so I think I'm pretty safe even with having to redo up
to 100 records on a transaction error (i.e., it is unlikely to happen).
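A minimal sketch of that batching pattern is below. This is not Vivek's code; the table, chunk size, and redo logic are illustrative, and sqlite3 stands in for PostgreSQL so the sketch runs anywhere. The idea is the same: group work into one transaction per N iterations, and keep the uncommitted rows around so they can be redone if the batch fails.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, val TEXT)")

BATCH = 100  # commit every 100 iterations, as described above
rows = [(i, f"val{i}") for i in range(1, 1001)]

pending = []  # rows in the current uncommitted batch, kept for redo on error
for i, row in enumerate(rows, start=1):
    pending.append(row)
    cur.execute("INSERT INTO a (id, val) VALUES (?, ?)", row)
    if i % BATCH == 0:
        try:
            conn.commit()
            pending.clear()
        except sqlite3.Error:
            # On a failed commit: roll back, then redo at most BATCH rows.
            conn.rollback()
            cur.executemany("INSERT INTO a (id, val) VALUES (?, ?)", pending)
            conn.commit()
            pending.clear()
conn.commit()  # flush any final partial batch

print(cur.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # 1000
```

The trade-off is the one Vivek describes: each commit is expensive (WAL flush), so amortizing it over 100 rows buys a large speedup, at the cost of having to redo up to one batch when a transaction fails.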
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/


Nov 11 '05 #4
Vivek Khera wrote:
> >>>>> "TL" == Tom Lane <tg*@sss.pgh.pa.us> writes:
>
> TL> My best guess is that the dropoffs occur because of background checkpoint
> TL> operations, but there's not enough info here to prove it. Four inserts
> TL> per second seems horrendously slow in any case.
>
> I'll concur with this diagnosis. I've been doing a bunch of
> performance testing with various parameter settings, and the
> checkpoint frequency is a big influence. For me, by making the
> checkpoints occur as far apart as possible, the overall speed
> improvement was incredible. Try bumping the number of
> checkpoint_segments in your postgresql.conf file. For my tests I
> compared the default 3 with 50 segments.
>
> Check your logs to see if you are checkpointing too frequently.

That warning message is only in 7.4.

--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073


Nov 11 '05 #6
>>>>> "BM" == Bruce Momjian <pg***@candle.pha.pa.us> writes:
>> Check your logs to see if you are checkpointing too frequently.


BM> That warning message is only in 7.4.

Yes, but the checkpoint activity is still logged. On my 7.2 system,
I'm checkpointing about every 1.5 minutes at peak with 3 checkpoint
segments. I think I can speed it up even more by increasing them.

Nov 11 '05 #8
