
A Question About Insertions -- Performance

I am doing some large dataset performance tests with PostgreSQL 7.3.4b2 today and I noticed an interesting phenomenon. My shared memory buffers are set at 128MB. Peak postmaster usage appears to be around 90MB.

My test app performs inserts across 4 related tables, each set of 4 inserts representing a single theoretical "device" object. I report how many "devices" I have inserted and the current per-second rate, for example...

[...]
41509 devices inserted, 36/sec
[1 second later]
41544 devices inserted, 35/sec
[...]

(to be clear, 41509 devices inserted equals 166036 actual, related rows in the db)
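
In outline, each "device set" amounts to something like the following sketch (a hypothetical Python/psycopg2 reconstruction; the test app's actual language, tables, and columns are not shown in the post):

    import time
    import psycopg2  # assumed driver, for illustration only

    conn = psycopg2.connect("dbname=devtest")
    cur = conn.cursor()

    devices = 0
    window_count = 0
    window_start = time.time()

    while True:
        # One "device set": four INSERTs across four related tables
        # (all table and column names here are made up).
        cur.execute("SELECT nextval('device_id_seq')")
        dev_id = cur.fetchone()[0]
        cur.execute("INSERT INTO device (id, name) VALUES (%s, %s)",
                    (dev_id, "device-%d" % dev_id))
        cur.execute("INSERT INTO device_line (device_id) VALUES (%s)", (dev_id,))
        cur.execute("INSERT INTO device_profile (device_id) VALUES (%s)", (dev_id,))
        cur.execute("INSERT INTO device_settings (device_id) VALUES (%s)", (dev_id,))
        conn.commit()  # commit after every device set, as described above

        devices += 1
        window_count += 1
        now = time.time()
        if now - window_start >= 1.0:
            print("%d devices inserted, %d/sec" % (devices, window_count))
            window_count = 0
            window_start = now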

Performance follows an odd "peak and valley" pattern. It will start out with a high insertion rate (commits are performed after each "device set"), then after a few thousand device sets, performance will drop to 1 device/second for about 5 seconds. Then it will slowly ramp up over the next 10 seconds to /just below/ the previous high water mark. A few thousand inserts later, it will drop to 1 device/second again for 5 seconds, then slowly ramp up to just below the last high water mark.

Ad infinitum.

I am wondering:

1) What am I seeing here? This is on a 4-processor machine and postmaster has a CPU all to itself, so I ruled out processor contention.

2) Is there more performance tuning I could perform to flatten this out, or is this just completely normal? Postmaster never busts over 100MB out of the 128MB shared memory I've allocated to it, and according to <mumble mumble webpage mumble>, this is just about perfect for shared memory settings (100 to 120% high water mark).

Thanks.

---
Clay
Cisco Systems, Inc.
cl*****@cisco.com
(972) 813-5004
I've stopped 19,647 spam messages. You can too!
One month FREE spam protection at http://www.cloudmark.com/spamnetsig/


Nov 11 '05 #1
"Clay Luther" <cl*****@cisco.com> writes:
> Performance follows an odd "peak and valley" pattern. It will start
> out with a high insertion rate (commits are performed after each
> "device set"), then after a few thousand device sets, performance will
> drop to 1 device/second for about 5 seconds. Then it will slowly ramp
> up over the next 10 seconds to /just below/ the previous high water
> mark. A few thousand inserts later, it will drop to 1 device/second
> again for 5 seconds, then slowly ramp up to just below the last high
> water mark.


My best guess is that the dropoffs occur because of background checkpoint
operations, but there's not enough info here to prove it. Four inserts
per second seems horrendously slow in any case.

What are the table schemas (in particular, are there any foreign-key
constraints to check)?

Are you doing any vacuuming in this sequence? If so where?

What's the disk hardware like? Do you have WAL on its own disk drive?

regards, tom lane

PS: pgsql-performance would be a better list for this sort of issue.


Nov 11 '05 #2
"Clay Luther" <cl*****@cisco.com> writes:
Performance follows an odd "peak and valley" pattern. It will start
out with a high insertion rate (commits are performed after each
"device set"), then after a few thousand device sets, performance will
drop to 1 device/second for about 5 seconds. Then it will slowly ramp
up over the next 10 seconds to /just below/ the previous high water
mark. A few thousand inserts later, it will drop to 1 device/second
again for 5 seconds, then slowly ramp up to just below the last high
water mark.


My best guess is that the dropoffs occur because of background checkpoint
operations, but there's not enough info here to prove it. Four inserts
per second seems horrendously slow in any case.

What are the table schemas (in particular, are there any foreign-key
constraints to check)?

Are you doing any vacuuming in this sequence? If so where?

What's the disk hardware like? Do you have WAL on its own disk drive?

regards, tom lane

PS: pgsql-performance would be a better list for this sort of issue.

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to ma*******@postgresql.org so that your
message can get through to the mailing list cleanly

Nov 11 '05 #3
>>>>> "TL" == Tom Lane <tg*@sss.pgh.pa.us> writes:

TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.
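
The change is a one-liner in postgresql.conf (the 50 comes from the test above; checkpoint_timeout is shown at its usual default for context):

    # postgresql.conf
    checkpoint_segments = 50   # default is 3; each segment is a 16MB WAL file
    checkpoint_timeout = 300   # seconds (default); a checkpoint fires when
                               # either of the two limits is reached first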

Another thing that *really* picks up speed is to batch your inserts in
transactions. I just altered an application yesterday that had a loop
like this:

foreach row fetched from table c:
    update table a where id = row.id
    update table b where id2 = row.id2
    send notice to id
end

There were several such loops going on for distinct sets of rows in
the same tables.

Changing it to run inside a transaction, committing every 100 times
through the loop, pretty much cut one large run from 2.5 hours down to
1 hour, and another from 2 hours down to 40 minutes (see the sketch
below).

I had to put in a bunch of additional error checking and rollback
logic, but in the last two years none of those error conditions have
ever triggered, so I think I'm pretty safe even with having to redo up
to 100 records on a transaction error (i.e., it is unlikely to happen).
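
A minimal sketch of that pattern (Python/psycopg2 assumed; the updated columns and the notice mechanism are placeholders, since the original code isn't shown):

    import psycopg2

    BATCH = 100
    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    cur.execute("SELECT id, id2 FROM c")
    rows = cur.fetchall()

    pending = []  # rows since the last commit, in case they must be redone
    for n, (id_, id2) in enumerate(rows, 1):
        try:
            cur.execute("UPDATE a SET status = 'done' WHERE id = %s", (id_,))
            cur.execute("UPDATE b SET status = 'done' WHERE id2 = %s", (id2,))
            # "send notice to id" is application-specific and omitted here
            pending.append((id_, id2))
            if n % BATCH == 0:
                conn.commit()   # one commit per 100 rows instead of per row
                pending = []
        except psycopg2.Error:
            conn.rollback()     # at most BATCH rows (those in 'pending')
            raise               # would need to be redone
    conn.commit()               # flush the final partial batch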
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/


Nov 11 '05 #3
>>>>> "TL" == Tom Lane <tg*@sss.pgh.pa.us> writes:

TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.

Another thing that *realy* picks up speed is to batch your inserts in
transactions. I just altered an application yesterday that had a loop
like this:

foreach row fetched from table c:
update table a where id=row.id
update table b where id2=row.id2
send notice to id
end

there were several such loops going on for distinct sets of rows in
the same tables.

changing it so that it was inside a transaction, and every 100 times
thru the loop to do a commit pretty much made the time it took to run
on a large loop from 2.5 hours down to 1 hour, and another that took 2
hours down to 40 minutes.

I had to put in a bunch of additional error checking and rollback
logic, but in the last two years none of those error conditions have
ever triggered so I think I'm pretty safe even with having to redo up
to 100 records on a transaction error (ie, it is unlikely to happen).
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 11 '05 #5
Vivek Khera wrote:
>> "TL" == Tom Lane <tg*@sss.pgh.pa.us> writes:


TL> My best guess is that the dropoffs occur because of background checkpoint
TL> operations, but there's not enough info here to prove it. Four inserts
TL> per second seems horrendously slow in any case.

I'll concur with this diagnosis. I've been doing a bunch of
performance testing with various parameter settings, and the
checkpoint frequency is a big influence. For me, by making the
checkpoints occur as far apart as possible, the overall speed
improvement was incredible. Try bumping the number of
checkpoint_segments in your postgresql.conf file. For my tests I
compared the default 3 with 50 segments.

Check your logs to see if you are checkpointing too frequently.


That warning message is only in 7.4.

--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073


Nov 11 '05 #4
>>>>> "BM" == Bruce Momjian <pg***@candle.pha.pa.us> writes:
>> Check your logs to see if you are checkpointing too frequently.


BM> That warning message is only in 7.4.

Yes, but the checkpoint activity is still logged. On my 7.2 system,
I'm checkpointing about every 1.5 minutes at peak with 3 checkpoint
segments. I think I can speed it up even more by increasing them.
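
For scale, assuming the default 16MB WAL segment size: 3 segments is about 48MB of WAL per checkpoint cycle, so a checkpoint every ~90 seconds implies roughly 0.5MB/s of WAL traffic. At that rate, 50 segments (~800MB) would stretch the interval to around 25 minutes, though checkpoint_timeout (300 seconds by default) would cap it at 5 minutes unless that is raised as well.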

Nov 11 '05 #5
>>>>> "BM" == Bruce Momjian <pg***@candle.pha.pa.us> writes:
Check your logs to see if you are checkpointing too frequently.


BM> That warning message is only in 7.4.

Yes, but the checkpoint activity is still logged. On my 7.2 system,
I'm checkpointing about every 1.5 minutes at peak with 3 checkpoint
segments. I think I can speed it up even more by increasing them.
---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings

Nov 11 '05 #9
