
pg_restore takes ages

Hi,
I use pg_restore to load a previously dumped database (about 10 million
records). The data load itself runs quite fast, but creating the triggers
for the foreign keys takes forever.

Isn't there a faster way? After all, the triggers in the source db
already made sure the data was clean.

Thanks
Alex


Nov 12 '05 #1
Alex wrote:
Hi,
I use pg_restore to load a previously dumped database (about 10 million
records). The data load itself runs quite fast, but creating the triggers
for the foreign keys takes forever.

Isn't there a faster way? After all, the triggers in the source db
already made sure the data was clean.


You can try creating the indexes/triggers first and then loading the data.
With either approach it will take a while before you get a usable database,
but see which works faster for you.

Personally, I was in a situation where PostgreSQL was hogging space while
creating an index on a table that had 81M rows and a 3GB disk footprint. I
dropped the table and recreated it, this time creating the index before
loading the data. The load was slower that way, but it finished in 3 hours
and I had an up-to-date index as well; I just had to run vacuum over it.
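
As a rough sketch of that "index before load" ordering -- the table,
columns, and dump file path below are made up for illustration, not taken
from this thread:

    psql mydb <<'EOF'
    CREATE TABLE big_table (id integer, payload text);
    CREATE INDEX big_table_id_idx ON big_table (id);  -- index exists before the load
    COPY big_table FROM '/tmp/big_table.copy';        -- each loaded row also updates the index
    VACUUM ANALYZE big_table;                         -- clean up and refresh planner statistics
    EOF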

Take your pick.

Shridhar


Nov 12 '05 #2
>>>>> "SD" == Shridhar Daithankar <sh*****************@persistent.co.in> writes:

SD> You can try creating index/triggers first and load the data. At the
SD> end it will take a while before you get a usable database with either
SD> approach but see what works faster for you.

The triggers and FK's don't do much at the time they are created.
They work upon update/insert/delete of data.

SD> footprint. I dropped the table and recreated it. Also created index
SD> before loading data. The loading was slow with this approach but it
SD> finished in 3 hours. And I had an updated index as well. Just had to
SD> run vacuum over it.

I cannot believe that this was faster than load data followed by
create index. Perhaps you needed to bump sort_mem so the index could
be created more efficiently. I also find that bumping up
checkpoint_segments to a high number speeds things up considerably.
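
As a rough sketch of that kind of tuning around a restore -- the values and
paths below are illustrative assumptions, not figures from this thread:

    # in postgresql.conf (restart or reload afterwards):
    #   sort_mem = 65536              # KB, so index builds can sort in memory
    #   checkpoint_segments = 30      # fewer, larger checkpoints during the bulk load
    pg_ctl restart -D /usr/local/pgsql/data
    pg_restore -d mydb mydb.dump
    # sort_mem can also be raised just for the restore session, e.g.:
    #   PGOPTIONS="-c sort_mem=65536" pg_restore -d mydb mydb.dump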

--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: kh***@kciLink.com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/


Nov 12 '05 #3
Vivek Khera wrote:
SD> footprint. I dropped the table and recreated it. Also created index
SD> before loading data. The loading was slow with this approach but it
SD> finished in 3 hours. And I had an updated index as well. Just had to
SD> run vacuum over it.

I cannot believe that this was faster than load data followed by
create index. Perhaps you needed to bump sort_mem so the index could
be created more efficiently. I also find that bumping up
checkpoint_segments to a high number speeds things up considerably.


Well, in my case speed wasn't the issue. I put $PGDATA on a 12-13GB
partition and loaded 3GB of table data. When I went to create the index,
it ate up the rest of the free space, which was close to 9GB. Actually, I
killed it: when it started there was 9GB free, and when I killed it only
150MB was left.

Oracle had the same problem. With tablespaces set to auto-extend, it ate a
huge amount of space.

I posted this earlier and Tom remarked the same thing, saying that it
should be the same one way or the other.

Anyway, that project abandoned the database altogether and went to
in-memory structures. :-)

Shridhar

Nov 12 '05 #4
On Fri, 3 Oct 2003, Vivek Khera wrote:
>> "SD" == Shridhar Daithankar <sh*****************@persistent.co.in> writes:


SD> You can try creating index/triggers first and load the data. At the
SD> end it will take a while before you get a usable database with either
SD> approach but see what works faster for you.

The triggers and FK's don't do much at the time they are created.
They work upon update/insert/delete of data.

SD> footprint. I dropped the table and recreated it. Also created index
SD> before loading data. The loading was slow with this approach but it
SD> finished in 3 hours. And I had an updated index as well. Just had to
SD> run vacuum over it.

I cannot believe that this was faster than load data followed by
create index. Perhaps you needed to bump sort_mem so the index could
be created more efficiently. I also find that bumping up
checkpoint_segments to a high number speeds things up considerably.


Also, turning off fsync during the load helps a lot. Be sure to turn it
back on when you're done, of course.
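
A minimal sketch of doing that around a restore -- the paths and database
name are assumptions, and remember that with fsync off a crash mid-load can
leave the cluster in an unrecoverable state:

    # 1. in postgresql.conf:  fsync = false
    pg_ctl reload -D /usr/local/pgsql/data   # or restart the postmaster
    # 2. run the bulk restore
    pg_restore -d mydb mydb.dump
    # 3. set fsync = true again in postgresql.conf, then:
    pg_ctl reload -D /usr/local/pgsql/data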

Nov 12 '05 #5
On Fri, Oct 03, 2003 at 01:06:26PM -0600, scott.marlowe wrote:
also, turning off fsync during the load helps a lot. Be sure to turn it
back on when you're done of course.


I'm not sure I understand why this is so. If I turn fsync off, it means
that I won't force the kernel to write WAL logs to disk, but they will
have to be written eventually. If you have tons of RAM it may well be
that the kernel will just keep dirty buffers in RAM, but if not there
should not be any difference. Am I missing something?

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Investigación es lo que hago cuando no sé lo que estoy haciendo"
(Wernher von Braun)


Nov 12 '05 #6
>>>>> "sm" == scott marlowe <scott.marlowe> writes:

sm> also, turning off fsync during the load helps a lot. Be sure to turn it
sm> back on when you're done of course.

Only if you don't have a battery-backed cache on your RAID. If you
do, it won't likely make a big difference. For me it was about 2
seconds over a 4-hour restore.


Nov 12 '05 #7
On Fri, 3 Oct 2003, Alvaro Herrera wrote:
On Fri, Oct 03, 2003 at 01:06:26PM -0600, scott.marlowe wrote:
also, turning off fsync during the load helps a lot. Be sure to turn it
back on when you're done of course.


I'm not sure I understand why this is so. If I turn fsync off, it means
that I won't force the kernel to write WAL logs to disk, but they will
have to be written eventually. If you have tons of RAM it may well be
that the kernel will just keep dirty buffers in RAM, but if not there
should not be any difference. Am I missing something?


Yes, you are. Basically, with fsync on, things have to happen in order.

I.e.:

  1. Write to the WAL what you're going to do; WAIT for confirmation of the write.
  2. Write the tuples out; wait for confirmation.
  3. Checkpoint the WAL; wait for confirmation.

Notice the waits for confirmation above. Without fsync there's no wait;
you just write it all out at once and hope the machine/database doesn't
crash in the middle of a transaction.

Give it a try: turn off fsync, run pgbench -c 4 -t 1000, then turn it back
on and see how much it slows down.

Pull the plug while transactions are running with fsync on, and your
machine (assuming it has a metadata-journaling file system) will come
right back, PostgreSQL will replay the WAL files, and you'll have a nice
consistent database.

Turn off fsync, initiate many transactions, pull the plug, and watch your
corrupted database refuse to start up.

Note that if you're running on IDE drives, you are probably already
running with fsync effectively off if write caching is enabled, so you'll
need to turn the cache off (hdparm -W0 /dev/hdx on Linux) to ensure fsync
actually works.
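
A rough way to run that comparison -- the device name and the pgbench
database are assumptions:

    hdparm -W0 /dev/hda          # IDE write cache off, so fsync really hits the platter
    createdb bench
    pgbench -i bench             # initialize a small pgbench database
    pgbench -c 4 -t 1000 bench   # measure tps with fsync = true
    # set fsync = false in postgresql.conf, reload/restart, then run it again:
    pgbench -c 4 -t 1000 bench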

Nov 12 '05 #8
On Fri, 3 Oct 2003, Vivek Khera wrote:
>> "sm" == scott marlowe <scott.marlowe> writes:


sm> also, turning off fsync during the load helps a lot. Be sure to turn it
sm> back on when you're done of course.

Only if you don't have a battery-backed cache on your RAID. If you
do, it won't likely make a big difference. For me it was about 2
seconds over a 4-hour restore.


True, very true. Have you done the "pull the plug" test on it to make
sure it really works, by the way?

Nov 12 '05 #9
"scott.marlowe" <sc***********@ihs.com> writes:
Yes, you are. Basically, with fsync on, things have to happen in order.
I.e.
write to WAL what you're gonna do. WAIT for confirmation on write
write the tuples out. wait for confirmation
checkpoint the WAL. wait for confirmation
Not really. With fsync on, we *only* sync the WAL writes. Data writes
can happen whenever, so long as we know the corresponding WAL writes
went down first. We only wait for data writes to complete before
considering that a checkpoint is complete --- which is something that is
not in the main line of execution and doesn't block other activity.

This is one good reason for keeping WAL on a separate drive from the
data files --- you are then freeing the system to schedule data I/O as
optimally as it can.
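
A common way to relocate the WAL onto its own spindle at the time was a
symlink; a minimal sketch, with the paths below assumed:

    pg_ctl stop -D /usr/local/pgsql/data
    mv /usr/local/pgsql/data/pg_xlog /mnt/waldisk/pg_xlog
    ln -s /mnt/waldisk/pg_xlog /usr/local/pgsql/data/pg_xlog
    pg_ctl start -D /usr/local/pgsql/data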
Note that if you're running on IDE drives, you already ARE probably
running with fsync off if write caching is enabled, so you'll need to turn
it off (hdparm -W0 /dev/hdx in linux) to ensure fsync actually works.


It'd be interesting to think about whether a write-caching IDE drive
could safely be used for data storage, if WAL is elsewhere.

Right offhand I think the only problem is how to know when it's safe
to consider a checkpoint complete. Maybe all that would be needed is
a long enough time delay after issuing sync(2) in the checkpoint code.
Do these drives guarantee "data will be written within 30 seconds" or
something like that? Or can the delay be indefinite when load is heavy?

regards, tom lane


Nov 12 '05 #10
On Fri, 3 Oct 2003, Tom Lane wrote:
"scott.marlowe" <sc***********@ihs.com> writes:
Yes, you are. Basically, with fsync on, things have to happen in order.
I.e.
write to WAL what you're gonna do. WAIT for confirmation on write
write the tuples out. wait for confirmation
checkpoint the WAL. wait for confirmation
Not really. With fsync on, we *only* sync the WAL writes. Data writes
can happen whenever, so long as we know the corresponding WAL writes
went down first. We only wait for data writes to complete before
considering that a checkpoint is complete --- which is something that is
not in the main line of execution and doesn't block other activity.

This is one good reason for keeping WAL on a separate drive from the
data files --- you are then freeing the system to schedule data I/O as
optimally as it can.


Oh, OK. That's why the WAL is such a choke point.
Note that if you're running on IDE drives, you already ARE probably
running with fsync off if write caching is enabled, so you'll need to turn
it off (hdparm -W0 /dev/hdx in linux) to ensure fsync actually works.


It'd be interesting to think about whether a write-caching IDE drive
could safely be used for data storage, if WAL is elsewhere.


Well, I just so happen to have a machine with two drives in it. I'll get
back to you on that.
Right offhand I think the only problem is how to know when it's safe
to consider a checkpoint complete. Maybe all that would be needed is
a long enough time delay after issuing sync(2) in the checkpoint code.
Do these drives guarantee "data will be written within 30 seconds" or
something like that? Or can the delay be indefinite when load is heavy?


I don't know. My guess is that they probably don't wait too long; they
just wait until they have enough to write to make it "worth their while",
i.e. they don't write 8k at a time, they write 64k or 512k at a time. Now
that many drives have 8MB of cache, it would make sense to group writes to
one area together.

I'll test this out. Do you think "pgbench -c 50 -t 100000000" is enough
thrash, or do I need more connections at once? The machine I'm on has 2GB
of RAM, so it can probably open several hundred connections, but the time
to bring the database back up by replaying the WAL after hundreds of
concurrent transactions is going to be a while.

Nov 12 '05 #11
On Fri, 3 Oct 2003, scott.marlowe wrote:
On Fri, 3 Oct 2003, Tom Lane wrote:

It'd be interesting to think about whether a write-caching IDE drive
could safely be used for data storage, if WAL is elsewhere.


Well, I just so happen to have a machine with two drives in it. I'll get
back to you on that.


OK, I just tested it. I put pg_xlog and pg_clog on a drive with the write
cache disabled, and left the data on a drive where caching was enabled.
The tps for a pgbench -c 5 -t 500 run with everything on the single drive
was 45 to 55. With pg_[xc]log moved to the other drive, I got up to 108
tps --- about double the performance, as you'd expect. I didn't test the
data drive with write caching disabled, but my guess is it wouldn't be any
slower, since pgsql doesn't wait on the rest.

I pulled the plug three times, and all three times the database came up in
recovery mode and successfully recovered. I didn't bother testing to see
if write caching would corrupt it, as I'm pretty sure it would --- it
certainly did when everything was on one drive.
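
A rough sketch of that two-drive comparison -- the device names and the
pgbench scale factor are assumptions, not values from the thread:

    hdparm -W0 /dev/hdb     # WAL/clog drive: write cache off, so WAL fsyncs are honest
    hdparm -W1 /dev/hda     # data drive: write cache left on
    createdb bench
    pgbench -i -s 10 bench
    # run once with pg_xlog/pg_clog on the single cached drive, and once
    # with them symlinked onto the uncached drive, and compare tps:
    pgbench -c 5 -t 500 bench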

Would you like to try some kind of WAL patch out on it while I've got it
for testing? I'd be glad to torture that poor little box some more if
you're in the mood and the beta period is winding down. It's running 7.4
beta3, by the way.

Nov 12 '05 #12
scott.marlowe wrote:
On Fri, 3 Oct 2003, scott.marlowe wrote:
On Fri, 3 Oct 2003, Tom Lane wrote:

It'd be interesting to think about whether a write-caching IDE drive
could safely be used for data storage, if WAL is elsewhere.


Well, I just so happen to have a machine with two drives in it. I'll get
back to you on that.


Ok, I just tested it. I put pg_xlog and pg_clog on a drive that was set
to write cache disabled, and left the data on a drive where caching was
enabled. The tps on a pgbench -c 5 -t 500 on the single drive was 45 to
55. With the pg_[xc]log moved to another drive and all, I got up to 108
tps. About double performance, as you'd expect. I didn't test the data
drive with write caching disabled, but my guess is it wouldn't be any
slower since pgsql doesn't wait on the rest.

I pulled the plug three times, and all three times the database came up in
recovery mode and successfully recovered. I didn't bother testing to see
if write caching would corrupt it as I'm pretty sure it would, it
certainly did when everything was on one drive.


You would have had to pull the plug between the time the system did a
checkpoint (and wrote to the write cache) and the time it flushed the
write cache to disk --- no idea how you would find that window, but my
guess is that if you pulled the plug right after the checkpoint
completed, the WAL recovery would fail.

--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073


Nov 12 '05 #13
On Sat, 4 Oct 2003, Bruce Momjian wrote:
scott.marlowe wrote:
On Fri, 3 Oct 2003, scott.marlowe wrote:
On Fri, 3 Oct 2003, Tom Lane wrote:
>
> It'd be interesting to think about whether a write-caching IDE drive
> could safely be used for data storage, if WAL is elsewhere.

Well, I just so happen to have a machine with two drives in it. I'll get
back to you on that.


Ok, I just tested it. I put pg_xlog and pg_clog on a drive that was set
to write cache disabled, and left the data on a drive where caching was
enabled. The tps on a pgbench -c 5 -t 500 on the single drive was 45 to
55. With the pg_[xc]log moved to another drive and all, I got up to 108
tps. About double performance, as you'd expect. I didn't test the data
drive with write caching disabled, but my guess is it wouldn't be any
slower since pgsql doesn't wait on the rest.

I pulled the plug three times, and all three times the database came up in
recovery mode and successfully recovered. I didn't bother testing to see
if write caching would corrupt it as I'm pretty sure it would, it
certainly did when everything was on one drive.


You would have had to pull the plug between the time the system did a
checkpoint (and wrote to the write cache), and before it flushed the
write cache to disk --- no idea how you would find that window, but my
guess is that if you pulled the plug right after the checkpoint
completed, the WAL recovery would fail.


I'm not sure what you mean. Are you talking about the failure mode with
the write cache enabled? That always failed when I tested it. I was
testing with 80 parallel transactions, by the way. I'll try it anyway,
just to be sure it causes the problem I'm expecting it to (i.e. that a
write cache enabled on pg_xlog causes database corruption under heavy
parallel load when the plug is pulled).

Nov 12 '05 #14
scott.marlowe wrote:
Ok, I just tested it. I put pg_xlog and pg_clog on a drive that was set
to write cache disabled, and left the data on a drive where caching was
enabled. The tps on a pgbench -c 5 -t 500 on the single drive was 45 to
55. With the pg_[xc]log moved to another drive and all, I got up to 108
tps. About double performance, as you'd expect. I didn't test the data
drive with write caching disabled, but my guess is it wouldn't be any
slower since pgsql doesn't wait on the rest.

I pulled the plug three times, and all three times the database came up in
recovery mode and successfully recovered. I didn't bother testing to see
if write caching would corrupt it as I'm pretty sure it would, it
certainly did when everything was on one drive.


You would have had to pull the plug between the time the system did a
checkpoint (and wrote to the write cache), and before it flushed the
write cache to disk --- no idea how you would find that window, but my
guess is that if you pulled the plug right after the checkpoint
completed, the WAL recovery would fail.


I'm not sure what you mean. Are you talking about the failure mode with
write cache enabled? That always failed when I tested it. I was testing
it with 80 parallel transactions, by the way. I'll try it anyway just to
be sure that it causes the problem I'm expecting it to (i.e. write cache
enabled on pg_xlog causes database corruption under heavy parallel load
when plug is pulled.)


The issue is that a forced write to disk is required for the sync() that
is run before the WAL files are recycled. You can get by with the write
cache enabled on the data drive as long as the crash doesn't happen in the
window between the sync (when the WAL files are removed) and the data
actually making it to the platters. If it does, I don't think the system
will recover, or if it does, it will not be consistent.

--
Bruce Momjian | http://candle.pha.pa.us
pg***@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073


Nov 12 '05 #15
