
Crash in postgres/linux on very large database

Hi,

we have a table with about 60,000,000 entries and about 4 GB storage size.
When creating an index on this table, the whole Linux box freezes and the
reiser-fs file system is corrupted and not recoverable.

Does anybody have experience with this amount of data in postgres 7.4.2?
Is there a limit anywhere?

Thanks

Bernhard Ankenbrand

3 Replies


Bernhard Ankenbrand <b.**********@media-one.de> writes:
> we have a table with about 60,000,000 entries and about 4 GB storage size.
> When creating an index on this table, the whole Linux box freezes and the
> reiser-fs file system is corrupted and not recoverable.
> Does anybody have experience with this amount of data in postgres 7.4.2?
> Is there a limit anywhere?


Many people run Postgres with databases far larger than that. In any
case a Postgres bug could not cause a system-level freeze or filesystem
corruption, since it's not a privileged process.

I'd guess that you are dealing with a hardware problem: flaky disk
and/or bad RAM are the usual suspects. See memtest86 and badblocks
as the most readily available hardware test aids.
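For readers digging this thread up later, a minimal sketch of those two checks (the device name /dev/sdb is a placeholder; in this form badblocks only reads the disk, so it is non-destructive):

    # Read-only surface scan of the suspect disk; replace /dev/sdb
    # with the actual device. -s shows progress, -v is verbose.
    badblocks -sv /dev/sdb

    # memtest86 is not a shell command: boot the machine from a
    # memtest86/memtest86+ CD or USB stick and let it run several
    # full passes to shake out marginal RAM.

If badblocks reports errors, replace the disk before trusting any filesystem on it.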

regards, tom lane



On Tuesday 06 April 2004 12:22, Bernhard Ankenbrand wrote:
> we have a table with about 60,000,000 entries and about 4 GB storage size.
> When creating an index on this table, the whole Linux box freezes [...]
> Is there a limit anywhere?


Plenty of people run with more data than that. It should be impossible for an
application to corrupt a file system in any case. The two things to look at
would be your hardware, or perhaps reiser-fs itself. I have heard about
problems with SMP machines locking up (some unusual glitch in certain versions
of the Linux kernel, IIRC).

You might be able to see what is going wrong with careful use of vmstat and
strace -p <pid>. Start to create your index, find the pid of the backend
doing so and strace it. See if anything interesting comes out of it.
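Something like this, concretely (the pid 12345 is a placeholder; note that a backend advertises its current command in its process title):

    # Watch memory and I/O pressure while the index builds
    # (one sample per second).
    vmstat 1

    # Find the backend running the CREATE INDEX; the command tag
    # appears in the ps output.
    ps aux | grep 'postgres.*CREATE INDEX'

    # Attach strace to that backend (substitute the real pid) and
    # log its system calls to a file for later inspection.
    strace -p 12345 -o /tmp/createindex.trace

Since the trace is written as the calls happen, it may still tell you something after a reboot even if the box freezes hard.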

HTH, and stick around - someone else might have better advice.
--
Richard Huxton
Archonet Ltd



On Tue, 6 Apr 2004, Bernhard Ankenbrand wrote:
> we have a table with about 60,000,000 entries and about 4 GB storage size.
> When creating an index on this table, the whole Linux box freezes [...]
> Is there a limit anywhere?


If your file system is getting corrupted, then you likely have found a bug
in reiserfs or the linux kernel. While some pgsql bug might be able to
corrupt the contents of a file belonging to it, it doesn't have the power
to corrupt the file system itself.

Or is the problem a corrupted database, not a corrupted file system?
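One quick way to tell the two apart (the log location is a placeholder; adjust for your distribution):

    # Kernel-level filesystem trouble shows up in the system log:
    dmesg | grep -i reiserfs

    # Database-level corruption shows up in the Postgres server log
    # instead, typically as "could not read block" or "invalid page
    # header" errors.
    grep -iE 'could not read block|invalid page' /var/log/postgresql/postgresql.log

A corrupted filesystem points at the kernel, the driver, or the hardware; errors confined to Postgres's own files would make a Postgres bug at least conceivable.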


This discussion thread is closed.