Bytes IT Community

Analyse - max_locks_per_transaction - why?

Dear PostgreSQL experts,

This is with version 7.4.2.

My database has grown a bit recently, mostly in number of tables but
also their size, and I started to see ANALYSE failing with this message:

WARNING: out of shared memory
ERROR: out of shared memory
HINT: You may need to increase max_locks_per_transaction.

So I increased max_locks_per_transaction from 64 to 200 and, after doing
an "/etc/init.d/postgresql restart" rather than an
"/etc/init.d/postgresql reload" (this setting only takes effect on a
full server restart), it seems to work again.

Naively I imagined that ANALYSE looks at each table in turn,
independently. So why does it need more locks when there are more
tables? Isn't "ANALYSE" with no parameter equivalent to

for i in all_tables_in_database {
    ANALYSE i;
}

I'm working in a memory-poor environment (a user-mode-linux virtual
machine) and I'm a bit concerned about the memory overhead if I have to
keep increasing max_locks_per_transaction just to keep ANALYSE happy.

As an aside, what I really need in this particular case is to analyse
all of the tables in a particular schema. Having "ANALYSE schemaname"
or "ANALYSE schemaname.*" would be great. I presume that I can write a
function to get the same effect - has anyone already done that?
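One way to approximate a per-schema ANALYSE is a shell loop over psql: because each psql -c invocation runs in its own transaction, the locks never accumulate beyond a single table's worth. This is only a sketch; the database name "mydb" and schema name "myschema" are placeholders, and it assumes psql is on the PATH and can connect without a password prompt.

```shell
#!/bin/sh
# Analyse every ordinary table in one schema, one transaction per table.
# "mydb" and "myschema" are placeholder names -- substitute your own.
DB=mydb
SCHEMA=myschema

# List the schema's tables (relkind 'r') from the system catalogs.
psql -At -d "$DB" -c "
    SELECT n.nspname || '.' || c.relname
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r' AND n.nspname = '$SCHEMA'" |
while read table; do
    # Each psql -c is its own transaction, so the shared lock table
    # never holds more than one table's locks at a time.
    psql -d "$DB" -c "ANALYSE $table;"
done
```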

Regards,

Phil Endecott.

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to ma*******@postgresql.org)

Nov 23 '05 #1
3 Replies


Phil Endecott <sp**************************@chezphil.org> writes:
> Naively I imagined that ANALYSE looks at each table in turn,
> independently. So why does it need more locks when there are more
> tables?

7.4 runs a database-wide ANALYZE as a single transaction, so the locks
accumulate. This was recognized to be a bad idea :-(. 8.0 is a bit
smarter.

The best bet in 7.4 is probably to use VACUUM ANALYZE rather than
analyzing separately. That will force it to use a transaction per
table.

regards, tom lane
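Tom's suggestion can be applied database-wide from the shell with the vacuumdb wrapper that ships with PostgreSQL (the database name "mydb" is a placeholder):

```shell
# VACUUM ANALYZE every table in the database. VACUUM cannot run inside
# a single transaction block, so the server handles each table in its
# own transaction and the lock table never fills up.
vacuumdb --analyze mydb
```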


Nov 23 '05 #2

I asked:
> Naively I imagined that ANALYSE looks at each table in turn,
> independently. So why does it need more locks when there are more
> tables?

Tom replied:
> 7.4 runs a database-wide ANALYZE as a single transaction, so the locks
> accumulate. This was recognized to be a bad idea :-(. 8.0 is a bit
> smarter.

Thanks Tom. I will upgrade to 8.0 one day but not soon. In the
meantime, is there a way to judge a sufficient setting for
max_locks_per_transaction so that a global ANALYZE will work? It
doesn't seem to be one lock per table or anything as simple as that.
> The best bet in 7.4 is probably to use VACUUM ANALYZE rather than
> analyzing separately. That will force it to use a transaction per
> table.


That's another possibility.

Thanks.

--Phil.

Nov 23 '05 #3

Phil Endecott <sp**************************@chezphil.org> writes:
> Thanks Tom. I will upgrade to 8.0 one day but not soon. In the
> meantime, is there a way to judge a sufficient setting for
> max_locks_per_transaction so that a global ANALYZE will work? It
> doesn't seem to be one lock per table or anything as simple as that.

No. The shared lock table has room for max_locks_per_transaction *
max_connections entries (actually rather more, but that's the supported
limit), so as soon as this exceeds the number of tables in your DB
the ANALYZE will work ... at least as long as nothing else is going on.
Any other transactions you may be running will eat some lock entries,
and you have to allow for those.

The conservative answer is to set max_locks_per_transaction to
(tables in database)/max_connections more than you were using before.
This is still probably overkill, since for most purposes the default
value is plenty.
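As a back-of-the-envelope illustration of that rule (all figures are made up): with 1500 tables, max_connections = 20, and the previous max_locks_per_transaction of 64, the conservative setting works out as follows.

```shell
# Hypothetical figures -- substitute your own counts.
tables=1500
max_connections=20
current=64          # the previous max_locks_per_transaction

# ceil(tables / max_connections) extra slots, on top of the old value.
extra=$(( (tables + max_connections - 1) / max_connections ))
echo $(( current + extra ))   # prints 139
```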

There was some discussion recently of renaming the
max_locks_per_transaction variable to make it clearer that it's not
a hard per-transaction limit but a global average. Nobody really
came up with a better name though.

regards, tom lane


Nov 23 '05 #4
