Bytes IT Community

getting execution plans with multiple database connections

Hi,

I am running the OSDL-DBT3 test against PostgreSQL. I found a performance
difference between runs even though the data and queries are the same. To
study the problem, I capture the execution plans at the beginning of each
test. The following script gets the execution plan for each of the 22
queries except query 15.

i=1
while [ $i -le 22 ]
do
    if [ $i -ne 15 ]; then
        $dbdriver_pgsql_path/explain_single_query.sh $i \
            $run_dir/plan/power_query$i.txt
    fi
    i=$((i+1))
done
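One weakness of the loop above is that it never checks whether the plan file was actually written. A minimal sketch of the same loop with a zero-size check added (a stub function stands in for explain_single_query.sh here, purely so the sketch runs standalone):

```shell
#!/bin/sh
# Stub standing in for explain_single_query.sh so this sketch is
# self-contained; the real script would run EXPLAIN and write the plan.
explain_single_query() {
    echo "QUERY PLAN for query $1" > "$2"
}

plan_dir=$(mktemp -d)
i=1
while [ "$i" -le 22 ]
do
    if [ "$i" -ne 15 ]; then
        out="$plan_dir/power_query$i.txt"
        explain_single_query "$i" "$out"
        # -s is true only if the file exists and has size > 0, so this
        # catches exactly the zero-size plan files described below.
        if [ ! -s "$out" ]; then
            echo "WARNING: empty plan file for query $i" >&2
        fi
    fi
    i=$((i+1))
done
echo "plan files written: $(ls "$plan_dir" | wc -l)"
```

With the stub in place this reports 21 plan files (queries 1-22 minus query 15); with the real script, any silently failed write would at least produce a warning.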

The script 'explain_single_query.sh' takes two parameters: the query
number and the output file name.

It works for the power test (single database connection). In other words,
I can get execution plans for all 21 queries when there is only one
stream connected to the database.

But for the throughput test (multiple database connections), some query
execution plan files come out blank. Which files fail seems random; for
example, in the run at:
http://khack.osdl.org/stp/277495/results/plan/
throughput_stream1_query14.txt, throughput_stream3_query4.txt,
throughput_stream3_query5.txt, throughput_stream4_query4.txt, and
throughput_stream4_query18.txt are of size 0. If I do the run again, a
different set of files is blank.

Can anyone give me an explanation?
Thanks,
--
Jenny Zhang
Open Source Development Lab Inc
12725 SW Millikan Way
Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31
---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Nov 11 '05 #1
2 Replies

Tom,

Thanks for your reply. It does look like we are hitting the limit of some
system resource, since the failures are random, but I am not sure it is
NFILE. I changed max_files_per_process to 200 and increased open files to
16384.
Here is my ulimit -a output:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 16384
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 67587
virtual memory (kbytes, -v) unlimited
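Note that ulimit -n only bounds descriptors per process; the limit Tom refers to is the kernel-wide file table. On Linux the table's usage can be read from /proc (a quick check, assuming a Linux /proc layout):

```shell
#!/bin/sh
# /proc/sys/fs/file-nr holds three kernel-wide numbers: file handles
# allocated, handles allocated but unused, and the system-wide maximum.
read -r allocated unused max < /proc/sys/fs/file-nr
echo "kernel file handles: $allocated in use, system-wide max $max"
echo "per-process limit (ulimit -n): $(ulimit -n)"
```

Watching the first number during a throughput run would show whether the table really fills up while the streams are active.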

And I still have the same problem. Since PG should fail if no more
files can be opened, my test should fail. But I got all the other
results back at the end of the run.

Thanks,
Jenny
On Fri, 2003-08-15 at 11:46, Tom Lane wrote:
> Jenny Zhang <je***@osdl.org> writes:
> > It works for power test(single database connection). In other words, I
> > can get execution plans for all the 21 queries if there is only one
> > stream connecting to the database.
> >
> > But for the throughput test(multiple database connections), some query
> > execution plan files are blank. It seems to be random, for example, for
> > the run at:
> > http://khack.osdl.org/stp/277495/results/plan/
> > throughput_stream1_query14.txt, throughput_stream3_query4.txt,
> > throughput_stream3_query5.txt, throughput_stream4_query4.txt,
> > throughput_stream4_query18.txt are of size 0. And if I do the run
> > again, a different set of files are blank.
>
> I wonder if you are not running out of kernel file table slots. The
> described behavior sounds rather like the explain-ing script is failing
> to notice that it couldn't open its output file. If you haven't bumped
> up the kernel's file table size (NFILE or similar parameter) and/or
> adjusted Postgres' max_files_per_process parameter downwards to hold
> Postgres to a small number of open files per backend, it's quite easy
> for Postgres to eat all your file table slots. Postgres itself
> generally will not complain (it has strategies for dealing with ENFILE
> failures), but an awful lot of other stuff will fall right over when
> it can't open files.
>
> regards, tom lane
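For reference, the parameter Tom mentions lives in postgresql.conf; a fragment like the following (the value 100 is purely illustrative, not a recommendation) holds each backend to a small descriptor pool:

```
# postgresql.conf
# Cap the number of files each backend keeps open at once. A lower
# value trades some file re-open overhead for fewer kernel file
# table slots consumed per connection.
max_files_per_process = 100
```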


Nov 11 '05 #2

Jenny Zhang <je***@osdl.org> writes:
> And I still have the same problem. Since PG should fail if no more
> files can be opened, my test should fail. But I got all the other
> results back at the end of the run.


No, you miss my point. Usually PG will *not* fail when it hits ENFILE.
The code is designed to close other open files until it can successfully
open the required file. Only a freshly-started backend (with no pool of
already-open files) is likely to report any actual failure from ENFILE.

So a database under load can run quite happily when there are zero free
file table slots in the kernel --- but everything else in the system is
likely to show random failures.
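One way to observe this pool from the outside is to count the descriptors a process is holding via /proc (assuming a Linux /proc layout). The sketch below inspects the current shell; substituting a postgres backend PID would show how close each backend sits to max_files_per_process:

```shell
#!/bin/sh
# Count the open file descriptors of a process through /proc/<pid>/fd.
# This sketch inspects the shell itself; substitute a backend PID from
# `ps` to watch a Postgres backend accumulate descriptors under load.
pid=$$
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "process $pid holds $fd_count open descriptors"
```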

regards, tom lane


Nov 11 '05 #3
