
Getting an out of memory failure.... (long email)

To all,

Running into an out of memory error on our data warehouse server. This
occurs only with our data from the 'September' section of a large fact
table. The exact same query running over data from August, or any prior
month for that matter, works fine, which is why this is so weird. Note that
June 2004 through today is stored in the same f_pageviews table.

Nothing has changed on the server in the last couple of months. I upgraded
to 7.4.5 from 7.4.3 today thinking maybe that would solve it, but no joy.

Configuration and details: (Our postgresql.conf file is at the end of this
email)

Dell 2650 with 2 CPUs and 4GB RAM running Linux 2.6.5-1.358smp with
hyper-threading turned off. We are attached to a fully configured Apple
X-Raid system running in a RAID50 configuration (2.8TB formatted). The file
system is ext2. The warehouse is using about 800GB with indexes.

Thanks.

--sean

We are looking at these two tables:

Table "public.f_pagev iews"
Column | Type | Modifiers
------------------------+---------+----------------------------------------
id | integer | not null
date_key | integer | not null
time_key | integer | not null
content_key | integer | not null
location_key | integer | not null
session_key | integer | not null
persistent_cook ie_key | integer | not null
ip_key | integer | not null
referral_key | integer | not null
servlet_key | integer | not null
tracking_key | integer | not null
provider_key | text | not null
marketing_campa ign_key | integer | not null
orig_airport | text | not null
dest_airport | text | not null
commerce_page | boolean | not null default false
job_control_num ber | integer | not null
sequenceid | integer | not null default 0
url_key | integer | not null
useragent_key | integer | not null
web_server_name | text | not null default 'Not Available'::tex t
cpc | integer | not null default 0
referring_servl et_key | integer | not null default 1
first_page_key | integer | not null default 1
newsletterid_ke y | text | not null default 'Not Available'::tex t
userid_key | integer | not null
pool | text | default 'Not Available'::tex t
cpm | integer | default 0
vendor_key | integer | default 1
teaser_key | integer | default 1
query_key | integer | default 1
guid | text | default 'NA'::text
Indexes:
"idx_pageviews_ primary" unique, btree (id, date_key)
"idx_pageviews_ date" btree (date_key)
"idx_pageviews_ session" btree (session_key)

82+ million records in the September section we are looking at, about
425,677,597 records total in the table.

The following table is created each time we run the report:

Table "public.addloc_ segmented_sub"
Column | Type | Modifiers
--------+---------+-----------
userid | integer |
subage | integer |
Indexes:
"idx_addloc_seg mented_sub" btree (userid)

60,605 records.

We attempt to issue this query (we force a table scan, as it is much faster
than using the date_key index due to the size of the table):

SET enable_indexscan = false;

SELECT subage, COUNT(DISTINCT t1.session_key), COUNT(DISTINCT t1.userid_key)
FROM f_pageviews t1
JOIN addloc_segmented_sub t0 ON (t1.userid_key = t0.userid)
WHERE t1.date_key BETWEEN 610 AND 631
GROUP BY 1;
Explain:

GroupAggregate  (cost=11764067.79..11764067.82 rows=2 width=12)
  ->  Sort  (cost=11764067.79..11764067.80 rows=2 width=12)
        Sort Key: t0.subage
        ->  Hash Join  (cost=11762857.88..11764067.78 rows=2 width=12)
              Hash Cond: ("outer".userid = "inner".userid_key)
              ->  Seq Scan on addloc_segmented_sub t0  (cost=0.00..905.92 rows=60792 width=8)
              ->  Hash  (cost=11762857.88..11762857.88 rows=1 width=8)
                    ->  Seq Scan on f_pageviews t1  (cost=0.00..11762857.88 rows=1 width=8)
                          Filter: ((date_key >= 610) AND (date_key <= 631))

I cannot run an EXPLAIN ANALYZE; since it actually executes the query, it
dies with the same error.
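As an aside, the enable_indexscan override can be confined to a single
transaction with SET LOCAL so it does not leak into other statements on the
same connection; a minimal sketch, reusing the query above:

BEGIN;
SET LOCAL enable_indexscan = false;
SELECT subage, COUNT(DISTINCT t1.session_key), COUNT(DISTINCT t1.userid_key)
FROM f_pageviews t1
JOIN addloc_segmented_sub t0 ON (t1.userid_key = t0.userid)
WHERE t1.date_key BETWEEN 610 AND 631
GROUP BY 1;
COMMIT;

The setting reverts automatically at transaction end.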

Whether I run it from the Java-based report program or from psql, I get the
same out of memory error. This is the Java-based version:
ERROR [WAREHOUSE] <20:36:09> org.postgresql.util.PSQLException: ERROR: out of memory

    at org.postgresql.util.PSQLException.parseServerError(PSQLException.java:139)
    at org.postgresql.core.QueryExecutor.executeV3(QueryExecutor.java:154)
    at org.postgresql.core.QueryExecutor.execute(QueryExecutor.java:101)
    at org.postgresql.core.QueryExecutor.execute(QueryExecutor.java:43)
    at org.postgresql.jdbc1.AbstractJdbc1Statement.execute(AbstractJdbc1Statement.java:515)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:50)
    at org.postgresql.jdbc1.AbstractJdbc1Statement.executeQuery(AbstractJdbc1Statement.java:231)
    at com.TripResearch.warehouse.reports.AddLocUserCreationAgeActivityReport._process(AddLocUserCreationAgeActivityReport.java:523)
    at com.TripResearch.warehouse.reports.AddLocUserCreationAgeActivityReport.initiateDataGathering(AddLocUserCreationAgeActivityReport.java:444)
    at com.TripResearch.warehouse.reports.AddLocUserCreationAgeActivityReport.main(AddLocUserCreationAgeActivityReport.java:140)

Here is the psql error message:

tripmaster=# set enable_indexscan = false;
SET
tripmaster=# SELECT subage, COUNT(DISTINCT t1.session_key), COUNT(DISTINCT t1.userid_key)
tripmaster-# FROM f_pageviews t1
tripmaster-# JOIN addloc_segmented_sub t0 ON (t1.userid_key = t0.userid)
tripmaster-# WHERE t1.date_key BETWEEN 610 AND 631
tripmaster-# GROUP BY 1;
ERROR:  out of memory
DETAIL:  Failed on request of size 60.


The server side error message is below:
TopMemoryContext: 32768 total in 3 blocks; 4472 free (5 chunks); 28296 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 57344 total in 3 blocks; 22376 free (0 chunks); 34968 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
ExecutorState: 24576 total in 2 blocks; 12480 free (2 chunks); 12096 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
HashBatchContext: -119689104 total in 512 blocks; 8416 free (18 chunks); -119697520 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
AggContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 33600 free (3 chunks); 482496 used
idx_addloc_segmented_sub: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_addloc_users_date: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_temp_2930752333: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_description_o_c_o_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_depend_depender_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_depend_reference_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
d_date_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_date_5: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
idx_date_4: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_date_3: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_date_2: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_date_1: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 5760 free (0 chunks); 2432 used
DynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 1984 free (0 chunks); 6208 used
DynaHashTable: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
2004-09-27 20:46:56 ERROR:  out of memory
DETAIL:  Failed on request of size 60.


Our configuration file:
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
# name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have
# to SIGHUP the postmaster for the changes to take effect, or use
# "pg_ctl reload".
#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

tcpip_socket = true
max_connections = 100
# note: increasing max_connections costs about 500 bytes of shared
# memory per connection slot, in addition to costs from shared_buffers
# and max_locks_per_transaction.
#superuser_reserved_connections = 2
#port = 5432
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#virtual_host = '' # what interface to listen on; defaults to any
#rendezvous_name = '' # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60 # 1-600, in seconds
#ssl = false
#password_encryption = true
#krb_server_keyfile = ''
#db_user_namespace = false
#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------

# - Memory -

shared_buffers = 10000 # min 16, at least max_connections*2, 8KB each
sort_mem = 65536 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB

# - Free Space Map -

#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000 # min 100, ~50 bytes each

# - Kernel Resource Usage -

#max_files_per_process = 1000 # min 25
#preload_libraries = ''
#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------

# - Settings -

fsync = true # turns forced synchronization on or off
wal_sync_method = fsync # the default varies across platforms:
                        # fsync, fdatasync, open_sync, or open_datasync
wal_buffers = 64 # min 4, 8KB each

# - Checkpoints -

checkpoint_segments = 30 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1800 # range 30-3600, in seconds
checkpoint_warning = 180 # 0 is off, in seconds
commit_delay = 50000 # range 0-100000, in microseconds
#commit_siblings = 5 # range 1-1000
#---------------------------------------------------------------------------
# QUERY TUNING
#---------------------------------------------------------------------------

# - Planner Method Enabling -

#enable_hashagg = true
#enable_hashjoin = true
#enable_indexscan = true
#enable_mergejoin = true
#enable_nestloop = true
#enable_seqscan = true
#enable_sort = true
#enable_tidscan = true

# - Planner Cost Constants -

effective_cache_size = 300000 # typically 8KB each
#random_page_cost = 4 # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01 # (same)
#cpu_index_tuple_cost = 0.001 # (same)
#cpu_operator_cost = 0.0025 # (same)

# - Genetic Query Optimizer -

#geqo = true
#geqo_threshold = 11
#geqo_effort = 1
#geqo_generations = 0
#geqo_pool_size = 0 # default based on tables in statement,
                    # range 128-1024
#geqo_selection_bias = 2.0 # range 1.5-2.0

# - Other Planner Options -

#default_statistics_target = 10 # range 1-1000
#from_collapse_limit = 8
#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs
#---------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#---------------------------------------------------------------------------

# - Syslog -

#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'

# - When to Log -

#client_min_messages = notice # Values, in order of decreasing detail:
                              # debug5, debug4, debug3, debug2, debug1,
                              # log, info, notice, warning, error

#log_min_messages = notice # Values, in order of decreasing detail:
                           # debug5, debug4, debug3, debug2, debug1,
                           # info, notice, warning, error, log, fatal, panic

#log_error_verbosity = default # terse, default, or verbose messages

#log_min_error_statement = panic # Values in order of increasing severity:
                                 # debug5, debug4, debug3, debug2, debug1,
                                 # info, notice, warning, error, panic(off)

#log_min_duration_statement = -1 # Log all statements whose
                                 # execution time exceeds the value, in
                                 # milliseconds. Zero prints all queries.
                                 # Minus-one disables.

#silent_mode = false # DO NOT USE without Syslog!

# - What to Log -

#debug_print_parse = false
#debug_print_rewritten = false
#debug_print_plan = false
#debug_pretty_print = false
#log_connections = false
#log_duration = false
#log_pid = false
#log_statement = false
log_timestamp = true
#log_hostname = false
#log_source_port = false
#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------

# - Statistics Monitoring -

#log_parser_stats = false
#log_planner_stats = false
#log_executor_stats = false
#log_statement_stats = false

# - Query/Index Statistics Collector -

#stats_start_collector = true
stats_command_string = true
#stats_block_level = false
stats_row_level = true
#stats_reset_on_server_start = true
#---------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#---------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '$user,public' # schema names
#check_function_bodies = true
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = false
#statement_timeout = 0 # 0 is disabled, in milliseconds

# - Locale and Formatting -

#datestyle = 'iso, mdy'
#timezone = unknown # actually, defaults to TZ environment setting
#australian_timezones = false
#extra_float_digits = 0 # min -15, max 2
#client_encoding = sql_ascii # actually, defaults to database encoding

# These settings are initialized by initdb -- they may be changed
lc_messages = 'en_US.UTF-8' # locale for system error message strings
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting

# - Other Defaults -

#explain_pretty_print = true
#dynamic_library_path = '$libdir'
#max_expr_depth = 10000 # min 10
#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------

#deadlock_timeout = 1000 # in milliseconds
#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each
#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------

# - Previous Postgres Versions -

#add_missing_from = true
#regex_flavor = advanced # advanced, extended, or basic
#sql_inheritance = true

# - Other Platforms & Clients -

#transform_null_equals = false


Nov 23 '05 #1
Sean Shanny <sh**************@earthlink.net> writes:
-> Seq Scan on f_pageviews t1 (cost=0.00..11762857.88 rows=1 width=8)
     Filter: ((date_key >= 610) AND (date_key <= 631))


How many rows are actually going to match that filter condition? (The
symptoms seem to indicate that the answer is "a whole lot", not "1".)

I speculate that you're overdue for an ANALYZE on this table, and that
the planner thinks this scan is going to yield no rows because the
stats it has say there are no rows with date_key >= 610.

regards, tom lane
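
A quick way to check what the planner currently believes about date_key,
and then to refresh those statistics, is a sketch along these lines
(pg_stats is the standard statistics view):

SELECT attname, n_distinct, most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'f_pageviews' AND attname = 'date_key';

ANALYZE VERBOSE f_pageviews;

If the histogram tops out below 610, the rows=1 estimate in the plan is
explained, and the executor then hashes the ~82 million matching rows it
did not expect, which matches the huge (overflowed) HashBatchContext totals
in the memory dump.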


Nov 23 '05 #2
Tom,

We have been running pg_autovacuum on this entire DB so I did not even
consider that. I am running an analyze verbose now.

We should see about 82mm rows that will match the Filter: ((date_key >= 610)
AND (date_key <= 631))

I'll update in an hour or so.

--sean
On 9/27/04 11:49 PM, "Tom Lane" <tg*@sss.pgh.pa.us> wrote:
Sean Shanny <sh**************@earthlink.net> writes:
-> Seq Scan on f_pageviews t1 (cost=0.00..11762857.88 rows=1 width=8)
     Filter: ((date_key >= 610) AND (date_key <= 631))


How many rows are actually going to match that filter condition? (The
symptoms seem to indicate that the answer is "a whole lot", not "1".)

I speculate that you're overdue for an ANALYZE on this table, and that
the planner thinks this scan is going to yield no rows because the
stats it has say there are no rows with date_key >= 610.

regards, tom lane



Nov 23 '05 #3
Tom,

The Analyze did in fact fix the issue. Thanks.

--sean

Nov 23 '05 #4
Sean Shanny wrote:
Tom,

The Analyze did in fact fix the issue. Thanks.

--sean


Given that you are using pg_autovacuum, you have to consider a few points:

1) There is a buggy version out there that will not analyze big tables.
2) pg_autovacuum can fail in scenarios with big tables that are not heavily
updated or inserted into.

For point 1), I suggest checking your logs to see how the total row count
for your table is displayed; the correct version shows the row count as a
float:
[2004-09-28 17:10:47 CEST] table name: empdb."public"."user_logs"
[2004-09-28 17:10:47 CEST] relid: 17220; relisshared: 0
[2004-09-28 17:10:47 CEST] reltuples: 5579780.000000; relpages: 69465
[2004-09-28 17:10:47 CEST] curr_analyze_count: 171003; curr_vacuum_count: 0
[2004-09-28 17:10:47 CEST] last_analyze_count: 165949; last_vacuum_count: 0
[2004-09-28 17:10:47 CEST] analyze_threshold: 4464024; vacuum_threshold: 2790190

For point 2), I suggest running an ANALYZE from cron during the day.
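A minimal sketch of that (the schedule is hypothetical; the database and
table names are from the thread, so adjust to your load):

-- crontab entry, e.g. nightly at 02:30:
--   30 2 * * *  psql -d tripmaster -c 'ANALYZE f_pageviews;'
-- which simply runs:
ANALYZE f_pageviews;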

Regards
Gaetano Mendola

Nov 23 '05 #5
