Bytes | Software Development & Data Engineering Community

Out of memory error when doing an update with IN clause

To all,

The facts:

PostgreSQL 7.4.0 running on BSD 5.1 on Dell 2650 with 4GB RAM, 5 SCSI
drives in hardware RAID 0 configuration. Database size with indexes is
currently 122GB. Schema for the table in question is at the end of this
email. The DB has been vacuumed full and analyzed. Between 8 and 12
million records are added to the table in question each night. An
analyze on the entire DB is done after the data has been loaded each night.

The command below was run from psql and failed. When I removed the last
3 elements in the IN clause (98,105,106) it worked fine (if I removed
only 1 or 2 it still failed). I then ran the same update statement
again with just those 3 remaining elements and it completed without any
problems. I'm trying to figure out why this would happen; the system was
not out of memory. Note that I have also run other queries of the
form:

SELECT x FROM f_commerce_impressions WHERE id IN (SELECT <some large
number of ids to match against id>), with up to 120k tuples in the
subselect, without problems.

Note that I have also posted another out of memory failure on this list
with subject line:

An out of memory error when doing a vacuum full
Thanks.

--sean

update f_commerce_impressions set servlet_key = 60 where servlet_key in
(68,69,70,71,87,90,94,91,98,105,106);
ERROR: out of memory
DETAIL: Failed on request of size 1024.
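
For reference, the split that succeeded amounts to running the statement in two batches (values exactly as above):

```sql
-- Batch 1: the original list minus the last 3 elements; completed fine.
UPDATE f_commerce_impressions SET servlet_key = 60
 WHERE servlet_key IN (68,69,70,71,87,90,94,91);

-- Batch 2: the remaining 3 elements, run afterwards; also completed fine.
UPDATE f_commerce_impressions SET servlet_key = 60
 WHERE servlet_key IN (98,105,106);
```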


\d f_commerce_impressions
Table "public.f_commerce_impressions"
Column | Type | Modifiers
--------------------+---------+----------------------------------------
id | integer | not null
page_view_key | integer | not null
content_key | integer | not null
provider_key | integer | not null
is_match | boolean | not null
date_key | integer | not null
time_key | integer | not null
area | text | not null
slot | integer | not null
cpc | integer | not null
servlet_key | integer | not null
web_server_name | text | not null default 'Not Available'::text
job_control_number | integer | not null
Indexes:
"f_commerce_impressions_pkey" primary key, btree (id)
"idx_commerce_impressions_date_dec_2003" btree (date_key) WHERE ((date_key >= 335) AND (date_key <= 365))
"idx_commerce_impressions_date_nov_2003" btree (date_key) WHERE ((date_key >= 304) AND (date_key <= 334))
"idx_commerce_impressions_page_view" btree (page_view_key)
"idx_commerce_impressions_servlet" btree (servlet_key)

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faqs/FAQ.html

Nov 12 '05 #1
Sean Shanny <sh**************@earthlink.net> writes:
> update f_commerce_impressions set servlet_key = 60 where servlet_key in
> (68,69,70,71,87,90,94,91,98,105,106);
> ERROR: out of memory


How many rows will this try to update? Do you have any triggers or
foreign keys in this table? I'm wondering if the list of pending
trigger events could be the problem ...
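
Triggers and foreign keys can be confirmed (or ruled out) from the system catalogs; a sketch using 7.4-era catalog names:

```sql
-- Any triggers on the table (in 7.4, FK constraints are enforced by
-- triggers, so they show up here too):
SELECT tgname FROM pg_trigger
 WHERE tgrelid = 'f_commerce_impressions'::regclass;

-- Foreign-key constraints recorded in pg_constraint:
SELECT conname FROM pg_constraint
 WHERE conrelid = 'f_commerce_impressions'::regclass AND contype = 'f';
```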

regards, tom lane


Nov 12 '05 #2
Tom,

There are no FK's or triggers on this or any of the tables in our
warehouse schema. Also I should have mentioned that this update will
produce 0 rows as these values do not exist in this table. We have a
dimension table named d_servlet that holds servlet names and id's.
This table is shared amongst several fact tables including the one in
question. This update was to ensure that the changes in the d_servlet
table would be reflected in f_commerce_impressions. It turns out that
the values did not exist in the table.

Here is output from the /usr/local/pgsql/data/serverlog when this fails:
TopMemoryContext: 40960 total in 4 blocks; 12920 free (25 chunks); 28040 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 57344 total in 3 blocks; 9000 free (1 chunks); 48344 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 936 free (0 chunks); 88 used
ExecutorState: 24576 total in 2 blocks; 5008 free (8 chunks); 19568 used
DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks); 534742296 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 1040384 total in 7 blocks; 9504 free (1 chunks); 1030880 used
idx_commerce_impressions_servlet: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_page_view: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_date_dec_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_date_nov_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
f_commerce_impressions_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_content: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_content: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_description_o_c_o_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_depend_depender_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_depend_reference_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_attrdef_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_servlet: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_session: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_referring_servlet: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_date_dec_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_pageviews_date_nov_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
f_pageviews_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 5976 free (18 chunks); 2216 used
DynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 1984 free (0 chunks); 6208 used
DynaHashTable: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: out of memory
DETAIL: Failed on request of size 1024.

Thanks

--sean


Nov 12 '05 #3
Sean Shanny <sh**************@earthlink.net> writes:
> There are no FK's or triggers on this or any of the tables in our
> warehouse schema. Also I should have mentioned that this update will
> produce 0 rows as these values do not exist in this table.
Hm, that makes no sense at all ...
> Here is output from the /usr/local/pgsql/data/serverlog when this fails:
> ...
> DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks); 534742296 used


Okay, so here's the problem: this hash table has expanded to 500+Mb which
is enough to overflow your ulimit setting. Some digging in the source
code shows only two candidates for such a hash table: a tuple hash table
used for grouping/aggregating, which doesn't seem likely for this query,
or a tuple-pointer hash table used for detecting already-visited tuples
in a multiple index scan.

Could we see the EXPLAIN output (no ANALYZE, since it would fail) for
the problem query? That should tell us which of these possibilities
it is.
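
The limit in question is the per-process data-segment ulimit of the shell that starts the postmaster; it can be checked like this (a sketch; the value to raise it to, if any, is a site-specific choice):

```shell
# Show the current data-segment limit (in kbytes on most shells); a
# 500MB+ backend hash table will overflow a limit around 512MB:
ulimit -d
# To raise it for the shell that launches the postmaster, e.g.:
#   ulimit -d 1048576   # example value, not a recommendation
```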

regards, tom lane


Nov 12 '05 #4
Tom,

As you can see I had to reduce the number of arguments in the IN clause
to even get the explain.

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94);

QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet,
   idx_commerce_impressions_servlet, idx_commerce_impressions_servlet
   on f_commerce_impressions (cost=0.00..1996704.34 rows=62287970 width=59)
   Index Cond: ((servlet_key = 68) OR (servlet_key = 69) OR (servlet_key = 70)
     OR (servlet_key = 71) OR (servlet_key = 87) OR (servlet_key = 90)
     OR (servlet_key = 94))
(2 rows)

Nov 12 '05 #5
Sean Shanny <sh**************@earthlink.net> writes:
> As you can see I had to reduce the number of arguments in the IN clause
> to even get the explain.


You mean you get an out-of-memory error just from EXPLAIN (without
ANALYZE)?? That makes even less sense ... the hash table we identified
before should not be created or filled during EXPLAIN.

regards, tom lane


Nov 12 '05 #6
Tom,

I run this:

explain update f_commerce_impressions set servlet_key = 60 where
servlet_key in (68,69,70,71,87,90,94,91,98,105,106);
ERROR: out of memory
DETAIL: Failed on request of size 1024.
I get this in the server log:

TopMemoryContext: 32768 total in 3 blocks; 6376 free (4 chunks); 26392 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 57344 total in 3 blocks; 28760 free (1 chunks); 28584 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 8192 total in 1 blocks; 3936 free (0 chunks); 4256 used
PortalHeapMemory: 23552 total in 5 blocks; 2888 free (0 chunks); 20664 used
ExecutorState: 24576 total in 2 blocks; 5032 free (8 chunks); 19544 used
DynaHashTable: 534773784 total in 65 blocks; 31488 free (255 chunks); 534742296 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 205344 free (1 chunks); 310752 used
idx_commerce_impressions_servlet: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_page_view: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_date_dec_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
idx_commerce_impressions_date_nov_2003: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
f_commerce_impressions_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280 used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 6120 free (0 chunks); 2072 used
DynaHash: 8192 total in 1 blocks; 7064 free (0 chunks); 1128 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 3016 free (0 chunks); 5176 used
DynaHashTable: 8192 total in 1 blocks; 4040 free (0 chunks); 4152 used
DynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
DynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ERROR: out of memory
DETAIL: Failed on request of size 1024.

Nov 12 '05 #7
Sean Shanny <sh**************@earthlink.net> writes:
> I run this:
> explain update f_commerce_impressions set servlet_key = 60 where
> servlet_key in (68,69,70,71,87,90,94,91,98,105,106);
> ERROR: out of memory
> DETAIL: Failed on request of size 1024.


Well, I have to confess to total bafflement. AFAICS the overflowing
hash table must be the duplicate-tuple hash table that nodeIndexscan.c
sets up --- but that table shouldn't get any entries loaded into it
if you just do EXPLAIN with no ANALYZE. Furthermore, it should only
get loaded with entries for tuples that match the WHERE clause, and
you said earlier that there are no rows with these servlet_key values.
The code involved is all new in 7.4, so finding a bug in it wouldn't
surprise me much, but I can't see how this could be happening.

It would help if you could rebuild with --enable-debug (if that wasn't
on already) and get a stack trace from the errfinish() call. Or, is
there any chance I could get access to your machine and look at the
problem for myself?
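
Getting that stack trace would look roughly like the following; `<backend_pid>` is a placeholder for the PID of the backend serving the psql session, and this assumes a binary built with --enable-debug:

```shell
# Attach gdb to the running backend, break at errfinish, then re-run the
# failing EXPLAIN from the attached psql session:
gdb /usr/local/pgsql/bin/postgres <backend_pid>
# (gdb) break errfinish
# (gdb) continue
#   ... run the failing EXPLAIN in psql ...
# (gdb) bt    <- this backtrace is what to send along
```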

regards, tom lane


Nov 12 '05 #8
Tom,

We can give you access to the machine, Tom. We just need to know what
sort of access you require. Since I don't have the ability to correspond
with you via email due to the Earthlink filter you have on, could you
send me another email address privately, or a phone number, or a smoke
signal, so we can exchange the info you need?

Thanks.

--sean


Nov 12 '05 #9
