Bytes | Software Development & Data Engineering Community
transaction log is full

I am using DB2 UDB v8.1. If the "retain log records" parameter is off, I
assume it uses circular logging. The documentation explains that log
records fill the log files and then overwrite the oldest records in the
first log file. If that is so, why do we still get the "transaction log
is full" error? On the other hand, if the log is never cleaned up, won't
I have to keep increasing the log file size and the number of log files?
Our application only inserts one record at a time and auto-commit is on,
but we still get the log full error.
Nov 12 '05 #1
Either increase the number of logs, commit more regularly, or configure your
system to roll back any single transaction that fills more than a specific
number of logs (the max_log parameter).

Your final statement that you are committing after each insert is probably
incorrect.

BTW, using circular logging means you cannot roll forward after a restore,
meaning you have no proper recovery. It's OK for test systems with random
data, but it should never be used in production.
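For readers hitting the same problem, the first and last of those remedies can be sketched as DB2 CLP commands. MYDB is a placeholder database name and the values are illustrative only; MAX_LOG and NUM_LOG_SPAN exist only at later v8 fixpak levels, and their exact semantics (percentage of log space vs. number of files) should be checked against your release's documentation:

```
# Give the database more log space (placeholder values; needs enough disk)
db2 update db cfg for MYDB using LOGFILSIZ 4096
db2 update db cfg for MYDB using LOGPRIMARY 20 LOGSECOND 40

# If your fixpak supports them: limit how much log a single unit of work
# may consume, so one runaway transaction is rolled back instead of
# exhausting the whole log (check the documented semantics first)
db2 update db cfg for MYDB using MAX_LOG 50
db2 update db cfg for MYDB using NUM_LOG_SPAN 10
```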

"xixi" <da****@yahoo.com> wrote in message
news:c0**************************@posting.google.c om...

Nov 12 '05 #2
"Mark Yudkin" <my***********************@nospam.org> wrote in message
news:c727pb$kc8$1@ngspool-

BTW, using circular logging means you cannot roll forward after a restore,
meaning you have no proper recovery. It's OK for test systems with random
data, but should never be used in production.

It is OK to use circular logging in many data warehouse applications where
the data can be reloaded from an external source. If a full or incremental
offline backup is taken after each load, that is usually sufficient.

Obviously, in a transaction system, roll forward recovery is highly
desirable, if not mandatory.
Nov 12 '05 #4
It is circular logging. Each insert, delete, or update is always a single
row, and with auto-commit on it should commit right away. I am using a
prepared statement to do the write, so ps.executeUpdate() should commit
the write immediately. I did a test: I never called connection.commit(),
and the data still showed up in the table. I still don't understand why,
when I only change one row at a time and then commit, I still get the
log full error.
Nov 12 '05 #6
That would be read-only data.

BTW, many data warehousing applications have a multitude of data sources and
up to daily updates, so that "reloading" is not really an option. In our
case, even losing 24 hours of data is a major, "stay up until midnight" task
(and that was caused by a software bug we introduced in the update logic).

"Mark A" <ma@switchboard.net> wrote in message
news:Zj**************@news.uswest.net...
It is OK to use circular logging in many data warehouse applications where
the data can be reloaded from an external source. If a full or incremental
offline backup is taken after each load, that is usually sufficient.

Obviously, in a transaction system, roll forward recovery is highly
desirable, if not mandatory.

Nov 12 '05 #8
My contention was that you are not committing as you think you are. I
strongly suggest using DB2 monitoring to figure out what is _really_
happening in the database. Staring at your code isn't going to tell you
that, especially with statements like "should commit" when it is apparent
from the error message that it isn't. You can debug your code later.

"xixi" <da****@yahoo.com> wrote in message
news:c0**************************@posting.google.c om...

Nov 12 '05 #10
I have changed logfilsiz to 1024, with 15 logprimary and 20 logsecond, so
the total is about 140 MB of log space (35 files of 4 MB each), but I still
get error SQL0964 for a single insert of one row about 230 bytes long.
I don't understand.
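As a sanity check on the arithmetic: logfilsiz is counted in 4 KB pages, so the configuration quoted above works out as follows (a standalone sketch; the parameter names simply mirror the DB2 configuration keys):

```java
// Total active log space implied by LOGFILSIZ / LOGPRIMARY / LOGSECOND.
// LOGFILSIZ is counted in 4 KB pages.
public class LogSpace {
    static final long PAGE_SIZE = 4096L; // DB2 log pages are 4 KB

    static long totalLogBytes(int logfilsiz, int logprimary, int logsecond) {
        return (long) (logprimary + logsecond) * logfilsiz * PAGE_SIZE;
    }

    public static void main(String[] args) {
        // Values from the post: 1024-page files, 15 primary + 20 secondary
        long bytes = totalLogBytes(1024, 15, 20);
        System.out.println(bytes / (1024 * 1024) + " MB"); // prints "140 MB"
    }
}
```

Roughly 140 MB of log space, which a 230-byte committed insert cannot exhaust on its own; that points at something else holding the log files active.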
Nov 12 '05 #12
From db2 get db cfg I have log_retain_status = NO, user_exit_status = OFF,
logfilsiz = 1024, 15 primary logs, 20 secondary logs. The disk still has
over 10 GB free, but I found only 33 files under
\DB2\NODE0000\SQL00002\SQLOGDIR, and I get error SQL0964. Why is that?
Nov 12 '05 #14
xixi wrote:

I would guess that you have other applications running that have written
log records but have not yet committed (or rolled back) their changes.
Those logs are therefore still active and cannot be reused.
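The effect described here can be illustrated with a toy model (plain Java, no DB2 involved, all names hypothetical): the log is a ring of files, a file can be reused only once every transaction with records in it has ended, and a single open transaction pins one file until the writer wraps around to it:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of circular logging. One open (uncommitted) transaction pins a
// single file in the ring; the writer fills the remaining files with
// committed work and hits "transaction log full" when it wraps around.
public class CircularLog {
    static int filesFilledBeforeFull(int numFiles, int pinnedFile) {
        Set<Integer> stillActive = new HashSet<>();
        stillActive.add(pinnedFile);       // held by the open transaction
        int filled = 0;
        int f = (pinnedFile + 1) % numFiles;
        while (!stillActive.contains(f)) { // stop when reuse is impossible
            filled++;                      // committed work fills this file
            f = (f + 1) % numFiles;
        }
        return filled;
    }

    public static void main(String[] args) {
        // 35 files (15 primary + 20 secondary) with one file pinned:
        System.out.println(filesFilledBeforeFull(35, 0)); // prints 34
    }
}
```

In the model, one pinned file out of 35 leaves only 34 files for committed work, however small each committed change is; in real DB2 the application holding the oldest uncommitted transaction has the same effect.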

--
Knut Stolze
Information Integration
IBM Germany / University of Jena
Nov 12 '05 #16
I've said twice already: you need to monitor what's happening. Staring at
your source code or posting your configuration here isn't going to help
you. Use the DB2 monitoring facilities to find out what is actually going
on.

"xixi" <da****@yahoo.com> wrote in message
news:c0**************************@posting.google.c om...

Nov 12 '05 #18
Hi xixi,

Take a snapshot for all applications to see which application is using
your active log space. Also check your first active log file. Do you
perhaps have num_log_span or max_log set? What is in your db2diag.log
and the notify log?
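A sketch of the CLP commands these checks translate to, assuming a database named MYDB (placeholder); the exact snapshot field names may differ slightly between fixpak levels:

```
# Which application is holding the oldest transaction / using the log?
db2 get snapshot for database on MYDB
# Per-application detail (application status, unit-of-work state)
db2 get snapshot for applications on MYDB
# "First active log file" and the log-related parameters
db2 get db cfg for MYDB
```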

Stefan
xixi wrote:

Nov 12 '05 #20
