making identical table changes on many databases

Hi,
We have a significant (and increasing) number of identical databases
spread over numerous Linux servers, and making table changes has become
a pain because we currently issue the ALTER statement manually against
each database. We are looking for ways to simplify this process. One
idea is to put the ALTER statement into a script file and have another
script connect to each database and execute that script. Some sort of
error handling would be needed, of course. Is anyone doing anything
similar, or does anyone have other ideas?
Regards,
John Enevoldson

Aug 3 '06 #1
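
A minimal sketch of the kind of driver script described in the question, assuming a DB2 environment with the db2 command-line processor (the posts don't actually name the database product) and illustrative file names: dblist.txt holds one database alias per line, alter.sql holds the ALTER statements.

#!/bin/sh
# run_alter.sh - sketch only: apply one SQL script to every database in dblist.txt
# Assumes the DB2 command-line processor; dblist.txt lists one database alias per line.

SQLFILE=alter.sql

for DB in $(cat dblist.txt); do
    echo "=== $DB ==="
    if db2 connect to "$DB"; then
        db2 -tvf "$SQLFILE"      # -t: ';' terminator, -v: echo statements, -f: read from file
        RC=$?
        # CLP return codes: 0 ok, 1 no rows, 2 warning, 4 SQL error, 8 system error
        if [ "$RC" -ge 4 ]; then
            echo "ERROR: $SQLFILE failed against $DB (rc=$RC)"
        fi
        db2 connect reset
    else
        echo "ERROR: could not connect to $DB"
    fi
done
db2 terminate

The error handling rests on the CLP return code: anything of 4 or above indicates an SQL or system error rather than a warning.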
I've done exactly that. The implementation used a list of the databases
to be processed, one at a time, which made it easy to change which
databases were updated without recoding the procedure. I also included
the following:
1. The ability to use an input parameter to update only one or more
specified databases. (Great for testing an update and for incremental
implementation.)
2. Output from the update(s) was appended to a file - feed it into a
"tee" command with the append option so you can see the results and have
a permanent record of the updates. The output file can then be fed into
grep to verify which databases were updated.
3. A timestamp appended to the file before writing any output to it. If
you have multiple DBAs, include the userid with the timestamp.

Keep the output file from the updates as a permanent record of the
completed work. I also used the name of the file containing the update
statements to form the output file name. (A sketch of these refinements
follows below.)
Phil Sherman

Aug 3 '06 #2
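
A sketch of how the refinements Phil describes might be layered onto the driver loop above: an optional set of database names on the command line, a timestamp and userid header, output appended through tee, and a log file named after the SQL script. As before, the DB2 command-line processor is assumed and the file names are illustrative.

#!/bin/sh
# apply_update.sh SQLFILE [database ...]   (sketch; file names are illustrative)
# With no database arguments, every alias in dblist.txt is processed;
# naming one or more databases limits the run - handy for testing and
# incremental rollout.

SQLFILE=$1
[ -n "$SQLFILE" ] || { echo "usage: $0 SQLFILE [database ...]" >&2; exit 1; }
shift
if [ $# -gt 0 ]; then
    DBLIST="$*"
else
    DBLIST=$(cat dblist.txt)
fi

# Log file named after the SQL script, e.g. alter.sql -> alter.sql.log
LOG=$(basename "$SQLFILE").log

{
    # Timestamp plus userid header, written before any update output
    echo "==== $(date '+%Y-%m-%d %H:%M:%S') run by $(id -un): $SQLFILE ===="
    for DB in $DBLIST; do
        echo "---- $DB ----"
        if db2 connect to "$DB"; then
            db2 -tvf "$SQLFILE"
            db2 connect reset
        else
            echo "ERROR: could not connect to $DB"
        fi
    done
    db2 terminate
} 2>&1 | tee -a "$LOG"    # -a appends, so the log accumulates a permanent record

Afterwards, something like  grep '^---- ' alter.sql.log  shows which databases were touched across all recorded runs.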
Hey, guys,

I'm curious: why do you have lots of copies of identical databases?
Do they have the same schema but different data, or is the data
identical too? Is it an embedded application?

Cheers,
-Paul LaPointe

Aug 4 '06 #3
Development1, Development2, unit test, systems test, audit/validation,
production. And that doesn't include cases where there are multiple
production systems for different business units that insist their data
must be kept totally separate (including hardware) from other business
units. Yes, I know it sounds insane, but when that's the way a client
wants to run their business, or that's the way their auditors tell them
it has to be done, the DBA can't arbitrarily tell them they can't keep
everything separate.

Welcome to the real world!

Phil Sherman

Aug 5 '06 #4
