I've done exactly that. My implementation read a list of databases to be
processed one at a time, which made it easy to change which databases were
covered without recoding the procedure. I also included the following:
1. The ability to use an input parameter to update only one or more
specific databases. (Great for testing an update and for incremental
implementation.)
2. Output from the update(s) was appended to a file - feed it into a
"tee" command with the append option so you can see the results and still
have a permanent record of the updates. The output file can then be fed
into grep to verify which databases were updated.
3. A timestamp was appended to the file before any output was written to
it. If you have multiple DBAs, include the userid with the timestamp.
Keep the output file from the updates as a permanent record of the
completed work. I also used the name of the file containing the update
statements to form the output file name.
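A minimal sketch of what such a driver script could look like, assuming the
DB2 command line processor (db2) is in the path. The file names and the
script name are hypothetical: dblist.txt holds one database name per line,
the first argument is the .sql file of update statements, and any further
arguments restrict the run to just those databases.

#!/bin/bash
# Sketch only - adjust names and paths to your environment.
SQLFILE="$1"; shift
DBLIST="dblist.txt"
LOGFILE="$(basename "$SQLFILE" .sql).log"   # output file named after the update script

# Timestamp and userid header, kept as a permanent record of the run
echo "=== $(date) run by $(whoami): $SQLFILE ===" >> "$LOGFILE"

while read -r DB; do
    [ -z "$DB" ] && continue
    # If specific databases were named on the command line, skip the rest
    if [ $# -gt 0 ]; then
        case " $* " in
            *" $DB "*) ;;        # named - process it
            *) continue ;;       # not named - skip it
        esac
    fi

    echo "--- updating $DB ---"   | tee -a "$LOGFILE"
    db2 connect to "$DB"    2>&1 | tee -a "$LOGFILE"
    db2 -tvf "$SQLFILE"     2>&1 | tee -a "$LOGFILE"
    rc=${PIPESTATUS[0]}          # exit code of db2, not of tee
    db2 connect reset       2>&1 | tee -a "$LOGFILE"
    if [ "$rc" -ge 4 ]; then     # CLP return codes 4 and 8 indicate errors
        echo "ERROR: $DB returned rc=$rc" | tee -a "$LOGFILE"
    fi
done < "$DBLIST"

db2 terminate >> "$LOGFILE" 2>&1

Running it as, say, ./apply_update.sh add_column.sql produces add_column.log,
and grep '^--- updating' add_column.log shows which databases were touched.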
Phil Sherman
jo*************@pulsen.se wrote:
Hi,
We have a significant (and increasing) number of identical databases
spread over numerous servers (Linux) and are finding it a pain to make
table changes, since we are currently issuing the alter command manually
against each db. We are looking for ways to simplify this process - one
idea we have is to put the alter stmt into a script file and have
another script which issues a connect to each db and then executes the
script containing the alter stmt. Some sort of error handling would be
needed, of course. Is anyone doing anything similar, or does anyone have
any other ideas?
Regards,
John Enevoldson