
Failover of DB2 Tools Catalog on MSCS

Does anyone know how to correctly set up the Administration DB for failover clustering on an MSCS server running DB2?

The problem I have when I follow IBM's docs is that I install the Tools Catalog on the first node, but when I fail the cluster over to the second node, DB2's Task Center no longer works since the Tools Catalog still points to the first node. Is there any way to re-catalog the tools DB so that it appears under the virtual hostname/IP of the cluster?

So I have:
DB2nodeA - the first node in the MS cluster; I install the Administration Server here as per IBM's doc.
DB2nodeB - the second node in the MS cluster.
DB2Virt - the MS cluster 'virtual' name.

Failing the node over from A to B results in the Admin Server (Tools DB) and all the jobs in the Task Center still referencing node A, rendering it useless. Has anyone had any experience with a fix for this?
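
For what it's worth, this is roughly the re-catalog I was hoping would work, run from the machine where Task Center is used (TOOLSDB, node name DB2NODEA, and port 50000 are just placeholders for my actual tools catalog database name, cataloged node, and instance port):

    db2 uncatalog db TOOLSDB
    db2 uncatalog node DB2NODEA
    db2 catalog tcpip node DB2VIRT remote DB2Virt server 50000
    db2 catalog db TOOLSDB at node DB2VIRT
    db2 terminate

But I haven't found a combination like this that makes Task Center pick up the virtual name.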

Jan 26 '06 #1
Also cluster the DB2 Admin Server. When setting up a new job, try to use the cluster node instead of the physical server.


Jan 26 '06 #2
Could you possibly expand on the steps?

I did actually cluster the Admin Server too, but when you fail the node over from A to B, the Admin Server still thinks it's on node A. If I go to Control Center, all my DBs appear under 'Node A' rather than the cluster virtual name.

When I catalog the cluster name, it appears as a cataloged system in Control Center, but DB2 won't allow me to connect to it or catalog any DBs underneath it.
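
For reference, this is roughly what I'm doing from the command line (TOOLSDB and port 50000 stand in for my actual tools catalog database name and instance port):

    rem catalog the cluster virtual name as an administration system
    db2 catalog admin tcpip node DB2SRV remote DB2SRV system DB2SRV
    rem then try to catalog the instance node and the tools DB underneath it
    db2 catalog tcpip node DB2SRVN remote DB2SRV server 50000
    db2 catalog db TOOLSDB at node DB2SRVN
    db2 terminate

The first step is what makes DB2SRV show up as a cataloged system, but connections to it still fail.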

Jan 26 '06 #3
1. List all your resources, including DB2, from Cluster Admin.
2. When your clients connect to DB2, which IP do they use?
3. When you create a new task, which system do you use?


Jan 27 '06 #4
I had a similar problem with my cluster on Solaris, and the solution was that db2as wasn't in /var/db2/global.reg on serverB... I used db2greg to add it, but as the manual says: "You should only use this tool if requested to do so by DB2 Customer Support", so either contact IBM support to get some help or just do it....
I didn't get any help from IBM support or the presales support for DB2 to solve the problem, so I fixed it myself... Note that I worked for IBM in DB2 support earlier in my career, so I have some experience with these kinds of tools...
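
If it helps, the way I spotted the difference was just to dump the global registry on both machines and compare. db2greg -dump only reads the registry, so it is safe; the options for adding records are version-specific, so check the Command Reference for those:

    # on serverA
    db2greg -dump > /tmp/global.reg.serverA
    # on serverB
    db2greg -dump > /tmp/global.reg.serverB
    # copy one file across, then compare - the missing db2as record shows up here
    diff /tmp/global.reg.serverA /tmp/global.reg.serverB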

Jan 30 '06 #5
1. I have a DB2 Group with:
DB2 Resource
DB2DAS00
DB2CLUST (net name - with a net value of DB2SRV)
Public IP
Disk H:

2. Clients connect via the DB2 cluster net name value, which I set to DB2SRV.

3. When I create a DB2 task, I Terminal Services into the cluster (DB2SRV) and create the job, but in Task Center, when you choose the scheduler system from the drop-down, it says 'Node A', and this is the problem. If the node fails over in the middle of the night, my scheduled jobs will fail to run because they reference Node A rather than the virtual cluster name.
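
If it helps with the diagnosis, these are the directory listings the Task Center drop-down is presumably built from, and I can run them on either node:

    db2 list admin node directory
    db2 list node directory
    db2 list db directory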

Jan 31 '06 #6
Manually change the "run system" of all your tasks from "Node A" to "DB2SRV", and likewise the "DB2 instance and partition". When the job runs, it will then depend on the cluster instead of the physical node.
The tools DB also has to be in the clustered instance, so that when failover happens, the tools DB is still in the cluster.
When you fail over to "Node B", you have to use "DB2SRV" as the scheduler system. At that point you will not be able to create a new task the same way as on "Node A"; the scheduler system has to be changed, the same as the run system.


Jan 31 '06 #7
DB2SRV does not appear as an option - so do I need to catalog this as a node? These are the steps I take to create the cluster - correct me if I am doing something wrong here:

1. Create cluster - with 2 nodes - NodeA and NodeB

2. Install DB2 on each node.

3. Issue db2idrop db2 from NodeB to remove the DB2 instance on this node.

4. Run db2mscs -f:db2mscs.cfg from NodeA with the following cfg file parameters:
DB2_INSTANCE=DB2
CLUSTER_NAME=DB2CLS
DB2_LOGON_USERNAME=db2adm
DB2_LOGON_PASSWORD=****
GROUP_NAME=DB2 Group
IP_NAME=IP Address for DB2
IP_ADDRESS=172.16.1.100
IP_SUBNET=255.255.255.0
IP_NETWORK=Public
NETNAME_NAME=DB2CLUST
NETNAME_VALUE=DB2SRV
NETNAME_DEPENDENCY=IP Address for DB2
DISK_NAME=Disk X:

5. From Cluster Admin, specify NodeA as the preferred owner of the DB2 Group.

6. Cluster the Admin Server - on NodeA, set the DB2DAS00 service to Manual.

7. Stop the DB2 Admin service on all nodes.

8. Issue db2admin drop on NodeB

9. Run db2mscs -f:db2mscs.admin on NodeA with the following cfg:
DAS_INSTANCE=DB2DAS00
CLUSTER_NAME=DB2CLS
DB2_LOGON_USERNAME=db2adm
DB2_LOGON_PASSWORD=****
GROUP_NAME=DB2 Group
DISK_NAME=Disk X:

10. On NodeB, issue db2set -g db2adminserver=DB2DAS00

And that's it - I don't see any reference to the DB2SRV cluster name anywhere in DB2 Task Center or Control Center. So whenever I fail the node over, it still thinks it's NodeA, no matter which node I am on. I'm not sure if I need to catalog something here or change some system parameters or something to make this work like it should....
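
The only other things I can think of trying are to catalog the virtual name explicitly and to point the DAS scheduler at the tools catalog by hand - something along these lines (TOOLSDB and the SYSTOOLS schema are guesses for my actual tools catalog settings):

    db2 catalog admin tcpip node DB2SRV remote DB2SRV system DB2SRV
    db2 get admin cfg
    db2 update admin cfg using SCHED_ENABLE ON TOOLSCAT_INST DB2 TOOLSCAT_DB TOOLSDB TOOLSCAT_SCHEMA SYSTOOLS
    db2admin stop
    db2admin start

but I'm not sure whether that's the supported way to do it on a cluster.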

Feb 3 '06 #8
I did follow the same steps as you. But when setting up tasks, manually change the "run system" of all tasks from "Node A" to "DB2SRV", and likewise the "DB2 instance and partition".


Feb 6 '06 #9
