How to perform a storage migration (from IBM to Hitachi) in an HACMP cluster on AIX?
1. Inform the application team and get their confirmation for the migration.
2. Ask the application team to stop the application.
3. Stop the cluster services: smitty clstop (bring the RGs offline).
4. Check and confirm that the RGs are offline: clRGinfo.
5. Confirm that the cluster service is in init state: lssrc -ls clstrmgrES.
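A quick sanity sketch for steps 4 and 5, run on both nodes (the clRGinfo path and the ST_INIT expectation assume a standard HACMP/PowerHA install):
lssrc -ls clstrmgrES | grep -i state      # expect: Current state: ST_INIT
/usr/es/sbin/cluster/utilities/clRGinfo   # every RG should show OFFLINE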
6. Check the bootlist and the boot image of the rootvg disks.
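A sketch of those checks, assuming hdisk0 and hdisk1 are the current rootvg disks (substitute your own); bosboot is only needed if the boot image is in doubt:
lspv | grep rootvg                 # identify the rootvg disks
bootlist -m normal -o              # display the current normal-mode bootlist
bosboot -ad /dev/hdisk0            # rebuild the boot image on a rootvg disk
bootlist -m normal hdisk0 hdisk1   # correct the bootlist if it is wrong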
7. Perform the sanity reboot (shutdown -Fr).
8. Once the servers come back online, start the cluster services (smitty clstart, with resource groups managed manually), then bring the RGs online: smitty hacmp -> C-SPOC -> Resource Groups and Applications -> Bring a Resource Group Online.
9. Ask the application team to check that the application is in as good a state as before.
10. Once you get the confirmation, ask them to stop the application again.
11. Stop the cluster services (smitty clstop -> bring the RGs offline).
12. Check and confirm that the cluster service is in init state and the RGs are offline (clRGinfo, lssrc -ls clstrmgrES).
13. Now proceed with the rootvg migration (alt_disk based):
alt_disk_install -C -B hdiskX hdiskY
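On recent AIX levels the same clone can be done with alt_disk_copy, which superseded the alt_disk_install cloning function; a minimal sketch, assuming hdiskX and hdiskY are the new target disks and -B leaves the bootlist untouched:
alt_disk_copy -B -d "hdiskX hdiskY"
Afterwards, lspv should show altinst_rootvg on the target disks.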
14. Once done with rootvg, ask storage to proceed with the datavg disks.
15. Delete the disks from the LPAR, then delete the VTDs and the disks on all 4 VIOS (see the sketch below).
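A minimal sketch of that removal, with assumed device names (hdiskZ = data disk on the LPAR, vtscsi1 = its VTD, hdisk5 = the backing device on the VIOS); repeat the VIOS part on all 4 VIOS:
rmdev -dl hdiskZ        # on the HA LPAR: remove the disk definition
rmvdev -vtd vtscsi1     # on the VIOS (padmin): remove the VTD mapping
rmdev -dev hdisk5       # on the VIOS (padmin): remove the backing disk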
16. Once storage removes the SVC LUNs, run cfgmgr and confirm they are gone using lsdev -Cc disk.
17. Once we have confirmed that the SVC disks no longer show up, ask storage to map the VSP (Hitachi) LUNs.
18. Once they are done adding the VSP LUNs, run cfgmgr and confirm the newly added VSP disks.
19. Now map the heartbeat disk to the VIOS first, then map the rest.
20. Once all the disks are added, change the disk attributes (health check), as sketched below.
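A sketch of the mapping and attribute changes for steps 19 and 20; hdisk10 (heartbeat LUN), vhost0, the VTD name, and the hcheck values are assumptions, so use your site standards:
chdev -dev hdisk10 -attr reserve_policy=no_reserve   # on each VIOS (padmin): allow dual-VIOS access
mkvdev -vdev hdisk10 -vadapter vhost0 -dev hb_vtd    # on each VIOS (padmin): map the heartbeat disk first
chdev -l hdiskX -a hcheck_interval=60 -a hcheck_mode=nonactive   # on the LPAR, after cfgmgr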
21. Verify the mapping (lsmap -all).
22. Configure the disks back on the HA LPARs (cfgmgr).
23. Set up the disk parameters (health check interval).
24. Check the VG and filesystems outside of HA before starting the cluster (varyonvg).
25. Vary off the VG again after checking that all the filesystems mount (varyoffvg <vg>); see the sketch below.
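A sketch of the out-of-cluster check for steps 24 and 25, assuming a shared VG named datavg and a filesystem /datafs01 (both hypothetical):
varyonvg datavg
lsvg -l datavg        # list the LVs and their mount points
mount /datafs01       # mount and check each filesystem (df -g)
umount /datafs01
varyoffvg datavg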
26. Repeat steps 24 and 25 on the other cluster node.
27. Verify the cluster config and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization).
28. Test the disk heartbeat before starting the cluster:
/usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r (receive mode, on the first node)
/usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t (transmit mode, on the second node)
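Run the receive side first, then the transmit side; if the new heartbeat LUN is usable, both ends should report that the link is operating normally.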
29. Start HA on both nodes and ensure the application comes up as part of HA.
Backout
=======
1. Check the disks (lspv).
2. Perform a cluster sync with no errors (smitty cl_sync).
3. Take a cluster snapshot (smitty cm_add_snap.dialog).
4. Verify that the last VIOS backup is available (viosbr -view -list).
5. Stop the cluster services (smitty clstop -> bring the RGs offline).
6. Ensure the filesystems are unmounted and the VGs are varied off (df -gt, lsvg -o).
7. Remove the data disks of the shared VG from both LPARs (rmdev -dl hdiskX).
8. Verify and remove the VTD mappings of the respective shared VG (rmdev -dev <vtd>).
9. Verify and put the backend VIOS disks into Defined state (rmdev -l hdiskX).
10. Inform storage to re-present the disks directly via SVC to all 4 VIOS.
11. Rescan the LUNs on the VIOS to bring the disks in via SVC (cfgmgr).
12. Ensure the VSP (Hitachi) mapping is removed and the VSP disks are no longer available.
13. Check that the IBM disks came back to Available state (lsdev -Cc disk | grep -i ibm).
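Since SVC LUNs are machine type 2145, grepping for that string can be more reliable than grepping for "ibm" (this assumes the device descriptions on your level include the machine type):
lsdev -Cc disk | grep -i 2145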
14. Check the disk parameters of the re-presented IBM disks.
15. Verify the corresponding vhost mappings in the last viosbr backup:
viosbr -view -file <viosbrbackup.tar.gz> -type svsa
16. Restore the corresponding vhost mappings on all the 4 VIOS:
viosbr -restore -file <backup_config_file_name> -inter -type vscsi
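The -inter flag runs the restore interactively, so you can confirm each mapping before viosbr recreates it and skip anything unrelated to the migrated disks.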
17. Verify the mapping (lsmap -all).
18. Configure the disks back on the HA LPARs (cfgmgr).
19. Set up the disk parameters (health check interval).
20. Check the VG and filesystems outside of HA before starting the cluster (varyonvg).
21. Vary off the VG again after checking that all the filesystems mount (varyoffvg <vg>).
22. Repeat steps 20 and 21 on the other cluster node.
23. Verify the cluster config and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization).
24. Test the disk heartbeat before starting the cluster:
/usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r (receive mode, on the first node)
/usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t (transmit mode, on the second node)
25. Start HA on both nodes and ensure the application comes up as part of HA.