Sunday, February 9, 2020

How to restore the operating system from a backup in AIX?


Log on to the NIM server
====================
uname -a
cd /export/mksysbs
ls -l testserver.mksysb (find the mksysb which is required for the restore)


Check whether the client machine is already defined; if not, define it
=======================================================================

lsnim -l testserver
If the machine is not defined, follow these steps:
smitty nim -> Perform NIM Administration Tasks -> Manage Machines -> Define a Machine -> enter the hostname of the client machine and press Enter
lsnim -l testserver
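Note: the machine can also be defined directly from the command line; the following is only a sketch, assuming the client resolves by hostname and that NIM can locate a matching network via the find_net keyword (adjust the platform and kernel settings for your environment):

nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
    -a if1="find_net testserver 0" -a cable_type1=N/A testserver
lsnim -l testserver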


Define the mksysb resource like below.
=======================================

smitty nim_mkres -> define resource -> mksysb -> Resource Name, Resource Type, Server of Resource, Location of Resource.
lsnim -l testserver_res_mksysb  
(or)
nim -o define -t mksysb -a server=master -a location=/export/mksysbs/testserver.mksysb testserver_res_mksysb

Note: testserver_res_mksysb is the mksysb resource of the server.
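If the SPOT used later for the bos_inst does not exist yet, it can be built from the same mksysb resource; a sketch only, assuming the SPOT name aix61_TL09_SP10_spot and the location /export/spot (change both to match your setup):

nim -o define -t spot -a server=master -a source=testserver_res_mksysb -a location=/export/spot aix61_TL09_SP10_spot
lsnim -l aix61_TL09_SP10_spot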


Allocate the SPOT and mksysb to the client and run bos_inst
==============================================================

nim -o allocate -a mksysb=testserver_res_mksysb -a spot=aix61_TL09_SP10_spot testserver
nim -o bos_inst -a source=mksysb -a accept_licenses=yes -a boot_client=no testserver
(or)
nim -o allocate -a spot=aix61_TL09_SP10_spot -a mksysb=testserver_res_mksysb testserver
nim -o bos_inst -a source=mksysb -a accept_licenses=yes -a boot_client=no testserver
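Before moving to the HMC, it is worth confirming that NIM has prepared the network boot; a quick check (output wording may vary by AIX level):

lsnim -l testserver                (Cstate should show that BOS installation has been enabled)
grep testserver /etc/bootptab      (bootp entry created for the client)
ls -l /tftpboot | grep testserver  (network boot image staged for the client)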


Now log on to the SMS menu from the HMC
=======================================

chsysstate -r lpar -m <frame_name> -o on -f Normal -b sms -n <testserver_name>
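You will also need a console session on the partition to drive the SMS menus; from the HMC command line this is typically done with mkvterm, using the same placeholder names as above:

mkvterm -m <frame_name> -p <testserver_name>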

2 - Setup Remote IPL (Initial Program Load)
2 - Interpartition Logical LAN
1 - IPv4 - Address Format
1 - BOOTP
1 - IP Parameters (fill in the client IP, server IP, gateway IP and subnet mask)
Press Esc
3 - Ping Test
1 - Execute Ping Test (check that the ping test is successful, then press Enter)
M - Return to Main Menu
5 - Select Boot Options
1 - Select Install/Boot Device
6 - Network
1 - BOOTP
2 - Interpartition Logical LAN
2 - Normal Mode Boot
1 - Yes
1 - Type 1 and press Enter to have English during install
2 - Change/Show Installation Settings and Install
1 - Disk(s) where you want to install (hdisk0)
77 - Display More Disk Information
Select the correct disk [ 1 - hdisk0 ]
>>> Choice [0]: 3 - Import User Volume Groups (change to YES; remember that 3 - Import User Volume Groups and 4 - Recover Devices should both be set to YES)
>>> 0 - Install with the settings listed above

After the restore completes, you will get a login prompt.

uname -a
oslevel -s
lspv
lsvg -o
df -gt
lppchk -vm3
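Two more checks that are usually worth running at this point, assuming a standard rootvg layout:

bootlist -m normal -o    (confirm the boot list points at the restored rootvg disk)
errpt | more             (scan the error log for hardware or configuration errors)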


Post restoration work
======================
Log on to the NIM server and remove the mksysb resource
nim -o remove testserver_res_mksysb
(or)
smitty nim_rmres -> select the correct mksysb -> Remove System Backup Image = yes (press Enter)
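To confirm the resource is really gone, list the remaining mksysb resources on the NIM master:

lsnim -t mksysb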


Deallocate all the allocated resources and reset the NIM client on the NIM server
========================================================================
nim -o deallocate -F -a subclass=all testserver
nim -o reset -a force=yes testserver
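After the deallocate and reset, the client should have no allocated resources and its Cstate should be back to "ready for a NIM operation"; a quick check:

lsnim -l testserver            (check the Cstate value)
lsnim -c resources testserver  (should list no allocated resources)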



Tuesday, February 4, 2020

How to perform storage migration (from IBM to Hitachi) in an HACMP cluster in AIX?



1.       Inform the application people and get confirmation about the migration
2.       Ask the application people to stop the application.
3.       Stop the cluster services - smitty clstop (bring the RGs offline)
4.       Check and confirm that the RGs are offline - clRGinfo
5.       Confirm that the cluster service is in init state - lssrc -ls clstrmgrES
6.       Check the bootlist and boot image of the root disks.
7.       Perform a sanity reboot (shutdown -Fr)
8.       Once the servers come back online, start the cluster services (smitty clstart - manage RGs manually)
smitty hacmp -> C-SPOC -> Resource Groups and Applications -> Bring a Resource Group Online
9.       Ask the application team to check that the application is in the same good state as before.
10.   Once you get the confirmation, ask them to stop the application again.
11.   Stop the cluster services (smitty clstop -> Bring the RG offline)
12.   Check and confirm that the cluster service is in init state and the RGs are offline (clRGinfo, lssrc -ls clstrmgrES)
13.   Now proceed with the rootvg migration (alt_disk based)
alt_disk_install -C -B hdiskX hdiskY
14.   Once done with rootvg, ask the storage team to proceed with the data VGs.
15.   Delete the disks from the LPAR and delete the VTDs and disks on all 4 VIOS.
16.   Once they remove the SVC disks, run cfgmgr and confirm again using lsdev -Cc disk
17.   Once we have confirmed that the SVC disks do not return, ask the storage team to map the VSP.
18.   Once they have added the VSP, run cfgmgr and confirm the newly added VSP disks.
19.   Now add the heartbeat disk to the VIOS first, then add the rest.
20.   Once everything (all the disks) is added, change the disk attributes (health check; see the chdev sketch after this list)
21.   Verify the mapping (lsmap -all)
22.   Configure the disks back on the HA LPARs (cfgmgr)
23.   Set up the disk parameters (health check interval)
24.   Check the VGs and filesystems outside of HA before starting the cluster (varyonvg)
25.   Vary off the VGs again after checking that all the filesystems mount (varyoffvg <vg>)
26.   Repeat the VG and filesystem checks (steps 24 and 25) on the other cluster node
27.   Verify the cluster configuration and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization)
28.   Perform the cluster heartbeat test before starting the cluster
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r  (receive mode - on the first node)
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t  (transmit mode - on the second node)
29.   Start HA on both nodes and ensure the application comes up as part of HA
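For steps 20 and 23, the attribute changes are normally made with chdev; this is only a sketch with placeholder disk names, and it assumes the usual MPIO attributes hcheck_interval and reserve_policy, so verify the attribute names on your own devices with lsattr first.

On the VIOS (as padmin):
chdev -dev hdiskX -attr hcheck_interval=60 reserve_policy=no_reserve -perm

On the HA LPARs:
lsattr -El hdiskX -a hcheck_interval -a reserve_policy
chdev -l hdiskX -a hcheck_interval=60 -a reserve_policy=no_reserve -P    (-P defers the change until the disk is reconfigured)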

Backout
=======

1.       lspv - to check the disks
2.       Perform a cluster sync with no errors (smitty cl_sync)
3.       Take a cluster snapshot (smitty cm_add_snap.dialog)
4.       Verify that the last VIOS backup is available (viosbr -view -list)
5.       Stop the cluster services (smitty clstop -> Bring RG offline)
6.       Ensure the filesystems are unmounted and the VGs are varied off (df -gt, lsvg -o)
7.       Remove the data disks of the shared VGs from both LPARs (rmdev -dl hdiskX)
8.       Verify and remove the VTD mappings of the respective shared VGs (rmdev -dev <vtd>)
9.       Verify and put the backend VIOS disks into Defined state (rmdev -l hdiskX)
10.   Inform the storage team to re-present the disks via SVC to all 4 VIOS
11.   Rescan the LUNs on the VIOS to bring the disks in via SVC (cfgmgr)
12.   Ensure the mapping is removed and the VSP (Hitachi) disks are no longer available
13.   Check that the IBM disks come back to Available state (lsdev -Cc disk | grep -i ibm)
14.   Check the disk parameters of the Hitachi disks
15.   Verify the corresponding vhost mappings in the last viosbr backup
viosbr -view -file <viosbrbackup.tar.gz> -type svsa
16.   Restore the corresponding vhost mappings on all 4 VIOS
viosbr -restore -file <backup_config_file_name> -inter -type vscsi
17.   Verify the mapping (lsmap -all)
18.   Configure the disks back on the HA LPARs (cfgmgr)
19.   Set up the disk parameters (health check interval)
20.   Check the VGs and filesystems outside of HA before starting the cluster (varyonvg; see the sketch at the end of this list)
21.   Vary off the VGs again after checking that all the filesystems mount (varyoffvg <vg>)
22.   Repeat steps 20 and 21 on the other cluster node
23.   Verify the cluster configuration and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization)
24.   Perform the cluster heartbeat test before starting the cluster
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r  (receive mode - on the first node)
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t  (transmit mode - on the second node)
25.   Start HA on both nodes and ensure the application comes up as part of HA
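For steps 20 and 21 (the manual VG and filesystem check outside of HA), the sequence is roughly the following, using a hypothetical shared VG named datavg with one filesystem /data:

varyonvg datavg
lsvg -l datavg      (confirm the LVs and filesystems are intact)
mount /data         (repeat for each filesystem in the VG)
df -gt /data
umount /data
varyoffvg datavg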