Wednesday, April 1, 2020

Important things about HACMP and PowerHA in AIX

To move a resource group from one node to another:
# clRGmove -g testRG -n nodeB -m
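The same command can also bring an RG online or offline on a given node; a minimal sketch (testRG and nodeA are placeholders, and the -u/-d flags should be verified against clRGmove on your level):
# clRGmove -g testRG -n nodeA -u     (bring testRG online on nodeA)
# clRGmove -g testRG -n nodeA -d     (bring testRG offline on nodeA)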
To create a FS in HACMP:

# /usr/sbin/cluster/sbin/cl_crfs -cspoc "-n nodeA,nodeB" \
-v jfs2 -g testVG -a size=65572 -m /testFS -p rw -a agblksize=4096

To extend a FS in HACMP:

# /usr/es/sbin/cluster/sbin/cl_chfs -cspoc "-g testRG" \
-a size=+65572 /testfs        (Note: -a size here is in 512-byte blocks)
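Instead of counting 512-byte blocks, the size can also be given with a unit suffix on recent AIX levels; a sketch using the same placeholder names as above:
# /usr/es/sbin/cluster/sbin/cl_chfs -cspoc "-g testRG" -a size=+2G /testfs
# df -g /testfs     (verify the new size)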

To list the VGs that are part of the cluster:
# cl_lsvg
To get info about the RGs:
# clRGinfo
To get detailed info about the RGs:
# clshowres
To get info about the disks in an RG:
# cllsdisk -g testrg
To get filesystem info about an RG:
# cllsfs -g testrg
To get info about the disks in the cluster VGs:
# /usr/es/sbin/cluster/cspoc/cl_lsrgvgdisks
To get the complete info about the cluster:
# /usr/es/sbin/cluster/utilities/cllscf
To get detailed info about the VGs in the cluster:
# /usr/es/sbin/cluster/utilities/cllsvgdata
To get the topology info of the cluster:
# cltopinfo
To get the network info in HACMP:
# /usr/es/sbin/cluster/utilities/cllsif  and  # /usr/es/sbin/cluster/utilities/cllsnw
To get the HACMP nodes and network config info:
# /usr/es/sbin/cluster/utilities/cllsnode
To list the networks present in HACMP:
# /usr/es/sbin/cluster/utilities/cllsipnw
To get info about which network interfaces are alive:
# /usr/es/sbin/cluster/utilities/cllsaliveif
To show the cluster state and substate (needs clinfo running):
# clstat
SNMP-based tool to show the cluster state:
# cldump
Similar to cldump; a Perl script to show the cluster state:
# cldisp
To list the local view of the cluster topology:
# cltopinfo
To list the local view of the cluster subsystems:
# clshowsrv -a
To locate the resource groups and display their status:
# clfindres (-s)
To locate the resource groups and display their status (verbose):
# clRGinfo -v
To rotate some of the log files:
# clcycle
Cluster ping program with more arguments:
# cl_ping
Cluster rsh program that takes cluster node names as arguments:
# clrsh
To list which nodes are active:
# clgetactivenodes
To get the name of the local node:
# get_local_nodename
To check the HACMP ODM:
# clconfig
To bring resource groups online/offline or move them:
# clRGmove
To sync/fix the cluster:
# cldare
To list the resource groups:
# cllsgrp
To create a large snapshot of the HACMP configuration:
# clsnapshotinfo
To list the network configuration of an HACMP cluster:
# cllscf
To show the resource group configuration:
# clshowres
To show network interface information:
# cllsif
To show short resource group information:
# cllsres
To list the cluster manager state:
# lssrc -ls clstrmgrES
To show heartbeat information:
# lssrc -ls topsvcs
To list a node-centric overview of the HACMP configuration:
# cllsnode
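For a quick day-to-day health check, a handy sequence from the commands above is (a sketch, run as root on any cluster node):
# clRGinfo
# cltopinfo
# lssrc -ls clstrmgrES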

How to increase the filesystem size in an HACMP cluster in AIX?

Using the steps below, we can increase the filesystem size in an HACMP cluster in AIX.


#cd /usr/sbin/cluster/cspoc
#./cli_chfs -a size=+10G /opt/tivoli


(OR)


smitty hacmp
System Management (C-SPOC)
HACMP Logical Volume Management
Shared Filesystems
Enhanced Journaled File Systems
Change / Show Characteristics of a Shared Enhanced Journaled File System
Then select the particular filesystem and change the size value according to the requirement and the available VG space.
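In either case, it is worth confirming that the VG has enough free space before growing, and checking the new size afterwards; a short sketch (tivolivg is a hypothetical VG name):
# lsvg tivolivg | grep -i free     (check the FREE PPs)
# df -g /opt/tivoli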


Sunday, February 9, 2020

How to restore the operating system using a backup in AIX?

Log on to the NIM server
========================
uname -a
cd /export/mksysbs
ls -l testserver.mksysb (find the mksysb which is required for the restore)


Check whether the machine is already defined; if not, define the machine
=========================================================================

lsnim -l testserver
If the machine is not defined, follow:
smitty nim -> Perform NIM Administration Tasks -> Manage Machines -> Define a Machine -> give the hostname of the client machine and press Enter
lsnim -l testserver


Define the mksysb resource like below.
=======================================

smitty nim_mkres -> define resource -> mksysb -> Resource name, Resource type, server of resource, location of resource.
lsnim -l testserver_res_mksysb  
(or)
nim -o define -t mksysb -a server=master -a location=/export/mksysbs/testserver.mksysb testserver_res_mksysb

Note: testserver_res_mksysb is the mksysb resource of the server.
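If no matching SPOT exists for the restore, one can be defined from the mksysb itself; a sketch assuming the resource defined above (the SPOT name and location are hypothetical):
nim -o define -t spot -a server=master -a source=testserver_res_mksysb -a location=/export/spot testserver_spot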


Allocate spot and mksysb to the client and go for the bos_inst
==============================================================

nim -o allocate -a mksysb=testserver_res_mksysb -a spot=aix61_TL09_SP10_spot testserver
nim -o bos_inst -a source=mksysb -a accept_licenses=yes -a boot_client=no testserver
(or)
nim -o allocate -a spot=aix61_TL09_SP10_spot -a mksysb=testserver_res_mksysb testserver
nim -o bos_inst -a source=mksysb -a accept_licenses=yes -a boot_client=no testserver
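Before moving to the SMS menu, you can confirm that the allocation took effect; a quick sketch:
lsnim -l testserver     (Cstate should show that a BOS installation has been enabled)
grep testserver /etc/bootptab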


Now log on to the SMS menu from the HMC
=======================================

chsysstate -r lpar -m <frame_name> -o on -f Normal -b sms -n <testserver_name>

2 - Remote IPL
2 - Interpartition Logical LAN
1 - IPv4 Address Format
1 - BOOTP
1 - IP Parameters (fill in the client IP, server IP, gateway IP, subnet mask)
press Escape
3 - Ping Test
1 - Execute Ping Test (check the ping test succeeds, then press Enter)
M - return to Main Menu
5 - Select Boot Options
1 - Select Install/Boot Device
6 - Network
1 - BOOTP
2 - Interpartition Logical LAN
2 - Normal Mode Boot
1 - Yes
1 - Type 1 and press Enter to have English during install
2 - Change/Show Installation Settings and Install
1 - Disk(s) where you want to install: hdisk0
77 - Display More Disk Information
77 - Display More Disk Information
select the correct disk [ 1 - hdisk0 ]
>>> Choice [0]: 3 Import User Volume Groups (change it to yes; remember that 3 Import User Volume Groups and 4 Recover Devices should both be set to yes)
>>> 0 Install with the settings listed above

After the restoration is done, you will get a login prompt

uname -a
oslevel -s
lspv
lsvg -o
df -gt
lppchk -vm3


Post restoration work
======================
Log on to the NIM server and remove the mksysb resource
nim -o remove testserver_res_mksysb
(or)
smitty nim_rmres -> select the correct mksysb resource -> Remove System Backup Image=yes (press Enter)


Deallocate all the allocated resources and reset the NIM client on the NIM server
==================================================================================
nim -o deallocate -F -a subclass=all testserver
nim -o reset -a force=yes testserver
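To confirm the cleanup worked, a quick check (a sketch):
lsnim -l testserver     (Cstate should return to "ready for a NIM operation")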



Tuesday, February 4, 2020

How to perform a storage migration (from IBM to Hitachi) in an HACMP cluster in AIX?


1.       Inform the application team and get confirmation about the migration.
2.       Ask the application team to stop the application.
3.       Stop the cluster services - smitty clstop (bring the RGs offline).
4.       Check and confirm that the RGs are offline - clRGinfo
5.       Confirm that the cluster service is in init state - lssrc -ls clstrmgrES
6.       Check the bootlist and boot image of the root disks.
7.       Perform a sanity reboot (shutdown -Fr).
8.       Once the servers come online, start the cluster services (smitty clstart - Manage RG manually).
Smitty hacmp -> C-SPOC -> RG and apps -> Bring RG online.
9.       Ask the application team to check that the app is in a good state as before.
10.   Once you get the confirmation, ask them to stop the application again.
11.   Stop the cluster services (smitty clstop -> bring the RGs offline).
12.   Check and confirm that the cluster service is in init state and the RGs are offline (clRGinfo, lssrc -ls clstrmgrES).
13.   Now proceed with the rootvg migration (alt_disk based):
alt_disk_install -C -B hdiskX hdiskY
14.   Once done with rootvg, ask storage to proceed with the datavg.
15.   Delete the disks from the LPAR and delete the VTDs and disks on all 4 VIOS.
16.   Once they remove the SVC disks, run cfgmgr and confirm again using lsdev -Cc disk.
17.   Once we have confirmed that the SVC disks no longer return, ask storage to map the VSP.
18.   Once they are done adding the VSP, run cfgmgr and confirm the newly added VSP disks.
19.   Now add the heartbeat disk to the VIOS first and then add the rest.
20.   Once everything (all the disks) is added, change the disk attributes (health check).
21.   Verify the mapping (lsmap -all).
22.   Configure the disks back on the HA LPARs (cfgmgr).
23.   Set up the disk parameters (health check interval; see the sketch after this list).
24.   Check the VG and filesystems outside of HA before starting the cluster (varyonvg).
25.   Vary off the VG again after checking that all the filesystems mount (varyoffvg vg).
26.   Check steps 20 and 21 on the other cluster node.
27.   Verify the cluster config and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization).
28.   Perform the cluster heartbeat test before starting the cluster:
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r  (receive mode - on the first node)
                /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t  (transmit mode - on the second node)
29.   Start HA on both nodes and ensure the application comes up as part of HA.
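For the disk parameter step (23) above, the health check interval is typically set per disk; a minimal sketch assuming MPIO hdisks (the disk number and interval value are placeholders):
chdev -l hdisk5 -a hcheck_interval=60 -P     (-P defers the change until the device is reconfigured)
lsattr -El hdisk5 -a hcheck_interval     (verify the new value)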

Backout
=======

1.       lspv - to check the disks.
2.       Perform a cluster sync with no errors (smitty cl_sync).
3.       Take a cluster snapshot (smitty cm_add_snap.dialog).
4.       Verify that the last VIOS backup is available (viosbr -view -list).
5.       Stop the cluster services (smitty clstop -> bring the RGs offline).
6.       Ensure the filesystems are unmounted and the VGs varied off (df -gt, lsvg -o).
7.       Remove the data disks of the shared VGs from both LPARs (rmdev -dl hdiskX).
8.       Verify and remove the VTD mappings of the respective shared VGs (rmdev -dev vtdX).
9.       Verify and put the backend VIOS disks in defined state (rmdev -l hdiskX).
10.   Inform storage to re-present the disks directly via SVC to all 4 VIOS.
11.   Rescan the LUNs on the VIOS to bring the disks in via SVC (cfgmgr).
12.   Ensure the mapping is removed and the VSP (Hitachi) disks are not available.
13.   Check that the IBM disks came back to available state (lsdev -Cc disk | grep -i ibm).
14.   Check the disk parameters of the Hitachi disks.
15.   Verify the corresponding vhost mappings in the last viosbr backup:
viosbr -view -file <viosbrbackup.tar.gz> -type svsa
16.   Restore the corresponding vhost mappings on all 4 VIOS:
viosbr -restore -file <backup_config_file_name> -inter -type vscsi
17.   Verify the mapping (lsmap -all).
18.   Configure the disks back on the HA LPARs (cfgmgr).
19.   Set up the disk parameters (health check interval).
20.   Check the VG and filesystems outside of HA before starting the cluster (varyonvg; see the sketch after this list).
21.   Vary off the VG again after checking that all the filesystems mount (varyoffvg vg).
22.   Check steps 20 and 21 on the other cluster node.
23.   Verify the cluster config and sync the cluster (smitty hacmp -> Extended Configuration -> Extended Verification and Synchronization).
24.   Perform the cluster heartbeat test before starting the cluster:
                 /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -r  (receive mode - on the first node)
                /usr/sbin/rsct/bin/dhb_read -p /dev/hdiskX -t  (transmit mode - on the second node)
25.   Start HA on both nodes and ensure the application comes up as part of HA.
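For steps 20 and 21 above, a minimal sketch of checking a shared VG outside of HA (testvg and /testfs are placeholders):
varyonvg testvg
lsvg -l testvg     (check the LV and filesystem states)
mount /testfs
df -g /testfs
umount /testfs
varyoffvg testvg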

Friday, January 24, 2020

How to migrate from AIX 6.1 to AIX 7.1 using a DVD?


chsysstate -m <Frame_name> -r lpar -o shutdown --immed -n <target_lpar>
chsysstate -r lpar -m <frame_name> -o on -f normal -b sms -n <target_lpar>

vtmenu
select the target LPAR and connect to the console; it will lead you to the SMS menu:
5. Select Boot Options
1. Select Install/Boot Device
3. CD/DVD
5. SATA
1. pci@700000020000300/pci2048,06DC@1/sata
1. SATA CD-ROM
2. Normal Mode Boot
1. Yes
Type a 1 and press Enter to use this terminal as the system console
>>> 1 Type 1 and press Enter to have English during install.
2 - Change/Show Installation Settings and Install
>>> Choice [0]: 1
>>> 3 Migration Install
>>> Choice []: 2 (or choose the correct disk)
If everything is fine, then >>> Choice [0]:
See the Migration Installation Summary, then >>> 1 Continue with Install
Once you get the Migration Confirmation, then >>> 0 Continue with the migration.
Then migration starts from the DVD
Once the migration is done, check with the "oslevel -s" command.


Note: Do not forget to change the boot mode back to normal.
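A sketch of doing that from the HMC command line, mirroring the power-on commands above (frame and LPAR names are placeholders):
chsysstate -m <Frame_name> -r lpar -o shutdown --immed -n <target_lpar>
chsysstate -r lpar -m <Frame_name> -o on -f normal -b norm -n <target_lpar>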

Saturday, January 11, 2020

OS migration from AIX 6.1 to 7.1 using a NIM server


On the NIM server
==============

lsnim -l <target_lpar>
nim -o allocate -a lppsource=aix71_TL05_SP02_lppsource -a spot=aix71_TL05_SP02_spot <target_lpar>
nim -o bos_inst -a lppsource=aix71_TL05_SP02_lppsource -a spot=aix71_TL05_SP02_spot -a accept_licenses=yes -a boot_client=no <target_lpar>
lsnim -l <target_lpar>
tail /etc/bootptab

On the HMC
==========
chsysstate -m <frame_name> -r lpar -n <target_lpar_name> -o shutdown --immed
chsysstate -r lpar -m <frame_name> -o on -f Normal -b sms -n <target_lpar_name>

vtmenu
select the target lpar to get into the console
2 - Remote IPL
2 - Interpartition Logical LAN (choose the correct one)
1 - IPv4 Address Format
1 - BOOTP
1 - IP Parameters (fill in the client IP, server IP, gateway IP, subnet mask... sometimes it automatically shows the details; in this case check the IPs against /etc/bootptab)
press Escape
3 - Ping Test
1 - Execute Ping Test (check the ping test succeeds, then press Enter)
M - return to Main Menu
5 - Select Boot Options
1 - Select Install/Boot Device
6 - Network
1 - BOOTP
2 - Interpartition Logical LAN
2 - Normal Mode Boot
1 - Yes
1 - Type 1 and press Enter to have English during install
2 - Change/Show Installation Settings and Install
1 - System Settings
3 - Migration Install
77 - Display More Disk Information
77 - Display More Disk Information
Select the correct disk (1 - hdisk0)
2 - Primary Language Environment Settings (After Install):
38 - English (Great Britain) English (ISO8859-1) English (Great Britain)
1 - English (Great Britain) KBD ID 168
[0] Enter to get into the actual migration
1 - Continue with Install

1 - List the saved Base System configuration files which will not be merged into the system. These files are saved in /tmp/bos.
2 - List the filesets which will be removed and not replaced
3 - List directories which will have all current contents removed

>>> Choice [0]: 1
>>> Choice [0]: 2
>>> Choice [0]: 3
0 - Continue with the migration

Migration starts now

Once the migration is done, check the OS level and version:
oslevel -s
instfix -i | grep -i ml
lppchk -vm3
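Once the new level is verified, the applied filesets can optionally be committed; a sketch (commit is irreversible, so only run it when no back-out to the old level is needed):
installp -c all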


Note: Do not forget to change the boot option back to Normal mode.
chsysstate -r lpar -m <frame_name> -o on -f Normal -b norm -n <target_lpar_name>


How to recover the root password from maintenance mode in AIX?



On the NIM server
=================

lsnim -l <target_lpar_name>    (if the machine is not defined, define it using smitty nim)
lsnim -l <target_lpar_name>    (check "Cstate = ready for a NIM operation")
smitty nim_mac_op           (select the client/SPOT and enable the machine to boot in maintenance mode)
lsnim -l <target_lpar_name>    (check Cstate - maintenance boot has been enabled)
cat /etc/bootptab

On the HMC
===========

chsysstate -m <frame_name> -r lpar -n <target_lpar_name> -o shutdown --immed
chsysstate -r lpar -m <frame_name> -o on -f Normal -b sms -n <target_lpar_name>

vtmenu
select the target lpar to get into the console

2. Setup Remote IPL (Initial Program Load)
Choose the proper Interpartition Logical LAN
1. IPv4 Address Format
1. BOOTP
1. IP Parameters

Fill in the below:
1. Client IP Address/Server IP Address/Gateway IP Address/Subnet Mask

Once done, press the ESC key to return to the previous screen
3. Ping Test
1. Execute Ping Test

Once the ping test is successful, press any key to continue
Press M to return to the Main Menu
5. Select Boot Options
1. Select Install/Boot Device
6. Network
1. Bootp

Now select Interpartition logical LAN
2. Normal Mode Boot
1 Yes

You will see "Elapsed time since release of system processors" and the Welcome to AIX screen
Type a 1 and press Enter to use this terminal as the system console. (>>> Choice [1]: Enter)

>>> 1 Access a Root Volume Group (>>> Choice [1]: Enter)
0 Continue
Now select the root volume group disk (now you can see the volume group info)
1) Access this Volume Group and start a shell

Now you will get the prompt (#)

passwd root
Changing password for "root"
root's New password:
Re-enter root's new password:
# sync;sync;sync;reboot