Tuesday, April 7, 2020

Migration issues - rsh issue on the nimadmvg in AIX?


testnim: # nimadm -j nimadmvg -c testlpar -s aix710502_spot -l aix710502_lppsource -d hdisk1 -Y

Initializing the NIM master
Initializing the NIM client testlpar
rshd: 0826-826 The host name for your address is not known.
0505-159 nimadm: warning, unexpected result from remote command to testlpar.
0505-153 nimadm: unable to execute remote client commands.
Cleaning up alt_disk_migration on the NIM master.
Done!!!
testnim: #


Troubleshoot
============

testnim: # rsh testlpar uname -a
rshd: 0826-826 The host name for your address is not known.
testnim: #



Solution
========

Check that the NIM server's details (hostname and IP address) are updated in the client's /etc/hosts, and vice versa.

Remove the /etc/niminfo file and recreate it with the niminit command (if required, disable and re-enable rsh), then try again.
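
For reference, a minimal sketch of that recovery (testnim/testlpar are the hostnames from the example above; adjust for your environment):

# grep -w testnim /etc/hosts            - on the client; the master must resolve
# grep -w testlpar /etc/hosts           - on the master; the client must resolve
# mv /etc/niminfo /etc/niminfo.bak      - on the client
# niminit -a name=testlpar -a master=testnim -a connect=shell
# rsh testlpar uname -a                 - re-test from the master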






Migration issues: Unable to restart the rsh daemon after enabling it in AIX?


testlpar: # chsubserver -a -v shell -p tcp6 -r inetd
testlpar: # refresh -s inetd
0513-056 Timeout waiting for command response. If you specified a foreign host, see the /etc/inittab file on the foreign host to verify the SRC daemon (srcmstr) was started with the -r flag to accept the remote requests.
testlpar: #


Solution:
=======

I tried all the possibilities but could not fix this, so I rebooted the server, which resolved the issue.
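
For reference, these are the standard SRC checks and restart attempts that are usually tried first (none of them helped in this case):

# lssrc -s inetd                        - check the inetd subsystem state
# stopsrc -s inetd
# startsrc -s inetd
# lssrc -ls inetd                       - list the services inetd is handling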

Migration issue: After upgrading to AIX 7.1.5.2, oslevel shows a lower level (7100-04-06-1806) and lppchk reports rpm.rte filesets in BROKEN state.


testlpar: # lppchk -v
lppchk: The following filesets need to be installed or corrected to bring the system to a consistent state.
rpm.rte 3.0.5.52    (usr: COMMITTED, root: BROKEN)
testlpar: #

testlpar: # lslpp -l|grep -i rpm.rte
rpm.rte                 3.0.5.52      COMMITTED     RPM package manager
rpm.rte                 3.0.5.52      BROKEN             RPM Package manager
testlpar: #

Solution:
=======
Mount the 7.1.5.2 lppsource and perform smitty update_all. This fixes the rpm.rte inconsistency and brings the system to 7.1.5.2.
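
A command-line equivalent of the same fix (a sketch; the lppsource export path is a placeholder):

# mount testnim:/export/aix710502_lppsource /mnt
# install_all_updates -d /mnt -Y        - -Y accepts the license agreements
# oslevel -s                            - should now report a 7100-05-02 level
# lppchk -v                             - should come back clean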



lppchk failed with lower-level EMC packages in AIX?



We received a fileset consistency issue from lower-level EMC packages after migrating to 7.1.5.2:

testlpar# lppchk -vm3
lppchk: The following filesets need to be installed or corrected to bring the system to a consistent state.
bos.rte v<7  (Not installed: requisite fileset)

lppchk: The following relationships describe dependencies of installed filesets on one or more of the above filesets.
Fileset EMC.Symmetrix.aix.rte 5.3.0.9 requires: bos.rte v<7
Fileset EMC.Symmetrix.fcp.MPIO.rte 5.3.0.9 requires: bos.rte v<7
testlpar#


Solution:

This issue is caused by down-level EMC filesets installed on AIX 7.1.

The filesets causing it are EMC.Symmetrix.aix.rte and EMC.Symmetrix.fcp.MPIO.rte; both at level 5.3.0.9 require bos.rte v<7.

As all the disks are VSCSI (confirm with lsdev -Cc disk), these filesets are not required on this AIX system. (It is recommended to contact EMC before removing them.) The sequence used:

lppchk -vm3
lslpp -hac | grep bos.mp64
lslpp -hac | grep sddpcm
lslpp -l | grep -i EMC.Symmetrix.aix.rte
lppchk -vm3
lslpp -l | grep -i emc
lsdev -Cc disk                     (confirm all disks are VSCSI)
installp -ug EMC.Symmetrix.aix.rte EMC.Symmetrix.fcp.MPIO.rte
lslpp -l | grep -i emc             (verify the filesets are removed)
lppchk -vm3
lppchk -v
instfix -i | grep -i ml            (verify all maintenance levels are intact)
oslevel -rl 7100-05                (list any filesets below the 7100-05 level)



Friday, April 3, 2020

How to extract tar.gz in AIX?


To extract tar.gz 
==============
gzip -d -c <file_name> | tar -xvf -
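
If GNU tar from the AIX Toolbox is installed (usually /opt/freeware/bin/tar), a single command also works:

/opt/freeware/bin/tar -xzvf <file_name>.tar.gz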


Uncompress and untar each package into a separate empty directory, using the following command.

zcat <file_name>.tar.Z | tar -xf -
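
A small loop to handle several packages at once (a sketch; it extracts each archive into a directory named after the file):

for f in *.tar.gz; do d=${f%.tar.gz}; mkdir -p $d; gzip -d -c $f | (cd $d && tar -xvf -); done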


How to restart the IVM in AIX?


Please execute the below commands on the VIO server to restart the IVM:

/usr/ios/lpm/sbin/httpdmgr stop
/usr/ios/lpm/sbin/httpdmgr start
/usr/ios/lpm/sbin/httpdmgr status


# /usr/ios/lpm/sbin/httpdmgr status
lwistart.sh is running
Done
#

# /usr/ios/lpm/sbin/httpdmgr stop
Done
#

# /usr/ios/lpm/sbin/httpdmgr start
ready-11

^C#        (the start command stays in the foreground; Ctrl-C returns to the prompt)
# exit


Note: IVM (Integrated Virtualization Manager) is similar to the HMC; it is used to manage partitions on servers, such as blades, that have no HMC attached.


Some important "for loop" scripts to support you in day-to-day (BAU) work



To identify the disk size on the server:
for s in `lspv | awk '{print $1}'`; do echo $s `bootinfo -s $s`; done

To identify the LUN number of the disks in an LPAR:
for s in `lspv | awk '{print $1}'`; do echo $s `odmget CuAt | grep -wp $s | grep -i value | head -1 | awk '{print $3}' | cut -c12-15`; done

To identify the LUN number of the disks on a VIO server:
for s in `lspv | awk '{print $1}'`; do echo $s `odmget CuAt | grep -wp $s | grep -i value | head -1 | awk '{print $3}' | cut -c8-11`; done

To identify the WWPN of the fcs cards:
for s in `lscfg | grep fcs | awk '{print $2}'`; do echo $s `lscfg -vl $s | grep -i network | awk '{print $2}' | cut -c21-37`; done

To identify the wwpn of the fcs in vio server:
for s in `lsdev -type adapter | grep fcs | awk '{print $1}'`; do echo $s `lsdev -dev $s -vpd | grep Network`; done

To identify the C-slot number of the disk (useful to identify the mapping with the VIO server):
for s in `lspv | awk '{print $1}'`; do echo $s `lscfg -vl $s | grep -i $s | cut -c42-44`; done

To get the vhost information of the particular disk on the vio server.
for s in `lsmap -all | grep -p hdiskX | grep -i vtd | awk '{print $2}'`; do lsdev -dev $s -field parent | tail -1; done


To identify the Ethernet card info on the server (useful to capture before a reboot):
for s in `lsdev -Cc adapter | grep -i ent | awk '{print $1}'`; do echo $s; lsattr -El $s; echo "========================"; done
for s in `ifconfig -a | grep -i : | awk '{print $1}' | grep -i en | cut -c1-3`; do echo $s; lsattr -El $s; echo "======================"; done
for s in `lsdev -Cc adapter | grep -i ent | awk '{print $1}'`; do echo $s; entstat -d $s; echo "==========================="; done

To get the server name against the IP address:
for s in `cat server_network`; do echo $s `nslookup $s | grep -i name | awk '{print $4}'`; done

To get the IP address against the server name:
for s in `cat server_network`; do echo $s `nslookup $s | grep -i address`; done

To enable the failed paths on the server (lspath prints status, device, and parent, so read all three):
lspath | grep -iw failed | while read state disk parent; do chpath -l $disk -p $parent -s enable; done
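
To confirm afterwards that no failed paths remain (the count should be 0):
lspath | grep -ci failed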

Thursday, April 2, 2020

How to move the resource group from one node to another in HACMP?


Use the below steps to move the resource group from one node to another.

# clshowres     - To get the resource attributes

# clRGinfo   - To check the resource group status like where it is online.

# clRGmove -g testRG -n nodeB -m        - move the testRG to node B

# clRGinfo   - To check the resource group status like whether the RG is moved to nodeB
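
A few related clRGmove invocations (a sketch; verify the flags on your PowerHA level):

# clRGmove -g testRG -n nodeA -m        - move the testRG back to nodeA
# clRGmove -g testRG -n nodeB -d        - bring the testRG offline on nodeB
# clRGmove -g testRG -n nodeB -u        - bring the testRG online on nodeB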


Wednesday, April 1, 2020

Important things about HACMP and PowerHA in AIX?

To move the resource group from one node to another:
# clRGmove -g testRG -n nodeB -m
To create a FS in hacmp

# /usr/sbin/cluster/sbin/cl_crfs -cspoc "-n nodeA,nodeB" \
    -v jfs2 -g testVG -a size=65572 -m /testFS -p rw -a size=4096

To extend the FS in hacmp

# /usr/es/sbin/cluster/sbin/cl_chfs -cspoc "-g testRG" \
    -a size=+65572 /testfs        (Note: the size is given in 512-byte blocks)
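
For example, size=+65572 adds 65572 x 512-byte blocks, i.e. roughly 32 MB. On recent AIX/PowerHA levels the unit can usually be given directly (a sketch; verify on your level):

# /usr/es/sbin/cluster/sbin/cl_chfs -cspoc "-g testRG" -a size=+1G /testfs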

To list out the VG which are part of the cluster:
# cl_lsvg
To get the info about the RGs:
# clRGinfo
To get the detailed info about RGs:
# clshowres
To get the info about the disk in the RG:
# cllsdisk -g testrg
To get the filesystem info about the RG:
# cllsfs -g testrg
To get the info about the disk in cluster VG:
# /usr/es/sbin/cluster/cspoc/cl_lsrgvgdisks
To get the complete the info about cluster:
# /usr/es/sbin/cluster/utilities/cllscf
To get detailed info about VG in cluster:
# /usr/es/sbin/cluster/utilities/cllsvgdata
To get the topology info in cluster:
# cltopinfo
To get the network info in hacmp:
# /usr/es/sbin/cluster/utilities/cllsif  and # /usr/es/sbin/cluster/utilities/cllsnw
To get the hacmp nodes and network config info:
# /usr/es/sbin/cluster/utilities/cllsnode
To list out the network present on the hacmp:
# /usr/es/sbin/cluster/utilities/cllsipnw
To get the info about network interface alive:
# /usr/es/sbin/cluster/utilities/cllsaliveif
To show cluster state and substate (requires clinfo):
# clstat
To show cluster state via SNMP:
# cldump
To show cluster state (similar to cldump, implemented as a Perl script):
# cldisp
To list the local view of the cluster topology.
# cltopinfo
To list the local view of the cluster subsystems.
# clshowsrv -a
To locate the resource groups and display status.
# clfindres (-s)
To locate the resource groups and display status.
# clRGinfo -v
To rotate some of the log files.
# clcycle
To ping cluster nodes (a ping wrapper with more arguments):
# cl_ping
To run a command on cluster nodes (an rsh wrapper that takes cluster node names):
# clrsh
To see which nodes are active:
# clgetactivenodes
To get the name of the local node:
# get_local_nodename
To check the HACMP ODM.
# clconfig
To online/offline or move resource groups.
# clRGmove
To sync/fix the cluster.
# cldare
To list the resource groups.
# cllsgrp
To create a large snapshot of the hacmp configuration.
# clsnapshotinfo
To list the network configuration of an hacmp cluster.
# cllscf
To show the resource group configuration.
# clshowres
To show network interface information.
# cllsif
To show short resource group information.
# cllsres
To list the cluster manager state.
# lssrc -ls clstrmgrES
To show heartbeat information.
# lssrc -ls topsvcs
To list a node centric overview of the hacmp configuration.
# cllsnode
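
For a quick health check, the cluster manager state alone is often enough; a stable cluster reports ST_STABLE:

# lssrc -ls clstrmgrES | grep -i state
Current state: ST_STABLE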

How to increase the filesystem size in a HACMP cluster in AIX?




Use the below steps to increase the filesystem size in a HACMP cluster in AIX.


#cd /usr/sbin/cluster/cspoc
#./cli_chfs -a size=+10G /opt/tivoli
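
Before and after the change, confirm the owning node and the new size (run the change from the node where the RG is online):

#clRGinfo
#df -g /opt/tivoli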


(OR)


smitty hacmp
system management (C-SPOC) 
HACMP logical volume management
Shared Filesystems
Enhanced Journaled File systems
Change / Show characteristics of a shared enhanced journaled file system
Then select the particular filesystem and change the values considering the requirement and VG space.