Wednesday, April 25, 2018

How to create an ISO image using mksysb and perform the restoration in AIX?

How to perform the AIX restore using the ISO image?



Assume the server name is "testserver"


Take the mksysb backup to the /mnt directory (it can be an NFS filesystem from the NIM server).
mksysb -i -e -X /mnt/testserver.mksysb     
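
For reference, the flags used here: -i calls mkszfile to regenerate /image.data before the backup, -e excludes the files and directories listed in /etc/exclude.rootvg, and -X automatically expands /tmp if more space is needed. A minimal sketch of an exclusion file, with purely hypothetical entries:
cat /etc/exclude.rootvg
^./data1/
core$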


To confirm that the mksysb backup is good for restore.
listvgbackup -f /mnt/testserver.mksysb
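
If needed, the volume group layout stored inside the backup can also be reviewed. A small sketch; the -l option of listvgbackup shows the logical volume and file system information of the backed-up rootvg:
listvgbackup -l -f /mnt/testserver.mksysb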


To create an iso directory to hold the ISO image and change into it.
mkdir /mnt/testserver/iso
cd /mnt/testserver/iso


To create the ISO image from the mksysb backup
mkcd -L -S -I /mnt/testserver/iso -m /mnt/testserver.mksysb
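
For reference, the mkcd flags used here: -L creates DVD-sized images, -S stops the process before writing to media (only the image files are created), -I specifies the directory where the final images are placed, and -m points to an existing mksysb image instead of creating a new one.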


A file named cd_image_12345 has been created; we can rename it.


Rename the ISO image to follow a better naming convention.
mv cd_image_12345 testserver.iso


Confirm the rename is successful
ls -l testserver.iso 


Now the ISO can be copied to the /home/padmin directory of the VIO server.
scp testserver.iso padmin@vioserver:/home/padmin/.


To list and display information about the Virtual Media Repository.
lsrep


To create the media repository
mkrep -sp rootvg -size 10G
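
Before creating the repository it is worth confirming that rootvg has enough free space, and if the repository fills up later it can be extended with chrep. A small sketch (verify the chrep behaviour on your VIOS level; the 5G value is just an example):
lsvg rootvg
chrep -size 5G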


To confirm the media repository created
lsrep


To copy the ISO image into the Virtual Media Repository
cp testserver.iso /var/vio/VMLibrary/
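
Alternatively, the image can be imported into the library with the mkvopt command instead of a plain cp. A sketch, assuming the ISO is still sitting in /home/padmin (the media name is an example):
mkvopt -name testserver -file /home/padmin/testserver.iso -ro
lsrep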


To create a virtual optical device and map it to the proper vhostX
lsmap -vadapter vhost0
mkvdev -fbo -vadapter vhost0 -dev testserver_opt0
lsmap -vadapter vhost0


To load the virtual media on the optical device
loadopt -disk testserver.iso -vtd testserver_opt0
lsmap -vadapter vhost0


Now the output should look like this:


VTD testserver_opt0
Status Available
LUN 0x8200000000000000
Backing device /var/vio/VMLibrary/testserver.iso
Physloc
Mirrored N/A


Now we can restore the ISO image to the corresponding LPAR.

Log in to the HMC, activate the AIX server “testserver” to the SMS menu,
and choose the options below.

(5) Select boot option
(1) Select install/ Boot device
(7)  List all devices
(2) SCSI CD-ROM  (the LUN ID in the location code matches the LUN ID of the mapped vhost - 0x8200000000000000)
(2)  Select normal boot
(1) Yes  (to exit system management service)

After exiting the SMS menu, the server “testserver” will boot from the virtual optical device into the AIX installation menus.

Now the AIX operating system installation/restoration starts.



Note:

Sometimes two ISO images are created due to the large size of the mksysb. In that case, during the restoration we first load the first ISO image and, after some time, load the second ISO image. (The system will prompt you to add the second ISO image - "Please remove volume 1, insert volume 2, and press the ENTER key.")


Follow the procedure below if you need to load the second ISO image.


lsmap -vadapter vhost0
unloadopt -vtd testserver_opt0
loadopt -disk testserver2.iso -vtd testserver_opt0
lsmap -vadapter vhost0
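
Once the restoration is complete, the virtual media and the optical device can be cleaned up on the VIO server. A sketch, assuming the LPAR no longer needs the media:
unloadopt -vtd testserver_opt0
rmvdev -vtd testserver_opt0
rmvopt -name testserver.iso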



Saturday, April 7, 2018

How to migrate GPFS from 3.5 to 4.2 in AIX?


How to upgrade GPFS version from 3.5 to 4.2 in AIX?


GPFS is a high-performance clustered file system that can be deployed in a shared-disk configuration. It provides concurrent high-speed file access to applications running on multiple nodes of a cluster and can be used on AIX 5L, Linux, and Windows platforms.

Note: We could not find a specific step-by-step document anywhere for upgrading GPFS from 3.5 to 4.2. My boss and I spent a lot of time preparing these upgrade steps. Hope it is helpful.


1.   Ground work:

mmgetstate                  To display the state of the GPFS daemon on one or more nodes
lslpp -l gpfs               To list the current GPFS version details
mmdf <FileSystem>           To check the GPFS file system size
mmlsfs all                  To check all GPFS file systems
mmlscluster                 To display the GPFS cluster configuration information
mmlsconfig                  To display the configuration data for the GPFS cluster
mmlsmgr                     To display the file system manager node
mmlsmgr -c                  To view the GPFS cluster manager
mmlsnsd                     To list the NSD disks
mmlsnsd -M                  To view the detailed NSD disk information
mmlsdisk <FileSystem>       To view the disk information for a file system
mmlslicense                 To view the GPFS license
mmlsmgr                     To check the cluster manager and file system manager
lspv                        To get the disk details of the server
installp -s                 To check whether any filesets are in the applied state
installp -c all             To commit all filesets if any are in the applied state
prtconf                     To get the basic information about the server
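
Before making any changes, it is useful to save this ground-work output to a file so there is a baseline to compare against after the upgrade. A minimal sketch, assuming /var/tmp is writable and /data/test is the GPFS mount point used in this cluster:
( mmgetstate -a; mmlscluster; mmlsconfig; mmlsnsd -M; df -gt /data/test ) > /var/tmp/gpfs_baseline.$(date +%Y%m%d) 2>&1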


2.   Stop GPFS cluster and uninstall the current version

Stop all user activity in the file systems.
#mmexportfs all -o /var/tmp/exportDataFile     To export the current cluster configuration
#mmshutdown -a                                 To stop GPFS on all nodes in the cluster
#lslpp -l | grep -i gpfs                       To identify the current level of GPFS 3.5
#installp -u gpfs                              To uninstall GPFS on all the nodes
#lslpp -l | grep -i gpfs                       To confirm GPFS is completely uninstalled on the servers
#shutdown -Fr                                  To reboot each node
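
As part of stopping user activity, the GPFS file systems can be unmounted cluster-wide before the shutdown. A short sketch, assuming no application is still writing to /data/test:
mmumount all -a
mmshutdown -a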


3.   Upgrade to GPFS 4.2 (Spectrum Scale)

#lslpp -l | grep -i gpfs                       To identify the current level of GPFS
#cd /var/tmp/gpfs_4.2                          To go to the directory where the 4.2 filesets are kept
#smitty update_all                             Do the preview first and then commit; perform the same on all the nodes
#lslpp -l | grep -i gpfs                       To confirm GPFS 4.2 has been installed on all the nodes
#mmgetstate -a                                 To check the GPFS cluster status after the 4.2 installation
#mmstartup -a                                  To start the GPFS cluster
#mmgetstate -a                                 To confirm the cluster is active after startup
#df -gt /data/test                             To confirm the GPFS filesystem mounted automatically (if not, follow the steps below)
#mmimportfs all -i /var/tmp/exportDataFile     To import the GPFS filesystem configuration details
#mmmount /dev/testlv -a                        To manually mount the GPFS filesystem
#df -gt /data/test                             To confirm the GPFS filesystem is mounted on all the nodes
#shutdown -Fr                                  To reboot all the nodes
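
For a non-interactive alternative to smitty update_all, installp can be run directly against the directory holding the 4.2 filesets. A sketch, assuming the filesets are in /var/tmp/gpfs_4.2:
inutoc /var/tmp/gpfs_4.2                       To create the .toc file if it does not exist
installp -pacgXY -d /var/tmp/gpfs_4.2 all      Preview only
installp -acgXY -d /var/tmp/gpfs_4.2 all       Apply and commit all filesets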


4.   Validation

#mmgetstate -a                                 To confirm the cluster service is active on all the nodes
#df -gt /data/test                             To confirm the cluster filesystem is mounted on all the nodes
#lslpp -l | grep -i gpfs                       To confirm the latest level of GPFS is in the committed state
Ask the application team to perform their technical checkout of the servers and confirm all is OK at their end.


5. Post migration tasks to the new level of spectrum scale

#mmlsconfig                                    At this point, the cluster configuration still shows release 3.5
#mmchconfig release=LATEST                     To migrate the cluster configuration data and enable new functionality throughout the cluster
#mmchconfig release=LATEST --accept-empty-cipherlist-security  Execute this command only if the above mmchconfig command failed
#mmlslicense -L                                To view the GPFS license designation for each node in the cluster
#mmchlicense server -N NodeList                To assign a GPFS server license to the nodes that require it
#mmchfs <FileSystem> -V compat                 To enable only backward-compatible format changes
(or)
#mmchfs <FileSystem> -V full                   To migrate all file systems to the latest metadata format changes

Note:
If you issue mmchfs -V compat, only changes that are backward compatible with GPFS 3.5 will be enabled. Nodes in remote clusters that are running GPFS 3.5 will still be able to mount the file system.

If you issue mmchfs -V full, all new functions that require different on-disk data structures will be enabled. Nodes in remote clusters running an older GPFS version will no longer be able to mount the file system.
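
To verify the new level after running mmchconfig and mmchfs, both the cluster release and the file system version can be checked. A small sketch, using the /dev/testlv file system from the steps above:
mmlsconfig | grep -i release
mmlsfs testlv -V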


6. Backout steps for GPFS from 4.2 to 3.5

#mmexportfs all -o /tmp/exportDataFile         To export the current cluster configuration
#lslpp -l | grep -i gpfs                       To check the current level of GPFS
#installp -u gpfs                              To uninstall the current 4.2 GPFS on all the nodes
#lslpp -l | grep -i gpfs                       To confirm GPFS 4.2 is completely uninstalled on the servers
#shutdown -Fr                                  To reboot all the nodes
#cd /var/tmp/gpfs_3.5                          To go to the directory where the 3.5 filesets are kept
#smitty update_all                             Do the preview first and then commit; perform the same on all the nodes
#lslpp -l | grep -i gpfs                       To confirm GPFS 3.5 has been installed on all the nodes
#mmgetstate -a                                 To check the GPFS cluster status after the 3.5 installation
#mmstartup -a                                  To start the GPFS cluster
#mmgetstate -a                                 To confirm the cluster is active after startup
#df -gt /data/test                             To confirm the GPFS filesystem mounted automatically (if not, follow the steps below)
#mmimportfs all -i /tmp/exportDataFile         To import the GPFS filesystem configuration details
#mmmount /dev/testlv -a                        To manually mount the GPFS filesystem
#df -gt /data/test                             To confirm the GPFS filesystem is mounted on all the nodes
#shutdown -Fr                                  To reboot all the nodes
Ask the application team to perform their technical checkout and confirm that all is OK.