How to upgrade GPFS from version 3.5 to 4.2 on AIX
GPFS is a high-performance clustered file system that can be deployed in shared-disk environments. It provides concurrent, high-speed file access to applications running on multiple cluster nodes and is available on AIX, Linux, and Windows.
Note: We could not find a specific step-by-step document for upgrading GPFS from 3.5 to 4.2, so my manager and I spent quite some time putting these steps together. We hope they are helpful.
1. Ground work:
mmgetstate                To display the state of the GPFS daemon on one or more nodes
lslpp -l | grep -i gpfs   To list the current GPFS version details
mmdf <FileSystem>         To check the size and usage of a GPFS file system
mmlsfs all                To list the attributes of all GPFS file systems
mmlscluster               To display the GPFS cluster configuration information
mmlsconfig                To display the configuration data for the GPFS cluster
mmlsmgr                   To display the file system manager node for each file system and the cluster manager
mmlsmgr -c                To display only the cluster manager node
mmlsnsd                   To list the NSD disks
mmlsnsd -M                To view detailed NSD disk information
mmlsdisk <FileSystem>     To view the state of the disks in a file system
mmlslicense               To view the GPFS license designation
lspv                      To get the disk details of the server
installp -s               To check whether any filesets are in the APPLIED state
installp -c all           To commit all filesets that are still in the APPLIED state
prtconf                   To get basic information about the server
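Before making any changes, it can help to capture the output of these ground-work commands into files so the state can be compared after the upgrade. The following is a minimal ksh sketch; /var/tmp/gpfs_pre_upgrade is only an example path, so adjust it and the command list to your environment.
# Minimal sketch: save the pre-upgrade cluster state for later comparison.
# /var/tmp/gpfs_pre_upgrade is an example path only.
D=/var/tmp/gpfs_pre_upgrade
mkdir -p $D
mmgetstate -a             > $D/mmgetstate.out  2>&1
lslpp -l | grep -i gpfs   > $D/lslpp.out       2>&1
mmlscluster               > $D/mmlscluster.out 2>&1
mmlsconfig                > $D/mmlsconfig.out  2>&1
mmlsfs all                > $D/mmlsfs.out      2>&1
mmlsnsd -M                > $D/mmlsnsd.out     2>&1
mmlslicense -L            > $D/mmlslicense.out 2>&1
lspv                      > $D/lspv.out        2>&1
ls -l $D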
2. Stop the GPFS cluster and uninstall the current version
Stop all user activity in the file systems.
#mmexportfs all -o /var/tmp/exportDataFile    To export the current cluster configuration
#mmshutdown -a                                To stop GPFS on all nodes in the cluster
#lslpp -l | grep -i gpfs                      To identify the current level of GPFS 3.5
#installp -u gpfs                             To uninstall the GPFS filesets on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS is completely uninstalled on all servers
#shutdown -Fr                                 To reboot each node
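Since the uninstall and the lslpp check have to be repeated on every node, a small ssh loop can save time and typos. This is only a sketch: node1, node2 and node3 are placeholder hostnames, and it assumes root ssh between the nodes (dsh can be used instead if it is configured).
# Sketch: confirm on every node that no GPFS filesets are left before rebooting.
# node1, node2, node3 are placeholder hostnames.
for node in node1 node2 node3
do
  echo "### $node"
  ssh $node "lslpp -l | grep -i gpfs || echo no GPFS filesets installed"
done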
3. Upgrade to GPFS 4.2 (Spectrum Scale)
#lslpp -l | grep -i gpfs                      To check the current level of GPFS
#cd /var/tmp/gpfs_4.2                         To go to the directory where the 4.2 filesets are kept
#smitty update_all                            Run the preview first, then commit; perform the same on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS 4.2 has been installed on all the nodes
#mmgetstate -a                                To check the GPFS cluster status after the 4.2 installation
#mmstartup -a                                 To start the GPFS cluster
#mmgetstate -a                                To check the GPFS cluster status after the startup
#df -gt /data/test                            To confirm the GPFS file system mounted automatically (if not, follow the steps below)
#mmimportfs all -i /var/tmp/exportDataFile    To import the GPFS file system configuration details
#mmmount /dev/testlv -a                       To manually mount the GPFS file system on all nodes
#df -gt /data/test                            To confirm the GPFS file system is mounted on all the nodes
#shutdown -Fr                                 To reboot all the nodes
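Before handing over for validation, the installed level and the daemon state can be checked on all nodes in one go. A minimal sketch with placeholder hostnames; mmlsmount is not part of the checklist above, but it is a standard GPFS command that shows where each file system is mounted.
# Sketch: confirm the 4.2 filesets and the daemon state on every node.
for node in node1 node2 node3
do
  echo "### $node"
  ssh $node "lslpp -l | grep -i gpfs"
done
mmgetstate -a        # daemon state for the whole cluster
mmlsmount all -L     # which nodes have each GPFS file system mounted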
4. Validation
#mmgetstate -a              To confirm the cluster service is active on all the nodes
#df -gt /data/test          To confirm the cluster file system is mounted on all the nodes
#lslpp -l | grep -i gpfs    To confirm the latest level of GPFS is in the COMMITTED state
Ask the application team to perform their technical checks on the servers and confirm that all is OK at their end. If a pre-upgrade snapshot was captured during the ground work, compare it with the current state as shown in the sketch below.
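Re-running the same commands used for the pre-upgrade snapshot and diffing the results makes this validation less error-prone. A sketch, assuming the example directory names used in the ground-work step:
# Sketch: compare the post-upgrade state against the pre-upgrade snapshot.
# Directory names are examples only. The fileset levels will differ, but the
# file systems, NSDs and disks themselves should not have changed; review any differences.
PRE=/var/tmp/gpfs_pre_upgrade
POST=/var/tmp/gpfs_post_upgrade
mkdir -p $POST
mmlsfs all  > $POST/mmlsfs.out  2>&1
mmlsnsd -M  > $POST/mmlsnsd.out 2>&1
lspv        > $POST/lspv.out    2>&1
for f in mmlsfs.out mmlsnsd.out lspv.out
do
  echo "### diff $f"
  diff $PRE/$f $POST/$f
done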
5. Post-migration tasks for the new level of Spectrum Scale
#mmlsconfig                     At this point, the cluster configuration (minReleaseLevel) still shows 3.5
#mmchconfig release=LATEST      To migrate the cluster configuration data and enable new functionality throughout the cluster
#mmchconfig release=LATEST --accept-empty-cipherlist-security    Execute this command only if the previous mmchconfig command failed
#mmlslicense -L                 To display the GPFS license designation for each node in the cluster
#mmchlicense server -N NodeList To assign a GPFS server license to the nodes that require it (use mmchlicense client for client nodes)
#mmchfs <FileSystem> -V compat  To enable only backward-compatible format changes
(or)
#mmchfs <FileSystem> -V full    To migrate the file system to the latest metadata format changes
Note:
If you issue mmchfs -V compat, only changes that are backward compatible with GPFS 3.5 will be enabled. Nodes in remote clusters that are still running GPFS 3.5 will still be able to mount the file system.
If you issue mmchfs -V full, all new functions that require different on-disk data structures will be enabled. Nodes in remote clusters running an older GPFS version will no longer be able to mount the file system. The sketch after this note shows one way to apply the change to every file system.
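mmchconfig release=LATEST acts on the whole cluster, but mmchfs -V has to be issued per file system. The sketch below first confirms the new minReleaseLevel and then upgrades the format of each file system; the device names are placeholders, and full should be replaced with compat if remote GPFS 3.5 clusters still need to mount these file systems.
# Sketch: confirm the new release level, then upgrade each file system format.
# /dev/testlv and /dev/datalv are placeholder device names - list your own.
mmlsconfig | grep -i minReleaseLevel     # should now report the 4.2 level
for fs in /dev/testlv /dev/datalv
do
  echo "### upgrading format of $fs"
  mmchfs $fs -V full                     # use -V compat instead to stay mountable from GPFS 3.5 clusters
done
mmlsfs all                               # check the "File system version" (-V) attribute in the output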
6. Backout steps to revert GPFS from 4.2 to 3.5
Note: a backout is only practical if the post-migration steps in section 5 (mmchconfig release=LATEST and mmchfs -V) have not yet been run, because those changes cannot be reversed.
#mmexportfs all -o /tmp/exportDataFile    To export the current cluster configuration
#lslpp -l | grep -i gpfs                  To check the current level of GPFS
#mmshutdown -a                            To stop GPFS on all nodes in the cluster (as in step 2)
#installp -u gpfs                         To uninstall the current 4.2 GPFS filesets on all the nodes
#lslpp -l | grep -i gpfs                  To confirm GPFS 4.2 is completely uninstalled on all servers
#shutdown -Fr                             To reboot all the nodes
#cd /var/tmp/gpfs_3.5                     To go to the directory where the 3.5 filesets are kept
#smitty update_all                        Run the preview first, then commit; perform the same on all the nodes
#lslpp -l | grep -i gpfs                  To confirm GPFS 3.5 has been installed on all the nodes
#mmgetstate -a                            To check the GPFS cluster status after the 3.5 installation
#mmstartup -a                             To start the GPFS cluster
#mmgetstate -a                            To check the GPFS cluster status after the startup
#df -gt /data/test                        To confirm the GPFS file system mounted automatically (if not, follow the steps below)
#mmimportfs all -i /tmp/exportDataFile    To import the GPFS file system configuration details
#mmmount /dev/testlv -a                   To manually mount the GPFS file system on all nodes
#df -gt /data/test                        To confirm the GPFS file system is mounted on all the nodes
#shutdown -Fr                             To reboot all the nodes
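As with the upgrade, the per-node checks of the backout can be looped over all nodes. A minimal sketch with placeholder hostnames and the example mount point used above:
# Sketch: confirm GPFS 3.5 is back on every node and the cluster is healthy.
for node in node1 node2 node3
do
  echo "### $node"
  ssh $node "lslpp -l | grep -i gpfs"
done
mmgetstate -a
df -gt /data/test     # /data/test is the example mount point used in this document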
Ask the application team to perform their technical checks and confirm that all is OK at their end.