Thursday, November 1, 2018

How to deal with the concept of micro-partitioning in AIX?


 It is really hard to find the exact concept and purpose of micro-partitioning in AIX through Google or on the IBM site. The text below is a "what is what" description of the micro-partitioning concept in AIX. I am sure that once you finish reading it, you will have a good understanding of capacity planning in AIX, and it will also help you if you are preparing for an AIX interview.


PowerVM:

     PowerVM is the virtualization solution for AIX on IBM Power technology.


Power Hypervisor:

     The Power Hypervisor is the foundation of PowerVM. It is a firmware layer that sits between the hosted operating systems and the server hardware, and it delivers the virtualized functions.


Micro-partition:

     Micro-Partitioning technology allows allocating fractions of processors to a logical partition. A logical partition using fractions of processors is also known as a micro-partition. Micro-partitions run over a set of processors called the physical shared-processor pool, or simply the shared-processor pool.


Processing mode:

      A partition can be assigned entire processors for dedicated use, or processors that are shared with other micro-partitions.


Dedicated processing mode:

      In dedicated mode, physical processor cores are assigned as a whole to partitions. The processing capacity in this case is therefore fixed and cannot grow beyond that. For example, if you have assigned 2 processors to a partition in dedicated processing mode, that is it; the partition cannot use more than two processors even if it needs them.


Shared dedicated mode:

      The problem with dedicated processing mode is that the processing capacity is fixed, so during periods of low workload processing capacity gets wasted. Starting with POWER6, in shared dedicated mode, unused cycles are harvested and donated to the physical shared-processor pool associated with Micro-Partitioning. This ensures the opportunity for maximum processor utilization throughout the system.

The Power Hypervisor ensures that only spare CPU cycles are donated, so enabling this feature can help increase system utilization without affecting the critical partitions configured in dedicated processor mode.

 When the CPU utilization of the core goes below a threshold and all the SMT threads of the CPU are idle from a hypervisor perspective, the CPU is donated to the shared-processor pool. The donated processor is returned instantaneously (within microseconds) to the dedicated processor partition when the timer of one of the SMT threads on the donated CPU expires, which means the thread has instructions to execute.


These two processor-sharing options appear as checkboxes in the partition profile on the HMC:

· “Allow when the partition is inactive” - if checked, allows the processors assigned to the dedicated processor partition to be included in the shared processor pool when the partition is powered off (that is why it is called inactive).


 · “Allow when partition is active” - if checked, allows the processors assigned to the dedicated processor partition to be included in the shared processor pool when the partition is active but not making full use of its processors (shared dedicated processing mode).



Shared Processing Mode:

      In shared processing mode, partitions get fractions of a physical processor. The processing capacity assigned to a partition is known as its entitled capacity and is expressed in processing units (PUs). At minimum, a partition can get 0.1 processing units (in other words, 10% of a processor’s capacity), and above that any value in increments of 0.01.

The entitled capacity is the guaranteed capacity available to a partition within one dispatch cycle, which is 10 ms.

Note: The total entitled capacity of all the partitions configured on the system can never exceed the number of processors in that system.



Processing unit:
Desired processing unit / Min / Max processing unit:

      The desired processing units define the processing capacity you would like this partition to have, and the minimum/maximum processing units define the valid range within which the processing units of this partition can be changed.


Virtual processor:
Desired virtual processor / Min / Max Virtual processor:

      Desired virtual processors is the number of virtual processors you want this partition to have, and the minimum/maximum values define the valid range for dynamic changes.


Capped: Limited to the entitled capacity
· Example: 1.5 capped processing units means a partition can use up to 15 ms of execution time during each timeslice, but no more than that


Uncapped: If a partition needs extra CPU cycles (more than its entitled capacity), it can utilize unused capacity in the shared pool
· Example: An uncapped partition with 1.5 processing units is guaranteed to be able to use 1.5 units, but may use more if necessary (and if more is available)
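
For reference, here is a quick way to see these settings from inside a running AIX partition (a hedged illustration; the exact field labels can vary slightly between AIX levels):

lparstat -i | egrep "Mode|Entitled Capacity|Online Virtual CPUs|Minimum Capacity|Maximum Capacity"

The Mode field reports Capped or Uncapped, Entitled Capacity shows the processing units guaranteed to the partition, and the Minimum/Maximum Capacity fields show the range configured in the partition profile.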


Simultaneous Multithreading (SMT):

       Simultaneous multithreading was first introduced with POWER5, supporting two threads per core, and has been further enhanced in POWER7 by allowing four separate threads to run concurrently on the same physical processor core. In vmstat output, with two-way SMT enabled, lcpu=2 corresponds to one physical core. You can use mpstat -s to see how the threads are being dispatched.
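
A few commands to check the SMT configuration from the command line (a hedged illustration using standard AIX tools):

smtctl                    # reports whether SMT is enabled and how many threads run per core
vmstat 1 1 | grep lcpu    # the lcpu value in the configuration line equals cores x SMT threads
mpstat -s 1 1             # shows how the work is spread across the SMT threads of each core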


Virtual Processors

       A virtual processor (VP or VCPU) is a representation of a physical processor that is presented to the operating system running in a micro-partition. Virtual processors are allocated in whole numbers. Each virtual processor can represent between 0.1 and 1.0 physical processors' worth of capacity, expressed in processing units (PUs), so a virtual processor can never consume more than 1.0 physical processor.


In other words, the capacity of a virtual processor will always be equal to or less than the processing capacity of a physical processor. A shared partition can define a number of virtual processors up to 10 times the number of processing units assigned to the partition.


Concept of virtual processors:

       Virtual processors determine how many cores a partition thinks it has. If two virtual processors are configured on a partition, it will think it has two physical cores. The number of virtual processors configured for a micro-partition establishes the usable range of processing units.

For example, a partition with one virtual processor can operate with between 0.1 and 1.0 processing units, and with two virtual processors it can get between 0.2 and 2.0 processing units. The upper limit of processing capacity up to which an uncapped micro-partition can grow is determined by the number of virtual processors that it possesses.


For example, if you have a partition with an entitled capacity of 0.50 processing units and one virtual processor, the partition cannot exceed 1.00 processing units. However, if the same partition is assigned two virtual processors and processing resources are available in the shared processor pool, the partition can go up to 2.00 processing units (an additional 1.50 processing units), and up to 4.00 processing units in the case of 4 virtual processors.


By default, the number of processing units that you specify is rounded up to the minimum whole number of virtual processors needed to satisfy the assigned number of processing units.


 For example:
· If you specify 0.50 processing units, one virtual processor will be assigned.
· If you specify 2.25 processing units, three virtual processors will be assigned.


A micro-partition must have enough virtual processors to satisfy its assigned processing capacity. For example, if a micro-partition has an entitled capacity of 2.5 processing units, then the minimum number of virtual processors is 3.
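
A minimal ksh sketch of that rounding rule (the 2.5 entitlement is just an illustrative value):

EC=2.5    # hypothetical entitled capacity in processing units
MIN_VP=$(echo $EC | awk '{ printf "%d\n", ($1 == int($1)) ? $1 : int($1) + 1 }')
echo "An entitlement of $EC processing units needs at least $MIN_VP virtual processor(s)"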


Processing Capacity: Desired, Minimum, and Maximum:

When you configure a micro-partition, you define the following capacity attributes:

 · Minimum, desired, and maximum processing units
 · Minimum, desired, and maximum virtual processors

The desired processing units and desired virtual processors values can be changed dynamically without stopping or reactivating the partition, and that is where the minimum and maximum values come into the picture.


The minimum and maximum settings for both processing units and virtual processors represent the extremes between which the desired values can be dynamically changed. The maximum value is only used as an upper limit for dynamic operations and does not play any role in the processing capacity allotted to the partition.
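
As an illustration only (this assumes you have HMC command-line access, that managed_sys and lpar1 are hypothetical names, and the exact options should be verified against your HMC level), a dynamic change of the desired values within the min/max range could look like this:

chhwres -r proc -m managed_sys -o a -p lpar1 --procunits 0.2    # dynamically add 0.2 processing units
chhwres -r proc -m managed_sys -o a -p lpar1 --procs 1          # dynamically add one virtual processor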



A deeper understanding of micro-partition capacity allocation:

       A desired value defines the value you would like to have. This is not a guaranteed processing capacity, because there might not be enough capacity in the shared-processor pool. In addition to defining a lower boundary for dynamic operations, the minimum value defines the capacity that must be available in the shared-processor pool for the partition to start; if it is not available, the partition will not start.


When a partition is started, preference is given to the desired value you set. When there is enough capacity in the shared-processor pool, the allocated entitled capacity will be equal to the desired value. Otherwise, the entitled capacity will be lower than the desired value but greater than or equal to the minimum capacity attribute. If the minimum capacity requirement cannot be met, the micro-partition will not start.
The entitled processor capacity is allocated among the partitions in the sequence in which the partitions are started. Consider a scenario where a physical shared-processor pool has 2.0 processing units available and three partitions are started in sequence.


Partitions 1, 2, and 3 have the following attributes:
Partition 1 (Minimum: 0.5, Desired: 1.5, Maximum: 1.8)
Partition 2 (Minimum: 1.0, Desired: 1.5, Maximum: 2.0)
Partition 3 (Minimum: 0.5, Desired: 1.0, Maximum: 1.5)


 Since Partition 1 is started first, it will get its entitled processing capacity of 1.5 because there are 2.0 processing units available in the physical shared-processor pool. Partition 2 will not be able to start, because after allocating 1.5 processing units to Partition 1 only 0.5 processing units are left in the pool, and Partition 2's minimum capacity requirement is greater than that. Partition 3 will start with an entitled capacity of 0.5, which is the capacity left in the pool; this value is less than Partition 3's desired capacity. The allocated capacity entitlements are summarized below:

· Partition 1 will be activated with an allocated capacity entitlement of 1.5, which is the desired value.
· Partition 2 will not start because its minimum capacity requirement cannot be met.
· Partition 3 will be activated with an allocated capacity entitlement of 0.5, which is less than the desired value but sufficient to start the partition.

Note: My sincere thanks to Neeraj.
----------------------------------------------------------------------------------------------------------------------------------


Sunday, September 30, 2018

How to check the oslevel of altinst_rootvg in AIX?


Sometimes we are in a situation where we need to identify the oslevel / TL / SP of altinst_rootvg in order to proceed with our activity (the next task). Using the steps below, we can easily identify the OS version of altinst_rootvg in AIX.


Test2:/# oslevel -s
7100-01-04-1141
Test2:/# 

Test2:/# lspv
hdisk0        00d342e7131c6b47                  rootvg          active
hdisk1        00d342d637j21a59                  altinst_rootvg
Test2:/# 


Test2:/# alt_rootvg_op -W -d hdisk1
Waking up altinst_rootvg volume group ...

Test2:/# 


Now altinst_rootvg is in the active state, and the alternate file systems are mounted on the server under /alt_inst.


Test2:/# lspv
hdisk0        00d342e7131c6b47                  rootvg          active
hdisk1        00d342d637j21a59                  altinst_rootvg  active
Test2:/# 


Test2:/# df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4          524288    380024   28%     3091     3% /
/dev/hd2         3801088    396856   90%    34020     8% /usr
/dev/hd9var      2621440   2279336   14%     3560     2% /var
/dev/hd3          524288    499272    5%      105     1% /tmp
/dev/hd1          524288    507336    4%      102     1% /home
/proc                  -         -    -         -     -  /proc
/dev/hd10opt      524288    278872   47%     3370     6% /opt
/dev/alt_hd4      524288    365552   31%     3871     3% /alt_inst
/dev/alt_hd1      524288    507336    4%      104     1% /alt_inst/home
/dev/alt_hd10opt 1310720   562888   58%     5694     4% /alt_inst/opt
/dev/alt_hd3     524288    499120    5%      116     1% /alt_inst/tmp
/dev/alt_hd2     5636096   184120   97%   103336    15% /alt_inst/usr
/dev/alt_hd9var  2621440   1835656   30%   6632     3% /alt_inst/var

We need to start the chroot shell within the alternate rootvg to identify the OS level/TL/SP information.

Test2:/# chroot /alt_inst /usr/bin/ksh

Test2:/# oslevel -s
7100-01-01-1216
Test2:/# 
Test2:/# exit

You can return to the rootvg environment by exiting the alternate shell.


Now it is really important to put the cloned rootvg back to sleep.


Test2:/# alt_rootvg_op -S altinst_rootvg
Putting volume group altinst_rootvg to sleep ...
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst
Fixing LV control blocks...
Fixing file system superblocks...
The bootlist command confirms that you will reboot using the alternate rootvg disk (hdisk1).

Test2:/# bootlist -m normal -o
hdisk1 blv=hd5 pathid=0
hdisk1 blv=hd5 pathid=1

Test2:/# lspv
hdisk0        00d342e7131c6b47                  rootvg          active
hdisk1        00d342d637j21a59                  altinst_rootvg
Test2:/# 

You can use the same steps to get the OS level information from old_rootvg as well.
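
As a quick reference, here is the whole sequence in one place (a condensed sketch of the steps above, assuming the clone sits on hdisk1):

alt_rootvg_op -W -d hdisk1          (wake up altinst_rootvg and mount it under /alt_inst)
chroot /alt_inst /usr/bin/ksh       (start a shell inside the alternate rootvg)
oslevel -s                          (report the TL/SP of the cloned image)
exit                                (leave the chroot shell)
alt_rootvg_op -S altinst_rootvg     (put the clone back to sleep)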




Saturday, September 1, 2018

How to find out the WWPN of the HBA (FC) cards in AIX?



Quite often we are in a situation where we need to find out the WWPN of the HBA cards in AIX.

We normally use "lscfg -vpl fcsX", or some people use "lscfg -vpl fcsX | grep -i network".

But using the for loop below, we can quickly identify all the HBA cards on the server and their WWPNs in one go.

Please keep this in your notepad so that you can use it whenever it is required.


for i in $(lscfg | grep fcs | awk '{print $2}'); do printf "%s" "$i"; lscfg -vl "$i" | grep "Network Address"; done


The output looks like the example below.

fcs0     Network Address.............10000000ABCD1234
fcs1     Network Address.............10000000EFGH4567
fcs2     Network Address.............10000000IJKL88900
fcs3     Network Address.............10000000ABEF4567
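
As an alternative cross-check on a single adapter (a hedged suggestion; fcstat is available on most recent AIX levels), fcstat prints the WWPN directly:

fcstat fcs0 | grep -i "World Wide Port Name"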




Wednesday, April 25, 2018

How to create an ISO image using mksysb and perform the restoration in AIX?

How to perform the AIX restore using the ISO image?



Assume the server name is "testserver"


Take the mksysb backup to the /mnt directory (it can be an NFS file system from the NIM server).
mksysb -i -e -X /mnt/testserver.mksysb     


To confirm the mksysb backup is good for restore.
listvgbackup -f /mnt/testserver.mksysb


To create an ISO directory to hold the ISO image, and change into it.
mkdir -p /mnt/testserver/iso
cd /mnt/testserver/iso


To create the ISO image from the mksysb.
mkcd -L -S -I /mnt/testserver/iso -m /mnt/testserver.mksysb


The cd_image_12345 file has been created; we can rename it.


Rename the ISO image to a better naming convention.
mv cd_image_12345 testserver.iso


Confirm the rename is successful
ls -l testserver.iso 


Now the ISO can be copied to the /home/padmin directory of the VIO server.
scp testserver.iso padmin@vioserver:/home/padmin/.


To list and display information about the virtual media repository.
lsrep


To create the media repository
mkrep -sp rootvg -size 10G


To confirm the media repository has been created.
lsrep


To copy the ISO image into the media repository.
cp testserver.iso /var/vio/VMLibrary/
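
Alternatively (a hedged suggestion; run it as padmin on the VIO server), the mkvopt command imports the image into the media repository, and lsrep confirms it:
mkvopt -name testserver.iso -file /home/padmin/testserver.iso
lsrep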


To create a virtual optical device and map it to the proper vhostX.
lsmap -vadapter vhost0
mkvdev -fbo -vadapter vhost0 -dev testserver_opt0
lsmap -vadapter vhost0


To load the virtual media into the virtual optical device.
loadopt -disk testserver.iso -vtd testserver_opt0
lsmap -vadapter vhost0


Now the lsmap output should look like this:


VTD testserver_opt0
Status Available
LUN 0x8200000000000000
Backing device /var/vio/VMLibrary/testserver.iso
Physloc
Mirrored N/A


Now we are able to restore the ISO image to the corresponding LPAR.

Log in to the HMC, activate the AIX server “testserver” to the SMS menu,
and choose the options below.

(5) Select boot option
(1) Select install/ Boot device
(7)  List all devices
(2) SCSI CD-ROM  (the LUN ID in the location code matches the LUN ID of the mapped vhost - 0x8200000000000000)
(2)  Select normal boot
(1) Yes  (to exit system management service)

Once you exit the SMS menu, the server “testserver” will boot from the virtual optical device into the AIX installation menus.

Now the AIX operating system installation/restoration starts.



Note:

Sometimes two ISO images are created because of the large size of the mksysb. In that case, while performing the restoration, we first load the first ISO image and later we need to load the second ISO image. (The system will prompt you to add the second ISO image: "Please remove volume 1, insert volume 2, and press the ENTER key.")


Follow the procedure below if you need to load the second ISO image.


lsmap -vadapter vhost0
unloadopt -vtd testserver_opt0
loadopt -disk testserver2.iso -vtd testserver_opt0
lsmap -vadapter vhost0



Saturday, April 7, 2018

How to migrate GPFS from 3.5 to 4.2 in AIX?



How to upgrade GPFS version from 3.5 to 4.2 in AIX?


GPFS is a high-performance clustered file system that can be deployed in shared-disk configurations. GPFS provides concurrent high-speed file access to applications executing on multiple nodes of a cluster, and it can be used on AIX 5L, Linux, and Windows platforms.

Note: We could not find a specific step-by-step document anywhere for upgrading GPFS from 3.5 to 4.2. My boss and I spent a lot of time preparing these steps for the upgrade. Hope it is helpful.


1.   Ground work:

mmgetstate                  To display the state of the GPFS daemon on one or more nodes
lslpp -l | grep -i gpfs     To list the current GPFS version details
mmdf <FileSystem>           To check the GPFS file system size
mmlsfs all                  To check all GPFS file systems
mmlscluster                 To display the GPFS cluster configuration information
mmlsconfig                  To display the configuration data for the GPFS cluster
mmlsmgr                     To display the file system manager nodes
mmlsmgr -c                  To view the cluster manager
mmlsnsd                     To list the NSD disks
mmlsnsd -M                  To view the detailed NSD disk information
mmlsdisk <FileSystem>       To view the disk information for a file system
mmlslicense                 To view the GPFS license
lspv                        To get the disk details of the server
installp -s                 To check whether any filesets are in the applied state
installp -c all             To commit all filesets if anything is in the applied state
prtconf                     To get the basic information about the server


2.   Stop GPFS cluster and uninstall the current version

Stop all user activity in the file systems.
#mmexportfs all -o /var/tmp/exportDataFile    To export the current cluster configuration
#mmshutdown -a                                To stop GPFS on all nodes in the cluster
#lslpp -l | grep -i gpfs                      To identify the current level of GPFS 3.5
#installp -u gpfs                             To uninstall GPFS on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS is completely uninstalled on the servers
#shutdown -Fr                                 To initiate a reboot of each node


3.   Upgrade to GPFS 4.2 (Spectrum Scale)

#lslpp -l | grep -i gpfs                      To identify the current level of GPFS
#cd /var/tmp/gpfs_4.2                         To go to the directory where we keep the 4.2 filesets
#smitty update_all                            Do the preview first and the commit next; perform the same on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS 4.2 has been installed on all the nodes
#mmgetstate -a                                To check the GPFS cluster status after the 4.2 installation
#mmstartup -a                                 To start the GPFS cluster
#mmgetstate -a                                To check the GPFS cluster status after the startup
#df -gt /data/test                            To confirm the GPFS file system is mounted automatically (if not, follow the steps below)
#mmimportfs all -i /var/tmp/exportDataFile    To import the GPFS file system configuration details
#mmmount /dev/testlv -a                       To manually mount the GPFS file system on all nodes
#df -gt /data/test                            To confirm the GPFS file system is mounted on all the nodes
#shutdown -Fr                                 To reboot all the nodes


4.   Validation

#mmgetstate -a                                To confirm the cluster service is active on all the nodes
#df -gt /data/test                            To confirm the cluster file system is mounted on all the nodes
#lslpp -l | grep -i gpfs                      To confirm the latest level of GPFS is in the committed state
Ask the application team to perform their technical checkout of the servers and confirm that all is OK at their end.


5. Post-migration tasks for the new level of Spectrum Scale

#mmlsconfig                                    At the moment, the cluster configuration still shows release 3.5
#mmchconfig release=LATEST                     To migrate the cluster configuration data and enable new functionality throughout the cluster
#mmchconfig release=LATEST --accept-empty-cipherlist-security   Execute this command only if the above mmchconfig command failed
#mmlslicense -L                                To get the GPFS license of each of the nodes in the cluster
#mmchlicense server -N NodeList                To assign a GPFS server license to the nodes that require it (use mmchlicense client for client-only nodes)
#mmchfs <FileSystem> -V compat                 To enable only backward-compatible format changes
(or)
#mmchfs <FileSystem> -V full                   To migrate all file systems to the latest metadata format changes

Note:
If you issue mmchfs -V compat, only changes that are backward compatible with GPFS 3.5 will be enabled. Nodes in remote clusters that are running GPFS 3.5 will still be able to mount the file system.

If you issue mmchfs -V full, all new functions that require different on-disk data structures will be enabled. Nodes in remote clusters running an older GPFS version will no longer be able to mount the file system.
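
To verify the effect of the mmchfs -V change, the file system format version can be checked before and after (a hedged example; replace <FileSystem> with your device name):

#mmlsfs <FileSystem> -V                        To display the current file system format version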


6. Backout steps for GPFS from 4.2 to 3.5

#mmexportfs all -o /tmp/exportDataFile        To export the current cluster configuration
#lslpp -l | grep -i gpfs                      To check the current level of GPFS
#installp -u gpfs                             To uninstall the current GPFS 4.2 on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS 4.2 is completely uninstalled on the servers
#shutdown -Fr                                 To reboot all the nodes
#cd /var/tmp/gpfs_3.5                         To go to the directory where we keep the 3.5 filesets
#smitty update_all                            Do the preview first and the commit next; perform the same on all the nodes
#lslpp -l | grep -i gpfs                      To confirm GPFS 3.5 has been installed on all the nodes
#mmgetstate -a                                To check the GPFS cluster status after the 3.5 installation
#mmstartup -a                                 To start the GPFS cluster
#mmgetstate -a                                To check the GPFS cluster status after the startup
#df -gt /data/test                            To confirm the GPFS file system is mounted automatically (if not, follow the steps below)
#mmimportfs all -i /tmp/exportDataFile        To import the GPFS file system configuration details
#mmmount /dev/testlv -a                       To manually mount the GPFS file system on all nodes
#df -gt /data/test                            To confirm the GPFS file system is mounted on all the nodes
#shutdown -Fr                                 To reboot all the nodes
Ask the application team to perform their technical checkout and get confirmation that all is OK.