Saturday, October 12, 2013

How to find out the parent and child device in AIX?

To find out the Parent device:

lsdev -l <device name> -F parent


Example: If we want to find out the parent device of the hard disk hdisk0, then

#lsdev -l hdisk0 -F parent 

Output: fscsi0


To find out the Child device:

lsdev -p <device name>


Example: If we want to find out the child devices of fscsi0, then execute the below:

#lsdev -p fscsi0

Output: hdisk0
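
The same lsdev call can be chained to walk the whole device tree upwards. Below is a minimal ksh sketch (hdisk0 is just an example starting device):

#!/usr/bin/ksh
# Walk the parent chain of a device up to the system object sys0
dev=hdisk0
while [ -n "$dev" ]; do
    echo "$dev"
    [ "$dev" = "sys0" ] && break
    dev=$(lsdev -l "$dev" -F parent)
done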

Sunday, September 15, 2013

How to deal with paging space in AIX?


PAGING SPACE TIPS:

Determining if more paging space is needed:

Allocating more paging space than necessary results in unused paging space that wastes disk space. However, allocating too little paging space can result in one or more of the avoidable symptoms listed below.

Use the following guidelines for determining the necessary paging space:

Enlarge paging space if any of the following messages are displayed on the console or in response to a command on any terminal:


• INIT: Paging space is low
• ksh: cannot fork no swap space
• Not enough memory
• Fork function failed
• fork () system call failed
• Unable to fork, too many processes
• Fork failure - not enough memory available
• Fork function not allowed. Not enough memory available.
• Cannot fork: Not enough space

Add a paging space if the average of the %Used column in the output of the lsps -a command is greater than 80%.

Add a paging space if the %Used column in the output of the lsps -s command is greater than 80%.


Note: Extend an existing paging space only as a last resort.

Use the following commands to determine if you need to make changes regarding paging space logical volumes: 

iostat: Check the tm_act field for the hdisk containing the paging space for a high percentage relative to the other hdisks.

vmstat: Ensure that the fr and sr columns of the page section do not consistently exceed a ratio of 1:4.


lsps: Use the -a flag to list all characteristics of all paging spaces. The size is given in megabytes. Use the -s flag to list the summary characteristics of all paging spaces. This information consists of the total paging space in megabytes and the percentage of paging space currently assigned (used). If the -s flag is specified, all other flags are ignored.
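
As a quick check against the 80% guideline above, the lsps -s summary can be parsed in a small ksh sketch. The awk field positions assume the standard two-column lsps -s output:

#!/usr/bin/ksh
# Sketch: warn when total paging space usage crosses 80%
used=$(lsps -s | awk 'NR==2 {gsub("%","",$2); print $2}')
if [ "$used" -gt 80 ]; then
    echo "WARNING: paging space is ${used}% used - consider adding a paging space"
else
    echo "Paging space usage is ${used}% - OK"
fi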

========================================================================

THINGS TO CONSIDER WHEN CREATING OR ENLARGING PAGING SPACE:

Before creating a new paging space or enlarging an existing paging space, consider the following:

• If a disk drive containing an active hd6 paging space logical volume is removed from the system, the system will crash.

• Do not put more than one paging space logical volume on a physical volume. If you add more than one paging space to one of the physical volumes, the paging activity is no longer spread equally across the physical volumes.

• All processes started during the boot process are allocated paging space on the default paging space logical volume (hd6). When additional paging space logical volumes are activated, paging space is allocated in a "round robin" manner, in 4 KB chunks.

• Avoid putting a paging space logical volume on the same physical volume as a heavily active logical volume, such as one used by a database.

• It is not necessary to put a paging space logical volume on each physical volume.

• Make each paging space logical volume roughly equal in size. If paging spaces are of different sizes, and the smaller ones become full, paging activity will no longer be spread across all of the physical volumes.

• Do not extend a paging space logical volume onto multiple physical volumes.

• For best system performance, put paging space logical volumes on physical volumes that are each attached to a different disk controller.

• It is technically supported to create the default paging space (hd6) on an ESS, EMC or RAID array, although it is not recommended and should be avoided if possible.

• NOTE: If the system is paging enough to cause an I/O bottleneck, tuning the location of the paging space is not the answer.


DAY-TO-DAY OPERATIONS IN AIX:

LIST
How to list all the paging space details?            #lsps -a
How to list the consolidated paging space size?      #lsps -s

CREATE
How to create a paging space?
#mkps -s <no of LPs> -n -a rootvg                    Example: mkps -s 8 -n -a rootvg

INCREASE
How to increase a paging space?
#chps -s <no of LPs> <paging space name>             Example: chps -s 8 paging00

DECREASE
How to decrease a paging space?
#chps -d <no of LPs> <paging space name>             Example: chps -d 4 paging00

DELETE
How to delete a paging space?
#swapoff /dev/<paging space name>                    Example: swapoff /dev/paging00
#rmps <paging space name>                            Example: rmps paging00

CONFIGURATION FILE
All paging spaces are defined in /etc/swapspaces.    #cat /etc/swapspaces

ENABLE
How to enable a paging space?                        #swapon /dev/paging00

DISABLE
How to disable a paging space?                       #swapoff /dev/paging00
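
For example, a minimal end-to-end sketch of adding and later removing a second paging space (the names rootvg and paging00 are only illustrative; the PP size of your rootvg determines the real size):

#mkps -s 8 -n -a rootvg       --> create an 8-LP paging space, active now (-n) and at every restart (-a)
#lsps -a                      --> verify the new paging space (e.g. paging00) shows up as active
#swapoff /dev/paging00        --> deactivate it again if it is no longer needed
#rmps paging00                --> remove it (only possible while it is inactive)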


Tuesday, June 18, 2013

VIOS frequently asked questions (FAQ)


1) What is the Virtual I/O Server?

 The Virtual I/O Server is an appliance that provides virtual storage and shared Ethernet adapter capability to client logical partitions on POWER5 systems. It allows a physical adapter with attached disks on the Virtual I/O Server partition to be shared by one or more partitions, enabling clients to consolidate and potentially minimize the number of physical adapters required.



2) Is there a VIOS website?

Yes. The VIOS website contains links to documentation, hints and tips, and VIOS updates and fixes.



3) What documentation is available for the VIOS?

The VIOS documentation can be found in the VIOS online pubs in InfoCenter.



4) What is NPIV?

N_Port ID Virtualization (NPIV) is a standardized method for virtualizing a physical Fibre Channel port. An NPIV-capable Fibre Channel HBA can have multiple N_Ports, each with a unique identity. NPIV coupled with the Virtual I/O Server (VIOS) adapter sharing capabilities allows a physical Fibre Channel HBA to be shared across multiple guest operating systems. The PowerVM implementation of NPIV enables POWER logical partitions (LPARs) to have virtual Fibre Channel HBAs, each with a dedicated world wide port name. Each virtual Fibre Channel HBA has a unique SAN identity similar to that of a dedicated physical HBA.
The minimum firmware level required for the 8 Gigabit Dual Port Fibre Channel adapter, feature code 5735, to support NPIV is 110304. You can obtain this image from http://www-933.ibm.com/support/fixcentral/
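
On the VIOS side, NPIV is set up by mapping a virtual Fibre Channel server adapter to an NPIV-capable physical port. A minimal sketch (the adapter names vfchost0 and fcs0 are only examples for a given system):

$ lsnports                                --> list physical FC ports and whether they are NPIV capable (fabric=1)
$ vfcmap -vadapter vfchost0 -fcp fcs0     --> map the virtual FC server adapter to the physical port
$ lsmap -all -npiv                        --> verify the NPIV mappings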



5) What is virtual SCSI (VSCSI)?

Virtual SCSI is based on a client and server relationship. The Virtual I/O Server owns the physical resources and acts as server, or target, device. Physical adapters with attached disks on the Virtual I/O Server partition may be shared by one or more partitions. These partitions contain a virtual SCSI client adapter that sees these virtual devices as standard SCSI compliant devices and LUNs.
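
On the VIOS, a backing device is exported to a client by mapping it onto a virtual SCSI server adapter. A minimal sketch (hdisk2, vhost0 and vtscsi0 are placeholder names):

$ lsmap -all                                           --> list the existing vhost adapters and their current mappings
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0    --> export hdisk2 to the client behind vhost0
$ lsmap -vadapter vhost0                               --> verify the new virtual target device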


6) What is the shared Ethernet adapter (SEA)?

A shared Ethernet adapter is a bridge between a physical Ethernet adapter or link aggregation and one or more virtual Ethernet adapters on the Virtual I/O Server. A shared Ethernet adapter enables logical partitions on the virtual Ethernet to share access to the physical Ethernet and communicate with stand-alone servers and logical partitions on other systems. The shared Ethernet adapter provides this access by connecting the internal VLANs with the VLANs on the external switches.
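
Creating an SEA on the VIOS bridges a physical (or EtherChannel) adapter to a virtual trunk adapter. A minimal sketch (ent0 as the physical adapter and ent2 as the virtual trunk adapter with PVID 1; the names vary per system):

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1    --> creates the SEA (e.g. ent3)
$ lsmap -all -net                                               --> verify the new SEA mapping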


7) What physical storage can be attached to the VIOS?

See the VIOS datasheet at http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html for supported storage and configurations.


8) What client operating systems support attachment to the VIOS?

1.     AIX 5.3 and AIX 6.1 TL 2
2.     SUSE LINUX Enterprise Server 9 for POWER
3.     Red Hat Enterprise Linux AS for POWER Version 3 (update 2 or newer)
4.     Red Hat Enterprise Linux AS for POWER Version 4
5.     IBM i

9) What solutions can be supported using virtual devices and the VIOS?

Virtual SCSI disk devices are standard SCSI compliant devices that support all mandatory SCSI commands. Solutions that have special requirements at the device level should consult the IBM Solutions team to determine if the device meets your requirements.
The VIOS datasheet includes some information on VSCSI solutions.


10) Can SCSI LUNs be moved between the physical and virtual environment as is?

That is, given a physical SCSI device (i.e. a LUN) with user data on it that resides in a SAN environment, can this device be allocated to a VIOS and then provisioned to a client partition and used by the client as is?
No, this is not supported at this time. The device cannot be used as is; virtual SCSI devices are new devices when created, and the data must be put onto them after creation. This typically requires some type of backup of the data in the physical SAN environment followed by a restoration of the data onto the virtual disk.


11) In the context of virtual I/O, what do the terms server, hosting, client, and hosted partition mean?

The terms server and hosting partition are synonymous, as are client and hosted. The server/hosting partition(s) own the physical resources and facilitate the sharing of those resources amongst the client/hosted partition(s).


12) Do AIX, Linux, and IBM i all provide Virtual I/O Servers?

The Linux and IBM i operating systems do provide various virtual I/O server/hosting features (virtual SCSI, Ethernet bridging, etc.). AIX does not provide virtual I/O server/hosting capabilities. There is only one product named the Virtual I/O Server. It is a single-function appliance that provides I/O resources to client partitions and does not support general-purpose applications.


13) The VIOS appears to have some similarities with AIX. Explain.

The VIOS is not AIX. The VIOS is a critical resource and, as such, the product was originally based on a version of the AIX operating system to create a foundation on a very mature and robust operating system. The VIOS provides a generic command line interface for management. Some of the commands in the VIOS CLI may share names with AIX and Linux commands; these command names were chosen only because they were generic, and the flags and parameters will differ. While some of the VIOS commands may drop the user into an AIX-like environment, this environment is only supported for the installation and setup of certain software packages (typically software for managing storage devices; see the VIOS's Terms and Conditions). Any other tasks performed in this environment are not supported. While the VIOS will continue to support its current user interfaces going forward, the underlying operating system may change at any time.


14) What is the purpose of the oem_setup_env CLI command?

The sole purpose of the oem_setup_env VIOS CLI command is for ease in installing and setting up certain software packages for the VIOS. See the VIOS datasheet for a list of supported VIOS software solutions.


15) What type of performance can I expect from the VSCSI devices?

Please see the section titled "Planning for Virtual SCSI Sizing Considerations" in the VIOS online pubs in InfoCenter.


16) How do I size the VIOS and client partitions?

The VIOS online pubs in InfoCenter include sections on sizing for both Virtual SCSI and SEA. For the SEA, please see the section titled "Planning for shared Ethernet adapters".
In addition, the WorkLoad Estimator Tool is being upgraded to accommodate virtual I/O and the VIOS.


17) Why can't AIX VSCSI MPIO devices do load balancing?

Typical multipathing solutions provide two key functions: failover and load balancing. MPIO for VSCSI devices does provide failover protection. The benefit of load balancing is less obvious in this environment. Typically, load balancing allows the even distribution of I/O requests across multiple HBAs of finite capacity. Load balancing for VSCSI devices would mean distributing the I/O workload between multiple Virtual I/O Servers. Since the resources allocated to a given VIOS can be increased to handle larger workloads, load balancing seems to have limited benefit.
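
On an AIX client using MPIO over two VIOS paths, the failover behaviour can be checked and tuned with standard AIX commands. A sketch only (hdisk0 and the attribute value are examples; check your multipathing documentation before changing anything):

#lspath -l hdisk0                            --> show both VSCSI paths and their state (Enabled/Failed)
#lsattr -El hdisk0 | grep hcheck             --> show the current health-check attributes
#chdev -l hdisk0 -a hcheck_interval=60 -P    --> enable periodic path health checking (applied at the next reboot because of -P)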


18) What is APV (Advanced Power Virtualization)?

The Advanced POWER Virtualization feature is a package that enables and manages the virtual I/O environment on POWER5 systems. The main technologies include:
  • Virtual I/O Server
    - Virtual SCSI Server
    - Shared Ethernet Adapter
  • Micro-Partitioning technology
  • Partition Load Manager
The primary benefit of Advanced POWER Virtualization is to increase overall utilization of system resources by allowing only the required amount of processor and I/O resource needed by each partition to be used.


19) What are some of the restrictions and limitations in the VIOS environment?

  • Logical volumes used as virtual disks must be less than 1 TB in size.
  • Logical volumes on the VIOS used as virtual disks cannot be mirrored, striped, or have bad block relocation enabled.
  • Virtual SCSI supports certain Fibre Channel, parallel SCSI, and SCSI RAID devices as backing devices.
  • Virtual SCSI does not impose any software limitation on the number of supported adapters. A maximum of 256 virtual slots can be assigned to a single partition. Every virtual slot that is created requires resources in order to be instantiated. Therefore, the resources allocated to the Virtual I/O Server limit the number of virtual adapters that can be configured.
  • The SCSI protocol defines mandatory and optional commands. While virtual SCSI supports all of the mandatory commands, some optional commands may not be supported at this time.
  • The Virtual I/O Server is a dedicated partition to be used only for VIOS operations. No other applications can be run in the Virtual I/O Server partition.
  • Future considerations for VSCSI devices: The VIOS uses several methods to uniquely identify a disk for use as a virtual SCSI disk:
    • Unique device identifier (UDID)
    • IEEE volume identifier
    • Physical volume identifier (PVID)
Each of these methods may result in different data formats on the disk. The preferred disk identification method for virtual disks is the use of UDIDs.

MPIO uses the UDID method.

Most non-MPIO disk storage multi-pathing software products use the PVID method instead of the UDID method. Because of the different data format associated with the PVID method, customers with non-MPIO environments should be aware that certain future actions performed in the VIOS LPAR may require data migration, that is, some type of backup and restore of the attached disks. These actions may include, but are not limited to the following:
    • Conversion from a Non-MPIO environment to MPIO
    • Conversion from the PVID to the UDID method of disk identification
    • Removal and rediscovery of the Disk Storage ODM entries
    • Updating non-MPIO multi-pathing software under certain circumstances
    • Possible future enhancements to VIO
  • Due in part to the differences in disk format as described above, VIO is currently supported for new disk installations only.
  • Considerations when implementing shared Ethernet adapters:
    • Only Ethernet adapters can be shared. Other types of network adapters cannot be shared.
    • IP forwarding is not supported on the Virtual I/O Server.






Friday, May 17, 2013

How to trace the VSCSI configuration from a VIO client in AIX?

Execute the below command on the VIO client (LPAR) to find out the VIOS and VSCSI information.


#echo "cvai" | kdb | grep vscsi

read vscsi_scsi_ptrs OK, ptr = 0x59A03C0
vscsi0 0x000007 0x0000000000 0x0 vios1->vhost8
vscsi1 0x000007 0x0000000000 0x0 vios2->vhost8


The above command, run from the client LPAR, shows the vhost adapter that backs each vscsi adapter and the name of the corresponding VIO server.

Note:

Using kdb, we can easily trace the VSCSI configuration in AIX. This command saves a lot of time compared with the old method of doing the same.

In the old method, we find out the slot number (like C13 or C14) of the vscsi adapter on the client, then log in to the VIO server and find the corresponding vhost adapter for that slot.
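
For reference, the old method looks roughly like this (vscsi0 and the slot C13 are examples only):

On the VIO client:
#lscfg -vl vscsi0          --> the hardware location code ends in the client slot, e.g. ...-C13-T1

On the VIO server (padmin shell):
$ lsmap -all               --> match the vhost whose client partition ID and slot (per the HMC virtual adapter mapping) correspond to that vscsi adapter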


Monday, May 13, 2013

How to upgrade an adapter's firmware level in AIX?

1. List the fibre channel adapters installed in the system 

lsdev -C | grep fcs     (fcsX is the adapter on which you want to install the microcode)


2. Determine the current microcode level on the adapter 

lsmcode -d fcsX 


3. Download the microcode RPM package, put it into a temporary directory, and unpack it using the below command.

rpm -ivh --ignoreos <rpm_package>   (If the microcode package unpacks successfully, the microcode file will be added to the /etc/microcode directory)


4. Update the adapter's microcode level

"diag -d fcsX -T download" 


5. Confirm the current microcode level.

lsmcode -d fcsX
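
Before and after the update it can be handy to capture the microcode level of every Fibre Channel adapter in one go. A small ksh sketch (assumes all FC adapters are named fcsN; the -c flag makes lsmcode print without menus):

#!/usr/bin/ksh
# Print the current microcode level of every fcs adapter
for fcs in $(lsdev -Cc adapter | awk '/^fcs/ {print $1}'); do
    echo "--- $fcs ---"
    lsmcode -c -d "$fcs"
done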


How to upgrade the TL and SP in AIX?
1. Ground work:
oslevel -s                                 --> find out the current OS level of the system
instfix -i | grep -i ml                    --> check that the currently installed ML/TL levels are consistent
lppchk -vm3                                --> check that the currently installed filesets are consistent
df -g                                      --> check that the system has enough free space
bootlist -m normal -o                      --> check that the BLV has been created on the rootvg disk
emgr -l                                    --> list the installed ifixes / emergency fixes
emgr -r -L <ifix label>                    --> remove the ifixes / emergency fixes
Ensure the latest "mksysb" has been taken.
Download the TL/SP package from IBM Fix Central and put it on the NIM server.
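
The ground-work checks above can be wrapped in one small script so the output is saved before the change window. This is only a sketch of the commands already listed:

#!/usr/bin/ksh
# Capture the pre-upgrade state into a timestamped log
log=/tmp/preupgrade_$(date +%Y%m%d).log
{
    echo "== oslevel =="     ; oslevel -s
    echo "== instfix =="     ; instfix -i | grep -i ml
    echo "== lppchk =="      ; lppchk -vm3
    echo "== filesystems ==" ; df -g
    echo "== bootlist =="    ; bootlist -m normal -o
    echo "== ifixes =="      ; emgr -l
} > "$log" 2>&1
echo "Pre-upgrade information saved to $log"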


2. Create the alt_disk (assume we have hdisk0 and hdisk1 in rootvg)
sysdumpdev -s /dev/sysdumpnull             --> only if the secondary dump device resides on hdisk1
rmlv dump_lv                               --> remove the dump LV
unmirrorvg rootvg hdisk1                   --> unmirror (we will use hdisk1 for the alt_disk)
chpv -c hdisk1                             --> clear the boot image on hdisk1
reducevg rootvg hdisk1                     --> remove hdisk1 from rootvg
alt_disk_copy -d hdisk1                    --> take the alt disk clone on hdisk1
lspv                                       --> confirm that "altinst_rootvg" was created on hdisk1


3. Perform the TL upgrade
smitty commit                              --> commit all the old applied filesets
installp -s                                --> check whether any OS filesets are still in applied mode
mount <nim_server>:<package holding directory> /mnt   --> mount the directory that holds the TL package
cd /mnt
smitty update_all                          --> do the preview first and the commit next; once done, follow the steps below
oslevel -s                                 --> check the new TL level
lppchk -v                                  --> no output should be displayed, only the prompt
bootlist -m normal hdisk0                  --> change the bootlist to hdisk0 (remember the alt_disk resides on hdisk1)
shutdown -Fr                               --> fast reboot


4. Validation
oslevel -s                                 --> check the new TL level
instfix -i | grep ML                       --> confirm that the new TL levels are consistent
lppchk -v                                  --> no output should be displayed, only the prompt

5. Remove the alt_disk and re-mirror
alt_disk_install -X                        --> remove the alt_disk definition
extendvg -f rootvg hdisk1                  --> extend rootvg onto hdisk1
mirrorvg rootvg hdisk1                     --> mirror rootvg onto hdisk1
bosboot -ad /dev/hdisk1                    --> create the boot image
bootlist -m normal hdisk0 hdisk1           --> set the boot sequence
bootlist -m normal -o                      --> check the boot sequence order
mklv -y <lvname> -t sysdump rootvg <num of LPs> hdisk1   --> recreate the dump LV on hdisk1 if it was removed during the alt disk clone
sysdumpdev -s /dev/<lv name>               --> set the secondary dump device on hdisk1
sysdumpdev -l                              --> confirm the dump devices



Saturday, May 11, 2013

How to apply an ACL (Access Control List) to a file?


What is an ACL (Access Control List)?


We all know that by default every file carries permissions for the owner, the group and others (world). If we want to set something like read-only for one user, read/write for a group, and read/write/execute for another set of users on a particular file, then we can use an ACL.


We can use the below commands to manage the ACL of a file.

aclget  - To display the ACL of a file
acledit - To edit the ACL of a file
aclput  - To set the ACL of a file using an ACL control file

Few examples,

Let us take file1 as the target file.

To display the current ACL values for file1,
# aclget file1

To edit the ACL of a file (i.e. how to apply an ACL to a file),
# acledit file1

This will open an editor showing the current ACL values, something like below:

attributes: SUID
base permissions:
   owner  (frank): rw-
   group (system): r-x
   others        : ---
extended permissions:
   disabled

If you want to enable extended permissions and set ACL values, change the "extended permissions" stanza like below:

attributes: SUID
base permissions:
   owner  (frank): rw-
   group (system): r-x
   others        : ---
extended permissions:
enabled
       permit    rw-    u:user1
       deny      r--    u:user2, g:group1
       permit    rw-    u:user3, g:group2
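
The aclput command works with a saved ACL control file, which also makes it easy to copy an ACL from one file to another. A small sketch (file1, file2 and the /tmp path are just example names):

#aclget -o /tmp/file1.acl file1     --> save the ACL of file1 to a control file
#vi /tmp/file1.acl                  --> adjust the ACL entries if needed
#aclput -i /tmp/file1.acl file2     --> apply the saved ACL to file2

#aclget file1 | aclput file2        --> or copy the ACL directly from file1 to file2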


Wednesday, May 8, 2013

How to perform the VIO migration using NIM?


The key prerequisites to perform the migration are:


•  The system must be managed by HMC version 7 or later.
•  The Virtual I/O Server must be at least at version 1.3.
•  The physical volume for rootvg must be assigned to the VIOS.


The below steps can be done at any time before the VIOS migration:

          #mount nim_master:/images /mnt/
          #backupios -file /mnt/<hostname>/`hostname`.mksysb_date -mksysb
          #viosbr -backup -file /mnt/<hostname>/`hostname`.viosbr

Capture configuration output such as lsmap, lspv and lspath for future reference, as shown in the sketch below.
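
A minimal sketch of capturing that output onto the same NFS mount, run from the root shell (for example via oem_setup_env); the ioscli path and the file names are assumptions for illustration:

#/usr/ios/cli/ioscli lsmap -all       > /mnt/$(hostname)/lsmap.before
#/usr/ios/cli/ioscli lsmap -all -net  > /mnt/$(hostname)/lsmap_net.before
#lspv                                 > /mnt/$(hostname)/lspv.before
#lspath                               > /mnt/$(hostname)/lspath.before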


Create the resources for your VIOS on the NIM server:
         
          VIOS_MIG_21_lpp          resources       lpp_source
          VIOS_MIG_21_spot         resources       spot


Define the machine using ---> "smitty nim_mkmac"

Prepare the NIM master for installation ---> "smitty nim_bosinst"


Steps to Migrate the VIOS:

Note: Make sure all the client LPARs served by this VIOS are shut down.

Step 1: Shut down the VIOS partition and activate it in SMS mode via the HMC.
Step 2: From the SMS menu on the console, follow this procedure:


SMS - SYSTEM MANAGEMENT SERVICES -

1. Select Language
2. Change Password Options
3. View Error Log
4. Setup Remote IPL (RIPL (Remote Initial Program Load))
5. Change SCSI Settings
6. Select Console
7. Select Boot Options


Select:
4. Setup Remote IPL (RIPL (Remote Initial Program Load))

Next you'll have a selection of adapters to use. Select the adapter that corresponds to the adapter/host you defined in NIM. You will not see "ent0/ent1, etc." options; you will, however, see the hardware addresses and slots.

We are then brought to 3 options:
1. IP Parameters
2. Adapter Configuration
3. Ping Test

Choose option 1 to set the IP parameters.

Now we need to set our IP Parameters.
1. Client IP Address [###.###.###.###]
2. Server IP Address [###.###.###.###]
3. Gateway IP Address [###.###.###.###]
4. Subnet Mask [###.###.###.###]

After setting the IP addresses, use 'M' to return to the main menu. You typically do not need to go into "Adapter Configuration" (option 2 on the previous screen) to change the adapter parameters or disable spanning tree.

You can also run option 3 (Ping Test) to verify connectivity to the NIM master before leaving this menu.

With our IP Parameters set we should now be back at the main menu.
 

SMS - SYSTEM MANAGEMENT SERVICES -

1. Select Language
2. Change Password Options
3. View Error Log
4. Setup Remote IPL (RIPL (Remote Initial Program Load))
5. Change SCSI Settings
6. Select Console
7. Select Boot Options

Now we're ready to select our boot device.

Select:
7. Select Boot Options
 

The next menu should come up:

1. Select Install or Boot Device
2. Configure Boot Device Order
3. Multiboot Startup

You can take option 1, which makes this a one-time boot device selection (option 2 would change the permanent boot device order).

Select:
1. Select Install or Boot Device

Select Device Type :
1. Diskette
2. Tape
3. CD/DVD
4. IDE
5. Hard Drive
6. Network
7. List all Devices

Select:
7. List all Devices

The system will scan itself to determine which devices are available to boot from, and all of your available boot devices will be displayed here. The menu can be a little tricky: if you have a device pre-selected, it will have a 1 next to it under the "Current Position" column. Use the "Select Device Number" listing to choose the device you want to boot from.

The next screen will offer you three choices :
1. Information
2. Normal Mode Boot
3. Service Mode Boot

Select :
2. Normal Mode Boot 

It shouldn't really matter whether you select normal or service mode; I always select normal mode. Finally, it asks if you're sure you want to exit from SMS. Select 'yes' and let the boot go.

What you SHOULD see :
You’ll likely see a brief splash screen then the bootp request attempt.
Ideally you’ll see something like :
BOOTP : S=1 R=1

Which indicates it sent and received a request successfully. 


Once the migration is done, you will get a login prompt. Run the below commands to confirm a successful VIO migration.


$ ioslevel   
$ lsmap -all 
$ lstcpip

-------------------------------------------------------------------------------------

For updating the VIO server (for example, from 2.1 to 2.2):

$ updateios -commit    (To confirm that there are no uncommitted updates)
$ updateios -accept -install -dev /vio_update_package    (To update the VIOS)