Thursday, February 28, 2013

What is the difference between hard link and soft link in AIX?




Hard link:
* Same inode number as the source file.
* The inode link count increases.
* If the source file is deleted, the data is still available through the hard link.
* If the hard link (destination) is deleted, the source file is still available.

Soft link:
* Different inode number from the source file.
* The inode link count of the source does not increase.
* If the source file is deleted, the soft link is left dangling (broken).
* If the soft link (destination) is deleted, the source file is still available.
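A quick way to see this behaviour on any AIX box (the file names below are just examples):

# touch original.txt
# ln original.txt hardlink.txt                      (create a hard link)
# ln -s original.txt softlink.txt                   (create a soft link)
# ls -li original.txt hardlink.txt softlink.txt     (the hard link shows the same inode number and a link count of 2)
# rm original.txt
# cat hardlink.txt                                  (still works - the data is reachable through the hard link)
# cat softlink.txt                                  (fails - the soft link is now dangling)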



Wednesday, February 27, 2013

How to move a cdrom from one LPAR to another LPAR?



1. Find the LPAR which currently holds the cdrom:


Log in to the HMC, select the Managed System and open "Properties". Look for the I/O device with the description "Other Mass Storage Controller" and read the "Owner" field; this shows the LPAR currently owning that device.


2. On the LPAR currently holding the device:


1. Find the parent adapter of the CD device:

$ lsdev -Cl cd0 -F parent
ide0

2. Find the slot containing the IDE bus:

$ lsslot -c slot
# Slot Description Device(s)
U787B.001.DNWG2AB-P1-T16 Logical I/O Slot pci1 ide0
U9133.55A.105C2EH-V7-C0 Virtual I/O Slot vsa0
U9133.55A.105C2EH-V7-C2 Virtual I/O Slot ent0
U9133.55A.105C2EH-V7-C3 Virtual I/O Slot vscsi0

So pci1 is the slot containing the IDE adapter and the CD drive.

3. Remove the slot from this host:

# rmdev -dl pci1 -R
cd0 deleted
ide0 deleted
pci1 deleted



3. Moving the cdrom to the target LPAR using the HMC:


Login to the HMC
Select the LPAR
(select) Dynamic LPAR
(select) Physical Adapters
(select) Move or Remove


A dialog box opens; select the (cdrom) adapter to move and choose the destination LPAR in the "Move to partition" option.
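The same move can also be done from the HMC command line with lshwres/chhwres. This is only a sketch: SYSTEM, SRC_LPAR and TGT_LPAR are placeholder names, and the DRC index has to be looked up for your own slot.

lshwres -r io --rsubtype slot -m SYSTEM                              (lists the I/O slots with their owning partitions and DRC indexes)
chhwres -r io -m SYSTEM -o m -p SRC_LPAR -t TGT_LPAR -l 21010003     (moves the slot holding the IDE adapter to the target LPAR)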



4. Verification on the target LPAR:


Now log in to the target LPAR and verify the cdrom has moved successfully, using the steps below.

# cfgmgr
# lsdev -Cl cd0    ("cd0 Available" - here we confirm that the cdrom has moved)
# mkdir /cdrom
# mount -v cdrfs -o ro /dev/cd0 /cdrom
# umount /cdrom


Sunday, February 24, 2013

How to upgrade the firmware level through HMC?


Steps to upgrade the firmware level through HMC


First, check the existing firmware level of the servers (LPARs) in the managed system.
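For example, the current level can be recorded from any AIX LPAR on that managed system before starting (a quick sketch, not part of the HMC procedure itself):

# lsmcode -c                     (prints the platform firmware / microcode level)
# prtconf | grep -i firmware     (shows the firmware version and platform firmware level)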

1. Log in to the HMC with the right privileges.

2. Select the managed server whose firmware you want to upgrade.

3. Select the LPAR and shut it down; after a few minutes its status changes to "Not Activated".

4. Power the managed system off and on again (to bring it to "Standby" mode).

5. Click "Change LIC for the current release" (you can find this under the update menu at the bottom of the HMC window).

        Start the Change Licensed Internal Code wizard (press OK)

        Select the FTP site option (press OK)

        Enter the FTP site, user ID and password, and check that the directory holds the release package (press OK)

        Press OK if no errors are reported on the next screen

        Press "Next"

        Press "Next" again

        Accept the license (press "Accept")

        Tick the managed system's name in the check box and press "Finish" to start the upgrade

        Press "OK"

        A progress dialog box shows the status of the upgrade.

During this process the HMC automatically powers off the managed system and installs the new code; once the installation completes, the managed system is powered back on.

Start the LPARs hosted by that managed system (the ones we shut down in step 3 before starting this firmware upgrade).

Log in to the server and check the firmware level using "# lsmcode -c".


Friday, February 22, 2013

How to migrate to AIX 6.1 with nimadm?



Here I have laid out the steps in the simplest way, from an interview perspective. If you are familiar with AIX and NIM, the steps below will be easy to follow.

Ground work:

On NIM Master:

** NIM master must be running AIX 6.1 --> # oslevel -s

** Master must have the alt_disk filesets installed --> # lslpp -l bos.alt*

** lpp_source and SPOT must be at the target level (AIX 6.1) --> # lsnim -c resources

** A volume group for the nimadm cache (nimadmvg) must exist on the NIM master --> # lsvg -l nimadmvg

** NIM master must have enough free space --> # df -gt

** The client must be defined on the master --> # smitty nim_mkmac

** Take a mksysb backup of the target client server --> mksysb

On NIM Client:


** An additional (spare) disk must be available --> # lspv

** Break the rootvg mirror to free the spare disk --> # unmirrorvg rootvg hdisk1 (then take it out of rootvg with # reducevg rootvg hdisk1 so nimadm can use it)

** Clear the boot image on the freed disk --> # chpv -c hdisk1

** RSH must be working --> # chsubserver -a -v shell -p tcp6 -r inetd
                           # refresh -s inetd; cd / ; rm .rhosts
                           # vi .rhosts    (add a "+" entry to the .rhosts file)
                           # chmod 600 .rhosts
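Once the .rhosts entry is in place, a quick check from the NIM master confirms that rsh works (the client host name below is just an example):

# rsh aixclient01 oslevel -s     (run on the NIM master; it should print the client's level without asking for a password)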


Implementation steps:

nimadm -j <nim volume group> -c <client name> -s <spot> -l <lpp_source> -d <target disk on the client> -Y (accept licenses)
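For example, with made-up resource and host names (nimadmvg, aixclient01, spot_61 and lpp_61 are placeholders), the command looks like this, and nimadm then works through the twelve phases listed below:

# nimadm -j nimadmvg -c aixclient01 -s spot_61 -l lpp_61 -d hdisk1 -Y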

Phase 1:  Start the alt_disk migration on the client.
Phase 2:  Create the nimadm cache file systems on volume group nimadmvg.
Phase 3:  Sync the NIM client's data to the cache file systems.
Phase 4:  Merge the system configuration files.
Phase 5:  Save the system configuration files and restore the bos image.
Phase 6:  Migrate the system filesets to the new, higher version.
Phase 7:  Run the post-migration checks.
Phase 8:  Create a boot image on the client's alternate boot disk.
Phase 9:  Sync the cache data to the client's alternate rootvg via rsh.
Phase 10: Unmount and remove the cache file systems from nimadmvg.
Phase 11: Set the boot list to the alternate disk on the client.
Phase 12: Clean up the alt_disk migration on the NIM master and client to end the migration.

** Reboot the server --> # shutdown -Fr


Verification:

** Once the system is up, check the current OS level --> # oslevel -s

** Check that all filesets for the level are present --> # instfix -i | grep -i ml

** Disable RSH --> # chsubserver -d -v shell -p tcp6 -r inetd
                   # refresh -s inetd
                   # cd / ; rm .rhosts; ln -s /dev/null .rhosts


Post implementation:

# lspv | grep old_rootvg              --> find the disk holding old_rootvg
# alt_rootvg_op -X old_rootvg         --> remove the old_rootvg definition
# extendvg -f rootvg hdisk0           --> add the disk back to rootvg
# mirrorvg rootvg hdisk0              --> mirror rootvg onto it
# bosboot -ad /dev/hdisk0             --> create the boot image (run once per disk)
# bosboot -ad /dev/hdisk1
# bootlist -m normal hdisk0 hdisk1    --> set the boot list
# bootlist -m normal -o               --> confirm the boot sequence
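A couple of optional sanity checks once the mirror is rebuilt (assuming rootvg now sits on hdisk0 and hdisk1):

# lsvg -p rootvg                --> both hdisk0 and hdisk1 should be listed as active
# lsvg rootvg | grep -i stale   --> stale PPs should drop back to 0 once the sync completes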

Advantage of nimadm method:

** Reduced downtime for the client.
** The migration is executed while the system is up and running as normal.
** There is no disruption to any of the applications or services running on the client.
** Quick recovery from migration failures. All changes are performed on the rootvg copy (altinst_rootvg).
** If there are any serious problems with the migration, the original rootvg is still available and the system has not been impacted.
** If a migration fails or terminates at any stage, nimadm is able to quickly recover from the event and clean up afterwards.



How to find vhost assigned to a particular VIO Client?




Method 1:      


Using two commands/steps we can easily find the vhost serving a particular disk.


Log in to the VIO client and find out the disk's virtual slot number (for example, hdisk1):
                
         Command:>  lscfg|grep hdisk1 
        Output:>       hdisk1   U9133.55A.065040H-V21-C19  Virtual SCSI Server Adapter


Log in to the VIO server and run the command below to find the vhost behind that slot:

                          Command:>  lsdev -slots|grep C19
                            Output:>    vhost3

From the above output we can confirm that vhost3 serves hdisk1.



--------------------------------------------------------------------------------------------------------------------------



Method 2:      


* Find the LPAR ID and convert it to hexadecimal.

     For example: if your LPAR ID is 15, the hexadecimal value is "f".

The command below converts a decimal number to hexadecimal:


    printf "%x\n" 15           
output is: "f"


 * Execute the below command on the VIO server to find out the vhost info.

     lsmap -all | grep vhost | grep 0x0000000f


Note: the LPAR ID must be padded with leading zeros to eight digits after the "0x", so ID 15 becomes 0x0000000f.
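For example, printf can produce the already padded value directly (LPAR ID 15 is just an example):

    printf "0x%08x\n" 15
output is: "0x0000000f"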

The advantage of this method is that if the LPAR has, say, 6 virtual disks from the VIOS, we can see all the vhost adapters serving that LPAR on the VIOS in one shot.

--------------------------------------------------------------------------------------------------------------------------

Method 3


Using kdb we can easily trace the vscsi-to-vhost mapping from within AIX. This saves a lot of time compared with the older method of doing the same thing.

In the old method we find the slot number (such as C13 or C14) of the vscsi adapter on the client, then log in to the VIO server and look up the matching vhost for that vscsi.


#echo "cvai" | kdb | grep vscsi

read vscsi_scsi_ptrs OK, ptr = 0x59A03C0
vscsi0 0x000007 0x0000000000 0x0 vios1->vhost8
vscsi1 0x000007 0x0000000000 0x0 vios2->vhost8


Run from the client, this command shows the matching vhost for each vscsi adapter, along with the name of the VIO server that hosts it.



--------------------------------------------------------------------------------------------------------------------------

Method 4:


Log in to the LPAR, look at the volume group and note the PVID of the disk. Then log in to the VIO server and find the disk name from that PVID; from the disk name we can get the corresponding vhost, as shown below.

On the LPAR:

# lspv  --> To get the pvid of the disk


On the VIO server:

lspv|grep -i <pvid>        --> To get the disk name
lsmap -all|grep -p <disk name>   --> To get the VTD name

lsdev -dev <VTD name> -field parent   --> We can get the vhost name



--------------------------------------------------------------------------------------------------------------------------

Method 5:


As per Target's standard VTD naming convention, the VTD name contains the VIO client name and the LUN ID of the backing disk. So we can run lsmap -all | more and look for the VTDs that match the VIO client we are searching for. The output below shows that vhost3 is the server SCSI adapter for VIO client "testmachine".


$ lsmap -all | more
Physloc               U7311.D20.067DDBB-P1-C02-T1-L23

VTD                   esvdevcmsweb-42e
Status                Available
LUN                   0x9500000000000000
Backing device        hdiskpower16
Physloc               U7311.D20.067DDBB-P1-C02-T1-L19

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost3          U9133.55A.065040H-V22-C16                    0x00000003

VTD                   testmachine-16
Status                Available
LUN                   0x8700000000000000
Backing device        hdiskpower54
Physloc               U7311.D20.067DDBB-P1-C02-T1-L113

VTD                   testmachine-15
Status                Available
LUN                   0x9200000000000000
Backing device        hdiskpower30
Physloc               U7311.D20.067DDBB-P1-C02-T1-L66