
Tuesday, 26 January 2016

AIX disk extending - physical volumes, volume groups and mount points

I come from a Windows background and, although I have done a bit of work with AIX before - really basic stuff - to be honest I struggled with some of the disk concepts.  However, now that I have had exposure to Storage Spaces within Windows, some of the basic disk management in AIX has become a whole lot clearer to me!

A physical disk is a physical disk.  You can add physical disks to a volume group should a volume group run out of space, or you can extend one of the physical disks that already belongs to the volume group.

The volume groups in AIX are akin to a storage group in Windows Storage Spaces.

The logical volumes in AIX are akin to a storage space in Windows Storage Spaces.

So - here is how to extend a disk within AIX - assuming you have a SAN, as that is what I have worked with.

We got the message that our /other directory (mount point) had run out of space.  This can be confirmed by typing in


df -g /other       

(The g here stipulates gigabytes; you can use m for megabytes and k for kilobytes - a reminder of the heritage that AIX/Unix has - but you're more than likely going to need gigabytes)

Now type in 

lsvg | lsvg -li


This will show you all the mount points and, importantly, will show you the volume groups and logical volumes that the mount points are allocated against. Have a look at the screen grab below.

[Screenshot: output of lsvg | lsvg -li]

So on the right hand side, within the red box, you can see the mount point /other.  Over on the left hand side, on the same line as /other, you can see the logical volume name (other02_lv) that provides the mount point.  Multiple logical volumes can exist on a volume group (it's a group of volumes!).  Above other02_lv you can see othervg02: - this shows that the logical volume other02_lv is on the volume group othervg02.  In this instance we actually have a dedicated volume group per logical volume, which explains the structure you can see for the other disks in the listing.
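If you only care about one volume group, you can query it directly rather than listing everything.  The names below (othervg02, other02_lv) are just the ones from this example - substitute your own:

```
# List the logical volumes in a single volume group,
# with their mount points in the last column
lsvg -l othervg02

# And the reverse direction: details of the LV behind a mount point
lslv other02_lv
```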

So now we know the volume group it is on, we can type in

lspv

This will list the physical volumes - which you can see below.

[Screenshot: output of lspv]

You can see the name of the volume group, othervg02, on the right hand side, and now we can see the physical disk that that volume group is configured on.

On our setup we use a Hitachi SAN for the storage backend, so I need to type in

dlnkmgr view -lu

This will show the LUNs as presented from the SAN - see below.

[Screenshot: output of dlnkmgr view -lu]

For reasons I don't know, unlike on Windows and VMware, the number you see (the 0182) is not the HLUN number - it is the actual LUN/LDEV number that the SAN itself uses for management.  It doesn't change anything, but it is something to factor in when you are doing SAN management work, so that you don't delete or extend the wrong LUN.  So, in the Hitachi SNM program, just check the host group and check the size.

[Screenshot: Hitachi SNM showing LDEV 0182 and its size]

So you can see how the number 0182 correlates with 760GB of disk allocation.  Good - we have done all the tracing: we know the mount point is on a particular logical volume, that logical volume is on a particular volume group, that volume group is on a particular physical disk (which happens to be presented from a SAN), and that the physical disk has been extended on the SAN.  But what size does the physical disk in AIX think it is?  Type in this command

lspv hdisk10

[Screenshot: output of lspv hdisk10]

So we can see on the above screen grab that the physical disk is 573184 megabytes in size, or roughly 573GB, and there is 768MB free.  We know from the SAN software that the disk is larger than that - so we need to extend!  How do we do that?

Type in

chvg -g othervg02

Amazingly - this doesn't actually change anything on disk.  It just makes the volume group have a good long hard look at itself: it re-reads the disks in the volume group and picks up their new sizes.

Type in 

lspv hdisk10

again and you will now see a different amount of space available on the physical volume.

[Screenshot: output of lspv hdisk10 after chvg -g]

If you type in 

lsvg othervg02

this will show you the amount of space now available to the volume group - space that it can expand into (remember, we could instead have added more physical disks to the volume group and this would give a similar result).

[Screenshot: output of lsvg othervg02]

So you can see the command that we have entered at the top and, on the right in the green box, the amount of space free.
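lsvg reports space in physical partitions (PPs), so a quick bit of arithmetic converts the FREE PPs count into gigabytes.  The numbers below are hypothetical (a 64 MB PP size with 3200 free PPs), not the values from the screen grab:

```shell
# Convert free PPs to gigabytes: PP size (MB) x free PPs / 1024
pp_size_mb=64     # from the "PP SIZE:" field of lsvg
free_pps=3200     # from the "FREE PPs:" field of lsvg
echo "$(( pp_size_mb * free_pps / 1024 ))GB free"
```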

So now we just type in

chfs -a size=+200G /other

We are telling the /other mountpoint to increase in size by 200GB.  You should get a nice little response saying that the filesystem has changed size.
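The leading + makes the change relative; chfs also accepts an absolute size.  Both forms below are sketches using this walkthrough's mount point - substitute your own:

```
# Grow /other by 200GB (relative)
chfs -a size=+200G /other

# Or set /other to an absolute size
chfs -a size=760G /other
```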

Type in

lsvg othervg02

again and now we get this response.

[Screenshot: output of lsvg othervg02 after chfs]

The number of Free PPs has gone down and the number of Used PPs (the field below it) has gone up.

If we type in

df -g /other    (this was the very first command we typed in at the beginning)

we will get a different response from the one we got at the beginning, with more space being reported.

Job done.
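To recap, the whole trace-and-extend procedure boils down to a handful of commands.  This is a sketch using the names from this walkthrough (othervg02, hdisk10, /other) - substitute your own:

```
df -g /other            # confirm the filesystem is full
lsvg | lsvg -li         # map mount point -> logical volume -> volume group
lspv                    # map volume group -> physical disk
dlnkmgr view -lu        # (Hitachi HDLM) map disk -> SAN LUN/LDEV
# ...extend the LUN on the SAN...
chvg -g othervg02       # make the volume group re-read its disks
lspv hdisk10            # confirm the extra space is visible
chfs -a size=+200G /other   # grow the filesystem
df -g /other            # confirm the new size
```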






Tuesday, 19 January 2016

After upgrade to vCenter 5.5 Update3b on the appliance, SRM no longer works - How to fix it!

You have upgraded to vCenter 5.5 Update3b on the VMware appliance and SRM has stopped working.  You know this is because of SSLv3 (you have read the upgrade notes after all!) - but you need to upgrade because of updates and security etc - but you can't upgrade to vCenter 6 as your backup product does not support vCenter 6 - what can you do?

You can still upgrade!

Upgrade as you would normally and vCenter replication will still continue, but your SRM management will fail.  When you use the vSphere client you will get error messages saying it cannot communicate with the SRM server.  If you try to install, modify, or upgrade your installation on the SRM server you will get this error message.

Internal error: unexpected error code: -1

Fortunately, VMware knowledge base article 2139396 has the answer contained within it - but which option should you use?  It's the VMware Virtual Center Server (vpxd) - Port 443 option.


VMware Virtual Center Server (vpxd) - Port 443

To enable SSLv3:
  1. Open the vpxd.cfg file:

    • Windows default location: C:\ProgramData\VMware\VMware VirtualCenter\vpxd.cfg
    • vCenter Server Appliance default location: /etc/vmware-vpx/vpxd.cfg
  2. Create a backup copy of the file.
  3. Edit the file to add or remove <sslOptions>16924672</sslOptions> to enable or disable SSLv3 respectively:

    <vmacore>
    <cacheProperties>true</cacheProperties>
    <ssl>
    <useCompression>true</useCompression>
    <sslOptions>16924672</sslOptions>
    </ssl>
    <threadPool>
    <TaskMax>90</TaskMax>
    <threadNamePrefix>vpxd</threadNamePrefix>
    </threadPool>
    </vmacore>

  4. Save the file.
  5. Restart the vpxd service.  On the appliance, do this by typing in: service vmware-vpxd restart
  6. To disable SSLv3, ensure that the sslOptions is not set in the vpxd.cfg file.
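On the appliance, steps 1-5 can all be done from the console.  A sketch of that session - back up first, make the edit by hand (the sslOptions line goes inside the existing <ssl> section), and check the file before restarting:

```
# Back up the config first
cp /etc/vmware-vpx/vpxd.cfg /etc/vmware-vpx/vpxd.cfg.bak

# Add <sslOptions>16924672</sslOptions> inside the <ssl> section
vi /etc/vmware-vpx/vpxd.cfg

# Restart the vCenter service so the change takes effect
service vmware-vpxd restart
```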