Friday, 27 February 2015

How to convert Windows 2012R2 Standard to Datacenter

There are now few reasons to need Datacenter, as Standard supports a lot of RAM out of the box.  The main thing that Datacenter does offer is hot-add of RAM and CPU - useful for database servers or web servers where it is difficult to negotiate downtime to bring down a system.
 
A system can be upgraded to Datacenter without losing its software configuration.  It takes about 10-15 minutes and requires two reboots.
 
Log on to the Windows server you want to work on and start a command prompt with admin privileges.
 
Type in:
 
dism /online /Get-TargetEditions
 
This will show you the editions you can upgrade to - it is a useful little check that the OS you are working on will do what you are expecting it to do.
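On a Standard edition server, the output will look something like this (version and build numbers will vary - treat this as an illustrative sample, not exact output):

```
Deployment Image Servicing and Management tool
Version: 6.3.9600.17031

Image Version: 6.3.9600.17031

Editions that can be upgraded to:

Target Edition : ServerDatacenter

The operation completed successfully.
```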
 
If all is good and you want to go to Datacenter, then type in
 
dism /online /Set-Edition:ServerDatacenter /ProductKey:W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9 /AcceptEula
 
The product key is just the generic KMS client setup key that tells the server to look for a KMS server to activate against.  If you are working on a server in the DMZ, then you will need to use a MAK key and the telephone activation system.
 
You can then add the extra RAM and CPU as required.  If you are doing this work on a live SQL server you will need to run commands for the SQL process to 'see' the extra hardware resources - which will be another blog!

How to configure a DMZ server to use your corporate licensing server

Domain-joined servers find the KMS server for operating system and Microsoft Office licensing via DNS.  However, servers in the DMZ do not have access to that, so some manual configuration is required.

You will need to ensure that TCP port 1688 is allowed through from your DMZ environment into your normal production environment so that activation can actually take place.
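Before touching the hosts file, it is worth confirming that the firewall rule is actually in place.  A quick way is to test a TCP connection on port 1688 from the DMZ side - here is a minimal Python sketch (the host name in the example is hypothetical; substitute your own KMS server, and note that Telnet or Test-NetConnection do the same job):

```python
import socket

def kms_reachable(host, port=1688, timeout=5):
    """Return True if a TCP connection to host:port succeeds.

    1688/TCP is the default KMS port - substitute your own KMS
    server's name and port if they differ.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host name):
# kms_reachable("kms.example.internal")
```

If this returns False from the DMZ server, fix the firewall rule before going any further - the hosts-file change alone will not help.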
 
On the DMZ server in question, edit the local hosts file and add the following two lines - you will need the IP address of your KMS licensing server and its host name.
 
<your IP address>    <The fully qualified host name>
<your IP address>   <host name>
 
Save and exit.
 
Then from a command prompt with admin privileges, type in
 
slmgr.vbs /skms <The fully qualified host name>:1688
 
 
Check that the license has activated:
 
slmgr.vbs /dli
 
If not, force activation and then check again to confirm:
 
slmgr.vbs /ato
slmgr.vbs /dli

How to monitor VMware SSD disk performance

At the time of writing, February 2015, there are no built-in graphs for monitoring the cache hit ratio performance.
 
You need to use esxcli against the host that the VM is currently running on.

Enable SSH on the host and connect with your SSH client - probably PuTTY! :)
 
Log in and type
 
esxcli storage vflash cache list
 
 
That will provide you with a list of the VMDKs that have SSD caching enabled against them.  The cache names are usually vfc-<unique number reference>-<servername><VMDK number>
 
So now type in
 

esxcli storage vflash cache stats get -c <That string of stuff>
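If you have several cached VMDKs to check, a small script can build the stats commands for you.  This is my own sketch, assuming the list output gives one vfc-… cache name per line (check the real output format on your own host first):

```python
def vfrc_stats_commands(cache_list_output, server=None):
    """Turn the output of 'esxcli storage vflash cache list' into a
    list of 'stats get' commands, one per cache.

    Assumes one cache name per line, each starting 'vfc-', e.g.
    vfc-123456789-web01_1 (a made-up example).  Optionally keep only
    caches whose name contains a given server name.
    """
    commands = []
    for line in cache_list_output.splitlines():
        name = line.strip()
        if not name.startswith("vfc-"):
            continue  # skip blank lines and any header text
        if server and server not in name:
            continue
        commands.append("esxcli storage vflash cache stats get -c " + name)
    return commands
```

Paste the list output in, then run each generated command on the host to pull the per-cache statistics.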

Thursday, 19 February 2015

Virtual Flash Read Cache - Fantastic - but something odd today... massive disk latency

We have been gently adding Flash Read Cache to selected VMDKs and we have been getting excellent results.  In the main, we have allocated it to some of our virtual Citrix servers, and we have noticed a decrease in the latency reported on the Citrix VMs and a reduction in the CPU utilisation on our SAN.

Our NetBackup master server is also a virtual server, and we thought that with all those data reads, adding vFRC to its VMDKs at the 20% ratio that VMware recommends would only improve things.

Not so!

Over the morning, more and more VMs, across different datastores and different IO architectures, reported disk latencies.  On a couple of VMs we were getting disk latencies of over 22500 milliseconds (yes, 22 and a half seconds!).

It took a bit of digging before we thought of the only major disk IO change we had made.  We removed the vFRC from the NetBackup master server and, over about 10 minutes, everything started behaving properly again!

So - we have solved the problem - but we are still puzzled as to why that would be the case.  If anyone has any thoughts, do let me know!

How to remove deduplication on a Windows 2012R2 server - hold onto your hats!

We ended up having to remove deduplication from a few volumes on our Windows 2012R2 server, as NetBackup could not, in a few circumstances, restore data.  This was despite us checking the compatibility and, of course, completing some tests.

Fortunately, we had replicated the server over to our DR site using VMware's SRM product, so we were able to test this next step before we did it, gulp, on the live environment.


First things first - DO NOT UNINSTALL THE DEDUPLICATION SERVICE ON THE WINDOWS SERVER - and read through all of this before you start the work!

Then start PowerShell and type in

Start-DedupJob -Volume <VolumeLetter>: -Type Unoptimization



Note the spelling of Unoptimization - I want to type it with an S - I'm never sure which way round is correct in this trans-Atlantic world that we live in today.

The unoptimization will start its work.  It will take a long time.  It will increase disk IO.  And despite you checking your data size against your unoptimised size, you will need to increase the disk size allocated to your Windows server.  The unoptimisation creates a large data footprint on the same volume but does tidy up after itself, and you can shrink the volumes afterwards.  We didn't change the disk size on our practice server and it just worked over many hours.  However, on the live environment, which we monitored during the working day, we could see the remaining disk space just being eaten up.
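As a rough back-of-the-envelope check before you start (my own arithmetic, not an official Microsoft formula): the job needs room for the fully rehydrated data alongside the existing chunk store until it tidies up, so compare your free space against the difference between the logical size and the on-disk size (Get-DedupStatus will give you the saved-space figures to derive these):

```python
def rehydration_shortfall(logical_bytes, physical_bytes, free_bytes):
    """Rough estimate of how much extra free space an unoptimization
    job will need on the volume.

    logical_bytes  - data size as applications see it
    physical_bytes - space the deduplicated data occupies on disk
    free_bytes     - current free space on the volume

    Returns the shortfall in bytes (0 if the volume looks big enough).
    Crude sketch only - leave yourself plenty of headroom on top.
    """
    extra_needed = logical_bytes - physical_bytes  # space rehydration adds
    return max(0, extra_needed - free_bytes)

# Example (made-up numbers): 2000 GB logical, 1200 GB on disk,
# 500 GB free -> 300 GB short
```

If this comes back non-zero, grow the disk first - it is far less painful than running out of space mid-job.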

You can check how it is all going by typing in

Get-DedupJob

It will stay at 0% for ages - hours even.  Then it will jump to 50%, 60% and through the remaining percentages over an hour or so.

This does work and we haven't lost data.  We only did it because we couldn't guarantee the data protection for the backups - but that made it scary to do on the live environment (despite checking it in test, which was a duplicate of live), as if it all went wrong we didn't have a backup to go back to.

But it worked!

Hope it helps others.