Category Archives: Hyper-V

VM Inception | Nested Virtualization in Azure

I bet that most of you have watched the movie “Inception”, where a group of people build a dream within a dream within a dream. Before Windows Server 2016 you couldn’t deploy a VM within a VM in Hyper-V, and many people were encouraged to use VMware, which already supported this capability, called “Nested Virtualization”. With the release of Windows Server 2016 & Hyper-V Server 2016 this functionality has been introduced. It is especially useful when you don’t have a lot of hardware to run your lab environments or want to deploy a PoC system without burning thousands of dollars.

Microsoft announced support for nested virtualization in Azure IaaS VMs with the newly announced Dv3 and Ev3 VM sizes. This capability allows you to create nested VMs inside an Azure VM and also to run Hyper-V containers in Azure by using nested VM hosts. Now let’s have a look at how this is implemented in the Azure compute fabric.

Image Courtesy Build 2017

As you can see in the above diagram, on top of the Azure hardware layer Microsoft has deployed the Windows Server 2016 Hyper-V hypervisor. Microsoft then adds virtual CPUs on top of that to expose the Azure IaaS VMs that you would normally get. With nested virtualization, you can enable Hyper-V inside those Azure IaaS VMs running Windows Server 2016, and then run any number of Hyper-V 2016-supported guest operating systems inside these nested VM hosts.
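As a rough sketch of what this enables in practice: once you have deployed a Dv3/Ev3 VM running Windows Server 2016, you can install the Hyper-V role inside it and set up a NAT network so the nested guests get outbound connectivity. The switch name, NAT name and address range below are just placeholder choices.

# Run inside the Azure Dv3/Ev3 VM: enable the Hyper-V role and reboot
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the reboot: create an internal switch and a NAT network
# so the nested VMs can reach the outside world
New-VMSwitch -Name "NestedSwitch" -SwitchType Internal
New-NetIPAddress -IPAddress 192.168.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NestedSwitch)"
New-NetNat -Name "NestedNAT" -InternalIPInterfaceAddressPrefix 192.168.0.0/24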

The following references from Microsoft provide more information on how you can get started with nested virtualization in Azure.

VM Version Upgrade | Windows Server 2016 & Windows 10

If you have recently upgraded your datacentre infrastructure to Windows Server 2016 (or your client device to Windows 10), you can benefit from the latest Hyper-V features on your virtual machines by upgrading their configuration version. Before you upgrade to the latest VM version, make sure:

  • Your Hyper-V hosts are running the latest version of Windows or Windows Server and you have upgraded the cluster functional level.
  • You are not going to move the VMs back to a Hyper-V host that is running a previous version of Windows or Windows Server.

The process is fairly simple and involves only four steps. First check the current VM configuration version.

  • Run Windows PowerShell as an administrator.
  • Run the Get-VM cmdlet as below and check the versions of the Hyper-V VMs. Alternatively, the configuration version can be obtained by selecting the virtual machine and looking at the Summary tab in Hyper-V Manager.

Get-VM * | Format-Table Name, Version

  • Shut down the VM.
  • Select Action > Upgrade Configuration Version. If you don’t see this option for a VM, it’s already at the highest configuration version supported by that particular Hyper-V host.

If you prefer PowerShell you can run the below command to upgrade the configuration version.

Update-VMVersion <vmname> 
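If all the VMs on a host need upgrading, Get-VM can be piped straight into Update-VMVersion; a minimal sketch, assuming every VM has already been shut down:

Get-VM | Update-VMVersion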

Introducing Technical Preview 4 | Windows Server 2016 & System Center 2016

With the dawn of 2016 almost upon us, Microsoft has released another build of its upcoming Windows Server & System Center 2016 suite of products. Technical Preview 4 contains many new advancements and fixes based on customer feedback, clearly making its way as the cloud OS for the next generation of computing.

Nano Server gets a new touch

Nano Server, a headless installation option like Server Core that is going to be one of the installation options for Windows Server 2016, has improved a lot since the last preview. In this release the IIS & DNS Server roles can be installed on Nano Server in addition to the existing Hyper-V & Scale-Out File Server features.

Introducing Hyper-V Containers

Providing an additional layer of isolation for Windows Containers, Hyper-V containers can now be deployed as virtual sandboxes to host application workloads. This technology utilizes the nested virtualization capability introduced in Windows Server TP4. You can use both Docker & PowerShell to create, deploy and manage Windows Containers.
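As a simple illustration, with the Docker client the isolation level is picked at run time; the image name below is just a placeholder:

# Run a container inside a Hyper-V isolation boundary instead of a shared-kernel one
docker run --isolation=hyperv -it <imagename> cmd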

System Center 2016 Improvements

Another milestone is the System Center 2016 TP4 release with some awesome features for private cloud management. You can now use the SCOM agent to monitor your Nano Servers in TP4. SCCM 2016 TP4 has introduced new functionality to improve the Windows 10 deployment experience via SCCM.

  • Mobile Device Management (MDM): enhanced feature parity with Intune standalone – Many of the MDM features that are supported via Intune standalone (cloud only) are also enabled for Configuration Manager integrated with Intune (hybrid) in this release.

  • Integration with Windows Update for Business – Now you can view the list of devices that are controlled by Windows Update for Business.

  • Certificate provisioning for Windows 10 devices managed via on-premises mobile device management

You can download Windows Server 2016 Technical Preview 4 & System Center 2016 Technical Preview 4 evaluation bits from here.

Retrospect | System Center Universe Europe 2016

I love to travel and I love sharing my technical knowledge with the community. Here in Sri Lanka I contribute to a couple of technical forums as a community leader and a speaker. I’ve always been passionate about technology and I seize any opportunity to serve the community.

In March this year I had the pleasure of presenting as a speaker at SCU APAC in Singapore. Two weeks back I travelled to Basel, Switzerland to present at the SCU Europe conference on the invitation of my MVP friend Marcel Zehner.

SCU Europe is a 3-day technical conference delivered by great speakers from around the globe. The conference is focused not only on System Center, Microsoft Azure and Windows Server technologies but also on various other Microsoft technologies. itnetX AG of Bern, a prominent SI company in Switzerland, organizes this conference annually.

I delivered two breakout sessions on Containers & Open Source adoption in Microsoft Azure & System Center. It’s always a pleasure to meet Open Source enthusiasts in the Microsoft world, as I’m a true believer in heterogeneity.

It was also a great opportunity for me to meet all my European MVP friends in person and catch up with them. We had a great time at Bar Rouge during the Speakers Party, and it was great to connect with community peers during the Network Event.

I must really thank Marcel for having me at SCU Europe. The itnetX team has again done a wonderful job organizing this conference and I’m very glad that I took part in it.

SCU Europe 2016 will be held in Berlin, Germany from 24 – 26 August 2016. Although I have a busy schedule in August, I’m looking forward to being there again.

Meet all the rock stars who presented at SCU Europe. Session recordings and slide decks will be available on Channel 9 soon.

Library Server Failure in SCVMM 2012 R2

A few days back I was working with my colleague Law Wen Feng on an SCVMM-managed Hyper-V cluster. The idea was to update the environment from SCVMM 2012 R2 UR2 to UR7. We noticed a strange issue where the Library Server (the VMM server itself) was complaining about a refresh failure. It seemed the VMM agent was no longer functioning properly on the VMM management server.

WinRM Issue (1)

As a poor man’s alternative we removed the library server from VMM. Then we tried to re-add the same VMM server as a library server, which resulted in bizarre output. VMM likewise rejected another file share on a different server which we were hoping to add as an alternative.

WinRM Issue (2)

The error read as if the VMM agent was no longer functional on the target server. But it was indeed running without any issue.

WinRM Issue (3)

WinRM Issue (4)

I reached out to my fellow MVP colleagues Kristian Nese, Stanislav Zhelyazkov & Daniel Neuman for some suggestions. They suggested that we re-associate the VMM agent with the VMM server. Yes, it sounds like a chicken-and-egg situation, but this is no ordinary Hyper-V host; it is the VMM server itself.

The Register-SCVMMManagedComputer cmdlet can be used to re-associate a managed computer on which the VMM agent software is installed with a different VMM management server. But here we chose to register it to the same VMM server.
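In our case the call looked roughly like the following; the server FQDN is a placeholder, and -Credential should be an account with administrative rights on the target computer:

Register-SCVMMManagedComputer -ComputerName "vmm01.contoso.com" -Credential (Get-Credential)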

WinRM Issue (5)

Now it was complaining that WinRM was no longer functional. For those who are not familiar, WinRM is a necessary component for remotely managing Windows Server. By default, SCVMM takes care of enabling and running the WinRM service during installation. Rebuilding the VMM server while retaining the DB was not an option, as we were in the middle of preparing a demo lab and, as I always believe, we needed to get to the bottom of it.

The evil WinRM GPO

We checked the GPO settings for the domain and found out that WinRM settings were forced onto all computers in our domain by a GPO. We moved the VMM server to a test OU, disabled inheritance for that particular GPO and, guess what, after a gpupdate /force on the VMM server we were able to add the library server back again.

WinRM Issue (6)

Is that all? No, it is not.

But I suspected that couldn’t be the only solution, or the real issue. After some digging into the default WinRM behavior in SCVMM, I noticed that in fact a configuration item had been missed in the GPO itself.

According to Microsoft, there are some considerations for WinRM when you add a Hyper-V host to a VMM environment. The following has been extracted from the TechNet article; the highlighted section focuses on configuring WinRM listeners for both IPv4 & IPv6.

If you use Group Policy to configure Windows Remote Management (WinRM) settings, understand the following before you add a Hyper-V host to VMM management:

  • VMM supports only the configuration of WinRM Service settings through Group Policy, and only on hosts that are in a trusted Active Directory domain. Specifically, VMM supports the configuration of the Allow automatic configuration of listeners, Turn On Compatibility HTTP Listener, and Turn on Compatibility HTTPS Listener Group Policy settings. VMM does not support configuration of the other WinRM Service policy settings.
  • If you enable the Allow automatic configuration of listeners policy setting, you must configure it to allow messages from any IP address. To verify this configuration, view the policy setting and make sure that the IPv4 filter and IPv6 filter (depending on whether you use IPv6) are set to *.
  • VMM does not support the configuration of WinRM Client settings through Group Policy. If you configure WinRM Client Group Policy settings, these policy settings may override client properties that VMM requires for the VMM agent to work correctly.

I had a look at the Allow automatic configuration of listeners policy setting under the Computer Configuration\Administrative Templates\Windows Components\Windows Remote Management node in the GPO, and the IPv6 filter was set to null. We changed it to accept messages from any IP address by putting an asterisk (*). Of course, IPv6 was enabled on all Hyper-V hosts and the VMM server by default.
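To double-check what a host actually ends up with after a gpupdate, you can enumerate the WinRM listeners directly; both of the commands below list the active listeners, so you can confirm that the GPO has created a listener bound to all addresses (Address = *):

winrm enumerate winrm/config/listener

Get-ChildItem WSMan:\localhost\Listener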

WinRM Issue (7)

Now it was about time to move the VMM server back to its original OU with the GPO applied and execute a gpupdate /force. Surprisingly, it did the trick. We were able to re-add the library server (the VMM server) and a couple of other file shares as library shares without any issue.

WinRM Issue (8)

Amazing, isn’t it? We may never gaze upon TechNet for such trivial issues when they happen, but it was worth all the trouble to avoid rebuilding the VMM server. I must thank everyone who helped by sharing their ideas to sort this issue out. That is what I love about the community: when all is lost somewhere far away in the world, there will always be good people to help you out.

CSV Access Redirected in Hyper-V Cluster

I’ve been working with Hyper-V for quite some time. During a recent Hyper-V cluster deployment that my colleague Hasitha Willarachchi (Enterprise Client Management MVP) and I were working on, we came across an issue that was really interesting to troubleshoot.

For some odd reason, one of the three Cluster Disks in a 3-node Hyper-V 2012 R2 cluster was in Redirected Access status.

CSV GFI Filter 1

When we were going through the cluster events we noticed a bunch of Event ID 5125 entries complaining about an active system filter driver that is not compatible with CSV. Basically, I/O access to that volume had been redirected through another Hyper-V node.

CSV GFI Filter 2

We tried changing the ownership of the particular CSV to another node, followed by trying to turn off Redirected Access mode by right-clicking the CSV and selecting that option. Changing the ownership was no success and, to our surprise, the operation to turn off Redirected Access mode always failed with a Set Operation Failed error.

After doing some research we decided to check the CSV state and see which system filters were active on that particular volume. So we ran the commands below on the node currently owning the CSV.

CSV GFI Filter 3

We noticed a filter called esecdrv60 with a frame value of Legacy. The next command confirmed that CSV access was redirected on all three nodes. We then immediately checked the rest of the nodes with the fltmc instances command and found that the same legacy filter was present there as well.
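For reference, a minimal sketch of the checks we ran on each node; the CSV name is a placeholder for your cluster’s disk name:

# List the filter drivers attached on this node (look for a Frame value of Legacy)
fltmc instances

# Show the CSV I/O mode on every node (StateInfo should normally be Direct, not FileSystemRedirected)
Get-ClusterSharedVolumeState -Name "Cluster Disk 1"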

The Culprit aka GFI EndPoint Security

The esecdrv60 filter actually belongs to the GFI EndPoint Security software, which was installed and running on all three Hyper-V nodes. The software had been pushed through its default policies, and somehow the Hyper-V cluster was not excluded from the deployment list.

CSV GFI Filter 4

Uninstalling GFI was not possible locally, so we worked with the GFI administrator to uninstall the software from all three hosts. Remember, uninstalling GFI requires a reboot, and therefore we had to live migrate all the VMs and reboot one server at a time.
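A rough sketch of that per-node cycle with the Failover Clustering cmdlets; the node name is a placeholder:

# Drain the node (live migrates the VMs away), then reboot it
Suspend-ClusterNode -Name "HV01" -Drain
Restart-Computer -ComputerName "HV01" -Force

# Once it is back up, resume the node in the cluster
Resume-ClusterNode -Name "HV01" -Failback Immediate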

After uninstalling GFI and rebooting all three hosts we executed fltmc instances again to see whether the GFI legacy filters were still present. As you can see below, all the legacy filters were gone and the CSV was back to normal operation mode without any error.

CSV GFI Filter 5

The following references were really helpful in identifying and rectifying the issue.

  1. Troubleshooting ‘Redirected Access’ on a Cluster Shared Volume (CSV)
  2. Cluster Shared Volume Diagnostics

Replication Failure in Azure Site Recovery

Azure Site Recovery is a great product for those who want to set up their DR environment at a minimal cost. It is based on Hyper-V Replica technology for Hyper-V workloads and supports replicating VMware & physical server workloads to DR as well. Today I’m going to discuss a common issue you can encounter when enabling ASR replication to the cloud.

I’ve been working on an ASR setup over the past couple of months and encountered a strange issue when I enabled replication on the protected VMs.

The enable protection job failed with the below error.

Job ID: f9f84765-b18c-4002-96a4-d420dfb76ea6-2015-05-14 10:00:29Z

Start Time: 5/14/2015 3:30:29 PM

Duration: 10 MINUTES

Protection couldn’t be enabled for the virtual machine. (Error code: 70094)

Provider error: Unable to complete the request. Operation on the <Hyper-V Node>  timed out.

Try the operation again. (Provider error code: 2924)

Possible causes: Protection can’t be enabled with the virtual machine in its current state. Check the Provider errors for more information.

Recommendation: Fix any issues in the Event Viewer logs (Applications and Service Logs – MicrosoftAzureRecoveryServices) on the Hyper-V host server. If this virtual machine is enabled for replication on the Hyper-V host, disable this setting. Then try to enable protection again.

UTC Time: Thu May 14 2015 10:15:59 GMT+0530 (Sri Lanka Standard Time)

Browser: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36

Language: en-us

Portal Version: 5.4.00298.11 (rd_auxportal_stable.150511-1702)

PageRequestId: a04f08ed-8932-43f2-95bc-2faab60ed958

Email Address: xxxxxx@outlook.com (MSA)

Subscriptions: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx

On the particular Hyper-V host, the following error had been logged in the event logs.

Enable replication failed for virtual machine ‘XXXXXX’ due to a network communication failure. (Virtual Machine ID 807780f6-bb7c-48d5-937d-4857a654dec3, Data Source ID 2256321007502018113, Task ID 8c1a5d7d-0693-4d6b-9243-37cc5e96a7d6)

This ASR setup was an on-premises to cloud scenario with a single SCVMM server.

After spending a good number of troubleshooting hours I finally figured out what went wrong. The Hyper-V hosts themselves need Internet connectivity to replicate the VMs to ASR. If you cannot enable direct Internet connectivity on the Hyper-V hosts, you should do so via a proxy. You can change the proxy settings in the ASR Provider on each Hyper-V host.

ASR replication requires traffic to be sent over port 443 (SSL), and in my case only the SCVMM server was configured with Internet access. If you are using a proxy server you may need to allow the below URLs for successful replication (a quick reachability check follows the list).

  • *.hypervrecoverymanager.windowsazure.com
  • *.accesscontrol.windows.net
  • *.backup.windowsazure.com
  • *.blob.core.windows.net
  • *.store.core.windows.net
  • Allow the IP addresses in Azure Datacenter IP Ranges and the HTTPS (443) protocol. Your IP address whitelist should also contain the ranges of your primary region and the West US IP address ranges.
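As a quick sanity check from a Hyper-V host, you can test whether the endpoints are reachable over port 443; the FQDN below is a placeholder, since the actual hostnames are region-specific:

Test-NetConnection -ComputerName "<region>.hypervrecoverymanager.windowsazure.com" -Port 443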

Docker Client for Windows is here

Last year Microsoft partnered with Docker Inc. to deliver the next generation of applications: containers. As a result of this journey towards heterogeneous apps, Microsoft released the GA version of the Docker CLI for Windows last week. As of today, using this tool you can manage Linux containers hosted in Azure or in your own VMs straight from your Windows desktop. Microsoft plans to introduce its own container technologies, described below.

Windows Server containers

The idea behind this container type is similar to Linux container technology. Containers are isolated, but they share the OS kernel and, where appropriate, bins/libraries. Simply put, we are talking about OS virtualization, where each application doesn’t need its own OS instance.

Hyper-V Containers

Using Microsoft Hyper-V technology, these containers are fully isolated from the OS itself by running on the hypervisor layer. This ensures that one container has no impact on its host or on any other containers in the same system. Even though these containers run inside a hypervisor, there is no restriction on container deployment: you can deploy containers that you targeted for Windows Server as Hyper-V containers, and vice versa, without any modification.

Nano Server

Microsoft’s Nano Server is the Windows version of Red Hat’s Atomic Host, an OS designed to run containers in the cloud. This version of Windows has no GUI stack, no 32-bit support (WOW64) and no MSI, and a number of default Server Core components have also been taken off. Local logon and Remote Desktop have been removed as well, and managing a Nano Server can be done only via WMI and PowerShell. As per Microsoft, Nano Server has a 93% lower VHD size, 92% fewer critical bulletins and, most importantly, 80% fewer reboots.

Installing Docker CLI in Windows

There are currently two supported methods for installing the Docker CLI on Windows.

Boot2Docker

Boot2Docker will install a tiny Linux VM running on VirtualBox (yes, you will have to disable the Hyper-V engine for this). It uses a lightweight Linux distro called Tiny Core Linux, specifically designed to run Docker containers. You can download the Windows version from here.

Chocolatey

This is a machine package manager built for Windows. Think of it as YUM or apt-get for Windows. Installation is rather simple; let’s see how we install the Docker CLI using this method. You can visit their website for more information on all supported packages other than Docker.

  • Open a Command Prompt as admin and execute the below command.

C:\>@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

  • Once it finishes, open a PowerShell session as an administrator and set the execution policy to at least Bypass. Then type the below command to proceed.

PS C:\>iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

  • Now it’s time to install the Docker CLI. Using either PowerShell or Command Prompt, execute the below command.

C:/>choco install docker

  • To upgrade the Docker client later, type choco upgrade docker. A quick usage sketch follows below.
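Once the client is installed, you can point it at a remote Linux Docker host; the hostname and port below are placeholders and depend on how the daemon is exposed (2376 is the conventional TLS port):

C:\>docker -H tcp://<dockerhost>:2376 --tls info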

Introducing Linux Integration Services 4.0 Preview for Microsoft Azure

On the Hyper-V platform, integration services, or rather the device drivers for virtualized hardware, play a vital role. The purpose of these services is to enhance the functionality of VMs to get maximum performance on par with an actual physical server. Microsoft recently announced the availability of LIS 4.0 for Azure VMs as an early preview.

This preview version of LIS supports the CentOS 6.0-6.6 and 7.0-7.1 64-bit editions running on Azure VMs and introduces the below additional functionality to Azure Linux IaaS VMs.

  • CentOS versions 6.6 through 7.1 are now supported.
  • Dynamic Memory – Hot Add for the above CentOS releases, which allows you to dynamically increase the amount of memory available to a running VM.

You can download and install the LIS package from this official Microsoft link. For a list of features offered by Integration Services for Linux & FreeBSD, refer here.

Upgrading Linux Integration Services on Azure Linux VMs

The following procedure needs to be performed as a superuser or a user in the sudoers list.

  • Verify the Linux version first by running the below command.

# cat /etc/centos-release

  • SCP (secure copy) the lis4.tar.gz file to the target VM. You can use PuTTY for this.
  • Extract the tar file by executing

# tar xvzf lis4.tar.gz

  • Change into the appropriate release directory inside the lis4 directory, where <XX> is the version obtained earlier.

# cd lis4/CentOs<XX>

  • Execute the upgrade script and reboot the VM.

# ./upgrade.sh

# reboot
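After the reboot, you can verify that the updated drivers are loaded; hv_vmbus is the core LIS kernel module:

# modinfo hv_vmbus | grep -i version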

Protect your Private Cloud with 5Nine Cloud Security

When it comes to virtualization, a lot of people start asking questions about how they can secure their environment against security threats. Installing an AV solution inside the individual VMs looks like the correct answer, but what happens in the case of a network-related security threat? Let’s explore the best answer to these issues in the Hyper-V context.

5nine Cloud Security is an agentless security solution for Hyper-V which leverages the Hyper-V extensible switch capabilities. This solution is capable of providing VM isolation, compliance and antivirus features.

5Nine also offers firewall, AV & IDS functions out of the box. The most important thing is that it is an agentless solution: you do not install any agent inside the VMs to achieve these goals.

For hosters using Windows Azure Pack, 5Nine offers an Azure Pack extension which allows them to bring true IDS capabilities to their tenants. As the number of tenants increases, security becomes the number one concern of any hoster. On top of that, the 5Nine Cloud Security SCVMM plugin lets you deploy all these features via SCVMM if you are focused on managing your own environment through SCVMM, making it easier to integrate both solutions.

All these features come at an attractive price of $199 per 2 CPUs per host. If you are interested you can visit www.5nine.com for more information. Below is a short demonstration of what 5Nine Cloud Security can do to protect your Hyper-V hosts, private cloud or service provider cloud.

In a future post I’m going to discuss how to configure 5Nine Cloud Security to protect your Microsoft virtualization solution.