Microsoft has released a new patch, KB4025334, which prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. This patch prevents possible data corruption that could occur when using Data Deduplication in Windows Server 2016. The update also remedies this issue for all applications and Windows components that leverage NTFS sparse files in Windows Server 2016.
Although this is an optional update, Microsoft recommends installing it to avoid any corruption in Data Deduplication, even though the KB doesn't provide a way to recover from existing data corruption. The reason is that NTFS incorrectly removes in-use clusters from the file, and there is no way to identify afterwards which clusters were incorrectly removed. Furthermore, this update will become a mandatory patch in the “Patch Tuesday” release cycle in August 2017.
Since this issue is hard to notice, you won’t be able to detect it by monitoring the weekly Dedup integrity scrubbing job. To overcome this challenge, the KB also includes an update to chkdsk that allows you to identify which files are already corrupted.
Identifying corrupted NTFS sparse files with chkdsk in KB4025334
- First, install KB4025334 on the affected servers and restart them. Keep in mind that if your servers are in a failover cluster, the patch needs to be applied to all servers in the cluster.
- Execute chkdsk in read-only mode (the default mode for chkdsk).
- For any possibly corrupted files, chkdsk will produce output similar to the below. Here 20000000000f3 is the file id; make a note of all the file ids in the output.
The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.
- You can then use fsutil to query the corrupted files by their ids, as in the example below.
D:\affectedfolder> fsutil file queryfilenamebyid D:\ 0x20000000000f3
- Once you run the above command, you should get output similar to the below. D:\affectedfolder\TEST.0 is the corrupted file in this case.
A random link name to this file is \\?\D:\affectedfolder\TEST.0
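The steps above are easy to script. Below is a minimal Python sketch (a hypothetical helper, not part of the KB) that scans saved chkdsk output for the "total allocated size ... is incorrect" message and builds the corresponding fsutil queries:

```python
import re

# Match the file id in lines like:
#   The total allocated size in attribute record (128, "") of file
#   20000000000f3 is incorrect.
CORRUPTION_PATTERN = re.compile(r'of file ([0-9a-fA-F]+) is incorrect')

def find_corrupted_file_ids(chkdsk_output: str) -> list:
    """Return the file ids chkdsk flagged as possibly corrupted."""
    return CORRUPTION_PATTERN.findall(chkdsk_output)

def fsutil_commands(drive: str, file_ids: list) -> list:
    """Build the fsutil queries that resolve each file id to a path."""
    return [f'fsutil file queryfilenamebyid {drive} 0x{fid}'
            for fid in file_ids]

sample = ('The total allocated size in attribute record (128, "") '
          'of file 20000000000f3 is incorrect.')
ids = find_corrupted_file_ids(sample)
print(ids)                           # ['20000000000f3']
print(fsutil_commands('D:\\', ids))
```

You could feed it the output of `chkdsk D: > chkdsk.log` and run the generated fsutil commands to get the full paths of the affected files.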
Microsoft Azure now allows file- and folder-level recovery with Azure Backup. This feature has been available in preview for both Windows & Linux VMs and is now generally available. Previously, Azure Backup allowed only VM-level recovery, and those who wanted file-level recovery had to leverage solutions like DPM or Azure Backup Server to achieve their protection goals.
Restore-as-a-Service in Azure Backup allows you to recover files or folders instantly without deploying any additional components. Azure Backup leverages an iSCSI-based approach to open/mount application files directly from its recovery points. This eliminates the need to restore the entire VM to recover files.
RaaS in Action
In the example below, I’m going to explain how you can recover files or folders from a Windows IaaS VM in Azure.
- Select the VM that you want to recover files from in the recovery services vault, under Backup items, and click the File Recovery option.
- Select the recovery point as in (1) and download the executable that allows you to browse and recover files as in (2). Run it as an administrator; you will have to provide the password as in (3) to execute the file.
- This script can be executed on any machine that has the same (or a compatible) operating system as the backed-up VM. Unless the protected Azure VM uses Windows Storage Spaces (for Windows Azure VMs) or LVM/RAID arrays (for Linux VMs), you can run the executable/script on the same VM. If it does, run the script on another machine with a compatible operating system.
| Backed-up VM OS | Compatible client OS |
| --- | --- |
| Windows | Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 |
| Ubuntu | 12.04 and above |
| CentOS | 6.5 and above |
| RHEL | 6.7 and above |
| Debian | 7 and above |
| Oracle Linux | 6.4 and above |
If you are restoring from a Linux VM, you need bash version 4 or above and python version 2.6.6 or above to execute the script.
- When you run the script, the volumes are mounted in the client OS you are using and will have different drive letters than the ones in the original VM. Make sure you identify the newly attached drives. You can view the new drives in Windows Explorer and copy the files to an alternate location.
- Finally, after restoring your files/folders, unmount the drives: click the File Recovery blade in the Azure portal and select Unmount Disks as in (4).
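Since the mounted recovery-point volumes appear under new drive letters, one simple way to spot them is to snapshot the set of letters before and after running the downloaded script. A small illustrative Python sketch (a hypothetical helper, with hard-coded sample sets):

```python
def new_drive_letters(before: set, after: set) -> set:
    """Drive letters that appeared after mounting the recovery point."""
    return after - before

# On Windows the two sets could be collected with something like:
#   {d for d in string.ascii_uppercase if os.path.exists(f'{d}:\\')}
# Here we just use sample values for illustration.
before = {'C', 'D'}
after = {'C', 'D', 'F', 'G'}
print(sorted(new_drive_letters(before, after)))  # ['F', 'G']
```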
I bet that most of you have watched the movie “Inception”, where a group of people build a dream within a dream within a dream. Before Windows Server 2016 you couldn’t deploy a VM within a VM in Hyper-V. A lot of people are/were encouraged to use VMware as it supported this capability, called “Nested Virtualization”. But with the release of Windows Server 2016 & Hyper-V Server 2016 this functionality has been introduced. This is especially useful when you don’t have a lot of hardware to run your lab environments or want to deploy a PoC system without burning thousands of dollars.
Microsoft announced support for nested virtualization in Azure IaaS VMs using the newly announced Dv3 and Ev3 VM sizes. This capability allows you to create nested VMs in an Azure VM and also run Hyper-V containers in Azure by using nested VM hosts. Now let’s have a look at how this is implemented in the Azure compute fabric.
Image Courtesy Build 2017
As you can see in the above diagram, on top of the Azure hardware layer, Microsoft has deployed the Windows Server 2016 Hyper-V hypervisor. Microsoft then adds vCPU on top of that to expose the Azure IaaS VMs that you would normally get. With nested virtualization, you can enable Hyper-V inside those Azure IaaS VMs running Windows Server 2016. You can then run any number of Hyper-V 2016 supported guest operating systems inside these nested VM hosts.
The following references from Microsoft provide more information on how you can get started with nested virtualization in Azure.
Microsoft Azure recently announced the support for large disks up to 4 TB. Now Azure Site Recovery supports protecting on-premises VMs and physical servers with disks up to 4095 GB in size to Azure. Many customers use disks with more than 1 TB in capacity for various reasons. A good example would be SQL databases and file servers. The availability of large disks in Azure allows you to leverage ASR as a DR solution for your datacenter infrastructure.
Large disks in Azure are available both in standard and premium tiers. Standard disks offer two sizes S40 (2TB) and S50 (4TB) for both managed and unmanaged disks. If you have IO intensive workloads that require premium storage you can use P40 (2TB) and P50 (4TB) for both managed and unmanaged disks.
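The tier and size choices above can be expressed as a small lookup. Below is an illustrative Python sketch (limited to the large-disk sizes named in this post, and using 4095 GB as the upper bound mentioned earlier) that picks the smallest SKU that fits a given disk:

```python
# Only the large-disk SKUs discussed above; sizes in GB.
LARGE_DISK_SKUS = {
    'standard': [('S40', 2048), ('S50', 4095)],
    'premium':  [('P40', 2048), ('P50', 4095)],
}

def pick_sku(tier: str, size_gb: int) -> str:
    """Return the smallest SKU in the tier that can hold size_gb."""
    for name, capacity in LARGE_DISK_SKUS[tier]:
        if size_gb <= capacity:
            return name
    raise ValueError(f'{size_gb} GB exceeds the 4095 GB large-disk limit')

print(pick_sku('standard', 3000))  # S50
print(pick_sku('premium', 1500))   # P40
```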
Pre-requisites for protecting VMs with large disks in ASR
You need to make sure that your on-premises ASR infrastructure components are up to date before you start protecting VMs and/or physical servers with disks greater than 1 TB in size.
| Environment | Required update |
| --- | --- |
| VMware VMs / physical servers | Install the latest update on the Configuration server, additional process servers, additional master target servers and agents. |
| SCVMM managed Hyper-V environments | Install the latest Microsoft Azure Site Recovery Provider update on the on-premises VMM server. |
| Standalone Hyper-V servers not managed by SCVMM | Install the latest Microsoft Azure Site Recovery Provider on each Hyper-V server that is registered with Azure Site Recovery. |
Note that protecting Azure VMs with large disks is not a currently supported scenario.
Microsoft has recently announced a preview for protecting Azure IaaS VMs with ASR. You can now protect Azure VMs running Windows Server 2016. ASR also now supports protecting Azure IaaS VMs that use Storage Spaces. Storage Spaces allows you to improve IO performance by striping disks and to create logical disks larger than 4 TB.
Following is a list of all supported OS versions that can be protected using ASR.
- Windows Server 2016 (Server Core and Server with Desktop Experience)
- Windows Server 2012 R2
- Windows Server 2012
- Windows Server 2008 R2 SP1 and above
- Red Hat Enterprise Linux 6.7, 6.8, 7.0, 7.1, 7.2, 7.3
- CentOS 6.5, 6.6, 6.7, 6.8, 7.0, 7.1, 7.2, 7.3
- Ubuntu 14.04/16.04 LTS Server (only supported kernel versions)
- SUSE Linux Enterprise Server 11 SP3
- Oracle Enterprise Linux 6.4, 6.5 running either the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3 (UEK3)
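A support matrix like the one above translates naturally into a quick lookup. The Python sketch below is built directly from the list in this post (version checks are simplified to exact membership, so kernel-level caveats such as Ubuntu's supported kernels are not modeled):

```python
# Supported OS versions for ASR Azure VM protection, per the list above.
ASR_SUPPORTED = {
    'Windows Server': {'2016', '2012 R2', '2012', '2008 R2 SP1'},
    'RHEL': {'6.7', '6.8', '7.0', '7.1', '7.2', '7.3'},
    'CentOS': {'6.5', '6.6', '6.7', '6.8', '7.0', '7.1', '7.2', '7.3'},
    'Ubuntu': {'14.04', '16.04'},
    'SLES': {'11 SP3'},
    'Oracle Linux': {'6.4', '6.5'},
}

def is_supported(os_name: str, version: str) -> bool:
    """Check whether an OS/version pair appears in the support list."""
    return version in ASR_SUPPORTED.get(os_name, set())

print(is_supported('CentOS', '7.2'))  # True
print(is_supported('SLES', '12'))     # False
```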
Microsoft Azure Stack is an extension of Azure that brings Azure services to your datacenter. This revolutionary product from Microsoft is a true hybrid cloud product, allowing you to deploy Azure services across both public Azure and your own local Azure environments. During the Microsoft Inspire event, Microsoft announced the general availability of Azure Stack and its road map.
Azure Stack Integrated Systems
The Azure Stack production release is delivered as a multi-server integrated system. Dell EMC, HPE & Lenovo are the incumbent OEM partners for Azure Stack and will be shipping the hardware for Azure Stack in September 2017. Cisco & Huawei will join the OEM programme in 2018.
Azure Stack Development Kit (ASDK)
Previously known as technical previews, ASDK allows you to test Azure Stack in a single-server deployment using your own hardware. This is useful to build and validate your workloads for production use in Azure Stack or for Proof-of-Concept scenarios.
I have listed some key resources that are available to help you learn and embrace the Azure Stack momentum.
MVPs Kerrie Meyler, Jacob Svendsen, Steve Buchanan, Mark Scholman and I have been working on a book on Microsoft Hybrid Cloud, which will take you through the journey of mastering the skills required to manage Azure Stack. The book is expected to be released in Q4 2017 and you can pre-order your copy via Amazon using this link.