Category Archives: Windows Server

Data corruption issue with NTFS sparse files in Windows Server 2016

Microsoft has released a new patch, KB4025334, which prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. This patch prevents possible data corruption that could occur when using Data Deduplication in Windows Server 2016. The update also prevents this issue in all other applications and Windows components that leverage sparse files on NTFS in Windows Server 2016.

Although this is an optional update, Microsoft recommends installing this KB to avoid corruption in Data Deduplication, even though the KB doesn’t provide a way to recover from existing corruption. The reason is that NTFS incorrectly removes in-use clusters from the file, and there is no way to identify afterwards which clusters were incorrectly removed. Furthermore, this update will become a mandatory patch in the “Patch Tuesday” release cycle in August 2017.

Since this issue is hard to notice, you won’t be able to detect it by monitoring the weekly Dedup integrity scrubbing job. To overcome this challenge, the KB also includes an update to chkdsk which allows you to identify which files are already corrupted.

Identifying corrupted NTFS sparse files with chkdsk in KB4025334

  • First, install KB4025334 on the affected servers and restart them. Keep in mind that if your servers are in a failover cluster, this patch needs to be applied to all the servers in the cluster.
  • Execute chkdsk in read-only mode, which is its default mode.
  • For any possibly corrupted files, chkdsk will provide output similar to the line below, where 20000000000f3 is the file id. Make a note of all the file ids in the output.
The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.
  • Then you can use fsutil to query the corrupted files by their ids, as in the example below (a scripted variant for multiple ids is sketched after this list).
D:\affectedfolder> fsutil file queryfilenamebyid D:\ 0x20000000000f3
  • Once you run the above command, you should get output similar to the line below; D:\affectedfolder\TEST.0 is the corrupted file in this case.
A random link name to this file is \\?\D:\affectedfolder\TEST.0
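
If chkdsk reports several file ids, you can resolve them all in one go. Below is a minimal PowerShell sketch; the ids in the list are placeholders based on the example above, so substitute the ids from your own chkdsk output.

# Resolve a list of file ids reported by chkdsk to full file paths on drive D:
$fileIds = @("0x20000000000f3", "0x20000000000f4")   # placeholder ids from chkdsk output
foreach ($id in $fileIds) {
    fsutil file queryfilenamebyid D:\ $id
}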

VM Version Upgrade | Windows Server 2016 & Windows 10

If you have recently upgraded your datacentre infrastructure to Windows Server 2016 (or your client device to Windows 10), you can benefit from the latest Hyper-V features on your virtual machines by upgrading their configuration version. Before you upgrade to the latest VM version, make sure:

  • Your Hyper-V hosts are running the latest version of Windows or Windows Server and you have upgraded the cluster functional level.
  • You are not going to move back the VMs to a Hyper-V host that is running a previous version of Windows or Windows Server.

The process is fairly simple and involves only four steps. First check the current VM configuration version.

  • Run Windows PowerShell as an administrator.
  • Run the Get-VM cmdlet as below and check the versions of your Hyper-V VMs. Alternatively, the configuration version can be obtained by selecting the virtual machine and looking at the Summary tab in Hyper-V Manager.

Get-VM * | Format-Table Name, Version

  • Shutdown the VM.
  • Select Action > Upgrade Configuration Version. If you don’t see this option for a VM, it’s already at the highest configuration version supported by that particular Hyper-V host.

If you prefer PowerShell you can run the below command to upgrade the configuration version.

Update-VMVersion <vmname> 
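
To upgrade several VMs at once you can pipe them straight from Get-VM. This is a minimal sketch that assumes every VM on the host should be upgraded and has already been shut down.

# Upgrade the configuration version of every VM on this host (VMs must be shut down).
Get-VM | Update-VMVersion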

Storage Spaces Direct | Deploying S2D in Azure

This post explores how to build a Storage Spaces Direct lab in Azure. Bear in mind that S2D in Azure is not yet a supported scenario for production workloads.

The following are the high-level steps that need to be followed in order to provision an S2D lab in Azure. For this lab, I’m using DS1 v2 VMs with Windows Server 2016 Datacenter edition for all the roles and two P20 512 GB Premium SSD disks in each storage node.

Create a VNET

In my Azure tenant I have created a VNET called s2d-vnet with the 10.0.0.0/24 address space and a single subnet, as below.

[Screenshot: 1-s2d-create-vnet]
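
If you prefer scripting this step, the sketch below uses the AzureRM PowerShell module; the resource group name and location are assumptions, so adjust them to your environment.

# Create a resource group, then the s2d-vnet VNET with a single 10.0.0.0/24 subnet.
New-AzureRmResourceGroup -Name "s2d-rg" -Location "West Europe"
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"
New-AzureRmVirtualNetwork -Name "s2d-vnet" -ResourceGroupName "s2d-rg" -Location "West Europe" `
    -AddressPrefix "10.0.0.0/24" -Subnet $subnet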

Create a Domain Controller

I have deployed a domain controller called jcb-dc in a new Windows Active Directory domain, jcb.com, with the DNS role installed. Once the DNS role was installed, I changed the DNS server IP address in s2d-vnet to my domain controller’s IP address. You may wonder what the second DNS IP address is. It is the default Azure DNS IP address, added as a redundant DNS server in case we lose connectivity to the domain controller. This provides Internet name resolution to the VMs in case the domain controller is no longer functional.

[Screenshot: 1-s2d-vnet-dns]
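
This DNS change can also be scripted. The following is a rough sketch using the AzureRM module; the domain controller IP (10.0.0.4) is an assumption, while 168.63.129.16 is Azure’s own DNS service, used here as the fallback.

# Point the VNET's DNS at the domain controller, keeping Azure DNS as a backup.
$vnet = Get-AzureRmVirtualNetwork -Name "s2d-vnet" -ResourceGroupName "s2d-rg"
$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "168.63.129.16")   # DC IP is assumed
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet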

Create the Cluster Nodes

Here I have deployed three Windows Server VMs, jcb-node01, jcb-node02 and jcb-node03, and joined them to the jcb.com domain. All three nodes are deployed in a single availability set.

Configure Failover Clustering

Now we have to configure the Failover Cluster. I’m installing the Failover Clustering feature on all three nodes using the PowerShell snippet below.

$nodes = ("jcb-node01", "jcb-node02", "jcb-node03")

icm $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}

[Screenshot: 3-s2d-install-fc]

Then I’m going to create the Failover Cluster by executing the snippet below on any of the three nodes. This will create a Failover Cluster called JCB-CLU.

$nodes = ("jcb-node01", "jcb-node02", "jcb-node03")

New-Cluster -Name JCB-CLU -Node $nodes -StaticAddress 10.0.0.10

[Screenshot: 4-s2d-create-fc]
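
Before enabling S2D it’s a good idea to validate the nodes. The command below follows the standard cluster validation categories for Storage Spaces Direct.

# Validate the nodes, including the Storage Spaces Direct category.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"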

Deploying S2D

When I execute the Enable-ClusterS2D cmdlet, it enables Storage Spaces Direct and starts creating a storage pool automatically, as below.
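
The command itself is a one-liner, run from any node of the cluster; Enable-ClusterS2D is an alias for the cmdlet below.

# Enable Storage Spaces Direct; this claims the eligible local drives and builds the pool.
Enable-ClusterStorageSpacesDirect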

[Screenshots: 5-s2d-enable-1, 5-s2d-enable-2, 12-s2d-csv]

You can see that the storage pool has been created.

[Screenshots: 7-s2d-pool-fcm, 8-s2d-pool]

Creating a Volume

Now we can create a volume in our new S2D setup.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName JCBVDisk01 -FileSystem CSVFS_REFS -Size 800GB

[Screenshot: 9-s2d-create-volume]

Implementing Scale-out File Server Role

Now we can proceed with the SOFS role installation, followed by adding the SOFS cluster role.

icm $nodes {Install-WindowsFeature FS-FileServer}

Add-ClusterScaleOutFileServerRole -Name jcb-sofs

[Screenshots: 10-s2d-sofs-install, 11-s2d-sofs-enable]

Finally I have created an SMB share called Janaka in the newly created CSV.
[Screenshot: 13-s2d-smb-share]
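
For reference, creating such a share with PowerShell would look roughly like the sketch below; the folder path under the CSV is an assumption.

# Create a folder on the CSV and share it over SMB (path is an assumed example).
New-Item -Path C:\ClusterStorage\Volume1\Shares\Janaka -ItemType Directory
New-SmbShare -Name Janaka -Path C:\ClusterStorage\Volume1\Shares\Janaka -FullAccess "jcb\Domain Admins"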

Automating S2D Deployment in Azure with ARM Templates

If you want to automate the entire deployment of the S2D lab, you can use the ARM template below by Keith Mayer, which creates a 2-node S2D cluster.

Create a Storage Spaces Direct (S2D) Scale-Out File Server (SOFS) Cluster with Windows Server 2016 on an existing VNET

This template requires you to have an active VNET and a domain controller deployed first, which you can automate using the ARM template below.

Create 2 new Windows VMs, create a new AD Forest, Domain and 2 DCs in an availability set

We will discuss how to use DISKSPD & VMFleet to perform load and stress testing in an S2D deployment in our next post.

Storage Spaces Direct | Architecture

In my last post I explained the basics of Storage Spaces Direct in Windows Server 2016. This post explores the internals of S2D and its architecture in a simpler context.

S2D Architecture & Design

(Image Courtesy) Microsoft TechNet

S2D is designed to provide nearly 600K (read) IOPS and 1 Tbps of throughput in its ultimate configuration with RDMA adapters and NVMe SSD drives. S2D is all about Software Defined Storage, so let’s dissect the pieces that make up the S2D paradigm one by one.

Physical Disks – You can deploy S2D on from 2 to 16 servers with locally attached SATA, SAS, or NVMe drives. Keep in mind that each server should have at least 2 SSDs, plus at least 4 additional drives which can be SAS or SATA HDDs. These commodity SATA and SAS devices should be attached via a host-bus adapter (HBA) and SAS expander.
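
If you want to check which local drives a node would contribute, a quick way is to look at the CanPool flag; this is just an illustrative query, not part of the S2D setup itself.

# List local drives that are eligible to be pooled, with their media and bus types.
Get-PhysicalDisk | Where-Object CanPool -eq $true |
    Select-Object FriendlyName, MediaType, BusType, Size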

Software Storage Bus – Think of this as the Fibre Channel or shared SAS cabling in your SAN solution. The Software Storage Bus spans the storage cluster to establish a software-defined storage fabric, allowing every server to see all the local drives in every host in the cluster.

Failover Cluster & Networking – For server communication, S2D leverages the native clustering feature in Windows Server and uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet. Microsoft recommends using 10 GbE or faster (e.g. Mellanox) network cards and switches with remote direct memory access (RDMA), either iWARP or RoCE.
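
A quick way to sanity-check the networking side is to confirm that your NICs actually expose RDMA for SMB Direct; this is a hedged example using the standard cmdlets.

# Check RDMA capability on the network adapters and SMB server interfaces.
Get-NetAdapterRdma
Get-SmbServerNetworkInterface | Select-Object InterfaceIndex, RdmaCapable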

Storage Pool & Storage Spaces – The storage pool consists of the drives that form S2D, with a recommendation of one pool per cluster; it is created automatically by discovering and adding all eligible drives. Storage Spaces are your software-defined RAID built on top of the storage pool. With S2D the data can tolerate up to two simultaneous drive or server failures, with chassis and rack fault tolerance as well.

Storage Bus Layer Cache – The duty of the Software Storage Bus is to dynamically bind the fastest drives present to slower drives (e.g. SSD to HDD), which provides server-side read/write caching to accelerate IO and boost throughput.

Resilient File System (ReFS) & Cluster Shared Volumes – ReFS is a file system built to enhance the server virtualization experience in Windows Server. With the Accelerated VHDX Operations feature in ReFS, the creation, expansion, and checkpoint merging of virtual disks improve significantly. Cluster Shared Volumes consolidate all the ReFS volumes into a single namespace which you can access from any server, so they become shared storage.

Scale-Out File Server (SOFS) – If your S2D deployment is a Converged solution, you are required to implement SOFS, which provides remote file access to clients (e.g. a Hyper-V compute cluster) using the SMB3 protocol. In a Hyper Converged S2D solution both storage and compute reside in the same cluster, so there is no need to introduce SOFS.

In my next post I’m going to explore how we can deploy S2D in Azure. This will be a Converged setup as Azure doesn’t allow nested virtualization. 

Storage Spaces Direct | Introduction

What is Storage Spaces Direct?

Storage Spaces Direct (S2D) is a new storage feature in Windows Server 2016 which allows you to leverage the locally attached disk drives of the servers in your datacentre to build highly available, highly scalable software-defined storage solutions. S2D helps you save your investment in expensive SAN or NAS solutions by allowing you to combine your existing NVMe, SSD or SAS drives to provide high-performing, simple storage for your datacentre workloads.

S2D Deployment Choices

There are two deployment options available with S2D.

Converged

In a Converged or disaggregated S2D architecture, Scale-Out File Servers (SOFS) built on top of S2D provide shared storage over SMB3 file shares. Like traditional NAS systems, this separates the storage layer from compute, and this option is ideal for large-scale enterprise deployments such as Hyper-V VMs hosted by a service provider.

(Image Courtesy) Microsoft TechNet

Hyper Converged

With Hyper Converged S2D deployments, both the compute and storage layers reside on the same servers. This further reduces hardware costs and is ideal for SMEs.

(Image Courtesy) Microsoft TechNet

S2D is the successor of Storage Spaces introduced in Windows Server 2012, and it is the underlying storage system for Microsoft Azure & Azure Stack. In my next post I will explain the S2D architecture and the key components of an S2D solution in more detail.

The following video explains the core concepts of S2D, its internals, and its use cases.

Introducing Technical Preview 4 | Windows Server 2016 & System Center 2016

With the dawn of the year 2016 almost upon us, Microsoft has released another build of its upcoming Windows Server & System Center 2016 suite of products. Technical Preview 4 contains many new advancements and fixes based on customer feedback, clearly making its way as the cloud OS for the next generation of computing.

Nano Server gets a new touch

Nano Server, a headless installation option like Server Core that is going to be one of the installation options for Windows Server 2016, has improved a lot since the last preview. In this release the IIS & DNS Server roles can be installed on Nano Server in addition to the existing Hyper-V & Scale-Out File Server features.

Introducing Hyper-V Containers

Providing an additional layer of isolation for Windows Containers, Hyper-V Containers can now be deployed as virtual sandboxes to host application workloads. This technology utilizes the nested virtualization capability introduced in Windows Server TP4. You can use both Docker & PowerShell to create, deploy and manage Windows Containers.

System Center 2016 Improvements

Another milestone is the System Center 2016 TP4 release, with some awesome features for private cloud management. You can now use the SCOM agent to monitor your Nano Servers in TP4. SCCM 2016 TP4 has introduced new functionality to improve the Windows 10 deployment experience via SCCM.

  • Mobile Device Management (MDM): enhanced feature parity with Intune standalone – Many of the MDM features that are supported via Intune standalone (cloud only) are also enabled for Configuration Manager integrated with Intune (hybrid) in this release.

  • Integration with Windows Update for Business – Now you can view the list of devices that are controlled by Windows Update for Business.

  • Certificate provisioning for Windows 10 devices managed via on-premises mobile device management

You can download Windows Server 2016 Technical Preview 4 & System Center 2016 Technical Preview 4 evaluation bits from here.

Docker Client for Windows is here

Last year Microsoft partnered with Docker Inc. to deliver the next generation of applications: containers. As a result of this journey towards heterogeneous apps, Microsoft released the GA version of the Docker CLI for Windows last week. As of today, you can use this tool to manage Linux containers hosted in Azure or in your own VMs straight from your Windows desktop. Microsoft plans to introduce its own container technologies as below.

Windows Server containers

The idea behind this container type is similar to Linux container technology. Containers are isolated, but they share the OS kernel and, where appropriate, bins/libraries. Simply put, we are talking about OS virtualization, where applications don’t need to be OS specific.

Hyper-V Containers

Using Microsoft Hyper-V technology, these containers are fully isolated from the OS itself by running on the hypervisor layer. This ensures that one container has no impact on its host or on any other containers in the same system. Even though these containers run inside a hypervisor, there is no restriction on container deployment: you can deploy containers that you targeted for Windows Server in Hyper-V Containers and vice versa, without any modification.

Nano Server

Microsoft’s Nano Server is the Windows equivalent of Red Hat’s Atomic Host, an OS designed to run containers in the cloud. This version of Windows has no GUI stack, no 32-bit support (WOW64) and no MSI support, and a number of default Server Core components have also been taken out. Local logon and Remote Desktop have been removed as well, and a Nano Server can be managed only via WMI and PowerShell. As per Microsoft, Nano Server has a 93% lower VHD size, 92% fewer critical bulletins and, most importantly, 80% fewer reboots.

Installing Docker CLI in Windows

There are two methods currently supported for installing Docker CLI for Windows.

Boot2Docker

Boot2Docker installs a tiny Linux VM running on VirtualBox (yes, you will have to disable the Hyper-V engine for this). It is a lightweight Linux distro called Tiny Core Linux, specifically designed to run Docker containers. You can download the Windows version from here.

Chocolatey

Chocolatey is a machine package manager built for Windows. Think of it as YUM or apt-get for Windows. Installation is rather simple; let’s see how to install the Docker CLI using this method. You can visit their website for more information on all the supported packages other than Docker.

  • Open a Command Prompt as administrator and execute the command below.

C:\>@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

  • Once it finishes, open a PowerShell session as an administrator and set the execution policy to at least Bypass. Then type the command below to proceed.

PS C:\>iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

  • Now it’s time to install the Docker CLI. Using either PowerShell or Command Prompt, execute the command below.

C:\>choco install docker

  • To upgrade the Docker client, type choco upgrade docker.
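
Once the client is installed you can point it at a remote Linux Docker host. This is a hedged example: the host name and port are placeholders, and the exact TLS flags depend on how the remote Docker daemon is secured.

# Connect to a remote Docker daemon and list running containers (host/port are placeholders).
docker -H tcp://mydockerhost.cloudapp.net:2376 --tls ps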

 

The curious case of .NET 3.5 in System Center Installation

As you know, .NET Framework 3.5 is a mandatory prerequisite for most of the System Center 2012 R2 products such as SCOM, SCCM, SCVMM, etc. Recently I faced a strange issue with the .NET 3.5 installation on Windows Server 2012 R2.

In Windows Server 2012 R2 you’ll have to manually point to the .NET 3.5 binaries on an ISO or CD. These are contained in the <Drive Letter>:\sources\sxs path on the ISO or CD. Otherwise you’ll need to supply an install.wim image to achieve the same.
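
For reference, enabling the feature from local media looks like the line below; the drive letter is an assumption, so point it at wherever your ISO or CD is mounted.

# Enable .NET 3.5 using the side-by-side store on the mounted installation media.
Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs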

The problem was with two security updates, KB2966827 & KB2966828, that had been installed as a result of fully patching the server. This TechNet article is all about the issue, but I still couldn’t resolve it.

I did the following and the installation still failed with the same error.

  1. Uninstalled both security updates.
  2. Restarted the server.

Then I realized that Internet access was going through a proxy server, so I connected the server directly to the Internet, and voilà, .NET 3.5 installed flawlessly.

Conclusion

When I examined C:\Windows\Logs\CBS\CBS.log I noticed that even though the sources are explicitly specified, the installer performs an integrity check against Microsoft Update sources. While behind the proxy it was unable to do that. To avoid this issue you can do one of two things.

  1. Enable .NET 3.5 before you patch the server.
  2. If you have already patched the server to the latest make sure that when you enable .NET the server is directly connected to the Internet.

OKAY HOW ABOUT MY AZURE VM? WHERE DO I FIND THE SXS FOLDER IN THERE? DO I NEED TO DOWNLOAD AN ISO?

The answer is NO. If you have properly set up your Azure VM, it has Internet access enabled out of the box. Just run the PowerShell cmdlet below in an elevated PowerShell window.

Add-WindowsFeature NET-Framework-Core

Even with the above security patches installed, this works flawlessly as the binaries are downloaded from the Internet itself.

Server Core & MinShell in Windows Server 2012 R2

Some of us are just used to one way of managing a server. Be it Linux or Windows, we still prefer the old command line to save time and effort (number of clicks). Microsoft has introduced the Server Core and MinShell options for those who prefer working with the command line while keeping GUIs at arm’s length.

Server Core

This option offers a command-line management console, much like MS-DOS. It is ideal when you want to reduce the resource consumption of your server. If you don’t use the full GUI, there are fewer patches that you need to worry about.

You can turn Server Core on or off at any time and switch back to the full GUI. The only thing you need to do is install or uninstall a feature called “User Interfaces and Infrastructure”, which provides the underlying GUI for Windows Server. You can do it via Server Manager or PowerShell; the choice is up to you. A PowerShell sketch is shown below.
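
This sketch assumes Windows Server 2012 R2 and shows the switch in both directions; going back to the full GUI needs access to the feature binaries (or installation media if they have been removed).

# Switch from the full GUI to Server Core (removes the graphical shell and management infrastructure).
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Switch back from Server Core to the full GUI.
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart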

A complete guide can be found here on howtogeek.com.

Keep in mind you’ll need a restart every time you switch between these modes.

MinShell (Part GUI)

This one is similar to Server Core, but you can launch your favorite MMC snap-ins from the command line. There are some notable limitations, though.

  • The common dialog box is functional (except networking)
  • Any UI with dependencies on items implemented as Shell Namespace Extensions will fail
    • Certain CPLs are namespace extensions, e.g. Networking
  • Internet Explorer is not available when the Server Graphical Shell is uninstalled
    • Links in the UI won’t work
    • Help isn’t available – calls to the HTML Help API will return NULL
  • Some file associations and protocol handlers are broken
    • http://
    • file://
    • *.chm
  • Some DLL files are not installed
    • Checks for dependencies or delay loads might fail; use DUMPBIN (Windows SDK) or Dependency Walker (http://www.dependencywalker.com, freeware) to verify
    • Test your applications on the Minimal Server Interface!

All you have to do is uninstall the Server Graphical Shell sub-feature of the User Interfaces and Infrastructure feature. Just use the PowerShell cmdlet below to do so.

Uninstall-WindowsFeature -Name Server-Gui-Shell -Restart

If you want, you can completely remove the binary installation files for the above features as well. If you do so, you’ll need access to installation media the next time you want to enable them. To completely remove a role or feature, use -Remove with the Uninstall-WindowsFeature cmdlet of Windows PowerShell, as in the example below.
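
A minimal example of removing the payload as well, using the same Server-Gui-Shell feature:

# Remove the feature and its binaries; re-adding it later needs media or Windows Update.
Uninstall-WindowsFeature Server-Gui-Shell -Remove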

It’s not rocket science but it is indeed worth having a look at.

Build your Test Lab | Client Hyper-V

If you are by any chance a developer reading my blog, you know how painful it is to go after the IT department begging for resources for your test lab. Guess what: forget the IT guys (don’t take it hard on them, they are doing their best; the only problem is $$$$), because with Windows 8 & Windows 8.1 you can build your own test lab using Client Hyper-V. I know this first-hand because I face this problem daily with our development team, and we recently had an awareness session on Client Hyper-V.

Client Hyper-V is the same as Hyper-V Server or Windows Server with the Hyper-V role installed (of course, with some limitations). All you have to do is enable hardware virtualization on your laptop/PC and enable the Hyper-V feature in the OS.

The following features from the server version of Hyper-V are missing from the client version.

  • RemoteFX ability to virtualize GPUs
  • Live migration of VMs
  • Hyper-V Replica
  • SR-IOV networking
  • Virtual Fibre Channel

Now lets take a peek on how to do it in a proper way.

Pre-requisites

  1. A PC/laptop with a minimum of 4 GB RAM running a 64-bit version of Windows 8/8.1 Professional or Enterprise (yes, this is a hard requirement)
  2. 64 bit processor with Second Level Address Translation (SLAT)
  3. Hardware virtualization support in the chipset. You can check this in your BIOS. Most modern motherboards have this feature and you can turn it on from the BIOS setup. It should be a checkbox saying something like “Enable Virtualization Technology (VT-x)”.

Installation

  1. Enable hardware virtualization from your BIOS setup (not sure how? Just Google it).
  2. Go to Control Panel > Programs and Features > Turn Windows Features on or off > select Hyper-V and click OK. You need to choose both Hyper-V Management Tools & Hyper-V Platform. If you choose the management tools alone, you can only remotely administer a Hyper-V host and cannot create any VMs on your PC. (A PowerShell alternative is sketched below.)
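
If you prefer PowerShell over the Control Panel route, the one-liner below enables the Hyper-V platform together with its management tools (run it from an elevated PowerShell window; a reboot will be required).

# Enable Client Hyper-V (platform + management tools) on Windows 8/8.1.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All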

AND THAT’S IT! NO MORE BUGGING.

You may still need to create a virtual switch (an External virtual switch is recommended in order to allow Internet access for your VMs) and associate your VMs’ vNICs with it; a sketch is shown below. Here is a fully featured article from the Canadian IT Pro Connection blog that explains how to do it yourself (a big thank you to them as well). Now take advantage of this cool feature on your Windows 8/8.1 PC/laptop and build your test lab in minutes.
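
A minimal sketch for the virtual switch; the adapter name “Ethernet” is an assumption, so check Get-NetAdapter for the name of your physical NIC.

# Create an External virtual switch bound to the physical NIC, shared with the host OS.
New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true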

One more thing.

More VMs = More Physical RAM + Disk Space

Obviously you need around 16 GB of RAM plus adequate disk space if you want to run a couple of VMs, depending on your memory allocation. Take a look at your CPU as well: the better the CPU, the better the performance of your VMs.