
Storage Spaces Direct | Deploying S2D in Azure

This post explores how to build a Storage Spaces Direct (S2D) lab in Azure. Bear in mind that S2D in Azure is not yet a supported scenario for production workloads.

The following are the high-level steps needed to provision an S2D lab in Azure. For this lab, I'm using DS1 v2 VMs running Windows Server 2016 Datacenter edition for all roles, and two P20 512 GB Premium SSD disks in each storage node.

Create a VNET

In my Azure tenant I have created a VNET called s2d-vnet with the 10.0.0.0/24 address space and a single subnet.
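If you prefer scripting over the portal, the same VNET can be created with the Az PowerShell module along these lines. This is a sketch only; the resource group name, location, and subnet name are my assumptions, not from the original lab.

```powershell
# Sketch: create a resource group and the s2d-vnet VNET (10.0.0.0/24)
# with a single subnet. Resource group name and location are assumptions.
New-AzResourceGroup -Name s2d-lab-rg -Location "East US"

$subnet = New-AzVirtualNetworkSubnetConfig -Name default -AddressPrefix "10.0.0.0/24"

New-AzVirtualNetwork -Name s2d-vnet -ResourceGroupName s2d-lab-rg `
    -Location "East US" -AddressPrefix "10.0.0.0/24" -Subnet $subnet
```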


Create a Domain Controller

I have deployed a domain controller called jcb-dc in a new Active Directory domain, jcb.com, with the DNS role installed. Once the DNS role was installed, I changed the DNS server IP address on s2d-vnet to my domain controller's IP address. You may wonder what the second DNS IP address is: it is the default Azure DNS IP address, added as a redundant DNS server in case we lose connectivity to the domain controller. This provides Internet name resolution to the VMs if the domain controller is no longer functional.
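The DNS change can also be made from PowerShell. This is a sketch assuming the Az module and a resource group called s2d-lab-rg; the DC's private IP (10.0.0.4) is an assumption for illustration, while 168.63.129.16 is Azure's well-known recursive resolver address.

```powershell
# Sketch: point the VNET at the DC for DNS, keeping Azure's recursive
# resolver (168.63.129.16) as the redundant second entry.
$vnet = Get-AzVirtualNetwork -Name s2d-vnet -ResourceGroupName s2d-lab-rg
$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "168.63.129.16")
$vnet | Set-AzVirtualNetwork
```

VMs pick up the new DNS servers at their next DHCP lease renewal, so a reboot (or `ipconfig /renew`) on each VM is the simplest way to apply the change.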


Create the Cluster Nodes

Here I have deployed 3 Windows Server VMs, jcb-node01, jcb-node02 and jcb-node03, and joined them to the jcb.com domain. All 3 nodes are deployed in a single availability set.

Configure Failover Clustering

Now we have to configure the Failover Cluster. I'm installing the Failover Clustering feature on all 3 nodes using the PowerShell snippet below.

$nodes = ("jcb-node01", "jcb-node02", "jcb-node03")

Invoke-Command $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}


Then I'm going to create the Failover Cluster by executing the snippet below on any of the three nodes. This will create a Failover Cluster called JCB-CLU.

$nodes = ("jcb-node01", "jcb-node02", "jcb-node03")

New-Cluster -Name JCB-CLU -Node $nodes -StaticAddress 10.0.0.10
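Before forming the cluster it is also worth running cluster validation with the S2D-specific tests. This pre-check is not shown in the original screenshots, but it is the standard way to confirm the nodes and disks are eligible:

```powershell
# Validate the nodes for S2D before creating the cluster; the report
# flags any drives or network settings that would block Enable-ClusterS2D.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
```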


Deploying S2D

When I execute the Enable-ClusterS2D cmdlet, it enables Storage Spaces Direct and automatically starts creating a storage pool.
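The exact switches below are my assumption rather than a quote from the lab: with only one media type in each node (Premium SSD data disks), there are no faster drives to act as a cache tier, so disabling the cache is a reasonable choice here. A plain Enable-ClusterS2D also works if you accept the prompts.

```powershell
# Enable S2D on the cluster (run on any node). With a single media type
# there is no cache tier, so it is disabled; -Confirm:$false suppresses
# the confirmation prompt.
Enable-ClusterS2D -CacheState Disabled -Confirm:$false
```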


You can see that the storage pool has been created.


Creating a Volume

Now we can create a volume in our new S2D setup.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName JCBVDisk01 -FileSystem CSVFS_REFS -Size 800GB


Implementing Scale-out File Server Role

Now we can proceed with the File Server role installation, followed by adding the SOFS cluster role.

icm $nodes {Install-WindowsFeature FS-FileServer}

Add-ClusterScaleOutFileServerRole -Name jcb-sofs


Finally, I have created an SMB share called Janaka on the newly created CSV.
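A share like this can also be created from PowerShell, roughly as follows. The folder path under the CSV and the access list are assumptions for illustration; adjust them to your own volume and groups.

```powershell
# Create a folder on the Cluster Shared Volume and share it over SMB.
# Volume path and permissions are illustrative assumptions.
New-Item -Path C:\ClusterStorage\Volume1\Janaka -ItemType Directory
New-SmbShare -Name Janaka -Path C:\ClusterStorage\Volume1\Janaka `
    -FullAccess "jcb\Domain Admins"
```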

Automating S2D Deployment in Azure with ARM Templates

If you want to automate the entire deployment of the S2D lab, you can use the ARM template below by Keith Mayer, which creates a 2-node S2D cluster.

Create a Storage Spaces Direct (S2D) Scale-Out File Server (SOFS) Cluster with Windows Server 2016 on an existing VNET

This template requires an existing VNET and a domain controller, which you can deploy first using the ARM template below.

Create 2 new Windows VMs, create a new AD Forest, Domain and 2 DCs in an availability set

We will discuss how to use DISKSPD and VM Fleet to perform load and stress testing on an S2D deployment in our next post.

Storage Spaces Direct | Architecture

In my last post I explained the basics of Storage Spaces Direct in Windows Server 2016. This post explores the internals of S2D and its architecture in a simpler context.

S2D Architecture & Design

(Image Courtesy) Microsoft TechNet

S2D is designed to provide nearly 600K read IOPS and 1 Tbps of throughput in its ultimate configuration with RDMA adapters and NVMe SSD drives. S2D is all about software-defined storage, so let's dissect the pieces that make up the S2D paradigm one by one.

Physical Disks – You can deploy S2D on from 2 to 16 servers with locally attached SATA, SAS, or NVMe drives. Keep in mind that each server should have at least 2 SSDs and at least 4 additional drives, which can be SAS or SATA HDDs. These commodity SATA and SAS devices should be connected via a host-bus adapter (HBA) and SAS expander.

Software Storage Bus – Think of this as the Fibre Channel or shared SAS cabling in your SAN solution. The Software Storage Bus spans the storage cluster to establish a software-defined storage fabric, so that every server can see all the local drives in every host in the cluster.

Failover Cluster & Networking – For server communication, S2D leverages the native clustering feature in Windows Server and uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet. Microsoft recommends 10 GbE or faster (e.g. Mellanox) network cards and switches with remote direct memory access (RDMA), either iWARP or RoCE.

Storage Pool & Storage Spaces – The storage pool consists of the drives that form the S2D fabric; it is created by automatically discovering and adding all eligible drives, and the recommendation is one pool per cluster. Storage Spaces are your software-defined RAID built on top of the storage pool. With S2D, data can tolerate up to two simultaneous drive or server failures, with chassis and rack fault tolerance as well.
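Once a pool exists, the standard storage cmdlets let you inspect it and confirm the resiliency of each space. A minimal sketch (the S2D* friendly-name pattern matches the pool that Enable-ClusterS2D creates):

```powershell
# List the auto-created S2D pool and its member drives, then check each
# virtual disk's resiliency type and how many failures it tolerates.
Get-StoragePool -FriendlyName S2D*
Get-StoragePool -FriendlyName S2D* | Get-PhysicalDisk
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
```

A three-way mirror reports a PhysicalDiskRedundancy of 2, matching the two-failure tolerance described above.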

Storage Bus Layer Cache – The duty of the Software Storage Bus is to dynamically bind the fastest drives present to slower drives (e.g. SSD to HDD), which provides server-side read/write caching to accelerate IO and boost throughput.

Resilient File System (ReFS) & Cluster Shared Volumes – ReFS is a file system built to enhance the server virtualization experience in Windows Server. With the Accelerated VHDX Operations feature, ReFS significantly improves the creation, expansion, and checkpoint merging of virtual disks. Cluster Shared Volumes consolidate all the ReFS volumes into a single namespace accessible from any server, so they become shared storage.

Scale-Out File Server (SOFS) – If your S2D deployment is a converged solution, you are required to implement SOFS, which provides remote file access over the SMB3 protocol to clients such as a Hyper-V compute cluster. In a hyper-converged S2D solution, both storage and compute reside in the same cluster, so there is no need to introduce SOFS.

In my next post I'm going to explore how we can deploy S2D in Azure. This will be a converged setup, as Azure doesn't allow nested virtualization.

Storage Spaces Direct | Introduction

What is Storage Spaces Direct?

Storage Spaces Direct (S2D) is a new storage feature in Windows Server 2016 which allows you to leverage the locally attached disk drives of the servers in your datacentre to build highly available, highly scalable software-defined storage solutions. S2D helps you avoid investing in expensive SAN or NAS solutions by allowing you to combine your existing NVMe, SSD or SAS drives to provide high-performing, simple storage for your datacentre workloads.

S2D Deployment Choices

There are two deployment options available with S2D.

Converged

In a converged (or disaggregated) S2D architecture, Scale-Out File Server(s) (SOFS) built on top of S2D provide shared storage over SMB3 file shares. Like traditional NAS systems, this separates the storage layer from compute, and the option is ideal for large-scale enterprise deployments such as Hyper-V VMs hosted by a service provider.

(Image Courtesy) Microsoft TechNet

Hyper Converged

With hyper-converged S2D deployments, both the compute and storage layers reside on the same servers, which further reduces hardware cost; this option is ideal for SMEs.

(Image Courtesy) Microsoft TechNet

S2D is the successor to Storage Spaces, introduced in Windows Server 2012, and it is the underlying storage system for Microsoft Azure and Azure Stack. In my next post I will explain the S2D architecture and the key components of an S2D solution in more detail.

The following video explains the core concepts of S2D, its internals, and use cases.