Today we start a new blog post series on Azure Site Recovery. In this series we are going to implement a DR solution for Hyper-V VMs in a VMM cloud. The series is a collection of 4 posts in which I’ll guide you through each step of the process. Note that this is just a proof-of-concept lab, so I’ve used minimal resources for the setup.
In this setup we will be replicating one Linux VM from a Hyper-V cluster environment to Azure for DR purposes. This VM serves a sample Hello World page from an Apache web server.
First Things First
This is the checklist that you want to have for this scenario.
Azure Account – An active Azure subscription. You can also use a free trial.
Storage Account – This should be geo-replicated and in the same region as the Site Recovery service.
VMM Server – Must be running System Center 2012 R2.
VMM Clouds – At least one VMM cloud with one or more VMM host groups, Hyper-V host servers or clusters in each host group, and one or more Generation 1 VMs. Please see here for the VM compatibility matrix.
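If you’re not sure whether your VMs are Generation 1, a quick way to check is the Hyper-V PowerShell module on a 2012 R2 host (the host and VM names below are just examples):

```powershell
# List all VMs on this host with their generation; ASR to Azure requires Generation 1
Get-VM | Select-Object Name, Generation

# Or check a single VM on a specific cluster node (names are placeholders)
Get-VM -ComputerName "HV-NODE1" -Name "linux-web01" | Select-Object Name, Generation
```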
Let’s take a look at an overview of the tasks that need to be performed.
Create an Azure Site Recovery Vault
Install Azure Site Recovery Provider & Generate a registration key
Configure Azure Storage Account
Install ASR agent on Hyper-V Hosts
Configure Cloud protection
Configure network mapping – map source VM networks to target Azure Virtual networks
Enable VM protection
Test run – run a test failover, or create a recovery plan and run a test failover for it.
Let’s discuss how to create the ASR vault and install the ASR Provider in the next post.
For those who were worried about not being able to back up their Server 2008 workloads to the cloud, I have some good news. Windows Server 2008 has been added to the list of supported operating systems for Azure Backup. Here is the support matrix for it. There is no support for 32-bit operating systems, but if you are still running a 32-bit server OS, it’s high time to migrate to a newer 64-bit version.
Technologies to be used
Windows Server 2008 (64-bit) – Files and folders
System Center Data Protection Manager with Azure Backup – Files and folders, Hyper-V virtual machines, MS SQL databases
Additionally, you’ll need to meet the prerequisites below to install the Azure Backup agent.
Recently I had to conduct a POC for a customer on the Azure Backup service. They provided a physical server with Windows Server 2008 R2 SP1 installed. When I tried to install the backup agent, it kept failing with a strange error and the installation aborted.
“Unable to execute the embedded application to complete the installation.”
Now the funny thing is that, having been a Microsoft techie for years, I forgot to check the .NET prerequisites and so on. In this case, though, I found that there are two updates that need to be in place prior to installing the backup agent on this OS.
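Before re-running the agent setup, it’s worth confirming which updates are already installed. A quick way to do that (using the stock Get-HotFix cmdlet, which is available on Server 2008 R2) is:

```powershell
# List installed updates so you can confirm the required hotfixes are present
Get-HotFix | Select-Object HotFixID, Description, InstalledOn | Sort-Object InstalledOn
```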
As sysadmins we spend a lot of time figuring out what went wrong, who made the last configuration change, and which setting caused unexpected behavior in our data center environments. Those days are long gone now. Recently the System Center Advisor team introduced the Change Tracking intelligence pack, which allows you to track all software and Windows service changes on your servers, be they physical or virtual.
Key features include,
Track software and Windows service changes for any time period
Logon account, start-up type, and status changes in the last 24 hours or any time period
Changes applied to any application-specific workload, e.g. Exchange, SQL, SharePoint, etc.
Patch tracking – Did patch A get applied on a particular set of servers?
Application inventory – On what servers is a particular application installed?
Configuration change management – Track computers with top configuration changes
System Center Advisor is still in its limited preview, so we can expect more features in the core product itself within the first quarter of 2015. With the introduction of the System Center preview, Advisor will become more important in monitoring workloads.
Microsoft wishes to add more change tracking scenarios to the list but values your input on this. You can answer a short survey and get involved in the development of this particular feature.
For those folks who are running their VMs in Microsoft Azure, I have some good news today. Running System Center Data Protection Manager in an Azure VM to protect Azure VM workloads is now fully supported. Of course, Microsoft guarantees that data (including VHDs) is replicated three times within a data center to ensure high availability, but what about a local compliance policy that requires management to see the actual backups? DPM in Azure comes in handy in scenarios like that.
Now let’s take a look at the scenarios supported in this model.
You can install DPM on any Azure VM that is A2 or higher. You can check Azure VM sizing details from here.
DPM in Azure can protect workloads across multiple cloud services as long as they are in the same virtual networks and subscription.
The size of the DPM storage pool depends on the size of the VM. Currently the maximum number of data disks supported by an Azure VM is 16, making the maximum size of the pool 16 TB.
Wait! Here is the real catch. Not all workloads supported by an on-premises DPM installation are available in Azure DPM. The table below lists only the scenarios supported in Azure.
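As a rough sketch, attaching a data disk for the storage pool with the (classic) Azure PowerShell module of that era could look like the following; the cloud service, VM, and disk label names are made up for illustration:

```powershell
# Attach a new 1023 GB data disk (the per-disk maximum at the time) to the DPM VM.
# Repeat with incrementing -LUN values up to the VM size's disk limit
# (16 disks x ~1 TB is how the pool reaches its 16 TB maximum).
Get-AzureVM -ServiceName "dpm-cloudsvc" -Name "dpm-vm01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "DPMPool-Disk0" -LUN 0 |
    Update-AzureVM
```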
Supported DPM versions: DPM 2012 R2, DPM 2012 with SP1
Protection and recovery:
Windows Server 2012 R2 (Datacenter and Standard) – Volumes, files, folders
Windows Server 2012 (Datacenter and Standard) – Volumes, files, folders
Windows Server 2008 R2 SP1 (Standard and Enterprise) – Volumes, files, folders
SQL Server 2012, 2008 R2, 2008 – SQL Server databases
SharePoint 2013, 2010 – Farm, database, front-end web server content
There are some best practices and guidelines that you may want to follow if you wish to deploy DPM in Azure IaaS. Let me explain them as simply as I can.
It is recommended to use a VM in the Standard tier rather than the Basic tier, since the maximum IOPS per disk is much higher in the Standard tier. Backup tasks are IOPS eaters, so you may want to take this as a rule of thumb.
A separate storage account for the VM running DPM is recommended. Again, this is to ensure that the VM can fully utilize the IOPS and size limits of a storage account, rather than sharing one with an existing storage account that has running VMs. If you use a shared storage account, there is a high chance that you will run into performance bottlenecks.
Offload data older than a day to Azure Backup and keep the latest data in the DPM data disks. By doing this you keep storage account consumption minimal while still getting a longer retention period. This way you won’t need to scale the storage for the Azure VM.
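To sketch the dedicated-storage-account recommendation with the classic Azure PowerShell cmdlets, creating the account might look like this (the account name, label, and location are placeholders):

```powershell
# Create a storage account used only by the DPM VM's disks,
# so it doesn't compete for IOPS with other running VMs
New-AzureStorageAccount -StorageAccountName "dpmbackupstore01" `
    -Label "DPM dedicated storage" -Location "Southeast Asia"
```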
SCOM is indeed a great product when it comes to monitoring, but sometimes we require external monitoring for products like web applications. If you are familiar with solutions like Site24x7, then you know how much you have to pay for those applications just to monitor whether your website is up and running. If you already have a SCOM deployment in your organization, you don’t need to go for such solutions. We have System Center Global Service Monitor, which provides the same services and many more.
GSM is basically a cloud service that utilizes Microsoft Azure points of presence all over the world to give you the end-user experience of your web applications. This means the monitoring reflects how your application looks to a user in a remote location. This is a huge advantage, as application owners get insights that account for external factors such as network latency, service outages, etc. The beauty of this service is that it focuses on the application itself, not the network health or connectivity. GSM is capable of web application availability monitoring (single URLs) and Visual Studio web test monitoring, which can be run from 15 external locations.
Let’s see how we can address availability monitoring for a URL with GSM.
You should have either a trial or paid GSM subscription integrated with your SCOM 2012 installation. For that you need to install and import the GSM management pack to your SCOM management server(s). You can refer here if you want to learn how to achieve that.
Configuring availability tests for URLs
Open Operations Manager Console. Navigate to Administration > Global Service Monitor, and then select Configure Web Application Availability Tests.
In the next screen, provide a name and a description for your testing scenario. Also, as usual, create a custom management pack to store this custom test. If you have already created one, you can select it as well.
Enter the URLs that you need to monitor in the next screen. You can also paste URLs from a CSV file that uses the format “Name, URL”. Make sure to include the correct protocol (http:// or https://) for each URL. Click Add to import names and URLs from another source.
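For example, such a CSV file could look like this (the names and URLs are, of course, placeholders):

```
Name, URL
Intranet Portal, http://intranet.contoso.com
Public Site, https://www.contoso.com
```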
In the Where to Monitor From dialog, you can select the locations from which you want the URLs to be monitored. This includes both external and internal locations. External locations include 15 locations in the following countries and regions: Australia (Sydney), Brazil (Sao Paulo), Europe (Amsterdam, London, Paris, Stockholm, Zurich), Russia (Moscow), Singapore, Taipei, United States (Chicago, Los Angeles, Miami, Newark, San Antonio).
As for internal locations, you can add servers in your environment from which the URLs are going to be monitored. Of course, these should be internet-facing if your URLs are externally hosted.
The next dialog box allows you to view your tests (internal or external) and validate the configuration of internal tests only. To do so, select the test you want to run and click Run Test. To change the default settings for the internal or external tests that you created, click Change Configuration.
Once you click Change Configuration, you can customize the test frequency, alerts, response times, etc. For the time being you can leave them at their defaults. Once you are done, click Apply, then OK.
If you run a test, you will see a test result window that verifies whether your test was a success, or points out any configuration errors if there are any. Go through the details, and once you confirm that the test finished properly, click Close.
The next screen provides a summary of your new URL monitor with GSM. Click Create to create the monitor. You can create a dashboard view for this monitor later on for an easier visual representation.
Now you are all set to monitor your websites with GSM integrated into SCOM. Let’s talk about SCOM dashboards in a later post. If you are still using a third-party solution for website monitoring, now is the time to move to GSM.
Two weeks ago, Microsoft released UR3 for System Center 2012 R2. This was a much-awaited update, as it fixed a number of issues related to System Center products. Update rollups are sets of cumulative fixes targeted at specific products that fix bugs and add new features and functionality. Let’s see what new features came with UR3.
Brand new Office 365 Management Pack
This is a life saver. Previously, admins had to log in to the Office 365 management portal to view tenant health and any ongoing issues or service outages. With the release of the Office 365 management pack, they can leverage SCOM to do the monitoring on behalf of the Office 365 tenant. Individual services like Exchange Online, Lync Online, and SharePoint Online can be thoroughly monitored, along with any active incidents on them. You can also view details of service health and service interruptions for the Office 365 services, along with important service messages from Microsoft. In simple words, your Office 365 dashboard can now reside in SCOM. We are going to discuss how to import and configure this MP in a later post.
Resource Visualization dashboards for VMM
We have two new dashboards that can display the resource utilization of your virtualization hosts and VMs. You can view the health of the list of hosts and their performance counters. They can also provide the overall health status of a VM (red, green, amber colour code). We will discuss how to install and configure the resource visualization pack for VMM in a later post.
Those who have used SCOM for monitoring know what it takes to filter the unnecessary monitoring information out of all those alerts, monitors, and logs. If your SCOM setup is a click-click-install kind of setup, then you probably have many unwanted monitoring scenarios configured by default. Sometimes your boss needs to look at the big picture of monitoring once in a while and present it to management at a meeting. How do you meet these challenges? You can fine-tune your SCOM installation, override unnecessary MPs, create reports, dashboards, etc. But guess what: you don’t have to be that thorough anymore if you are using Microsoft System Center Advisor integrated with your SCOM environment.
System Center Advisor is a free (in selected countries) online service which lets you analyze your SCOM data or individual Microsoft workloads to provide different insights into your data center health. Currently it is in limited public preview, so you can grab your Microsoft account or corporate account and sign up for the preview. The beauty of this product is that you can use it even without a System Center installation, by installing the Advisor agent on the individual servers you want to monitor and letting the online service worry about the rest. Below are some of the things Advisor is capable of.
System Center Advisor can collect, combine, correlate, and visualize all your machine data. That is, it can separate the monitoring history you need from the noise. It is also capable of searching through multiple systems in your data center to identify the root cause of any issue.
With Advisor you can be proactive, not reactive. Advisor enables you to see capacity shortages (such as storage) and allocation bottlenecks (such as CPU, memory, and network), and it even lets you plan capacity for future workloads.
You can avoid configuration issues with Advisor, as it proactively assesses the best practices that you define in your data center. This feature doesn’t require a SCOM installation and can be configured separately.
Advisor is capable of keeping track of your system patches. Be it on-premises or cloud, Advisor scans the data center for any non-compliant servers and reports back for remediation.
As you have probably guessed by now, Advisor is also capable of monitoring your environment for viruses and malware. It can report which of your servers have security threats and any actions needed to rectify them.
Now let’s see how to set up Advisor in your SCOM environment. There are some prerequisites for this, though.
You should be running either System Center 2012 Service Pack 1 or System Center 2012 R2
Microsoft .NET Framework 3.5 SP1 should be installed
Windows Server 2008 upwards
Supported technologies for analysis
Advisor analyzes the following workloads.
Windows Server 2012/2012 R2/2008/2008 R2 (Active Directory, Hyper-V host, general operating system)
Hyper-V Server 2012/2012 R2
SQL Server 2008 and later (SQL Engine)
Microsoft SharePoint Server 2010
Microsoft Exchange Server 2010
Microsoft Lync Server 2010
Microsoft Lync Server 2013 (new in October 2013)
System Center 2012 SP1 – Virtual Machine Manager
Setting up Advisor with SCOM
Sign in to the Advisor limited public preview here. You may have to sign up if you don’t already have an account (requires a Microsoft account or corporate account).
Open Operations Manager Console and navigate to Administration
Select System Center Advisor, and then select Advisor Connection.
Click Register to Advisor Service.
Sign in with your Microsoft or Organizational accounts in step 1.
Create a new Advisor account or choose an existing Advisor account associated with your Microsoft/Corporate account.
Save the changes.
Select Actions, click Add a Computer/Group.
Under Options, select Windows Server or All Instance Groups, and then add servers that you want data to be collected from.
Below are some useful references that you can use to learn more about System Center Advisor.
The world is heterogeneous. Although Windows Server is running in 75% of data centers, organizations use various Linux distributions to run enterprise applications. If you are into middleware, you know for a fact how critical these applications are and the need to monitor them very closely to avoid service interruption.
Let’s have a quick look at how to set up a monitoring solution for your Linux servers with SCOM. I’m going to describe the process at a high level.
Set up the environment for monitoring
Install and import the SCOM management pack for Linux/UNIX monitoring on the SCOM management server
Install the Operations Manager agent in the server to be monitored.
Define monitors, rules, and tasks as necessary
Step 1 | Set up the environment
Below is a list of prerequisites that you need to configure prior to setting up the monitoring.
SSH should be up and running on the destination server. It’s how SCOM talks to the agent.
Port 22 (for SSH) and port 1270 (for the SCOM agent) should be open on both sides.
OpenSSL should be up and running for certificate signing. This is vital if you have a couple of SCOM management servers and wish to use an SSH key for authentication.
Configure a resource pool and Run As accounts for the Linux servers. If you’re not sure about this, I’ve provided some great articles at the end, courtesy of TechNet.
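Before moving on, you can sanity-check the port prerequisites from the management server with Test-NetConnection, which is available from PowerShell 4.0 on Server 2012 R2 (the Linux hostname below is an example):

```powershell
# Verify that SSH (22) and the SCOM agent port (1270) are reachable on the Linux box
Test-NetConnection -ComputerName "linux-web01" -Port 22
Test-NetConnection -ComputerName "linux-web01" -Port 1270
```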
Step 2 | Installing the MP for Linux/Unix
As you already know, a management pack contains the parameters and functions that are required to monitor a specific application, be it an OS or just an application. So in our scenario we have to install and import the latest MP for Linux/UNIX monitoring into SCOM in order to set up basic health monitoring for Linux. You can download it here.
Step 3 | Agent installation and Discovery
Those who wish to automatically discover Linux resources can run the Discovery Wizard in SCOM. But remember, the Linux user assigned to the Run As account should have enough privileges (ideally that user is in the sudoers list) for the agent installation. If you want to do a manual agent installation, all you have to do is install the System Center SCX agent, which ships as an RPM file in the SCOM installation directory, on the Linux server using the rpm command. Once you run discovery against the server, it will automatically identify that a proper agent is already in place.
Step 4 | Define your own monitoring criteria
Now that you have installed the agent, after a short while you should notice basic system health data being reported to SCOM. You can start creating your own custom MP to store overrides for existing monitors, or create new ones. What I recommend is to leave the system for a couple of hours to settle in, and then start defining the monitoring subjects.
Below are some great resources that I came across when I set up monitoring for Linux. I hope you will find them useful.
Did you know that Microsoft Azure offers flexible management options beyond the Azure Portal? With Windows PowerShell you can perform most of the routine tasks that you do in your cloud tenants, from creating VMs to scaling your applications. This comes in pretty handy if you have scheduled or predefined cloud workloads that you need to run on Azure. Let’s take a look at how to install and configure Azure PowerShell for your cloud tenant.
An Azure Subscription
A computer running Windows 7 or Windows Server 2008 R2 or later.
Installing Azure PowerShell
Azure PowerShell comes as a redistributable installed through the Microsoft Web Platform Installer. You can download the setup from here. When prompted, select Azure PowerShell at the feature selection stage. You’ll find the new Azure PowerShell shortcut when you search or look in All Programs on your computer.
Connecting to your subscription
In order to manage a tenant, Azure PowerShell first needs to be connected to an active subscription. There are two methods for this: using a downloaded management certificate which includes the subscription information, or logging in to Microsoft Azure using the Microsoft account associated with that subscription. Note that Azure AD performs the credential authentication in the latter method.
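In sketch form, the two methods look like this with the classic Azure module (the file path and subscription name are placeholders):

```powershell
# Method 1: certificate-based – download and import a .publishsettings file
Get-AzurePublishSettingsFile          # opens a browser to download the file
Import-AzurePublishSettingsFile "C:\Downloads\MySubscription.publishsettings"

# Method 2: sign in with the Microsoft account tied to the subscription (Azure AD)
Add-AzureAccount

# Either way, pick the subscription to work against
Select-AzureSubscription -SubscriptionName "My Subscription"
```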
OK I’m all set. Now what can I do with Azure PowerShell?
Azure PowerShell provides a large number of cmdlets that can be used to provision, deploy, manage, and maintain Azure services. These include creating, modifying, and deleting VMs, VM networks, cloud services, storage, web sites, etc. Much like in Windows PowerShell, there is comprehensive help content for each and every one of these cmdlets.
I’ve included some articles that I found on how to use Azure PowerShell. You can also create PowerShell scripts and run them locally in your on-premises infrastructure to manage your cloud tenant. The power to automate is up to you.
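For instance, you can explore what’s available straight from the console:

```powershell
# List every cmdlet shipped in the classic Azure module
Get-Command -Module Azure

# Read the detailed help, with examples, for specific cmdlets
Get-Help Get-AzureVM -Detailed
Get-Help New-AzureVM -Examples
```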