SQL RP Installation Failure in Azure Stack TP1 | Fix It

My good friend CDM MVP Nirmal Thewarathanthri and I have been experimenting with Azure Stack for a while now. Although we tried more than 30 times to install the SQL Resource Provider in our Azure Stack lab, it was never quite successful. The biggest problem is cleaning up the Azure Stack environment after each failure, as sometimes we had to do a fresh install from scratch.

The Epic Failure

These were the symptoms of the issue:

  • The SQL VM installs just fine.
  • The deployment always fails at the DSC configuration step inside the SQL VM.
  • The URL of the ARM template for the SQL VM no longer seems to be valid.

Here is the full description of the error that we encountered.

VERBOSE: 8:54:27 AM – Resource Microsoft.Compute/virtualMachines/extensions 'sqlrp/InstallSqlServer' provisioning
status is running
New-AzureRmResourceGroupDeployment : 10:29:12 AM – Resource Microsoft.Compute/virtualMachines/extensions
'sqlrp/InstallSqlServer' failed with message 'The resource operation completed with terminal provisioning state
'Failed'.'
At D:\SQLRP\AzureStack.SqlRP.Deployment.5.11.61.0\Content\Deployment\SqlRPTemplateDeployment.ps1:207 char:5
+     New-AzureRmResourceGroupDeployment -Name "newSqlRPTemplateDeploym …
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureResourceGroupDeploymentCommand

New-AzureRmResourceGroupDeployment : 10:29:12 AM – An internal execution error occurred.
At D:\SQLRP\AzureStack.SqlRP.Deployment.5.11.61.0\Content\Deployment\SqlRPTemplateDeployment.ps1:207 char:5
+     New-AzureRmResourceGroupDeployment -Name "newSqlRPTemplateDeploym …
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureResourceGroupDeploymentCommand

The issue in our case was an unstable Internet connection. The ARM template for the SQL RP downloads the SQL Server 2014 ISO first, and a timeout during that download stopped the entire process. The VM was created, but SQL Server 2014 was never installed in it.

To solve this issue we followed the procedure below on a fresh installation of MAS TP1. You can try this on an existing installation with a failed SQL RP deployment, but there is no guarantee that you will be able to clean up the existing resource group. If you have executed the SQL RP installation only once, cleanup may work; if you have tried it multiple times, there is a high chance that cleaning up the existing resource group(s) will fail.

  1. Download the SQL Server image from here.
  2. Open the default PIR image, available on the MAS TP1 host at \\sofs\Share\CRP\PlatformImage\WindowsServer2012R2DatacenterEval\WindowsServer2012R2DatacenterEval.vhd
  3. Mount the VHD (simply double-click it) and create a new folder called SQL2014 on the PIR image under the C:\ drive.
  4. Copy all files from the downloaded ISO into the SQL2014 folder (see the sketch after this list).
  5. Start the deployment script. If you are trying this on an existing failed deployment, re-run the deployment after cleaning up the existing resource group(s) for the SQL RP.
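
If you prefer to script steps 2–4, here is a minimal sketch using the built-in Storage cmdlets. The ISO path is a placeholder for wherever you saved the downloaded SQL Server image.

$vhdPath = '\\sofs\Share\CRP\PlatformImage\WindowsServer2012R2DatacenterEval\WindowsServer2012R2DatacenterEval.vhd'
$isoPath = 'D:\SQLServer2014.iso'   # placeholder: path to the downloaded SQL Server 2014 ISO

# Mount the PIR VHD and find the drive letter Windows assigned to it
Mount-DiskImage -ImagePath $vhdPath
$vhdDrive = (Get-DiskImage -ImagePath $vhdPath | Get-Disk | Get-Partition | Get-Volume |
    Where-Object DriveLetter).DriveLetter

# Mount the ISO and copy its contents into a SQL2014 folder at the root of the VHD
# (this becomes C:\SQL2014 inside the deployed VM)
Mount-DiskImage -ImagePath $isoPath
$isoDrive = (Get-DiskImage -ImagePath $isoPath | Get-Volume).DriveLetter
New-Item -Path "$($vhdDrive):\SQL2014" -ItemType Directory -Force | Out-Null
Copy-Item -Path "$($isoDrive):\*" -Destination "$($vhdDrive):\SQL2014" -Recurse

# Detach both images when done
Dismount-DiskImage -ImagePath $isoPath
Dismount-DiskImage -ImagePath $vhdPath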

Once all the deployment tasks are completed you can see a successfully deployed SQL Resource Provider in the portal, as shown below.

SQL RP Success (1)

You can refer to the Microsoft guide on how to add a SQL resource provider to a MAS TP1 deployment here for more information.

Azure Resource Policies | Part 1

Any data center, whether on-premises or cloud, should adhere to certain organizational compliance policies. If your organization is using Microsoft Azure and you want your resources to adhere to the conventions and standards that govern your organization's data center policy, how would you do that? For example, you may want to restrict person A from creating VMs larger than Standard A2. The answer is to leverage custom resource policies and assign them at the desired level, be it a subscription, a resource group or an individual resource.

Is it the same as RBAC?

No, it isn't. Role Based Access Control in Azure is about the actions a user or a group can perform, while policies are about the actions that can be applied at the resource level. As an example, RBAC sets different access levels at different scopes, while policies can control what types of resources can be provisioned, or in which locations they can be provisioned, in a resource group or subscription. The two work together: in order to use a policy, a user must first be authenticated through RBAC.

Why do we need custom policies?

Imagine that you need to calculate chargeback for your Azure resources by team or department. Certain departments need a consumption limit imposed, and you need to charge the proper business unit at the end. Your organization may also want to restrict which resources are provisioned in Azure, or where. For example, you may want to impose a policy that allows users to create Standard A2 VMs only in the West Europe region. Another good example is restricting the creation of load balancers in Azure to the network team only.

Policy Structure

Like all ARM artifacts, policies are written in JSON and contain a control structure. You specify a condition and what to perform when that condition is met, much like an IF-THEN statement. There are two key components in a custom Azure resource policy.

Condition/logical operators: a set of conditions that can be combined through logical operators.

Effect: the action that will be performed when the condition is satisfied, either deny, append or audit. An audit effect triggers a warning event in the service log. As an example, your policy can trigger an audit if someone creates a VM larger than Standard A2.

  • Deny generates an event in the audit log and fails the request
  • Audit generates an event in audit log but does not fail the request
  • Append adds the defined set of fields to the request

Following is the simple syntax for creating an Azure Resource policy.

{
    "if" : {
        <condition> | <logical operator>
    },
    "then" : {
        "effect" : "deny | audit | append"
    }
}
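
For instance, a simple policy that denies creating resources outside North Europe and West Europe could look like the following. This is a minimal sketch based on the documented location field; adapt the region list to your own needs.

{
    "if" : {
        "not" : {
            "field" : "location",
            "in" : ["northeurope", "westeurope"]
        }
    },
    "then" : {
        "effect" : "deny"
    }
}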

Evaluating policies

A policy is evaluated at resource creation time, or when a template deployment happens, via an HTTP PUT request. If you are deploying a template, the policy is evaluated during the creation of each resource in the template.
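
To make a policy effective you define it and then assign it at a scope. Here is a minimal sketch using the AzureRM cmdlets of that era; the definition name, assignment name and JSON file path are illustrative.

# Create the policy definition from the JSON file shown above (illustrative path)
$definition = New-AzureRmPolicyDefinition -Name 'regionLockPolicy' `
    -Description 'Allow resources only in North and West Europe' `
    -Policy 'C:\Policies\regionLock.json'

# Assign it at subscription scope so it is evaluated on every PUT request
# (the Subscription property name can vary between module versions)
$subscriptionId = (Get-AzureRmContext).Subscription.SubscriptionId
New-AzureRmPolicyAssignment -Name 'regionLockAssignment' `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/$subscriptionId"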

In the next post let’s discuss some practical use cases of using Azure resource policies to regulate your resources.

Backup ARM VMs in Azure | Tips & tricks

As you already know, the Microsoft Azure fabric is now in version 2, sometimes referred to as the Azure Resource Manager (ARM) deployment model. Most of the services from the old Azure Service Management model are now available in the new model (the new portal), and today we are going to see how we can back up VMs deployed using the ARM deployment model with an Azure Recovery Services vault.

Note that you may notice another two services in your Azure subscription, called Backup vaults and Site Recovery vaults, which are redundant and have no use. (They are just placeholders which I assume will be removed soon.)

Backup ARM VMs (1)

Essentially, the following scenarios are supported in a new Recovery Services vault. If you are using premium storage accounts for your VMs, keep in mind that their support is only in public preview and not generally available as of yet.

  • Azure Resource Manager VMs
  • Classic VMs

The process can be done in a few easy steps.

Creating a Recovery Services Vault

A Recovery Services vault holds all the backups and recovery points of the VMs being protected, along with the backup policy applied to that vault. One important thing to keep in mind is that Recovery Services vaults are region-specific: if you need to back up a VM, the target vault should reside in the same region as the VM.

In the Hub menu, click Browse and then search for Recovery Services. I've already added it as a favorite by clicking the star right next to it. Then select Recovery Services vault and click Add.

Backup-ARM-VMs-2.png

Provide a name, select the target Azure subscription, create a new resource group or select an existing one and finally select the region for your Recovery Services vault.

Backup-ARM-VMs-3.png

Next you can select the storage replication option. The default is geo-redundant storage; if you want a cheaper (but less durable) option, you can opt for locally redundant storage. Click the All Settings option in your vault dashboard to get started.

Backup-ARM-VMs-4.png
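
If you prefer PowerShell over the portal, a minimal sketch with the AzureRM.RecoveryServices cmdlets looks like this. The vault and resource group names are illustrative, and the resource group is assumed to exist already in the same region as the VMs.

# Create the vault in the region where the VMs to protect live
$vault = New-AzureRmRecoveryServicesVault -Name 'demoVault' `
    -ResourceGroupName 'backup-rg' -Location 'West Europe'

# Optionally switch from the geo-redundant default to cheaper local redundancy
Set-AzureRmRecoveryServicesBackupProperties -Vault $vault `
    -BackupStorageRedundancy LocallyRedundant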

Select a Backup Target

You need to discover your Azure ARM VMs first, before they can be added to the Recovery Services vault. Discovery identifies the VMs that can be protected by the vault.

Backup-ARM-VMs-5.png

Define a Backup Policy

A backup policy defines how frequently the VMs are protected and when the recovery points are created, along with the retention range for those recovery points. You can edit the default policy to fit your needs or create a new policy here. You can choose between a daily or weekly schedule to back up your VMs.

Backup-ARM-VMs-6.png

Next select the desired VMs that you wish to back up and finally click Enable Backup.

Backup-ARM-VMs-7.png

Backup-ARM-VMs-8.png
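
The same enablement can be scripted. Here is a hedged sketch that registers a VM with the vault using the default policy, continuing from the vault created earlier; the VM and resource group names are illustrative.

# Point subsequent backup cmdlets at our vault
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Protect a VM with the built-in default policy
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name 'demoVM' -ResourceGroupName 'vm-rg'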

Start the Initial Backup

By default the first scheduled backup is the initial backup, but it is also possible to force the first backup manually. In the vault dashboard click Azure Virtual Machines, right-click the desired VM and select Backup Now.

Backup-ARM-VMs-9.png
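
You can also trigger it from PowerShell, continuing from the vault context set in the earlier sketch (the VM name is illustrative):

# Find the registered VM container and its backup item, then run an ad-hoc backup
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM `
    -Status Registered -FriendlyName 'demoVM'
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Backup-AzureRmRecoveryServicesBackupItem -Item $item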

You can see the backup job progress by clicking All Settings > Jobs > Backup Jobs in the vault dashboard, as shown below.

Backup-ARM-VMs-10.png

When you expand the backup job further, you can see the status of each task running underneath.

Backup-ARM-VMs-11.png

Azure Cool Blob Storage | What, Why & How?

What is Azure Cool Blob Storage?

A few days back the Microsoft Azure storage team added a new variant of their storage offering called Cool Blob storage. Like Amazon S3, Azure Blob storage is a low-cost object storage offering that enables you to store backups, media content such as images and videos, scientific data, and compliance and archival data.

Why Cool Blob Storage?

Cool Blob storage is ideal for infrequently accessed object data, that is, data accessed less than once a month. Based on the frequency of access, you can now select between Hot and Cool access tiers for a storage account. Cool Blob storage provides the following benefits for you as an end user.

  • Cost effective: Data stored in the cool access tier comes at a lower price point, as low as $0.01 per GB in some regions, whereas data stored in the hot tier starts at $0.024 per GB in some regions.
  • Compatibility: This is 100% API compatible with existing Azure Blob storage, so you can use the new type of storage account right away in your existing applications.
  • Performance: Both the Hot and Cool tiers have the same performance in terms of latency and throughput.
  • Availability: The data write SLA for the Hot access tier is 99.99%, where it is 99% for the Cool tier. Similarly, the read SLA is 99.99% for the Hot tier and 99.9% for the Cool tier when leveraging Read-Access Geo-Redundant Storage (RA-GRS), a storage replication option in Azure.
  • Durability: Unlike Amazon S3, which guarantees eleven 9s (99.999999999%) of durability, Microsoft guarantees that your data will never be lost. The AWS S3 SLA really interprets as "If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities." Both the Hot and Cool storage tiers in Azure provide the same high durability that Azure currently offers, which is 0% data loss.
  • Scalability and Security: The same scalability and security options in Azure Storage are provided in the new Blob storage account tiers as well.

How to deploy?

Let's explore how you can create a new Blob storage account with Hot or Cool access tiers in the Azure portal. Notice that this is only possible with ARM storage accounts, not with classic storage. Also, as of now this feature is only supported in storage accounts with standard performance.

Blob Storage 1

Changing the access tier is easy and takes only a click of a button.

Blob Storage 2
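
The same can be scripted. A minimal sketch, assuming the AzureRM.Storage cmdlets from around the Cool Blob launch; parameter names may differ slightly between module versions, and the account and resource group names are illustrative.

# Create a Blob storage account with the Cool access tier
New-AzureRmStorageAccount -ResourceGroupName 'storage-rg' `
    -Name 'coolarchive01' `
    -Location 'West Europe' `
    -Kind BlobStorage `
    -AccessTier Cool `
    -SkuName Standard_LRS

# The tier is set per account, not per blob; switching later is one call
Set-AzureRmStorageAccount -ResourceGroupName 'storage-rg' `
    -Name 'coolarchive01' `
    -AccessTier Hot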

FAQs

Can I store my VMs in cool/hot storage? No. Azure IaaS VM disks require page blobs, and Blob storage accounts offer only block blobs.
Can I convert my existing storage account to a Blob storage account? No. You need to create a new storage account or migrate data from an existing storage account to a new account.
Is this available in the classic model? No. This only supports ARM-based deployments.
Can I have both hot/cool tiers in a single storage account? Not at this time. The access tier attribute is set at an account level and applies to all objects in that account.
Will I be charged for changing the access tier of my blob storage account? Changing the access tier at the account level applies to all objects stored in the account. Changing from hot to cool incurs no charge, but changing from cool to hot incurs a per-GB cost for reading all the data in the storage account.

Another Year as a Microsoft MVP | What’s Next

MVP Award

Last Friday (yes, I know it was April Fools' Day) Microsoft informed me that I've been awarded the Microsoft Most Valuable Professional Award again in 2016. In 2015 I was awarded as a System Center Cloud and Datacenter Management MVP, and this year as a Cloud and Datacenter Management MVP, which is the new award category for all things cloud and data center related. Today I'm going to share my journey to the cloud and how I became part of this amazing community.

My Story

Twenty-one years back (I was in kindergarten then, so you can probably guess my actual age now), when I first laid my hands on a computer, I knew that computers were the right match for me. Back then computers were a luxury, and luckily, in the bank where my mom and dad worked, computers were replacing the old electronic typewriters. My mom was one of the first employees who had the chance to be trained in Microsoft Office and Lotus Notes (she is still the Office specialist in our home). After school I used to visit her at the bank, and I was amazed to work on MS-DOS and play with the computer whenever I was allowed.

I grew up with two amazing older brothers who have been my first critics, partners in crime and, most importantly, the first leaders in my life. My eldest brother Udeni sold his motorbike in 2001 to purchase a Pentium III computer with a 40 GB hard disk and 64 MB of RAM (cool specs at that time). My elder brother Nalin and I used to crash that computer when Udeni wasn't around. Udeni always gave me the chance to run new experiments on that PC (being the youngest means you are the guinea pig). Most of those experiments were utter failures, and we even had days- or months-long fights.

In the year 2000, Nalin and I got into a computer programme for kids when both of us won scholarships. This is where my first steps towards formal IT education started. Windows 98, ME, XP: all that excitement led me to choose my first MCP course, in Visual Basic.NET, in 2005.

I never fit into the education system in my country. In fact, when all my friends passed their G.C.E. A/Ls (high school finals) with top results and got selected for university, here I was with three simple passes and no option but to take the exam again. I was in the college brass band for 7 years; music was my second passion after IT. I had no plan when I met my fiancée Aloka 9 years back, while we were still high school students. Her friends often complained that I might end up a complete disaster, but she always believed that I could make a difference and encouraged me to look for other means of higher education.

By that time Nalin had started his career as a banker like my parents, and he had a vision to support my education in any way he could. He told my folks, "Whatever the little one wants to do, let's support him. I'm sure he is going to nail it," and immediately after high school I started following a telecom diploma along with the British Computer Society professional qualifications. I also enrolled into a government diploma in Electrical Power Engineering while pursuing my passion in IT, and somewhere along the way I felt it was not my thing. So I gave that up 🙂

How it all started

When I graduated (not with flying colours) it took me 6 months to land my first job in IT. My first employer was a long-standing ISP and managed services provider, and I had a bunch of awesome colleagues who taught me what it takes to be a professional. In fact, the first task given to me by my immediate supervisor Pratheesh was to assemble at least three working PCs out of some remnant PC parts, which I thought was nonsense at first. Later, when we became close friends (he and the gang are still my best friends), he told me that at first he wondered how I got this job, being a person who was always asking so many questions and talking too much. Nevertheless, within the short time I spent with them I learned a lot, and it was a turning point in my career.

I've done various IT jobs, and I've frequently been asked, "Why do you change your place of work so often?" I'm after knowledge, and when I saturate at one place I tend to explore what more I can learn from the outside, and move on. It's not about the benefits; I need to be laser focused on what I love most, sharing and improving my knowledge. Only a few understood the logic behind what I'm doing. I'm not an average IT professional who sits back and relaxes; I always try to innovate, and that always puts me on a challenging ride.

Madura Sonnadara, one of the best superiors a person can have, was my boss back in 2013, and he always wanted me to learn something new and share it with my peers. This led me to get involved with Microsoft communities, and my friend MVP Gogula showed me the importance of contributing back. So I started a blog and started engaging in TechNet forums. I focused more on the cloud from 2012 and took many Microsoft and Red Hat examinations to keep myself challenged. I've also had great support from my ex-boss Raymond Chou (who is also a CDM MVP, by the way) in polishing my skills to a whole new level.

On the 1st of April 2015, when I didn't notice any notification from the MVP programme, I felt discouraged and sad. MVP Hasitha, my mentor and senior while I was at Infront Consulting, was more upset than I was and told me not to lose the passion. I decided to contribute more and more, whether I was going to be an MVP or not. The e-mail was actually there in the Junk folder, and here I was, the very first SCCDM MVP in Sri Lanka. I got the chance to work with so many MVPs worldwide, presented at many conferences and engaged with more communities during the last MVP programme year. It takes a lot of energy to keep up with these constantly changing technologies, but I still believe that keeping myself updated is the best way to feel challenged and have meaning in my life.

What’s ahead in 2016 for me?

This year I'm going to invest my time significantly in a couple of key technologies. Microsoft Azure Stack, Red Hat and OSS on Azure, Enterprise Mobility, and Azure Automation and PowerShell DSC are some of the areas I'm currently focused on. I've always been an OSS advocate, so I couldn't be happier when Microsoft embraced OSS with its top-selling products such as Azure and Windows 10. Basically the plan is to learn more, engage more and contribute more, as if every day were the last day on planet Earth. I'm a devoted Buddhist, and as a Buddhist I believe that what we do today will define the future. So today will always be a good day to start.

Where you can find me in 2016?

I will be presenting at some of the top cloud and data center conferences around the world in 2016. I'll be at SCU Europe in Berlin in August and then at the PowerShell Conference Asia in Singapore. Being an active community geek, I have made a lot of friends all over the world. I meet most of them in the various locations I travel to, and it always makes me realize what a small world we live in. Running our local user group, the Sri Lanka IT PRO Forum, has also kept me busy, and if you are around Colombo by any chance, you can come and join us at any time. I'll be presenting at, organizing and helping quite a number of local and international user groups and events, as long as they have something to do with a cloud (or at least a silver lining in it), throughout 2016 and beyond.

My message to you

Take a risk whenever you can. If you don't take the necessary risks, you may well end up in the same place for the rest of your life. Start teaching others, and you will learn a lot just by doing that. The enthusiasm to contribute back to the community and learn from it was the key that unlocked this door for me. Raise your voice, share your opinion on technology and get engaged. Ask questions, get them right and help someone understand what you have learned. The moment you stop learning will be the moment you stop breathing.

Being an MVP is a journey not a destination.

This has been a rather long post, but I wanted to share something that has inspired me as a little kid, a teenager and an adult. The following is from Nikolai Ostrovsky's famous novel How the Steel Was Tempered.

Man's dearest possession is life. It is given to him but once, and he must live it so as to feel no torturing regrets for wasted years, never know the burning shame of a mean and petty past; so live that, dying, he might say: all my life, all my strength were given to the finest cause in all the world – the fight for the Liberation of Mankind.

Knowledge is power, so share it in any way you can. What could possibly be a better way to fight for humanity's liberation than empowering society with knowledge? Start sharing today and you'll achieve more than you ever imagined.

Exporting your Azure Resource Groups to ARM Templates | Part 2

In my previous post I showed you how to export Azure resource groups into ARM templates using the Azure portal. For those of us who are not GUI fans (myself included), Azure PowerShell and Azure CLI provide cmdlets/commands to leverage the export feature for cloning, redeploying and automating Azure resource group deployments.

Azure PowerShell

With the latest Azure PowerShell you can execute the below cmdlet to export a running resource group to an ARM template.

Export-AzureRmResourceGroup -ResourceGroupName <RG name> -Path <template path>

To export resource groups from a previous deployment you may use the below cmdlet syntax.

Save-AzureRmResourceGroupDeploymentTemplate -DeploymentName <Deployment Name> -ResourceGroupName <RG Name> -Path <template path>
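
As a quick illustrative example (the resource group, deployment name and paths here are hypothetical), exporting a running group and redeploying it into a new group looks like this:

# Export the running resource group to a local template file
Export-AzureRmResourceGroup -ResourceGroupName 'prod-rg' -Path 'C:\Templates\prod-rg.json'

# Create a new resource group and redeploy the exported template into it
New-AzureRmResourceGroup -Name 'prod-rg-clone' -Location 'West Europe'
New-AzureRmResourceGroupDeployment -Name 'cloneDeployment' `
    -ResourceGroupName 'prod-rg-clone' `
    -TemplateFile 'C:\Templates\prod-rg.json'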

Azure CLI

You can use the following syntax to export a running resource group to an ARM template.

azure group export <name> [template path]

Use the below command syntax to export an ARM template from a previously deployed resource group.

azure group deployment template download [options] <resource-group> <name> [directory]

Exporting your Azure Resource Groups to ARM Templates | Part 1

Have you ever wanted to clone a resource group deployment in Azure to another subscription, or perhaps redeploy it again without manual interaction with the GUI? Now you can export your resource groups as ARM templates and redeploy them wherever you want without further barriers. Let's explore how to use this feature in Azure.

Export from an existing Resource Group Deployment

When you select a resource group you can see the Export Template option in Settings.

Export RG to ARM (1)

Export from a previous deployment

In your resource group, select the particular deployment and you will have the option to export it, with the parameters submitted for that specific instance of the deployment.

Export RG to ARM (5)

Saving and Redeploying to a new resource group

Alternatively, you have the option to save the template, and it will be stored under Browse > Templates in the Azure portal.

Export RG to ARM (6)

Export RG to ARM (4)

Selecting the Deploy button will allow you to start a new deployment.

Export RG to ARM (3)

Keep in mind that currently not all resource types are supported by the export feature. For example, you may encounter failures when you try to export resources such as Web Apps, Service Bus, Stream Analytics etc. Following is one such error, which occurred when we tried to export a resource group with Service Bus resources.

The schema of resource type 'Microsoft.ServiceBus/namespaces' is not available. Resources of this type will not be exported to the template. (Code: ResourceTypeSchemaNotFound)

This has been reported to Microsoft, and this post will be updated once Microsoft provides a list of supported resource types or adds more supported resource types to this feature. Right now I can confirm that IaaS resources are fully supported.

In the next post, let's see how we can leverage Azure PowerShell or Azure CLI to export resource groups into ARM templates.

Savision Whitepaper | Monitoring IT Services Proactively

A lot of companies are used to waiting for a disaster to happen in order to react. Only when there is a service outage within their IT department do they take action, instead of being more proactive and in control of their IT. The problem is that they don't know where or how to get started with a proactive approach to monitoring, even more so when they have a lot of infrastructure that needs to be monitored.

As a first step, IT needs to understand the business: all good designs come from understanding your IT services' data dependencies and knowing how they relate to one another. Then they need to find out the best tools available today.

Microsoft System Center Operations Manager (SCOM) is a great platform for monitoring components, and one that a lot of people in the industry are already familiar with. There is a lot of useful information within SCOM that can be used by the different personas in the company; however, the presentation layer and the way it is organized within SCOM is not the way those other personas look at IT. SCOM is still pretty technical and is all about components.

Looking at the personas in the IT service delivery organization, you will see that engineers can definitely work with SCOM. However, it usually takes them a while to figure out how to get to the root cause of a service outage and what the business impact of the outage will be.

Savision’s new whitepaper: “Business Service Management with System Center”, shows how to stay in control of your business services and make the most out of SCOM. Click here to download the whitepaper. The whitepaper is written by three experts: Microsoft MVP Robert Hedblom, Savision’s Co-Founder & VP of Product Management Dennis Rietvink, and Approved Consulting’s CEO & Solution Architect Jonas Lenntun.

Removing SCOM MPs like a Boss

It's not every day that you want to remove certain management packs from your SCOM management group. The most painful task is removing the dependent MPs, as you need to manually track all of them and delete them first in order to successfully remove the parent management pack.

Microsoft Senior Software Engineer Chandra Bose has written a PowerShell script that can identify and remove all of the dependent management packs automatically in such situations. Let's explore that script a little in this post.

How to get started?

First of all, you need to run the Operations Manager command shell as an administrator, with an account that is also a member of the Operations Manager Administrators group.

When you execute the script you can provide either the ID or the system name of the parent management pack, as shown below. You can find the MP ID under Administration > Management Packs > right-click the desired MP and select Properties > look for the ID field in the General tab. The system name is the unique name of the MP (e.g. Microsoft.SQLServer.2012.Discovery).

 .\RecursiveRemove.ps1 <ID or System Name of the MP>

You can download the RecursiveRemove.ps1 script from here.
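
Before deleting anything, you may want to preview which installed MPs actually reference the one you are about to remove. Here is a minimal sketch, assuming the OperationsManager SDK's References collection exposes the referenced MP names (the target MP system name is illustrative):

# List the MPs whose References include the target MP (assumes an open
# Operations Manager command shell connected to the management group)
$target = 'Microsoft.SQLServer.2012.Discovery'   # illustrative MP system name
Get-SCOMManagementPack |
    Where-Object { $_.References.Values | Where-Object { $_.Name -eq $target } } |
    Select-Object Name, DisplayName, Sealed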

Deploying SQL RP in Azure Stack | InvalidApiVersionParameter

Last week I was working on a new Azure Stack POC deployment with my friend CDM MVP Nirmal Thewarathanthri. While we were working on the SQL RP deployment we hit a strange issue: every time SqlRPTemplateDeployment.ps1 was executed, it failed with the error below.

New-AzureRmResourceGroupDeployment : InvalidApiVersionParameter: The api-version '2015-11-01' is invalid. The supported versions
are '1.0,2.0,2015-01-01,2014-04-01-preview,2014-04-01,2014-01-01,2013-03-01,2014-02-26,2014-04'.
At D:\SQLRP\AzureStack.SqlRP.Deployment.5.11.61.0\Content\Deployment\SqlRPTemplateDeployment.ps1:207 char:5
+     New-AzureRmResourceGroupDeployment -Name "newSqlRPTemplateDeploym …
+

Wrong Azure PowerShell version

The Azure Stack documentation instructs us to update to the latest version of Azure PowerShell in the Client VM before deploying the SQL RP. But the latest version, 1.2.2, released in March, does not seem to be supported in this scenario. When we downgraded to version 1.2.1 (the February release) we were able to continue with the SQL RP deployment.

If you are using the web installer for Azure PowerShell, be mindful to avoid it, at least for now. You can explore all the releases on GitHub and download them as MSI packages.
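
A quick way to confirm which version you actually have installed before kicking off the deployment (module names may vary slightly depending on how Azure PowerShell was installed):

# List the installed Azure PowerShell modules and their versions;
# 1.2.1 is the version that worked for us with the SQL RP
Get-Module -ListAvailable -Name Azure, AzureRM* |
    Select-Object Name, Version |
    Sort-Object Name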