Category Archives: Azure

Azure VM Serial Console Access Preview

One of the biggest challenges in any public cloud IaaS platform is enabling direct serial-based access so that engineers can fix boot issues. The Microsoft Azure Compute team has overcome this challenge and recently introduced serial console access (a preview feature) for Azure VMs.

This allows access to a text-based console for Linux and Windows virtual machines. Furthermore, this serial connection to the COM1 serial port of the VM provides access to VM settings that are not related to its network or operating system state.

The following prerequisites must be met in order to use this feature; most importantly, boot diagnostics must be enabled for the VM.

If you want to disable this feature, all you have to do is disable the boot diagnostics setting for the desired VM.

This can be found under the VM settings blade > Support + Troubleshooting > Serial console (Preview).

Enabling Serial Console access in Windows VMs

  • RDP into your Azure VM running Windows OS.
  • Execute the following commands from an elevated command prompt.

bcdedit /ems {current} on
bcdedit /emssettings EMSPORT:1 EMSBAUDRATE:115200

  • Once you reboot the VM, serial console access will be enabled.

This preview is available in all Azure regions.

Isolated Azure VMs | New Sizes

Microsoft announced two new IaaS VM sizes, E64i_v3 and E64is_v3, earlier today. These VM SKUs run on isolated hardware that can be dedicated to a single customer. Microsoft recommends these two SKUs for workloads that need a high degree of isolation from other customers for compliance and regulatory requirements.
Performance & Pricing Matters

The performance and pricing of E64i_v3 and E64is_v3 are the same as their non-isolated counterparts, E64_v3 and E64s_v3. The two new SKUs are also available in all regions supported by the E64_v3 and E64s_v3 sizes. As you may have already guessed, the letter ‘i’ in the VM size name denotes that they are isolated.

Here are some facts about the new sizes.

  • E64i_v3 and E64is_v3 are hardware-bound sizes, unlike E64_v3 and E64s_v3.
  • E64i_v3 and E64is_v3 run only on Intel® Xeon® Processor E5-2673 v4 2.3GHz hardware and will be available until at least December 2021.
  • Microsoft will give 12 months' notice before decommissioning these sizes after December 2021, and will then offer updated isolated sizes on its next-generation hardware.
  • On May 1st, 2018, these sizes will be made available as one-year Reserved VM Instances.
What are my other options?

The full list of isolated VM sizes that are available in the Azure public cloud as of today can be found in the Azure documentation on VM sizes.
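
If you want to check whether the new sizes are offered in a particular region, here is a quick sketch using the AzureRM PowerShell module (the region is a placeholder; the output will be empty if the sizes are not yet offered there):

# List the new isolated sizes (E64i_v3 / E64is_v3) offered in a given region
Login-AzureRmAccount
Get-AzureRmVMSize -Location "East US 2" |
    Where-Object { $_.Name -like "Standard_E64i*_v3" } |
    Select-Object Name, NumberOfCores, MemoryInMB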

 

Azure Standard Load Balancer | NSGs are a must

Azure Standard Load Balancer is a new Load Balancer SKU that provides 10x the performance and more features than the Basic SKU. One of the coolest features is the ability to use two Standard Load Balancers as a public and an internal Load Balancer at the same time. This allows a VM NIC to be connected to the backend pool of one public and one internal Load Balancer resource, where both are of the Standard SKU.

In my test lab I tried to replicate the internal + external load balancing scenario and found that only the Internal Load Balancer was working. I tried every possible change I could think of, including:

  • replicating the same inbound rules as the Internal LB,
  • deleting and re-creating the LB,
  • changing the health probes countless times,
  • changing the health probe settings,

and none of the tweaks I did worked for the External Azure Standard LB.

The Culprit

This article explains the security model of the new Azure Standard Load Balancer. I had a thorough look at the documentation and found the following section.

Standard Load Balancer is fully onboarded to the virtual network. The virtual network is a private, closed network. Because Standard Load Balancers and Standard public IP addresses are designed to allow this virtual network to be accessed from outside of the virtual network, these resources now default to closed unless you open them. This means Network Security Groups (NSGs) are now used to explicitly permit and whitelist allowed traffic. You can create your entire virtual data center and decide through NSG what and when it should be available. If you do not have an NSG on a subnet or NIC of your virtual machine resource, we will not permit traffic to reach this resource.

As you can see, if we don’t assign an NSG rule to explicitly allow it, inbound traffic from the Internet will never reach the LB backends (VMs) connected to the Azure Standard Load Balancer. Let’s examine a practical example.

In my example the backend VMs are connected to an Azure Standard LB that resides in the DMZ subnet of a VNet. I have created subnet-specific NSGs, so the rule I’m going to create is assigned to the DMZ NSG.

As you can see, all inbound traffic originating from the Internet and destined to ports 443, 8443 and 8444 on the DMZ subnet 10.150.4.0/24 has been whitelisted in this rule. Also note that the External Azure Standard LB has three load balancing rules corresponding to the same ports.
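
If you prefer to script the rule, here is a minimal sketch of the equivalent configuration using the AzureRM module (the NSG name, resource group and priority are placeholders from my lab; multiple destination port ranges in a single rule require a recent AzureRM.Network version, otherwise create one rule per port):

# Fetch the NSG attached to the DMZ subnet (names are placeholders)
$nsg = Get-AzureRmNetworkSecurityGroup -Name "DMZ-NSG" -ResourceGroupName "RG-Network"

# Allow inbound 443/8443/8444 from the Internet to the DMZ subnet behind the Standard LB
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "Allow-Internet-To-DMZ-LB" `
    -Description "Whitelist inbound traffic for the external Standard LB rules" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix "10.150.4.0/24" -DestinationPortRange "443","8443","8444"

# Commit the change back to the NSG
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg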

It is a simple solution, yet it can cause huge trouble in your environment. In my case, not having the NSG rules assigned resulted in a Traffic Manager endpoint (of course, the endpoint was an External Azure Standard LB) being in a degraded state during a DR drill. Therefore it is quite important to make sure that you have created the appropriate NSG rules if you are using the Standard SKU for your external Azure LBs.

 

Process Server fails to communicate after ASR Update Rollup 22

Recently I have been working on an ASR implementation for a customer, where it was required to upgrade the MARS agent from version 9.8 to 9.13. Microsoft has set a deadline of 28th February for this update, as after that, enabling replication using older versions of MARS won’t work. Here is what you see when you are prompted to perform this upgrade.

Since we were running an incompatible version of MARS (Microsoft claims that you need to be on an n-4 version of the DRA in order to successfully perform this update), we had to perform a step upgrade from 9.8 to 9.10 and then to 9.13. This link provides a reference for that.

The Issue

Both process servers in the environment were successfully upgraded to 9.13 via the step upgrade. But as soon as the latest version was installed, both the config server and the secondary process server lost communication with the ASR vault, and the refresh server connection task kept failing for no apparent reason.

We raised this with Microsoft support, and the support engineer assigned to our case found one alarming issue in the ASR operational log in Windows Event Viewer on the config server.
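
If you want to check the same log on your own config server, a generic sketch like the one below can help; since the exact event channel name can vary between ASR versions, it discovers the channel first instead of assuming a name (run from an elevated PowerShell prompt):

# Discover the Site Recovery / ASR event channels registered on this server
Get-WinEvent -ListLog * -ErrorAction SilentlyContinue |
    Where-Object { $_.LogName -match 'Site Recovery|ASR' } |
    Select-Object LogName

# Pull the most recent error entries from a channel found above (name is a placeholder)
Get-WinEvent -LogName '<ASR operational channel>' -MaxEvents 50 |
    Where-Object { $_.LevelDisplayName -eq 'Error' } |
    Format-List TimeCreated, Message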

 

System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
File name: 'Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
   at SrsRestApiClientLib.SrsCreds.InitializeServiceUrl()
   at SrsRestApiClientLib.SrsCreds..ctor(String resourceId, String siteId, String draId, X509Certificate2 cert, AcsConfiguration acsConfig, Boolean retryFetch, String apiVersion, Proxy webProxy, String msiVersion, String vmmVersion)
   at SrsRestApiClientLib.ClientHelper.InitializeDra(String resourceId, String siteId, String draId, X509Certificate2 cert, AcsConfiguration acsConfig, String apiVersion, Proxy webProxy, String msiVersion, String vmmVersion)
   at Dra.SrsCommunication.SrsCommunicationClient.InitializeSrsCommunication()
   at Dra.SrsCommunication.SrsCommunicationClient..ctor(IDraFabricAdapter fabricAdapter)
   at Dra.Dra.Initialize(IDraFabricAdapter fabricAdapter)
   at Dra.DraFactory.CreateInstance(IDraFabricAdapter fabricAdapter)
   at Microsoft.VirtualManager.Engine.DRA.Core.Core.Initialize()
WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. ,{00000000-0000-0000-0000-000000000000}

The Culprit

As you may have already noticed, there is an exception logged stating that the DRA cannot load ‘Microsoft.IdentityModel, Version=3.5.0.0’, which is part of the Windows Identity Foundation 3.5 role in Windows Server. When we checked the installed roles on the config server this role wasn’t present, so we enabled it again to see whether it would make any difference. Voilà! The process servers re-established communication with the vault soon after the role was installed.
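
If you hit the same exception, a quick way to check and enable the role on the config server is shown below (a sketch for Windows Server 2012 or later, run from an elevated PowerShell prompt):

# Check whether the Windows Identity Foundation 3.5 feature is installed
Get-WindowsFeature -Name Windows-Identity-Foundation

# Install it if the Install State column shows 'Available'
Install-WindowsFeature -Name Windows-Identity-Foundation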

Aftermath

The ASR engineering team has confirmed that the DRA has a prerequisite check for Microsoft .NET Framework 4.5 on the config server. Furthermore, with .NET 4.5 the WIF role is fully integrated into the .NET Framework, meaning it should have been installed automatically alongside .NET 4.5 and therefore should have been present in this case. They reproduced the issue and verified that an upgrade from 9.10 to 9.13 shows no problems, as there is no option to disable WIF during the .NET Framework 4.5 installation (the DRA requires a minimum of .NET 4.5, and the latest releases support .NET Framework 4.6.2).

My best bet is that this happened when we performed the upgrade from 9.8 to 9.10, though we haven’t reproduced that scenario yet. The config server hasn’t had any major configuration change, so the only suspect is the 9.8 to 9.10 upgrade. I’m exploring that possibility right now and will publish another post if my theory is confirmed. Until then, it remains a mystery.

Protecting Azure IaaS VMs with Managed Disks with Site Recovery

The Microsoft Azure Site Recovery team has just announced the capability to protect Azure IaaS VMs with managed disks using ASR, as a public preview in all ASR-enabled regions. This is an important step, as a lot of customers were expecting to protect their VM workloads with managed disks, compared to the previous unmanaged disk scenario.

A2A Managed Disk Architecture

The challenge here was how to replicate a disk without requiring a VM object. To overcome this hurdle, ASR uses a cache storage account in the source region, which stages the disk data for the target region. This enables ASR to create a replica managed disk in the target region for each VM protected in the primary region, and this replica disk becomes the data store for the source disk in the primary region. One important thing to note is that the initial replication between the source and target happens externally using a snapshot at the VM level, and so do the delta syncs.

Things to Remember

  • VM protection can be enabled via the VM settings blade or in the Recovery Services vault settings.
  • If you have VMs with unmanaged disks that are currently protected by ASR, you need to disable protection and convert the VMs to managed disks first, as conversion from unmanaged to managed disks while protected by ASR is not supported.
  • There is an option for you to select the type of the replica disks, either Standard or Premium. See the screenshot below.
  • Only one cache storage account is needed to store the data changes from the source region to the target region, but you can leverage multiple cache accounts per VM as well.

Azure Backup OOTB with VM Creation

OOTB stands for Out-of-the-Box. What does Azure Backup have to do with it? Well, this blog post explains how you can protect your VM with Azure Backup at the time of its creation.

Previously, when you wanted an Azure VM to be protected by Azure Backup, you had to create a Recovery Services vault and select the desired VM, or you had to enable backup through the VM Management blade. The catch is that both of these options are post-creation activities, and in my opinion administrators often forget to protect their Azure VMs after creation. With this new update, we can protect our Azure VMs during their creation.
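
For comparison, the post-creation route looks roughly like this in PowerShell (a sketch using the AzureRM module; the vault, VM and resource group names are placeholders, and DefaultPolicy is the policy Azure creates with a new vault):

# Select the Recovery Services vault and set it as the context for backup operations
$vault = Get-AzureRmRecoveryServicesVault -Name "<Vault Name>" -ResourceGroupName "<Vault Resource Group>"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Pick a backup policy and enable protection for the VM
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "<VM Name>" -ResourceGroupName "<VM Resource Group>"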

When you create a VM in the portal, you can now find the backup option under the Settings > Configure Optional Features blade.

Here you can create a new Recovery Services vault or select an existing one, and then create a new backup policy or choose an existing one to back up your Azure VM.

 

 

Azure Stack Policy Module for Azure

When you want to deploy your services across both Azure and Azure Stack, you need to make sure that both environments have the same resources and services available, and that the resources are compatible with each other. The Azure Stack Policy module brings the same versioning and service availability that Azure Stack has into your Azure subscription. This allows you to create a new Azure Policy that limits the available resources in your Azure environment to those of Azure Stack.

Let’s see how we can achieve this with PowerShell.

Installing the Policy Module
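
The AzureStack.Policy module ships as part of the AzureStack-Tools repository on GitHub. Here is a minimal sketch of downloading and importing it (the paths are illustrative; check the repository layout for the current structure):

# Download the AzureStack-Tools repository, which contains the AzureStack.Policy module
Invoke-WebRequest -Uri "https://github.com/Azure/AzureStack-Tools/archive/master.zip" -OutFile master.zip
Expand-Archive -Path master.zip -DestinationPath . -Force

# Import the policy module that provides the Get-AzsPolicy cmdlet used below
Import-Module .\AzureStack-Tools-master\Policy\AzureStack.Policy.psm1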

Apply the Policy

If you want to apply the default Azure Stack policy to your Azure subscription itself, you can use the below example.

# Sign in and select the target subscription
Login-AzureRmAccount
$s = Select-AzureRmSubscription -SubscriptionName "<Azure Subscription Name>"

# Create a policy definition from the default Azure Stack policy and assign it at subscription scope
$policy = New-AzureRmPolicyDefinition -Name AzureStackPolicyDefinition -Policy (Get-AzsPolicy)
$subscriptionID = $s.Subscription.SubscriptionId
New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$subscriptionID

If you want to apply the policy to a particular resource group, you can execute the below snippet. This is useful when you have other Azure services running in the same Azure subscription which are not compatible with Azure Stack. Targeting a specific resource group for the default Azure Stack policy will allow you to test apps that are developed for Azure Stack.

# Sign in, select the subscription and capture the target resource group name
Login-AzureRmAccount
$rgName = '<resource group name>'
$s = Select-AzureRmSubscription -SubscriptionName "<Azure Subscription Name>"
$subscriptionID = $s.Subscription.SubscriptionId

# Create the policy definition and assign it at resource group scope only
$policy = New-AzureRmPolicyDefinition -Name AzureStackPolicyDefinition -Policy (Get-AzsPolicy)
New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$subscriptionID/resourceGroups/$rgName

Service Tags for Azure Network Security Groups

While creating NSG rules, have you ever faced a scenario where you needed to group a bunch of IP addresses as sources or destinations? Service tags in Azure denote groups of IP address prefixes that help simplify NSG rule creation. Service tags allow you to restrict (allow or deny) traffic to a specific Azure service, either globally or per region.

Microsoft manages and updates the list of underlying IP addresses for each service tag, thus eliminating the need to manually update your NSG rules each time the service endpoint IP addresses change. This allows you to leverage service tags instead of specific IP addresses when creating security rules.

What’s available right now?

Following is the list of service tags that are available to be used in a security rule definition.

  • VirtualNetwork (Resource Manager) / VIRTUAL_NETWORK (classic): The VNet address space (all CIDR ranges defined in the VNet), all connected on-premises address spaces, and peered VNets or VNets connected to a VNet gateway are encompassed in this service tag.
  • AzureLoadBalancer (Resource Manager) / AZURE_LOADBALANCER (classic): If you are using any Internet-facing Azure Load Balancer you can use this service tag. It translates to an Azure datacenter IP address where Azure’s health probes originate.
  • Internet (Resource Manager) / INTERNET (classic): Any ingress or egress traffic from IP addresses that are outside the VNet and reachable from the Internet is encompassed in this service tag. The address range includes the Azure-owned public IP address space as well.
  • AzureTrafficManager (Resource Manager only): The IP address space for the Azure Traffic Manager probe IPs is denoted by this service tag.
  • Storage (Resource Manager only): The Azure Storage service IP range is denoted by this service tag, so specifying Storage as the value allows or denies traffic to storage. Furthermore, you can specify a region to allow or deny traffic to storage in that specific region. This tag represents the Azure Storage service, but not specific instances of the service (storage accounts).
  • Sql (Resource Manager only): Address prefixes of the Azure SQL Database and Azure SQL Data Warehouse services are denoted by this service tag. Specifying Sql as the value allows you to allow or deny traffic to SQL. Regional restrictions are possible as with the Storage service tag, and this tag also represents the Azure SQL service, not individual instances (databases or data warehouses).

Here is an example of a service tag in action. As you can see, we are using the Internet service tag to denote all inbound external traffic in this NSG rule.
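
If you are scripting it instead of using the portal, the same kind of rule expressed with the AzureRM module looks roughly like this (the NSG name, resource group, rule name and priority are placeholders; here the Internet tag is the source and the VirtualNetwork tag is the destination):

# Allow inbound HTTPS from any Internet source into the VNet using service tags
$nsg = Get-AzureRmNetworkSecurityGroup -Name "<NSG Name>" -ResourceGroupName "<Resource Group>"
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "Allow-Inbound-Internet-443" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
    -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix VirtualNetwork -DestinationPortRange "443"
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg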

Monitoring Azure Blobs with Azure Logic Apps

Azure Logic Apps is a very useful service that allows you to perform various enterprise integration and process automation tasks with on-premises and cloud tools alike. In this blog post I’m going to explain how we can leverage Azure Logic Apps to monitor Azure Blob containers.

The Requirement

I have a blob container called fx-incoming-files that needs to be checked periodically for any file that is older than 30 minutes. If any file has been in the container for 30 minutes or more, I need to alert the help desk.

Designing the Logic App

Following is the designer view of the Logic App that I have created. Let’s dissect each step in detail.

  • The recurrence action runs the logic app every 30 minutes.
  • The next step is to list all the blobs in the fx-incoming-files container. We are using the List Blobs action for this.
  • Before checking file age, we need to check whether there are any files present in the container. Here is the condition for that.
@greater(length(body('List_blobs_of_fx-incoming-files')?['value']), 0)
  • If that condition is met, we need to check whether each file is older than 30 minutes. Here is the code for that condition; it checks whether a blob's last-modified date/time is more than 30 minutes before the current date/time (see the PowerShell sketch after this list for the same check expressed outside the Logic App).
@greater(addMinutes(utcNow(), -30), item()?['LastModified'])
  • The next action creates an HTML table from the body of the result of the previous step.
  • The final action is an Office 365 action which sends an email with the result from the previous step in the body. Note that we need to set Is HTML to True so that the email renders the HTML content properly.
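
For reference, here is roughly the same age check expressed in PowerShell against the storage account (a hypothetical sketch using the Azure.Storage module, not part of the Logic App; the account name and key are placeholders):

# Build a storage context for the account that holds the fx-incoming-files container
$ctx = New-AzureStorageContext -StorageAccountName "<Storage Account>" -StorageAccountKey "<Account Key>"

# List blobs whose last-modified timestamp is more than 30 minutes in the past
Get-AzureStorageBlob -Container "fx-incoming-files" -Context $ctx |
    Where-Object { $_.LastModified.UtcDateTime -lt (Get-Date).ToUniversalTime().AddMinutes(-30) } |
    Select-Object Name, LastModified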

As you can see, all it takes is figuring out the correct connectors and/or actions to use in your process flow. You can find the list of available connectors for Azure Logic Apps at this link.

Deploying Dependency Agent for Service Map with Azure VM Extension

OMS Service Map solution automatically discovers application components, processes and services on Windows and Linux systems and maps the communication between them. In order to use this solution, you need to deploy the Service Map Dependency Agent for the respective OS. As you can see in the below diagram, this agent does not transmit any data by itself, but rather transmits data to the OMS agent which will then publish the data to OMS. 

Image Courtesy Microsoft Docs

Microsoft has announced the release of a new Azure VM extension that will allow you to automatically deploy the dependency agent for Service Map. Since Service Map supports both Windows and Linux, there are two variants of this extension. One thing you do have to remember is that the dependency agent still depends on (ah, the irony) the OMS agent, so your Azure VMs need to have the OMS agent installed and configured prior to deploying the dependency agent.

Installing the dependency agent with Azure VM extension (PowerShell)

Following is a PowerShell code snippet (from Microsoft) that will allow you to install the Service Map dependency agent on all VMs in an Azure resource group.

$version = "9.1"
$ExtPublisher = "Microsoft.Azure.Monitoring.DependencyAgent"
$OsExtensionMap = @{ "Windows" = "DependencyAgentWindows"; "Linux" = "DependencyAgentLinux" }
$rmgroup = "<Your Resource Group Here>"

# Install the Dependency Agent extension on every VM in the resource group,
# picking the Windows or Linux variant based on the OS disk type
Get-AzureRmVM -ResourceGroupName $rmgroup |
ForEach-Object {
    ""
    $name = $_.Name
    $os = $_.StorageProfile.OsDisk.OsType
    $location = $_.Location
    $vmRmGroup = $_.ResourceGroupName
    "${name}: ${os} (${location})"
    Get-Date -Format o
    $ext = $OsExtensionMap.($os.ToString())
    $result = Set-AzureRmVMExtension -ResourceGroupName $vmRmGroup -VMName $name -Location $location `
        -Publisher $ExtPublisher -ExtensionType $ext -Name "DependencyAgent" -TypeHandlerVersion $version
    $result.IsSuccessStatusCode
}

Installing the dependency agent with Azure VM extension (ARM Template)

However, Microsoft hasn’t published any official reference on how to deploy the Azure VM extension for the dependency agent using ARM templates (as of yet). My colleague, MVP Stanislav Zhelyazkov, has published a blog post on how to do this with ARM templates. You can find his reference template on GitHub.

Note

This VM extension is currently available in the West US region, and Microsoft is rolling it out to all Azure regions over the next couple of days.